The importance of user research in your information architecture strategy

11 minute read

Alice Chen

Content Strategist at TELUS

The difference between good information architecture (IA) and great IA? User research. You can rely on different techniques to determine your IA strategy but to truly ensure that it’s intuitive, easily navigable, and customer-first, you should base your decisions on real user data.

There are two types of user experience (UX) research techniques that you can rely on when it comes to developing your IA strategy: card sorting and tree testing. You can use both at various points as you’re developing your sitemap, and you have different options depending on your company’s budget. 

Card sorting

Card sorting is a UX research technique where participants organise all the topics (cards) within your site into the groups that make the most sense to them. This helps you better understand your customers’ mental models so that you can organise content on your site the way they would expect to find and interact with it.

Whether you’re building a new website or redesigning, card sorting helps answer questions like:

  • What are our users’ mental models of this content? 
  • How should we structure our website to reflect their mental models?
  • What navigation labels will be intuitive for users?

The screenshot below shows an example of an open card sort in Optimal Workshop. Participants drag and drop cards from the first column into groups of their choosing on the right which they can then label.


[Image: Optimal Workshop card sorting demo]

When to use card sorting

Card sorting should be one of the first steps that you take when it comes to designing your sitemap for two reasons:

  1. To ensure that you’re using real user data to inform your design
  2. To gain input and alignment with your stakeholders

If your stakeholders understand and see the value of IA at the start of a project, then congratulations, you have some content-savvy stakeholders. Usually, there’s some buy-in needed to get them on board. Card sorting is a great kick-off exercise that helps everyone understand current navigation problems and why a proper IA is necessary.

How to set up card sorting

There are three types of card sorting:

  • Open: The participants group the content and also label the categories that they’ve created. This gives you a sense of the language that users are familiar with
  • Closed: You set predefined categories for participants to group content into. This works best if you already know the labels you are going to use
  • Hybrid: A mix of both open and closed. You create a couple of easy or obvious categories but also give participants the option of creating additional ones

Card sorting can be conducted remotely or in-person:

  • Remotely: Programs such as Optimal Workshop allow you to set up an online test with virtual “cards”. You can get a higher number of results this way
  • In-person: You can also manually write your topics on cue cards to conduct an in-person card sort. While you might not garner a ton of responses, the benefit of card sorting in person is you can hear participants’ thought processes and you’re able to ask your participants questions to dig into their mental models

Try to limit your cards to between 30 and 40 so you don’t overwhelm your participants. If you have more than 40 topics, focus on the most important ones that you want to gain insight into.

Card sorting by budget

If you have a budget for user research, you can use a recruitment service like Optimal Workshop, Validately, or UsabilityHub to get a participant pool that reflects your target users. These services typically offer participants an incentive, such as a gift card, so recruiting can be expensive, especially if you have specific demographic requirements to cater for.

If your company has a user research team, reach out to them, as they might already have a panel of customers that you can leverage to run your tests. The downside to this option is that they likely only run a few tests a month so as not to inundate the customer panel and dilute response rates, so you might not be able to test right away.

If you don’t have the budget, you still have some options. Try a guerrilla testing tactic where you ask your friends, family, and colleagues to complete the activity to gather some results. Ask them to share it with their networks to expand your participant pool. You can even hit the streets and ask strangers. The risks with this approach are that the participants might not be your target audience, and it can be difficult to get a large number of responses.

With the data you glean from card sorting, along with the research and audits you’ve done, you can then draft a sitemap. 
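One common way to make sense of open card sort data is a similarity matrix: count how often each pair of cards was grouped together across participants. Tools like Optimal Workshop compute this for you, but the idea is simple enough to sketch; the cards and groupings below are made up for illustration.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical open card sort results: one dict per participant,
# mapping the group label they invented to the cards they placed in it.
results = [
    {"Billing": ["Pay my bill", "View invoices"], "Help": ["Contact us"]},
    {"My account": ["Pay my bill", "View invoices", "Contact us"]},
    {"Payments": ["Pay my bill"], "Support": ["Contact us", "View invoices"]},
]

def similarity_matrix(results):
    """Count how often each pair of cards was grouped together."""
    pairs = defaultdict(int)
    for participant in results:
        for group in participant.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return dict(pairs)

matrix = similarity_matrix(results)
# Pairs grouped together by most participants are strong candidates
# for sharing a category in your sitemap.
for (a, b), count in sorted(matrix.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {count} of {len(results)} participants")
```

Pairs with a high count reflect a shared mental model; pairs that almost never co-occur probably belong in different branches of your sitemap.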

Tree testing

The other UX testing technique is tree testing (sometimes called task testing). You load your sitemap (the “tree”) into software and then set tasks to see how successfully participants find the correct information. If a task gets a low success rate, it’s likely that you need to revisit where you’ve placed that information.

In addition to success rates, tree testing gives you first click data which helps you understand where your users intuitively go first. For example, if the first click data for a task is accurate but the participant did not successfully get to the right destination, it may indicate that the second-level label is not clear enough. 

Other insights are whether participants’ success was direct or indirect. You can dig into each participant’s activity to see if there are any trends or insights to be gained. Tree testing results will help you test categories and labels, as well as identify any potential navigation and findability issues. 
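The metrics above (success rate, first-click accuracy, and direct versus indirect success) can be computed from each participant's click path. Here is a minimal sketch with an invented task and made-up labels; real tools report these numbers for you.

```python
# Hypothetical raw results for a single tree test task. Each record is one
# participant's click path through the tree; all labels here are made up.
destination = "Pay my bill"
expected_first_click = "Support"
responses = [
    {"path": ["Home", "Support", "Billing", "Pay my bill"]},                        # direct success
    {"path": ["Home", "My Account", "Home", "Support", "Billing", "Pay my bill"]},  # indirect success
    {"path": ["Home", "My Account", "Profile"]},                                    # failure
]

def task_metrics(responses, destination, expected_first_click):
    """Compute success rate, direct-success rate, and first-click accuracy."""
    success = direct = first_click_hits = 0
    for r in responses:
        path = r["path"]
        # First click = the node chosen immediately after the tree's root.
        if len(path) > 1 and path[1] == expected_first_click:
            first_click_hits += 1
        if path[-1] == destination:
            success += 1
            # Direct success: the participant never revisited a node,
            # i.e. no backtracking anywhere in the path.
            if len(path) == len(set(path)):
                direct += 1
    n = len(responses)
    return {
        "success_rate": success / n,
        "direct_success_rate": direct / n,
        "first_click_rate": first_click_hits / n,
    }

metrics = task_metrics(responses, destination, expected_first_click)
```

A high success rate with a low direct-success rate is itself a finding: people get there eventually, but the path isn’t obvious.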

Whether you’re testing current state or a redesign, tree testing helps you answer questions like:

  • How well is our current sitemap / navigation supporting key user tasks? 
  • What labels or categories don’t make sense to users?
  • How might we restructure information to improve navigation and discovery?

Below is an example of what Optimal Workshop’s tree testing looks like. For each task, more of the sitemap is revealed as participants click through. They can go back and forth between levels before selecting their choice.


[Image: Optimal Workshop tree testing demo. A list of content items arranged in a hierarchy; the main level is ‘BananaCom Homepage’ and sub-items include Help and Support, My Account, and Home Phone.]

When to use tree testing

There are a couple of points at which you can conduct tree testing:

  • Before a redesign: To test a current sitemap to benchmark
  • When you have a proposed sitemap: To identify potential navigation and usability issues

When you test a proposed sitemap, you can also run A/B tests to measure the efficacy of labels. For example, you can compare success rates of sitemap A with branded terms and sitemap B with plain language labels. This is particularly valuable if you weren’t able to card sort as it can give you a good idea of the language that best resonates. However, just keep in mind that with A/B testing, you’ll need double the participant pool. 
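To judge whether the difference in success rates between sitemap A and sitemap B is real rather than noise, a standard choice is a two-proportion z-test. This is a sketch with invented numbers, not results from any actual test.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two task success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical results: sitemap A (branded labels) vs sitemap B (plain language),
# 50 participants each - which is why A/B testing doubles your recruitment needs.
z, p = two_proportion_z(success_a=22, n_a=50, success_b=38, n_b=50)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is real
```

With small participant pools, even a visibly large gap in success rates can fail to reach significance, which is another reason to recruit generously.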

How to set up tree testing

Turn your sitemap into a virtual one using a tool like Optimal Workshop. When you write tasks, make sure they are based on the user tasks (jobs to be done) of your target audience. This is where your user personas come into play. Another tip for writing tasks: avoid including words or terms that appear in your navigation labels, so as not to “give away the answer”, and avoid branded terms, as participants may not be familiar with them.
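Before loading anything into a tool, it can help to draft the tree and tasks in a plain structure you can review with stakeholders. A minimal sketch, with all labels and task wording invented:

```python
# A draft sitemap "tree" as nested dicts: branches are categories,
# leaves are lists of pages. Every label here is hypothetical.
sitemap = {
    "Home": {
        "Support": {"Billing": ["Pay my bill", "View invoices"]},
        "My Account": {"Profile": [], "Settings": []},
    }
}

# Task prompts deliberately avoid the label words "pay" and "bill"
# so the navigation labels don't give the answer away.
tasks = [
    {
        "prompt": "You owe money for this month's service. Where would you go to settle it?",
        "correct": ["Home", "Support", "Billing", "Pay my bill"],
    },
]

print(tasks[0]["prompt"])
```

Reviewing a draft like this makes it easy to spot tasks that echo a label verbatim before the test goes live.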

Tree testing by budget

Unlike card sorting, tree testing is not an activity that can easily be done in person. At the very least, you’ll need to generate a virtual sitemap. But like card sorting, you have the same options for participants: your in-house customer panel, a recruitment service, or guerrilla testing.

For both card sorting and tree testing, aim to get at least 40 good participants to obtain enough data to glean insights. Try to recruit around 100 participants, as you will likely get some unreliable data that you need to filter out. This includes participants who abandoned the test, didn’t get any tasks right (people will generally get at least some right), or got through the test too quickly (which suggests they did not pay attention).
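Those filtering rules are easy to express as a simple predicate over participant records. The records, field names, and time threshold below are all assumptions for illustration; tune them to your own test length.

```python
# Hypothetical participant records from an unmoderated tree test.
participants = [
    {"id": 1, "completed": True,  "tasks_correct": 5, "duration_s": 340},
    {"id": 2, "completed": False, "tasks_correct": 2, "duration_s": 120},  # abandoned
    {"id": 3, "completed": True,  "tasks_correct": 0, "duration_s": 300},  # nothing right
    {"id": 4, "completed": True,  "tasks_correct": 6, "duration_s": 45},   # suspiciously fast
    {"id": 5, "completed": True,  "tasks_correct": 4, "duration_s": 410},
]

MIN_DURATION_S = 90  # assumed threshold; depends on how many tasks you set

def reliable(p):
    """Keep participants who finished, got something right,
    and spent a plausible amount of time on the test."""
    return p["completed"] and p["tasks_correct"] > 0 and p["duration_s"] >= MIN_DURATION_S

clean = [p for p in participants if reliable(p)]
print([p["id"] for p in clean])  # → [1, 5]
```

Filtering before analysis keeps speeders and abandoners from dragging down (or inflating) your success rates.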

Case study: Redesigning TELUS’ “About” section

In a recent redesign of a section on the TELUS website, we conducted both card sorting and tree testing to ensure that we were building a structure that was customer-first. We had various stakeholder groups across the company who were unfamiliar with IA. So during our discovery phase, we conducted an in-person, open card sort with our stakeholders that helped them better understand the problem and gain buy-in. It’s one thing to agree that the site needs improvement; it’s another to be faced with the task and have to solve the problem themselves. The exercise helped put them in the customers’ shoes, and they were that much more motivated to find a customer-first solution.

It’s also a great interactive exercise to kick off the project and work collaboratively with your stakeholders - the last thing you want to do is work in a silo. Another thing we did to ensure alignment throughout the project was to bring stakeholders along on the journey. After every card sort and tree test, we put together a summary and ran our stakeholders through the results. This helps ensure your stakeholders are invested from the start and become advocates of proper IA.

Card sorting - what went well?

We also conducted card sorting with real users which allowed us to see how they intuitively grouped the information on our site, and the insight into behaviours was used to inform the sitemap. We opted for a hybrid approach and were able to incorporate “real” language that customers use in our navigation labels.

Card sorting - what to do differently next time?

One thing we saw from the results was that some participants got confused with the defined categories, thinking they had to sort all the cards into the three predefined ones. Next time, I would clarify in the activity instructions that people can create new categories and are not confined to the existing ones. 

Tree testing - what went well?

We were working on a redesign so we tree tested two sitemaps: 

  1. Current state to benchmark
  2. Proposed state to measure the efficacy of our suggestions

This allowed us to test assumptions and hypotheses. In the current state, we suspected that duplicate information on the website confused people and that branded program terms did not resonate with people unfamiliar with the company. Testing the current state confirmed both assumptions, as those tasks yielded low success rates.

At the same time, we were also able to validate our hypotheses. In the proposed sitemap, we consolidated the repetitive information and organized it in a manner that we thought would best reflect our users’ mental models based on our card sorting results. We also replaced the branded program names with more descriptive labels to aid understanding and clarity. Both tasks in the proposed version performed better. 

We already had solid data to start with because we had the results from card sorting but tree testing added that additional layer of validation to ensure that we were building a sitemap that truly reflects and meets our users’ needs. We gained valuable insight that helped us iterate and further consolidate our proposed sitemap into a final structure.

Tree testing - what to do differently next time?

Next time, I would opt for a moderated tree test so we can hear users’ thought processes and ask them follow-up questions to dig into any surprising behaviours. Unmoderated testing gave us enough responses to identify trends in behaviours but moderated testing would have given us the insight behind the behaviours.

Some useful resources

If you’re interested in learning more, here are some helpful resources: 

Webinar Recording

Using information architecture and taxonomy to meet strategic goals

A more powerful approach to content structure.

July 25, 2019

4:00 pm


About the author

Alice Chen

Alice is a content strategist at TELUS, a Canadian telecommunications company. She focuses on crafting the content strategy for campaigns, leading IA projects, and bringing to life the brand narrative across the digital experience.
