Decision Tree Overhaul

Example of a decision tree question before our revamp

Background

Dycom Industries relies heavily on decision trees to collect data from workers in the field. Each tree walks users through a series of conditional questions about time worked on projects or assets used for a job. Though the trees were created to make billing quicker, many users found the questions clunky or confusing, and tree builders struggled to make improvements.

Our UX team took on the task of optimizing the decision tree experience for both end users and tree builders. I guided the overall content strategy for this effort, with the following objectives in mind:

  • Analyze existing questions to identify trends

  • Create standards around language in trees

  • Develop a framework for question writing

Process

 

Audience analytics and stakeholder interviews

I relied on data collected in previous research projects to guide decisions around comprehension and readability. In past studies, our business found that, on average, field workers read at a 5th-grade level.

While field laborers were the true final audience for these decision tree questions, I also spoke with the question writers to get an idea of what might help them create copy more efficiently. I learned quite a bit about their process: it takes a team of three to read the contracts, write questions that capture the proper data, and then code the tree. The knowledge needed to read and translate contract language is very niche, so we wanted to find a way to capture it in a streamlined system. Through these interviews, we came up with a few ideas on how to help, most notably standardizing the question-writing process and improving the team's content strategy. My two major deliverables for this effort were content standards for these trees and a question bank to optimize their creation.

 

Data mining

Our team analyzed a spreadsheet of a few hundred questions used in past trees. We removed duplicates, then grouped the remaining questions. Tree developers would use these groupings as filters in the final question bank to find pre-written questions they could reuse in their trees (see the sketch after this list). These were the major question types we identified:

  • Crew

  • Task

  • Item Identifier

  • Item Specs

  • Quantity

  • Measurement

  • Duration

  • Type of

  • Photograph

  • Notes

  • Timekeeping
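
To give a sense of how these groupings could work as filters, here is a minimal sketch of what a single bank entry tagged with a question type might look like. The field names and values are hypothetical placeholders, not the actual schema our tree builders use:

    # Hypothetical question bank entry; field names are illustrative only
    - label: fiber-footage-placed
      grouping: Measurement   # one of the question types above, used as a filter
      prompt: "How many feet of fiber did you place?"
      answer_type: number
      unit: feet

Filtering on a grouping like Measurement would then surface every pre-written question of that type at once.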

 

Content strategy and design

After identifying the prominent question types, I created a set of best practices and pitched them to the product team. Then I helped rewrite the existing questions, created templates for common question types, and wrote a guide for creating questions from scratch. I wanted all of our content to cater to the needs of our primary audience (construction workers) and stick to a conversational approach. Since I was writing for a lower reading level, I spent time reducing the complexity of our sentences and vocabulary and making similar questions consistent with one another.

Final products

Question bank

The bank soon became the main focus of the effort. Reducing the time spent writing questions would allow trees to be developed more quickly, and if the questions in the repository met the new standards, we could even reduce the time users spend completing trees. To round out the question bank, the team added a few finishing touches and collaborated with cross-functional partners to format the spreadsheet.

What we ended up with was a single space to look at past questions, create new questions with templates, and review writing standards. To let tree builders narrow down questions further, we looked back at our conversations with stakeholders and added a few filters they said would be useful: customer, environment, and activity. For the templates, we applied a Mad Libs-style fill-in-the-blank approach to our revised question list. Once the spreadsheet was finished, we tested this process, made some small tweaks to our materials, and finalized the bank.
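
As a rough illustration of the Mad Libs approach, a template in the bank might look something like the sketch below, with bracketed blanks for the builder to fill in. The placeholder syntax and field names are my own shorthand, not the exact format of the finished spreadsheet:

    # Hypothetical fill-in-the-blank template; bracketed tokens are the blanks
    - template: quantity
      grouping: Quantity
      prompt: "How many [ITEM] did you [ACTION] today?"
      answer_type: number

Because the surrounding wording is fixed, every question generated from a template like this keeps the conversational tone and reading level set in the standards.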

When faced with contract terms that require data from the user, the tree builder starts at Step 1 and continues on only if necessary:

Step 1: Check to see if the question’s been written before - Tree builders can filter through existing questions or search for keywords. If they find what they need, they copy and paste the YAML code for that question.

Step 2: Draft the question using a template - If the question isn’t in the bank, the tree builder can view the available templates and choose the one that best matches their use case. Once they fill in the empty sections, the YAML code is generated.
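
Continuing the hypothetical template above, filling in the blanks might produce YAML along these lines, ready to drop into the tree. Again, the structure is illustrative rather than the production schema:

    # Hypothetical output of the filled-in quantity template
    - label: splice-cases-installed
      grouping: Quantity
      prompt: "How many splice cases did you install today?"
      answer_type: number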

 

Step 3: Create a new question from scratch - This is where that writing guide comes in. If all else fails, the tree builder will need to write a brand new question. They’ll use this guide as a reference along with the examples in the bank. YAML will not be generated in this instance.

 

Decision tree writing guide

I created this writing guide to be used alongside the question bank. While we were able to optimize an impressive number of questions for reuse, new contracts will always call for new questions. So, whenever a tree builder can’t find their question in the bank or adapt an existing template, they can use the guide to translate contract language into acceptable UX copy.