This article shows how to use the "Scorecard" screen in the "Review" section to analyze your Hexawise test models and audit feature usage.
Hexawise provides automated analysis to help you review your models.
Using rules of thumb, feature monitoring, and thorough explanations, Hexawise 'scores' your work and provides valuable feedback. The Scorecard is broken down into two main categories:
1) Considerations for a well-constructed test model / potential problem areas
2) Feature usage reporting
The Considerations portion of the Scorecard helps explain where some of your modeling in Hexawise might have gone off course. Highlighted Considerations include the following:
Longest list of Values
No Possible Values
Unmatched Value Expansions
Unused Value Expansions
Each Consideration also includes an explanation of why you might want to edit your model. All of these are focused on making sure the tester understands what their work so far does, and does not, achieve.
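To see why a Consideration like "Longest list of Values" is worth flagging, consider a minimal sketch (this is illustrative reasoning about pairwise test design, not Hexawise's actual scoring logic, and the model values are hypothetical): in 2-way (pairwise) testing, the two longest parameter value lists set a lower bound on the number of tests, so a single very long list can dominate the size of the whole plan.

```python
# Minimal sketch (NOT Hexawise's internal logic): in pairwise testing,
# every pair of values from the two largest parameters must appear in
# some test, so their product is a lower bound on the test count.

def pairwise_lower_bound(value_counts):
    """Return the minimum possible number of 2-way tests:
    the product of the two largest value-list sizes."""
    largest_two = sorted(value_counts, reverse=True)[:2]
    return largest_two[0] * largest_two[1]

# Hypothetical model: four parameters with these value-list lengths.
model = [3, 3, 4, 12]  # the 12-value list drives the plan size
print(pairwise_lower_bound(model))  # → 48
```

Trimming or splitting an unusually long value list (for example, with Value Expansions) is often the quickest way to shrink a plan, which is why the Scorecard draws attention to it.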
In addition to providing insight into your modeling practices, the Scorecard also reviews which features you did and did not use. Knowing this can:
1) Increase awareness of unknown Hexawise features
● e.g., "Risk-weighted test scenarios using Mixed-Strength testing? Who knew?!"
2) Give clues into how much work is left to do
● e.g., "Scripts still need to be included."
3) Act as a reminder to add something into the model
● e.g., "I forgot to add some forced interactions!"
4) Provide management insight into tester usage
● e.g., "The testers didn't use Hexawise's 'Expected Outcomes' feature."
We can't guarantee that a model with all the items 'checked off' is a perfect one. Similarly, not every potential issue raised by the Scorecard will turn out to be an actual problem. Even so, most testers and managers find that referring to the Scorecard is a good way to spot potential problems and learn about improvement ideas.