What are our testing objectives?

Each time someone gets directions, thirty-three different test conditions come into play. Together, those conditions produce close to 60 trillion possible scenarios. In this context, we want to test this online map direction process relatively thoroughly - with a manageable number of tests.
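To see why the combinations explode this way - and why pairwise testing stays manageable - here is a rough Python sketch. The value counts per condition are hypothetical, chosen only to illustrate the orders of magnitude involved:

```python
import math
from itertools import combinations

# Hypothetical value counts for the 33 test conditions; the real model's
# counts differ, but the order of magnitude is the point.
value_counts = [3] * 20 + [2] * 13

exhaustive = math.prod(value_counts)  # every possible full combination
# Distinct pairs of values that pairwise testing must cover:
pairs_to_cover = sum(a * b for a, b in combinations(value_counts, 2))

print(f"{exhaustive:,}")      # tens of trillions of full scenarios
print(f"{pairs_to_cover:,}")  # only a few thousand value pairs
```

Even with trillions of possible full scenarios, the number of distinct value pairs is only a few thousand - which is why a small, carefully chosen set of tests can cover all of them.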

We know that testing each item in our system once is not sufficient; we know that interactions between the different things in our system (such as a particular browser type interacting with input methods, for example) could well cause problems. Similarly, we know that the written requirements document will be incomplete and will not identify all of those potentially troublesome interactions for us. As thoughtful test designers, we want to be smart and systematic about testing for potential problems caused by interactions without going off the deep end and trying to test every possible combination.

Hexawise makes it quick and simple for us to select an appropriate set of tests whatever time pressure might exist on the project or whatever testing thoroughness requirements we might have. Hexawise-generated tests automatically maximize variation, maximize testing thoroughness, and minimize wasteful repetition.

What interesting Hexawise features are highlighted in this sample model description?

Invalid Pairs - How to prevent 'impossible to test for' scenarios from appearing in your set of tests.

Forced Interactions - How to force certain high priority scenarios to appear in your set of tests.

Auto-Scripting - How to save time by generating detailed test scripts in the precise format you require, semi-automatically.

Coverage Graphs - How to get fact-based insights into "how much testing is enough?".

Matrix Charts - How to tell exactly which coverage gaps would exist in our testing if we were to stop executing tests at any point before the final Hexawise-generated test.

What interesting test design considerations are raised in this particular sample model?

Sometimes it is necessary to identify a particular combination as impossible to even test for and remove it from the scenarios that Hexawise creates. If you are using a Windows 7 machine, for example, it is not possible to use any version of the Safari browser.

Marking a pair of values (such as Windows 7 and Safari) as invalid will prevent it from appearing in your Hexawise-generated tests.

This particular model has five Invalid Pairs. Three of them ensure that Windows 7, Windows 8, and Windows 10 are never paired with Safari, and the other two ensure that Mac OS is never paired with Internet Explorer 10 or Internet Explorer 11.
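Conceptually, an Invalid Pair acts as a filter over the raw combinations. A minimal Python sketch of this model's five Invalid Pairs (the browser list is abbreviated for illustration):

```python
from itertools import product

operating_systems = ["Windows 7", "Windows 8", "Windows 10", "Mac OS"]
browsers = ["Safari", "Firefox", "Internet Explorer 10", "Internet Explorer 11"]

# The five Invalid Pairs described above:
invalid_pairs = {
    ("Windows 7", "Safari"),
    ("Windows 8", "Safari"),
    ("Windows 10", "Safari"),
    ("Mac OS", "Internet Explorer 10"),
    ("Mac OS", "Internet Explorer 11"),
}

# Only valid pairings survive to appear in generated tests.
valid = [pair for pair in product(operating_systems, browsers)
         if pair not in invalid_pairs]

print(len(valid))  # 16 raw pairings minus 5 invalid ones = 11
```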

For more on creating Invalid Pairs, please see the help file: How do I create an "Invalid Pair" to prevent impossible to test for Values from appearing together?

Consider additional variation ideas by asking the "newspaper questions" about our verb and noun - who, what, when, where, why, how, and how many?

Designing powerful software tests requires people to think carefully about potential inputs into the system being tested and how they might impact the behavior of the system. As described in this blog post, we strongly encourage test designers to start with a verb and a noun to frame a sensible scope for a set of tests and then ask the "newspaper reporter" questions: who? what? when? where? why? how? and how many?


Who is getting directions and what characteristics do they have that will impact how the System Under Test behaves? In particular...

  • Regarding the user's operating system, what version is it?

  • Regarding the user's browser type, what version is it?

  • Do they have JavaScript enabled?

  • Does the user have any saved addresses?

  • Do they typically execute steps in order, left to right? Or in some unusual sequence?

  • Will they want printed directions?

  • Will they share the directions electronically?

What Kind / Where

  • What kind of directions do they want? (e.g., car, public transportation, walking)

  • With regard to directions, will they cross any state or international borders?

  • In what units of measure do they want the directions (e.g., miles or kilometers)?

  • What kind of starting location / origin will they use?

  • What kind of ending location / destination will they use?

  • What kind of link should they use for returning to the directions?


How / How Many

  • How should highways be handled?

  • How should toll roads be handled?

  • How long/far is the trip?

  • How many stops (separate points along the way) in the trip will they indicate?

  • How will they view the map? (e.g., Map, satellite, Terrain, others?)

  • How zoomed in should they view the directions?

  • How will they pan the image?

  • How will they change the route (drag it)?

  • How should they respond to advertisements?

What else

  • Can they view more photos?

  • Can they view more videos?

  • Can they find more research on Wikipedia?

  • Can they connect their webcam?

  • Can they turn on real estate alerts?

  • Should internet speeds or disruptions be simulated?

  • Should they revise any of these actions?

Variation Ideas entered into Hexawise's Parameters screen

Asking the newspaper questions described above helps us understand the potential ways the system under test might behave.

Once we have decided which test conditions are important enough to include in this model (and excluded things - like 'Should they try interplanetary directions?' - that will not impact how the system being tested operates), Hexawise makes it quick and easy to systematically create powerful tests that will allow us to maximize our test execution efficiency.

Once we enter our parameters into Hexawise, we simply click on the "Scenarios" link in the left navigation pane.

Hexawise helps us identify a set of high priority scenarios within seconds

The coverage achieved in the 35 tests above is known as pairwise testing coverage (or 2-way interaction coverage). Hexawise-generated pairwise tests have been proven in many contexts and types of testing to deliver large thoroughness and efficiency benefits compared to sets of hand-selected scenarios.
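To make "2-way interaction coverage" concrete, here is a small Python sketch using a toy 2x2x2 model (not this model's actual parameters): full factorial testing would require 8 tests, but 4 well-chosen tests already cover every pair of values:

```python
from itertools import combinations, product

def covered_pairs(tests):
    """Every (parameter, value, parameter, value) pair hit by a test set."""
    pairs = set()
    for test in tests:
        for (i, a), (j, b) in combinations(enumerate(test), 2):
            pairs.add((i, a, j, b))
    return pairs

# Four hand-picked tests for a 2x2x2 toy model; full factorial needs 8.
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

all_pairs = covered_pairs(product((0, 1), repeat=3))  # pairs in all 8 tests
print(len(covered_pairs(tests)) / len(all_pairs))  # 1.0 -> 100% pairwise coverage
```

The savings grow dramatically with model size; for this sample model, 35 tests achieve the pairwise coverage that trillions of exhaustive scenarios would otherwise be needed to guarantee.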

Hexawise gives test designers control over how thorough their testing coverage should be. Using the "coverage dial," testers can quickly generate dozens, hundreds, or thousands of tests. If you have very little time for test execution, those 35 pairwise tests will be dramatically more thorough than a similar number of tests selected by hand. If you have more time for testing, you can quickly generate an even more thorough set of 3-way tests (as shown in the screen shot immediately below).

For more detailed explanations describing the approach Hexawise uses to maximize variation, maximize coverage, and minimize wasteful repetition in test sets, please see this image-heavy introductory presentation, this 3-page article on Combinatorial Testing (published by IEEE), and/or this detailed explanation comparing the differences between 2-way coverage and 3-way coverage.

Our Invalid Pairs have prevented 'impossible to test for' scenarios from appearing

In the picture above, we see the pairings for our Operating System and Browser parameters. As you can see, none of the pairs we marked as invalid have made it into our tests.

As a reminder: our invalid pairs prevented Windows OS (7, 8, and 10) from appearing with a Browser of Safari. And the Operating System of Mac OS has been prevented from appearing with Browsers Internet Explorer 10 and Internet Explorer 11.

We can force specific scenarios to appear in tests

We easily forced a few high priority scenarios to appear by using Hexawise's "Forced Interactions" feature:

You'll notice from the screen shots of 2-way tests and 3-way tests shown above that some of the Values in both sets of tests are bolded. Those bolded Values are the Values we "forced" Hexawise to include by using this feature.

Auto-scripting allows us to almost instantly convert tables of optimized test conditions (shown above on the "Scenarios" tab screen shots) into detailed test scripts (shown below in the screen shot of an Excel file)

The Auto-scripting feature saves testers a lot of time by partially automating the process of documenting detailed, stepped-out test scripts.

We document a single test script in detail from beginning to end. As we do so, we indicate where our variables (such as "Operating System" and "Avoid Highways") appear in each sentence. That's it. As soon as we document a single test in this way, we're ready to export every one of our tests.

From there, Hexawise automatically modifies the single template test script we created and inserts the appropriate Values into every test in the model (whether it has 10 tests or 1,000).
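Conceptually, this auto-scripting step works like template substitution. A minimal Python sketch - using Python's own placeholder syntax, not Hexawise's actual template format:

```python
from string import Template

# Stand-in for the single template script a tester documents; the
# placeholder syntax here is Python's, not Hexawise's.
step = Template("Open $Browser on $Operating_System and request "
                "$Directions_Type directions.")

tests = [
    {"Browser": "Firefox", "Operating_System": "Windows 10",
     "Directions_Type": "walking"},
    {"Browser": "Safari", "Operating_System": "Mac OS",
     "Directions_Type": "car"},
]

scripts = [step.substitute(test) for test in tests]
print(scripts[0])  # Open Firefox on Windows 10 and request walking directions.
```

Write the template once, and every generated test - with its own combination of Values - becomes a fully stepped-out script.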

We can even add simple Expected Results to our detailed test scripts

If you describe Expected Results like the one above on the "Manual Auto-Scripts" screen, Hexawise will automatically add them to every applicable test step in every applicable test in your model. Because we entered this Expected Result, every test that meets its condition will show it after Test Step 14.

Simple rules created with the drop-down menus determine when a given Expected Result should appear. These rules take forms such as "When ____ is ____ and when ____ is not ____, then the Expected Result would be ____."
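Such a rule can be sketched in Python as a simple conditional; the parameter names and the rule below are illustrative, not taken from this model:

```python
# Sketch of a rules-based Expected Result in the shape
# "When __ is __ and when __ is not __, then the Expected Result is __".
def expected_result(test):
    """Return the Expected Result for a test, or None if the rule does not apply."""
    if test["Directions Type"] == "Walking" and test["JavaScript"] != "Disabled":
        return "Walking route is drawn with an estimated travel time."
    return None  # no Expected Result added to this step

print(expected_result({"Directions Type": "Walking", "JavaScript": "Enabled"}))
print(expected_result({"Directions Type": "Car", "JavaScript": "Enabled"}))
```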

This Expected Results feature makes test sets easy to maintain because rules-based Expected Results will automatically update and adjust as the test sets change over time.

Coverage graphs allow teams to make fact-based decisions about "how much testing is enough?"

After executing the first 11 tests of this model's 2-way set of tests, 81.0% of all possible "pairs" of Values that exist within the system will have been tested together. After all 35 tests, every possible "pair" of Values in the system will have been tested together (100% coverage).

This chart, and the additional charts shown below, provide teams with insights about "how much testing is enough?" They clearly show that the amount of learning and coverage gained from executing the tests at the beginning of a test set is much higher than that gained from the tests toward the end. As explained here, this type of "diminishing marginal return" is very often the case with scientifically optimized test sets such as these.
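The curve behind such a chart can be sketched in Python as cumulative pair coverage, shown here on a toy 2x2x2 model rather than this model's actual 35 tests:

```python
from itertools import combinations

def coverage_curve(tests, total_pairs):
    """Fraction of value pairs covered after each test, in execution order."""
    covered = set()
    curve = []
    for test in tests:
        for (i, a), (j, b) in combinations(enumerate(test), 2):
            covered.add((i, a, j, b))
        curve.append(len(covered) / total_pairs)
    return curve

# A 2x2x2 toy model has 12 value pairs; these four ordered tests cover them all.
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(coverage_curve(tests, total_pairs=12))  # [0.25, 0.5, 0.75, 1.0]
```

In this tiny example each test happens to add equal coverage; in larger models, later tests mostly repeat already-covered pairs, which is where the diminishing-returns shape of the real charts comes from.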

Hexawise tests are always ordered to maximize the testing coverage achieved in however much time there is available to test. Testers should generally execute the tests in the order that they are listed in Hexawise; doing this allows testers to stop testing after any test with the confidence that they have covered as much as possible in the time allowed.

We know we would achieve 81.0% coverage of the pairs in the system if we stopped testing after test number 11, but which specific coverage gaps would exist at that point? See the matrix chart below for that information.

The matrix coverage chart tells us exactly which coverage gaps would exist if we stopped executing tests before the end of the test set

The matrix chart above shows every specific pair of values that would not yet have been tested together if we were to stop testing after test number 11.

For example, in the first 11 tests, there is no scenario that includes both (a) "Operating System - Mac OS" and (b) "Browser - Firefox."
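The computation behind the matrix chart can be sketched in Python; the two-parameter example below is illustrative, not the model's full parameter list:

```python
from itertools import combinations, product

def missing_pairs(tests, domains):
    """Value pairs that no executed test has covered yet.

    `domains` is a list of each parameter's possible values."""
    wanted = {(i, a, j, b)
              for i, j in combinations(range(len(domains)), 2)
              for a, b in product(domains[i], domains[j])}
    seen = {(i, t[i], j, t[j])
            for t in tests
            for i, j in combinations(range(len(t)), 2)}
    return wanted - seen

# Toy two-parameter example (Operating System x Directions Type):
domains = [["Windows 10", "Mac OS"], ["Walking", "Car"]]
executed = [("Windows 10", "Walking"), ("Mac OS", "Car")]
gaps = missing_pairs(executed, domains)
print(gaps)  # Windows 10+Car and Mac OS+Walking remain untested
```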

Mind maps can be exported from this Hexawise model to facilitate stakeholder discussions.

Hexawise supports exporting in several different formats. Mind maps can be a great option if a tester wants quick, actionable guidance from stakeholders about which variation ideas should (or should not) be included. Mind maps quickly demonstrate to stakeholders that the test designers have thought clearly about the testing objectives, and they give stakeholders an opportunity to provide useful feedback far more quickly than having them read through long documents filled with test scripts.

Detailed test scripts (complete with stepped-out tester instructions and rule-generated Expected Results) can be exported also:

The detailed test scripts shown above were created using Hexawise's Auto-Scripts feature.

Other possible export formats could include test "data tables" in either CSV or Excel format or even Gherkin-style formatting.

At Hexawise, we regularly customize export formats to exactly match our clients' specific formatting requirements. This can make exporting from Hexawise and importing into your customized version of Micro Focus ALM / QC very quick and easy.
