This is an overview and explanation of a set of functional tests created in Hexawise.

What are our testing objectives?

Each time someone applies for a loan, nine different test conditions are included in the test scenario. Even in this over-simplified example, there are close to 20,000 possible scenarios. In this context, we want to test this loan application process relatively thoroughly - with a manageable number of tests.
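The "close to 20,000" figure follows directly from multiplying together the number of values for each test condition. A minimal sketch, assuming (purely for illustration) that each of the nine conditions has three values; the actual counts in the sample plan may differ slightly:

```python
import math

# Hypothetical value counts for the nine loan-application test conditions.
# (These are illustrative assumptions, not the exact counts in the plan.)
value_counts = {
    "Income": 3, "Credit Rating": 3, "Loan Amount": 3,
    "Term of Loan": 3, "Loan to Value Ratio": 3, "Type of Property": 3,
    "Location of Property": 3, "Kind of Residence": 3, "Special Status": 3,
}

total_scenarios = math.prod(value_counts.values())
print(total_scenarios)  # 3^9 = 19,683 -- "close to 20,000"
```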

We know that testing each item in our system once is not sufficient; we know that interactions between the different things in our system (such as a particular credit rating range interacting with a specific type of property, for example) could well cause problems. Similarly, we know that the written requirements document will be incomplete and will not identify all of those potentially troublesome interactions for us. As thoughtful test designers, we want to be smart and systematic about testing for potential problems caused by interactions without going off the deep end and trying to test every possible combination.

Hexawise makes it quick and simple for us to select an appropriate set of tests whatever time pressure might exist on the project or whatever testing thoroughness requirements we might have. Hexawise-generated tests automatically maximize variation, maximize testing thoroughness, and minimize wasteful repetition.

What interesting Hexawise features are highlighted in this sample plan description?

This sample plan write up includes descriptions of the following features:

Forced Interactions - How to force certain high-priority combinations to appear in your set of scenarios.

Auto-Scripting - How to save time by generating detailed test scripts in the precise format you require (semi-)automatically.

Coverage Graphs - How to get fact-based insights into "how much testing is enough?"

Matrix Charts - How to tell which exact coverage gaps would exist in our testing if we were to stop executing tests at any point in time before the final Hexawise-generated test

Using Hexawise's "Coverage Dial" - How to generate sets of thorough 2-way tests and/or extremely thorough 3-way tests in seconds


With particular emphasis on this "test design superpower" feature:

Risk-Based Testing / Mixed-Strength Test Generation - How we focus extra testing thoroughness SELECTIVELY on high-priority interactions we identify

What interesting test design considerations are raised in this particular sample plan?

Equivalence Classes are used to define many of the values in this plan. It is worth pointing out that the plan includes examples of several different strategies to treat the Equivalence Classes. The choice of strategy used has subtle impacts on how the tests are created and written.

  • First Strategy  - General Descriptive Terms (used for "Income") - Simply describe the different ranges of income with the Values of "High," "Medium," and "Low."

  • Second Strategy - Value Ranges (used for "Loan Amount"): 10,000 - 99,999 and 100,000 - 199,999 and 200,000 - 500,000.

  • Third Strategy - Descriptive Terms with Pre-Defined Values (used for "Credit Rating" and also in "Location of Property") - Use Value Expansions to assign specific numbers for each range.

Which strategy is best to use? It depends on your goals. Here are considerations:

  • "General Descriptive Terms" will result in tester instructions like: enter a high income... Potential disadvantage: is this enough guidance for testers who might execute the tests later?

  • "Value Ranges" works well when you have equivalence classes defined by numerical ranges. If there are different rules for small loans than large loans, and small loans are defined as loans of anywhere between 10,000 and 99,999, you can enter 10,000 - 99,999. The Hexawise algorithm is smart enough to generate tests for you that automatically cover the valid boundary values in your ranges (e.g., highest and lowest values as well as values randomly selected from inside the range).

  • "Descriptive Terms with Predefined Value Expansions" would provide the execution team with more precise instructions than the 1st strategy (e.g., instead of Region 1, instructions would be more specific, such as Big City in Region 1 or Small Town in Region 1).
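The boundary-value behavior described under "Value Ranges" can be sketched in a few lines. The function below is our own illustration (not Hexawise's actual algorithm) of drawing the lowest value, the highest value, and randomly selected interior values from a range such as 10,000 - 99,999:

```python
import random

# Illustrative sketch: sample a numeric equivalence-class range so that the
# valid boundary values are always included, plus some interior values.
def sample_range(low, high, interior_samples=1, seed=None):
    rng = random.Random(seed)
    values = [low, high]  # always cover the lowest and highest valid values
    values += [rng.randint(low + 1, high - 1) for _ in range(interior_samples)]
    return values

samples = sample_range(10_000, 99_999, interior_samples=2, seed=42)
# samples[0] == 10_000 and samples[1] == 99_999; the rest fall strictly inside
```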

It is also worth highlighting that no matter whether you want to execute a set of a few dozen, a few hundred, or a few thousand tests, Hexawise can generate a prioritized / optimized set of tests for your thoroughness and timing needs.

Consider additional variation ideas by asking the "newspaper questions" about our verb and noun - who, what, when, why, where, how, how many?

Designing powerful software tests requires people to think carefully about potential inputs into the system being tested, and about how those potential inputs might impact the behavior of the system. As described in this blog post, we strongly encourage test designers to start with a verb and a noun to frame a sensible scope for a set of tests and then ask the "newspaper reporter" questions of who? what? when? where? why? how? and how many?


Who is applying for the loan and what characteristics do they have that will impact how the System Under Test behaves? In particular...

  • Regarding the applicant's income, how high is it?

  • Regarding the applicant's credit rating, how high is it?

  • Does the applicant have some special status (such as employee of the bank or VIP customer)?

How Many / How Large

  • How long will the duration of the loan be?

  • How much money is being borrowed?

  • What is the ratio of the amount borrowed as compared to the value of the property?

What Kind / Where

  • What type of property will be purchased with the loan? (e.g., House, Apartment, Condominium)

  • Where is the property located?

  • What kind of residence is the property? (e.g., Primary property that the borrower will live in, an investment property that the investor will try to rent out, a vacation property)

Variation Ideas entered into Hexawise's Parameters screen

Asking the newspaper questions described above is useful for understanding the potential ways the system under test might behave.

Once we have decided which test conditions are important enough to include in this model (and excluded things - like "First Name" and "Last Name" in this example - that will not impact how the system being tested operates), Hexawise makes it quick and easy to systematically create powerful scenarios that will allow us to maximize our test execution efficiency.

Once we enter our parameters into Hexawise, we simply click on the "Scenarios" link in the left navigation pane.

Hexawise helps us identify a set of high priority scenarios within seconds

The coverage achieved in the 17 tests above is known as pairwise testing coverage (or 2-way interaction coverage). Hexawise-generated pairwise tests have been proven in many contexts and types of testing to deliver large thoroughness and efficiency benefits compared to sets of hand-selected scenarios.
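To see why pairwise coverage is achievable in so few tests, it helps to count how many distinct pairs of Values need covering. A rough sketch, again assuming (for illustration) nine parameters with three values each:

```python
from itertools import combinations
from math import comb

# Assumed model: nine parameters, three values each (illustrative counts).
value_counts = [3] * 9

# Every pair of parameters contributes |values_a| * |values_b| value pairs.
total_pairs = sum(a * b for a, b in combinations(value_counts, 2))

# A single test covers exactly one value pair per pair of parameters.
pairs_per_test = comb(len(value_counts), 2)

print(total_pairs)                         # 324 distinct value pairs to cover
print(pairs_per_test)                      # 36 pairs covered by any one test
print(-(-total_pairs // pairs_per_test))   # 9 -- theoretical lower bound on tests
```

The theoretical minimum of 9 is rarely achievable in practice because pairs overlap across tests in constrained ways, which is why an optimized set lands at a small number like 17 rather than at the bound itself.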

Hexawise gives test designers control over how thorough they want their testing coverage to be. As in this case, Hexawise allows testers to quickly generate dozens, hundreds, or thousands of tests using Hexawise's "coverage dial." If you have very little time for test execution, you would find those 17 pairwise tests to be dramatically more thorough than a similar number of tests you might select by hand. If you had a lot more time for testing, you could quickly generate a set of even more thorough 3-way tests (as shown in the screen shot immediately below).

For more detailed explanations describing the approach Hexawise uses to maximize variation, maximize coverage, and minimize wasteful repetition in test sets, please see this image-heavy introductory presentation, this 3-page article on Combinatorial Testing (published by IEEE), and/or this detailed explanation comparing the differences between 2-way coverage and 3-way coverage.

Selecting "3-way interactions" generates a longer set of tests which cover every single possible "triplet" of Values

Hexawise generates and displays this extremely thorough set of 63 three-way tests to you within a few seconds. This set of 3-way coverage strength tests would be dramatically more thorough than the sets of manually selected test scenarios typically used by large global firms when they test their systems.

The only defects that could sneak by this set of tests would be these two kinds:

  • 1st type - Defects that were triggered by things not included in your test inputs at all (e.g., if special business rules should be applied to an applicant living in Syria, that business rule would not be tested because that test input was never included in the test model at all). This risk is always present every time you design software tests, whether or not you use Hexawise.

    This risk is, in our experience, much larger than the second type of risk:

  • 2nd type - Extraordinarily unusual defects that would be triggered if and only if 4 or more specific test conditions all appeared together in the same scenario. E.g., if the only way a defect occurred was if an applicant with a (i) Low income and a (ii) High credit rating, applied for a loan for an (iii) apartment, and the borrower was planning on using the apartment as a (iv) rental property. It is extremely rare for defects to require 4 or more specific test inputs to appear together. Many testers test software for years without seeing such a defect. More details are available here.
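The rarity of 4-way defects has a combinatorial flip side: the number of tuples that would need covering grows steeply with interaction strength. Assuming, as before, nine parameters with three values each:

```python
from math import comb

# Illustrative model: nine parameters, three values each.
params, values = 9, 3

# Number of distinct t-way value tuples that a t-way test set must cover.
tuple_counts = {t: comb(params, t) * values ** t for t in (2, 3, 4)}

print(tuple_counts)
# {2: 324, 3: 2268, 4: 10206} -- coverage targets grow rapidly with strength t
```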

If a tester spent a few days trying to select tests by hand that achieved 100% coverage of every single possible "triplet" of Values (such as, e.g., (i) 30 year term of loan, and (ii) a large (200,000 - 500,000) loan, and (iii) for a property in Region 3), the following results would probably occur:

  • It would take far longer for a tester to attempt to select a similarly thorough set of tests and the tester would accidentally leave many, many coverage gaps.

  • The tester trying to select tests by hand to match this extremely high "all triples" thoroughness level would create far more than 63 tests (which is the optimized solution, shown above).

  • Almost certainly, if the tester tried to achieve this coverage goal in 100 or fewer tests, there would be many, many gaps in coverage (e.g., 3-way combinations of Values that the tester accidentally forgot to include).

  • Finally, unlike the Hexawise-generated tests which systematically minimize wasteful repetition, many of the tester's hand-selected scenarios would probably be highly repetitive from one test to the next; that wasteful repetition would result in lots of wasted effort in the test execution phase.


We can force specific scenarios to appear in tests and/or prevent "impossible to test for" combinations from appearing

We easily forced a few high priority scenarios to appear by using Hexawise's "Forced Interactions" feature:

You'll notice from the screen shots of 2-way tests and 3-way tests shown above that some of the Values in both sets of tests are bolded. Those bolded Values are the ones we "forced" Hexawise to include by using this feature.

Auto-scripting allows us to almost instantly convert tables of optimized test conditions (shown above on the "Scenarios" tab screen shots) into detailed test scripts (shown below in the screen shot of an Excel file)

The Auto-scripting feature saves testers a lot of time by partially automating the process of documenting detailed, stepped-out test scripts.

We document a single test script in detail from the beginning to end. As we do so, we indicate where our variables (such as, "Term of Loan," and "Loan Amount," and "Loan to Value Ratio") are in each sentence. That's it. As soon as we document a single test in this way, we're ready to export every one of our tests.

From there, Hexawise automatically modifies the single template test script we create and inserts the appropriate Values into every test in our plan (whether our plan has 10 tests or 1,000).
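The substitution step can be pictured as simple template filling. The placeholder names and step text below are our own illustration, not Hexawise's internal format:

```python
# One stepped-out template script with placeholders for the plan's variables;
# each generated scenario fills in its own Values. (Illustrative sketch only.)
template = (
    "Step 4: Enter a loan amount of {loan_amount} with a term of "
    "{term_of_loan} and a loan-to-value ratio of {loan_to_value}."
)

scenarios = [
    {"loan_amount": "200,000 - 500,000", "term_of_loan": "15 years", "loan_to_value": "80%"},
    {"loan_amount": "10,000 - 99,999", "term_of_loan": "30 years", "loan_to_value": "95%"},
]

scripts = [template.format(**scenario) for scenario in scenarios]
print(scripts[0])
```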

We can even add simple Expected Results to our detailed test scripts

If you describe Expected Results like the one above on the "Manual Auto-Scripts" screen, Hexawise will automatically add them into every applicable step in every applicable scenario in your plan. Because we entered this Expected Result, every test in this plan will show it after test step 15.

It is possible to create simple rules using the drop down menu that will determine when a given Expected Result should appear. To do so, we would use the drop down menus in this feature to create simple rules such as "When ____ is ___ and when ____ is not ____, then the Expected Result would be _____."

This Expected Results feature makes it easy to maintain test sets over time because rules-based Expected Results in tests will automatically update and adjust as test sets get changed over time.
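The rules-based behavior can be pictured as a predicate evaluated against each generated scenario; because the rule is re-evaluated whenever the test set changes, the Expected Results stay correct over time. The rule and result text below are invented for illustration:

```python
# Illustrative sketch of a rules-based Expected Result:
# "When Credit Rating is Low and when Income is not High,
#  then the Expected Result is 'Application referred for manual review.'"
def expected_result(scenario):
    if scenario["Credit Rating"] == "Low" and scenario["Income"] != "High":
        return "Application referred for manual review."
    return "Application processed automatically."

print(expected_result({"Credit Rating": "Low", "Income": "Medium"}))
```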

Coverage graphs allow teams to make fact-based decisions about "how much testing is enough?"

After executing the first 13 tests of this plan's 2-way set of tests, 95.5% of all possible "pairs" of Values that exist within the system will have been tested together. After all 17 tests, every possible "pair" of Values in the system will have been tested together (100% coverage).

This chart, and the additional charts shown below, provide teams with insights about "how much testing is enough?" And they clearly show that the amount of learning / amount of coverage that would be gained from executing the tests at the beginning of test sets is much higher than the learning and coverage gained by executing those tests toward the end of the test set. As explained here, this type of "diminishing marginal return" is very often the case with scientifically optimized test sets such as these.

Hexawise tests are always ordered to maximize the testing coverage achieved in however much time there is available to test. Testers should generally execute the tests in the order that they are listed in Hexawise; doing this allows testers to stop testing after any test with the confidence that they have covered as much as possible in the time allowed.

We know we would achieve 95.5% coverage of the pairs in the system if we stopped testing after test number 13, but which specific coverage gaps would exist at that point? See the matrix chart below for that information.

The matrix coverage chart tells us exactly which coverage gaps would exist if we stopped executing tests before the end of the test set

The matrix chart above shows every specific pair of values that would not yet have been tested together if we were to stop testing after test number 13.

For example, in the first 13 tests, there is no scenario that includes both (a) loan amount of 200,000 - 500,000 together with (b) "Term of Loan = 15 years."
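The computation behind a matrix chart of this kind can be sketched with set arithmetic over a toy model. The parameters, values, and executed tests below are illustrative, not the full sample plan:

```python
from itertools import combinations

# Toy three-parameter model (illustrative only).
parameters = {
    "Loan Amount": ["small", "medium", "large"],
    "Term of Loan": ["15 years", "30 years"],
    "Region": ["1", "2", "3"],
}

# Pretend we stopped after executing only these two tests.
executed_tests = [
    {"Loan Amount": "large", "Term of Loan": "30 years", "Region": "1"},
    {"Loan Amount": "small", "Term of Loan": "15 years", "Region": "3"},
]

def pairs_in(test):
    # Every pair of (parameter, value) assignments exercised by one test.
    return {frozenset([(p1, test[p1]), (p2, test[p2])])
            for p1, p2 in combinations(test, 2)}

covered = set().union(*(pairs_in(t) for t in executed_tests))

all_pairs = {frozenset([(p1, v1), (p2, v2)])
             for p1, p2 in combinations(parameters, 2)
             for v1 in parameters[p1] for v2 in parameters[p2]}

gaps = all_pairs - covered
# e.g., ("Loan Amount", "large") with ("Term of Loan", "15 years") is a gap here
```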

We can also analyze coverage on the extremely thorough set of 3-way tests we created.

After executing the first 30 scenarios of this plan's 3-way set, 88.4% of all possible "triplets" of Values that exist within the system will have been tested together. After all 63 tests, every possible "triplet" of Values in the system will have been tested together (100% coverage).

This chart provides teams with insights about "how much testing is enough?" And it clearly shows that the amount of learning / amount of coverage that would be gained from executing the tests at the beginning of the test set is much higher than the learning and coverage gained by executing those tests toward the end of the test set. As explained here, this type of "diminishing marginal return" is very often the case with scientifically optimized test sets such as these.


Risk-Based Testing Feature - With "Mixed-Strength Test Generation," we can focus extra-thorough coverage selectively on the high-priority interactions in our System Under Test

Some interactions in our system are more important to test thoroughly with one another than other interactions are. That's almost always the case, isn't it?

In this example, we want to test every single possible combination involving (a) Different Income levels, (b) Different Credit Rating levels, and (c) different Regions. That's because each of the Regions has different weightings it applies to incomes and credit ratings. Stakeholders (and loud, bossy, hostile ones, at that) have made it extremely clear that, whatever else we do, we NEED to be VERY sure to test EVERY last one of these 27 "high priority" 3-way interactions. We can't forget to test any of them.
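The 27 high-priority combinations are simply the cross product of the three parameters' values (assuming three levels for each, as in this plan):

```python
from itertools import product

# Every Income level crossed with every Credit Rating level crossed with
# every Region: 3 x 3 x 3 = 27 high-priority 3-way combinations.
incomes = ["High", "Medium", "Low"]
credit_ratings = ["High", "Medium", "Low"]
regions = ["Region 1", "Region 2", "Region 3"]

high_priority_triplets = list(product(incomes, credit_ratings, regions))
print(len(high_priority_triplets))  # 27
```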

If you're thinking "Easy! Let's just execute all 63 tests from our 3-way test set!" you're on the right track. It would be nice to have that luxury, but life is filled with setbacks and disappointments, isn't it? We've only got half as much time to test now as originally planned. We need to achieve 100% coverage of these 27 super high priority combinations and also achieve 100% coverage of every possible pair of test inputs.

No problem. We can achieve both of those goals here with Mixed-Strength Scenario Generation!

We select "Mixed-strength interactions" from the drop down menu.

We mark our high priority columns with 3's above them to ensure we'll generate scenarios for all 27 of those targeted, high-priority 3-way combinations.

And click on "Reapply." A couple of seconds later, we see the test scenarios that meet all of our objectives.

Booya! Newly-updated plan with: (a) 1/2 as many tests as our 3-way plan had, (b) 100% coverage of our 27 high-priority triplets, AND (c) 100% coverage of all the pairs of Values!

That is what test design awesomesauce looks like. Enjoy your new superpowers!

Mind maps can be exported from this Hexawise plan to facilitate stakeholder discussions.

Hexawise supports exporting in several different formats. Mind maps can be a great option if a tester wants to get quick, actionable guidance from stakeholders about which test inputs should (or should not) be included. Mind maps quickly demonstrate to stakeholders that the test designers have thought about the testing objectives clearly, and they give stakeholders an opportunity to provide useful feedback more quickly as compared to having stakeholders read through long documents filled with test scripts.

Detailed test scripts (complete with stepped-out tester instructions and rule-generated Expected Results) can be exported also:

The detailed test scripts shown above were created using Hexawise's Auto-Scripts feature.

Other possible export formats could include test "data tables" in either CSV or Excel format or even Gherkin-style formatting.

At Hexawise, we regularly customize export formats to exactly match our clients' specific formatting requirements. This can make exporting from Hexawise and importing into your customized version of Micro Focus ALM / QC very quick and easy.
