As you may remember from the certification course, we suggest drawing a line between system inputs & expected outcomes, and then handling the latter in the scripts or even after export.

That approach is preferable in most situations, and you are likely well aware of how the data-driven Expected Results in Manual auto-scripts facilitate it. However, there are a couple of alternatives to keep in mind.

Hexawise Automate

If you are scripting in Automate, there are two primary ways to address the difference in outcomes: 1) conditional steps and 2) subsetting.

1) Assuming approval from the automation engineers, you can use the following kind of syntax to specify the unique validations:


The left part of the equation is dynamic and changes based on the scenario. The right part is fixed and determines the step verbiage. If the equation evaluates to TRUE, the step is performed; otherwise it is skipped. Multiple conditions can be strung together with AND, OR, etc. This approach is preferable when the number of unique steps is low compared to the overall length of the script.
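For instance, a conditional validation could look roughly like the sketch below. The parameter names, values, and step wording are purely illustrative, and the exact conditional syntax may differ in your version of Automate:

```gherkin
Then <Payment Method> = "Credit card" Verify that the card authorization screen is displayed
Then <Payment Method> = "Gift card" AND <Balance> = "Insufficient" Verify that the top-up prompt is displayed
```

Each scenario would execute only the validation whose condition matches its combination of values.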

2) If that number is high, it is better to create multiple Scenario blocks using the {} syntax.


Within each block, you have the flexibility to write a completely different script if necessary. Depending on the overall length, you may consider putting each block on a separate tab in Automate and placing the subsetting {} value in the Background section.
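As a rough illustration (the scenario names and {} values below are hypothetical), the subsetting could look like this:

```gherkin
Scenario: Standard checkout {Registered user}
  Given the shopper is signed in
  ...

Scenario: Guest checkout {Guest user}
  Given the shopper proceeds without an account
  ...
```

Each block then applies only to the subset of scenarios carrying the matching value.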

Side note: for the avoidance of doubt, these two methods are not limited to Expected Results/"Then" statements and can be used for any steps.

Forced Interactions

Another feature is the internal variable <Expected Outcome> (at the bottom of the dropdown list), which comes from the Forced Interactions tab.

First, you specify the conditions (even a single one is enough).

Second, the algorithm populates the extra column on Scenarios.

Lastly, the column is referenced in the Automate script (the variable is NOT available in the Manual option).
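A minimal sketch of that last step, with hypothetical step wording:

```gherkin
Then verify that the system displays <Expected Outcome>
```

For each scenario, the variable resolves to the value populated in the extra column.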

Benefits:

  • No need to repeat the scenario blocks, which saves time on the creation & maintenance of hardcoded text.

  • Simple dependencies are easy to implement and review with the stakeholders.

  • If you switch the overall execution approach to just the data table, you will already have the expected result populated there as well.

Downsides:

  • For complex dependencies, extra intervention in the algorithm could reduce efficiency and unnecessarily inflate the number of scenarios.

  • If the whole column is not populated, the script will get the value "No expected outcome" for blanks, which a) could cause confusion; b) would need to be treated as "skip the step" by the automation framework.

    You could side-step this challenge with the freeze/reimport combination.

  • This approach does not support differences across multiple steps, only within a single line.
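On the framework side, the "skip the step" treatment of blanks mentioned above can be sketched as a small helper. This is a hypothetical illustration — the function name and return convention are not part of Hexawise, and your framework's hooks will differ:

```python
# "No expected outcome" is the placeholder the script receives for blank cells
# in the <Expected Outcome> column.
NO_OUTCOME = "No expected outcome"

def verify_outcome(actual: str, expected: str) -> bool:
    """Return True when the validation step passes.

    The blank-cell placeholder is treated as "skip the step", so it
    passes unconditionally instead of failing on a literal comparison.
    """
    if expected == NO_OUTCOME:
        return True  # nothing to validate for this scenario
    return actual == expected

print(verify_outcome("Approved", "Approved"))   # outcomes match
print(verify_outcome("Declined", NO_OUTCOME))   # blank cell -> skipped
print(verify_outcome("Declined", "Approved"))   # genuine mismatch
```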

Expected Results as Parameters

Finally, let's talk about why & how we can deviate from the training recommendation.

Why: multiple simple dependencies throughout the flow (i.e., you can have multiple action-result parameter pairs to use in multiple steps).

This approach can be used for a single dependency as well, in which case the only difference from Expected Outcome is that the parameter requires constraints instead of forced interactions.

The downsides are the potential complexity of constraints and the "cluttered" artifacts like Mind Map and Parameter/Scenarios tables.

How

Step 1: Add a parameter with the possible outcomes as values

Step 2: Tie the trigger values to the respective outcome with constraints

Step 3: Reference the parameter in the script
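Putting the three steps together, a hypothetical example (the parameter names, values, and constraints below are illustrative):

```gherkin
# Step 1: parameter "Discount Message" holds the possible outcomes:
#   "10% off applied", "Code not recognized"
# Step 2: constraints tie each trigger value to its outcome, e.g.
#   Coupon Code = "SAVE10" always pairs with Discount Message = "10% off applied"
#   Coupon Code = "BOGUS"  always pairs with Discount Message = "Code not recognized"
# Step 3: the result parameter is referenced in the script:

When the shopper applies coupon <Coupon Code>
Then the cart displays the message <Discount Message>
```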

Side note: the same approach can also be used for any kind of conversion testing, when input A1 enters the system and is supposed to be transformed into B1.

As you probably guessed, this approach would struggle with dependencies across 3 or more "actual" parameters. Some of the challenge can again be mitigated using the freeze/reimport combination.

Conclusion

While the approach recommended in the certification is generally the most applicable, it is important to understand the alternatives and evaluate which method (or combination of methods) would work best in your test design tasks. If you run into any issues with the implementation of expected results, do not hesitate to reach out to us at support@hexawise.com or via the chatbot in the bottom right.
