Reputation: 26982
I'm looking to run behave feature tests repeatedly, but each run with different parameters, a bit like pytest's parametrize https://docs.pytest.org/en/latest/reference.html#pytest-mark-parametrize-ref
I can't find anything that suggests this can be done within a single run of behave. Does it have to be done externally, e.g. via a bash script that calls behave multiple times, passing parameters into each run using, for example, userdata http://behave.readthedocs.io/en/latest/behave.html?highlight=userdata#cmdoption-define , or is there an alternative?
The parameters themselves are only discovered dynamically at runtime, so all the tests need to be run once for each dynamically determined parameter set.
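For context, the external approach I have in mind would look roughly like this, with a hypothetical discover_parameter_sets() helper standing in for the runtime discovery:

import subprocess

def discover_parameter_sets():
    # hypothetical: whatever runtime lookup produces the parameter sets
    return [{"region": "us-east"}, {"region": "eu-west"}]

for params in discover_parameter_sets():
    # pass each parameter to behave as userdata via -D name=value
    defines = [arg for key, value in params.items()
               for arg in ("-D", f"{key}={value}")]
    subprocess.run(["behave", *defines, "features/"], check=True)

Inside the steps the values would then be read back from context.config.userdata.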
Upvotes: 4
Views: 2717
Reputation: 17602
Pretty much all of the Gherkin-syntax BDD tools, like Behave and Cucumber, support a feature called "Scenario Outline", which should do what you want. From the examples in the behave tutorial:
Feature: Scenario Outline (tutorial04)

  Scenario Outline: Use Blender with <thing>
    Given I put "<thing>" in a blender
    When I switch the blender on
    Then it should transform into "<other thing>"

    Examples: Amphibians
      | thing         | other thing |
      | Red Tree Frog | mush        |
      | apples        | apple juice |

    Examples: Consumer Electronics
      | thing         | other thing |
      | iPhone        | toxic waste |
      | Galaxy Nexus  | toxic waste |
And to implement the steps:
from behave import given

@given('I put "{thing}" in a blender')
def step_given_put_thing_into_blender(context, thing):
    # Blender is the tutorial's example domain class, not part of behave
    context.blender = Blender()
    context.blender.add(thing)
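For completeness, the matching when/then steps would look roughly like this (Blender and its switch_on()/result members are just the tutorial's example domain, not part of behave itself):

from behave import when, then

@when('I switch the blender on')
def step_when_switch_blender_on(context):
    context.blender.switch_on()

@then('it should transform into "{other_thing}"')
def step_then_it_should_transform_into(context, other_thing):
    assert context.blender.result == other_thing

behave expands the outline into one scenario per Examples row, substituting each <thing>/<other thing> pair, so every row runs as its own scenario.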
Pretty simple!
Upvotes: 5