Reputation: 332
I've got a Python 3.4 project with tests built with the behave framework (version 1.2.5). When I run the tests, I get several hundred lines of output, most of it describing steps that passed without any problems. When a scenario fails, I have to scroll through all this output looking for the failure (which is easy to spot because it's red while the passing steps are green, but I still have to hunt for it).
Is there a way to make behave show output only for failing scenarios? Ideally, I'd get the output from all failing scenarios plus the summary at the end of how many features/scenarios/steps passed/failed/skipped. I'd also be content if it printed everything but put all the failures at the bottom.
I've run behave --help and looked through this website, but didn't find anything relevant. And yet, surely I'm not the first person to be annoyed by this, and I imagine there's some way to do it. Thanks for the help!
Edit: the --quiet flag simplifies the output, but does not remove it. For example, this output:
Scenario Outline: Blank key identification -- @1.3 blank checks # tests/features/tkg.feature:15
Given we have pages with the wrong checksum # tests/features/steps/tkg_tests.py:30 0.000s
When we check if the key is blank # tests/features/steps/tkg_tests.py:50 0.000s
Then it is not blank # tests/features/steps/tkg_tests.py:55 0.000s
when run with the --quiet flag becomes:
Scenario Outline: Blank key identification -- @1.3 blank checks
Given we have pages with the wrong checksum # 0.000s
When we check if the key is blank # 0.000s
Then it is not blank # 0.000s
but it's still the same number of lines.
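For reference, the trimmed output above comes from an invocation along these lines (my reconstruction; the feature path matches the step locations in the untrimmed output, the rest of the command is an assumption):

behave --quiet tests/features/tkg.feature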
Upvotes: 5
Views: 3036