=================================== FAILURES ===================================
E Failed: fnmatch: '*'
E and: ' -x, --exitfirst Exit instantly on first error or failed test'
E and: ' --pdb Start the interactive Python debugger on errors or KeyboardInterrupt'
E and: ' --runxfail Report the results of xfail tests as if they were not marked'
E and: ' --lf, --last-failed Rerun only the tests that failed at the last run (or all if none failed)'
E and: ' --ff, --failed-first Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.'
E and: ' --lfnf={all,none}, --last-failed-no-failures={all,none}'
E and: ' With ``--lf``, determines whether to execute tests when there are no previously (known) failures or when no cached ``lastfailed`` data was found. ``all`` (the default) runs the full test suite again. ``none`` just emits a message about no known failures and exits successfully.'
E and: ' --sw, --stepwise Exit on test failure and continue from last failing test next time'
E and: ' Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise.'
E and: " -r chars Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE')."
E and: ' --xfail-tb Show tracebacks for xfail (as long as --tb != no)'
E and: ' Controls how captured stdout/stderr/log is shown on failed tests. Default: all.'
E and: ' --pastebin=mode Send failed|all info to bpaste.net pastebin service'
E and: ' --maxfail=num Exit after first num failures or errors'
E and: ' --strict-config Any warnings encountered while parsing the `pytest` section of the configuration file raise errors'
E and: ' --strict-markers Markers not registered in the `markers` section of the configuration file raise errors'
E and: ' --continue-on-collection-errors'
E and: ' Force test execution even if collection errors occur'
E and: ' --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}'
E and: ' Choose another output format for diffs on doctest failure'
E and: ' --doctest-ignore-import-errors'
E and: ' Ignore doctest collection errors'
E and: ' --doctest-continue-on-failure'
E and: ' For a given doctest, continue to run after the first failure'
E and: ' Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.'
E nomatch: ' --benchmark-compare-fail=EXPR?[[]EXPR?...[]]'
E fnmatch: ' --benchmark-compare-fail=EXPR?[[]EXPR?...[]]'
E with: ' --benchmark-compare-fail=EXPR [EXPR ...]'
E and: ' Fail test if performance regresses according to given EXPR (eg: min:5% or mean:0.001 for number of seconds). Can be used multiple times.'
E and: ' -f, --looponfail Run tests in subprocess: wait for files to be modified, then re-run failing test set until all pass.'
E and: ' xfail_strict (bool): Default for the strict parameter of xfail markers when not given explicitly (default: False)'
E and: ' Controls which directories created by the `tmp_path` fixture are kept around, based on test outcome. (all/failed/none)'
E and: ' Specify a verbosity level for assertions, overriding the main level. Higher levels will provide more detailed explanation when an assertion fails.'
E and: ' looponfailroots (paths):'
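For reference, the bracketed pattern in the nomatch/fnmatch/with lines above is fnmatch syntax, not a regex: '[[]' is a one-character set matching a literal '[', '[]]' likewise matches a literal ']', and '?' matches any single character. This can be checked standalone with the stdlib, using the exact strings from the trace:

    from fnmatch import fnmatch

    # '?' matches any single character (the spaces in the actual line);
    # '[[]' is a character set containing only '[', i.e. a literal '[',
    # and '[]]' likewise matches a literal ']'.
    pattern = "  --benchmark-compare-fail=EXPR?[[]EXPR?...[]]"
    actual = "  --benchmark-compare-fail=EXPR [EXPR ...]"
    assert fnmatch(actual, pattern)  # the pair the trace reports as matched

So that pattern did match; the pattern that ultimately failed to match is not visible in the truncated trace.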
FAILED tests/test_benchmark.py::test_help - Failed: fnmatch: '*'
============ 1 failed, 201 passed, 19 skipped in 166.85s (0:02:46) =============
>>> ERROR: py3-pytest-benchmark: check failed
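For context, test_help in pytest-benchmark's tests/test_benchmark.py is a help-screen test: it presumably runs pytest --help through pytest's pytester machinery and asserts expected lines with fnmatch_lines. A minimal sketch of a test of that shape (the pattern list is illustrative, not the plugin's actual expected output):

    # Sketch of a help-screen test; requires pytest >= 6.2 for the
    # pytester fixture (older suites use the equivalent testdir fixture).
    pytest_plugins = ["pytester"]  # enables the pytester fixture


    def test_help(pytester):
        result = pytester.runpytest("--help")
        # Patterns are consumed in order; each must fnmatch some later
        # stdout line, or the whole assertion fails as above.
        result.stdout.fnmatch_lines([
            "*",
            "*--benchmark-compare-fail=EXPR?[[]EXPR?...[]]*",
            "*-f, --looponfail*",
        ])

Because such patterns mirror live --help text, a wording change in pytest or in a co-installed plugin (here, the xdist/looponfail section) can break the match; that version skew is a common cause of this kind of packaging check failure.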