>>> py3-pytest-benchmark: Building main/py3-pytest-benchmark 4.0.0-r4 (using abuild 3.15.0-r0) started Wed, 10 Sep 2025 06:57:58 +0000
>>> py3-pytest-benchmark: Validating /home/udu/aports/main/py3-pytest-benchmark/APKBUILD...
>>> py3-pytest-benchmark: Analyzing dependencies...
>>> py3-pytest-benchmark: Installing for build: build-base python3 py3-pytest py3-py-cpuinfo py3-gpep517 py3-setuptools py3-wheel py3-pytest-xdist py3-freezegun py3-pygal py3-pygaljs py3-elasticsearch
WARNING: opening /home/udu/packages//main: No such file or directory
(1/41) Installing py3-iniconfig (2.1.0-r0)
(2/41) Installing py3-iniconfig-pyc (2.1.0-r0)
(3/41) Installing py3-parsing (3.2.3-r0)
(4/41) Installing py3-parsing-pyc (3.2.3-r0)
(5/41) Installing py3-packaging (25.0-r0)
(6/41) Installing py3-packaging-pyc (25.0-r0)
(7/41) Installing py3-pluggy (1.5.0-r0)
(8/41) Installing py3-pluggy-pyc (1.5.0-r0)
(9/41) Installing py3-py (1.11.0-r4)
(10/41) Installing py3-py-pyc (1.11.0-r4)
(11/41) Installing py3-pytest (8.3.5-r0)
(12/41) Installing py3-pytest-pyc (8.3.5-r0)
(13/41) Installing py3-py-cpuinfo (9.0.0-r4)
(14/41) Installing py3-py-cpuinfo-pyc (9.0.0-r4)
(15/41) Installing py3-installer (0.7.0-r2)
(16/41) Installing py3-installer-pyc (0.7.0-r2)
(17/41) Installing py3-gpep517 (19-r0)
(18/41) Installing py3-gpep517-pyc (19-r0)
(19/41) Installing py3-setuptools (80.9.0-r0)
(20/41) Installing py3-setuptools-pyc (80.9.0-r0)
(21/41) Installing py3-wheel (0.46.1-r0)
(22/41) Installing py3-wheel-pyc (0.46.1-r0)
(23/41) Installing py3-execnet (2.1.1-r0)
(24/41) Installing py3-execnet-pyc (2.1.1-r0)
(25/41) Installing py3-pytest-xdist (3.6.1-r0)
(26/41) Installing py3-pytest-xdist-pyc (3.6.1-r0)
(27/41) Installing py3-six (1.17.0-r0)
(28/41) Installing py3-six-pyc (1.17.0-r0)
(29/41) Installing py3-dateutil (2.9.0-r1)
(30/41) Installing py3-dateutil-pyc (2.9.0-r1)
(31/41) Installing py3-freezegun (1.5.1-r0)
(32/41) Installing py3-freezegun-pyc (1.5.1-r0)
(33/41) Installing py3-pygal (3.0.0-r5)
(34/41) Installing py3-pygal-pyc (3.0.0-r5)
(35/41) Installing py3-pygaljs (1.0.2-r4)
(36/41) Installing py3-pygaljs-pyc (1.0.2-r4)
(37/41) Installing py3-urllib3 (1.26.20-r0)
(38/41) Installing py3-urllib3-pyc (1.26.20-r0)
(39/41) Installing py3-elasticsearch (7.11.0-r4)
(40/41) Installing py3-elasticsearch-pyc (7.11.0-r4)
(41/41) Installing .makedepends-py3-pytest-benchmark (20250910.065759)
Executing busybox-1.37.0-r19.trigger
OK: 316 MiB in 124 packages
>>> py3-pytest-benchmark: Cleaning up srcdir
>>> py3-pytest-benchmark: Cleaning up pkgdir
>>> py3-pytest-benchmark: Cleaning up tmpdir
>>> py3-pytest-benchmark: Fetching https://github.com/ionelmc/pytest-benchmark/archive/v4.0.0/pytest-benchmark-4.0.0.tar.gz
>>> py3-pytest-benchmark: Fetching py3-pytest-benchmark-4.0.0-py3.11.patch::https://github.com/ionelmc/pytest-benchmark/commit/b2f624afd68a3090f20187a46284904dd4baa4f6.patch
>>> py3-pytest-benchmark: Fetching py3-pytest-benchmark-4.0.0-tests.patch::https://github.com/ionelmc/pytest-benchmark/commit/2b987f5be1873617f02f24cb6d76196f9aed21bd.patch
>>> py3-pytest-benchmark: Fetching https://github.com/ionelmc/pytest-benchmark/archive/v4.0.0/pytest-benchmark-4.0.0.tar.gz
>>> py3-pytest-benchmark: Fetching py3-pytest-benchmark-4.0.0-py3.11.patch::https://github.com/ionelmc/pytest-benchmark/commit/b2f624afd68a3090f20187a46284904dd4baa4f6.patch
>>> py3-pytest-benchmark: Fetching py3-pytest-benchmark-4.0.0-tests.patch::https://github.com/ionelmc/pytest-benchmark/commit/2b987f5be1873617f02f24cb6d76196f9aed21bd.patch
>>>
py3-pytest-benchmark: Checking sha512sums... pytest-benchmark-4.0.0.tar.gz: OK py3-pytest-benchmark-4.0.0-py3.11.patch: OK py3-pytest-benchmark-4.0.0-tests.patch: OK >>> py3-pytest-benchmark: Unpacking /var/cache/distfiles/pytest-benchmark-4.0.0.tar.gz... >>> py3-pytest-benchmark: py3-pytest-benchmark-4.0.0-py3.11.patch patching file src/pytest_benchmark/compat.py patching file src/pytest_benchmark/utils.py >>> py3-pytest-benchmark: py3-pytest-benchmark-4.0.0-tests.patch patching file tests/test_benchmark.py 2025-09-10 06:57:59,689 gpep517 INFO Building wheel via backend setuptools.build_meta:__legacy__ /usr/lib/python3.12/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated. !! ******************************************************************************** Please consider removing the following classifiers in favor of a SPDX license expression: License :: OSI Approved :: BSD License See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! self._finalize_license_expression() 2025-09-10 06:57:59,702 root INFO running bdist_wheel 2025-09-10 06:57:59,712 root INFO running build 2025-09-10 06:57:59,712 root INFO running build_py 2025-09-10 06:57:59,714 root INFO creating build/lib/pytest_benchmark 2025-09-10 06:57:59,714 root INFO copying src/pytest_benchmark/table.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,714 root INFO copying src/pytest_benchmark/histogram.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/session.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/plugin.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/utils.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/stats.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/hookspec.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/fixture.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/compat.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/timers.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/logger.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/__init__.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,715 root INFO copying src/pytest_benchmark/csv.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,716 root INFO copying src/pytest_benchmark/__main__.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,716 root INFO copying src/pytest_benchmark/cli.py -> build/lib/pytest_benchmark 2025-09-10 06:57:59,716 root INFO creating build/lib/pytest_benchmark/storage 2025-09-10 06:57:59,716 root INFO copying src/pytest_benchmark/storage/__init__.py -> build/lib/pytest_benchmark/storage 2025-09-10 06:57:59,716 root INFO copying src/pytest_benchmark/storage/elasticsearch.py -> build/lib/pytest_benchmark/storage 2025-09-10 06:57:59,716 root INFO copying src/pytest_benchmark/storage/file.py -> build/lib/pytest_benchmark/storage 2025-09-10 06:57:59,716 root INFO running egg_info 2025-09-10 06:57:59,718 root INFO creating src/pytest_benchmark.egg-info 2025-09-10 06:57:59,718 root INFO writing 
src/pytest_benchmark.egg-info/PKG-INFO 2025-09-10 06:57:59,719 root INFO writing dependency_links to src/pytest_benchmark.egg-info/dependency_links.txt 2025-09-10 06:57:59,719 root INFO writing entry points to src/pytest_benchmark.egg-info/entry_points.txt 2025-09-10 06:57:59,719 root INFO writing requirements to src/pytest_benchmark.egg-info/requires.txt 2025-09-10 06:57:59,719 root INFO writing top-level names to src/pytest_benchmark.egg-info/top_level.txt 2025-09-10 06:57:59,719 root INFO writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' 2025-09-10 06:57:59,721 root INFO reading manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' 2025-09-10 06:57:59,721 root INFO reading manifest template 'MANIFEST.in' 2025-09-10 06:57:59,722 root WARNING warning: no previously-included files matching '*.py[cod]' found anywhere in distribution 2025-09-10 06:57:59,722 root WARNING warning: no previously-included files matching '__pycache__/*' found anywhere in distribution 2025-09-10 06:57:59,722 root WARNING warning: no previously-included files matching '*.so' found anywhere in distribution 2025-09-10 06:57:59,723 root WARNING warning: no previously-included files matching '*.dylib' found anywhere in distribution 2025-09-10 06:57:59,723 root INFO adding license file 'LICENSE' 2025-09-10 06:57:59,723 root INFO adding license file 'AUTHORS.rst' 2025-09-10 06:57:59,724 root INFO writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' 2025-09-10 06:57:59,733 root INFO installing to build/bdist.linux-x86_64/wheel 2025-09-10 06:57:59,733 root INFO running install 2025-09-10 06:57:59,738 root INFO running install_lib 2025-09-10 06:57:59,740 root INFO creating build/bdist.linux-x86_64/wheel 2025-09-10 06:57:59,740 root INFO creating build/bdist.linux-x86_64/wheel/pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/table.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/histogram.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/session.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/plugin.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/utils.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/stats.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/hookspec.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/fixture.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/compat.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/timers.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/logger.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,740 root INFO copying build/lib/pytest_benchmark/__init__.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,741 root INFO copying build/lib/pytest_benchmark/csv.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 
06:57:59,741 root INFO copying build/lib/pytest_benchmark/__main__.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,741 root INFO copying build/lib/pytest_benchmark/cli.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark 2025-09-10 06:57:59,741 root INFO creating build/bdist.linux-x86_64/wheel/pytest_benchmark/storage 2025-09-10 06:57:59,741 root INFO copying build/lib/pytest_benchmark/storage/__init__.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark/storage 2025-09-10 06:57:59,741 root INFO copying build/lib/pytest_benchmark/storage/elasticsearch.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark/storage 2025-09-10 06:57:59,741 root INFO copying build/lib/pytest_benchmark/storage/file.py -> build/bdist.linux-x86_64/wheel/./pytest_benchmark/storage 2025-09-10 06:57:59,741 root INFO running install_egg_info 2025-09-10 06:57:59,743 root INFO Copying src/pytest_benchmark.egg-info to build/bdist.linux-x86_64/wheel/./pytest_benchmark-4.0.0-py3.12.egg-info 2025-09-10 06:57:59,743 root INFO running install_scripts 2025-09-10 06:57:59,744 root INFO creating build/bdist.linux-x86_64/wheel/pytest_benchmark-4.0.0.dist-info/WHEEL 2025-09-10 06:57:59,744 wheel INFO creating '/home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0/.dist/.tmp-6ke6317y/pytest_benchmark-4.0.0-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it 2025-09-10 06:57:59,744 wheel INFO adding 'pytest_benchmark/__init__.py' 2025-09-10 06:57:59,744 wheel INFO adding 'pytest_benchmark/__main__.py' 2025-09-10 06:57:59,744 wheel INFO adding 'pytest_benchmark/cli.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/compat.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/csv.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/fixture.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/histogram.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/hookspec.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/logger.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/plugin.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/session.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/stats.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/table.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/timers.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/utils.py' 2025-09-10 06:57:59,745 wheel INFO adding 'pytest_benchmark/storage/__init__.py' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark/storage/elasticsearch.py' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark/storage/file.py' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/licenses/AUTHORS.rst' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/licenses/LICENSE' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/METADATA' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/WHEEL' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/entry_points.txt' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/top_level.txt' 2025-09-10 06:57:59,746 wheel INFO adding 'pytest_benchmark-4.0.0.dist-info/RECORD' 2025-09-10 06:57:59,746 root INFO removing build/bdist.linux-x86_64/wheel 2025-09-10 06:57:59,747 gpep517 INFO The backend produced .dist/pytest_benchmark-4.0.0-py3-none-any.whl 
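The wheel build above and the test run below are driven by the package's APKBUILD, which is not shown in this log. As a rough sketch only, the usual aports pattern for py3-* packages that matches the gpep517 backend call, the .dist/ wheel directory and the .testenv/ virtualenv seen in this log would look like the following; the real main/py3-pytest-benchmark recipe may differ.

    # Sketch, not the actual APKBUILD -- inferred from the .dist/ and .testenv/ paths in this log.
    build() {
        # PEP 517 wheel build via gpep517, matching the
        # "Building wheel via backend setuptools.build_meta:__legacy__" entry above
        gpep517 build-wheel \
            --wheel-dir .dist \
            --output-fd 3 3>&1 >&2
    }

    check() {
        # throwaway venv that reuses the installed makedepends, then run the test suite
        python3 -m venv --clear --without-pip --system-site-packages .testenv
        .testenv/bin/python3 -m installer .dist/*.whl
        .testenv/bin/python3 -m pytest
    }
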
pytest_benchmark-4.0.0-py3-none-any.whl /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0/.testenv/lib/python3.12/site-packages/pytest_benchmark/logger.py:46: PytestBenchmarkWarning: Benchmarks are automatically disabled because xdist plugin is active.Benchmarks cannot be performed reliably in a parallelized environment. warner(PytestBenchmarkWarning(text)) ============================= test session starts ============================== platform linux -- Python 3.12.11, pytest-8.3.5, pluggy-1.5.0 benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0 configfile: pytest.ini plugins: benchmark-4.0.0, xdist-3.6.1 created: 20/20 workers 20 workers [221 items] ..F..................................................................... [ 32%] ................s....................................................... [ 65%] ....sssssss.s.s.s..sss.s..sss...s....................................... [ 97%] ..... [100%] =================================== FAILURES =================================== __________________________________ test_help ___________________________________ [gw0] linux -- Python 3.12.11 /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0/.testenv/bin/python3 /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0/tests/test_benchmark.py:12: in test_help result.stdout.fnmatch_lines([ E Failed: fnmatch: '*' E with: 'usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]' E fnmatch: '*' E with: '' E nomatch: 'benchmark:' E and: 'positional arguments:' E and: ' file_or_dir' E and: '' E and: 'general:' E and: " -k EXPRESSION Only run tests which match the given substring expression. An expression is a Python evaluable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains" E and: " 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their" E and: " 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive." E and: " -m MARKEXPR Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'." E and: ' --markers show markers (builtin, plugin and per-project ones).' 
E and: ' -x, --exitfirst Exit instantly on first error or failed test' E and: ' --fixtures, --funcargs' E and: " Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')" E and: ' --fixtures-per-test Show fixtures per test' E and: ' --pdb Start the interactive Python debugger on errors or KeyboardInterrupt' E and: ' --pdbcls=modulename:classname' E and: ' Specify a custom interactive Python debugger for use with --pdb.For example: --pdbcls=IPython.terminal.debugger:TerminalPdb' E and: ' --trace Immediately break when running each test' E and: ' --capture=method Per-test capturing method: one of fd|sys|no|tee-sys' E and: ' -s Shortcut for --capture=no' E and: ' --runxfail Report the results of xfail tests as if they were not marked' E and: ' --lf, --last-failed Rerun only the tests that failed at the last run (or all if none failed)' E and: ' --ff, --failed-first Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.' E and: ' --nf, --new-first Run tests from new files first, then the rest of the tests sorted by file mtime' E and: ' --cache-show=[CACHESHOW]' E and: " Show cache contents, don't perform collection or tests. Optional argument: glob (default: '*')." E and: ' --cache-clear Remove all cache contents at start of test run' E and: ' --lfnf={all,none}, --last-failed-no-failures={all,none}' E and: ' With ``--lf``, determines whether to execute tests when there are no previously (known) failures or when no cached ``lastfailed`` data was found. ``all`` (the default) runs the full test suite again. ``none`` just emits a message about no known failures and exits successfully.' E and: ' --sw, --stepwise Exit on test failure and continue from last failing test next time' E and: ' --sw-skip, --stepwise-skip' E and: ' Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise.' E and: '' E and: 'Reporting:' E and: ' --durations=N Show N slowest setup/test durations (N=0 for all)' E and: ' --durations-min=N Minimal duration in seconds for inclusion in slowest list. Default: 0.005.' E and: ' -v, --verbose Increase verbosity' E and: ' --no-header Disable header' E and: ' --no-summary Disable summary' E and: ' --no-fold-skipped Do not fold skipped tests in short summary.' E and: ' -q, --quiet Decrease verbosity' E and: ' --verbosity=VERBOSE Set verbosity. Default: 0.' E and: " -r chars Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE')." E and: ' --disable-warnings, --disable-pytest-warnings' E and: ' Disable warnings summary' E and: ' -l, --showlocals Show locals in tracebacks (disabled by default)' E and: ' --no-showlocals Hide locals in tracebacks (negate --showlocals passed through addopts)' E and: ' --tb=style Traceback print mode (auto/long/short/line/native/no)' E and: ' --xfail-tb Show tracebacks for xfail (as long as --tb != no)' E and: ' --show-capture={no,stdout,stderr,log,all}' E and: ' Controls how captured stdout/stderr/log is shown on failed tests. Default: all.' E and: " --full-trace Don't cut any tracebacks (default is to cut)" E and: ' --color=color Color terminal output (yes/no/auto)' E and: ' --code-highlight={yes,no}' E and: ' Whether code should be highlighted (only if --color is also enabled). 
Default: yes.' E and: ' --pastebin=mode Send failed|all info to bpaste.net pastebin service' E and: ' --junit-xml=path Create junit-xml style report file at given path' E and: ' --junit-prefix=str Prepend prefix to classnames in junit-xml output' E and: '' E and: 'pytest-warnings:' E and: ' -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS' E and: ' Set which warnings to report, see -W option of Python itself' E and: ' --maxfail=num Exit after first num failures or errors' E and: ' --strict-config Any warnings encountered while parsing the `pytest` section of the configuration file raise errors' E and: ' --strict-markers Markers not registered in the `markers` section of the configuration file raise errors' E and: ' --strict (Deprecated) alias to --strict-markers' E and: ' -c FILE, --config-file=FILE' E and: ' Load configuration from `FILE` instead of trying to locate one of the implicit configuration files.' E and: ' --continue-on-collection-errors' E and: ' Force test execution even if collection errors occur' E and: " --rootdir=ROOTDIR Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'." E and: '' E and: 'collection:' E and: " --collect-only, --co Only collect tests, don't execute them" E and: ' --pyargs Try to interpret all arguments as Python packages' E and: ' --ignore=path Ignore path during collection (multi-allowed)' E and: ' --ignore-glob=path Ignore path pattern during collection (multi-allowed)' E and: ' --deselect=nodeid_prefix' E and: ' Deselect item (via node id prefix) during collection (multi-allowed)' E and: " --confcutdir=dir Only load conftest.py's relative to specified dir" E and: " --noconftest Don't load any conftest.py files" E and: ' --keep-duplicates Keep duplicate tests' E and: ' --collect-in-virtualenv' E and: " Don't ignore tests in a local virtualenv directory" E and: ' --import-mode={prepend,append,importlib}' E and: ' Prepend/append to sys.path when importing test modules and conftest files. Default: prepend.' E and: ' --doctest-modules Run doctests in all .py modules' E and: ' --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}' E and: ' Choose another output format for diffs on doctest failure' E and: ' --doctest-glob=pat Doctests file matching pattern, default: test*.txt' E and: ' --doctest-ignore-import-errors' E and: ' Ignore doctest collection errors' E and: ' --doctest-continue-on-failure' E and: ' For a given doctest, continue to run after the first failure' E and: '' E and: 'test session debugging and configuration:' E and: ' --basetemp=dir Base temporary directory for this test run. (Warning: this directory is removed if it exists.)' E and: ' -V, --version Display pytest version and information about plugins. When given twice, also display information about plugins.' E and: ' -h, --help Show help message and configuration info' E and: ' -p name Early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.' E and: ' --trace-config Trace considerations of conftest.py files' E and: ' --debug=[DEBUG_FILE_NAME]' E and: " Store internal tracing debug information in this log file. This file is opened with 'w' and truncated as a result, care advised. Default: pytestdebug.log." E and: ' -o OVERRIDE_INI, --override-ini=OVERRIDE_INI' E and: ' Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.' 
E and: ' --assert=MODE Control assertion debugging tools.' E and: " 'plain' performs no assertion debugging." E and: " 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information." E and: ' --setup-only Only setup fixtures, do not execute tests' E and: ' --setup-show Show setup of fixtures while executing tests' E and: " --setup-plan Show what fixtures and tests would be executed but don't execute anything" E and: '' E and: 'logging:' E and: ' --log-level=LEVEL Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler\'s effective level, where it is "WARNING" by default.' E and: ' --log-format=LOG_FORMAT' E and: ' Log format used by the logging module' E and: ' --log-date-format=LOG_DATE_FORMAT' E and: ' Log date format used by the logging module' E and: ' --log-cli-level=LOG_CLI_LEVEL' E and: ' CLI logging level' E and: ' --log-cli-format=LOG_CLI_FORMAT' E and: ' Log format used by the logging module' E and: ' --log-cli-date-format=LOG_CLI_DATE_FORMAT' E and: ' Log date format used by the logging module' E and: ' --log-file=LOG_FILE Path to a file when logging will be written to' E and: ' --log-file-mode={w,a}' E and: ' Log file open mode' E and: ' --log-file-level=LOG_FILE_LEVEL' E and: ' Log file logging level' E and: ' --log-file-format=LOG_FILE_FORMAT' E and: ' Log format used by the logging module' E and: ' --log-file-date-format=LOG_FILE_DATE_FORMAT' E and: ' Log date format used by the logging module' E and: ' --log-auto-indent=LOG_AUTO_INDENT' E and: ' Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.' E and: ' --log-disable=LOGGER_DISABLE' E and: ' Disable a logger by name. Can be passed multiple times.' E and: '' E exact match: 'benchmark:' E exact match: ' --benchmark-min-time=SECONDS' E fnmatch: " *Default: '0.000005'" E with: " Minimum time per round in seconds. Default: '0.000005'" E exact match: ' --benchmark-max-time=SECONDS' E fnmatch: " *Default: '1.0'" E with: " Maximum run time per test - it will be repeated until this total time is reached. It may be exceeded if test function is very slow or --benchmark-min-rounds is large (it takes precedence). Default: '1.0'" E exact match: ' --benchmark-min-rounds=NUM' E fnmatch: ' *Default: 5' E with: ' Minimum rounds, even if total time would exceed `--max-time`. Default: 5' E exact match: ' --benchmark-timer=FUNC' E nomatch: ' --benchmark-calibration-precision=NUM' E and: " Timer to use when measuring time. Default: 'time.perf_counter'" E exact match: ' --benchmark-calibration-precision=NUM' E fnmatch: ' *Default: 10' E with: ' Precision to use when calibrating number of iterations. Precision of 10 will make the timer look 10 times more accurate, at a cost of less precise measure of deviations. Default: 10' E exact match: ' --benchmark-warmup=[KIND]' E nomatch: ' --benchmark-warmup-iterations=NUM' E and: " Activates warmup. Will run the test function up to number of times in the calibration phase. See `--benchmark-warmup-iterations`. Note: Even the warmup phase obeys --benchmark-max-time. Available KIND: 'auto', 'off', 'on'. Default: 'auto' (automatically activate on PyPy)." E exact match: ' --benchmark-warmup-iterations=NUM' E fnmatch: ' *Default: 100000' E with: ' Max number of iterations to run in the warmup phase. Default: 100000' E exact match: ' --benchmark-disable-gc' E nomatch: ' --benchmark-skip *' E and: ' Disable GC during benchmarks.' 
E fnmatch: ' --benchmark-skip *' E with: ' --benchmark-skip Skip running any tests that contain benchmarks.' E nomatch: ' --benchmark-only *' E and: " --benchmark-disable Disable benchmarks. Benchmarked functions are only ran once and no stats are reported. Use this is you want to run the test but don't do any benchmarking." E and: ' --benchmark-enable Forcibly enable benchmarks. Use this option to override --benchmark-disable (in case you have it in pytest configuration).' E fnmatch: ' --benchmark-only *' E with: ' --benchmark-only Only run benchmarks. This overrides --benchmark-skip.' E exact match: ' --benchmark-save=NAME' E nomatch: ' --benchmark-autosave *' E and: " Save the current run into 'STORAGE-PATH/counter_NAME.json'." E fnmatch: ' --benchmark-autosave *' E with: " --benchmark-autosave Autosave the current run into 'STORAGE-PATH/counter_unversioned_20250910_065802.json" E exact match: ' --benchmark-save-data' E nomatch: ' --benchmark-json=PATH' E and: ' Use this to make --benchmark-save and --benchmark-autosave include all the timing data, not just the stats.' E exact match: ' --benchmark-json=PATH' E nomatch: ' --benchmark-compare=[NUM|_ID]' E and: ' Dump a JSON report into PATH. Note that this will include the complete data (all the timings, not just the stats).' E exact match: ' --benchmark-compare=[NUM|_ID]' E nomatch: ' --benchmark-compare-fail=EXPR?[[]EXPR?...[]]' E and: ' Compare the current run against run NUM (or prefix of _id in elasticsearch) or the latest saved run if unspecified.' E fnmatch: ' --benchmark-compare-fail=EXPR?[[]EXPR?...[]]' E with: ' --benchmark-compare-fail=EXPR [EXPR ...]' E nomatch: ' --benchmark-cprofile=COLUMN' E and: ' Fail test if performance regresses according to given EXPR (eg: min:5% or mean:0.001 for number of seconds). Can be used multiple times.' E exact match: ' --benchmark-cprofile=COLUMN' E nomatch: ' --benchmark-storage=URI' E and: " If specified measure one run with cProfile and stores 25 top functions. Argument is a column to sort by. Available columns: 'ncallls_recursion', 'ncalls', 'tottime', 'tottime_per', 'cumtime', 'cumtime_per', 'function_name'." E exact match: ' --benchmark-storage=URI' E nomatch: " *Default: 'file://./.benchmarks'." E and: ' Specify a path to store the runs as uri in form file://path or elasticsearch+http[s]://host1,host2/[index/doctype?project_name=Project] (when --benchmark-save or --benchmark-autosave are used). For backwards compatibility unexpected values are converted to file://. Default:' E and: " 'file://./.benchmarks'." E and: ' --benchmark-netrc=[BENCHMARK_NETRC]' E and: " Load elasticsearch credentials from a netrc file. Default: ''." E and: ' --benchmark-verbose Dump diagnostic and progress information.' E and: ' --benchmark-quiet Disable reporting. Verbose mode takes precedence.' E and: " --benchmark-sort=COL Column to sort on. Can be one of: 'min', 'max', 'mean', 'stddev', 'name', 'fullname'. Default: 'min'" E and: ' --benchmark-group-by=LABEL' E and: " How to group tests. Can be one of: 'group', 'name', 'fullname', 'func', 'fullfunc', 'param' or 'param:NAME', where NAME is the name passed to @pytest.parametrize. Default: 'group'" E and: ' --benchmark-columns=LABELS' E and: " Comma-separated list of columns to show in the result table. Default: 'min, max, mean, stddev, median, iqr, outliers, ops, rounds, iterations'" E and: ' --benchmark-name=FORMAT' E and: " How to format names in results. Can be one of 'short', 'normal', 'long', or 'trial'. 
Default: 'normal'" E and: ' --benchmark-histogram=[FILENAME-PREFIX]' E and: " Plot graphs of min/max/avg/stddev over time in FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX contains slashes ('/') then directories will be created. Default: 'benchmark_20250910_065802'" E and: '' E and: 'distributed and subprocess testing:' E and: ' -n numprocesses, --numprocesses=numprocesses' E and: " Shortcut for '--dist=load --tx=NUM*popen'." E and: " With 'logical', attempt to detect logical CPU count (requires psutil, falls back to 'auto')." E and: " With 'auto', attempt to detect physical CPU count. If physical CPU count cannot be determined, falls back to 1." E and: ' Forced to 0 (disabled) when used with --pdb.' E and: ' --maxprocesses=maxprocesses' E and: " Limit the maximum number of workers to process the tests when using --numprocesses with 'auto' or 'logical'" E and: ' --max-worker-restart=MAXWORKERRESTART' E and: ' Maximum number of workers that can be restarted when crashed (set to zero to disable this feature)' E and: ' --dist=distmode Set mode for distributing tests to exec environments.' E and: ' each: Send each test to all available environments.' E and: ' load: Load balance by sending any pending test to any available environment.' E and: ' loadscope: Load balance by sending pending groups of tests in the same scope to any available environment.' E and: ' loadfile: Load balance by sending test grouped by file to any available environment.' E and: " loadgroup: Like 'load', but sends tests marked with 'xdist_group' to the same worker." E and: ' worksteal: Split the test suite between available environments, then re-balance when any worker runs out of tests.' E and: " (default) no: Run tests inprocess, don't distribute." E and: ' --tx=xspec Add a test execution environment. Some examples:' E and: ' --tx popen//python=python2.5 --tx socket=192.168.1.102:8888' E and: ' --tx ssh=user@codespeak.net//chdir=testcache' E and: " -d Load-balance tests. Shortcut for '--dist=load'." E and: ' --rsyncdir=DIR Add directory for rsyncing to remote tx nodes' E and: ' --rsyncignore=GLOB Add expression for ignores when rsyncing to remote tx nodes' E and: ' --testrunuid=TESTRUNUID' E and: " Provide an identifier shared amongst all workers as the value of the 'testrun_uid' fixture." E and: " If not provided, 'testrun_uid' is filled with a new unique string on every test run." E and: ' --maxschedchunk=MAXSCHEDCHUNK' E and: ' Maximum number of tests scheduled in one step for --dist=load.' E and: ' Setting it to 1 will force pytest to send tests to workers one by one - might be useful for a small number of slow tests.' E and: ' Larger numbers will allow the scheduler to submit consecutive chunks of tests to workers - allows reusing fixtures.' E and: ' Due to implementation reasons, at least 2 tests are scheduled per worker at the start. Only later tests can be scheduled one by one.' E and: ' Unlimited if not set.' E and: ' -f, --looponfail Run tests in subprocess: wait for files to be modified, then re-run failing test set until all pass.' 
E and: '' E and: '[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:' E and: '' E and: ' markers (linelist): Register new markers for test functions' E and: ' empty_parameter_set_mark (string):' E and: ' Default marker for empty parametersets' E and: ' norecursedirs (args): Directory patterns to avoid for recursion' E and: ' testpaths (args): Directories to search for tests when no files or directories are given on the command line' E and: ' filterwarnings (linelist):' E and: ' Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.' E and: ' consider_namespace_packages (bool):' E and: ' Consider namespace packages when resolving module names during import' E and: ' usefixtures (args): List of default fixtures to be used with this project' E and: ' python_files (args): Glob-style file patterns for Python test module discovery' E and: ' python_classes (args):' E and: ' Prefixes or glob names for Python test class discovery' E and: ' python_functions (args):' E and: ' Prefixes or glob names for Python test function and method discovery' E and: ' disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):' E and: ' Disable string escape non-ASCII characters, might cause unwanted side effects(use at your own risk)' E and: ' console_output_style (string):' E and: ' Console output: "classic", or with additional progress information ("progress" (percentage) | "count" | "progress-even-when-capture-no" (forces progress even when capture=no)' E and: ' verbosity_test_cases (string):' E and: ' Specify a verbosity level for test case execution, overriding the main level. Higher levels will provide more detailed information about each test case executed.' E and: ' xfail_strict (bool): Default for the strict parameter of xfail markers when not given explicitly (default: False)' E and: ' tmp_path_retention_count (string):' E and: ' How many sessions should we keep the `tmp_path` directories, according to `tmp_path_retention_policy`.' E and: ' tmp_path_retention_policy (string):' E and: ' Controls which directories created by the `tmp_path` fixture are kept around, based on test outcome. (all/failed/none)' E and: ' enable_assertion_pass_hook (bool):' E and: ' Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files.' E and: ' verbosity_assertions (string):' E and: ' Specify a verbosity level for assertions, overriding the main level. Higher levels will provide more detailed explanation when an assertion fails.' 
E and: ' junit_suite_name (string):' E and: ' Test suite name for JUnit report' E and: ' junit_logging (string):' E and: ' Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all' E and: ' junit_log_passing_tests (bool):' E and: ' Capture log information for passing tests to JUnit report:' E and: ' junit_duration_report (string):' E and: ' Duration time to report: one of total|call' E and: ' junit_family (string):' E and: ' Emit XML for schema: one of legacy|xunit1|xunit2' E and: ' doctest_optionflags (args):' E and: ' Option flags for doctests' E and: ' doctest_encoding (string):' E and: ' Encoding used for doctest files' E and: ' cache_dir (string): Cache directory path' E and: ' log_level (string): Default value for --log-level' E and: ' log_format (string): Default value for --log-format' E and: ' log_date_format (string):' E and: ' Default value for --log-date-format' E and: ' log_cli (bool): Enable log display during test run (also known as "live logging")' E and: ' log_cli_level (string):' E and: ' Default value for --log-cli-level' E and: ' log_cli_format (string):' E and: ' Default value for --log-cli-format' E and: ' log_cli_date_format (string):' E and: ' Default value for --log-cli-date-format' E and: ' log_file (string): Default value for --log-file' E and: ' log_file_mode (string):' E and: ' Default value for --log-file-mode' E and: ' log_file_level (string):' E and: ' Default value for --log-file-level' E and: ' log_file_format (string):' E and: ' Default value for --log-file-format' E and: ' log_file_date_format (string):' E and: ' Default value for --log-file-date-format' E and: ' log_auto_indent (string):' E and: ' Default value for --log-auto-indent' E and: ' pythonpath (paths): Add paths to sys.path' E and: ' faulthandler_timeout (string):' E and: ' Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish' E and: ' addopts (args): Extra command line options' E and: ' minversion (string): Minimally required pytest version' E and: ' required_plugins (args):' E and: ' Plugins that must be present for pytest to run' E and: ' rsyncdirs (paths): list of (relative) paths to be rsynced for remote distributed testing.' E and: ' rsyncignore (paths): list of (relative) glob-style paths to be ignored for rsyncing.' E and: ' looponfailroots (paths):' E and: ' directories to check for changes. Default: current directory.' E and: '' E and: 'Environment variables:' E and: ' CI When set (regardless of value), pytest knows it is running in a CI process and does not truncate summary info' E and: ' BUILD_NUMBER Equivalent to CI' E and: ' PYTEST_ADDOPTS Extra command line options' E and: ' PYTEST_PLUGINS Comma-separated plugins to load during startup' E and: ' PYTEST_DISABLE_PLUGIN_AUTOLOAD Set to disable plugin auto-loading' E and: " PYTEST_DEBUG Set to enable debug tracing of pytest's internals" E and: '' E and: '' E and: 'to see available markers type: pytest --markers' E and: 'to see available fixtures type: pytest --fixtures' E and: "(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option" E remains unmatched: " *Default: 'file://./.benchmarks'." 
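The only expectation that fails above is the single unmatched pattern " *Default: 'file://./.benchmarks'.". test_help assumes the --benchmark-storage default lands on one help line, but with pytest 8.3.5 the help text wraps so that "Default:" ends one line and "'file://./.benchmarks'." starts the next (visible in the captured --help output below), so the fnmatch never succeeds. The tests patch applied during unpack evidently does not adjust this expectation. A quick spot-check of the wrapping, assuming the srcdir, its .testenv and the build dependencies from this run are still in place:

    # Hypothetical spot-check: show where pytest 8.x wraps the --benchmark-storage default
    cd /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0
    .testenv/bin/python3 -m pytest --help | grep -B1 "file://./.benchmarks"
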
----------------------------- Captured stdout call ----------------------------- running: /home/udu/aports/main/py3-pytest-benchmark/src/pytest-benchmark-4.0.0/.testenv/bin/python3 -mpytest --basetemp=/tmp/pytest-of-udu/pytest-0/popen-gw0/test_help0/runpytest-0 --help in: /tmp/pytest-of-udu/pytest-0/popen-gw0/test_help0 usage: __main__.py [options] [file_or_dir] [file_or_dir] [...] positional arguments: file_or_dir general: -k EXPRESSION Only run tests which match the given substring expression. An expression is a Python evaluable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive. -m MARKEXPR Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'. --markers show markers (builtin, plugin and per-project ones). -x, --exitfirst Exit instantly on first error or failed test --fixtures, --funcargs Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v') --fixtures-per-test Show fixtures per test --pdb Start the interactive Python debugger on errors or KeyboardInterrupt --pdbcls=modulename:classname Specify a custom interactive Python debugger for use with --pdb.For example: --pdbcls=IPython.terminal.debugger:TerminalPdb --trace Immediately break when running each test --capture=method Per-test capturing method: one of fd|sys|no|tee-sys -s Shortcut for --capture=no --runxfail Report the results of xfail tests as if they were not marked --lf, --last-failed Rerun only the tests that failed at the last run (or all if none failed) --ff, --failed-first Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown. --nf, --new-first Run tests from new files first, then the rest of the tests sorted by file mtime --cache-show=[CACHESHOW] Show cache contents, don't perform collection or tests. Optional argument: glob (default: '*'). --cache-clear Remove all cache contents at start of test run --lfnf={all,none}, --last-failed-no-failures={all,none} With ``--lf``, determines whether to execute tests when there are no previously (known) failures or when no cached ``lastfailed`` data was found. ``all`` (the default) runs the full test suite again. ``none`` just emits a message about no known failures and exits successfully. --sw, --stepwise Exit on test failure and continue from last failing test next time --sw-skip, --stepwise-skip Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise. Reporting: --durations=N Show N slowest setup/test durations (N=0 for all) --durations-min=N Minimal duration in seconds for inclusion in slowest list. Default: 0.005. -v, --verbose Increase verbosity --no-header Disable header --no-summary Disable summary --no-fold-skipped Do not fold skipped tests in short summary. -q, --quiet Decrease verbosity --verbosity=VERBOSE Set verbosity. Default: 0. 
-r chars Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE'). --disable-warnings, --disable-pytest-warnings Disable warnings summary -l, --showlocals Show locals in tracebacks (disabled by default) --no-showlocals Hide locals in tracebacks (negate --showlocals passed through addopts) --tb=style Traceback print mode (auto/long/short/line/native/no) --xfail-tb Show tracebacks for xfail (as long as --tb != no) --show-capture={no,stdout,stderr,log,all} Controls how captured stdout/stderr/log is shown on failed tests. Default: all. --full-trace Don't cut any tracebacks (default is to cut) --color=color Color terminal output (yes/no/auto) --code-highlight={yes,no} Whether code should be highlighted (only if --color is also enabled). Default: yes. --pastebin=mode Send failed|all info to bpaste.net pastebin service --junit-xml=path Create junit-xml style report file at given path --junit-prefix=str Prepend prefix to classnames in junit-xml output pytest-warnings: -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS Set which warnings to report, see -W option of Python itself --maxfail=num Exit after first num failures or errors --strict-config Any warnings encountered while parsing the `pytest` section of the configuration file raise errors --strict-markers Markers not registered in the `markers` section of the configuration file raise errors --strict (Deprecated) alias to --strict-markers -c FILE, --config-file=FILE Load configuration from `FILE` instead of trying to locate one of the implicit configuration files. --continue-on-collection-errors Force test execution even if collection errors occur --rootdir=ROOTDIR Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'. collection: --collect-only, --co Only collect tests, don't execute them --pyargs Try to interpret all arguments as Python packages --ignore=path Ignore path during collection (multi-allowed) --ignore-glob=path Ignore path pattern during collection (multi-allowed) --deselect=nodeid_prefix Deselect item (via node id prefix) during collection (multi-allowed) --confcutdir=dir Only load conftest.py's relative to specified dir --noconftest Don't load any conftest.py files --keep-duplicates Keep duplicate tests --collect-in-virtualenv Don't ignore tests in a local virtualenv directory --import-mode={prepend,append,importlib} Prepend/append to sys.path when importing test modules and conftest files. Default: prepend. --doctest-modules Run doctests in all .py modules --doctest-report={none,cdiff,ndiff,udiff,only_first_failure} Choose another output format for diffs on doctest failure --doctest-glob=pat Doctests file matching pattern, default: test*.txt --doctest-ignore-import-errors Ignore doctest collection errors --doctest-continue-on-failure For a given doctest, continue to run after the first failure test session debugging and configuration: --basetemp=dir Base temporary directory for this test run. (Warning: this directory is removed if it exists.) -V, --version Display pytest version and information about plugins. When given twice, also display information about plugins. 
-h, --help Show help message and configuration info -p name Early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`. --trace-config Trace considerations of conftest.py files --debug=[DEBUG_FILE_NAME] Store internal tracing debug information in this log file. This file is opened with 'w' and truncated as a result, care advised. Default: pytestdebug.log. -o OVERRIDE_INI, --override-ini=OVERRIDE_INI Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`. --assert=MODE Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information. --setup-only Only setup fixtures, do not execute tests --setup-show Show setup of fixtures while executing tests --setup-plan Show what fixtures and tests would be executed but don't execute anything logging: --log-level=LEVEL Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default. --log-format=LOG_FORMAT Log format used by the logging module --log-date-format=LOG_DATE_FORMAT Log date format used by the logging module --log-cli-level=LOG_CLI_LEVEL CLI logging level --log-cli-format=LOG_CLI_FORMAT Log format used by the logging module --log-cli-date-format=LOG_CLI_DATE_FORMAT Log date format used by the logging module --log-file=LOG_FILE Path to a file when logging will be written to --log-file-mode={w,a} Log file open mode --log-file-level=LOG_FILE_LEVEL Log file logging level --log-file-format=LOG_FILE_FORMAT Log format used by the logging module --log-file-date-format=LOG_FILE_DATE_FORMAT Log date format used by the logging module --log-auto-indent=LOG_AUTO_INDENT Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer. --log-disable=LOGGER_DISABLE Disable a logger by name. Can be passed multiple times. benchmark: --benchmark-min-time=SECONDS Minimum time per round in seconds. Default: '0.000005' --benchmark-max-time=SECONDS Maximum run time per test - it will be repeated until this total time is reached. It may be exceeded if test function is very slow or --benchmark-min-rounds is large (it takes precedence). Default: '1.0' --benchmark-min-rounds=NUM Minimum rounds, even if total time would exceed `--max-time`. Default: 5 --benchmark-timer=FUNC Timer to use when measuring time. Default: 'time.perf_counter' --benchmark-calibration-precision=NUM Precision to use when calibrating number of iterations. Precision of 10 will make the timer look 10 times more accurate, at a cost of less precise measure of deviations. Default: 10 --benchmark-warmup=[KIND] Activates warmup. Will run the test function up to number of times in the calibration phase. See `--benchmark-warmup-iterations`. Note: Even the warmup phase obeys --benchmark-max-time. Available KIND: 'auto', 'off', 'on'. Default: 'auto' (automatically activate on PyPy). --benchmark-warmup-iterations=NUM Max number of iterations to run in the warmup phase. Default: 100000 --benchmark-disable-gc Disable GC during benchmarks. --benchmark-skip Skip running any tests that contain benchmarks. --benchmark-disable Disable benchmarks. Benchmarked functions are only ran once and no stats are reported. Use this is you want to run the test but don't do any benchmarking. --benchmark-enable Forcibly enable benchmarks. 
Use this option to override --benchmark-disable (in case you have it in pytest configuration). --benchmark-only Only run benchmarks. This overrides --benchmark-skip. --benchmark-save=NAME Save the current run into 'STORAGE-PATH/counter_NAME.json'. --benchmark-autosave Autosave the current run into 'STORAGE-PATH/counter_unversioned_20250910_065802.json --benchmark-save-data Use this to make --benchmark-save and --benchmark-autosave include all the timing data, not just the stats. --benchmark-json=PATH Dump a JSON report into PATH. Note that this will include the complete data (all the timings, not just the stats). --benchmark-compare=[NUM|_ID] Compare the current run against run NUM (or prefix of _id in elasticsearch) or the latest saved run if unspecified. --benchmark-compare-fail=EXPR [EXPR ...] Fail test if performance regresses according to given EXPR (eg: min:5% or mean:0.001 for number of seconds). Can be used multiple times. --benchmark-cprofile=COLUMN If specified measure one run with cProfile and stores 25 top functions. Argument is a column to sort by. Available columns: 'ncallls_recursion', 'ncalls', 'tottime', 'tottime_per', 'cumtime', 'cumtime_per', 'function_name'. --benchmark-storage=URI Specify a path to store the runs as uri in form file://path or elasticsearch+http[s]://host1,host2/[index/doctype?project_name=Project] (when --benchmark-save or --benchmark-autosave are used). For backwards compatibility unexpected values are converted to file://. Default: 'file://./.benchmarks'. --benchmark-netrc=[BENCHMARK_NETRC] Load elasticsearch credentials from a netrc file. Default: ''. --benchmark-verbose Dump diagnostic and progress information. --benchmark-quiet Disable reporting. Verbose mode takes precedence. --benchmark-sort=COL Column to sort on. Can be one of: 'min', 'max', 'mean', 'stddev', 'name', 'fullname'. Default: 'min' --benchmark-group-by=LABEL How to group tests. Can be one of: 'group', 'name', 'fullname', 'func', 'fullfunc', 'param' or 'param:NAME', where NAME is the name passed to @pytest.parametrize. Default: 'group' --benchmark-columns=LABELS Comma-separated list of columns to show in the result table. Default: 'min, max, mean, stddev, median, iqr, outliers, ops, rounds, iterations' --benchmark-name=FORMAT How to format names in results. Can be one of 'short', 'normal', 'long', or 'trial'. Default: 'normal' --benchmark-histogram=[FILENAME-PREFIX] Plot graphs of min/max/avg/stddev over time in FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX contains slashes ('/') then directories will be created. Default: 'benchmark_20250910_065802' distributed and subprocess testing: -n numprocesses, --numprocesses=numprocesses Shortcut for '--dist=load --tx=NUM*popen'. With 'logical', attempt to detect logical CPU count (requires psutil, falls back to 'auto'). With 'auto', attempt to detect physical CPU count. If physical CPU count cannot be determined, falls back to 1. Forced to 0 (disabled) when used with --pdb. --maxprocesses=maxprocesses Limit the maximum number of workers to process the tests when using --numprocesses with 'auto' or 'logical' --max-worker-restart=MAXWORKERRESTART Maximum number of workers that can be restarted when crashed (set to zero to disable this feature) --dist=distmode Set mode for distributing tests to exec environments. each: Send each test to all available environments. load: Load balance by sending any pending test to any available environment. 
loadscope: Load balance by sending pending groups of tests in the same scope to any available environment. loadfile: Load balance by sending test grouped by file to any available environment. loadgroup: Like 'load', but sends tests marked with 'xdist_group' to the same worker. worksteal: Split the test suite between available environments, then re-balance when any worker runs out of tests. (default) no: Run tests inprocess, don't distribute. --tx=xspec Add a test execution environment. Some examples: --tx popen//python=python2.5 --tx socket=192.168.1.102:8888 --tx ssh=user@codespeak.net//chdir=testcache -d Load-balance tests. Shortcut for '--dist=load'. --rsyncdir=DIR Add directory for rsyncing to remote tx nodes --rsyncignore=GLOB Add expression for ignores when rsyncing to remote tx nodes --testrunuid=TESTRUNUID Provide an identifier shared amongst all workers as the value of the 'testrun_uid' fixture. If not provided, 'testrun_uid' is filled with a new unique string on every test run. --maxschedchunk=MAXSCHEDCHUNK Maximum number of tests scheduled in one step for --dist=load. Setting it to 1 will force pytest to send tests to workers one by one - might be useful for a small number of slow tests. Larger numbers will allow the scheduler to submit consecutive chunks of tests to workers - allows reusing fixtures. Due to implementation reasons, at least 2 tests are scheduled per worker at the start. Only later tests can be scheduled one by one. Unlimited if not set. -f, --looponfail Run tests in subprocess: wait for files to be modified, then re-run failing test set until all pass. [pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found: markers (linelist): Register new markers for test functions empty_parameter_set_mark (string): Default marker for empty parametersets norecursedirs (args): Directory patterns to avoid for recursion testpaths (args): Directories to search for tests when no files or directories are given on the command line filterwarnings (linelist): Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings. consider_namespace_packages (bool): Consider namespace packages when resolving module names during import usefixtures (args): List of default fixtures to be used with this project python_files (args): Glob-style file patterns for Python test module discovery python_classes (args): Prefixes or glob names for Python test class discovery python_functions (args): Prefixes or glob names for Python test function and method discovery disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool): Disable string escape non-ASCII characters, might cause unwanted side effects(use at your own risk) console_output_style (string): Console output: "classic", or with additional progress information ("progress" (percentage) | "count" | "progress-even-when-capture-no" (forces progress even when capture=no) verbosity_test_cases (string): Specify a verbosity level for test case execution, overriding the main level. Higher levels will provide more detailed information about each test case executed. xfail_strict (bool): Default for the strict parameter of xfail markers when not given explicitly (default: False) tmp_path_retention_count (string): How many sessions should we keep the `tmp_path` directories, according to `tmp_path_retention_policy`. tmp_path_retention_policy (string): Controls which directories created by the `tmp_path` fixture are kept around, based on test outcome. 
(all/failed/none) enable_assertion_pass_hook (bool): Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files. verbosity_assertions (string): Specify a verbosity level for assertions, overriding the main level. Higher levels will provide more detailed explanation when an assertion fails. junit_suite_name (string): Test suite name for JUnit report junit_logging (string): Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all junit_log_passing_tests (bool): Capture log information for passing tests to JUnit report: junit_duration_report (string): Duration time to report: one of total|call junit_family (string): Emit XML for schema: one of legacy|xunit1|xunit2 doctest_optionflags (args): Option flags for doctests doctest_encoding (string): Encoding used for doctest files cache_dir (string): Cache directory path log_level (string): Default value for --log-level log_format (string): Default value for --log-format log_date_format (string): Default value for --log-date-format log_cli (bool): Enable log display during test run (also known as "live logging") log_cli_level (string): Default value for --log-cli-level log_cli_format (string): Default value for --log-cli-format log_cli_date_format (string): Default value for --log-cli-date-format log_file (string): Default value for --log-file log_file_mode (string): Default value for --log-file-mode log_file_level (string): Default value for --log-file-level log_file_format (string): Default value for --log-file-format log_file_date_format (string): Default value for --log-file-date-format log_auto_indent (string): Default value for --log-auto-indent pythonpath (paths): Add paths to sys.path faulthandler_timeout (string): Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish addopts (args): Extra command line options minversion (string): Minimally required pytest version required_plugins (args): Plugins that must be present for pytest to run rsyncdirs (paths): list of (relative) paths to be rsynced for remote distributed testing. rsyncignore (paths): list of (relative) glob-style paths to be ignored for rsyncing. looponfailroots (paths): directories to check for changes. Default: current directory. 
Environment variables: CI When set (regardless of value), pytest knows it is running in a CI process and does not truncate summary info BUILD_NUMBER Equivalent to CI PYTEST_ADDOPTS Extra command line options PYTEST_PLUGINS Comma-separated plugins to load during startup PYTEST_DISABLE_PLUGIN_AUTOLOAD Set to disable plugin auto-loading PYTEST_DEBUG Set to enable debug tracing of pytest's internals to see available markers type: pytest --markers to see available fixtures type: pytest --fixtures (shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option
=========================== short test summary info ============================
SKIPPED [1] tests/test_skip.py:5: bla
SKIPPED [2] tests/test_utils.py:60: 'git' not available on $PATH
SKIPPED [2] tests/test_utils.py:60: 'hg' not available on $PATH
SKIPPED [2] tests/test_utils.py:80: 'git' not available on $PATH
SKIPPED [2] tests/test_utils.py:80: 'hg' not available on $PATH
SKIPPED [1] tests/test_utils.py:94: 'git' not available on $PATH
SKIPPED [1] tests/test_utils.py:94: 'hg' not available on $PATH
SKIPPED [4] tests/test_utils.py:160: 'hg' not available on $PATH
SKIPPED [4] tests/test_utils.py:160: 'git' not available on $PATH
FAILED tests/test_benchmark.py::test_help - Failed: fnmatch: '*'
============ 1 failed, 201 passed, 19 skipped in 174.44s (0:02:54) =============
>>> ERROR: py3-pytest-benchmark: check failed
>>> py3-pytest-benchmark: Uninstalling dependencies...
(1/41) Purging .makedepends-py3-pytest-benchmark (20250910.065759)
(2/41) Purging py3-py-cpuinfo-pyc (9.0.0-r4)
(3/41) Purging py3-py-cpuinfo (9.0.0-r4)
(4/41) Purging py3-gpep517-pyc (19-r0)
(5/41) Purging py3-gpep517 (19-r0)
(6/41) Purging py3-installer-pyc (0.7.0-r2)
(7/41) Purging py3-installer (0.7.0-r2)
(8/41) Purging py3-wheel-pyc (0.46.1-r0)
(9/41) Purging py3-wheel (0.46.1-r0)
(10/41) Purging py3-pytest-xdist-pyc (3.6.1-r0)
(11/41) Purging py3-pytest-xdist (3.6.1-r0)
(12/41) Purging py3-execnet-pyc (2.1.1-r0)
(13/41) Purging py3-execnet (2.1.1-r0)
(14/41) Purging py3-pytest-pyc (8.3.5-r0)
(15/41) Purging py3-pytest (8.3.5-r0)
(16/41) Purging py3-iniconfig-pyc (2.1.0-r0)
(17/41) Purging py3-iniconfig (2.1.0-r0)
(18/41) Purging py3-pluggy-pyc (1.5.0-r0)
(19/41) Purging py3-pluggy (1.5.0-r0)
(20/41) Purging py3-py-pyc (1.11.0-r4)
(21/41) Purging py3-py (1.11.0-r4)
(22/41) Purging py3-freezegun-pyc (1.5.1-r0)
(23/41) Purging py3-freezegun (1.5.1-r0)
(24/41) Purging py3-dateutil-pyc (2.9.0-r1)
(25/41) Purging py3-dateutil (2.9.0-r1)
(26/41) Purging py3-six-pyc (1.17.0-r0)
(27/41) Purging py3-six (1.17.0-r0)
(28/41) Purging py3-pygal-pyc (3.0.0-r5)
(29/41) Purging py3-pygal (3.0.0-r5)
(30/41) Purging py3-setuptools-pyc (80.9.0-r0)
(31/41) Purging py3-setuptools (80.9.0-r0)
(32/41) Purging py3-packaging-pyc (25.0-r0)
(33/41) Purging py3-packaging (25.0-r0)
(34/41) Purging py3-parsing-pyc (3.2.3-r0)
(35/41) Purging py3-parsing (3.2.3-r0)
(36/41) Purging py3-pygaljs-pyc (1.0.2-r4)
(37/41) Purging py3-pygaljs (1.0.2-r4)
(38/41) Purging py3-elasticsearch-pyc (7.11.0-r4)
(39/41) Purging py3-elasticsearch (7.11.0-r4)
(40/41) Purging py3-urllib3-pyc (1.26.20-r0)
(41/41) Purging py3-urllib3 (1.26.20-r0)
Executing busybox-1.37.0-r19.trigger
OK: 296 MiB in 83 packages
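A possible way forward, not what this APKBUILD does today: either extend the tests patch fetched above so that test_help tolerates the pytest 8.x help wrapping, or deselect the one failing test in check() using the --deselect option documented in the captured help output. A minimal sketch of the latter, building on the hypothetical check() shown earlier:

    # Hypothetical: run the suite but skip only the failing help-text test
    .testenv/bin/python3 -m pytest \
        --deselect tests/test_benchmark.py::test_help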