HTTPRunner 3 User Manual (4): Running Testcases

Author: 猫与测试 | Published: 2020-07-23 10:58

Once your testcases are ready, you can run them with the hrun command.

Note that hrun is a command alias of httprunner run; the two have exactly the same effect:

hrun = httprunner run

Note: in the latest release (3.1.0), testcase file names must not start with test; otherwise both hrun and httprunner run will report a file format error.
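
For instance, a file whose name begins with test would be rejected (illustrative file name):

$ hrun test_login.yml   # reports a file format error on 3.1.0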

HttpRunner can run testcases in several ways.

Run testcases by specifying file paths

You can run a single testcase by specifying its file path:

$ hrun path/to/testcase1.py

You can also run multiple testcases by specifying several file paths at once:

$ hrun path/to/testcase1.py path/to/testcase2.py

To run all the testcases of a project, specify the project folder path:

$ hrun path/to/testcase_folder/

Run YAML/JSON testcases

Testcases in YAML/JSON format can also be run with the hrun command.

hrun first converts the YAML/JSON testcases into pytest-style testcases, and then runs the generated files with the pytest command.

In other words, when running YAML/JSON testcases, hrun effectively executes two commands:

make + pytest
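
For example, running a YAML testcase with hrun is roughly equivalent to the following two steps (the paths are illustrative; httprunner make, also aliased as hmake, is the conversion subcommand):

$ hrun path/to/example.yml
# roughly equivalent to:
$ httprunner make path/to/example.yml
$ pytest path/to/example_test.py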

In most cases, the generated pytest file is placed in the same folder as the original YAML/JSON file, with the same file name except that a _test suffix is appended and the .yml/.yaml/.json extension is replaced with .py:

/path/to/example.yml => /path/to/example_test.py

However, if a testcase's folder name or file name contains characters such as dots, dashes, or spaces, these characters are replaced with underscores to avoid syntax errors when the generated Python modules are imported (e.g. when one testcase references another).

In addition, a file or folder name that starts with a digit gets a T prefix, because Python module and class names cannot begin with a digit:

/path 1/a.b-2/3.yml => path_1/a_b_2/T3_test.py
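
The following minimal Python sketch illustrates the naming rules just described; it is an approximation for clarity, not HttpRunner's actual implementation (the function name pytest_path_for is invented):

import re
from pathlib import PurePosixPath

def pytest_path_for(yaml_path: str) -> str:
    def sanitize(name: str) -> str:
        name = re.sub(r"[.\- ]", "_", name)  # dots/dashes/spaces -> underscores
        if name and name[0].isdigit():       # module names cannot start with
            name = "T" + name                # a digit, so prefix "T"
        return name

    p = PurePosixPath(yaml_path)
    dirs = [sanitize(part) for part in p.parts[:-1] if part != "/"]
    stem = sanitize(p.stem)  # stem drops the .yml/.yaml/.json extension
    return "/".join(dirs + [stem + "_test.py"])

print(pytest_path_for("/path 1/a.b-2/3.yml"))  # path_1/a_b_2/T3_test.py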

Run pytest-style testcases

If your testcases are written in pytest format, or if you want to run pytest files converted from YAML/JSON testcases, both hrun and the pytest command work.

Keep in mind that hrun is just a wrapper around pytest, so all pytest arguments can be used with hrun.
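
For example, pytest's filtering, capturing, and reporting flags (all listed in the help text below) can be passed straight to hrun; the path and keyword expression here are illustrative:

$ hrun -s -k "login" --html=report.html path/to/testcase_folder/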

$ hrun -h
usage: hrun [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are
                        substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and
                        classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in
                        their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and
                        functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them.
                        The matching is case-insensitive.
  -m MARKEXPR           only run tests matching given mark expression. example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict-markers, --strict
                        markers not registered in the `markers` section of the configuration file raise errors.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path:
                        '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --import-mode={prepend,append}
                        prepend/append to sys.path when importing test modules, default is to prepend.
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test next time
  --stepwise-skip       ignore the first failing test but stop on the next failing test
  --allure-severities=SEVERITIES_SET
                        Comma-separated list of severity names. Tests only with these severities will be run. Possible values are: blocker, critical, normal,
                        minor, trivial.
  --allure-epics=EPICS_SET
                        Comma-separated list of epic names. Run tests that have at least one of the specified feature labels.
  --allure-features=FEATURES_SET
                        Comma-separated list of feature names. Run tests that have at least one of the specified feature labels.
  --allure-stories=STORIES_SET
                        Comma-separated list of story names. Run tests that have at least one of the specified story labels.
  --allure-link-pattern=LINK_TYPE:LINK_PATTERN
                        Url pattern for link type. Allows short links in test, like 'issue-1'. Text will be formatted to full url with python str.format().

reporting:
  --durations=N         show N slowest setup/test durations (N=0 for all).
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity. Default is 0.
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output,
                        (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list.
                        (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.
  --html=path           create html report file at given path.
  --self-contained-html
                        create a self-contained html file containing all necessary styles, scripts, and images - this means that the report may not render or
                        function where CSP restrictions are in place (see https://developer.mozilla.org/docs/Web/Security/CSP)
  --css=path            append given css file content to report style file.

collection:
  --collect-only, --co  only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --ignore-glob=path    ignore path pattern during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item (via node id prefix) during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run.(warning: this directory is removed if it exists)
  -V, --version         display pytest version and information about plugins.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test
                        modules on import to provide assert expression information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python itself.

logging:
  --no-print-logs       disable printing caught logs on failed tests.
  --log-level=LEVEL     level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is
                        "WARNING" by default.
  --log-format=LOG_FORMAT
                        log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT
                        log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL
                        cli logging level.
  --log-cli-format=LOG_CLI_FORMAT
                        log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL
                        log file logging level.
  --log-file-format=LOG_FILE_FORMAT
                        log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date format as used by the logging module.
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.

reporting:
  --alluredir=DIR       Generate Allure report in the specified directory (may not exist)
  --clean-alluredir     Clean alluredir folder if it exists
  --allure-no-capture   Do not attach pytest captured logging/stdout/stderr to report

custom options:
  --metadata=key value  additional metadata.
  --metadata-from-json=METADATA_FROM_JSON
                        additional metadata from a json string.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist):   markers for test functions
  empty_parameter_set_mark (string):
                        default marker for empty parametersets
  norecursedirs (args): directory patterns to avoid for recursion
  testpaths (args):     directories to search for tests when no files or directories are given in the command line.
  usefixtures (args):   list of default fixtures to be used with this project
  python_files (args):  glob-style file patterns for Python test module discovery
  python_classes (args):
                        prefixes or glob names for Python test class discovery
  python_functions (args):
                        prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        disable string escape non-ascii characters, might cause unwanted side effects(use at your own risk)
  console_output_style (string):
                        console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool):  default for the strict parameter of xfail markers when not given explicitly (default: False)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook.Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report:
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        option flags for doctests
  doctest_encoding (string):
                        encoding used for doctest files
  cache_dir (string):   cache directory path.
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  log_print (bool):     default value for --no-print-logs
  log_level (string):   default value for --log-level
  log_format (string):  default value for --log-format
  log_date_format (string):
                        default value for --log-date-format
  log_cli (bool):       enable log display during test run (also known as "live logging").
  log_cli_level (string):
                        default value for --log-cli-level
  log_cli_format (string):
                        default value for --log-cli-format
  log_cli_date_format (string):
                        default value for --log-cli-date-format
  log_file (string):    default value for --log-file
  log_file_level (string):
                        default value for --log-file-level
  log_file_format (string):
                        default value for --log-file-format
  log_file_date_format (string):
                        default value for --log-file-date-format
  log_auto_indent (string):
                        default value for --log-auto-indent
  faulthandler_timeout (string):
                        Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish. Not available on Windows.
  addopts (args):       extra command line options
  minversion (string):  minimally required pytest version
  render_collapsed (bool):
                        Open the report with all rows collapsed. Useful for very large reports

environment variables:
  PYTEST_ADDOPTS           extra command line options
  PYTEST_PLUGINS           comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
  PYTEST_DEBUG             set to enable debug tracing of pytest's internals


to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option

Execution logs

By default, hrun does not print detailed request and response data.

If you want to see the request/response data and the details of extraction and validation, add the -s argument (a shortcut for --capture=no):

hrun -s examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:42:54.369 | INFO     | httprunner.make:make_testcase:317 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:42:54.369 | INFO     | httprunner.make:make_testcase:390 - generated testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
2020-06-17 15:42:54.370 | INFO     | httprunner.make:format_pytest_with_black:154 - format pytest cases with black ...
reformatted /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
All done! ✨ 🍰 ✨
1 file reformatted, 1 file left unchanged.
2020-06-17 15:42:54.699 | INFO     | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.13
=========================== test session starts ===========================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item                                                                                                                                                

examples/postman_echo/request_methods/request_with_functions_test.py 2020-06-17 15:42:55.017 | INFO     | httprunner.runner:test_start:435 - Start to run testcase: request methods testcase with functions, TestCase ID: cc404c49-000f-485c-b4c1-ac3367a053fe
2020-06-17 15:42:55.018 | INFO     | httprunner.runner:__run_step:278 - run step begin: get with params >>>>>>
2020-06-17 15:42:56.326 | DEBUG    | httprunner.client:log_print:40 - 
================== request details ==================
method   : GET
url      : https://postman-echo.com/get?foo1=bar11&foo2=bar21&sum_v=3
headers  : {
    "User-Agent": "HttpRunner/3.0.13",
    "Accept-Encoding": "gzip, deflate",
    "Accept": "*/*",
    "Connection": "keep-alive",
    "HRUN-Request-ID": "HRUN-cc404c49-000f-485c-b4c1-ac3367a053fe-775018",
    "Content-Length": "2",
    "Content-Type": "application/json"
}
cookies  : {}
body     : {}

2020-06-17 15:42:56.327 | DEBUG    | httprunner.client:log_print:40 - 
================== response details ==================
status_code : 200
headers  : {
    "Date": "Wed, 17 Jun 2020 07:42:56 GMT",
    "Content-Type": "application/json; charset=utf-8",
    "Content-Length": "477",
    "Connection": "keep-alive",
    "ETag": "W/\"1dd-2JtBYPcnh8D6fqLz8KFn16Oq1R0\"",
    "Vary": "Accept-Encoding",
    "set-cookie": "sails.sid=s%3A6J_EtUk3nkL_C2xtx-NtAXrlA5wPxEgk.gIO2yBbtvGWIIgQ%2F2mZhMkU669G3F60cvLAPWbwyoGM; Path=/; HttpOnly"
}
cookies  : {
    "sails.sid": "s%3A6J_EtUk3nkL_C2xtx-NtAXrlA5wPxEgk.gIO2yBbtvGWIIgQ%2F2mZhMkU669G3F60cvLAPWbwyoGM"
}
encoding : utf-8
content_type : application/json; charset=utf-8
body     : {
    "args": {
        "foo1": "bar11",
        "foo2": "bar21",
        "sum_v": "3"
    },
    "headers": {
        "x-forwarded-proto": "https",
        "x-forwarded-port": "443",
        "host": "postman-echo.com",
        "x-amzn-trace-id": "Root=1-5ee9c980-d8e98cc72a26ef24f5819ce3",
        "content-length": "2",
        "user-agent": "HttpRunner/3.0.13",
        "accept-encoding": "gzip, deflate",
        "accept": "*/*",
        "hrun-request-id": "HRUN-cc404c49-000f-485c-b4c1-ac3367a053fe-775018",
        "content-type": "application/json"
    },
    "url": "https://postman-echo.com/get?foo1=bar11&foo2=bar21&sum_v=3"
}

2020-06-17 15:42:56.328 | INFO     | httprunner.client:request:203 - status_code: 200, response_time(ms): 1307.33 ms, response_length: 477 bytes
2020-06-17 15:42:56.328 | INFO     | httprunner.response:extract:152 - extract mapping: {'foo3': 'bar21'}
2020-06-17 15:42:56.328 | INFO     | httprunner.response:validate:209 - assert status_code equal 200(int)       ==> pass
2020-06-17 15:42:56.329 | INFO     | httprunner.response:validate:209 - assert body.args.foo1 equal bar11(str)  ==> pass
2020-06-17 15:42:56.329 | INFO     | httprunner.response:validate:209 - assert body.args.sum_v equal 3(str)     ==> pass
2020-06-17 15:42:56.329 | INFO     | httprunner.response:validate:209 - assert body.args.foo2 equal bar21(str)  ==> pass
2020-06-17 15:42:56.330 | INFO     | httprunner.runner:__run_step:290 - run step end: get with params <<<<<<

<Omit>

2020-06-17 15:42:57.019 | INFO     | httprunner.runner:test_start:444 - generate testcase log: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/logs/cc404c49-000f-485c-b4c1-ac3367a053fe.run.log
.

=============================== 1 passed in 2.13s ===============================

In addition, an execution log file is generated for each testcase, located at <Project RootDir>/logs/<TestCase ID>.run.log.

TestCase ID and Request ID

To make problems easier to locate when errors occur, each testcase run is assigned a unique ID (uuid4), and an HRUN-Request-ID header carrying this testcase ID is automatically added to every request:

HRUN-Request-ID = "HRUN-<TestCase ID>-<timestamp_six_digits>"
timestamp_six_digits = str(int(time.time() * 1000))[-6:]

In other words, all requests within a single testcase share the same HRUN-Request-ID prefix, while each individual request has a unique HRUN-Request-ID suffix.
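
Below is a minimal Python sketch of how such a header could be built from the formula above; it is illustrative only (the helper name hrun_request_id is invented), not HttpRunner's actual implementation:

import time
import uuid

# One testcase ID (uuid4), shared by every request in the testcase.
testcase_id = str(uuid.uuid4())

def hrun_request_id() -> str:
    # Per-request suffix: the last six digits of the current
    # millisecond timestamp, per the formula above.
    suffix = str(int(time.time() * 1000))[-6:]
    return f"HRUN-{testcase_id}-{suffix}"

# The header attached to each outgoing request:
headers = {"HRUN-Request-ID": hrun_request_id()}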
