pytest-tldr

A pytest plugin that limits pytest's output to just the things you need to see.
One of my biggest personal complaints about pytest is that its console output is very, very chatty. It tells you it's started. It tells you it's working. It tells you it's finished. And if a test fails, it doesn't just tell you which test failed; it dumps pages and pages of code to your console.
And it does all this in glorious technicolor. Which is great, as long as you have perfect color vision and your console's color scheme is contrast compatible.
Yes, pytest has many, many command-line options, and some of this behavior can be configured or turned off with feature flags. But there are some people (the pytest core team, at least, one assumes) who like pytest's output format. So if you're the only person on your team who doesn't like pytest's output, and you can't commit the "better" options into the default configuration, you have to specify those options manually every time you run your test suite.
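For context, "committing options into a default configuration" means something like the following pytest.ini. The specific flags shown are just illustrative examples of real pytest options, not a recommendation:

```ini
# pytest.ini -- project-wide defaults, applied to every pytest run
[pytest]
addopts = -q --color=no --tb=short
```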
Luckily, pytest also has a plugin system, so we can fix this.
pytest-tldr is a plugin that provides minimalist, monochrome output, while still giving an indication of test suite progress.
Installation
You can install "pytest-tldr" via pip from PyPI:
$ pip install pytest-tldr
and then run your test suite as normal:
$ pytest tests
EF..s..........ux
======================================================================
ERROR: tests/test_things.py::TestTests::test_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 182, in test_error
    raise Exception("this is really bad")
Exception: this is really bad
======================================================================
FAIL: tests/test_things.py::TestTests::test_failed
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 179, in test_failed
    self.fail('failed!')
  File "/Users/rkm/.pyenv/versions/3.6.2/lib/python3.6/unittest/case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: failed!
======================================================================
UNEXPECTED SUCCESS: tests/test_things.py::TestTests::test_upassed
----------------------------------------------------------------------
Ran 17 tests in 2.11s

FAILED (errors=1, failures=1, skipped=1, expected failures=1, unexpected successes=1)
Or, if you want more detail, use the verbose option:
$ pytest tests -v
platform darwin -- Python 3.6.2 pytest==3.6.1 py==1.5.2 pluggy==0.6.0
rootdir: /Users/rkm/projects/sample
plugins: xdist-1.22.0, forked-0.2, tldr-0.1.0
cachedir: .pytest_cache
----------------------------------------------------------------------
tests/test_things.py::TestTests::test_error ... ERROR
tests/test_things.py::TestTests::test_failed ... FAIL
tests/test_things.py::TestTests::test_output ... ok
tests/test_things.py::TestTests::test_passed ... ok
tests/test_things.py::TestTests::test_skipped ... Skipped: tra-la-la
tests/test_things.py::TestTests::test_thing_0 ... ok
tests/test_things.py::TestTests::test_thing_1 ... ok
tests/test_things.py::TestTests::test_thing_2 ... ok
tests/test_things.py::TestTests::test_thing_3 ... ok
tests/test_things.py::TestTests::test_thing_4 ... ok
tests/test_things.py::TestTests::test_thing_5 ... ok
tests/test_things.py::TestTests::test_thing_6 ... ok
tests/test_things.py::TestTests::test_thing_7 ... ok
tests/test_things.py::TestTests::test_thing_8 ... ok
tests/test_things.py::TestTests::test_thing_9 ... ok
tests/test_things.py::TestTests::test_upassed ... unexpected success
tests/test_things.py::TestTests::test_xfailed ... expected failure
======================================================================
ERROR: tests/test_things.py::TestTests::test_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 182, in test_error
    raise Exception("this is really bad")
Exception: this is really bad
======================================================================
FAIL: tests/test_things.py::TestTests::test_failed
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 179, in test_failed
    self.fail('failed!')
  File "/Users/rkm/.pyenv/versions/3.6.2/lib/python3.6/unittest/case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: failed!
======================================================================
UNEXPECTED SUCCESS: tests/test_things.py::TestTests::test_upassed
----------------------------------------------------------------------
Ran 17 tests in 2.07s

FAILED (errors=1, failures=1, skipped=1, expected failures=1, unexpected successes=1)