
Merge pull request from automaticp/parser-tests-with-pytest

Parser tests: replace shell scripts with pytest scripts
Christofer Nolander 2023-11-02 16:27:34 +01:00 committed by GitHub
commit f5694d1d37
5 changed files with 58 additions and 95 deletions

justfile

@@ -19,7 +19,7 @@ watch *ARGS:
 test:
     zig build test --summary failures {{flags}}
     just install
-    tests/run-all-tests.sh
+    pytest tests/
 
 test-file *ARGS:
     zig test {{flags}} "$@"
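For reference, a minimal invocation of the updated recipe (assuming `just`, `zig`, and the Python test requirements are installed):

```sh
# Build and run the zig unit tests, install, then run the whole pytest suite.
$ just test
```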

tests/README.md

@@ -1,7 +1,7 @@
 # `glsl_analyzer/tests`
 
+- [How do I run the tests?](#how-do-i-run-the-tests)
 - [LSP tests](#lsp-tests)
-  - [How do I run the tests?](#how-do-i-run-the-tests)
   - [How do I debug failing tests?](#how-do-i-debug-failing-tests)
   - [How do I add more files for testing / write new test cases?](#how-do-i-add-more-files-for-testing--write-new-test-cases)
   - [How tests and test input are structured](#how-tests-and-test-input-are-structured)
@@ -13,10 +13,8 @@
   - [Expecting failure](#expecting-failure)
 - [Parser tests](#parser-tests)
 
-## LSP tests
-
-### How do I run the tests?
+## How do I run the tests?
 
 **Requirements:** `python 3.10` and `requirements.txt`.
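Assuming the standard pip workflow (the README only names the file), the dependencies can be installed from the `tests/` directory with:

```sh
# Install the Python test dependencies listed in requirements.txt.
$ pip install -r requirements.txt
```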
@@ -34,9 +32,18 @@ $ cd tests/ && pytest
 `pytest` will automatically pick up tests from modules with names like `test_*.py` that contain functions `test_*` and classes `Test*`. More patterns are available; see the `pytest` docs.
 
+If you want to run individual test files, just specify them as arguments:
+
+```sh
+$ pytest tests/test_lsp.py
+```
+
+*Caution: do not run `pytest` with the `--runxfail` flag on these tests. We do not use `xfail` in its conventional form in our test suite; using this flag will make the tests fail in unpredictable ways.*
+
+## LSP tests
+
 ### How do I debug failing tests?
 
 The current test routines are set up to output the glsl file location that was targeted in the test:
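To make the collection rules and the `xfail` caution above concrete, here is a minimal, hypothetical test module; none of these names exist in the actual suite:

```python
# tests/test_example.py -- hypothetical; the filename matches test_*.py.
import pytest

def test_free_function():        # collected: name matches test_*
    assert 1 + 1 == 2

class TestGroup:                 # collected: class name matches Test*
    def test_method(self):
        assert "glsl" in "glsl_analyzer"

# Conventional xfail semantics: a failure here is reported as "xfailed",
# not as an error. Running pytest with --runxfail ignores the marker and
# surfaces the failure, which is why the README warns against that flag.
@pytest.mark.xfail(reason="illustration only")
def test_known_limitation():
    assert False
```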
@@ -259,17 +266,13 @@ We always expect failure in *strict mode*: if the test was expected to fail, but
 ## Parser tests
 
-Parser tests are simple bash scripts that run the parser with `glsl_analyzer --parse-file` on a set of files. Currently we only test successful parsing on well-formed code.
+Parser tests are simple pytest scripts that run the parser with `glsl_analyzer --parse-file` on a set of files. Currently we only test successful parsing on well-formed code and check that `glsl_analyzer` does not write anything to `stderr`.
 
 *The `glsl_analyzer` version you want to test must be available in your PATH.*
 
-You can run the tests by executing
+Run only the parser tests by executing
 
 ```sh
-$ ./run-all-tests.sh
+$ pytest test_parser.py
 ```
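Each per-file check is roughly equivalent to the following manual invocation (the sample path is hypothetical):

```sh
# Parse a single file by hand. The suite fails a file if this prints
# anything to stderr or exits with a nonzero status.
$ glsl_analyzer --parse-file glsl-samples/well-formed/glslang/sample.vert
```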
-You can add more directories for testing by adding them to the `well_formed_dirs` in `run-parser-tests.sh`.
-
-*This setup will most likely be rewritten in the near future.*
+You can add more directories for testing by adding them to the `dir_with_test_files` fixture `params` in `test_parser.py`.
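As a sketch, registering one hypothetical extra sample directory would look like this; `my-samples/` is made up, the rest mirrors `test_parser.py` shown below:

```python
@pytest.fixture(
    params=(
        "glsl-samples/well-formed/glslang/",
        "glsl-samples/well-formed/my-samples/",  # hypothetical extra directory
    )
)
def dir_with_test_files(request) -> Path:
    # pytest runs every test that uses this fixture once per param.
    return base_directory / Path(request.param)
```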

tests/run-all-tests.sh (deleted)

@@ -1,21 +0,0 @@
#!/usr/bin/env bash

testdir="${BASH_SOURCE%/*}"

well_formed_dirs="
$testdir/glsl-samples/well-formed/
$testdir/glsl-samples/well-formed/glslang
"

failed=0

for d in $well_formed_dirs; do
    echo "================================================"
    "$testdir/run-parser-tests-well-formed.sh" "$d" || failed=1
done

if [[ $failed != 0 ]]; then
    exit 1
fi

tests/run-parser-tests-well-formed.sh (deleted)

@@ -1,61 +0,0 @@
#!/usr/bin/env bash

print_usage() {
    echo "Usage: run-parser-tests.sh [directory]"
}

if [[ -z "$1" || -n "$2" ]]; then
    print_usage
    exit 1
fi

if [[ ! -d "$1" ]]; then
    echo "\"$1\" is not an existing directory"
    print_usage
    exit 1
fi

echo "Running parser tests in \"$1\" for glsl_analyzer $(glsl_analyzer --version)"

num_files=0
num_failed=0

for file in "$1"/*; do
    if [[ -f "$file" ]]; then
        # Filter out headers, shell scripts, etc.
        ext="${file##*.}"
        if [[
            $ext == "glsl" || $ext == "vert" || $ext == "frag" || $ext == "geom" ||
            $ext == "comp" || $ext == "tesc" || $ext == "tese" || $ext == "rgen" ||
            $ext == "rint" || $ext == "rahit" || $ext == "rchit" || $ext == "rmiss" ||
            $ext == "rcall" || $ext == "mesh" || $ext == "task"
        ]]; then
            # Capture stderr in $out while discarding stdout: `2>&1` points
            # stderr at the command-substitution pipe before `>/dev/null`
            # reroutes stdout, so $out receives only stderr.
            out=$(glsl_analyzer --parse-file "$file" 2>&1 >/dev/null)
            # If the output isn't empty, then the parser emitted an error.
            # We expect the test code to be well-formed and not produce errors,
            # so this is considered a test failure.
            if [[ -n "$out" ]]; then
                first_line=$(echo "$out" | head -1)
                let num_failed=num_failed+1
                echo "FAILED($(printf "%03d" "$num_failed")): $first_line"
            fi
            let num_files=num_files+1
        fi
    fi
done

echo
echo "FAILED $num_failed out of $num_files files."

if [[ $num_failed == 0 ]]; then
    echo "All tests passed."
else
    exit 1
fi

tests/test_parser.py (new file)

@@ -0,0 +1,42 @@
import subprocess
from pathlib import Path

import pytest
import pytest_subtests

base_directory = Path(__file__).parent.resolve()

# File extensions recognized as GLSL sources; anything else in the
# sample directories (headers, scripts, etc.) is skipped.
glsl_extensions = (
    ".glsl", ".vert", ".frag", ".geom",
    ".comp", ".tesc", ".tese", ".rgen",
    ".rint", ".rahit", ".rchit", ".rmiss",
    ".rcall", ".mesh", ".task"
)


@pytest.fixture(
    params=("glsl-samples/well-formed/glslang/",)
)
def dir_with_test_files(request) -> Path:
    return base_directory / Path(request.param)


def test_parser_in_directory(
    subtests: pytest_subtests.SubTests,
    dir_with_test_files: Path
):
    for file in dir_with_test_files.iterdir():
        if file.suffix not in glsl_extensions:
            continue
        # Each file is its own subtest, so one bad file does not
        # stop the remaining files from being checked.
        with subtests.test(msg=str(file)):
            output = subprocess.run(
                args=("glsl_analyzer", "--parse-file", str(file)),
                capture_output=True,
                text=True
            )
            # Well-formed input must parse cleanly: exit code 0, empty stderr.
            assert output.returncode == 0 and len(output.stderr) == 0, \
                output.stderr
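A minimal sketch of why `pytest-subtests` is used here instead of a bare loop of asserts; `files` and `parse_ok` are hypothetical stand-ins for the real fixture and check:

```python
# Plain asserts: the first failing file aborts the loop, so the
# remaining files are never checked.
def test_all_files_plain(files):
    for f in files:
        assert parse_ok(f)

# With pytest-subtests: each failure is recorded individually and the
# loop continues, so a single run reports every failing file.
def test_all_files_subtests(subtests, files):
    for f in files:
        with subtests.test(msg=str(f)):
            assert parse_ok(f)
```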