mirror of https://github.com/astral-sh/ruff
## Summary

This PR updates the linter benchmark to use the `tokenize` function instead of the lexer. The linter expects the token list to run up to and including the first error, which is exactly what `ruff_python_parser::tokenize` returns. This was not a problem before because the benchmarks only used valid Python code.
Benchmark sources:

- `formatter.rs`
- `lexer.rs`
- `linter.rs`
- `parser.rs`
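The behavior the summary relies on, lexing up to and including the first error, can be illustrated with a minimal self-contained sketch. This is not the actual ruff API; the `Tok` type and `tokenize` helper below are hypothetical stand-ins that only mimic the described contract (tokens stop at, but include, the first error):

```rust
// Illustrative sketch only -- not ruff's real lexer or token types.
#[derive(Debug, PartialEq)]
enum Tok {
    Word(String),
    // The first unrecognized character, kept in the output so the
    // consumer (here, a stand-in for the linter) can see where
    // lexing stopped.
    Error(char),
}

// Lex whitespace-separated "tokens", stopping at the first error
// but including that error in the returned list.
fn tokenize(source: &str) -> Vec<Tok> {
    let mut tokens = Vec::new();
    for part in source.split_whitespace() {
        let bad = part.chars().find(|c| !c.is_alphanumeric() && *c != '_');
        match bad {
            None => tokens.push(Tok::Word(part.to_string())),
            Some(c) => {
                // Include the error token, then stop lexing.
                tokens.push(Tok::Error(c));
                break;
            }
        }
    }
    tokens
}

fn main() {
    let tokens = tokenize("import os $ unused");
    // "import" and "os" lex cleanly; "$" is the first error, and
    // "unused" after it is never reached.
    assert_eq!(tokens.len(), 3);
    assert!(matches!(tokens.last(), Some(Tok::Error('$'))));
    println!("{tokens:?}");
}
```

A lexer driven one token at a time would instead hand the consumer tokens past the error point, which is the mismatch the benchmark change avoids.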