ruff/crates/ruff_benchmark/benches
Dhruv Manilawala 50f14d017e
Use `tokenize` for linter benchmark (#11417)
## Summary

This PR updates the linter benchmark to use the `tokenize` function
instead of the lexer.

The linter expects the token list to run up to and including the first
error, which is exactly what `ruff_python_parser::tokenize` returns.

This was not a problem before because the benchmarks only use valid
Python code.
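The contract described above — a token stream truncated at, but including, the first error — can be illustrated with a minimal sketch. This is a conceptual illustration only, not ruff's actual API: the helper name `tokens_until_error` and the `(token, is_error)` pair representation are invented for this example.

```python
def tokens_until_error(results):
    """Collect tokens up to and including the first error.

    `results` is an iterable of (token, is_error) pairs standing in for
    a lexer's output; the real lexer would keep emitting tokens past an
    error, whereas this mirrors the truncating behavior the linter expects.
    """
    out = []
    for tok, is_error in results:
        out.append(tok)
        if is_error:
            break  # include the error token, then stop
    return out


# The raw lexer stream continues past the bad token; the truncated
# stream stops right after it.
stream = [("def", False), ("f", False), ("$", True), ("x", False)]
print(tokens_until_error(stream))  # ['def', 'f', '$']
```

Benchmarking against the truncating function rather than the raw lexer keeps the measured workload identical to what the linter actually consumes.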
2024-05-14 10:28:40 -04:00
formatter.rs Approximate tokens len (#9546) 2024-01-19 17:39:37 +01:00
lexer.rs Add lexer benchmark (#7132) 2023-09-04 13:18:36 +00:00
linter.rs Use `tokenize` for linter benchmark (#11417) 2024-05-14 10:28:40 -04:00
parser.rs Remove source path from parser errors (#9322) 2023-12-30 20:33:05 +00:00