mirror of https://github.com/astral-sh/ruff
## Summary

Enable using the new `Mode::Jupyter` for the tokenizer/parser to parse Jupyter line magic tokens. The individual calls to the lexer, i.e., `lex_starts_at`, made by various rules should take the context of the source code into account (is this content from a Jupyter Notebook?). Thus, a new field `source_type` (of type `PySourceType`) is added to `Checker`, which is passed around as an argument to the relevant functions and then used to determine the `Mode` for the lexer (see the sketch below).

## Test Plan

Add new test cases to make sure that the magic statement is considered while generating the diagnostic and autofix:

* For `I001`, if there's a magic statement in between two import blocks, they should be sorted independently.

fixes: #6090
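Conceptually, the change boils down to selecting the lexer mode from the source type instead of hard-coding `Mode::Module`. The following is a minimal, self-contained sketch of that mapping; the `PySourceType` and `Mode` enums here are simplified stand-ins for the real types in the ruff crates, whose exact variants and signatures may differ.

```rust
/// Stand-in for ruff's `PySourceType` (simplified for illustration).
#[derive(Clone, Copy, Debug)]
enum PySourceType {
    Python,
    Ipynb,
}

/// Stand-in for the parser's `Mode` (simplified for illustration).
#[derive(Clone, Copy, Debug)]
enum Mode {
    Module,
    Jupyter,
}

/// Pick the lexer mode based on where the source came from, so that
/// Jupyter line magics (e.g. `%matplotlib inline`) are tokenized
/// instead of being rejected as syntax errors.
fn lexer_mode(source_type: PySourceType) -> Mode {
    match source_type {
        PySourceType::Ipynb => Mode::Jupyter,
        PySourceType::Python => Mode::Module,
    }
}

fn main() {
    // Rules that re-lex the source (via `lex_starts_at`) would pass this
    // mode along rather than assuming `Mode::Module`.
    println!("{:?}", lexer_mode(PySourceType::Ipynb)); // Jupyter
    println!("{:?}", lexer_mode(PySourceType::Python)); // Module
}
```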