mirror of https://github.com/astral-sh/ruff
6 Commits
`9f3a38d408` Extract `LineIndex` independent methods from `Locator` (#13938)
`8f40928534` Enable token-based rules on source with syntax errors (#11950)

## Summary

This PR updates the linter, specifically the token-based rules, to work on the tokens that come after a syntax error. For context, the token-based rules previously only diagnosed tokens up to the first lexical error. This PR builds in error resilience by introducing a `TokenIterWithContext`, which updates its `nesting` level to mirror what the lexer is seeing. This isn't 100% accurate: if the parser recovered from an unclosed parenthesis in the middle of a line, the context won't reduce the nesting level until it sees the newline token at the end of the line.

resolves: #11915

## Test Plan

* Add test cases for a bunch of rules that are affected by this change.
* Run the fuzzer for a long time, fixing any other bugs that surface.
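A minimal sketch of the idea behind such a context-tracking iterator. This is illustrative only, not Ruff's actual implementation: the token kinds, fields, and resynchronization rule here are simplified assumptions.

```rust
/// Hypothetical, simplified token kinds for illustration.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Lpar,
    Rpar,
    Newline,
    NonLogicalNewline,
    Name,
}

/// A token iterator that tracks parenthesis nesting so downstream
/// rules can keep consuming tokens after a recoverable syntax error.
struct TokenIterWithContext<'a> {
    tokens: std::slice::Iter<'a, TokenKind>,
    nesting: u32,
}

impl<'a> TokenIterWithContext<'a> {
    fn new(tokens: &'a [TokenKind]) -> Self {
        Self { tokens: tokens.iter(), nesting: 0 }
    }
}

impl Iterator for TokenIterWithContext<'_> {
    type Item = TokenKind;

    fn next(&mut self) -> Option<Self::Item> {
        let token = *self.tokens.next()?;
        match token {
            TokenKind::Lpar => self.nesting += 1,
            TokenKind::Rpar => self.nesting = self.nesting.saturating_sub(1),
            // If the parser recovered from an unclosed parenthesis, the
            // closing token never arrives; a logical newline while
            // nesting > 0 is the signal to resynchronize the context.
            TokenKind::Newline if self.nesting > 0 => self.nesting = 0,
            _ => {}
        }
        Some(token)
    }
}
```

This captures the inaccuracy described above: between the unclosed `(` and the end-of-line `Newline`, the sketch's `nesting` stays elevated even though the parser has already recovered.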
`bf5b62edac` Maintain synchronicity between the lexer and the parser (#11457)

## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer acted as an iterator: the parser would collect all the tokens into a vector first and then process them to create the syntax tree. The first task in this project is to update the entire parsing flow to make the lexer lazy. This includes the `Lexer`, the `TokenSource`, and the `Parser`. For context, the `TokenSource` is a wrapper around the `Lexer` that filters out trivia tokens[^1]. Now the parser asks the token source for the next token, and only then does the lexer continue and emit it. This means the lexer needs to be aware of the "current" token: when `next_token` is called, the current token is updated with the newly lexed one.

The main motivation for making the lexer lazy is to allow re-lexing a token in a different context, which will be really useful for making the parser error resilient. For example, currently the emitted tokens remain the same even if the parser recovers from an unclosed parenthesis. This is important because the lexer emits a `NonLogicalNewline` in a parenthesized context but a normal `Newline` in a non-parenthesized context, and these different kinds of newline are also used to emit the indentation tokens, which the parser uses to determine the start and end of a block.

Additionally, this allows us to implement the following functionality:

1. Checkpoint / rewind infrastructure: the idea here is to create a checkpoint and continue lexing; at a later point, the checkpoint can be used to rewind the lexer back to that position.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or speculative parsing to determine whether a soft keyword is a keyword or an identifier.
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted by the lexer, but it contains owned data, which makes it expensive to clone. The new `TokenKind` enum represents only the type of a token, which is very cheap. This raises the question of how the parser will get the owned value that was stored on `Tok`. This is solved by introducing a new `TokenValue` enum containing only the subset of token kinds that carry an owned value; it is stored on the lexer and requested by the parser when it wants to process the data. For example:
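The original message's example is truncated here. As a stand-in, here is a hedged sketch of how such a `TokenKind`/`TokenValue` split might look; the variants, fields, and `take_value` method are assumptions for illustration, not Ruff's actual definitions.

```rust
/// Cheap, Copy-able token type with no owned data.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Name,
    Int,
    String,
    Newline,
}

/// Owned payloads for the subset of tokens that carry data.
#[derive(Debug, Clone, Default)]
enum TokenValue {
    #[default]
    None,
    Name(Box<str>),
    Int(i64),
    String(Box<str>),
}

struct Lexer {
    current_kind: TokenKind,
    // The lexer holds the owned value of the current token; the parser
    // requests it only when it actually needs the data.
    current_value: TokenValue,
}

impl Lexer {
    /// Hand the owned value of the current token to the parser,
    /// leaving a cheap placeholder behind instead of cloning.
    fn take_value(&mut self) -> TokenValue {
        std::mem::take(&mut self.current_value)
    }
}
```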
`a33763170e` Use `TokenKind` in `doc_lines_from_tokens` (#11418)

## Summary

This PR updates the `doc_lines_from_tokens` function to use `TokenKind` instead of `Tok`. This is part of #11401.

## Test Plan

`cargo test`
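To illustrate the shape of such a function, here is a hedged sketch of collecting "doc lines" (own-line comments) by matching on a cheap `TokenKind`; the token kinds and the `(kind, offset)` representation are simplifications, not Ruff's actual API.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Comment,
    Newline,
    NonLogicalNewline,
    Name,
}

/// Collect the offsets of comments that sit on their own line.
/// Matching on the Copy `TokenKind` avoids cloning owned token data.
fn doc_line_offsets(tokens: &[(TokenKind, usize)]) -> Vec<usize> {
    let mut offsets = Vec::new();
    let mut at_line_start = true;
    for &(kind, offset) in tokens {
        match kind {
            TokenKind::Comment if at_line_start => offsets.push(offset),
            TokenKind::Newline | TokenKind::NonLogicalNewline => at_line_start = true,
            _ => at_line_start = false,
        }
    }
    offsets
}
```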
`230c9ce236` Split `Constant` to individual literal nodes (#8064)

## Summary

This PR splits the `Constant` enum into individual literal nodes. It introduces the following new nodes, one for each variant:

* `ExprStringLiteral`
* `ExprBytesLiteral`
* `ExprNumberLiteral`
* `ExprBooleanLiteral`
* `ExprNoneLiteral`
* `ExprEllipsisLiteral`

The main motivation behind this refactor is to introduce a new AST node for implicit string concatenation in a coming PR. The elements of that node will be either a string literal, a bytes literal, or an f-string, which can be implemented using an enum. This means a string or bytes literal could no longer be represented by `Constant::Str` / `Constant::Bytes`, which would create an inconsistency. This PR avoids that inconsistency by splitting the constant nodes into their own literal nodes, "literal" being the more appropriate naming convention from a static analysis tool's perspective.

This also makes working with literals in the linter and formatter much more ergonomic. For example, checking whether an expression is a string literal can be done easily using `Expr::is_string_literal_expr` or by matching against `Expr::StringLiteral`, as opposed to matching against `ExprConstant` and the `Constant` enum. A few AST helper methods can be simplified as well, which will be done in a follow-up PR. A sketch of the new shape follows the test plan below.

This introduces a new `Expr::is_literal_expr` method, which is the same as `Expr::is_constant_expr`. There are also some small intermediary changes related to implicit string concatenation; these are kept minimal to avoid making this already-large PR even bigger.

## Test Plan

1. Verify and update all of the existing snapshots (parser, visitor).
2. Verify that the ecosystem check output remains **unchanged** for both the linter and the formatter.

### Formatter ecosystem check

#### `main`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |

#### `dhruv/constant-to-literal`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
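The sketch referenced above: a heavily simplified illustration of the split's shape, assuming placeholder fields. Ruff's real AST nodes carry source ranges, richer value types, and more variants.

```rust
/// Per-literal nodes replacing the old nested `Constant` enum.
/// Fields are illustrative placeholders.
struct ExprStringLiteral {
    value: String,
}

struct ExprNumberLiteral {
    value: f64,
}

struct ExprNoneLiteral;

enum Expr {
    StringLiteral(ExprStringLiteral),
    NumberLiteral(ExprNumberLiteral),
    NoneLiteral(ExprNoneLiteral),
    // ...other expression kinds...
}

impl Expr {
    /// Checking a literal kind no longer requires matching an
    /// `ExprConstant` wrapper and then a `Constant` variant.
    fn is_string_literal_expr(&self) -> bool {
        matches!(self, Expr::StringLiteral(_))
    }
}
```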
`5849a75223` Rename `ruff` crate to `ruff_linter` (#7529)