Commit Graph

Brent Westbrook cbc6863b8c
Fix panic when formatting comments in unary expressions (#21501)
## Summary

This is another attempt at https://github.com/astral-sh/ruff/pull/21410
that fixes https://github.com/astral-sh/ruff/issues/19226.

@MichaReiser helped me get something working in a very helpful pairing
session. I pushed one additional commit moving the comments back from
leading comments to trailing comments, which I think retains more of the
input formatting.

I was inspired by Dylan's PR (#21185) to make one of these tables:

<table>
                <thead>
                    <tr>
                    <th scope="col">Input</th>
                    <th scope="col">Main</th>
                    <th scope="col">PR</th>
                    </tr>
                </thead>
                <tbody>
<tr>
<td><pre lang="python">
if (
    not
    # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass
</pre></td>
<td><pre lang="python">
if (
    # comment
    not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass

</pre></td>
<td><pre lang="python">
if (
    not
    # comment
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass

</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
    # unary comment
    not
    # operand comment
    (
        # comment
        aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
        + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    )
):
    pass
</pre></td>
<td><pre lang="python">
if (
    # unary comment
    # operand comment
    not (
        # comment
        aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
        + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    )
):
    pass

</pre></td>
<td><pre lang="python">
if (
    # unary comment
    not
    # operand comment
    (
        # comment
        aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
        + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    )
):
    pass

</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
    not # comment
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass
</pre></td>
<td><pre lang="python">
if (  # comment
    not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass

</pre></td>
<td><pre lang="python">
if (
    not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa  # comment
    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
    pass

</pre></td>
</tr>
</tbody>
            </table>

Hopefully it helps, even though the snippets are much wider here.

The two main differences are (1) that we now retain own-line comments
between the unary operator and its operand instead of moving these to
leading comments on the operator itself, and (2) that we move
end-of-line comments between the operator and operand to dangling
end-of-line comments on the operand (the last example in the table).

## Test Plan

Existing tests, plus new ones based on the issue. As I noted below, I
also ran the output from main on the unary.py file back through this
branch to check that we don't reformat code from main. This made me feel
a bit better about not preview-gating the changes in this PR.

```shell
> git show main:crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/unary.py | ruff format - | ./target/debug/ruff format --diff -
> echo $?
0
```

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Takayuki Maeda <takoyaki0316@gmail.com>
2025-11-18 10:48:14 -05:00
Dylan 8156b45173
Avoid syntax error when formatting attribute expressions with outer parentheses, parenthesized value, and trailing comment on value (#20418)
Closes #19350 

This fixes a syntax error caused by formatting. However, the new tests reveal that there are some cases where formatting attributes with certain comments behaves strangely, both before and after this PR, so some more polish may be in order.

For example, without parentheses around the value, and both before and after this PR, we have:

```python
# unformatted
variable = (
    something # a comment
    .first_method("some string")
)

# formatted
variable = something.first_method("some string")  # a comment
```

which is probably not where the comment ought to go.
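
For reference, a hedged reconstruction of the input shape the title describes (outer parentheses, a parenthesized value, and a trailing comment on the value); the exact trigger is in #19350:

```python
# Hypothetical input of the problematic shape: formatting this kind of
# expression could previously produce invalid syntax.
variable = (
    (something)  # a comment
    .first_method("some string")
)
```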
2025-11-17 09:11:36 -06:00
Brent Westbrook 591e9bbccb
Remove parentheses around multiple exception types on Python 3.14+ (#20768)
Summary
--

This PR implements the black preview style from
https://github.com/psf/black/pull/4720. As of Python 3.14, you're
allowed to omit the parentheses around groups of exceptions, as long as
there's no `as` binding:

**3.13**

```pycon
Python 3.13.4 (main, Jun  4 2025, 17:37:06) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException: ...
...
  File "<python-input-1>", line 2
    except Exception, BaseException: ...
           ^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized
```

**3.14**

```pycon
Python 3.14.0rc2 (main, Sep  2 2025, 14:20:56) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except Exception, BaseException: ...
...
Ellipsis
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException as e: ...
...
  File "<python-input-2>", line 2
    except Exception, BaseException as e: ...
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized when using 'as'
```

I think this ended up being pretty straightforward, at least once Micha
showed me where to start :)

Test Plan
--

New tests

At first I thought we were deviating from black in how we handle
comments within the exception type tuple, but I think this applies to
how we format all tuples, not specifically to the new preview style.
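
A sketch of how the style change plays out (inferred from the description above, not copied from the snapshots): targeting Python 3.14+, the formatter drops the parentheses when there is no `as` binding and keeps them when there is one.

```python
try:
    ...
except ValueError, TypeError:  # was: except (ValueError, TypeError):
    ...

try:
    ...
except (ValueError, TypeError) as e:  # parentheses required with `as`
    ...
```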
2025-10-14 11:17:45 -04:00
Micha Reiser 7dfde3b929
Update Rust toolchain to 1.89 (#19807) 2025-08-07 18:21:50 +02:00
Ibraheem Ahmed c9dff5c7d5
[ty] AST garbage collection (#18482)
## Summary

Garbage collect ASTs once we are done checking a given file. Queries
with a cross-file dependency on the AST will reparse the file on demand.
This reduces ty's peak memory usage by ~20-30%.

The primary change of this PR is adding a `node_index` field to every
AST node, that is assigned by the parser. `ParsedModule` can use this to
create a flat index of AST nodes any time the file is parsed (or
reparsed). This allows `AstNodeRef` to simply index into the current
instance of the `ParsedModule`, instead of storing a pointer directly.

The indices are assigned somewhat hackily (using an atomic integer) by
the `parsed_module` query instead of by the parser directly. Assigning
the indices in source-order in the (recursive) parser turns out to be
difficult, and collecting the nodes during semantic indexing is
impossible as `SemanticIndex` does not hold onto a specific
`ParsedModuleRef`, which the pointers in the flat AST are tied to. This
means that we have to do an extra AST traversal to assign and collect
the nodes into a flat index, but the small performance impact (~3% on
cold runs) seems worth it for the memory savings.
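
A concept sketch in Python of the flat-index idea (ty's actual implementation is in Rust; the `node_index` attribute and helper below are illustrative): one extra traversal assigns each node an index, so a node ref can be a plain integer that resolves through whichever parse result is current.

```python
import ast

def build_node_index(tree: ast.AST) -> list[ast.AST]:
    # One extra traversal: assign deterministic indices and collect the
    # nodes into a flat list, mirroring the PR's `node_index` field.
    nodes = []
    for node in ast.walk(tree):
        node.node_index = len(nodes)
        nodes.append(node)
    return nodes

flat = build_node_index(ast.parse("x = 1 + 2"))
ref = flat[2].node_index  # a "ref" is just an index...
reparsed = build_node_index(ast.parse("x = 1 + 2"))
assert type(reparsed[ref]) is type(flat[2])  # ...valid across a reparse
```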

Part of https://github.com/astral-sh/ty/issues/214.
2025-06-13 08:40:11 -04:00
Dylan 9bbf4987e8
Implement template strings (#17851)
This PR implements template strings (t-strings) in the parser and
formatter for Ruff.

Minimal changes necessary to compile were made in other parts of the code (e.g. ty, the linter, etc.). These will be covered properly in follow-up PRs.
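
For context, a PEP 750 template string evaluates to a `Template` object rather than a `str`, keeping the static parts and the interpolations separate (Python 3.14+):

```python
name = "world"
template = t"Hello {name}!"  # string.templatelib.Template, not str
print(template.strings)                  # ('Hello ', '!')
print(template.interpolations[0].value)  # 'world'
```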
2025-05-30 15:00:56 -05:00
Micha Reiser 9ae698fe30
Switch to Rust 2024 edition (#18129) 2025-05-16 13:25:28 +02:00
Micha Reiser 196e4befba
Update MSRV to 1.85 and toolchain to 1.87 (#18126) 2025-05-16 09:19:55 +02:00
Micha Reiser fa628018b2
Use `#[expect(lint)]` over `#[allow(lint)]` where possible (#17822) 2025-05-03 21:20:31 +02:00
Micha Reiser 8a4158c5f8
Upgrade to Rust 1.86 and bump MSRV to 1.84 (#17171)

## Summary

I decided to disable the new
[`needless_continue`](https://rust-lang.github.io/rust-clippy/master/index.html#needless_continue)
rule because I often found the explicit `continue` more readable than an
empty block or than having to invert the condition of another branch.


## Test Plan

`cargo test`

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-04-03 15:59:44 +00:00
Carl Meyer dd6f6233bd
bump MSRV to 1.83 (#16294)
According to our new MSRV policy (see
https://github.com/astral-sh/ruff/issues/16370 ), bump our MSRV to 1.83
(N - 2), and autofix some new clippy lints.
2025-02-26 06:12:43 -08:00
Alex Waygood 25920fe489
Rename `ExprStringLiteral::as_unconcatenated_string()` to `ExprStringLiteral::as_single_part_string()` (#16253) 2025-02-19 16:06:57 +00:00
Micha Reiser 31180a84e4
Fix unstable formatting of trailing end-of-line comments of parenthesized attribute values (#16187) 2025-02-18 08:43:51 +01:00
Alex Waygood b6b1947010
Improve API exposed on `ExprStringLiteral` nodes (#16192)
## Summary

This PR makes the following changes:
- It adjusts various callsites to use the new
`ast::StringLiteral::contents_range()` method that was introduced in
https://github.com/astral-sh/ruff/pull/16183. This is less verbose and
more type-safe than using the `ast::str::raw_contents()` helper
function.
- It adds a new `ast::ExprStringLiteral::as_unconcatenated_literal()`
helper method, and adjusts various callsites to use it. This addresses
@MichaReiser's review comment at
https://github.com/astral-sh/ruff/pull/16183#discussion_r1957334365.
There is no functional change here, but it helps readability to make it
clearer that we're differentiating between implicitly concatenated
strings and unconcatenated strings at various points.
- It renames the `StringLiteralValue::flags()` method to
`StringLiteralFlags::first_literal_flags()`. If you're dealing with an
implicitly concatenated string `string_node`,
`string_node.value.flags().closer_len()` could give an incorrect result;
this renaming makes it clearer that the `StringLiteralFlags` instance
returned by the method is only guaranteed to give accurate information
for the first `StringLiteral` contained in the `ExprStringLiteral` node.
- It deletes the unused `BytesLiteralValue::flags()` method. This seems
prone to misuse in the same way as `StringLiteralValue::flags()`: if
it's an implicitly concatenated bytestring, the `BytesLiteralFlags`
instance returned by the method would only give accurate information for
the first `BytesLiteral` in the bytestring (see the sketch of implicit
concatenation below).
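
A quick sketch of the distinction (plain Python, nothing ruff-specific): adjacent literals are implicitly concatenated into one value, and each part can carry different quotes and prefixes, which is why flags taken from the first part alone can misrepresent the rest:

```python
# Three parts, three different quote/prefix combinations, one value:
s = "part one " 'part two ' r"part\three"
assert s == "part one part two part\\three"
```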

## Test Plan

`cargo test`
2025-02-17 07:58:54 +00:00
Douglas Creager ef85c682bd
Remove customizable reference enum names (#15647)
The AST generator creates a reference enum for each syntax group — an
enum where each variant contains a reference to the relevant syntax
node. Previously you could customize the name of the reference enum for
a group — primarily because there was an existing `ExpressionRef` type
that wouldn't have lined up with the auto-derived name `ExprRef`. This
follow-up PR is a simple search/replace to switch over to the
auto-derived name, so that we can remove this customization point.
2025-01-21 13:46:31 -05:00
Douglas Creager 98ef564170
Remove `AstNode` and `AnyNode` (#15479)
While looking into potential AST optimizations, I noticed the `AstNode`
trait and `AnyNode` type aren't used anywhere in Ruff or Red Knot. It
looks like they might be historical artifacts of previous ways of
consuming AST nodes?

- `AstNode::cast`, `AstNode::cast_ref`, and `AstNode::can_cast` are not
used anywhere.
- Since `cast_ref` isn't needed anymore, the `Ref` associated type isn't
either.

This is a pure refactoring, with no intended behavior changes.
2025-01-17 17:11:00 -05:00
Micha Reiser 96c2d0996d
Fix curly bracket spacing around curly f-string expressions (#15471) 2025-01-15 09:22:47 +01:00
Micha Reiser 424b720c19
Ruff 2025 style guide (#13906)
Closes #13371
2025-01-09 10:20:06 +01:00
Micha Reiser 1218bc65ed
Preserve multiline implicit concatenated strings in docstring positions (#15126) 2025-01-03 10:27:14 +01:00
Micha Reiser b63c2e126b
Upgrade Rust toolchain to 1.83 (#14677) 2024-11-29 12:05:05 +00:00
Dhruv Manilawala f3dac27e9a
Fix f-string formatting in assignment statement (#14454)
## Summary

fixes: #13813

This PR fixes a bug in formatting an assignment statement when the
value is an f-string.

This is resolved by using custom best fit layouts if the f-string is (a)
not already a flat f-string (thus, cannot be multiline) and (b) is not a
multiline string (thus, cannot be flattened). So, it is used in cases
like the following:
```py
aaaaaaaaaaaaaaaaaa = f"testeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee{
    expression}moreeeeeeeeeeeeeeeee"
```
This example is (a) `FStringLayout::Multiline` and (b) not a multiline string.

There are various other examples in the PR diff along with additional
explanation and context as code comments.

## Test Plan

Add multiple test cases for various scenarios.
2024-11-26 15:07:18 +05:30
Dhruv Manilawala efe54081d6
Remove `FormatFStringPart` (#14448)
## Summary

This is just a small refactor to remove the `FormatFStringPart` as it's
only used in the case when the f-string is not implicitly concatenated,
in which case the only part is going to be `FString`. In implicitly
concatenated f-strings, we use `StringLike` instead.
2024-11-25 10:29:22 +05:30
Micha Reiser 9f3a38d408
Extract `LineIndex` independent methods from `Locator` (#13938) 2024-10-28 07:53:41 +00:00
Micha Reiser 7272f83868
Fix preview style name in `can_omit_parentheses` to `is_f_string_formatting_enabled` (#13907) 2024-10-24 11:32:28 +00:00
Micha Reiser 73ee72b665
Join implicit concatenated strings when they fit on a line (#13663) 2024-10-24 11:52:22 +02:00
Micha Reiser 8f5b2aac9a
Refactor: Remove `StringPart` and `AnyStringPart` in favor of `StringLikePart` (#13772) 2024-10-16 12:52:06 +02:00
Micha Reiser fc661e193a
Normalize implicit concatenated f-string quotes per part (#13539) 2024-10-08 09:59:17 +00:00
Micha Reiser 253f5f269a
refactor: Rename `FormatStringContinuation` to `FormatImplicitConcatenatedString` (#13531) 2024-09-27 08:24:50 +00:00
Micha Reiser 9442cd8fae
Parenthesize `match..case` `if` guards (#13513) 2024-09-26 06:44:33 +00:00
Micha Reiser 531ebf6dff
Fix parentheses around return type annotations (#13381) 2024-09-20 09:23:53 +02:00
Alex Waygood aa0db338d9
Implement `iter()`, `len()` and `is_empty()` for all display-literal AST nodes (#12807) 2024-08-12 10:39:28 +00:00
Micha Reiser e18b4e42d3
[red-knot] Upgrade to the *new* *new* salsa (#12406) 2024-07-29 07:21:24 +00:00
Dhruv Manilawala 549cc1e437
Build `CommentRanges` outside the parser (#11792)
## Summary

This PR updates the parser to no longer build the `CommentRanges`;
instead, it'll be built by the linter and the formatter when it's
required.

For the linter, it'll be built and owned by the `Indexer` while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.

## Test Plan

`cargo insta test`
2024-06-09 09:55:17 +00:00
Micha Reiser 32ca704956
Rename `PreorderVisitor` to `SourceOrderVisitor` (#11798)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-06-07 17:01:58 +00:00
Dhruv Manilawala bf5b62edac
Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.

The main motivation to make the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful to make the
parser error resilient. For example, currently the emitted tokens
remain the same even if the parser can recover from an unclosed
parenthesis. This is important because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in
a non-parenthesized context. These different kinds of newlines are also
used to emit the indentation tokens, which the parser uses to determine
the start and end of a block.

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.

This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains a subset of token kinds which has
the owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
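
A Python concept sketch of the lazy, checkpoint-aware design (the real lexer is Rust and lexes on demand; the names here are illustrative):

```python
class LazyLexer:
    """Emits one token at a time and tracks the "current" token."""

    def __init__(self, tokens):
        self._tokens = tokens  # stand-in for lexing the source on demand
        self._pos = 0
        self.current = None

    def next_token(self):
        # The lexer only advances when the parser asks for a token.
        self.current = self._tokens[self._pos]
        self._pos += 1
        return self.current

    def checkpoint(self):
        return self._pos

    def rewind(self, checkpoint):
        # Speculative parsing failed: restore the saved position.
        self._pos = checkpoint

lexer = LazyLexer(["match", "(", "x", ")", ":"])
cp = lexer.checkpoint()
lexer.next_token()  # speculate: is "match" a keyword here?
lexer.rewind(cp)    # undecided, rewind and parse again in context
```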

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
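
To make the ambiguity concrete, here's plain Python where the same `match` spelling is an identifier in one position and a keyword in another, which is exactly what the context plus lookahead has to resolve:

```python
match = {"status": "ok"}  # identifier: assignment target
value = match["status"]   # identifier: subscript expression
match match["status"]:    # keyword: a match statement
    case "ok":
        print("ok")
```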

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser are working together, the API
corresponding to (2) can no longer exist. The final API is described in this
PR description: https://github.com/astral-sh/ruff/pull/11494.

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references of the
lexer and `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch below). These
include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function:
the former returns the tokens within the given range while the latter
returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string tokens but can be expanded to include other information in the
future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
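
A Python concept sketch of those query methods (ruff's `Tokens` is Rust; the `(kind, start, end)` triples stand in for real tokens):

```python
class Tokens:
    """Wraps (kind, start, end) token triples with range queries."""

    def __init__(self, tokens):
        self._tokens = tokens

    def up_to_first_unknown(self):
        # Replaces `tokenize`: all tokens before the first unknown one.
        result = []
        for token in self._tokens:
            if token[0] == "Unknown":
                break
            result.append(token)
        return result

    def in_range(self, start, end):
        # Tokens fully contained within [start, end).
        return [t for t in self._tokens if t[1] >= start and t[2] <= end]

    def after(self, offset):
        # All tokens starting at or after the given offset.
        return [t for t in self._tokens if t[1] >= offset]
```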

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even when the source
code contains syntax errors.
2024-06-03 18:23:50 +05:30
Alex Waygood 6774f27f4b
Refactor the `ExprDict` node (#11267)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-07 11:46:10 +00:00
Micha Reiser 6a1e555537
Upgrade to Rust 1.78 (#11260) 2024-05-03 12:46:21 +00:00
Dhruv Manilawala 13ffb5bc19
Replace LALRPOP parser with hand-written parser (#10036)
(Supersedes #9152, authored by @LaBatata101)

## Summary

This PR replaces the current parser generated from LALRPOP with a
hand-written recursive descent parser.

It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
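
CPython's own `ast` module shows the corrected shape:

```python
import ast

# Per PEP 646, the index in `data[*x]` is a tuple containing a starred
# expression, not a bare starred expression (Python 3.11+):
tree = ast.parse("data[*x]", mode="eval")
print(type(tree.body.slice).__name__)          # Tuple
print(type(tree.body.slice.elts[0]).__name__)  # Starred
```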

Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.

For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.

## Test Plan

Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.

## Acknowledgements

- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)

---------

Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-04-18 17:57:39 +05:30
Alex Waygood 92e6026446
Apply NFKC normalization to unicode identifiers in the lexer (#10412) 2024-03-18 11:56:56 +00:00
Micha Reiser 184241f99a
Remove `Expr` postfix from `ExprNamed`, `ExprIf`, and `ExprGenerator` (#10229)
The expression types in our AST are called `ExprYield`, `ExprAwait`,
`ExprStringLiteral` etc, except `ExprNamedExpr`, `ExprIfExpr` and
`ExprGenratorExpr`. This seems to align with [Python AST's
naming](https://docs.python.org/3/library/ast.html) but feels
inconsistent and excessive.

This PR removes the `Expr` postfix from `ExprNamedExpr`, `ExprIfExpr`,
and `ExprGeneratorExpr`.
2024-03-04 12:55:01 +01:00
Micha Reiser a6f32ddc5e
Ruff 2024.2 style (#9639) 2024-02-29 09:30:54 +01:00
Micha Reiser 77c5561646
Add `parenthesized` flag to `ExprTuple` and `ExprGenerator` (#9614) 2024-02-26 15:35:20 +00:00
Dhruv Manilawala 72bf1c2880
Preview minimal f-string formatting (#9642)
## Summary

_This is a preview-only feature and is available using the `--preview`
command-line flag._

With the implementation of [PEP 701] in Python 3.12, f-strings can now
be broken into multiple lines, can contain comments, and can re-use the
same quote character. Currently, no other Python formatter formats
f-strings, so some discussion is still needed to define the style used
for f-string formatting. Relevant discussion:
https://github.com/astral-sh/ruff/discussions/9785

The goal for this PR is to add minimal support for f-string formatting.
This means formatting the expression within each replacement field
without introducing any major style changes.

### Newlines

The heuristic for adding newlines is similar to that of
[Prettier](https://prettier.io/docs/en/next/rationale.html#template-literals)
where the formatter would only split an expression in the replacement
field across multiple lines if there was already a line break within the
replacement field.

In other words, the formatter would not add any newlines unless they
were already present i.e., they were added by the user. This makes
breaking any expression inside an f-string optional and in control of
the user. For example,

```python
# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"

# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
	aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
```


If there are comments in any of the replacement field of the f-string,
then it will always be a multi-line f-string in which case the formatter
would prefer to break expressions i.e., introduce newlines. For example,

```python
x = f"{ # comment
    a }"
```

### Quotes

The logic for formatting quotes remains unchanged. The existing logic is
used to determine the necessary quote char and is used accordingly.

Now, if the expression inside an f-string is itself string-like, then
we need to make sure to preserve the existing quote and not change it to
the preferred quote unless the target version is 3.12 or later. For example,

```python
f"outer {'inner'} outer"

# For pre 3.12, preserve the single quote
f"outer {'inner'} outer"

# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"
```

But, for triple-quoted strings, we can re-use the same quote char unless
the inner string is itself a triple-quoted string.

```python
f"""outer {"inner"} outer"""  # valid
f"""outer {'''inner'''} outer"""  # preserve the single quote char for the inner string
```

### Debug expressions

If debug expressions are present in the replacement field of an f-string,
then the whitespace needs to be preserved as it will be rendered as-is
(for example, `f"{ x = }"`). If there are any nested f-strings, then
the whitespace in them needs to be preserved as well which means that
we'll stop formatting the f-string as soon as we encounter a debug
expression.

```python
f"outer {   x =  !s  :.3f}"
#                  ^^
#                  We can remove these whitespaces
```

Now, the whitespace doesn't need to be preserved around the conversion
specifier and format specifier, so we'll format them as usual, but we won't be
formatting any nested f-string within the format specifier.

### Miscellaneous

- The
[`hug_parens_with_braces_and_square_brackets`](https://github.com/astral-sh/ruff/issues/8279)
preview style isn't implemented w.r.t. the f-string curly braces.
- The
[indentation](https://github.com/astral-sh/ruff/discussions/9785#discussioncomment-8470590)
is always relative to the statement containing the f-string

## Test Plan

* Add new test cases
* Review existing snapshot changes
* Review the ecosystem changes

[PEP 701]: https://peps.python.org/pep-0701/
2024-02-16 20:28:11 +05:30
Micha Reiser 4f7fb566f0
Range formatting: Fix invalid syntax after parenthesizing expression (#9751) 2024-02-02 17:56:25 +01:00
Micha Reiser 91046e4c81
Preserve indent around multiline strings (#9637) 2024-01-26 08:18:30 +01:00
Alex Waygood a1e65a92bd
Move `is_tuple_parenthesized` from the formatter to `ruff_python_ast` (#9533)
This allows it to be used in the linter as well as the formatter. It
will be useful in #9474
2024-01-15 16:10:40 +00:00
Micha Reiser ac02d3aedd
Hug multiline-strings preview style (#9243) 2024-01-10 12:47:34 +01:00
Charlie Marsh ba71772d93
Parenthesize breaking named expressions in match guards (#9396)
## Summary

This is an attempt to solve
https://github.com/astral-sh/ruff/issues/9394 by avoiding breaks in
named expressions when invalid.
2024-01-08 14:47:01 +00:00
Dimitri Papadopoulos Orfanos d04d49cc0e
Fix typos found by codespell (#9346)
## Summary

Fix typos found by
[codespell](https://github.com/codespell-project/codespell).

## Test Plan

CI tests.
2024-01-02 02:08:15 +00:00
Charlie Marsh e80260a3c5
Remove source path from parser errors (#9322)
## Summary

I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies to instead pass the path directly in those cases.
2023-12-30 20:33:05 +00:00