## Summary
Fixes #20774 by tracking whether an `InterpolatedStringState` element is
nested inside another interpolated element. This feels like a somewhat
naive fix, so I'm open to other ideas, but it resolves the problem in
the issue and clears up the syntax error in the black compatibility
test without affecting many other cases.
The other affected case is actually interesting too because the
[input](96b156303b/crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/fstring.py (L707))
is invalid, but the previous quote selection fixed the invalid syntax:
```pycon
Python 3.11.13 (main, Sep 2 2025, 14:20:25) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f'{1: abcd "{'aa'}" }' # input
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
>>> f'{1: abcd "{"aa"}" }' # old output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier ' abcd "aa" ' for object of type 'int'
>>> f'{1: abcd "{'aa'}" }' # new output
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
```
We now preserve the invalid syntax in the input.
Unfortunately, this also seems to be another edge case I didn't consider
in https://github.com/astral-sh/ruff/pull/20867 because we don't flag
this as a syntax error after 0.14.1:
<details><summary>Shell output</summary>
<p>
```
> uvx ruff@0.14.0 check --ignore ALL --target-version py311 - <<EOF
f'{1: abcd "{'aa'}" }'
EOF
invalid-syntax: Cannot reuse outer quote character in f-strings on Python 3.11 (syntax was added in Python 3.12)
--> -:1:14
|
1 | f'{1: abcd "{'aa'}" }'
| ^
|
Found 1 error.
> uvx ruff@0.14.1 check --ignore ALL --target-version py311 - <<EOF
f'{1: abcd "{'aa'}" }'
EOF
All checks passed!
> uvx python@3.11 -m ast <<EOF
f'{1: abcd "{'aa'}" }'
EOF
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 1752, in <module>
main()
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 1748, in main
tree = parse(source, args.infile.name, args.mode, type_comments=args.no_type_comments)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 50, in parse
return compile(source, filename, mode, flags,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
```
</p>
</details>
I assumed that was the same `ParseError` as the one caused by
`f"{1:""}"`, but this is a nested interpolation inside the format spec.
## Test Plan
New test copied from the black compatibility test. I guess this is a
duplicate now; I started working on this branch before the new black
tests were imported, so I could delete the separate test in our fixtures
if that's preferable.
Summary
--
Fixes #20844 by refining the unsupported syntax error check for [PEP 701]
f-strings before Python 3.12 to allow backslash escapes and escaped outer
quotes in the format spec part of f-strings. These are only disallowed
within the f-string expression part on earlier versions. Using the
examples from the PR:
```pycon
>>> f"{1:\x64}"
'1'
>>> f"{1:\"d\"}"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier '"d"' for object of type 'int'
```
Note that the second case is a runtime error, but this is actually
avoidable if
you override `__format__`, so despite being pretty weird, this could
actually be
a valid use case.
```pycon
>>> class C:
... def __format__(*args, **kwargs): return "<C>"
...
>>> f"{C():\"d\"}"
'<C>'
```
At first I thought narrowing the range we check to exclude the format
spec would only work for escapes, but it turns out that cases like
`f"{1:""}"` are already covered by an existing `ParseError`, so we can
just narrow the range of both our escape and quote checks.
Our comment check also seems to be working correctly because it's based
on the actual tokens. A case like
[this](https://play.ruff.rs/9f1c2ff2-cd8e-4ad7-9f40-56c0a524209f):
```python
f"""{1:# }"""
```
doesn't include a comment token; instead, the `#` is part of an
`InterpolatedStringLiteralElement`.
Test Plan
--
New inline parser tests
[PEP 701]: https://peps.python.org/pep-0701/
Summary
--
This PR implements the black preview style from
https://github.com/psf/black/pull/4720. As of Python 3.14, you're
allowed to omit the parentheses around groups of exceptions, as long as
there's no `as` binding:
**3.13**
```pycon
Python 3.13.4 (main, Jun 4 2025, 17:37:06) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException: ...
...
File "<python-input-1>", line 2
except Exception, BaseException: ...
^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized
```
**3.14**
```pycon
Python 3.14.0rc2 (main, Sep 2 2025, 14:20:56) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except Exception, BaseException: ...
...
Ellipsis
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException as e: ...
...
File "<python-input-2>", line 2
except Exception, BaseException as e: ...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized when using 'as'
```
I think this ended up being pretty straightforward, at least once Micha
showed me where to start :)
Test Plan
--
New tests
At first I thought we were deviating from black in how we handle
comments within the exception type tuple, but I think this applies to
how we format all tuples, not specifically to the new preview style.
Summary
--
```shell
git clone git@github.com:psf/black.git ../other/black
crates/ruff_python_formatter/resources/test/fixtures/import_black_tests.py ../other/black
```
Then I ran our tests and accepted the snapshots.
I had to make a small fix to our tuple normalization logic for `del`
statements in the second commit; otherwise the tests panicked at a
changed AST. I think the new implementation is closer to the intention
described in the nearby comment anyway, though.
The first commit adds the new Python, settings, and `.expect` files, the
next three commits make some small
fixes to help get the tests running, and then the fifth commit accepts
all but one of the new snapshots. The last commit includes the new
unsupported syntax error for one f-string example, tracked in #20774.
Test Plan
--
Newly imported tests. I went through all of the new snapshots and added
review comments below. I think they're all expected, except a few cases
I wasn't 100% sure about.
This PR resolves the issue noticed in
https://github.com/astral-sh/ruff/pull/20777#discussion_r2417233227.
Namely, cases like this were being flagged as syntax errors despite
being perfectly valid on Python 3.8:
```pycon
Python 3.8.20 (default, Oct 2 2024, 16:34:12)
[Clang 18.1.8 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> with (open("foo.txt", "w")): ...
...
Ellipsis
>>> with (open("foo.txt", "w")) as f: print(f)
...
<_io.TextIOWrapper name='foo.txt' mode='w' encoding='UTF-8'>
```
The second of these was already allowed but not the first:
```shell
> ruff check --target-version py38 --ignore ALL - <<EOF
with (open("foo.txt", "w")): ...
with (open("foo.txt", "w")) as f: print(f)
EOF
invalid-syntax: Cannot use parentheses within a `with` statement on Python 3.8 (syntax was added in Python 3.9)
--> -:1:6
|
1 | with (open("foo.txt", "w")): ...
| ^
2 | with (open("foo.txt", "w")) as f: print(f)
|
Found 1 error.
```
There was some discussion of related cases in
https://github.com/astral-sh/ruff/pull/16523#discussion_r1984657793, but
it seems I overlooked the single-element case when flagging tuples. As
suggested in the other thread, we can just check if there's more than
one element or a trailing comma, which is what triggers tuple parsing on
<=3.8 and avoids the false positives.
## Summary
Based on the suggestion in
https://github.com/astral-sh/ruff/issues/20774#issuecomment-3383153511,
I added rendering of unsupported syntax errors in our `format` test.
In support of this, I added a `DummyFileResolver` type to `ruff_db` to
pass to `DisplayDiagnostics::new` (first commit). Another option would
obviously be implementing this directly in the fixtures, but we'd have
to import a `NotebookIndex` somehow; either by depending directly on
`ruff_notebook` or re-exporting it from `ruff_db`. I thought it might be
convenient elsewhere to have a dummy resolver, for example in the
parser, where we currently have a separate rendering pipeline
[copied](https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_parser/tests/fixtures.rs#L321)
from our old rendering code in `ruff_linter`. I also briefly tried
implementing a `TestDb` in the formatter since I noticed the
`ruff_python_formatter::db` module, but that was turning into a lot more
code than the dummy resolver.
We could also push this a bit further if we wanted. I didn't add the new
snapshots to the black compatibility tests or to the preview snapshots,
for example. I thought it was kind of noisy enough (and helpful enough)
already, though. We could also use a shorter diagnostic format, but the
full output seems most useful once we accept this initial large batch of
changes.
## Test Plan
I went through the baseline snapshots pretty quickly, but they all
looked reasonable to me, with one exception I noted below. I also tested
that the case from #20774 produces a new unsupported syntax error.
Summary
--
Closes #19467 and also removes the warning about using Python 3.14
without preview enabled.
I also bumped `PythonVersion::default` to 3.10 because 3.9 reaches EOL
this month, but we could also defer that for now if we wanted.
The first three commits are related to the `latest` bump to 3.14; the
fourth commit bumps the default to 3.10.
Note that this PR also bumps the default Python version for ty to 3.10
because there was a test asserting that it stays in sync with
`ast::PythonVersion`.
Test Plan
--
Existing tests
I spot-checked the ecosystem report, and I believe these are all
expected. Inbits doesn't specify a target Python version, so I guess
we're applying the default. UP007, UP035, and UP045 all use the new
default value to emit new diagnostics.
Resolves a crash when attempting to format code like:
```python
from x import (a as # whatever
b)
```
Reworks the way comments are associated with nodes when parsing modules,
so that all possible comment positions can be retained and reproduced during
formatting.
Overall follows Black's formatting style for multi-line import statements.
Fixes issue #19138
This adds a new `backend: internal | uv` option to the LSP
`FormatOptions` allowing users to perform document and range formatting
operations through uv. The idea here is to prototype a solution for users
to transition to a `uv format` command without encountering version
mismatches (and consequently, formatting differences) between the LSP's
version of `ruff` and uv's version of `ruff`.
The primary alternative to this would be to use uv to discover the
`ruff` version used to start the LSP in the first place. However, this
would increase the scope of a minimal `uv format` command beyond "run a
formatter", and raise larger questions about how uv should be used to
coordinate toolchain discovery. I think those are good things to
explore, but I'm hesitant to let them block a `uv format`
implementation. Another downside of using uv to discover `ruff` is that
it needs to be implemented _outside_ the LSP; e.g., we'd need to change
the instructions on how to run the LSP and implement it in each editor
integration, like the VS Code plugin.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Removes the `module_ptr` field from `AstNodeRef` in release mode, and
changes `NodeIndex` to a `NonZeroU32` to reduce the size of
`Option<AstNodeRef<_>>` fields.
I believe CI runs in debug mode, so this won't show up in the memory
report, but this reduces memory by ~2% in release mode.
As of [this cpython PR](https://github.com/python/cpython/pull/135996),
it is not allowed to concatenate t-strings with non-t-strings,
implicitly or explicitly. Expressions such as `"foo" t"{bar}"` are now
syntax errors.
This PR updates some AST nodes and parsing to reflect this change.
The structural change is that `TStringPart` is no longer needed, since,
as in the case of `BytesStringLiteral`, the only possibilities are that
we have a single `TString` or a vector of such (representing an implicit
concatenation of t-strings). This removes a level of nesting from many
AST expressions (which is what all the snapshot changes reflect), and
simplifies some logic in the implementation of visitors, for example.
The other change of note is in the parser. When we meet an implicit
concatenation of string-like literals, we now count the number of
t-string literals. If these do not exhaust the total number of
implicitly concatenated pieces, then we emit a syntax error. To recover
from this syntax error, we encode any t-string pieces as _invalid_
string literals (which means we flag them as invalid, record their
range, and record the value as `""`). Note that if at least one of the
pieces is an f-string we prefer to parse the entire string as an
f-string; otherwise we parse it as a string.
This logic is exactly the same as how we currently treat
`BytesStringLiteral` parsing and error recovery - and carries with it
the same pros and cons.
Finally, note that I have not implemented any changes in the
implementation of the formatter. As far as I can tell, none are needed.
I did change a few of the fixtures so that we are always concatenating
t-strings with t-strings.
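To illustrate the rule (a sketch; runnable only on Python 3.14+, where
t-strings exist):

```python
# t-strings only concatenate with other t-strings.
name = "world"

ok = t"hello " t"{name}"          # implicit concatenation: fine
also_ok = t"hello " + t"{name}"   # explicit concatenation: fine

# Both of these are rejected:
#   bad = "hello " t"{name}"      # SyntaxError: implicit str/t-string mix
#   bad = t"hello {name}" + "!"   # TypeError at runtime: str + t-string
```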
Closes #18671
Note that while this has, I believe, always been invalid syntax, it was
reported as a different syntax error until Python 3.12:
Python 3.11:
```pycon
>>> x = 1
>>> f"{x! s}"
File "<stdin>", line 1
f"{x! s}"
^
SyntaxError: f-string: invalid conversion character: expected 's', 'r', or 'a'
```
Python 3.12:
```pycon
>>> x = 1
>>> f"{x! s}"
File "<stdin>", line 1
f"{x! s}"
^^^
SyntaxError: f-string: conversion type must come right after the exclamanation mark
```
## Summary
Garbage collect ASTs once we are done checking a given file. Queries
with a cross-file dependency on the AST will reparse the file on demand.
This reduces ty's peak memory usage by ~20-30%.
The primary change of this PR is adding a `node_index` field to every
AST node, that is assigned by the parser. `ParsedModule` can use this to
create a flat index of AST nodes any time the file is parsed (or
reparsed). This allows `AstNodeRef` to simply index into the current
instance of the `ParsedModule`, instead of storing a pointer directly.
The indices are somewhat hackily (using an atomic integer) assigned by
the `parsed_module` query instead of by the parser directly. Assigning
the indices in source-order in the (recursive) parser turns out to be
difficult, and collecting the nodes during semantic indexing is
impossible as `SemanticIndex` does not hold onto a specific
`ParsedModuleRef`, which the pointers in the flat AST are tied to. This
means that we have to do an extra AST traversal to assign and collect
the nodes into a flat index, but the small performance impact (~3% on
cold runs) seems worth it for the memory savings.
Part of https://github.com/astral-sh/ty/issues/214.
## Summary
https://github.com/astral-sh/ty/issues/214 will require a couple
invasive changes that I would like to get merged even before garbage
collection is fully implemented (to avoid rebasing):
- `ParsedModule` can no longer be dereferenced directly. Instead you
need to load a `ParsedModuleRef` to access the AST, which requires a
reference to the salsa database (as it may require re-parsing the AST if
it was collected).
- `AstNodeRef` can only be dereferenced with the `node` method, which
takes a reference to the `ParsedModuleRef`. This allows us to encode the
fact that ASTs do not live as long as the database and may be collected
as soon as a given instance of a `ParsedModuleRef` is dropped. There are a
number of places where we currently merge the `'db` and `'ast`
lifetimes, so this requires giving some types/functions two separate
lifetime parameters.
This PR implements template strings (t-strings) in the parser and
formatter for Ruff.
Minimal changes necessary to compile were made in other parts of the code (e.g. ty, the linter, etc.). These will be covered properly in follow-up PRs.
## Summary
I decided to disable the new
[`needless_continue`](https://rust-lang.github.io/rust-clippy/master/index.html#needless_continue)
rule because I often found the explicit `continue` more readable than an
empty block or having to invert the condition of another branch.
## Test Plan
`cargo test`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
I don't remember exactly when we made `Identifier` a node, but it is now
considered one (it implements `AnyNodeRef` and has a range). However,
we never updated
the `SourceOrderVisitor` to visit identifiers because we never had a use
case for it and visiting new nodes can change how the formatter
associates comments (breaking change!).
This PR updates the `SourceOrderVisitor` to visit identifiers and
changes the formatter comment visitor to skip identifiers (updating the
visitor might be desirable because it could help simplify some comment
placement logic, but this is out of scope for this PR).
## Test Plan
Tests, updated snapshot tests
## Summary
We renamed the `PreorderVisitor` to `SourceOrderVisitor` a long time ago,
but it seems we missed renaming the `visit_preorder` functions to
`visit_source_order`.
This PR renames `visit_preorder` to `visit_source_order`.
## Test Plan
`cargo test`
## Summary
This PR detects the use of PEP 701 f-strings before 3.12. This one
sounded difficult and ended up being pretty easy, so I think there's a
good chance I've over-simplified things. However, from experimenting in
the Python REPL and checking with [pyright], I think this is correct.
pyright actually doesn't even flag the comment case, but Python does.
I also checked pyright's implementation for
[quotes](98dc4469cc/packages/pyright-internal/src/analyzer/checker.ts (L1379-L1398))
and
[escapes](98dc4469cc/packages/pyright-internal/src/analyzer/checker.ts (L1365-L1377))
and think I've approximated how they do it.
Python's error messages also point to the simple approach of these
characters simply not being allowed:
```pycon
Python 3.11.11 (main, Feb 12 2025, 14:51:05) [Clang 19.1.6 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f'''multiline {
... expression # comment
... }'''
File "<stdin>", line 3
}'''
^
SyntaxError: f-string expression part cannot include '#'
>>> f'''{not a line \
... continuation}'''
File "<stdin>", line 2
continuation}'''
^
SyntaxError: f-string expression part cannot include a backslash
>>> f'hello {'world'}'
File "<stdin>", line 1
f'hello {'world'}'
^^^^^
SyntaxError: f-string: expecting '}'
```
And since escapes aren't allowed, I don't think there are any tricky
cases where nested quotes or comments can sneak in.
It's also slightly annoying that the error is repeated for every nested
quote character, but that also mirrors pyright, although they highlight
the whole nested string, which is a little nicer. However, their check
is in the analysis phase, so I don't think we have such easy access to
the quoted range, at least without adding another mini visitor.
## Test Plan
New inline tests
[pyright]:
https://pyright-play.net/?pythonVersion=3.11&strict=true&code=EYQw5gBAvBAmCWBjALgCgO4gHaygRgEoAoEaCAIgBpyiiBiCLAUwGdknYIBHAVwHt2LIgDMA5AFlwSCJhwAuCAG8IoMAG1Rs2KIC6EAL6iIxosbPmLlq5foRWiEAAcmERAAsQAJxAomnltY2wuSKogA6WKIAdABWfPBYqCAE%2BuSBVqbpWVm2iHwAtvlMWMgB2ekiolUAgq4FjgA2TAAeEMieSADWCsoV5qoaqrrGDJ5MiDz%2B8ABuLqosAIREhlXlaybrmyYMXsDw7V4AnoysyAmQ5SIhwYo3d9cheADUeKlv5O%2BpQA
## Summary
This PR stabilizes the fix for
https://github.com/astral-sh/ruff/issues/14001
We try to only make breaking formatting changes once a year. However,
the plan was to release this fix as part of Ruff 0.9 but I somehow
missed it when promoting all other formatter changes.
I think it's worth making an exception here considering that this is a
bug fix, it improves readability, and it should be rare
(very few files in a single project). Our version policy explicitly
allows breaking formatter changes in any minor release, and the idea of
only making breaking formatter changes once a year is mainly to avoid
multiple releases throughout the year that introduce large formatter
changes.
Closes https://github.com/astral-sh/ruff/issues/14001
## Test Plan
Updated snapshot
## Summary
This should give us better coverage for the unsupported syntax error
features and increase our confidence that the formatter doesn't
accidentally introduce new unsupported syntax errors.
A feature like this would have been very useful when working on f-string
formatting, where it took a lot of iteration to find all Python 3.11 or
older incompatibilities.
## Test Plan
I applied my changes on top of
https://github.com/astral-sh/ruff/pull/16523 and removed the target
version check in the with-statement formatting code. As expected, the
integration tests now failed.
## Summary
This is part of the preparation for detecting syntax errors in the
parser from https://github.com/astral-sh/ruff/pull/16090/. As suggested
in [this
comment](https://github.com/astral-sh/ruff/pull/16090/#discussion_r1953084509),
I started working on a `ParseOptions` struct that could be stored in the
parser. For this initial refactor, I only made it hold the existing
`Mode` option, but for syntax errors, we will also need it to have a
`PythonVersion`. For that use case, I'm picturing something like a
`ParseOptions::with_python_version` method, so you can extend the
current calls to something like
```rust
ParseOptions::from(mode).with_python_version(settings.target_version)
```
But I thought it was worth adding `ParseOptions` alone without changing
any other behavior first.
Most of the diff is just updating call sites taking `Mode` to take
`ParseOptions::from(Mode)` or those taking `PySourceType`s to take
`ParseOptions::from(PySourceType)`. The interesting changes are in the
new `parser/options.rs` file and smaller parts of `parser/mod.rs` and
`ruff_python_parser/src/lib.rs`.
## Test Plan
Existing tests, this should not change any behavior.
## Summary
This PR updates the formatter and linter to use the `PythonVersion`
struct from the `ruff_python_ast` crate internally. While this doesn't
remove the need for the `linter::PythonVersion` enum, it does remove the
`formatter::PythonVersion` enum and limits the use in the linter to
deserializing from CLI arguments and config files and moves most of the
remaining methods to the `ast::PythonVersion` struct.
## Test Plan
Existing tests, with some inputs and outputs updated to reflect the new
(de)serialization format. I think these are test-specific and shouldn't
affect any external (de)serialization.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR makes the following changes:
- It adjusts various callsites to use the new
`ast::StringLiteral::contents_range()` method that was introduced in
https://github.com/astral-sh/ruff/pull/16183. This is less verbose and
more type-safe than using the `ast::str::raw_contents()` helper
function.
- It adds a new `ast::ExprStringLiteral::as_unconcatenated_literal()`
helper method, and adjusts various callsites to use it. This addresses
@MichaReiser's review comment at
https://github.com/astral-sh/ruff/pull/16183#discussion_r1957334365.
There is no functional change here, but it helps readability to make it
clearer that we're differentiating between implicitly concatenated
strings and unconcatenated strings at various points.
- It renames the `StringLiteralValue::flags()` method to
`StringLiteralFlags::first_literal_flags()`. If you're dealing with an
implicitly concatenated string `string_node`,
`string_node.value.flags().closer_len()` could give an incorrect result;
this renaming makes it clearer that the `StringLiteralFlags` instance
returned by the method is only guaranteed to give accurate information
for the first `StringLiteral` contained in the `ExprStringLiteral` node
(see the sketch after this list).
- It deletes the unused `BytesLiteralValue::flags()` method. This seems
prone to misuse in the same way as `StringLiteralValue::flags()`: if
it's an implicitly concatenated bytestring, the `BytesLiteralFlags`
instance returned by the method would only give accurate information for
the first `BytesLiteral` in the bytestring.
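As a quick illustration of why per-literal flags matter, each piece of
an implicit concatenation carries its own prefix and quote style:

```python
# Each piece of an implicitly concatenated string has its own prefix and
# quote style, so flags taken from the first piece say nothing about the
# others.
s = "plain" r'\raw' f"""{1 + 1}"""
assert s == "plain\\raw2"
```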
## Test Plan
`cargo test`
## Summary
Fixes https://github.com/astral-sh/ruff/issues/15978
Even Better TOML doesn't support `allOf` well. In fact, it just crashes.
This PR works around this limitation by avoiding `allOf` in the
automatically derived schema for the docstring formatting setting.
### Alternatives
schemars introduces `allOf` whenever it sees a `$ref` alongside other
object properties because this is no longer valid according to Draft 7.
We could replace the visitor performing the rewrite, but I prefer not to
because replacing `allOf` with `oneOf` is only valid for objects that
don't have any other `oneOf` or `anyOf` schema.
## Test Plan
https://github.com/user-attachments/assets/25d73b2a-fee1-4ba6-9ffe-869b2c3bc64e
## Summary
This is a follow-up to #15726, #15778, and #15794 to preserve the triple
quote and prefix flags in plain strings, bytestrings, and f-strings.
I also added a `StringLiteralFlags::without_triple_quotes` method to
avoid passing along triple quotes in rules like SIM905 where it might
not make sense, as discussed
[here](https://github.com/astral-sh/ruff/pull/15726#discussion_r1930532426).
## Test Plan
Existing tests, plus many new cases in the `generator::tests::quote`
test that should cover all combinations of quotes and prefixes, at least
for simple string bodies.
Closes #7799 when combined with #15694, #15726, #15778, and #15794.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This is another follow-up to #15726 and #15778, extending the
quote-preserving behavior to f-strings and deleting the now-unused
`Generator::quote` field.
## Details
I also made one unrelated change to `rules/flynt/helpers.rs` to remove a
`to_string` call for making a `Box<str>` and tweaked some arguments to
some of the `Generator::unparse_f_string` methods to make the code
easier to follow, in my opinion. Happy to revert especially the latter
of these if needed.
Unfortunately this still does not fix the issue in #9660, which appears
to be more of an escaping issue than a quote-preservation issue. After
#15726, the result is now `a = f'# {"".join([])}' if 1 else ""` instead
of `a = f"# {''.join([])}" if 1 else ""` (single quotes on the outside
now), but we still don't have the desired behavior of double quotes
everywhere on Python 3.12+. I added a test for this but split it off
into another branch since it ended up being unaddressed here, but my
`dbg!` statements showed the correct preferred quotes going into
[`UnicodeEscape::with_preferred_quote`](https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_literal/src/escape.rs#L54).
## Test Plan
Existing rule and `Generator` tests.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This is a very closely related follow-up to #15726, adding the same
quote-preserving behavior to bytestrings. Only one rule (UP018) was
affected this time, and it was easy to mirror the plain string changes.
## Test Plan
Existing tests
## Summary
This is a first step toward fixing #7799 by using the quoting style
stored in the `flags` field on `ast::StringLiteral`s to select a quoting
style. This PR does not include support for f-strings or byte strings.
Several rules also needed small updates to pass along existing quoting
styles instead of using `StringLiteralFlags::default()`. The remaining
snapshot changes are intentional and should preserve the quotes from the
input strings.
## Test Plan
Existing tests with some accepted updates, plus a few new RUF055 tests
for raw strings.
---------
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
The AST generator creates a reference enum for each syntax group — an
enum where each variant contains a reference to the relevant syntax
node. Previously you could customize the name of the reference enum for
a group — primarily because there was an existing `ExpressionRef` type
that wouldn't have lined up with the auto-derived name `ExprRef`. This
follow-up PR is a simple search/replace to switch over to the
auto-derived name, so that we can remove this customization point.
While looking into potential AST optimizations, I noticed the `AstNode`
trait and `AnyNode` type aren't used anywhere in Ruff or Red Knot. It
looks like they might be historical artifacts of previous ways of
consuming AST nodes?
- `AstNode::cast`, `AstNode::cast_ref`, and `AstNode::can_cast` are not
used anywhere.
- Since `cast_ref` isn't needed anymore, the `Ref` associated type isn't
either.
This is a pure refactoring, with no intended behavior changes.
This PR replaces most of the hard-coded AST definitions with a
generation script, similar to what happens in `ruff_python_formatter`.
I've replaced every "rote" definition that I could find, where the
content is entirely boilerplate and only depends on what syntax nodes
there are and which groups they belong to.
This is a pretty massive diff, but it's entirely a refactoring. It
should make absolutely no changes to the API or implementation. In
particular, this required adding some configuration knobs that let us
override default auto-generated names where they don't line up with
types that we created previously by hand.
## Test plan
There should be no changes outside of the `ruff_python_ast` crate, which
verifies that there were no API changes as a result of the
auto-generation. Aggressive `cargo clippy` and `uvx pre-commit` runs
after each commit in the branch.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Fixes https://github.com/astral-sh/ruff/issues/14778
The formatter incorrectly removed the inner implicitly concatenated
string for the following single-line f-string:
```py
f"{'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' 'a' if True else ""}"
# formatted
f"{ if True else ''}"
```
This happened because I changed the `RemoveSoftlinesBuffer` in
https://github.com/astral-sh/ruff/pull/14489 to remove any content
wrapped in `if_group_breaks`. After all, it emulates an *all flat*
layout. This works fine when `if_group_breaks` is only used to **add**
content if the group breaks. It doesn't work if the same content is
rendered differently depending on whether the group fits, using
`if_group_breaks` and `if_group_fits`, because the enclosing `group`
might still *break* if the entire content exceeds the line-length limit.
This PR fixes the issue by unwrapping any `if_group_fits` content,
removing the `if_group_fits` start and end tags.
## Test Plan
added test
## Summary
This PR fixes a bug in f-string formatting so that escaped newlines are
no longer considered for `is_multiline`. This is done by checking
whether the f-string is triple-quoted, similar to normal string literals.
This is not required to be gated behind preview because the logic change
for `is_multiline` was added in
https://github.com/astral-sh/ruff/pull/14454.
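For reference, an escaped newline is a line continuation rather than a
real line break, so a single-quoted f-string containing one still
renders as a single line:

```python
# The backslash-escaped newline does not make this f-string multiline.
value = 1
s = f"one {value} \
two"
assert s == "one 1 two"
```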
## Test Plan
Add a test case which formats differently on `main`:
https://play.ruff.rs/ea3c55c2-f0fe-474e-b6b8-e3365e0ede5e
## Summary
fixes: #14608
The logic that was only applied for 3.12+ target version needs to be
applied for other versions as well.
## Test Plan
I've moved the existing test cases for 3.12 only to `f_string.py` so
that it's tested against the default target version.
I think we should probably enable testing for two target versions
(pre-3.12 and 3.12), but it won't highlight any issue because the parser
doesn't consider this. Maybe we should enable this once we have
target-version-specific syntax errors in place
(https://github.com/astral-sh/ruff/issues/6591).
## Summary
fixes: #13813
This PR fixes a bug in the formatting assignment statement when the
value is an f-string.
This is resolved by using custom best fit layouts if the f-string is (a)
not already a flat f-string (thus, cannot be multiline) and (b) is not a
multiline string (thus, cannot be flattened). So, it is used in cases
like the following:
```py
aaaaaaaaaaaaaaaaaa = f"testeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee{
expression}moreeeeeeeeeeeeeeeee"
```
Which is (a) `FStringLayout::Multiline` and (b) not a multiline string.
There are various other examples in the PR diff along with additional
explanation and context as code comments.
## Test Plan
Add multiple test cases for various scenarios.
## Summary
This is just a small refactor to remove `FormatFStringPart`, as it's
only used when the f-string is not implicitly concatenated, in which
case the only part is going to be `FString`. For implicitly concatenated
f-strings, we use `StringLike` instead.
Remove unnecessary uses of `.as_ref()`, `.iter()`, `&**` and similar, mostly in situations when iterating over variables. Many of these changes are only possible following #13826, when we bumped our MSRV to 1.80: several useful implementations on `&Box<[T]>` were only stabilised in Rust 1.80. Some of these changes we could have done earlier, however.
## Summary
Follow-up to #13147, this PR implements the `AstNode` for `Identifier`.
This makes it easier to create the `NodeKey` in red knot because it uses
a generic method to construct the key from `AnyNodeRef` and is important
for definitions that are created only on identifiers instead of
`ExprName`.
## Test Plan
`cargo test` and `cargo clippy`
When there is a function or class definition at the end of a suite
followed by the beginning of an alternative block, we have to insert a
single empty line between them.
In the if-else-statement example below, we insert an empty line after
the `foo` in the if-block, but none after the else-block `foo`, since in
the latter case the enclosing suite already adds empty lines.
```python
if sys.version_info >= (3, 10):
def foo():
return "new"
else:
def foo():
return "old"
class Bar:
pass
```
To do so, we track whether the current suite is the last one in the
current statement with a new option on the suite kind.
Fixes #12199
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR updates the parser to remove building the `CommentRanges` and
instead it'll be built by the linter and the formatter when it's
required.
For the linter, it'll be built and owned by the `Indexer` while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.
## Test Plan
`cargo insta test`
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.
The main motivation to make the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful to make the
parser error resilience. For example, currently the emitted tokens
remains the same even if the parser can recover from an unclosed
parenthesis. This is important because the lexer emits a
`NonLogicalNewline` in parenthesized context while a normal `Newline` in
non-parenthesized context. These different kinds of newlines are also
used to emit the indentation tokens, which are important for the parser
as they're used to determine the start and end of a block.
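For comparison, CPython's stdlib tokenizer draws the same distinction
between non-logical and logical newlines:

```python
# tokenize emits NL (non-logical) inside parentheses and NEWLINE at the
# end of a logical line.
import io
import tokenize

src = "x = (1,\n2)\ny = 3\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type in (tokenize.NL, tokenize.NEWLINE):
        kind = "NL" if tok.type == tokenize.NL else "NEWLINE"
        print(kind, repr(tok.line))
# NL 'x = (1,\n'
# NEWLINE '2)\n'
# NEWLINE 'y = 3\n'
```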
Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.
This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This will be solved by introducing a new
`TokenValue` enum which only contains a subset of token kinds which has
the owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.
* **Context:** The transformer needs to know the position of the lexer
between it being at a statement position or a simple statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is mentioned in the PRs implementing it for the `type` and
`match` soft keywords (see the sketch at the end of this section).
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
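To make the classification problem concrete, here's the kind of
ambiguity the parser now resolves on its own (the `type` alias line
requires Python 3.12+):

```python
# Soft keywords are keywords only by position; the same spelling can be
# an ordinary identifier or the start of a statement.
match = [1, 2]          # identifier: assignment target
print(match[0])         # identifier: subscripted name

match match[0]:         # keyword: starts a match statement
    case 1:
        print("one")

type Alias = list[int]  # keyword: starts a type alias statement (3.12+)
```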
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason being that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR description: https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references of the
lexer and `Tok` enum. This was done in two-parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and add
methods to query a certain subset of tokens. These includes:
1. `up_to_first_unknown` which replaces the `tokenize` function
2. `in_range` and `after`, which replace the `lex_starts_at` function:
the former returns the tokens within the given range while the latter
returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is only limited to
any string type token but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for `tokens_and_ranges` function.
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all outputs were the same**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the source code
contains syntax errors.
## Summary
This moves the string-prefix enumerations in `ruff_python_ast` to a
separate submodule. I think this helps clarify that these prefixes are
purely abstract: they only depend on each other, and do not depend on
any of the other code in `nodes.rs` in any way. Moreover, while various
AST nodes _use_ them, they're not really nodes themselves, so they feel
slightly out of place in `nodes.rs`.
I considered moving all of them to `str.rs`, but it felt like enough
code that it could be a separate submodule.
## Test Plan
`cargo test`
## Summary
This PR adds a newtype wrapper around `Vec<FStringElement>` that derefs
to a `&Vec<FStringElement>`.
Both the f-string and the format specifier are made up of a `Vec<FStringElement>`.
By creating a newtype wrapper around it, we can share the methods for
both parent types.
## Summary
This PR renames `AnyStringKind` to `AnyStringFlags` and `AnyStringFlags`
to `AnyStringFlagsInner`.
The main motivation is to have consistent usage of "kind" and "flags".
For each string kind, it's "flags" like `StringLiteralFlags`,
`BytesLiteralFlags`, and `FStringFlags` but it was `AnyStringKind` for
the "any" variant.
## Summary
This PR fixes the bug where the formatter would format an f-string and
could potentially change the AST.
For a triple-quoted f-string, the element can't be formatted over
multiple lines if it has a format specifier, because otherwise the
newline would be treated as part of the format specifier.
Given the following f-string:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f} ddddddddddddddd eeeeeeee"""
```
The formatter sees that the f-string is already multiline, so it assumes
that it can contain line breaks, i.e., be broken into multiple lines.
But in this specific case we can't format it as:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f
} ddddddddddddddd eeeeeeee"""
```
Because the format specifier string would become ".3f\n", which is not
the original string (`.3f`).
If the original source code already contains a newline, it'll be
preserved. For example:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f
} ddddddddddddddd eeeeeeee"""
```
The above will be formatted as:
```py
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {variable:.3f
} ddddddddddddddd eeeeeeee"""
```
Note that the newline after `.3f` is part of the format specifier which
needs to be preserved.
The Python version is irrelevant in this case.
fixes: #10040
## Test Plan
Add some test cases to verify this behavior.
## Summary
I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.
@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.
So boxing type-params also on functions reduces the overall size of
`Stmt`.
## Test Plan
The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
(Supersedes #9152, authored by @LaBatata101)
## Summary
This PR replaces the current parser, generated from LALRPOP, with a
hand-written recursive descent parser.
It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.
For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.
## Test Plan
Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.
## Acknowledgements
- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)
---------
Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This is a follow up on https://github.com/astral-sh/ruff/pull/10492
I incorrectly assumed that `subscript.value.end()` always points past
the value. However, this isn't the case for parenthesized values where
the end "ends" before the parentheses.
## Test Plan
I added new tests for the parenthesized case.
## Summary
This PR fixes an instability where formatting a subscript
whose `slice` is not an `ExprSlice` and that has a trailing
end-of-line comment after its opening `[` required two formatting passes
to be stable.
The fix is to associate the trailing end-of-line comment as dangling
comment on `[` to preserve its position, similar to how Ruff does it for
other parenthesized expressions.
This also matches how trailing end-of-line subscript comments are
handled when the `slice` is an `ExprSlice`.
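The problematic shape, for reference (a self-contained example of a
non-slice subscript with an end-of-line comment after the opening `[`):

```python
data = {"key": 1}
key = "key"

# The trailing end-of-line comment after `[` previously required two
# formatting passes; it's now kept as a dangling comment on the bracket.
value = data[  # comment
    key
]
```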
Fixes https://github.com/astral-sh/ruff/issues/10355
## Versioning
Shipping this as part of a patch release is fine because:
* It fixes a stability issue
* It doesn't impact already formatted code because Ruff would already
have moved the comment to the end of the line (instead of preserving
it)
## Test Plan
Added tests
## Summary
#10151 documented the deviations between Ruff and Black with the new
2024 style guide in the `ruff-python-formatter/README.md`. However,
that's not the documentation shown
on the website when navigating to [intentional
deviations](https://docs.astral.sh/ruff/formatter/black/).
This PR streamlines the `ruff-python-formatter/README.md` and links to
the documentation on the website instead of repeating the same content.
The PR also makes the 2024 style guide deviations available on the
website documentation.
## Test Plan
I built the documentation locally and verified that the 2024 style guide
known deviations are now shown on the website.
## Summary
This PR adds methods on `FString` to iterate over the two different
kinds of elements it can have - literals and expressions. This is
similar to
the methods we have on `ExprFString`.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>