When there is a function or class definition at the end of a suite
followed by the beginning of an alternative block, we have to insert a
single empty line between them.
In the if-else-statement example below, we insert an empty line after
the `foo` in the if-block, but none after the else-block `foo`, since in
the latter case the enclosing suite already adds empty lines.
```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"

else:
    def foo():
        return "old"


class Bar:
    pass
```
To do so, we track whether the current suite is the last one in the
current statement with a new option on the suite kind.
Fixes #12199
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR updates the parser to no longer build the `CommentRanges`;
instead, they are built by the linter and the formatter when
required.
For the linter, it'll be built and owned by the `Indexer` while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.
## Test Plan
`cargo insta test`
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser asks the token
source for the next token, and only then does the lexer advance and
emit it. This means that the lexer needs to be aware of the
"current" token. When `next_token` is called, the current token is
updated with the newly lexed one.
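As a rough sketch (Python pseudocode for illustration; Ruff's implementation
is in Rust, and `lex_one` and the token-kind names are stand-ins, not the
actual API):
```python
class TokenSource:
    def __init__(self, lexer):
        self.lexer = lexer
        self.current = None  # the most recently emitted token

    def next_token(self):
        # Lex exactly one token on demand, skipping trivia, and remember
        # it as the current token.
        tok = self.lexer.lex_one()
        while tok.kind in ("Comment", "NonLogicalNewline"):
            tok = self.lexer.lex_one()
        self.current = tok
        return tok
```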
The main motivation for making the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful for making the
parser error resilient. For example, currently the emitted tokens
remain the same even if the parser recovers from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in a
non-parenthesized context. These different kinds of newlines are also used
to emit the indentation tokens, which the parser uses to determine the
start and end of a block.
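For example, in the snippet below, the line breaks inside the parenthesized
tuple are `NonLogicalNewline` tokens (trivia), while the one ending the final
assignment is a logical `Newline` that feeds into indentation tracking:
```python
x = (
    1,  # the breaks inside these parentheses are NonLogicalNewline tokens
    2,
)
y = 0  # the break ending this statement is a logical Newline token
```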
Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure (sketched after this section): the
idea here is to create a checkpoint and continue lexing. At a later point,
this checkpoint can be used to rewind the lexer back to the provided
checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.
This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that
carry an owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
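Returning to (1), here's a minimal sketch of the checkpoint-rewind idea
(Python pseudocode; exactly what the Rust lexer captures in a checkpoint,
e.g. diagnostics or nesting state, is an assumption here):
```python
class Lexer:
    def __init__(self, source):
        self.source = source
        self.offset = 0      # position of the next character to lex
        self.current = None  # the current token

    def checkpoint(self):
        # Record everything needed to restore this exact lexer state.
        return (self.offset, self.current)

    def rewind(self, checkpoint):
        # Jump back; lexing resumes from the recorded state.
        self.offset, self.current = checkpoint
```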
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it, which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.
* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords (see the snippets after this list).
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.
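To make the classification concrete, here are a few illustrative snippets
(the parenthesized-subject case is the kind that needs speculative parsing,
since lookahead alone can't rule out a call):
```python
x = 0
match = 1     # identifier: `match` is an assignment target
match x:      # keyword: two-token lookahead suffices
    case _:
        pass
match (x):    # keyword, but lookahead alone can't distinguish this from
    case _:   # a call like `match(x)`, so the parser parses speculatively
        pass
type X = int  # keyword: starts a type alias statement (3.12+)
type(x)       # identifier: a call to the builtin `type`
```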
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own: it is, correctly, not sensitive to the
surrounding context. This means that the parser API needs to be updated
to disallow any direct access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser work together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR's description: https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query certain subsets of tokens (see the sketch after this
list). These include:
1. `up_to_first_unknown` which replaces the `tokenize` function
2. `in_range` and `after`, which replace the `lex_starts_at` function;
the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this is limited to
string-type tokens but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
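As promised in (1), here's a Python sketch of the query behavior (the real
`Tokens` struct is in Rust and works on byte offsets; the tuple
representation here is an assumption for illustration):
```python
import bisect

class Tokens:
    def __init__(self, tokens):
        # `tokens` is a list of (start, end, kind) tuples sorted by start.
        self.tokens = tokens
        self.starts = [start for start, _end, _kind in tokens]

    def after(self, offset):
        # All tokens starting at or after the given offset.
        i = bisect.bisect_left(self.starts, offset)
        return self.tokens[i:]

    def in_range(self, start, end):
        # All tokens fully contained within [start, end].
        i = bisect.bisect_left(self.starts, start)
        return [tok for tok in self.tokens[i:] if tok[1] <= end]
```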
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all outputs were the same.**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and provides Ruff the possibility to lint even if the
source code contains syntax errors.
## Summary
This moves the string-prefix enumerations in `ruff_python_ast` to a
separate submodule. I think this helps clarify that these prefixes are
purely abstract: they only depend on each other, and do not depend on
any of the other code in `nodes.rs` in any way. Moreover, while various
AST nodes _use_ them, they're not really nodes themselves, so they feel
slightly out of place in `nodes.rs`.
I considered moving all of them to `str.rs`, but it felt like enough
code that it could be a separate submodule.
## Test Plan
`cargo test`
## Summary
This PR adds a newtype wrapper around `Vec<FStringElement>` that derefs
to a `&Vec<FStringElement>`.
Both f-strings and format specifiers are made up of `Vec<FStringElement>`.
By creating a newtype wrapper around it, we can share the methods for
both parent types.
## Summary
This PR renames `AnyStringKind` to `AnyStringFlags` and `AnyStringFlags`
to `AnyStringFlagsInner`.
The main motivation is consistent usage of "kind" and "flags": each
string kind has its own "flags" type (`StringLiteralFlags`,
`BytesLiteralFlags`, and `FStringFlags`), but the "any" variant was named
`AnyStringKind`.
## Summary
This PR fixes the bug where the formatter would format an f-string and
could potentially change the AST.
For a triple-quoted f-string, a replacement field can't be formatted over
multiple lines if it has a format specifier, because the inserted newline
would otherwise become part of the format specifier.
Given the following f-string:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f} ddddddddddddddd eeeeeeee"""
```
The formatter sees that the f-string is already multiline, so it assumes
that it may introduce line breaks, i.e., break it into multiple lines.
But in this specific case we can't format it as:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f
} ddddddddddddddd eeeeeeee"""
```
Because the format specifier string would become ".3f\n", which is not
the original string (`.3f`).
If the original source code already contained newlines, they'll be
preserved. For example:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
variable:.3f
} ddddddddddddddd eeeeeeee"""
```
The above will be formatted as:
```py
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {variable:.3f
} ddddddddddddddd eeeeeeee"""
```
Note that the newline after `.3f` is part of the format specifier which
needs to be preserved.
The Python version is irrelevant in this case.
fixes: #10040
## Test Plan
Add some test cases to verify this behavior.
## Summary
I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.
@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.
So boxing type-params also on functions reduces the overall size of
`Stmt`.
## Test Plan
The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
(Supersedes #9152, authored by @LaBatata101)
## Summary
This PR replaces the current parser generated from LALRPOP with a
hand-written recursive descent parser.
It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
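This matches the shape CPython's own parser produces (3.11+), which can be
checked with the `ast` module:
```python
import ast

tree = ast.parse("data[*x]", mode="eval")
print(ast.dump(tree.body.slice, indent=2))
# Tuple(
#   elts=[
#     Starred(
#       value=Name(id='x', ctx=Load()),
#       ctx=Load())],
#   ctx=Load())
```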
Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.
For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.
## Test Plan
Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.
## Acknowledgements
- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)
---------
Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This is a follow up on https://github.com/astral-sh/ruff/pull/10492
I incorrectly assumed that `subscript.value.end()` always points past
the value. However, this isn't the case for parenthesized values where
the end "ends" before the parentheses.
## Test Plan
I added new tests for the parenthesized case.
## Summary
This PR fixes an instability where formatting a subscript
whose `slice` is not an `ExprSlice`, with a trailing
end-of-line comment after its opening `[`, required two formatting passes
to become stable.
The fix is to associate the trailing end-of-line comment as dangling
comment on `[` to preserve its position, similar to how Ruff does it for
other parenthesized expressions.
This also matches how trailing end-of-line subscript comments are
handled when the `slice` is an `ExprSlice`.
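A minimal example of the now-stable layout (names are hypothetical):
```python
result = data[  # comment stays dangling on `[` across formatting passes
    key
]
```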
Fixes https://github.com/astral-sh/ruff/issues/10355
## Versioning
Shipping this as part of a patch release is fine because:
* It fixes a stability issue
* It doesn't impact already formatted code because Ruff would already
have moved the comment to the end of the line (instead of preserving
it)
## Test Plan
Added tests
## Summary
This PR adds methods on `FString` to iterate over the two different kinds
of elements it can have - literals and expressions. This is similar to
the methods we have on `ExprFString`.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
I used `codespell` and `gramma` to identify misspellings and grammar
errors throughout the codebase and fixed them. I tried not to make any
controversial changes, but feel free to revert as you see fit.
## Summary
Fix #10282
This PR updates the Python grammar to include the `*` character of
`*args` and `**kwargs` in the range of the `Parameter`
```
def f(*args, **kwargs): pass
#      ~~~~    ~~~~~~   <-- range before the PR
#     ^^^^^  ^^^^^^^^   <-- range after
```
The invalid syntax `def f(*, **kwargs): ...` is also now correctly
reported.
## Test Plan
Test cases were added to `function.rs`.
## Summary
This PR changes how we format `with` statements with a single with item
for Python 3.8 or older. This change is not compatible with Black.
This is how we format a single-item with statement today
```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
This is different from how we would format the same expression if it is
inside any other clause header (`while`, `if`, ...):
```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
Which seems inconsistent to me.
This PR changes the formatting of single-item `with` statements for
Python 3.8 or older to match that of other clause headers.
```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
According to our versioning policy, this style change is gated behind a
preview flag.
## Test Plan
See added tests.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/10267
The issue with the current formatting is that the formatter flips
between the `SingleParenthesizedContextManager` and
`ParenthesizeIfExpands` or `SingleWithTarget` because the layouts use
incompatible formatting (`SingleParenthesizedContextManager`:
`maybe_parenthesize_expression(context)` vs `ParenthesizeIfExpands`:
`parenthesize_if_expands(item)`, `SingleWithTarget`:
`optional_parentheses(item)`).
The fix is to ensure that the layouts between which the formatter flips
when adding or removing parentheses are the same. I do this by
introducing a new `SingleWithoutTarget` layout that uses the same
formatting as `SingleParenthesizedContextManager` if it has no target
and prefer `SingleWithoutTarget` over using `ParenthesizeIfExpands` or
`SingleWithTarget`.
## Formatting change
The downside is that we now use `maybe_parenthesize_expression` over
`parenthesize_if_expands` for expressions where
`can_omit_optional_parentheses` returns `false`. This can lead to stable
formatting changes. I only found one formatting change in our ecosystem
check and, unfortunately, this is necessary to fix the instability (and
instability fixes are okay to have as part of minor changes according to
our versioning policy).
The benefit of the change is that `with` items with a single context
manager and without a target are now formatted identically to how the
same expression would be formatted in other clause headers.
## Test Plan
I ran the ecosystem check locally
## Summary
This PR refactors the with item formatting to use more explicit layouts
to make it easier to understand the different formatting cases.
The benefit of the explicit layout is that it makes it easier to reason
about layout transitions between format runs. For example, today it's
possible that `SingleWithTarget` or `ParenthesizeIfExpands` add
parentheses around the with items for `with aaaaaaaaaa + bbbbbbbbbbbb:
pass`, resulting in `with (aaaaaaaaaa + bbbbbbbbbbbb): pass`. The
problem with this is that the next formatting pass uses the
`SingleParenthesizedContextExpression` layout that uses
`maybe_parenthesize_expression` which is different from
`parenthesize_if_expands(&expr)` or `optional_parentheses(&expr)`.
## Test Plan
`cargo test`
I ran the ecosystem checks locally and there are no changes.
## Summary
Fixes the handling of end-of-line comments that belong to `**kwargs` when
the `**kwargs` comes after a slash.
The issue was that we failed to include the `**kwargs` start position
when determining the start of the next node coming after the `/`.
Fixes https://github.com/astral-sh/ruff/issues/10281
## Test Plan
Added test
The expression types in our AST are called `ExprYield`, `ExprAwait`,
`ExprStringLiteral` etc., except `ExprNamedExpr`, `ExprIfExpr` and
`ExprGeneratorExpr`. This seems to align with [Python AST's
naming](https://docs.python.org/3/library/ast.html) but feels
inconsistent and excessive.
This PR removes the `Expr` postfix from `ExprNamedExpr`, `ExprIfExpr`,
and `ExprGeneratorExpr`.
Fixes #6611
## Summary
This lint rule spots comments that are _intended_ to suppress or enable
the formatter, but will be ignored by the Ruff formatter.
We borrow some functions the formatter uses for determining comment
placement / putting them in context within an AST.
The analysis function uses an AST visitor to visit each comment and
attach it to the AST. It then uses that context to check:
1. Is this comment in an expression?
2. Does this comment have bad placement? (e.g. a `# fmt: skip` above a
function instead of at the end of a line)
3. Is this comment redundant?
4. Does this comment actually suppress any code?
5. Does this comment have ambiguous placement? (e.g. a `# fmt: off`
above an `else:` block)
If any of these are true, a violation is reported. The reason given
depends on the order of the above checklist: in other words, a `# fmt:
skip` comment on its own line within a list expression will be reported
as being in an expression, since that reason takes priority.
The lint suggests removing the comment as an unsafe fix, regardless of
the reason.
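A few hypothetical examples of comments the rule would flag, keyed to the
checklist above:
```python
x = [
    1, 2,
    # fmt: skip  <- (1) inside an expression, so the formatter ignores it
]

# fmt: skip  <- (2) bad placement: above a function instead of trailing a line
def f():
    pass

if x:
    pass
# fmt: off  <- (5) ambiguous placement above an `else:` block
else:
    pass
```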
## Test Plan
A snapshot test has been created.
## Summary
_This is a preview-only feature, available via the `--preview`
command-line flag._
With the implementation of [PEP 701] in Python 3.12, f-strings can now
be broken into multiple lines, can contain comments, and can re-use the
same quote character. Currently, no other Python formatter formats
f-strings, so some discussion needs to happen to define the style used
for f-string formatting. Relevant discussion:
https://github.com/astral-sh/ruff/discussions/9785
The goal for this PR is to add minimal support for f-string formatting.
This would be to format the expressions within the replacement fields
without introducing any major style changes.
### Newlines
The heuristic for adding newlines is similar to that of
[Prettier](https://prettier.io/docs/en/next/rationale.html#template-literals)
where the formatter would only split an expression in the replacement
field across multiple lines if there was already a line break within the
replacement field.
In other words, the formatter would not add any newlines unless they
were already present i.e., they were added by the user. This makes
breaking any expression inside an f-string optional and in control of
the user. For example,
```python
# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
```
If there are comments in any of the replacement fields of the f-string,
then it will always be a multi-line f-string, in which case the formatter
would prefer to break expressions, i.e., introduce newlines. For example,
```python
x = f"{ # comment
a }"
```
### Quotes
The logic for formatting quotes remains unchanged. The existing logic is
used to determine the necessary quote char and is used accordingly.
Now, if the expression inside an f-string is itself a string, then
we need to make sure to preserve the existing quote and not change it to
the preferred quote unless the target version is 3.12 or later. For example,
```python
f"outer {'inner'} outer"
# For pre 3.12, preserve the single quote
f"outer {'inner'} outer"
# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"
```
But, for triple-quoted strings, we can re-use the same quote char unless
the inner string is itself a triple-quoted string.
```python
f"""outer {"inner"} outer""" # valid
f"""outer {'''inner'''} outer""" # preserve the single quote char for the inner string
```
### Debug expressions
If debug expressions are present in the replacement field of an f-string,
then the whitespace needs to be preserved, as it will be rendered
as-is (for example, `f"{ x = }"`). If there are any nested f-strings, then
the whitespace in them needs to be preserved as well which means that
we'll stop formatting the f-string as soon as we encounter a debug
expression.
```python
f"outer { x = !s :.3f}"
# ^^
# We can remove these whitespaces
```
Now, the whitespace doesn't need to be preserved around the conversion spec
and format specifier, so we'll format them as usual, but we won't
format any nested f-string within the format specifier.
### Miscellaneous
- The
[`hug_parens_with_braces_and_square_brackets`](https://github.com/astral-sh/ruff/issues/8279)
preview style isn't implemented w.r.t. the f-string curly braces.
- The
[indentation](https://github.com/astral-sh/ruff/discussions/9785#discussioncomment-8470590)
is always relative to the f-string containing statement
## Test Plan
* Add new test cases
* Review existing snapshot changes
* Review the ecosystem changes
[PEP 701]: https://peps.python.org/pep-0701/
## Summary
This PR is a small refactor to extract the string-normalization logic
in the formatter out of the `StringPart` struct. It also separates
the quote selection into a separate method on the new
`StringNormalizer`. Both of these will help in the f-string formatting
to use `StringPart` and `choose_quotes` irrespective of normalization.
The reason for having separate quote selection and normalization steps is
so that the f-string formatting can perform quote selection on its own.
Unlike string and byte literals, the f-string formatting would require
that the normalization happens only for the literal elements of it i.e.,
the "foo" and "bar" in `f"foo {x + y} bar"`. This will automatically be
handled by the already separate `normalize_string` function.
Another use-case in the f-string formatting is to extract out the
relevant information from the `StringPart` like quotes and prefix which
is to be passed as context while formatting each element of an f-string.
## Test Plan
Ensure that clippy is happy and all tests pass.
## Summary
This PR reduces the size of `Expr` from 80 to 64 bytes, by reducing the
sizes of...
- `ExprCall` from 72 to 56 bytes, by using boxed slices for `Arguments`.
- `ExprCompare` from 64 to 48 bytes, by using boxed slices for its
various vectors.
In testing, the parser gets a bit faster, and the linter benchmarks
improve quite a bit.
## Summary
This PR adds the `AnyNode` and `AnyNodeRef` implementation for
`FStringFormatSpec` node which will be required in the f-string
formatting.
The main usage for this is so that we can pass the node directly to
`suppressed_node` in case a debug expression is used, to format it as
verbatim text.
## Summary
This PR implements the `blank_line_after_nested_stub_class` preview
style in the formatter.
The logic is divided into 3 parts:
1. In between preceding and following nodes at top level and nested
suite
2. When there's a trailing comment after the class
3. When there is no following node from (1), which is the case when it's
the last or the only node in a suite
We handle (3) with `FormatLeadingAlternateBranchComments`.
## Test Plan
- Add new test cases and update existing snapshots
- Checked the `typeshed` diff
fixes: #8891
## Summary
If you paste in the TOML for our default configuration (from the docs),
it's rejected by our JSON Schema:

It seems like the issue is with:
```toml
# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"
```
Specifically, since that value uses a custom Serde implementation, I
guess Schemars bails out? This PR adds a custom representation to allow
`"dynamic"` (but no other strings):

This seems like it should work but I don't have a great way to test it.
Closes https://github.com/astral-sh/ruff/issues/9630.
## Summary
Fixes #8334.
`Display` has been implemented for `ruff_workspace::Settings`, which
gives a much nicer and more readable output to `--show-settings`.
Internally, a `display_settings` utility macro has been implemented to
reduce the boilerplate of the display code.
### Work to be done
- [x] A lot of the formatting for `Vec<_>` and `HashSet<_>` types has been
stubbed out, using `Debug` as a fallback. There should be a way to add
generic formatting support for these types as a modifier in
`display_settings`.
- [x] Several complex types were also stubbed out and need proper
`Display` implementations rather than falling back on `Debug`.
- [x] An open question needs to be answered: how important is it that
the output be valid TOML? Some types in settings, such as a hash-map
from a glob pattern to a multi-variant enum, will be hard to rework into
valid _and_ readable TOML.
- [x] Tests need to be implemented.
## Test Plan
Tests consist of a snapshot test for the default `--show-settings`
output and a doctest for `display_settings!`.
## Summary
This is similar to https://github.com/astral-sh/ruff/pull/8876, but more
limited in scope:
1. It only applies to `# fmt: skip` (like Black). Like `# isort: on`, `#
fmt: on` needs to be on its own line (still).
2. It only delimits on `#`, so you can do `# fmt: skip # noqa`, but not
`# fmt: skip - some other content` or `# fmt: skip; noqa`.
If we want to support the `;`-delimited version, we should revisit
later, since we don't support that in the linter (so `# fmt: skip; noqa`
wouldn't register a `noqa`).
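Concretely:
```python
x = [1,  2]  # fmt: skip # noqa        <- recognized: `#` delimits the comment
x = [1,  2]  # fmt: skip; noqa         <- not recognized as a suppression
x = [1,  2]  # fmt: skip - extra text  <- not recognized either
```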
Closes https://github.com/astral-sh/ruff/issues/8892.
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR instead passes the path directly in those cases.
## Summary
This helps a bit with (but does not close) the issues described in
https://github.com/astral-sh/ruff/issues/9311. E.g., now, we at least
see: `error: Failed to format main.py: source contains syntax errors:
invalid syntax. Got unexpected token '=' at byte offset 20`.
## Summary
Add new test cases for `with_item` and `match` sequence that demonstrate how long headers break.
Removes one use of `optional_parentheses` in a position where it is known that the parentheses always need to be added.
## Test Plan
`cargo test`
## Summary
Add some more documentation to `can_omit_optional_parentheses` because it is really hard to understand.
Restrict the `Attribute` and `None` `OperatorPrecedence` branches to ensure they only get applied to the intended nodes.
## Test Plan
Ecosystem check reports no differences. The compatibility index remains unchanged.
This PR splits the string formatting code in the formatter to be handled
by the respective nodes.
Previously, the string formatting was done through a single
`FormatString` interface. Now, the nodes themselves are responsible for
formatting.
The following changes were made:
1. Remove `StringLayout::ImplicitStringConcatenationInBinaryLike` and
inline the call to `FormatStringContinuation`. After the refactor, the
binary like formatting would delegate to `FormatString` which would then
delegate to `FormatStringContinuation`. This removes the intermediary
steps.
2. Add formatter implementation for `FStringPart` which delegates it to
the respective string literal or f-string node.
3. Add `ExprStringLiteralKind` which is either `String` or `Docstring`.
If it's a docstring variant, then the string expression would not be
implicitly concatenated. This is guaranteed by the
`DocstringStmt::try_from_expression` constructor.
4. Add `StringLiteralKind` which is either a `String`, `Docstring` or
`InImplicitlyConcatenatedFString`. The last variant is for when the
string literal is implicitly concatenated with an f-string (`"foo" f"bar
{x}"`).
5. Remove `FormatString`.
6. Extract the f-string quote detection as a standalone function which
is public to the crate. This is used to detect the quote to be used for
an f-string at the expression level (`ExprFString` or
`FormatStringContinuation`).
### Formatter ecosystem result
**This PR**
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |
**main**
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |
This fixes a bug where the current indent level was not calculated
correctly for doctests. Namely, it didn't account for the extra indent
level (in terms of ASCII spaces) used by the PS1 (`>>> `) and PS2
(`... `) prompts. As a result, lines could extend up to 4 spaces beyond
the configured line length limit.
We fix that by passing the `CodeExampleKind` to the `format` routine
instead of just the code itself. In this way, `format` can query whether
there will be any extra indent added _after_ formatting the code and
take that into account for its line length setting.
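Schematically (`some_call` is a made-up name), the width that counts against
the limit now includes the prompt:
```python
def example():
    """
    >>> some_call(a, b)  # rendered width = len(">>> ") + indent + code
    """
```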
We add a few regression tests, taken directly from @stinodego's
examples.
Fixes #9126
This PR does the plumbing to make a new formatting option,
`docstring-code-format`, available in the configuration for end users.
It is disabled by default (opt-in). It is opt-in at least initially to
reflect a conservative posture. The intent is to make it opt-out at some
point in the future.
This was split out from #8811 in order to make #8811 easier to merge.
Namely, once this is merged, docstring code snippet formatting will
become available to end users. (See comments below for how we arrived at
the name.)
Closes #7146
## Test Plan
Other than the standard test suite, I ran the formatter over the CPython
and polars projects to ensure both that the result looked sensible and
that tests still passed. At time of writing, one issue that currently
appears is that reformatting code snippets trips the long line lint:
https://github.com/BurntSushi/polars/actions/runs/7006619426/job/19058868021
This PR adds an `as_slice` method to all the string nodes which returns
all the parts of the node as a slice. This will be useful in the next
PR to split the string formatting to use this method to extract the
_single node_ or _implicitly concatenated nodes_.
## Summary
This PR changes the internal `docstring-code-line-width` setting to
additionally accept a string value `dynamic`. When `dynamic` is set, the
line width is dynamically adjusted when reformatting code snippets in
docstrings based on the indent level of the docstring. The result is
that the reformatted lines from the code snippet should not exceed the
"global" line width configuration for the surrounding source.
This PR does not change the default behavior, although I suspect the
default should probably be `dynamic`.
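As a rough illustration of the arithmetic (the numbers are hypothetical):
```python
global_line_width = 88   # the "global" line-length setting
docstring_indent = 8     # columns of indentation before the code snippet
snippet_width = global_line_width - docstring_indent
assert snippet_width == 80  # the width used to reformat the snippet
```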
## Test Plan
I added a new configuration to the existing docstring code tests and
also added a new set of tests dedicated to the new `dynamic` mode.
## Summary
This does the light plumbing necessary to add a new internal option that
permits setting the line width of code examples in docstrings. The plan
is to add the corresponding user facing knob in #8854.
Note that this effectively removes the `same-as-global` configuration
style discussed [in this
comment](https://github.com/astral-sh/ruff/issues/8855#issuecomment-1847230440).
It replaces it with the `{integer}` configuration style only.
There are a lot of commits here, but they are each tiny to make review
easier because of the changes to snapshots.
## Test Plan
I added a new docstring test configuration that sets
`docstring-code-line-width = 60` and examined the differences.
## Summary
In https://github.com/astral-sh/ruff/pull/8921, we changed our parameter
formatting behavior to add a trailing comma whenever a single-argument
function breaks. This introduced a deviation in the case that a function
contains a single argument, but _also_ includes a positional-only or
keyword-only separator.
Closes https://github.com/astral-sh/ruff/issues/9074.
(This is not possible to actually use until
https://github.com/astral-sh/ruff/pull/8854 is merged.)
This commit slots in support for formatting Markdown fenced code
blocks[1]. With the refactoring done for reStructuredText previously,
this ended up being pretty easy to add. Markdown code blocks are also
quite a bit easier to parse and recognize correctly.
One point of contention in #8860 is whether to assume that unlabeled
Markdown code fences are Python or not by default. In this PR, we make
such an assumption. This follows what `rustdoc` does. The mitigation
here is that if an unlabeled code block isn't Python, then it probably
won't parse as Python. And we'll end up skipping it. So in the vast
majority of cases, the worst thing that can happen is a little bit of
wasted work.
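For example, an unlabeled fence inside a docstring is treated as Python and
reformatted (illustrative snippet; a non-Python block would simply fail to
parse and be skipped):
````python
def example():
    """
    An unlabeled fence is assumed to be Python:

    ```
    foo( 1 )
    ```
    """
````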
Closes #8860
[1]: https://spec.commonmark.org/0.30/#fenced-code-blocks
Rebase of #6365 authored by @davidszotten.
## Summary
This PR updates the AST structure for f-string elements.
The main **motivation** behind this change is to have a dedicated node
for the string part of an f-string. Previously, the existing
`ExprStringLiteral` node was used for this purpose which isn't exactly
correct. The `ExprStringLiteral` node should include the quotes as well
in the range but the f-string literal element doesn't include the quote
as it's a specific part within an f-string. For example,
```python
f"foo {x}"
# ^^^^
# This is the literal part of an f-string
```
The newly introduced `FStringElement` enum is helpful as it represents
either the literal part or the expression part of an f-string.
### Rule Updates
This means that there'll be two nodes representing a string depending on
the context. One for a normal string literal while the other is a string
literal within an f-string. The AST checker is updated to accommodate
this change. The rules which work on string literal are updated to check
on the literal part of f-string as well.
#### Notes
1. The `Expr::is_literal_expr` method would check for
`ExprStringLiteral` and return true if so. But now that we don't
represent the literal part of an f-string using that node, this improves
the method's behavior and confines it to actual expressions. We do have
the `FStringElement::is_literal` method.
2. We avoid checking if we're in an f-string context before adding to
`string_type_definitions` because the f-string literal is now a
dedicated node and not part of `Expr`.
3. Annotations cannot use f-strings, so we avoid changing any rules which
work on annotations and check for `ExprStringLiteral`.
## Test Plan
- All references of `Expr::StringLiteral` were checked to see if any of
the rules require updating to account for the f-string literal element
node.
- New test cases are added for rules which check against the literal
part of an f-string.
- Check the ecosystem results and ensure it remains unchanged.
## Performance
There's a performance penalty in the parser. The reason for this remains
unknown as it seems that the generated assembly code is now different
for the `__reduce154` function. The reduce function body is just popping
the `ParenthesizedExpr` on top of the stack and pushing it with the new
location.
- The size of `FStringElement` enum is the same as `Expr` which is what
it replaces in `FString::format_spec`
- The size of `FStringExpressionElement` is the same as
`ExprFormattedValue` which is what it replaces
I tried reducing the `Expr` enum from 80 bytes to 72 bytes but it hardly
resulted in any performance gain. The difference can be seen here:
- Original profile: https://share.firefox.dev/3Taa7ES
- Profile after boxing some node fields:
https://share.firefox.dev/3GsNXpD
### Backtracking
I tried backtracking the changes to see if any of the isolated change
produced this regression. The problem here is that the overall change is
so small that there's only a single checkpoint where I can backtrack and
that checkpoint results in the same regression. This checkpoint is to
revert using `Expr` to the `FString::format_spec` field. After this
point, the change would revert back to the original implementation.
## Review process
The review process is similar to #7927. The first set of commits update
the node structure, parser, and related AST files. Then, further commits
update the linter and formatter part to account for the AST change.
---------
Co-authored-by: David Szotten <davidszotten@gmail.com>
(This is not possible to actually use until
https://github.com/astral-sh/ruff/pull/8854 is merged.)
ruff_python_formatter: add reStructuredText docstring formatting support
This commit makes use of the refactoring done in prior commits to slot
in reStructuredText support. Essentially, we add a new type of code
example and look for *both* literal blocks and code block directives.
Literal blocks are treated as Python by default because it seems to be a
[common
practice](https://github.com/adamchainz/blacken-docs/issues/195).
That is, literal blocks like this:
```
def example():
    """
    Here's an example::

        foo( 1 )

    All done.
    """
    pass
```
Will get reformatted. And code blocks (via reStructuredText directives)
will also get reformatted:
```
def example():
    """
    Here's an example:

    .. code-block:: python

        foo( 1 )

    All done.
    """
    pass
```
When looking for a code block, it is possible for it to become invalid.
In which case, we back out of looking for a code example and print the
lines out as they are. As with doctest formatting, if reformatting the
code would result in invalid Python or if the code collected from the
block is invalid, then formatting is also skipped.
A number of tests have been added to check both the formatting and
resetting behavior. Mixed indentation is also tested a fair bit, since
one of my initial attempts at dealing with mixed indentation ended up
not working.
I recommend working through this PR commit-by-commit. There is in
particular a somewhat gnarly refactoring before reST support is added.
Closes #8859
## Summary
We should avoid inlining the ellipsis in:
```python
def h():
    ...
    # bye
```
Just as we omit the ellipsis in:
```python
def h():
    # bye
    ...
```
Closes https://github.com/astral-sh/ruff/issues/8905.
In the course of working on #8859, I made a number of smallish refactors
to how code snippet formatting works. Most or all of these were
motivated by writing in support for reStructuredText blocks. They have
some fundamentally different requirements than doctests, and there are a
lot more ways for reStructuredText blocks to become invalid.
(Commit-by-commit review is recommended as the commit messages provide
further context on each change. I split this off from ongoing work to
make review more manageable.)
This PR removes several uses of `unsafe`. I generally limited myself to
low hanging fruit that I could see. There are still a few remaining uses
of `unsafe` that looked a bit more difficult to remove (if possible at
all). But this gets rid of a good chunk of them.
I put each `unsafe` removal into its own commit with a justification for
why I did it. So I would encourage reviewing this PR commit-by-commit.
That way, we can legislate them independently. It's no problem to drop a
commit if we feel the `unsafe` should stay in that case.
This turns `string` into a parent module with a `docstring` sub-module.
I arranged things this way because there are parts of the `string`
module that the `docstring` module wants to know about (such as a
`NormalizedString`). The alternative I think would be to make
`docstring` a sibling module and expose more of `string`'s internals.
I think I overall like this change because it gives docstring handling a
bit more room to breathe. It has grown quite a bit with the addition of
code snippet formatting.
[This was suggested by
@charliermarsh.](https://github.com/astral-sh/ruff/pull/8811#discussion_r1401169531)
## Summary
This PR adds opt-in support for formatting doctests in docstrings. This
reflects initial support and it is intended to add support for Markdown
and reStructuredText Python code blocks in the future. But I believe
this PR lays the groundwork, and future additions for Markdown and reST
should be less costly to add.
It's strongly recommended to review this PR commit-by-commit. The last
few commits in particular implement the bulk of the work here and
represent the denser portions.
Some things worth mentioning:
* The formatter is itself not perfect, and it is possible for it to
produce invalid Python code. Because of this, reformatted code snippets
are checked for Python validity. If they aren't valid, then we
(unfortunately silently) bail on formatting that code snippet.
* There are a couple places where it would be nice to at least warn the
user that doctest formatting failed, but it wasn't clear to me what the
best way to do that is.
* I haven't yet run this in anger on a real world code base. I think
that should happen before merging.
Closes #7146
## Test Plan
* [x] Pass the local test suite.
* [x] Scrutinize ecosystem changes.
* [x] Run this formatter on extant code and scrutinize the results.
(e.g., CPython, numpy.)
A follow-up to auto-generate the `FormatNodeRule` implementation for the
string part nodes. This is just a dummy implementation that is
unreachable because it's handled by the parent nodes.
## Summary
This PR updates the string nodes (`ExprStringLiteral`,
`ExprBytesLiteral`, and `ExprFString`) to account for implicit string
concatenation.
### Motivation
In Python, implicitly concatenated strings are joined during parsing
because the interpreter doesn't require the information for each part.
While that's feasible for an interpreter, it falls short for a static
analysis tool where having such information is more useful. Currently,
various parts of the code use the lexer to get the individual string
parts.
One of the main challenges this solves is that of string formatting.
Currently, the formatter relies on the lexer to get the individual
string parts, and formats them, including the comments, accordingly. But
with PEP 701, f-strings can also contain comments. Without this change,
it becomes very difficult to add support for f-string formatting.
### Implementation
The initial proposal was made in this discussion:
https://github.com/astral-sh/ruff/discussions/6183#discussioncomment-6591993.
There were various AST designs which were explored for this task which
are available in the linked internal document[^1].
The selected variant was the one where the nodes were kept as they are,
except that the `implicit_concatenated` field was removed and a
new struct was added to the `Expr*` struct instead. This private
struct contains the actual implementation of how the AST is
designed for both single and implicitly concatenated strings.
This implementation is achieved through an enum with two variants:
`Single` and `Concatenated` to avoid allocating a vector even for single
strings. There are various public methods available on the value struct
to query certain information regarding the node.
The nodes are structured in the following way:
```
ExprStringLiteral - "foo" "bar"
|- StringLiteral - "foo"
|- StringLiteral - "bar"
ExprBytesLiteral - b"foo" b"bar"
|- BytesLiteral - b"foo"
|- BytesLiteral - b"bar"
ExprFString - "foo" f"bar {x}"
|- FStringPart::Literal - "foo"
|- FStringPart::FString - f"bar {x}"
   |- StringLiteral - "bar "
   |- FormattedValue - "x"
```
[^1]: Internal document:
https://www.notion.so/astral-sh/Implicit-String-Concatenation-e036345dc48943f89e416c087bf6f6d9?pvs=4
#### Visitor
The way the nodes are structured is that the entire string, including
all the parts that are implicitly concatenated, is a single node
containing individual nodes for the parts. The previous section has a
representation of that tree for all the string nodes. This means that
new visitor methods are added to visit the individual parts of string,
bytes, and f-strings for `Visitor`, `PreorderVisitor`, and
`Transformer`.
## Test Plan
- `cargo insta test --workspace --all-features --unreferenced reject`
- Verify that the ecosystem results are unchanged
Fix an instability where `await` was followed by a breaking fluent-style
expression:
```python
test_data = await (
    Stream.from_async(async_data)
    .flat_map_async()
    .map()
    .filter_async(is_valid_data)
    .to_list()
)
```
Note that this is technically a minor style change (see the ecosystem check).
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.
The update itself introduced a new clippy lint about superfluous hashes
in raw strings, which got removed.
I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to `unsafe_code = "warn"` instead of
"forbid" because the latter now actually bans unsafe code:
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> crates/ruff_source_file/src/newlines.rs:62:17
|
62 | #[allow(unsafe_code)]
| ^^^^^^^^^^^ overruled by previous forbid
|
= note: `forbid` lint level was set on command line
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Testing the compatibility with the future stable black style, I realized
the `ruff_python_formatter` dev main was lacking the
`--skip-magic-trailing-comma` option. This does not affect `ruff
format`.
Usage:
```shell
cargo run --bin ruff_python_formatter -p ruff_python_formatter -- --skip-magic-trailing-comma --emit stdout scratch.py
```
It seems as though using `include!(...)` to avoid the source code copy
breaks rust-analyzer. Namely, it treats the included file as unlinked,
and so any part of analysis (e.g., goto-definition) that needs that file
to reason about the code ends up failing.
## Summary
This PR updates the formatter to preserve trailing semicolon for Jupyter
Notebooks.
The motivation behind the change is that semicolons in notebooks are
typically used to hide the output, for example when plotting. This is
highlighted in the linked issue.
The conditions under which the trailing semicolon is preserved are:
1. It should be a top-level statement which is last in the module.
2. For statements, it can be an assignment, annotated assignment, or
augmented assignment where the target is a single
identifier, i.e., multiple assignments or tuple unpacking aren't
considered.
3. For expressions, it can be anything (see the example below).
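For example (hypothetical cell), the trailing semicolon commonly used to
hide plot output is now kept:
```python
import matplotlib.pyplot as plt

# An expression statement that is the last top-level statement in the cell:
# the trailing semicolon (which suppresses the output) is preserved.
plt.plot([1, 2, 3]);
```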
## Test Plan
Add a new integration test in `ruff_cli`. The test notebook basically
acts as a document as to which trailing semicolons are to be preserved.
fixes: #8254
## Summary
This PR fixes a bug in our formatter where a multiline lambda expression
statement was formatted over multiple lines without adding parentheses.
The PR "fixes" the problem by not splitting the lambda parameters if it
is not parenthesized
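For example (hypothetical input), the formatter previously split this
over-long, unparenthesized lambda's parameters across lines, producing
invalid syntax; they now stay on one line:
```python
handler = lambda first_argument, second_argument, third_argument, fourth_arg: None
```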
## Test Plan
Added test
**Summary** Previously, an own-line comment following a docstring, with
newline(s) before the first content statement, was treated as trailing
on the docstring, and we didn't insert a newline after the docstring as
black would.
Before:
```python
class ModuleBrowser:
    """Browse module classes and functions in IDLE."""
    # This class is also the base class for pathbrowser.PathBrowser.
    def __init__(self, master, path, *, _htest=False, _utest=False):
        pass
```
After:
```python
class ModuleBrowser:
    """Browse module classes and functions in IDLE."""

    # This class is also the base class for pathbrowser.PathBrowser.
    def __init__(self, master, path, *, _htest=False, _utest=False):
        pass
```
I'm not entirely happy about hijacking
`handle_own_line_comment_between_statements`, but I don't know a better
spot to put it.
Fixes #7948
**Test Plan** Fixtures
We previously incorrectly treated byte strings in docstring position as
docstrings because black does so
(https://github.com/astral-sh/ruff/pull/8283#discussion_r1375682931,
https://github.com/psf/black/issues/4002), even though CPython doesn't
recognize them:
```console
$ python3.12
Python 3.12.0 (main, Oct 6 2023, 17:57:44) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
...     b""" a"""
...
>>> print(str(f.__doc__))
None
```
## Summary
This PR inlines the formatting logic for `ExprNumberLiteral` and removes
the need for a dedicated `Format*` struct for each number type.
## Test Plan
`cargo test`
## Summary
This PR splits the `Constant` enum as individual literal nodes. It
introduces the following new nodes for each variant:
* `ExprStringLiteral`
* `ExprBytesLiteral`
* `ExprNumberLiteral`
* `ExprBooleanLiteral`
* `ExprNoneLiteral`
* `ExprEllipsisLiteral`
The main motivation behind this refactor is to introduce the new AST
node for implicit string concatenation in the coming PR. The elements of
that node will be either a string literal, a bytes literal, or an f-string,
which can be implemented using an enum. This means that a string or
bytes literal cannot be represented by `Constant::Str` /
`Constant::Bytes` which creates an inconsistency.
This PR avoids that inconsistency by splitting the constant nodes into
their own literal nodes, "literal" being the more appropriate naming
convention from a static analysis tool perspective.
This also makes working with literals in the linter and formatter much
more ergonomic: for example, checking whether an expression is a string
literal can be done easily using `Expr::is_string_literal_expr` or by
matching against `Expr::StringLiteral`, as opposed to matching against
`ExprConstant` and the `Constant` enum. A few AST helper methods can be
simplified as well, which will be done in a follow-up PR.
This introduces a new `Expr::is_literal_expr` method which is the same
as `Expr::is_constant_expr`. There are also some intermediary changes
related to implicit string concatenation, kept minimal here to avoid
making an already huge PR even bigger.
## Test Plan
1. Verify and update all of the existing snapshots (parser, visitor)
2. Verify that the ecosystem check output remains **unchanged** for both
the linter and formatter
### Formatter ecosystem check
#### `main`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
#### `dhruv/constant-to-literal`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
This PR adds a new `Singleton` enum for the `PatternMatchSingleton`
node.
Earlier, the node used the `Constant` enum, but the value for this
pattern can only be `None`, `True`, or `False`. With the upcoming PR to
remove `Constant`, this node required a new type to fill in.
This also has the benefit of narrowing the type down to only the
possible values for the node, as evidenced by the removal of an
`unreachable` arm.
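A sketch of the idea with simplified types (not ruff's actual definitions): once the pattern's value is its own three-variant enum, a match over it is exhaustive without an `unreachable` arm for the impossible `Constant` variants.
```rust
// Simplified stand-in for the new node value.
enum Singleton {
    None,
    True,
    False,
}

fn display(singleton: &Singleton) -> &'static str {
    // Exhaustive over exactly the legal values. With the old Constant
    // enum, this match needed an `unreachable` arm for Str, Bytes, etc.
    match singleton {
        Singleton::None => "None",
        Singleton::True => "True",
        Singleton::False => "False",
    }
}

fn main() {
    assert_eq!(display(&Singleton::True), "True");
}
```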
## Test Plan
Update the AST snapshots and run `cargo test`.
Change
```python
"""Test docstring"""
a = 1
```
to
```python
"""Test docstring"""

a = 1
```
in preview style, but don't touch the docstring otherwise.
Do we want to ask black to also format the content of module-level
docstrings? It seems inconsistent to me that we change function and class
docstring indentation/contents but not module docstrings.
Fixes https://github.com/astral-sh/ruff/issues/7995
**Summary** Prepare for the black preview style becoming the black
stable style at the end of the year.
This adds a new test file to compare stable and preview on some relevant
preview options in black, and makes `format_dev` understand the black
preview flag. I've added poetry as a project that uses preview.
I've implemented one specific deviation (collapsing stub
implementations in non-stub files), which showed up in poetry, for
testing. This also improves poetry compatibility from 0.99891 to 0.99919.
Fixes #7440
New compatibility stats:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 35 |
| home-assistant | 0.99953 | 10596 | 189 |
| poetry | 0.99919 | 317 | 12 |
| transformers | 0.99963 | 2657 | 332 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99969 | 654 | 15 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
Given:
```python
# comment

class A:
    def foo(self):
        pass
```
We need to insert an additional newline between `# comment` and `class
A`. We were missing this handling for the case in which `# comment` is a
leading comment on `class A`, as opposed to a trailing comment of some
preceding statement.
In practice, I think this only applies to the specific case in which a
class or function is the first statement in a module, and there's a
single empty line between a leading comment and that class or function.
If there are no empty lines, then the comment "sticks" to the
definition; if there are two or more, then `leading_comments` will
truncate appropriately. If the class or function is nested, then we only
need one empty line anyway.
Closes https://github.com/astral-sh/ruff/issues/8215.
## Test Plan
No change in similarity.
Before:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
After:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1648 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
This PR removes the `todo!()` around `IpyEscapeCommand` in the
formatter.
The `NeedsParentheses` trait needs to be implemented to always return
`Never`. The reason is that if an escape command were parenthesized, it
wouldn't be parsed as an escape command; in other words, parentheses
shouldn't be present around an escape command.
Similarly, the `CanSkipOptionalParenthesesVisitor` will skip this node.
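Roughly what that implementation looks like, with stand-in versions of the trait and types (the real signatures in ruff differ):
```rust
// Stand-in types for illustration; the real formatter types differ.
enum OptionalParentheses {
    #[allow(dead_code)]
    Multiline,
    Never,
}

struct ExprIpyEscapeCommand;

trait NeedsParentheses {
    fn needs_parentheses(&self) -> OptionalParentheses;
}

impl NeedsParentheses for ExprIpyEscapeCommand {
    fn needs_parentheses(&self) -> OptionalParentheses {
        // A parenthesized escape command wouldn't have been parsed as an
        // escape command in the first place, so parentheses are never needed.
        OptionalParentheses::Never
    }
}

fn main() {
    let command = ExprIpyEscapeCommand;
    assert!(matches!(command.needs_parentheses(), OptionalParentheses::Never));
}
```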
## Test Plan
Updated the `unformatted.ipynb` fixture with new cells containing
IPython escape commands and verified the corresponding snapshot. Also
tested it out in a few open source repositories containing notebooks
(`openai/openai-cookbook`, `huggingface/notebooks`).
#### New cells in `unformatted.ipynb`
**Cell 2**
```markdown
A markdown cell
```
**Cell 3**
```python
def some_function(foo, bar):
    pass
%matplotlib inline
```
**Cell 4**
```python
foo = %pwd
def some_function(foo,bar,):
    foo = %pwd
    print(foo
    )
```
fixes: #8204
## Summary
This PR fixes a bug where the format command would fail if a notebook
contained IPython syntax. This was because the correct parser mode was
not being used in the formatter code path.
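A minimal sketch of the fix's shape, using hypothetical helper names rather than ruff's real API: the parse mode has to be derived from the source type so that notebook cells are parsed with IPython syntax enabled.
```rust
// Hypothetical stand-in types for illustration; ruff's real API differs.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Mode {
    Module,
    Ipython,
}

#[derive(Clone, Copy)]
enum SourceType {
    Python,
    Ipynb,
}

// The bug: the formatter path always parsed with the plain module mode,
// so `%magic` lines in notebooks were syntax errors. The fix is to pick
// the mode based on the source type.
fn parse_mode(source_type: SourceType) -> Mode {
    match source_type {
        SourceType::Python => Mode::Module,
        SourceType::Ipynb => Mode::Ipython,
    }
}

fn main() {
    assert_eq!(parse_mode(SourceType::Ipynb), Mode::Ipython);
}
```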
## Test Plan
This PR isn't the only requirement for Notebook formatting to start
working with IPython escape commands. The following PR in the stack is
required as well.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/7448
Fixes https://github.com/astral-sh/ruff/issues/7892
I've removed automatic dangling comment formatting; we do manual
dangling comment formatting everywhere anyway (the
assert-all-comments-formatted check ensures this), and automatic
handling would break the formatting there.
## Test Plan
New test file.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Given an expression like `[x for (x) in y]`, we weren't skipping over
parentheses when searching for the `in` between `(x)` and `y`.
Closes https://github.com/astral-sh/ruff/issues/8053.
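One way to picture the fix, as a simplified token scan rather than ruff's actual implementation: track the parenthesis depth and only accept an `in` at depth zero.
```rust
// Simplified token stream; the real search operates on lexed tokens
// with source ranges.
enum Tok {
    LParen,
    RParen,
    In,
    Name,
}

/// Find the index of the `in` belonging to the comprehension, skipping
/// anything nested inside parentheses (e.g. the target `(x)` in
/// `[x for (x) in y]`).
fn find_comprehension_in(tokens: &[Tok]) -> Option<usize> {
    let mut depth = 0usize;
    for (index, token) in tokens.iter().enumerate() {
        match token {
            Tok::LParen => depth += 1,
            Tok::RParen => depth = depth.saturating_sub(1),
            Tok::In if depth == 0 => return Some(index),
            _ => {}
        }
    }
    None
}

fn main() {
    // `(x) in y`: the comprehension's `in` is at index 3, past the
    // closing parenthesis of the target.
    let tokens = [Tok::LParen, Tok::Name, Tok::RParen, Tok::In, Tok::Name];
    assert_eq!(find_comprehension_in(&tokens), Some(3));
}
```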
**Summary** Insert a newline after nested function and class
definitions, unless there is a trailing own line comment.
We need to e.g. format
```python
if platform.system() == "Linux":
    if sys.version_info > (3, 10):
        def f():
            print("old")
    else:
        def f():
            print("new")
    f()
```
as
```python
if platform.system() == "Linux":
    if sys.version_info > (3, 10):
        def f():
            print("old")
    else:
        def f():
            print("new")

    f()
```
even though `f()` is directly preceded by an if statement, not a
function or class definition. See the comments and fixtures for trailing
own line comment handling.
**Test Plan** I checked that the new content of `newlines.py` matches
black's formatting.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>