## Summary
This PR updates the lexer tests to use the snapshot testing framework.
It also
makes the following changes:
* Remove the use of macros in the lexer tests
* Use `test_case` for EOL tests
## Test Plan
```
cargo test --package ruff_python_parser --lib --all-features -- lexer::tests --nocapture
```
## Summary
This PR adds a utility for transforming expressions via LibCST that
automatically wraps the expression in parentheses, applies a
user-provided transformation, then strips the parentheses from the
generated code. LibCST can't parse arbitrary expression ranges, since
some expressions may require parenthesization in order to be parsed
properly. For example:
```python
option = (
    '{name}={value}'
    .format(nam=name, value=value)
)
```
In this case, the expression range is:
```python
'{name}={value}'
.format(nam=name, value=value)
```
Which isn't valid on its own. So, instead, we add "fake" parentheses
around the expression.
We were already doing this in a few places, so this is mostly
formalizing and DRYing up that pattern.
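Schematically, the utility looks something like this (a minimal sketch with
illustrative names and string-based errors, not the actual ruff API):
```rust
/// Apply a LibCST-based transform to an expression range that may not parse
/// on its own: wrap the source in parentheses, run the caller's transform,
/// then strip the parentheses back off of the generated code.
fn transform_expression(
    source: &str,
    transform: impl FnOnce(&str) -> Result<String, String>,
) -> Result<String, String> {
    // Wrap in "fake" parentheses so that the range parses as an expression.
    let wrapped = format!("({source})");
    // Run the user-provided transformation on the wrapped code.
    let transformed = transform(&wrapped)?;
    // Strip the parentheses we added from the generated code.
    transformed
        .strip_prefix('(')
        .and_then(|code| code.strip_suffix(')'))
        .map(ToString::to_string)
        .ok_or_else(|| "expected parenthesized output".to_string())
}
```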
Closes https://github.com/astral-sh/ruff/issues/6720.
## Summary
For imports, we enforce that there's _at least_ one empty line after an
import (assuming the next statement is _not_ an import), but allow up to
two at the module level.
Closes https://github.com/astral-sh/ruff/issues/6760.
## Test Plan
`cargo test`
## Summary
The isolation group for unused imports was relying on
`checker.semantic().current_statement()`, which isn't valid for that
rule, since it runs over the _scope_, not the statement. Instead, we
need to look up the isolation group based on the `NodeId` of the
statement.
Our tests didn't catch this, because we mostly have cases that look like
this:
```python
if TYPE_CHECKING:
    import shelve
    import importlib
```
In this case, the two fixes to remove the two unused imports are
considered overlapping (since we delete the _full_ line, and the two
_full_ lines touch, and we consider exactly-adjacent fixes to be
overlapping), and so they don't run in a single pass due to the
non-overlapping-fixes requirement. That is: the isolation groups aren't
required for this case. They are, however, required for cases like:
```python
if TYPE_CHECKING:
    import shelve

    import importlib
```
...where the fixes don't overlap.
Closes https://github.com/astral-sh/ruff/issues/6758.
## Test Plan
`cargo test`
`IOError` is special: it is not actually a lint but an error raised before
linting. I'm not entirely sure how to document it, since it does not
match the general lint rule pattern (`Checks that the file can be read
in its entirety.` is imho worse).
I added the two most common causes of IO errors on Unix systems (in my
experience) and linked two tutorials on how to fix them.
See https://github.com/astral-sh/ruff/issues/2646
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
I noticed this in the ecosystem CI check from
https://github.com/astral-sh/ruff/pull/6742. If we include source code
directly in a diagnostic, we need to be careful to avoid rendering
multi-line diagnostics or even excessively long diagnostics.
## Test Plan
`cargo test`
## Summary
This PR is a follow-up to the suggestion in
https://github.com/astral-sh/ruff/pull/6345#discussion_r1285470953 to
use a single stack to store all statements and expressions, rather than
using separate vectors for each, which gives us something closer to a
full-fidelity chain. (We can then generalize this concept to include all
other AST nodes too.)
This is in part made possible by the removal of the hash map from
`&Stmt` to `StatementId` (#6694), which makes it much cheaper to store
these using a single interface (since doing so no longer introduces the
requirement that we hash all expressions).
I'll follow up with some profiling, but a few notes on how the data
requirements have changed:
- We now store a `BranchId` for every expression, not just every
statement, so that's an extra `u32`.
- We now store a single `NodeId` on every snapshot, rather than separate
`StatementId` and `ExpressionId` IDs, so that's one fewer `u32` for each
snapshot.
- We're probably doing a few more lookups in general, since any calls to
`current_statement()` etc. now have to iterate up the node hierarchy
until they identify the first statement.
## Test Plan
`cargo test`
## Summary
Avoid attempting to rewrite `import matplotlib.pyplot` as `import
matplotlib.pyplot as plt`. We can't support these right now, since we
don't track references at the attribute level (like
`matplotlib.pyplot`).
Closes https://github.com/astral-sh/ruff/issues/6719.
## Summary
Given:
```python
from sys import *
exit(0)
```
We can't add `exit` to `from sys import *`, so we should just ignore it.
Ideally, we'd just resolve `exit` in the first place (since it's
imported from `from sys import *`), but as long as we don't support
wildcard imports, this is more consistent.
Closes https://github.com/astral-sh/ruff/issues/6718.
## Test Plan
`cargo test`
Avoid the nesting in a macro by using the new `WithNodeLevel` to
`PyFormatter` deref. No changes otherwise.
I wanted to follow this up with quickly fixing the typeshed empty line
rules, but they turned out to be a lot more complex than I had anticipated.
**Summary** A common pattern in the code used to be
```rust
if statements.len() != 1 {
    return;
}
use_single_entry(statements[0])?;
```
which can be better expressed as
```rust
let [statement] = statements else {
    return;
};
use_single_entry(statement)?;
```
Direct indexing can cause panics if you don't manually take care of
checking the length, while matching (such as if-let or let-else) can
never panic.
This isn't a complete refactor; I've just removed some of the obvious
cases. I've specifically looked for `.len() != 1` and fixed those.
**Test Plan** No functional changes
## Summary
We're using LibCST to ensure that we return the full parenthesized range
of an expression, for display purposes. We can just use
`parenthesized_range` which is more efficient and removes one LibCST
dependency.
## Test Plan
`cargo test`
This _probably_ never matters given the set of rules we support and in
fact I'm having trouble thinking of a test-case for it, but it's
definitely incorrect _not_ to pass on the `BranchId` here.
## Summary
We have a few rules that rely on detecting whether two statements are in
different branches -- for example, different arms of an `if`-`else`.
Historically, the way this was implemented is that, given two statement
IDs, we'd find the common parent (by traversing upwards via our
`Statements` abstraction); then identify branches "manually" by matching
the parents against `try`, `if`, and `match`, and returning iterators
over the arms; then check if there's an arm for which one of the
statements is a child, and the other is not.
This has a few drawbacks:
1. First, the code is generally a bit hard to follow (Konsti mentioned
this too when working on the `ElifElseClause` refactor).
2. Second, this is the only place in the codebase where we need to go
from `&Stmt` to `StatementID` -- _everywhere_ else, we only need to go
in the _other_ direction. Supporting these lookups means we need to
maintain a mapping from `&Stmt` to `StatementID` that includes every
`&Stmt` in the program. (We _also_ end up maintaining a `depth` level
for every statement.) I'd like to get rid of these requirements to
improve efficiency, reduce complexity, and enable us to treat AST modes
more generically in the future. (When I looked at adding the `&Expr` to
our existing statement-tracking infrastructure, maintaining a hash map
with all the statements noticeably hurt performance.)
The solution implemented here instead makes branches a first-class
concept in the semantic model. Like with `Statements`, we now have a
`Branches` abstraction, where each branch points to its optional parent.
When we store statements, we store the `BranchID` alongside each
statement. When we need to detect whether two statements are in the same
branch, we just realize each statement's branch path and compare the
two. (Assuming that the two statements are in the same scope, then
they're on the same branch IFF one branch path is a subset of the other,
starting from the top.) We then add some calls to the visitor to push
and pop branches in the appropriate places, for `if`, `try`, and `match`
statements.
Note that a branch is not 1:1 with a statement; instead, each branch is
closer to a suite, but not _every_ suite is a branch. For example, each
arm in an `if`-`elif`-`else` is a branch, but the `else` in a `for` loop
is not considered a branch.
In addition to being much simpler, this should also be more efficient,
since we've shed the entire `&Stmt` hash map, plus the `depth` that we
track on `StatementWithParent` in favor of a single `Option<BranchID>`
on `StatementWithParent` plus a single vector for all branches. The
lookups should be faster too, since instead of doing a bunch of jumps
around with the hash map + repeated recursive calls to find the common
parents, we instead just do a few simple lookups in the `Branches`
vector to realize and compare the branch paths.
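As a rough sketch of the idea (types simplified relative to the real
implementation):
```rust
/// Index of a branch in the `Branches` vector.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct BranchId(u32);

/// All branches in a program; each branch points to its optional parent.
#[derive(Default)]
struct Branches {
    parents: Vec<Option<BranchId>>,
}

impl Branches {
    /// Push a new branch with the given parent, returning its ID.
    fn insert(&mut self, parent: Option<BranchId>) -> BranchId {
        self.parents.push(parent);
        BranchId(self.parents.len() as u32 - 1)
    }

    /// Realize the branch path from the root down to the given branch.
    fn path(&self, branch: Option<BranchId>) -> Vec<BranchId> {
        let mut path = Vec::new();
        let mut current = branch;
        while let Some(id) = current {
            path.push(id);
            current = self.parents[id.0 as usize];
        }
        path.reverse();
        path
    }

    /// Two statements in the same scope are on the same branch iff one
    /// realized branch path is a prefix of the other, starting from the top.
    fn same_branch(&self, left: Option<BranchId>, right: Option<BranchId>) -> bool {
        let left = self.path(left);
        let right = self.path(right);
        left.iter().zip(&right).all(|(a, b)| a == b)
    }
}
```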
## Test Plan
`cargo test` -- we have a lot of coverage for this, which we inherited
from PyFlakes
## Summary
This is much simpler and avoids (1) multiple passes over the entire
function body, (2) requiring the rule to do its own binding tracking (we
can just use the semantic model), and (3) a usage of `StatementKey`.
In general, where we can, we should try to remove these kinds of custom
visitors that track name references, and instead rely on the semantic
model.
## Test Plan
`cargo test`
## Summary
If a lambda doesn't contain any parameters, or any parameter _tokens_
(like `*`), we can use `None` for the parameters. This feels like a
better representation to me, since, e.g., what should the `TextRange` be
for a non-existent set of parameters? It also allows us to remove
several sites where we check if the `Parameters` is empty by seeing if
it contains any arguments, so semantically, we're already trying to
detect and model around this elsewhere.
Changing this also fixes a number of issues with dangling comments in
parameter-less lambdas, since those comments are now automatically
marked as dangling on the lambda. (As-is, we were also doing something
not-great whereby the lambda was responsible for formatting dangling
comments on the parameters, which has been removed.)
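Roughly, the resulting shape of the node is (simplified, with placeholder
types standing in for the real AST and text-size types):
```rust
// Placeholders standing in for the real `ruff_text_size` / AST types.
pub struct TextRange;
pub struct Parameters;
pub struct Expr;

// Sketch of the node (simplified): a lambda with no parameters and no
// parameter tokens stores `None` rather than an empty `Parameters` struct
// with an ill-defined `TextRange`.
pub struct ExprLambda {
    pub range: TextRange,
    pub parameters: Option<Box<Parameters>>,
    pub body: Box<Expr>,
}
```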
Closes https://github.com/astral-sh/ruff/issues/6646.
Closes https://github.com/astral-sh/ruff/issues/6647.
## Test Plan
`cargo test`
## Summary
In working on https://github.com/astral-sh/ruff/pull/6628, I noticed
that we clone the source code contents, potentially multiple times,
prior to linting. The issue is that `SourceKind::Python` takes a
`String`, so we first have to provide it with a `String`. In the stdin
case, that means cloning. However, on top of this, we then have to clone
`source_kind.contents()` because `SourceKind` gets mutated. So for
stdin, we end up cloning twice. For non-stdin, we end up cloning once,
but unnecessarily (since the _contents_ don't get mutated, only the
kind).
This PR removes the `String` from `source_kind`, instead requiring that
we parse it out elsewhere. It reduces the number of clones down to 1 for
Jupyter Notebooks, and zero otherwise.
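Schematically, the change looks something like this (simplified; `Notebook`
is a placeholder for the real notebook type):
```rust
/// Placeholder for the real Jupyter notebook type.
pub struct Notebook;

/// Before (simplified): the kind owned the source text, so stdin had to
/// clone its contents in, and mutation forced another clone back out.
pub enum SourceKindBefore {
    Python(String),
    IpyNotebook(Notebook),
}

/// After (simplified): the text lives alongside, not inside, the kind, so
/// the contents are never cloned just to construct or mutate the kind.
pub enum SourceKindAfter {
    Python,
    IpyNotebook(Notebook),
}
```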
## Summary
The motivating code here was:
```python
with test as (
    # test
    foo):
    pass
```
Which we were formatting as:
```python
with test as
# test
(foo):
    pass
```
`with` statements are oddly difficult. This PR makes a bunch of subtle
modifications and adds a more extensive test suite. For example, we now
only preserve parentheses if there's more than one `WithItem` _or_ a
trailing comma; before, we always preserved.
Our formatting is _not_ the same as Black's, but here's a diff of our
formatted code vs. Black's for the `with.py` test suite. The primary
difference is that we tend to break parentheses when they contain
comments rather than move them to the end of the line (this is a
consistent difference that we make across the codebase):
```diff
diff --git a/crates/ruff_python_formatter/foo.py b/crates/ruff_python_formatter/foo.py
index 85e761080..31625c876 100644
--- a/crates/ruff_python_formatter/foo.py
+++ b/crates/ruff_python_formatter/foo.py
@@ -1,6 +1,4 @@
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
-), aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
     ...
     # trailing
@@ -16,28 +14,33 @@ with (
     # trailing
-with a, b: # a # comma # c # colon
+with (
+    a, # a # comma
+    b, # c
+): # colon
     ...
 with (
-    a as # a # as
-    # own line
-    b, # b # comma
+    a as ( # a # as
+        # own line
+        b
+    ), # b # comma
     c, # c
 ): # colon
     ... # body
     # body trailing own
-with (
-    a as # a # as
+with a as ( # a # as
     # own line
-    bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb # b
-):
+    bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+): # b
     pass
-with (a,): # magic trailing comma
+with (
+    a,
+): # magic trailing comma
     ...
@@ -47,6 +50,7 @@ with a: # should remove brackets
 with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as c:
     ...
+
 with (
     # leading comment
     a
@@ -74,8 +78,7 @@ with (
 with (
     a # trailing same line comment
     # trailing own line comment
-    as b
-):
+) as b:
     ...
 with (
@@ -87,7 +90,9 @@ with (
 with (
     a
     # trailing own line comment
-) as b: # trailing as same line comment # trailing b same line comment
+) as ( # trailing as same line comment
+    b
+): # trailing b same line comment
     ...
 with (
@@ -124,18 +129,24 @@ with ( # comment
     ...
 with ( # outer comment
-    CtxManager1() as example1, # inner comment
+    ( # inner comment
+        CtxManager1()
+    ) as example1,
     CtxManager2() as example2,
     CtxManager3() as example3,
 ):
     ...
-with CtxManager() as example: # outer comment
+with ( # outer comment
+    CtxManager()
+) as example:
     ...
 with ( # outer comment
     CtxManager()
-) as example, CtxManager2() as example2: # inner comment
+) as example, ( # inner comment
+    CtxManager2()
+) as example2:
     ...
 with ( # outer comment
@@ -145,7 +156,9 @@ with ( # outer comment
     ...
 with ( # outer comment
-    (CtxManager1()), # inner comment
+    ( # inner comment
+        CtxManager1()
+    ),
     CtxManager2(),
 ) as example:
     ...
@@ -179,7 +192,9 @@ with (
 ):
     pass
-with a as (b): # foo
+with a as ( # foo
+    b
+):
     pass
 with f(
@@ -209,17 +224,13 @@ with f(
 ) as b, c as d:
     pass
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
     pass
 with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
     pass
-with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b, c as d:
     pass
 with (
@@ -230,6 +241,8 @@ with (
     pass
 with (
-    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+    + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b,
+    c as d,
+):
     pass
```
Closes https://github.com/astral-sh/ruff/issues/6600.
## Test Plan
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75473 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74292 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75473 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74292 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
`cargo test`
## Summary
Attaches comments around the `:=` operator in a named expression as
dangling, and formats them manually in the `named_expr.rs` formatter.
Closes https://github.com/astral-sh/ruff/issues/5695.
## Test Plan
`cargo test`
## Summary
This PR exposes our `is_expression_parenthesized` logic such that we can
use it to expand expressions when autofixing to include their
parenthesized ranges.
This solution has a few drawbacks: (1) we need to compute parenthesized
ranges in more places, which also relies on backwards lexing; and (2) we
need to make use of this in any relevant fixes.
However, I still think it's worth pursuing. On (1), the implementation
is very contained, so IMO we can easily swap this out for a more
performant solution in the future if needed. On (2), this improves
correctness and fixes some bad syntax errors detected by fuzzing, which
means it has value even if it's not as robust as an _actual_
`ParenthesizedExpression` node in the AST itself.
Closes https://github.com/astral-sh/ruff/issues/4925.
## Test Plan
`cargo test` with new cases that previously failed the fuzzer.
## Summary
I noticed some inconsistencies around uses of `.range.start()`, structs
that have a `TextRange` field but don't implement `Ranged`, etc.
## Test Plan
`cargo test`
## Summary
Given:
```python
if (
    x
    :=
    ( # 4
        y # 5
    ) # 6
):
    pass
```
It turns out the parser ended the range of the `NamedExpr` at the end of
`y`, rather than the end of the parenthesis that encloses `y`. This just
seems like a bug -- the range should be from the start of the name on
the left, to the end of the parenthesized node on the right.
## Test Plan
`cargo test`
Closes https://github.com/astral-sh/ruff/issues/6442
Python string formatting like `"hello {place}".format(place="world")`
supports format specifications for replaced content such as `"hello
{place:>10}".format(place="world")` which will align the text to the
right in a container filled up to ten characters.
Ruff parses formatted strings into `FormatPart`s each of which is either
a `Field` (content in `{...}`) or a `Literal` (the normal content).
Fields are parsed into name and format specifier sections (we'll ignore
conversion specifiers for now).
There are a myriad of specifiers that can be used in a `FormatSpec`.
Unfortunately for linters, the specifier values can be dynamically set.
For example, `"hello {place:{align}{width}}".format(place="world",
align=">", width=10)` and `"hello {place:{fmt}}".format(place="world",
fmt=">10")` will yield the same string as before but variables can be
used to determine the formatting. In this case, when parsing the format
specifier we can't know what _kind_ of specifier is being used as their
meaning is determined by both position and value.
Ruff does not support nested replacements and our current data model
does not support the concept. Here the data model is updated to support
this concept, although linting of specifications with replacements will
be inherently limited. We could split format specifications into two
types, one without any replacements that we can perform lints with and
one with replacements that we cannot inspect. However, it seems
excessive to drop all parsing of format specifiers due to the presence
of a replacement. Instead, I've opted to parse replacements eagerly and
ignore their possible effect on other format specifiers. This will allow
us to retain a simple interface for `FormatSpec` and most syntax checks.
We may need to add some handling to relax errors if a replacement was
seen previously.
It's worth noting that the nested replacement _can_ also include a
format specification although it may fail at runtime if you produce an
invalid outer format specification. For example, `"hello
{place:{fmt:<2}}".format(place="world", fmt=">10")` is valid so we need
to represent each nested replacement as a full `FormatPart`.
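A simplified sketch of the resulting data model (illustrative, not the exact
ruff types):
```rust
/// A parsed format string is a sequence of parts; a field's format spec is
/// itself a sequence of parts, so a nested replacement like `{width}` in
/// `"hello {place:{align}{width}}"` is represented as a full part.
pub enum FormatPart {
    /// Literal text outside of any `{...}` field.
    Literal(String),
    /// A `{name:spec}` replacement field.
    Field {
        name: String,
        /// Nested replacements are parsed eagerly into parts of their own.
        format_spec: Vec<FormatPart>,
    },
}
```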
## Test Plan
Adding unit tests for `FormatSpec` parsing and snapshots for PLE1300
## Summary
Support glob patterns for `raises_require_match_for` and
`raises_extend_require_match_for`. Resolves #6473
## Test Plan
New tests + existing tests
## Summary
No behavior changes, but these need some refactoring to support
https://github.com/astral-sh/ruff/pull/6575 (namely, they need to take
the `ast::ExprCompare` or similar node instead of the attribute fields),
and I don't want to muddy that PR.
## Test Plan
`cargo test`
## Summary
Ensures we avoid cases like:
```python
x.values = 1
```
Since Pandas doesn't even expose a setter for that. We also avoid cases
like:
```python
print(self.values)
```
Since it's overwhelmingly likely to be a false positive.
Closes https://github.com/astral-sh/ruff/issues/6630.
## Test Plan
`cargo test`
## Summary
When running Ruff from stdin, we were always falling back to the default
source type, even if the user specified a path (as is the case when
running from the LSP). This PR wires up the source type inference, which
means we now get the expected result when checking `.pyi` and `.ipynb`
files.
Closes #6627.
## Test Plan
Verified that `cat
crates/ruff/resources/test/fixtures/jupyter/valid.ipynb | cargo run -p
ruff_cli -- --force-exclude --no-cache --no-fix --isolated --select ALL
--stdin-filename foo.ipynb -` yielded the expected results (and differs
from the errors you get if you omit the filename).
Verified that `cat foo.pyi | cargo run -p ruff_cli -- --force-exclude
--no-cache --no-fix --format json --isolated --select TCH
--stdin-filename path/to/foo.pyi -` yielded no errors.
In https://github.com/astral-sh/ruff/pull/6616 we are adding support for
nested replacements in format specifiers which makes actually formatting
strings infeasible without a great deal of complexity. Since we're not
using these functions (they just exist for runtime use in RustPython),
we can just remove them.
## Summary
Closes https://github.com/astral-sh/ruff/issues/6384, although I think
the issue was fixed already on main, for the most part.
The linked issue is around formatting expressions like:
```python
def test():
    (
        yield
        #comment 1
        * # comment 2
        # comment 3
        test # comment 4
    )
```
On main, prior to this PR, we now format like:
```python
def test():
    (
        yield (
            # comment 1
            # comment 2
            # comment 3
            *test
        ) # comment 4
    )
```
Which strikes me as reasonable. (We can't test this, since the output is a
syntax error for our parser, despite being a syntax error in both cases
from CPython's perspective.)
Meanwhile, Black does:
```python
def test():
    (
        yield
        # comment 1
        * # comment 2
        # comment 3
        test # comment 4
    )
```
So our formatting differs in that we move comments between the star and
the expression above the star.
As of this PR, we also support formatting this input, which is valid:
```python
def test():
    (
        yield
        #comment 1
        * # comment 2
        # comment 3
        test, # comment 4
        1
    )
```
Like:
```python
def test():
    (
        yield (
            # comment 1
            (
                # comment 2
                # comment 3
                *test, # comment 4
                1,
            )
        )
    )
```
There were two fixes here: (1) marking starred comments as dangling and
formatting them properly; and (2) supporting parenthesized comments for
tuples that don't contain their own parentheses, as is often the case
for yielded tuples (previously, we hit a debug assert).
Note that this diff
## Test Plan
`cargo test`
## Summary
Allows for proper lexing of tokens like `->`.
The main challenge is to ensure that our forward and backwards
representations are the same for cases like `===`. Specifically, we want
that to lex as `==` followed by `=` regardless of whether it's a
forwards or backwards lex. To do so, we identify the range of the
sequential characters (the full span of `===`), lex it forwards, then
return the last token.
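A toy sketch of the trick (not ruff's actual `SimpleTokenizer`; just the
shape of the idea):
```rust
/// Forward-lex a run of sequential operator characters, greedily matching
/// the longest token first, so `===` lexes as `==` followed by `=`.
/// (Operators are ASCII, so byte slicing is safe here.)
fn lex_forward(mut run: &str) -> Vec<&str> {
    let mut tokens = Vec::new();
    while !run.is_empty() {
        let token = if run.starts_with("==") {
            "=="
        } else if run.starts_with("->") {
            "->"
        } else {
            &run[..1]
        };
        tokens.push(token);
        run = &run[token.len()..];
    }
    tokens
}

/// Backwards lex: identify the full span of sequential operator characters,
/// lex it forwards, and return the last token.
fn last_token_backwards(run: &str) -> Option<&str> {
    lex_forward(run).pop()
}
```
Here `last_token_backwards("===")` yields `=`, agreeing with the forwards
lex of `==` followed by `=`.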
## Test Plan
`cargo test`
## Summary
This PR adds support for parenthesized comments. A parenthesized comment
is a comment that appears within a parenthesis, but not within the range
of the expression enclosed by the parenthesis. For example, the comment
here is a parenthesized comment:
```python
if (
    # comment
    True
):
    ...
```
The parentheses enclose the `True`, but the range of `True` doesn’t
include the `# comment`.
There are at least two problems associated with parenthesized comments:
(1) associating the comment with the correct (i.e., enclosed) node; and
(2) formatting the comment correctly, once it has been associated with
the enclosed node.
The solution proposed here for (1) is to search for parentheses between
preceding and following node, and use open and close parentheses to
break ties, rather than always assigning to the preceding node.
For (2), we handle these special parenthesized comments in `FormatExpr`.
The biggest risk with this approach is that we forget some codepath that
force-disables parenthesization (by passing in `Parentheses::Never`).
I've audited all usages of that enum and added additional handling +
test coverage for such cases.
Closes https://github.com/astral-sh/ruff/issues/6390.
## Test Plan
`cargo test` with new cases.
Before:
| project | similarity index |
|--------------|------------------|
| build | 0.75623 |
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| typeshed | 0.74233 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| build | 0.75623 |
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| typeshed | 0.74237 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
## Summary
Unlike other statements, Black always adds a trailing comma if an
import-from statement breaks with a single import member. I believe this
is for compatibility with isort -- see
09f5ee3a19,
https://github.com/psf/black/issues/127, or
66648c528a/src/black/linegen.py (L1452)
for the current version.
## Test Plan
`cargo test`, notice that a big chunk of the compatibility suite is
removed.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74233 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74260 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
## Summary
Instead, we set an `is_star` flag on `Stmt::Try`. This is similar to the
pattern we've migrated towards for `Stmt::For` (removing
`Stmt::AsyncFor`) and friends. While these are significant differences
for an interpreter, we tend to handle these cases identically or nearly
identically.
## Test Plan
`cargo test`
## Summary
This PR adds handling for comments on open parentheses in parenthesized
context managers. For example, given:
```python
with ( # comment
    CtxManager1() as example1,
    CtxManager2() as example2,
    CtxManager3() as example3,
):
    ...
```
We want to preserve that formatting. (Black does the same.) On `main`,
we format as:
```python
with (
    # comment
    CtxManager1() as example1,
    CtxManager2() as example2,
    CtxManager3() as example3,
):
    ...
```
It's very similar to how `StmtImportFrom` is handled.
Note that this case _isn't_ covered by the "parenthesized comment"
proposal, since this is a comment on the statement that would typically
be attached to the first `WithItem`, and the `WithItem` _itself_ can
have parenthesized comments, like:
```python
with ( # comment
    (
        CtxManager1() # comment
    ) as example1,
    CtxManager2() as example2,
    CtxManager3() as example3,
):
    ...
```
## Test Plan
`cargo test`
Confirmed no change in similarity score.
## Summary
For #6485, I need to be able to use the `SimpleTokenizer` to lex the
space between any two adjacent expressions (i.e., the space between a
preceding and following node). This requires that we support a wider
range of keywords (like `and`, to connect the pieces of `x and y`), and
some additional single-character tokens (like `-` and `>`, to support
`->`). Note that the `SimpleTokenizer` does not support multi-character
tokens, so the `->` in a function signature is lexed as a `-` followed
by a `>` -- but this is fine for our purposes.
## Summary
In https://github.com/astral-sh/ruff/pull/6512, we added a flag to the
AST to mark implicitly-concatenated string expressions. This PR makes
use of that flag to remove the `is_implicit_concatenation` method.
## Test Plan
`cargo test`
## Summary
The bracketed-end-of-line comment rule is meant to assign comments like
this as "immediately following the bracket":
```python
f( # comment
    1
)
```
However, the logic was such that we treated this equivalently:
```python
f(
    ( # comment
        1
    )
)
```
This PR modifies the placement logic to ensure that we only skip the
opening bracket, and not any nested brackets. The above is now formatted
as:
```python
f(
    (
        # comment
        1
    )
)
```
(But will be corrected once we handle parenthesized comments properly.)
## Test Plan
`cargo test`
Confirmed no change in similarity score.
## Summary
Per the discussion in
https://github.com/astral-sh/ruff/discussions/6183, this PR adds an
`implicit_concatenated` flag to the string and bytes constant variants.
It's not actually _used_ anywhere as of this PR, but it is covered by
the tests.
Specifically, we now use a struct for the string and bytes cases, along
with the `Expr::FString` node. That struct holds the value, plus the
flag:
```rust
#[derive(Clone, Debug, PartialEq, is_macro::Is)]
pub enum Constant {
    Str(StringConstant),
    Bytes(BytesConstant),
    ...
}

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct StringConstant {
    /// The string value as resolved by the parser (i.e., without quotes, or escape sequences, or
    /// implicit concatenations).
    pub value: String,
    /// Whether the string contains multiple string tokens that were implicitly concatenated.
    pub implicit_concatenated: bool,
}

impl Deref for StringConstant {
    type Target = str;

    fn deref(&self) -> &Self::Target {
        self.value.as_str()
    }
}

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct BytesConstant {
    /// The bytes value as resolved by the parser (i.e., without quotes, or escape sequences, or
    /// implicit concatenations).
    pub value: Vec<u8>,
    /// Whether the string contains multiple string tokens that were implicitly concatenated.
    pub implicit_concatenated: bool,
}

impl Deref for BytesConstant {
    type Target = [u8];

    fn deref(&self) -> &Self::Target {
        self.value.as_slice()
    }
}
```
## Test Plan
`cargo test`
**Summary** Implement docstring formatting
**Test Plan** Matches black's `docstring.py` fixture exactly, added some
new cases for what is hard to debug with black and with what black
doesn't cover.
similarity index:

main:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99469
- `cpython`: 0.75989
- `typeshed`: 0.74853

this branch:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99464
- `cpython`: 0.75517
- `typeshed`: 0.74853
The regression in transformers is actually an improvement in a file they
don't format with black (they run `black examples tests src utils
setup.py conftest.py`, the difference is in hubconf.py). cpython doesn't
use black.
Closes #6196
## Summary
This method is almost never what you actually want, because it doesn't
respect Python's scoping semantics. For example, if you call this within
a class method, it will return class attributes, whereas Python actually
_skips_ symbols in classes unless the load occurs within the class
itself. I also want to move away from these kinds of dynamic lookups and
more towards `resolve_name`, which performs a lookup based on the stored
`BindingId` at the time of symbol resolution, and will make it much
easier for us to separate model building from linting in the near
future.
## Test Plan
`cargo test`
## Summary
Fixes some TODOs introduced in
https://github.com/astral-sh/ruff/pull/6538. In short, given an
expression like `1 if x > 0 else "Hello, world!"`, we now return a union
type that says the expression can resolve to either an `int` or a `str`.
The system remains very limited, it only works for obvious primitive
types, and there's no attempt to do inference on any more complex
variables. (If any expression yields `Unknown` or `TypeError`, we
propagate that result throughout and abort on the client's end.)
## Summary
When adding an import, such as when fixing `I002`, ruff doesn't skip
whitespace between comments, but isort does. See this issue for more
detail: https://github.com/astral-sh/ruff/issues/6504
This change would fix that by skipping whitespace between comments in
`Insertion.start_of_file()`.
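Schematically, the fix amounts to something like this (a standalone sketch,
not the actual `Insertion` code):
```rust
/// Find the offset at which a new import can be inserted: skip the leading
/// comment block *and* any blank lines between comments, rather than
/// stopping at the first blank line. (The real implementation is more
/// careful about whitespace trailing the final comment.)
fn import_insertion_offset(source: &str) -> usize {
    let mut offset = 0;
    for line in source.split_inclusive('\n') {
        let trimmed = line.trim();
        if trimmed.starts_with('#') || trimmed.is_empty() {
            offset += line.len();
        } else {
            break;
        }
    }
    offset
}
```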
## Test Plan
I added a new test, `comments_and_newlines`, to verify this behavior. I
also ran `cargo test` and no existing tests broke. That being said, this
is technically a breaking change, as it's possible that someone was
relying on the previous behavior.
## Summary
Generalizes the abstractions for name matching introduced in
https://github.com/astral-sh/ruff/pull/6378 and applies them to the
existing `banned_api` rule, such that both rules have a uniform API and
implementation.
## Test Plan
`cargo test`
## Summary
Noticed in https://github.com/astral-sh/ruff/pull/6378. Given `import h;
import i`, we don't consider `import i` to be a "top-level" import for
E402 purposes, which is wrong. Similarly, we _do_ consider `import k` to
be a "top-level" import in:
```python
if __name__ == "__main__":
    import j; \
    import k
```
Using the semantic detection, rather than relying on newline position,
fixes both cases.
## Test Plan
`cargo test`
## Summary
Add a new rule `TID253` (`banned-module-level-imports`), to ban a
user-specified list of imports from appearing at module level. This rule
doesn't exist in `flake8-tidy-imports`, so it's unique to Ruff. The
implementation is pretty similar to `TID251`.
Briefly discussed
[here](https://github.com/astral-sh/ruff/discussions/6370).
## Test Plan
Added a new test case, checking that inline imports are allowed and that
non-inline imports from the banned list are disallowed.
## Summary
This PR modifies our logic for wrapping return type annotations.
Previously, we _always_ wrapped the annotation in parentheses if it
expanded; however, Black only exhibits this behavior when the function
parameters is empty (i.e., it doesn't and can't break). In other cases,
it uses the normal parenthesization rules, allowing nodes to bring their
own parentheses.
For example, given:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
...
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(x) -> Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
...
```
Black will format as:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> (
    Set[
        "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    ]
):
    ...


def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
    x,
) -> Set[
    "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
    ...
```
Whereas, prior to this PR, Ruff would format as:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> (
    Set[
        "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    ]
):
    ...


def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
    x,
) -> (
    Set[
        "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    ]
):
    ...
```
Closes https://github.com/astral-sh/ruff/issues/6431.
## Test Plan
Before:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75988
- `typeshed`: 0.74853
After:
- `zulip`: 0.99724
- `django`: 0.99791
- `warehouse`: 0.99586
- `build`: 0.75623
- `transformers`: 0.99474
- `cpython`: 0.75956
- `typeshed`: 0.74857
## Summary
This PR fixes some misformattings around optional parentheses for
expressions.
I first noticed that we were misformatting this:
```python
return (
    unicodedata.normalize("NFKC", s1).casefold()
    == unicodedata.normalize("NFKC", s2).casefold()
)
```
The above is stable Black formatting, but we were doing:
```python
return unicodedata.normalize("NFKC", s1).casefold() == unicodedata.normalize(
"NFKC", s2
).casefold()
```
Above, the "last" expression is a function call, so our
`can_omit_optional_parentheses` was returning `true`...
However, it turns out that Black treats function calls differently
depending on whether or not they have arguments -- presumably because
they'll never split empty parentheses, and so they're functionally
non-useful. On further investigation, I believe this applies to all
parenthesized expressions. If Black can't split on the parentheses, it
doesn't leverage them when removing optional parentheses.
## Test Plan
Nice increase in similarity scores.
Before:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75989
- `typeshed`: 0.74853
After:
- `zulip`: 0.99705
- `django`: 0.99795
- `warehouse`: 0.99600
- `build`: 0.75623
- `transformers`: 0.99471
- `cpython`: 0.75989
- `typeshed`: 0.74853
**Summary** Some files seem notoriously slow in the formatter (seconds in debug mode). This time was, however, almost exclusively spent in the diff algorithm used to collect the similarity index, so I replaced that. I kept `similar` for printing the actual diff to avoid rewriting that too, with the disadvantage that we now have two diff libraries in format_dev.
I used this PR to remove the spinner from tracing-indicatif and changed `flamegraph --perfdata perf.data` to `flamegraph --perfdata perf.data --no-inline` as the former wouldn't finish for me on release builds with debug info.
## Summary
This PR adds formatting support for the `MatchCase` node, with stubs for
the `Pattern` nodes.
## Test Plan
Added test cases for case node handling with comments, newlines.
resolves: #6299
## Summary
The bug was happening in this
[loop](75f402eb82/crates/ruff_python_formatter/src/comments/placement.rs (L545)).
Basically, in the first iteration of the loop, the `comment_indentation`
is bigger than `child_indentation` (`comment_indentation` is 7 and
`child_indentation` is 4) making the `Ordering::Greater` branch execute.
Inside the `Ordering::Greater` branch, the `if` block gets executed,
resulting in the update of these variables.
```rust
parent_body = current_body;
current_body = Some(last_child_in_current_body);
last_child_in_current_body = nested_child;
```
In the second iteration of the loop, `comment_indentation` is smaller
than `child_indentation` (`comment_indentation` is 7 and
`child_indentation` is 8) making the `Ordering::Less` branch execute.
Inside the `Ordering::Less` branch, the `if` block gets executed, this
is where the bug was happening. At this point `parent_body` should be a
`StmtFunctionDef` but it was a `StmtClassDef`. Causing the comment to be
incorrectly formatted.
That happened for the following code:
```python
class A:
    def f():
        pass
       # strangely indented comment
    print()
```
There is only one problem that I couldn't figure out a solution for: the
variable `current_body` in this
[line](75f402eb82/crates/ruff_python_formatter/src/comments/placement.rs (L542C5-L542C49))
now gives the warning _"value assigned to `current_body` is never read;
maybe it is overwritten before being read?"_
Any tips on how to solve that?
Closes #5337
## Test Plan
Add new test case.
---------
Co-authored-by: konstin <konstin@mailbox.org>
**Summary** Implement `DerefMut` for `WithNodeLevel` so it can be used
in the same way as `PyFormatter`. I want this for my WIP upstack branch
to enable `.fmt(f)` on `WithNodeLevel` context. We could extend this to
remove the other two methods from `WithNodeLevel`.
## Summary
Some follow-ups to https://github.com/astral-sh/ruff/pull/6131 to ensure
that fixes are inserted _after_ function docstrings, and that fixes are
robust to a bunch of edge cases.
## Test Plan
`cargo test`
## Summary
This PR respects our unused variable regex when flagging bound
exceptions, so that you no longer get a violation for, e.g.:
```python
def f():
    try:
        pass
    except Exception as _:
        pass
```
This is an odd pattern, but I think it's surprising that the regex
_isn't_ respected here.
Closes https://github.com/astral-sh/ruff/issues/6391
## Test Plan
`cargo test`
## Summary
In https://github.com/astral-sh/ruff/pull/5811, I suggested that we add
a heuristic to the overlong-lines check such that if the line had fewer
bytes than the character limit, we return early -- the idea being that a
single byte per character was the "worst case". I overlooked that this
isn't true for tabs -- with tabs, the "worst case" scenario is that
every byte is a tab, which can have a width greater than 1.
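The corrected early exit amounts to something like this (illustrative, not
the exact ruff code):
```rust
/// Whether a line could possibly exceed the limit. Without tabs, the
/// rendered width never exceeds the byte length, so fewer bytes than the
/// limit means the line can't be overlong. A tab, however, is a single byte
/// that can render up to `tab_size` columns wide.
fn may_exceed_limit(line: &str, limit: usize, tab_size: usize) -> bool {
    if line.contains('\t') {
        // Worst case: every byte is a tab.
        line.len() * tab_size > limit
    } else {
        // Worst case: one column per byte.
        line.len() > limit
    }
}
```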
Closes https://github.com/astral-sh/ruff/issues/6425.
## Test Plan
`cargo test` with a new fixture borrowed from the issue, plus manual
testing.
## Summary
Checks for any misspelled dunder name method and for any method defined
with `__...__` that's not one of the pre-defined methods.
The pre-defined methods encompass all of Python's standard dunder
methods.
ref: #970
## Test Plan
Snapshots and manual runs of pylint.
## Summary
Our `is_builtin` check did a naive walk over the parent scopes; instead,
it needs to (e.g.) skip symbols in a class scope if being called outside
of the class scope itself.
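Conceptually, the corrected walk looks like this (a simplified sketch of
Python's scoping rule, not the actual semantic model code):
```rust
#[derive(PartialEq)]
enum ScopeKind {
    Module,
    Class,
    Function,
}

struct Scope {
    kind: ScopeKind,
    names: Vec<String>,
}

/// Walk from the innermost scope (last element) outwards. Symbols bound in
/// a class scope are skipped unless the lookup originates in that class
/// scope itself, so a class attribute never shadows a builtin for code in
/// nested scopes.
fn shadows_builtin(scopes: &[Scope], name: &str) -> bool {
    for (index, scope) in scopes.iter().enumerate().rev() {
        let is_current = index == scopes.len() - 1;
        if scope.kind == ScopeKind::Class && !is_current {
            continue;
        }
        if scope.names.iter().any(|n| n == name) {
            return true;
        }
    }
    false
}
```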
Closes https://github.com/astral-sh/ruff/issues/6466.
## Test Plan
`cargo test`
## Summary
Reopening of https://github.com/astral-sh/ruff/pull/4880
One open TODO as described in:
https://github.com/astral-sh/ruff/pull/4880#discussion_r1265110215
FYI @charliermarsh seeing as you commented you wanted to do final review
and merge. @konstin @dhruvmanila @MichaReiser as previous reviewers.
# Old Description
## Summary
Adds an autofix for B006 turning mutable argument defaults into None and
setting their original value back in the function body if still `None`
at runtime like so:
```python
def before(x=[]):
    pass


def after(x=None):
    if x is None:
        x = []
    pass
```
## Test Plan
Added an extra test case to existing fixture with more indentation.
Checked results for all old examples.
NOTE: Also adapted the jupyter notebook test as this checked for B006 as
well.
## Issue link
Closes: https://github.com/charliermarsh/ruff/issues/4693
---------
Co-authored-by: konstin <konstin@mailbox.org>
**Summary** I collected all examples of end-of-line comments after
opening parentheses that I could think of, so we get a comprehensive view
of the state of their formatting (#6390).
This PR intentionally only adds tests cases without any changes in
formatting. We need to decide which exact formatting we want, ideally in
terms of these test files, and implement this in follow-up PRs.
~~One stability check is still deactivated pending
https://github.com/astral-sh/ruff/pull/6386.~~
We currently don't format all comments, as match statements are not yet implemented. We can work around this for the top-level match statement by marking its comments as manually formatted, but the mocked-out top-level match doesn't call into its children, so they would still have unformatted comments.
## Summary
This PR fixes the issue where the FString formatting dropped dangling comments between the string parts.
```python
result_f = (
    f' File "{__file__}", line {lineno_f+1}, in f\n'
    ' f()\n'
    # XXX: The following line changes depending on whether the tests
    # are run through the interactive interpreter or with -m
    # It also varies depending on the platform (stack size)
    # Fortunately, we don't care about exactness here, so we use regex
    r' \[Previous line repeated (\d+) more times\]' '\n'
    'RecursionError: maximum recursion depth exceeded\n'
)
```
The solution here isn't ideal, because it re-introduces the `enclosing_parent` on `DecoratedComment`, but it is the easiest fix that I could come up with.
I didn't spend more time finding another solution because I think we have to re-write most of the f-string formatting with the upcoming Python 3.12 support (because lexing the individual parts as we do now will no longer work).
Closes #6440
## Test Plan
`cargo test`
The child PR testing that all comments are formatted should now pass
## Summary
This PR adds the `AnyNodeRef.visit_preorder` method. I'll need this method to mark all comments of a suppressed node's children as formatted (in debug builds).
I'm not super happy with this because it now requires a double-dispatch where the `walk_*` methods call into `node.visit_preorder` and the `visit_preorder` then calls back into the visitor. Meaning,
the new implementation now probably results in way more function calls. The other downside is that `AnyNodeRef` now contains code that is difficult to auto-generate. This could be mitigated by extracting the `visit_preorder` method into its own `VisitPreorder` trait.
Anyway, this approach solves the need and avoids duplicating the visiting code once more.
## Test Plan
`cargo test`
## Summary
The use of `|` as a union operator is not always safe, if a type
annotation is evaluated in a runtime context. For example, this code
errors at runtime:
```python
import httpretty
import requests_mock
item: type[requests_mock.Mocker | httpretty] = requests_mock.Mocker
```
However, it's fine in a `.pyi` file, with `__future__` annotations, or
if the annotation is in a non-evaluated context, like:
```python
def func():
    item: type[requests_mock.Mocker | httpretty] = requests_mock.Mocker
```
This PR modifies the rule to avoid enforcing in those invalid,
runtime-evaluated contexts.
Closes https://github.com/astral-sh/ruff/issues/6455.
## Summary
Use the same Python version by default for all tests (our
latest-supported version).
## Test Plan
`cargo test`
---------
Co-authored-by: Zanie <contact@zanie.dev>
## Summary
I think it makes sense for `PythonVersion::default()` to return our
minimum-supported non-EOL version.
## Test Plan
`cargo test`
---------
Co-authored-by: Zanie <contact@zanie.dev>
## Summary
This PR renames the `MagicCommand` token to `IpyEscapeCommand` token and
`MagicKind` to `IpyEscapeKind` type to better reflect the purpose of the
token and type. Similarly, it renames the AST nodes from `LineMagic` to
`IpyEscapeCommand` prefixed with `Stmt`/`Expr` wherever necessary.
It also renames uses of `jupyter_magic` to
`ipython_escape_commands` in various function names.
The mode value is still `Mode::Jupyter` because the escape commands are
part of the IPython syntax but the lexing/parsing is done for a Jupyter
notebook.
### Motivation behind the rename:
* IPython codebase defines it as "EscapeCommand" / "Escape Sequences":
* Escape Sequences:
292e3a2345/IPython/core/inputtransformer2.py (L329-L333)
* Escape command:
292e3a2345/IPython/core/inputtransformer2.py (L410-L411)
* The word "magic" is used mainly for the actual magic commands i.e.,
the ones starting with `%`/`%%`
(https://ipython.readthedocs.io/en/stable/interactive/reference.html#magic-command-system).
So, this avoids any confusion between the Magic token (`%`, `%%`) and
the escape command itself.
## Test Plan
* `cargo test` to make sure all renames are done correctly.
* `grep` for `jupyter_escape`/`magic` to make sure all renames are done
correctly.
## Summary
This PR removes the group around function definition parameters, instead
grouping the parameters with the type parameters and return type
annotation.
This increases Zulip's similarity score from 0.99385 to 0.99699, so it's
a meaningful improvement. However, there's at least one stability error
that I'm working on, and I'm really just looking for high-level feedback
at this point, because I'm not happy with the solution.
Closes https://github.com/astral-sh/ruff/issues/6352.
## Test Plan
Before:
- `zulip`: 0.99396
- `django`: 0.99784
- `warehouse`: 0.99578
- `build`: 0.75436
- `transformers`: 0.99407
- `cpython`: 0.75987
- `typeshed`: 0.74432
After:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75988
- `typeshed`: 0.74853
## Summary
In https://github.com/astral-sh/ruff/pull/6397, the documentation was
updated stating that the default target-version is now "py38", but the
actual default value wasn't updated and remained py310. This commit
updates the default value to match what the documentation says.
## Summary
This PR adds support for a stricter version of help end escape
commands[^1] in the parser. By stricter, I mean that the escape tokens
are only at the end of the command and there are no tokens at the start.
This makes it difficult to implement it in the lexer without having to
do a lot of look aheads or keeping track of previous tokens.
Now, as we're adding this in the parser, the lexer needs to recognize
and emit a new token for `?`. So, `Question` token is added which will
be recognized only in `Jupyter` mode.
The conditions applied are the same as the ones in the original
implementation in IPython codebase (which is a regex):
* There can only be either 1 or 2 question mark(s) at the end
* The node before the question mark can be a `Name`, `Attribute`,
`Subscript` (only with integer constants in slice position), or any
combination of the 3 nodes.
## Test Plan
Added test cases for various combination of the possible nodes in the
command value position and update the snapshots.
fixes: #6359
fixes: #5030 (This is the final piece)
[^1]: https://github.com/astral-sh/ruff/pull/6272#issue-1833094281
## Summary
Error if `tab-size` is set to zero (it is used as a divisor). Closes
#6423.
Also fixes a typo.
## Test Plan
Running ruff with a config
```toml
[tool.ruff]
tab-size = 0
```
returns an error message to the user saying that `tab-size` must be
greater than zero.
## Summary
Given:
```python
def double(a: int) -> ( # Hello
    int
):
    return 2*a
```
We currently treat `# Hello` as a trailing comment on the parameters
(`(a: int)`). This PR adds a placement method to instead treat it as a
dangling comment on the function definition itself, so that it gets
formatted at the end of the definition, like:
```python
def double(a: int) -> int: # Hello
    return 2*a
```
The formatting in this case is unchanged, but it's incorrect IMO for
that to be a trailing comment on the parameters, and that placement
leads to an instability after changing the grouping in #6410.
Fixing this led to a _different_ instability related to tuple return
type annotations, like:
```python
def zrevrangebylex(self, name: _Key, max: _Value, min: _Value, start: int | None = None, num: int | None = None) -> ( # type: ignore[override]
):
    ...
```
(This is a real example.)
To fix, I had to special-case tuples in that spot, though I'm not
certain that's correct.
## Summary
This PR moves `empty_parenthesized` such that it's peer to
`parenthesized`, and changes the API to better match that of
`parenthesized` (takes `&str` rather than `StaticText`, has a
`with_dangling_comments` method, etc.).
It may be intentionally _not_ part of `parentheses.rs`, but to me
they're so similar that it makes more sense for them to be in the same
module, with the same API, etc.
## Summary
This PR adds support for `StmtMatch` with stubs for `MatchCase`.
## Test Plan
Add a few additional test cases around `match` statement, comments, line
breaks.
resolves: #6298
## Summary
Fix the name of the rule in the example for `extend-per-file-ignores` in
the `options.rs` file. It was `E401`, but `E402` was listed in the
configuration example. Just a tiny mismatch.
## Test Plan
Just by my eyes :).
## Bug
Given
```python
x = () - (#
)
```
the comment is a dangling comment of the empty tuple. This is an
end-of-line comment so it may move after the expression. It still
expands the parent, so the operator breaks:
```python
x = (
    ()
    - () #
)
```
In the next formatting pass, the comment is not a trailing tuple comment but a
trailing bin op comment, so the bin op doesn't break anymore. The
comment again expands the parent, so we still add the superfluous
parentheses
```python
x = (
    () - () #
)
```
## Fix
The new formatting is to keep the comment on the empty tuple. This is a
lot uglier and again has additional outer parentheses, but it's stable:
```python
x = (
    ()
    - ( #
    )
)
```
## Alternatives
Black formats all the examples above as
```python
x = () - () #
```
which I find better.
I would be happy about any suggestions for better solutions than the
current one. I'd mainly need a workaround for expand parent having an
effect on the bin op instead of first moving the comment to the end and
then applying expand parent to the assign statement.
## Summary
Manually add the parentheses around tuple expressions for the autofix in
`B014`.
This is also done in various other autofixes as well such as for
[`RUF005`](6df5ab4098/crates/ruff/src/rules/ruff/rules/collection_literal_concatenation.rs (L183-L184)),
[`UP024`](6df5ab4098/crates/ruff/src/rules/pyupgrade/rules/os_error_alias.rs (L137-L137)).
### Alternate Solution
An alternate solution would be to fix this in the `Generator` itself by
checking
if the tuple expression needs to be generated at the top-level or not.
If so,
then always add the parentheses.
```rust
} else if level == 0 {
    // Top-level tuples are always parenthesized.
    self.p("(");
    let mut first = true;
    for elt in elts {
        self.p_delim(&mut first, ", ");
        self.unparse_expr(elt, precedence::COMMA);
    }
    self.p_if(elts.len() == 1, ",");
    self.p(")");
}
```
## Test Plan
Add a regression test for this case in `B014`.
fixes: #6412
Extends #6289 to support moving type variable usage in type aliases to
use PEP-695.
Does not remove the possibly unused type variable declaration.
Presumably this is handled by other rules, but is not working for me.
Does not handle type variables with bounds or variance declarations yet.
Part of #4617
## Summary
We have some logic in the expression analyzer method to avoid
re-checking the inner `Union` in `Union[Union[...]]`, since the methods
that analyze `Union` expressions already recurse. Elsewhere, we have
logic to avoid re-checking the inner `|` in `int | (int | str)`, for the
same reason.
This PR unifies that logic into a single method _and_ ensures that, just
as we recurse over both `Union` and `|`, we also detect that we're in
_either_ kind of nested union.
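A simplified sketch of the unified check (an illustrative AST; the real code
resolves `Union` through the semantic model rather than by name):
```rust
enum Expr {
    /// `Union[...]` -- a subscript whose value is assumed to resolve to
    /// `typing.Union`.
    Subscript { value: Box<Expr>, slice: Box<Expr> },
    /// A PEP 604-style union, e.g. `int | str` (assuming the operator is `|`).
    BinOp { left: Box<Expr>, right: Box<Expr> },
    Name(String),
}

/// Whether an expression is a union in either spelling, so callers can ask
/// "is any enclosing expression a union?" with a single predicate.
fn is_union(expr: &Expr) -> bool {
    match expr {
        Expr::Subscript { value, .. } => {
            matches!(&**value, Expr::Name(name) if name == "Union")
        }
        Expr::BinOp { .. } => true,
        Expr::Name(_) => false,
    }
}
```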
Closes https://github.com/astral-sh/ruff/issues/6285.
## Test Plan
Added some new snapshots.
## Summary
We can anticipate earlier that this will error, so we should avoid
flagging the error at all. Specifically, we're talking about cases like
`"{1} {0}".format(*args)"`, in which we'd need to reorder the arguments
in order to remove the `1` and `0`, but we _can't_ reorder the arguments
since they're not statically analyzable.
Closes https://github.com/astral-sh/ruff/issues/6388.
## Summary
I noticed some deviations in how we treat dangling comments that hug the
opening parenthesis for function definitions.
For example, given:
```python
def f( # first
    # second
): # third
    ...
```
We currently format as:
```python
def f(
    # first
    # second
): # third
    ...
```
This PR adds the proper opening-parenthesis dangling comment handling
for function parameters. Specifically, as with all other parenthesized
nodes, we now detect that dangling comment in `placement.rs` and handle
it in `parameters.rs`. We have to take some care in that file, since we
have multiple "kinds" of dangling comments, but I added a bunch of test
cases that we now format identically to Black.
## Test Plan
`cargo test`
Before:
- `zulip`: 0.99388
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74364
After:
- `zulip`: 0.99386
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74409
Meaningful improvement on `typeshed`, minor decrease on `zulip`.
Closes https://github.com/astral-sh/ruff/issues/6068
These commits are kind of a mess as I did some stumbling around here.
Unrolls formatting of chained boolean operations to prevent nested
grouping, which gives us Black-compatible formatting where each boolean
operation is on its own line.
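For example, a sketch of the intended output (identifiers are made up): when the chain doesn't fit on one line, each operand breaks onto its own line instead of nesting group by group.
```python
if (
    validate_name(record)
    and validate_email(record)
    and validate_address(record)
    and validate_phone_number(record)
):
    ...
```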
## Summary
This PR modifies our `can_omit_optional_parentheses` rules to ensure
that if we see a call followed by an attribute, we treat that as an
attribute access rather than a splittable call expression.
This in turn ensures that we wrap like:
```python
ct_match = aaaaaaaaaaact_id == self.get_content_type(
    obj=rel_obj, using=instance._state.db
)
```
For calls, but:
```python
ct_match = (
    aaaaaaaaaaact_id == self.get_content_type(obj=rel_obj, using=instance._state.db).id
)
```
For calls with trailing attribute accesses.
Closes https://github.com/astral-sh/ruff/issues/6065.
## Test Plan
Similarity index before:
- `zulip`: 0.99436
- `django`: 0.99779
- `warehouse`: 0.99504
- `transformers`: 0.99403
- `cpython`: 0.75912
- `typeshed`: 0.72293
And after:
- `zulip`: 0.99436
- `django`: 0.99780
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.72293
## Summary
This PR leverages the unified function definition node to add precise
AST node types to `MemberKind`, which is used to power our docstring
definition tracking (e.g., classes and functions, whether they're
methods or functions or nested functions and so on, whether they have a
docstring, etc.). It was painful to do this in the past because the
function variants needed to support a union anyway, but storing precise
nodes removes like a dozen panics.
No behavior changes -- purely a refactor.
## Test Plan
`cargo test`
## Summary
Per the suggestion in
https://github.com/astral-sh/ruff/discussions/6183, this PR removes
`AsyncWith`, `AsyncFor`, and `AsyncFunctionDef`, replacing them with an
`is_async` field on the non-async variants of those structs. Unlike an
interpreter, we _generally_ have identical handling for these nodes, so
separating them into distinct variants adds complexity from which we
don't really benefit. This can be seen below, where we get to remove a
_ton_ of code related to adding generic `Any*` wrappers, and a ton of
duplicate branches for these cases.
## Test Plan
`cargo test` is unchanged, apart from parser snapshots.
## Summary
See discussion in
https://github.com/astral-sh/ruff/pull/6351#discussion_r1284996979. We
can remove `RefEquality` entirely and instead use a text offset for
statement keys, since no two statements can start at the same text
offset.
## Test Plan
`cargo test`
## Summary
This PR adds support for help end escape commands in the lexer.
### What are "help end escape commands"?
First, the escape commands are special IPython syntax which enhances the
functionality of the IPython REPL. There are 9 escape kinds, which are
recognized by the tokens present at the start of the
command (`?`, `??`, `!`, `!!`, etc.).
Here, the help command is using either the `?` or `??` token at the
start (`?str.replace` for example). Those 2 tokens are also supported
when they're at the end of the command (`str.replace?`), but the other
tokens aren't supported in that position.
There are mainly two types of help end escape commands:
1. Ending with either `?` or `??`, but it also starts with one of the
escape tokens (`%matplotlib?`)
2. A stricter version of (1) which doesn't
start with any escape token (`str.replace?`)
This PR adds support for (1) while (2) will be supported in the parser.
### Priority
Now, if the command both starts and ends with an escape token, how do we
decide the kind of the command? This is where priority comes into the
picture. It's simple, as there's only one rule: `?`/`??` at
the end takes priority over any other escape token, and all of the other
tokens are at the same priority. Remember that only `?`/`??` at the end
is considered valid.
This is mainly useful in the case where someone would want to invoke the
help command on the magic command itself. For example, in `%matplotlib?`
the help command takes priority which means that we want help for the
`matplotlib` magic function instead of calling the magic function
itself.
### Specification
Here's where things get a bit tricky. What if there are question mark
tokens at both ends? How do we decide whether it's the `Help` (`?`) kind
or the `Help2` (`??`) kind?
| | Magic | Value | Kind |
| --- | --- | --- | --- |
| 1 | `?foo?` | `foo` | `Help` |
| 2 | `??foo?` | `foo` | `Help` |
| 3 | `?foo??` | `foo` | `Help2` |
| 4 | `??foo??` | `foo` | `Help2` |
| 5 | `???foo??` | `foo` | `Help2` |
| 6 | `??foo???` | `foo???` | `Help2` |
| 7 | `???foo???` | `?foo???` | `Help2` |
Looking at the above table:
- The question mark tokens on the right take priority over the ones on
the left, but only if the number of question marks on the right is 1 or 2.
- If there are more than 2 question mark tokens on the right side, then
the left side is used to determine the kind.
- If the right side is used to determine the kind, then all of the
question marks and whitespace on the left side are ignored in the
`value`; but if it's the other way around, then all of the extra
question marks are part of the `value`.
### References
- IPython implementation using the regex:
292e3a2345/IPython/core/inputtransformer2.py (L454-L462)
- Priorities:
292e3a2345/IPython/core/inputtransformer2.py (L466-L469)
## Test Plan
Add a bunch of test cases for the lexer and verify that it matches the
behavior of the IPython transformer.
resolves: #6357
## Summary
This PR fixes the performance degradation introduced in
https://github.com/astral-sh/ruff/pull/6345. Instead of using the
generic `Nodes` structs, we now use separate `Statement` and
`Expression` structs. Importantly, we can avoid tracking a bunch of
state for expressions that we do need for statements: we don't need to track
reference-to-ID pointers (we just have no use-case for this -- I'd
actually like to remove this from statements too, but we need it for
branch detection right now), we don't need to track depth, etc.
In my testing, this entirely removes the regression on all-rules, and
gets us down to 2ms slower on the default rules (as a crude hyperfine
benchmark, so this is within margin of error IMO).
No behavioral changes.
## Summary
This PR attempts to draw a clearer divide between "methods that take
(e.g.) an expression or statement as input" and "methods that rely on
the _current_ expression or statement" in the semantic model, by
renaming methods like `stmt()` to `current_statement()`.
This had led to confusion in the past. For example, prior to this PR, we
had `scope()` (which returns the current scope), and `parent_scope`,
which returns the parent _of a scope that's passed in_. Now, the API is
clearer: `current_scope` returns the current scope, and `parent_scope`
takes a scope as argument and returns its parent.
Per above, I also changed `stmt` to `statement` and `expr` to
`expression`.
## Summary
Given:
```python
[ # comment
    first,
    second,
    third
] # another comment
```
We were adding a hard line break as part of the formatting of `#
comment`, which led to the following formatting:
```python
[first, second, third] # comment
# another comment
```
Closes https://github.com/astral-sh/ruff/issues/6367.
## Summary
Fixes an instability whereby this:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = datetime.datetime.now() - datetime.timedelta( # noqa: DTZ005
        days=threshold_days
    )
```
Was being formatted as:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = (
        datetime.datetime.now()
        - datetime.timedelta(days=threshold_days) # noqa: DTZ005
    )
```
Which was in turn being formatted as:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = (
        datetime.datetime.now() - datetime.timedelta(days=threshold_days) # noqa: DTZ005
    )
```
The second-to-third formatting still differs from Black because we
aren't taking the line suffix into account when splitting
(https://github.com/astral-sh/ruff/issues/6377), but the first
formatting is correct and should be unchanged (i.e., the first-to-second
formatting was incorrect, and is fixed here).
## Test Plan
`cargo run --bin ruff_dev -- format-dev --stability-check ../zulip`
## Summary
When we iterate over the AST for analysis, we often process nodes in a
"deferred" manner. For example, if we're analyzing a function, we push
the function body onto a deferred stack, along with a snapshot of the
current semantic model state. Later, when we analyze the body, we
restore the semantic model state from the snapshot. This ensures that we
know the correct scope, hierarchy of statement parents, etc., when we go
to analyze the function body.
Historically, we _haven't_ included the _expression_ hierarchy in the
model snapshot -- so we track the current expression parents in the
visitor, but we never save and restore them when processing deferred
nodes. This can lead to subtle bugs, in that methods like
`expr_parent()` aren't guaranteed to be correct, if you're in a deferred
visitor.
This PR migrates expression tracking to mirror statement tracking
exactly. So we push all expressions onto an `IndexVec`, and include the
current expression on the snapshot. This ensures that `expr_parent()`
and related methods are "always correct" rather than "sometimes
correct".
There's a performance cost here, both at runtime and in terms of memory
consumption (we now store an additional pointer for every expression).
In my hyperfine testing, it's about a 1% performance decrease for
all-rules on CPython (up to 533.8ms, from 528.3ms) and a 4% performance
decrease for default-rules on CPython (up to 212ms, from 204ms).
However... I think this is worth it given the incorrectness of our
current approach. In the future, we may want to reconsider how we do
these upward traversals (e.g., with something like a red-green tree).
(**Note**: in https://github.com/astral-sh/ruff/pull/6351, the slowdown
seems to be entirely removed.)
Changes:
- Fixes typo and repeated phrase in `DTZ002`
- Adds docs for `DTZ003`
- Adds docs for `DTZ004`
- Adds an example for Python <= 3.10 in `DTZ001` (see the sketch below)
Related to: https://github.com/astral-sh/ruff/issues/2646
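A sketch of what the `DTZ001` guidance covers (values are illustrative; `DTZ001` flags `datetime.datetime()` calls without `tzinfo`):
```python
import datetime

# Flagged by DTZ001: no `tzinfo` is passed, so the datetime is naive.
naive = datetime.datetime(2023, 1, 1)

# OK on Python <= 3.10, where `datetime.UTC` isn't yet available:
aware = datetime.datetime(2023, 1, 1, tzinfo=datetime.timezone.utc)
```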
## Summary
Historically, we've stored "qualified names" on our
`BindingKind::Import`, `BindingKind::SubmoduleImport`, and
`BindingKind::ImportFrom` structs. In Ruff, a "qualified name" is a
dot-separated path to a symbol. For example, given `import foo.bar`, the
"qualified name" would be `"foo.bar"`; and given `from foo.bar import
baz`, the "qualified name" would be `foo.bar.baz`.
This PR modifies the `BindingKind` structs to instead store _call paths_
rather than qualified names. So in the examples above, we'd store
`["foo", "bar"]` and `["foo", "bar", "baz"]`. It turns out that this
more efficient given our data access patterns. Namely, we frequently
need to convert the qualified name to a call path (whenever we call
`resolve_call_path`), and it turns out that we do this operation enough
that those conversations show up on benchmarks.
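For concreteness, the two representations side by side:
```python
import foo.bar           # qualified name: "foo.bar"      call path: ["foo", "bar"]
from foo.bar import baz  # qualified name: "foo.bar.baz"  call path: ["foo", "bar", "baz"]
```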
There are a few other advantages to using call paths, rather than
qualified names:
1. The size of `BindingKind` is reduced from 32 to 24 bytes, since we no
longer need to store a `String` (only a boxed slice).
2. All three import types are more consistent, since they now all store
a boxed slice, rather than some storing an `&str` and some storing a
`String` (for `BindingKind::ImportFrom`, we needed to allocate a
`String` to create the qualified name, but the call path is a slice of
static elements that don't require that allocation).
3. A lot of code gets simpler, in part because we now do call path
resolution "earlier". Most notably, for relative imports (`from .foo
import bar`), we store the _resolved_ call path rather than the relative
call path, so the semantic model doesn't have to deal with that
resolution. (See that `resolve_call_path` is simpler, fewer branches,
etc.)
In my testing, this change improves the all-rules benchmark by another
4-5% on top of the improvements mentioned in #6047.
## Summary
Update `F841` autofix to not remove line magic expr
## Test Plan
Added test case for assignment statement with and without type
annotation
fixes: #6116
## Summary
Enable using the new `Mode::Jupyter` for the tokenizer/parser to parse
Jupyter line magic tokens.
The individual calls to the lexer, i.e., `lex_starts_at`, done by various
rules should consider the context of the source code (is this content
from a Jupyter Notebook?). Thus, a new field `source_type` (of type
`PySourceType`) is added to `Checker` which is being passed around as an
argument to the relevant functions. This is then used to determine the
`Mode` for the lexer.
## Test Plan
Add new test cases to make sure that the magic statement is considered
while generating the diagnostic and autofix:
* For `I001`, if there's a magic statement in between two import blocks,
they should be sorted independently
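For example (a hypothetical notebook cell):
```python
# The magic splits the imports into two blocks; I001 should sort each
# block independently rather than merging across the magic.
import b
import a

%matplotlib inline

import d
import c
```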
fixes: #6090
## Summary
As of version
[23.1.0](2a86db8271/CHANGELOG.md?plain=1#L158-L160),
`flake8-pyi` removed the rule `Y027`.
The errors that resulted in `PYI027` are now being emitted by `PYI022`
(`UP035`).
ref: #848
## Test Plan
Add new test cases.
## Summary
Previously, this rule panicked on methods like:
```python
@classmethod
def bad_posonly_class_method(cls: type[_S], /) -> _S: ... # PYI019
```
We check whether there are any positional-only or non-positional
arguments, but then do an unsafe access on `parameters.args`.
Closes https://github.com/astral-sh/ruff/issues/6349.
## Test Plan
`cargo test` (verified that `main` panics on the new fixtures)
These rules assume that the docstring is on its own line. pydocstyle
treats them inconsistently, so I'm just going to disable them in this
case.
Closes https://github.com/astral-sh/ruff/issues/6329.
## Summary
Fixes some comprehension formatting by avoiding creating the group for
the comprehension itself (so that if it breaks, all parts break on their
own lines, e.g. the `for` and the `if` clauses).
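For example (an illustrative snippet with made-up names), once the comprehension breaks, every clause lands on its own line:
```python
result = [
    transform(x)
    for x in iterable  # the `for` clause breaks onto its own line
    if predicate(x)  # ...and so does the `if` clause
]
```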
Closes https://github.com/astral-sh/ruff/issues/6063.
## Test Plan
Bunch of new fixtures.
Implement fluent style/call chains. See the `call_chains.py` formatting
for examples.
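As a hypothetical illustration of the fluent style (names are made up):
```python
# Each call in the chain breaks onto its own line, with a leading dot.
result = (
    session.query(User)
    .filter_by(active=True)
    .order_by(User.name)
    .all()
)
```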
This isn't fully like Black, because in `raise A from B`, Black allows
the breaking of `A` to influence the formatting of `B` even if `B` is
already multiline.
Similarity index:
| project | main | PR |
|--------------|-------|-------|
| build | ??? | 0.753 |
| django | 0.991 | 0.998 |
| transformers | 0.993 | 0.994 |
| typeshed | 0.723 | 0.723 |
| warehouse | 0.978 | 0.994 |
| zulip | 0.992 | 0.994 |
Call chain formatting is affected by
https://github.com/astral-sh/ruff/issues/627, but I'm cutting scope
here.
Closes #5343
**Test Plan**:
* Added a dedicated call chains test file
* The ecosystem checks found some bugs
* I manually checked django and zulip formatting
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Extends `comparison-with-itself` to cover simple function calls on
known-pure functions, like `id`. For example, we now flag `id(x) ==
id(x)`.
Closes https://github.com/astral-sh/ruff/issues/6276.
## Test Plan
`cargo test`
**Summary** This adds information about whether we're in a `.py` Python
source file or in a `.pyi` stub file, to enable people working on #5822 and
related issues.
I'm not completely happy with `Default` for something that depends on
the input.
**Test Plan** None; this is currently unused. I'm leaving this to the first
implementation of stub-file-specific formatting.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Follow-up to https://github.com/astral-sh/ruff/pull/6325, to avoid false
positives in cases like:
```python
if x == int:
    ...
```
Which is valid, since we don't know that we're comparing the type _of_
something -- we're comparing the type objects directly.
## Summary
Extends `type-comparison` to flag:
```python
if type(obj) is int:
    pass
```
In addition to the existing cases, like:
```python
if type(obj) is type(1):
    pass
```
Closes https://github.com/astral-sh/ruff/issues/6260.
## Summary
We already support preserving the end-of-line comment in calls and type
parameters, as in:
```python
foo( # comment
    bar,
)
```
This PR adds the same behavior for lists, sets, comprehensions, etc.,
such that we preserve:
```python
[ # comment
    1,
    2,
    3,
]
```
And related cases.
## Summary
This PR adds an API for chaining comment placement methods based on the
[`then_with`](https://doc.rust-lang.org/std/cmp/enum.Ordering.html#method.then_with)
from `Ordering` in the standard library.
For example, you can now do:
```rust
try_some_case(comment).then_with(|comment| try_some_other_case_if_still_default(comment))
```
This lets us avoid this kind of pattern, which I've seen in
`placement.rs` and used myself before:
```rust
let comment = match handle_own_line_comment_between_branches(comment, preceding, locator) {
    CommentPlacement::Default(comment) => comment,
    placement => return placement,
};
```
## Summary
This ensures that we treat `# comment` as parenthesized in contexts
like:
```python
while (
    True
    # comment
):
    pass
```
The same logic applies equally to `for`, `async for`, `if`, `with`, and
`async with`. The general pattern is that you have an expression which
precedes a colon-separated suite.
**Summary** Prompted by
https://github.com/astral-sh/ruff/pull/6257#issuecomment-1661308410, this
tries to make the ecosystem script's output on failure easier to
understand. All log messages are now written to a file, which is
printed on error. When running locally, progress is still shown.
Looking through the log output, I saw that we currently log syntax errors
in the input, which is confusing because they aren't actual errors; we also
don't check that these files don't change due to parser regressions or
improvements. I added `--files-with-errors` to catch that.
**Test Plan** CI
Adds a rule to convert type aliases defined with annotations, i.e. `x:
TypeAlias = int`, to the new PEP-695 syntax, e.g. `type x = int`.
Does not support using new generic syntax for type variables, will be
addressed in a follow-up.
Added as part of pyupgrade — ~the code 100 was chosen to avoid collision
with real pyupgrade codes~.
Part of #4617
Builds on #5062
Of the rules that flake8-pyi enforces for `.pyi` type stubs, many of
them equally make sense to check in normal runtime code with type
annotations. Broaden these rules to check all files:
PYI013 ellipsis-in-non-empty-class-body
PYI016 duplicate-union-member
PYI018 unused-private-type-var
PYI019 custom-type-var-return-type
PYI024 collections-named-tuple
PYI025 unaliased-collections-abc-set-import
PYI030 unnecessary-literal-union
PYI032 any-eq-ne-annotation
PYI034 non-self-return-type
PYI036 bad-exit-annotation
PYI041 redundant-numeric-union
PYI042 snake-case-type-alias
PYI043 t-suffixed-type-alias
PYI045 iter-method-return-iterable
PYI046 unused-private-protocol
PYI047 unused-private-type-alias
PYI049 unused-private-typed-dict
PYI050 no-return-argument-annotation-in-stub (Python ≥ 3.11)
PYI051 redundant-literal-union
PYI056 unsupported-method-call-on-all
The other rules are stub-specific and remain enabled only in `.pyi`
files.
PYI001 unprefixed-type-param
PYI002 complex-if-statement-in-stub
PYI003 unrecognized-version-info-check
PYI004 patch-version-comparison
PYI005 wrong-tuple-length-version-comparison (could make sense to
broaden, see
https://github.com/astral-sh/ruff/pull/6297#issuecomment-1663314807)
PYI006 bad-version-info-comparison (same)
PYI007 unrecognized-platform-check
PYI008 unrecognized-platform-name
PYI009 pass-statement-stub-body
PYI010 non-empty-stub-body
PYI011 typed-argument-default-in-stub
PYI012 pass-in-class-body
PYI014 argument-default-in-stub
PYI015 assignment-default-in-stub
PYI017 complex-assignment-in-stub
PYI020 quoted-annotation-in-stub
PYI021 docstring-in-stub
PYI026 type-alias-without-annotation (could make sense to broaden, but
gives many false positives on runtime code as currently implemented)
PYI029 str-or-repr-defined-in-stub
PYI033 type-comment-in-stub
PYI035 unassigned-special-variable-in-stub
PYI044 future-annotations-in-stub
PYI048 stub-body-multiple-statements
PYI052 unannotated-assignment-in-stub
PYI053 string-or-bytes-too-long
PYI054 numeric-literal-too-long
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
## Summary
In Python >= 3.7, `await` can be included in f-strings.
https://bugs.python.org/issue28942
## Test Plan
Existing tests
## Summary
#2646
## Test Plan
## Summary
Implements `Y019` from
[flake8-pyi](https://github.com/PyCQA/flake8-pyi).
The rule checks whether the following return a custom `TypeVar` instead
of `typing.Self`:
- instance methods that return `self`
- class methods that return an instance of `cls`
- `__new__` methods
A violation is raised if they do. The rule also covers
[PEP-695](https://peps.python.org/pep-0695/) syntax, as introduced
upstream in https://github.com/PyCQA/flake8-pyi/pull/402
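A sketch of the pattern the rule flags (simplified; names are illustrative):
```python
from typing import TypeVar

_S = TypeVar("_S")

class Foo:
    # Each of these is flagged: a custom TypeVar stands in where
    # `typing.Self` would do.
    def instance_method(self: _S) -> _S: ...

    @classmethod
    def class_method(cls: type[_S]) -> _S: ...

    def __new__(cls: type[_S]) -> _S: ...
```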
## Test Plan
Added fixtures with test cases from upstream implementation (plus
additional one for an excluded edge case, mentioned in upstream
implementation)
## Summary
Relates to #970.
Add new `bad-format-character` Pylint rule.
I had to make a change in `crates/ruff_python_literal/src/format.rs` to
get a more detailed error in case the format character is not correct. I
chose to do this since most of the format spec parsing functions are
private. It would have required me reimplementing most of the parsing
logic just to know if the format char was correct.
This PR also deviates from current Pylint functionality in two ways.
First, it supports new-style format strings correctly, which Pylint, as of
now, doesn't. See pylint-dev/pylint#6085.
Second, in case there are multiple adjacent string literals delimited by
whitespace, the index of the wrong format char will be relative to the
individual string. Pylint will instead report it relative to the
concatenated string.
Given this:
```python
"%s" "%z" % ("hello", "world")
```
Ruff will report this:
```Unsupported format character 'z' (0x7a) at index 1```
Pylint instead:
```Unsupported format character 'z' (0x7a) at index 3```
I believe it's more sensible to report the index relative to the
individual string.
## Test Plan
Added new snapshot and a small test in
`crates/ruff_python_literal/src/format.rs`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Part of #5062
Closes https://github.com/astral-sh/ruff/issues/5931
Implements formatting of a sequence of type parameters in a dedicated
struct for reuse by classes, functions, and type aliases (preparing for
#5929). Adds formatting of type parameters in class and function
definitions — previously, they were just elided.
## Summary
Builds on #6170 to break `global` and `nonlocal` statements, such that
we get:
```python
def f():
global \
analyze_featuremap_layer, \
analyze_featuremapcompression_layer, \
analyze_latencies_post, \
analyze_motions_layer, \
analyze_size_model
```
Instead of:
```python
def f():
global analyze_featuremap_layer, analyze_featuremapcompression_layer, analyze_latencies_post, analyze_motions_layer, analyze_size_model
```
Notably, we avoid applying this formatting if the statement ends in a
comment. Otherwise, the comment would _need_ to be placed after the last
item, like:
```python
def f():
global \
analyze_featuremap_layer, \
analyze_featuremapcompression_layer, \
analyze_latencies_post, \
analyze_motions_layer, \
analyze_size_model # noqa
```
To me, this seems wrong (and would break the `# noqa` comment). Ideally,
the items would be parenthesized, and the comment would be on the inner
parenthesis, like:
```python
def f():
global ( # noqa
analyze_featuremap_layer,
analyze_featuremapcompression_layer,
analyze_latencies_post,
analyze_motions_layer,
analyze_size_model
)
```
But that's not valid syntax.
## Summary
Checks for the presence of redundant `Literal` types and builtin super
types in a union. See the [original
source](2a86db8271/pyi.py (L1261)).
This implementation has a couple of differences from the original. The
first is that we support the `complex` and `float` builtin types. The
second is that, when reporting a diagnostic for a `Literal` with multiple
members of the same type, we print the entire `Literal`, while flake8
only prints the `Literal` with its first member.
For example:
```python
from typing import Literal
x: Literal[1, 2] | int
```
Ruff will show `Literal[1, 2]` while flake8 only shows `Literal[1]`.
```shell
$ ruff crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:4:18: PYI051 `Literal["foo"]` is redundant in an union with `str`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:5:37: PYI051 `Literal[b"bar", b"foo"]` is redundant in an union with `bytes`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:6:37: PYI051 `Literal[5]` is redundant in an union with `int`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:6:67: PYI051 `Literal["foo"]` is redundant in an union with `str`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:7:37: PYI051 `Literal[b"str_bytes"]` is redundant in an union with `bytes`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:7:51: PYI051 `Literal[42]` is redundant in an union with `int`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:9:31: PYI051 `Literal[1J]` is redundant in an union with `complex`
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:9:53: PYI051 `Literal[3.14]` is redundant in an union with `float`
Found 8 errors.
```
```shell
$ flake8 crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:4:18: Y051 "Literal['foo']" is redundant in a union with "str"
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:5:37: Y051 "Literal[b'bar']" is redundant in a union with "bytes"
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:6:37: Y051 "Literal[5]" is redundant in a union with "int"
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:6:67: Y051 "Literal['foo']" is redundant in a union with "str"
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:7:37: Y051 "Literal[b'str_bytes']" is redundant in a union with "bytes"
crates/ruff/resources/test/fixtures/flake8_pyi/PYI051.pyi:7:51: Y051 "Literal[42]" is redundant in a union with "int"
```
While implementing this rule, I found a bug in the `is_unchecked_union`
check. This is the new check.
1ab86bad35/crates/ruff/src/checkers/ast/analyze/expression.rs (L85-L102)
The purpose of the check was to prevent rules from navigating through
nested `Union`s, as they already handle nested `Union`s. The way it was
implemented, this was not happening, the rules were getting executed
more than one time and sometimes were receiving expressions that were
not `Union`. For example, with the following code:
```python
typing.Union[Literal[5], int, typing.Union[Literal["foo"], str]]
```
The rules were receiving the expressions in the following order:
- `typing.Union[Literal[5], int, typing.Union[Literal["foo"], str]]`
- `Literal[5]`
- `typing.Union[Literal["foo"], str]`
This was causing `PYI030` to report redundant information, for example:
```python
typing.Union[Literal[5], int, typing.Union[Literal["foo"],
Literal["bar"]]]
```
This is the `PYI030` output for this code:
```shell
PYI030 Multiple literal members in a union. Use a single literal, e.g. `Literal[5, "foo", "bar"]`
PYI030 Multiple literal members in a union. Use a single literal, e.g. `Literal[5, "foo"]`
```
If I haven't misinterpreted the rule, that looks incorrect. I didn't
have the time to check the `PYI016` rule.
The last thing is, I couldn't find a reason for the "Why is this bad?"
section for `PYI051`.
Ref: #848
## Test Plan
Snapshots and manual runs of flake8.
## Summary
Previously, the ruff formatter was removing the star argument of
`lambda` expressions when formatting.
Given the following code snippet
```python
lambda *a: ()
lambda **b: ()
```
it would be formatted to
```python
lambda: ()
lambda: ()
```
We fix this by checking for the presence of `args`, `vararg`, or `kwarg`
in the `lambda` expression; previously, we were only checking for the
presence of `args`.
Fixes #5894
## Test Plan
Add new test cases.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Similar to #6279, moving some helpers onto the struct in the name of
reducing the number of random undiscoverable utilities we have in
`helpers.rs`.
Most of the churn is migrating rules to take `ast::ExprCall` instead of
the spread call arguments.
## Test Plan
`cargo test`
## Summary
This PR removes a now-unnecessary abstraction from `helper.rs`
(`CallArguments`), in favor of adding methods to `Arguments` directly,
which helps with discoverability.
## Summary
This PR boxes the `TypeParams` and `Arguments` fields on the class
definition node. These fields are optional and often omitted, and given
that class definition is our largest enum variant, we pay the cost of
including them for every statement in the AST. Boxing these types
reduces the statement size by 40 bytes, which seems like a good tradeoff
given how infrequently these are accessed.
## Test Plan
Need to benchmark, but no behavior changes.
## Summary
This PR leverages the `Arguments` AST node introduced in #6259 in the
formatter, which ensures that we correctly handle trailing comments in
calls, like:
```python
f(
    1,
    # comment
)
pass
```
(Previously, this was treated as a leading comment on `pass`.)
This also allows us to unify the argument handling across calls and
class definitions.
## Test Plan
A bunch of new fixture tests, plus improved Black compatibility.
## Summary
Similar to #6259, this PR adds a `TypeParams` node to the AST, to
capture the list of type parameters with their surrounding brackets.
If a statement lacks type parameters, the `type_params` field will be
`None`.
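For example (a hypothetical snippet using PEP 695 syntax):
```python
# `type_params` captures `[T]`, brackets included.
def first[T](values: list[T]) -> T: ...

# No type parameters, so `type_params` is `None`.
def default(values: list) -> object: ...
```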
## Summary
This PR adds a new `Arguments` AST node, which we can use for function
calls and class definitions.
The `Arguments` node spans from the left (open) to right (close)
parentheses inclusive.
In the case of classes, the `Arguments` is an `Option`, to differentiate
between:
```python
# None
class C: ...
# Some, with empty vectors
class C(): ...
```
In this PR, we don't really leverage this change (except that a few
rules get much simpler, since we don't need to lex to find the start and
end ranges of the parentheses, e.g.,
`crates/ruff/src/rules/pyupgrade/rules/lru_cache_without_parameters.rs`,
`crates/ruff/src/rules/pyupgrade/rules/unnecessary_class_parentheses.rs`).
In future PRs, this will be especially helpful for the formatter, since
we can track comments enclosed on the node itself.
## Test Plan
`cargo test`
## Summary
Before:
<img width="1031" alt="Screen Shot 2023-08-02 at 15 57 10"
src="https://github.com/astral-sh/ruff/assets/17039389/171a21d5-01a5-4aa5-8079-4e7f8a59ade8">
After:
<img width="1031" alt="Screen Shot 2023-08-02 at 15 58 03"
src="https://github.com/astral-sh/ruff/assets/17039389/afd1b9b7-89e0-4e38-a4a6-e3255b62f021">
## Test Plan
Manual inspection
## Summary
This PR renames...
- `Parameter#arg` to `Parameter#name`
- `ParameterWithDefault#def` to `ParameterWithDefault#parameter` (such
that `ParameterWithDefault` has a `default` and a `parameter`)
## Test Plan
`cargo test`
## Summary
This PR renames a few AST nodes for clarity:
- `Arguments` is now `Parameters`
- `Arg` is now `Parameter`
- `ArgWithDefault` is now `ParameterWithDefault`
For now, the attribute names that reference `Parameters` directly are
changed (e.g., on `StmtFunctionDef`), but the attributes on `Parameters`
itself are not (e.g., `vararg`). We may revisit that decision in the
future.
For context, the AST node formerly known as `Arguments` is used in
function definitions. Formally (outside of the Python context),
"arguments" typically refers to "the values passed to a function", while
"parameters" typically refers to "the variables used in a function
definition". E.g., if you Google "arguments vs parameters", you'll get
some explanation like:
> A parameter is a variable in a function definition. It is a
placeholder and hence does not have a concrete value. An argument is a
value passed during function invocation.
We're thus deviating from Python's nomenclature in favor of a scheme
that we find to be more precise.
## Summary
Black allows up to one blank line _before_ a class docstring, and
enforces one blank line _after_ a class docstring. This PR implements
that handling. The cases in
`crates/ruff_python_formatter/resources/test/fixtures/ruff/statement/class_definition.py`
match Black identically.
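For example (an illustrative class):
```python
class Foo:

    """Up to one blank line before the docstring is allowed."""

    # Exactly one blank line is enforced after the docstring.
    def method(self) -> None:
        ...
```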
## Summary
This PR ensures that if a function or class is the first statement in a
nested suite that _isn't_ a function or class body, we insert a leading
newline.
For example, given:
```python
def f():
    if True:
        def register_type():
            pass
```
We _want_ to preserve the newline, whereas today, we remove it.
Note that this only applies when the function or class doesn't have any
leading comments.
Closes https://github.com/astral-sh/ruff/issues/6066.
## Summary
This PR removes the `Interactive` and `FunctionType` parser modes that are unused by ruff.
## Test Plan
`cargo test`
## Summary
This PR removes the `type_comment` field which our parser doesn't support.
## Test Plan
`cargo test`
## Summary
This PR removes the type ignore node from the AST because our parser doesn't support it, and just having it around is confusing.
## Test Plan
`cargo build`
**Summary** Allow passing any node to `CommentPlacement::{leading,
trailing, dangling}` without manually converting. Conversely, restrict
the comment to the only type we actually pass.
**Test Plan** No changes.
## Summary
Previously, given:
```python
a = \
    5;
```
When detecting continuations starting at the offset of the `;`, we'd
flag the previous line as a continuation. We should only flag a
continuation if there isn't leading content prior to the offset.
Closes https://github.com/astral-sh/ruff/issues/6214