## Summary
This PR updates the revision of the `LibCST` dependency to 9c263aa897
in order to fix https://github.com/astral-sh/ruff/issues/4899
## Test Plan
A test case including the carriage return (`\r`) character was added for
`F504`, followed by `cargo test`.
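For reference, the shape of the added case (a hypothetical sketch; the actual fixture embeds a literal carriage return, which is hard to show inline):
```python
# A %-format call with an unused named argument; with a carriage return (`\r`)
# in the source, applying the autofix previously caused a LibCST panic.
"%(a)s" % {"a": 1, "b": 2}  # F504: `b` is never used
```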
fixes: #4899
If a user has `import collections, functools, operator`, and we try to
import from `functools` and `operator`, we end up adding two identical
synthetic edits to preserve that import statement. We need to dedupe
them.
Closes https://github.com/astral-sh/ruff/issues/7059.
## Summary
This PR moves `ruff/jupyter` into its own `ruff_notebook` crate. Beyond
the move itself, there were a few challenges:
1. `ruff_notebook` relies on the source map abstraction. I've moved the
source map into `ruff_diagnostics`, since it doesn't have any
dependencies on its own and is used alongside diagnostics.
2. `ruff_notebook` has a couple tests for end-to-end linting and
autofixing. I had to leave these tests in `ruff` itself.
3. We had code in `ruff/jupyter` that relied on Python lexing, in order
to provide a more targeted error message in the event that a user saves
a `.py` file with a `.ipynb` extension. I removed this in order to avoid
a dependency on the parser; it felt like it wasn't worth retaining just
for that dependency.
## Test Plan
`cargo test`
## Summary
I think the fallthrough here for some branches is a little confusing.
Now each branch either runs a command that returns `Result<ExitStatus>`,
or runs a command that returns `Result<()>` and then explicitly returns
`Ok(ExitStatus::SUCCESS)`.
## Summary
This PR refactors the error-handling cases around Jupyter notebooks to
use errors rather than `Box<Diagnostics>`, which creates some oddities
in the downstream handling. So, instead of formatting errors as
diagnostics _eagerly_ (in the notebook methods), we now return errors
and convert those errors to diagnostics at the last possible moment (in
`diagnostics.rs`). This is more ergonomic, as errors can be composed and
reported on in different ways, whereas diagnostics require a `Printer`,
etc.
See, e.g.,
https://github.com/astral-sh/ruff/pull/7013#discussion_r1311136301.
## Test Plan
Ran `cargo run` over a Python file labeled with a `.ipynb` suffix, and
saw:
```
foo.ipynb:1:1: E999 SyntaxError: Expected a Jupyter Notebook, which must be internally stored as JSON, but found a Python source file: expected value at line 1 column 1
```
## Summary
This PR modifies our between-statement comment handling such that
comments that are not separated from the following statement by any
newlines continue to be treated as leading comments on that statement,
but comments that
_are_ separated are instead formatted as trailing comments on the
preceding statement.
See, e.g., the originating snippet:
```python
DEFAULT_TEMPLATE = "flatpages/default.html"
# This view is called from FlatpageFallbackMiddleware.process_response
# when a 404 is raised, which often means CsrfViewMiddleware.process_view
# has not been called even if CsrfViewMiddleware is installed. So we need
# to use @csrf_protect, in case the template needs {% csrf_token %}.
# However, we can't just wrap this view; if no matching flatpage exists,
# or a redirect is required for authentication, the 404 needs to be returned
# without any CSRF checks. Therefore, we only
# CSRF protect the internal implementation.
def flatpage(request, url):
    pass
```
Here, we need to ensure that the `def flatpage` is preceded by two empty
lines. However, we want those two empty lines to be enforced from the
_end_ of the comment block, _unless_ the comments are directly atop the
`def flatpage`.
I played with this a bit, and I think the simplest conceptual model and
implementation is to instead treat those as trailing comments on the
preceding node. The main difficulty with this approach is that, in order
to be fully compatible with Black, we'd sometimes need to insert
newlines _between_ the preceding node and its trailing comments. See,
e.g.:
```python
def func():
    ...
# comment
x = 1
```
In this case, we'd need to insert two blank lines between `def func():
...` and `# comment`, but `# comment` is a trailing comment on `def
func(): ...`. So, we'd need to take this case into account in the
various nodes that _require_ newlines after them: functions, classes,
and imports. After some discussion, we've opted _not_ to support this,
and just treat these as trailing comments -- so we won't insert newlines
there. This means our handling is still identical to Black's on
Black-formatted code, but avoids moving such trailing comments on
unformatted code.
I dislike that the empty-line handling is so complex, and that it's split
between so many different nodes, but this is really tricky. Continuing
to treat these as leading comments is very difficult too, since we'd
need to do similar tricks for the leading comment handling in those
nodes, and influencing leading comments is even harder, since they're
all formatted _before_ the node itself.
Closes https://github.com/astral-sh/ruff/issues/6761.
## Test Plan
`cargo test`
Surprisingly, it doesn't change the similarity at all (apart from a
0.00001 change in CPython), but I manually confirmed that it did fix the
originating issue in Django.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76082 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76081 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
## Summary
This PR modifies a few of our rules related to which statements (and how
many) are allowed in function bodies within `.pyi` files, to improve
compatibility with flake8-pyi and the interplay between the rules
themselves. Each change fixes a deviation from flake8-pyi (see the
sketch after this list):
- We now always trigger the multi-statement rule (PYI048) regardless of
whether one of the statements is a docstring.
- We no longer trigger the `...` rule (PYI010) if the single statement
is a docstring or a `pass` (since those are covered by other rules).
- We no longer trigger the `...` rule (PYI010) if the function body
contains multiple statements (since that's covered by PYI048).
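A sketch of the resulting behavior (illustrative stub; messages paraphrased):
```python
# example.pyi
def multi():
    """Docstring."""
    print("body")  # PYI048: body contains multiple statements, docstring included

def doc_only():
    """Docstring."""  # no PYI010: the docstring-only case is covered by another rule

def pass_only():
    pass  # no PYI010: the `pass`-only case is covered by another rule
```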
Closes https://github.com/astral-sh/ruff/issues/7021.
## Test Plan
`cargo test`
## Summary
This PR attempts to address a problem in the parser related to the
ranges of `WithItem` nodes in certain contexts -- specifically,
`WithItem` nodes in parentheses that do not have an `as` token after
them.
For example,
[here](https://play.ruff.rs/71be2d0b-2a04-4c7e-9082-e72bff152679):
```python
with (a, b):
    pass
```
The range of the `WithItem` `a` is set to the range of `(a, b)`, as is
the range of the `WithItem` `b`. In other words, when we have this kind
of sequence, we use the range of the entire parenthesized context,
rather than the ranges of the items themselves.
Note that this also applies to cases
[like](https://play.ruff.rs/c551e8e9-c3db-4b74-8cc6-7c4e3bf3713a):
```python
with (a, b, c as d):
    pass
```
You can see the issue in the parser here:
```rust
#[inline]
WithItemsNoAs: Vec<ast::WithItem> = {
    <location:@L> <all:OneOrMore<Test<"all">>> <end_location:@R> => {
        all.into_iter().map(|context_expr| ast::WithItem { context_expr, optional_vars: None, range: (location..end_location).into() }).collect()
    },
}
```
Fixing this issue is... very tricky. The naive approach is to use the
range of the `context_expr` as the range for the `WithItem`, but that
range will be incorrect when the `context_expr` is itself parenthesized.
For example, _that_ solution would fail here, since the range of the
first `WithItem` would be that of `a`, rather than `(a)`:
```python
with ((a), b):
    pass
```
The `with` parsing in general is highly precarious due to ambiguities in
the grammar. Changing it in _any_ way seems to lead to an ambiguous
grammar that LALRPOP fails to translate. Consensus seems to be that we
don't really understand _why_ the current grammar works (i.e., _how_ it
avoids these ambiguities as-is).
The solution implemented here is to avoid changing the grammar itself,
and instead change the shape of the nodes returned by various rules in
the grammar. Specifically, everywhere that we return `Expr`, we instead
return `ParenthesizedExpr`, which includes a parenthesized range and the
underlying `Expr` itself. (If an `Expr` isn't parenthesized, the ranges
will be equivalent.) In `WithItemsNoAs`, we can then use the
parenthesized range as the range for the `WithItem`.
Per discussion at https://github.com/astral-sh/ruff/discussions/6998
## Summary
Adds a `--preview` and `--no-preview` option to the CLI for `ruff check`
and corresponding settings. The CLI options are hidden for now.
Available in the settings as `preview = true` or `preview = false`.
Does not include environment variable configuration, although we may add
it in the future.
## Test Plan
`cargo build`
Future work will build on this setting, such as toggling the mode during
a test.
## Summary
This PR adds comment handling for comments between the `=` and the
`value` for keywords, as in the following cases:
```python
func(
    x # dangling
    = # dangling
    # dangling
    1,
    ** # dangling
    y
)
```
(Comments after the `**` were already handled in some cases, but I've
unified the handling with the `=` handling.)
Note that, previously, comments between the `**` and its value were
rendered as trailing comments on the value (so they'd appear after `y`).
This struck me as odd since it effectively re-ordered the comment with
respect to its closest AST node (the value). I've made them leading
comments, though I don't know that that's a significant improvement. I
could also imagine us leaving them where they are.
## Summary
Attempt at a small improvement to two `perflint` rules using the new
type inference capabilities to only flag `PERF401` and `PERF402` for
values we infer to be lists. This makes the rule more conservative, as
it only flags values that we _know_ to be lists, but it's overall a
desirable change, as it favors false negatives over false positives for
a "nice-to-have" rule.
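A sketch of the narrowed behavior (diagnostics paraphrased):
```python
def f(items):
    result = []  # bound to a list literal, so `result` is inferred to be a list
    for i in items:
        result.append(i * i)  # still flagged (PERF401): use a list comprehension

def g(queue, items):
    for i in items:
        queue.append(i)  # no longer flagged: `queue` is not known to be a list
```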
Closes https://github.com/astral-sh/ruff/issues/6995.
## Test Plan
Add non-list value cases and make sure all old cases are still caught.
## Summary
Rewriting the `if` comparison to focus on the meaning of the `assert`
rule (`S101`).
Fixes #6984
## Test Plan
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Ensures that we use the same error types and messages. Also renames
those structs to `FormatCommand*` for consistency, and removes the
`FormatCommandResult::Skipped` variant in favor of skipping in the
iterator directly.
## Summary
Returns an exit code of 1 if any files would be reformatted:
```
ruff on charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0
❯ cargo run -p ruff_cli -- format foo.py --check
Compiling ruff_cli v0.0.286 (/Users/crmarsh/workspace/ruff/crates/ruff_cli)
Finished dev [unoptimized + debuginfo] target(s) in 1.69s
Running `target/debug/ruff format foo.py --check`
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended only for experimentation.
1 file would be reformatted
ruff on charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0 took 2s
❯ echo $?
1
```
Closes #6966.
## Summary
This is similar to `commands::check` vs. `commands::check_stdin`, and
gets the logic out of the parent file (`lib.rs`). It also ensures that
we avoid formatting files that should be excluded when `--force-exclude`
is provided.
## Summary
This PR adds a new helper method on the `Cursor` called `eat_char2`,
which is similar to `eat_char` but accepts two characters instead of one.
It'll `bump` the cursor twice if both characters are found on lookahead.
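The semantics, sketched in Python for brevity (the real helper lives on the Rust `Cursor`; the class here is illustrative):
```python
class Cursor:
    def __init__(self, text: str) -> None:
        self.text = text
        self.pos = 0

    def eat_char2(self, c1: str, c2: str) -> bool:
        """Consume `c1` and `c2` only if both are next, bumping twice."""
        if self.text[self.pos : self.pos + 2] == c1 + c2:
            self.pos += 2
            return True
        return False
```
For example, `Cursor("..").eat_char2(".", ".")` returns `True` and advances past both characters.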
## Test Plan
`cargo test`
- Use `Option` instead of `Result` everywhere.
- Use `field` instead of `property` (to match the nomenclature of
`NamedTuple` and `TypedDict`).
- Put the violation function at the top of the file, rather than the
bottom.
## Summary
The `typename` argument to `NamedTuple` and `TypedDict` is a required
positional argument. We assumed as much, but panicked if it was provided
as a keyword argument or otherwise omitted. This PR handles the case
gracefully.
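For example, both of the following previously caused a panic and are now handled gracefully (illustrative):
```python
from typing import NamedTuple

# `typename` provided as a keyword argument rather than positionally:
Point = NamedTuple(typename="Point", fields=[("x", int), ("y", int)])

# `typename` omitted entirely (an error at runtime, but the linter
# should still handle it without panicking):
Broken = NamedTuple(fields=[("x", int)])
```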
Closes https://github.com/astral-sh/ruff/issues/6953.
## Summary
As a small quality-of-life improvement, the locator can now slice like
`locator.slice(stmt)` instead of requiring
`locator.slice(stmt.range())`.
## Test Plan
`cargo test`
## Summary
This PR adds a higher-level enum (`SourceType`) around `PySourceType` to
allow us to use the same detection path to handle TOML files. Right now,
we have ad hoc `is_pyproject_toml` checks littered around, and some
codepaths are omitting that logic altogether (like `add_noqa`). Instead,
we should always be required to check the source type and handle TOML
files as appropriate.
This PR will also help with our pre-commit capabilities. If we add
`toml` to pre-commit (to support `pyproject.toml`), pre-commit will
start to pass _other_ files to Ruff (along with `poetry.lock` and
`Pipfile` -- see
[identify](b59996304f/identify/extensions.py (L355))).
By detecting those files and handling those cases, we avoid attempting
to parse them as Python files, which would lead to pre-commit errors.
(We tried to add `toml` to pre-commit here
(https://github.com/astral-sh/ruff-pre-commit/pull/44), but had to
revert here (https://github.com/astral-sh/ruff-pre-commit/pull/45) as it
led to the pre-commit hook attempting to parse `poetry.lock` files as
Python files.)
## Summary
This PR fixes a bug which sends the lexer into an infinite loop on invalid input.
The code in question is `[1`, where the nesting is never finished. This means
that the lexer will keep emitting the `Err` token forever.
## Test Plan
Add a test case which collects all the tokens from the lexer. This just
makes sure that it doesn't go into an infinite loop.
## Summary
Just making the formatter CLI more consistent with the linter -- e.g.,
we now use stdin on invocations like `cat foo.py | cargo run -p ruff_cli
-- format -- --stdin-filename=foo.py`, instead of _only_ relying on the
`-` file (and use the same helper as the linter to facilitate this).
**Summary** Add recursive formatting based on `ruff check` file
discovery for `ruff format`, as a prototype for the formatter alpha.
This allows e.g. `format ../projects/django/`. It's still lacking
support for any settings except line length.
Note that, just like the existing `ruff format`, this will become part of
the production build, i.e. you'll be able to use it - hidden by default
and with a prominent warning - with `ruff format .` after the next
release.
Error handling works in my manual tests (the colors also work):
```
$ target/debug/ruff format scripts/
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
```
(the above changes `add_rule.py` where we have the wrong bin op
breaking)
```
$ target/debug/ruff format ../projects/django/
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
Failed to format /home/konsti/projects/django/tests/test_runner_apps/tagged/tests_syntax_error.py: source contains syntax errors: ParseError { error: UnrecognizedToken(Name { name: "syntax_error" }, None), offset: 131, source_path: "<filename>" }
```
```
$ target/debug/ruff format a
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
Failed to read /home/konsti/ruff/a/d.py: Permission denied (os error 13)
```
**Test Plan** Missing! I'm not sure if it's worth building tests at this
stage or what they should look like.
## Summary
The motivation here is that this enables us to implement `Ranged` in
crates that don't depend on `ruff_python_ast`.
Largely a mechanical refactor with a lot of regex, Clippy help, and
manual fixups.
## Test Plan
`cargo test`
## Summary
The range of the usage from `Globals` should be the range of the
identifier, not the range of the full `global pandas` statement.
Closes https://github.com/astral-sh/ruff/issues/6914.
## Summary
This PR introduces two new AST nodes to improve the representation of
`PatternMatchClass`. As a reminder, `PatternMatchClass` looks like this:
```python
case Point2D(0, 0, x=1, y=2):
    ...
```
Historically, this was represented as a vector of patterns (for the `0,
0` portion) and parallel vectors of keyword names (for `x` and `y`) and
values (for `1` and `2`). This introduces a bunch of challenges for the
formatter, but importantly, it's also really different from how we
represent similar nodes, like arguments (`func(0, 0, x=1, y=2)`) or
parameters (`def func(x, y)`).
So, firstly, we now use a single node (`PatternArguments`) for the
entire parenthesized region, making it much more consistent with our
other nodes. So, above, `PatternArguments` would be `(0, 0, x=1, y=2)`.
Secondly, we now have a `PatternKeyword` node for `x=1` and `y=2`. This
is much more similar to the how `Keyword` is represented within
`Arguments` for call expressions.
Closes https://github.com/astral-sh/ruff/issues/6866.
Closes https://github.com/astral-sh/ruff/issues/6880.
Closes #6767
Replaces https://github.com/astral-sh/ruff/pull/6773 (this cherry-picks
some parts from there)
Alternative to the approach introduced in #6616, which added support for
placeholders in format specifications while retaining parsing of other
format specification parts.
The idea is that if there are placeholders in a format specification, we
will not attempt to glean semantic meaning from the other parts of the
format specification; we'll just extract all of the placeholders,
ignoring other characters. The dynamic content of placeholders can
drastically change the meaning of the format specification in ways
unknowable by static analysis. This change prevents false analysis, and
will ensure safety if we build other rules on top of this, at the cost
of missing detection of some bad specifications.
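For example, in the following call, the width and precision of the format specification are only known at runtime, so nothing reliable can be gleaned from the spec's remaining characters:
```python
# The format spec `{width}.{precision}` is resolved dynamically at runtime.
"{value:{width}.{precision}}".format(value=3.14159, width=8, precision=3)
```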
Minor note: I've used "replacements" and "placeholders" interchangeably,
but am trying to go with "placeholder", as I think it's a better term for
the static analysis concept here.
Closes https://github.com/astral-sh/ruff/issues/6821
ERA001 was not raising on commented-out parts of dictionaries if the
line included another comment (such as a noqa clause). In cases where
this comment was a noqa clause, RUF100 would be emitted, since the noqa
would have no effect.
Here, we update ERA001 to raise even when there are trailing comments.
This resolves the linked issue _and_ increases the scope of ERA001. We
could narrow the regular expression to only apply to noqa comments if we
do not want to expand ERA001; however, I think this change is within the
spirit of the rule.
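A sketch of the kind of case that now raises (illustrative; messages paraphrased):
```python
config = {
    "a": 1,
    # "b": 2,  # TODO: re-enable
}
# Previously: the trailing `# TODO` comment prevented the ERA001 match, and a
# trailing `# noqa: ERA001` there would itself be flagged as unused by RUF100.
# Now: ERA001 fires on the commented-out entry despite the trailing comment.
```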
## Summary
Adds support for `PatternMatchMapping` -- i.e., cases like:
```python
match foo:
case {"a": 1, "b": 2, **rest}:
pass
```
Unfortunately, this node has _three_ kinds of dangling comments:
```python
{ # "open parenthesis comment"
    key: pattern,
    ** # end-of-line "double star comment"
    # own-line "double star comment"
    rest # end-of-line "after rest comment"
    # own-line "after rest comment"
}
```
Some of the complexity comes from the fact that in `**rest`, `rest` is
an _identifier_, not a node, so we have to handle comments _after_ it as
dangling on the enclosing node, rather than trailing on `**rest`. (We
could change the AST to use `PatternMatchAs` there, which would be more
permissive than the grammar but not totally crazy -- `PatternMatchAs` is
used elsewhere to mean "a single identifier".)
Closes https://github.com/astral-sh/ruff/issues/6644.
## Test Plan
`cargo test`
## Summary
This PR ensures that if an expression has an own-line leading comment
_before_ its open parentheses, we render it as such.
For example, given:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
On `main`, we format as:
```python
[ # foo
    (
        # bar
        # baz
        1
    )
]
```
As of this PR, we format as:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
## Test Plan
`cargo test`
## Summary
This PR ensures that we handle bracketed comments on sequences, like `#
comment` here:
```python
match x:
    case [ # comment
        1, 2
    ]:
        pass
```
The handling closely matches that of other sequence-like nodes, except
that we do need some special logic to determine whether the sequence is
parenthesized, similar to our logic for tuples.
## Test Plan
`cargo test`
## Summary
This PR modifies our formatting of comments around the `.` in an
attribute. Specifically, the goal here is to avoid _reordering_
comments, and the net effect is that we generally leave comments
where they are when dealing with comments around the dot (which
you can also think of as comments between attributes).
All comments around the dot are now treated as dangling and formatted
manually, with the exception of end-of-line or parenthesized comments on
the value, like those marked as trailing here, which remain trailing:
```python
(
    (
        a # trailing end-of-line
        # trailing own-line
    ) # dangling before dot end-of-line
    .b # trailing end-of-line
)
```
Closes https://github.com/astral-sh/ruff/issues/6823.
## Test Plan
`cargo test`
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
## Summary
This PR fixes the duplicate-parenthesis problem that's visible in the
tests from https://github.com/astral-sh/ruff/pull/6799. The issue is
that we might have parentheses around the entire match-case pattern,
like in `(1)` here:
```python
match foo:
    case (1):
        y = 0
```
In this case, the inner expression (`1`) will _think_ it's
parenthesized, but we'll _also_ detect the parentheses at the case level
-- so they get rendered by the case, then again by the expression.
Instead, if we detect parentheses at the case level, we can force-off
the parentheses for the pattern using a design similar to the way we
handle parentheses on expressions.
Closes https://github.com/astral-sh/ruff/issues/6753.
## Test Plan
`cargo test`
## Summary
Our first-party import detection uses a heuristic that doesn't exist in
isort: if an import appears to be from within the same package as the
containing file, we mark it as first-party. For example, if you have a
directory `./foo/__init__.py`, and you import `from foo import bar` in
`./foo/baz.py`, we'll mark that as first-party. (See:
https://github.com/astral-sh/ruff/pull/1266.)
This is often unnecessary, and arguably should be removed (though it
does have some important use-cases that are otherwise unserved -- I
believe Dagster uses it to ensure that all packages mark imports from
within the same package as first-party, but not imports _across_
different first-party packages)... but it does exist, and it does help
in cases in which the `src` field is not properly configured.
This PR adds an option to turn off this behavior:
```toml
[tool.ruff.isort]
detect-same-package = false
```
This is being introduced to help codebases migrating over from isort
that may want more consistent behavior with their current sorting.
## Test Plan
`cargo test`
Extends #6781
Part of https://github.com/astral-sh/ruff/issues/3762
## Summary
Allows calls in argument defaults if the argument is annotated as an
immutable type, to avoid false positives.
## Test Plan
Snapshots
## Summary
Fix #6834
## Test Plan
Need tests?
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
The docs were out of date, and the new version incorporates some
feedback.
I tried to keep the language concise and the information ordered by how
early you need it, so people can get the relevant information quickly
before jumping into the code.
I did some minor `format_dev` changes for consistency in the docs.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Closes https://github.com/astral-sh/ruff/issues/6788 by special-casing
integer literals with attribute access: either retaining parentheses for
literals with values (e.g., `int(7).denominator` becomes
`(7).denominator`) or leaving calls without values (e.g.,
`int().denominator`) unchanged.
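Illustratively:
```python
int(7).denominator  # fixed to `(7).denominator`; bare `7.denominator` would not parse
int().denominator   # left unchanged
```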
## Summary
Another drive-by change to remove unnecessary custom lexing. We just
need to know the parenthesized range, so we can use...
`parenthesized_range`. I've also updated `parenthesized_range` to
support nested parentheses.
## Test Plan
`cargo test`
## Summary
This is effectively #6608, but with additional tests.
We aren't properly handling parenthesized patterns, but that needs to be
dealt with separately as it's somewhat involved.
Closes #6555
## Summary
Follows up on
https://github.com/astral-sh/ruff/pull/6652#discussion_r1300871033 with
some modifications to the `PatternMatchAs` comment handling.
Specifically, any comments between the `as` and the end are now
formatted as dangling, and we now insert some newlines in the
appropriate places.
## Test Plan
`cargo test`
## Summary
Ensures that we retain the open-parenthesis comment in cases like:
```python
match pattern_comments:
    case ( # leading
        only_leading
    ):
        ...
```
Previously, this was treated as a leading comment on `only_leading`.
## Test Plan
`cargo test`
## Summary
This PR fixes the bug where the decorator parentheses weren't being considered
when computing the autofix for `ANN204`. The existing logic would only look
for balanced parentheses and not multiple pairs of parentheses.
The solution is to remove the logic to generate the autofix and use the
`Parameters` end range directly, which includes the parentheses as well.
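A sketch of the previously problematic shape (hypothetical decorator factory):
```python
def configured(*args):
    """Hypothetical decorator factory; note the call parentheses at the use site."""
    def wrap(func):
        return func
    return wrap

class Foo:
    @configured("with", "arguments")
    def __init__(self):  # ANN204: the fix appends `-> None` after the parameters
        self.x = 1
```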
## Test Plan
Add a test case for `ANN204` with a decorator that is called
fixes: #6790
## Summary
Given:
```python
def end_of_file():
    if False:
        return 1
    x = 2 \
```
Then when searching for the end of the `x = 2` statement, we'd reach a
panic, as we'd hit the last line (`\`) and abort, since the universal
iterator doesn't return trailing newlines. Instead, we should just use
the end of the file as the fallback.
Closes https://github.com/astral-sh/ruff/issues/6787.
## Test Plan
`cargo test`
## Summary
Avoid `C417` for `lambda`s with default or variadic parameters.
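A sketch of the distinction (diagnostics paraphrased):
```python
nums = [1, 2, 3]

map(lambda x: x + 1, nums)          # still flagged (C417): use a comprehension
map(lambda x=1: x + 1, nums)        # now ignored: the default would be lost in a rewrite
map(lambda *args: sum(args), nums)  # now ignored: variadic parameters don't map cleanly
```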
## Test Plan
`cargo test`, checking that it doesn't generate any autofix errors; test
cases for `lambda` with default parameters already exist.
fixes: #6715
## Summary
This PR updates the lexer tests to use the snapshot testing framework.
It also
makes the following changes:
* Remove the use of macros in the lexer tests
* Use `test_case` for EOL tests
## Test Plan
```
cargo test --package ruff_python_parser --lib --all-features -- lexer::tests --nocapture
```
## Summary
This PR adds a utility for transforming expressions via LibCST that
automatically wraps the expression in parentheses, applies a
user-provided transformation, then strips the parentheses from the
generated code. LibCST can't parse arbitrary expression ranges, since
some expressions may require parenthesization in order to be parsed
properly. For example:
```python
option = (
    '{name}={value}'
    .format(nam=name, value=value)
)
```
In this case, the expression range is:
```python
'{name}={value}'
.format(nam=name, value=value)
```
Which isn't valid on its own. So, instead, we add "fake" parentheses
around the expression.
We were already doing this in a few places, so this is mostly
formalizing and DRYing up that pattern.
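A minimal sketch of the idea in Python (hypothetical helper; the real utility operates on Ruff's LibCST trees):
```python
def transform_expression(source: str, transform) -> str:
    """Apply `transform` to an expression snippet that may not parse on its own."""
    parenthesized = f"({source})"  # wrap so the snippet parses as a valid expression
    result = transform(parenthesized)
    # Strip the synthetic parentheses we added above (assumes `transform`
    # preserves the outer wrapping).
    assert result.startswith("(") and result.endswith(")")
    return result[1:-1]
```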
Closes https://github.com/astral-sh/ruff/issues/6720.
## Summary
For imports, we enforce that there's _at least_ one empty line after an
import (assuming the next statement is _not_ an import), but allow up to
two at the module level.
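For example, at the module level, both one and two blank lines after the last import are accepted, but zero is not (illustrative):
```python
import os
import sys


x = 1  # one or two blank lines after the imports are both fine; zero is flagged
```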
Closes https://github.com/astral-sh/ruff/issues/6760.
## Test Plan
`cargo test`
## Summary
The isolation group for unused imports was relying on
`checker.semantic().current_statement()`, which isn't valid for that
rule, since it runs over the _scope_, not the statement. Instead, we
need to lookup the isolation group based on the `NodeId` of the
statement.
Our tests didn't catch this, because we mostly have cases that look like
this:
```python
if TYPE_CHECKING:
    import shelve
    import importlib
```
In this case, the two fixes to remove the two unused imports are
considered overlapping (since we delete the _full_ line, and the two
_full_ lines touch, and we consider exactly-adjacent fixes to be
overlapping), and so they don't run in a single pass due to the
non-overlapping-fixes requirement. That is: the isolation groups aren't
required for this case. They are, however, required for cases like:
```python
if TYPE_CHECKING:
    import shelve

    import importlib
```
...where the fixes don't overlap.
Closes https://github.com/astral-sh/ruff/issues/6758.
## Test Plan
`cargo test`
`IOError` is special: it is not actually a lint, but an error raised
before linting. I'm not entirely sure how to document it, since it does
not match the general lint rule pattern (`Checks that the file can be
read in its entirety.` is imho worse).
I added the two most common reasons (in my experience) for IO errors on
Unix systems and linked two tutorials on how to fix them.
See https://github.com/astral-sh/ruff/issues/2646
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
I noticed this in the ecosystem CI check from
https://github.com/astral-sh/ruff/pull/6742. If we include source code
directly in a diagnostic, we need to be careful to avoid rendering
multi-line diagnostics or even excessively long diagnostics.
## Test Plan
`cargo test`
## Summary
This PR is a follow-up to the suggestion in
https://github.com/astral-sh/ruff/pull/6345#discussion_r1285470953 to
use a single stack to store all statements and expressions, rather than
using separate vectors for each, which gives us something closer to a
full-fidelity chain. (We can then generalize this concept to include all
other AST nodes too.)
This is in part made possible by the removal of the hash map from
`&Stmt` to `StatementId` (#6694), which makes it much cheaper to store
these using a single interface (since doing so no longer introduces the
requirement that we hash all expressions).
I'll follow up with some profiling, but a few notes on how the data
requirements have changed:
- We now store a `BranchId` for every expression, not just every
statement, so that's an extra `u32`.
- We now store a single `NodeId` on every snapshot, rather than separate
`StatementId` and `ExpressionId` IDs, so that's one fewer `u32` for each
snapshot.
- We're probably doing a few more lookups in general, since any calls to
`current_statement()` etc. now have to iterate up the node hierarchy
until they identify the first statement.
## Test Plan
`cargo test`
## Summary
Avoid attempting to rewrite `import matplotlib.pyplot` as `import
matplotlib.pyplot as plt`. We can't support these right now, since we
don't track references at the attribute level (like
`matplotlib.pyplot`).
Closes https://github.com/astral-sh/ruff/issues/6719.
## Summary
Given:
```python
from sys import *
exit(0)
```
We can't add `exit` to `from sys import *`, so we should just ignore it.
Ideally, we'd just resolve `exit` in the first place (since it's
imported from `from sys import *`), but as long as we don't support
wildcard imports, this is more consistent.
Closes https://github.com/astral-sh/ruff/issues/6718.
## Test Plan
`cargo test`
Avoid the nesting in a macro by using the new `WithNodeLevel` to
`PyFormatter` deref. No changes otherwise.
I wanted to follow this up with quickly fixing the typeshed empty-line
rules, but they turned out a lot more complex than I had anticipated.
**Summary** A common pattern in the code used to be
```rust
if statements.len() != 1 {
    return;
}
use_single_entry(statements[0])?;
```
which can be better expressed as
```rust
let [statement] = statements else {
    return;
};
use_single_entry(statement)?;
```
Direct indexing can cause panics if you don't manually take care of
checking the length, while matching (such as if-let or let-else) can
never panic.
This isn't a complete refactor; I've just removed some of the obvious
cases. I've specifically looked for `.len() != 1` and fixed those.
**Test Plan** No functional changes
## Summary
We're using LibCST to ensure that we return the full parenthesized range
of an expression, for display purposes. We can just use
`parenthesized_range` which is more efficient and removes one LibCST
dependency.
## Test Plan
`cargo test`
This _probably_ never matters given the set of rules we support, and in
fact I'm having trouble thinking of a test case for it, but it's
definitely incorrect _not_ to pass on the `BranchId` here.