## Summary
Implement rule `mutable-fromkeys-value` (`RUF023`).
It autofixes
```python
dict.fromkeys(foo, [])
```
to
```python
{key: [] for key in foo}
```
The fix is marked as unsafe as it changes runtime behaviour. It also
uses `key` as the comprehension variable, which may not always be
desired.
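For context, a minimal sketch (with a hypothetical `foo`) of why the two forms
differ at runtime, which is what makes the fix unsafe:
```python
foo = ["a", "b"]

shared = dict.fromkeys(foo, [])
shared["a"].append(1)
print(shared)  # {'a': [1], 'b': [1]} -- every key shares the same list

separate = {key: [] for key in foo}
separate["a"].append(1)
print(separate)  # {'a': [1], 'b': []} -- each key gets its own list
```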
Closes #4613.
## Test Plan
`cargo test`
## Summary
This PR detects whether PLR0917 is being applied to a method or class
method, and if so, it ignores the first argument for the purposes of
counting the number of positional arguments.
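As a rough sketch (the class and the limit of five positional arguments are
assumed here for illustration):
```python
class Greeter:
    # `self` is no longer counted, so this method has five positional
    # arguments rather than six and no longer exceeds the assumed limit.
    def greet(self, a, b, c, d, e):
        ...
```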
## Test Plan
New tests have been added to the corresponding fixture.
Closes #9552.
## Summary
Long ago, we had a single `ruff` crate. We started to break that up, and
at some point, we wanted to separate the CLI from the core library. So
we created `ruff_cli`, which created a `ruff` binary. Later, the `ruff`
crate was renamed to `ruff_linter` and further broken up into additional
crates.
(This is all from memory -- I didn't bother to look through the history
to ensure that this is 100% correct :))
Now that `ruff` no longer exists, this PR renames `ruff_cli` to `ruff`.
The primary benefit is that the binary target and the crate name are now
the same, which helps with downstream tooling like `cargo-dist`, and
also removes some complexity from the crate and `Cargo.toml` itself.
## Test Plan
- Ran `rm -rf target/release`.
- Ran `cargo build --release`.
- Verified that `./target/release/ruff` was created.
## Summary
#5920 with a fix for the erroneous slice in `module_name`. Fixes #9547.
## Test Plan
Added `import bbb.ccc._ddd as eee` to the test fixture to ensure it no
longer panics.
`cargo test`
## Summary
This implements the rule proposed in #1198 (though it doesn't close the
issue, as there are some open questions about configuration that might
merit some further discussion).
## Test Plan
`cargo test` / `cargo insta review`. I also ran this PR branch on the CPython
codebase with `--fix --select=RUF022 --preview`, and the results looked
pretty good to me.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Andrew Gallant <andrew@astral.sh>
## Summary
Add an autofix for `deprecated_log_warn` (`PGH002`).
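Roughly, the autofix rewrites the deprecated call to its modern spelling -- a
minimal sketch:
```python
import logging

logging.warn("disk is nearly full")     # PGH002: `warn` is deprecated
logging.warning("disk is nearly full")  # what the autofix produces
```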
## Test Plan
`cargo test`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR is a more holistic fix for
https://github.com/astral-sh/ruff/issues/9534 and
https://github.com/astral-sh/ruff/issues/9159.
When we visit the AST, we track nodes that we need to visit _later_
(deferred nodes). For example, when visiting a function, we defer the
function body, since we don't want to visit the body until we've visited
the rest of the statements in the containing scope.
However, deferred nodes can themselves contain deferred nodes... For
example, a function body can contain a lambda (which contains a deferred
body). And then there are rarer cases, like a lambda inside of a type
annotation.
The aforementioned issues were fixed by reordering the deferral visits
to catch common cases. But even with those fixes, we still fail on cases
like:
```python
from __future__ import annotations
import re
from typing import cast
cast(lambda: re.match, 1)
```
This is because we don't expect lambdas to appear inside of type definitions.
This PR modifies the `Checker` to keep visiting until all the deferred
stacks are empty. We _already_ do this for any one kind of deferred
node; now, we do it for _all_ of them at a level above.
## Summary
This is effectively the same problem as
https://github.com/astral-sh/ruff/pull/9175. And this just papers over
it again, though I'm gonna try a more holistic fix in a follow-up PR.
The _real_ fix here is that we need to continue to visit deferred items
until they're exhausted since, e.g., we still get this case wrong
(flagging `re` as unused):
```python
import re
cast(lambda: re.match, 1)
```
Closes https://github.com/astral-sh/ruff/issues/9534.
## Summary
Closes #9508.
Add the `__prepare__` method to the dunder method list in
`is_known_dunder_method`.
## Test Plan
1. Add a `__prepare__` method to the `Apple` class in
`crates/ruff_linter/resources/test/fixtures/pylint/bad_dunder_method_name.py`.
2. Run `cargo test`.
Implements SIM113 from #998.
Added tests.
Limitations
- No fix yet
- Only flags cases where the index variable immediately precedes the `for`
loop (see the sketch below)
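For illustration, the pattern the rule targets and the `enumerate` form it
suggests (with hypothetical `items` and `process`):
```python
items = ["a", "b", "c"]

def process(index, item):
    print(index, item)

idx = 0
for item in items:  # SIM113: use `enumerate` instead of incrementing `idx` manually
    process(idx, item)
    idx += 1

# Suggested form:
for idx, item in enumerate(items):
    process(idx, item)
```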
@charliermarsh please review and let me know any improvements
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
In `logical_lines`'s `expand_indent`, respect the
`LinterSettings::tab_size` setting instead of hardcoding the tab size
to 8.
Also see [this
conversation](https://github.com/astral-sh/ruff/pull/9266#discussion_r1447102212)
## Test Plan
Tested by running `cargo test`
## Summary
Fixes#8334.
`Display` has been implemented for `ruff_workspace::Settings`, which
gives a much nicer and more readable output to `--show-settings`.
Internally, a `display_settings` utility macro has been implemented to
reduce the boilerplate of the display code.
### Work to be done
- [x] A lot of formatting for `Vec<_>` and `HashSet<_>` types has been
stubbed out, using `Debug` as a fallback. There should be a way to add
generic formatting support for these types as a modifier in
`display_settings`.
- [x] Several complex types were also stubbed out and need proper
`Display` implementations rather than falling back on `Debug`.
- [x] An open question needs to be answered: how important is it that
the output be valid TOML? Some types in settings, such as a hash-map
from a glob pattern to a multi-variant enum, will be hard to rework into
valid _and_ readable TOML.
- [x] Tests need to be implemented.
## Test Plan
Tests consist of a snapshot test for the default `--show-settings`
output and a doctest for `display_settings!`.
## Summary
Fix the message for `__aenter__` in PLC2801 (introduced in
https://github.com/astral-sh/ruff/pull/9166).
There is no `aenter` builtin in Python, so the current message is
misleading.
I took the message from the original lint:
https://github.com/pylint-dev/pylint/blob/main/pylint/constants.py#L211
P.S. I think there should be closer synchronization with the original
lint (e.g., at first glance, the current implementation will not flag
`__enter__`), but that is out of scope for this change.
## Test Plan
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
I noticed that some of the new error messages for unnecessary dunder
calls end with a doubled period:
```
sandpit\test.py:6:16: PLC2801 Unnecessary dunder call to `__getattribute__`. Access attribute directly or use getattr built-in function..
```
## Test Plan
Static analysis of the implementation, as this has no existing test
cases.
## Summary
We haven't found time to flip this on, so it feels best to remove it for
now -- we can always restore it from source when we get back to it.
## Summary
Closes #9319, implements the [`SIM911` rule from
`flake8-simplify`](https://github.com/MartinThoma/flake8-simplify/pull/183).
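Based on the upstream description, the rule flags zipping a dict's keys and
values where `.items()` does the same job -- a sketch, assuming that is indeed
the target pattern:
```python
d = {"a": 1, "b": 2}

for key, value in zip(d.keys(), d.values()):  # SIM911
    print(key, value)

for key, value in d.items():  # suggested replacement
    print(key, value)
```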
#### Note
I wasn't sure whether or not to include
```rs
if checker.settings.preview.is_disabled() {
    return;
}
```
at the beginning of the function containing the violation logic, given
that the rule is already declared as part of `RuleGroup::Preview`.
I've seen both variants, so I'd appreciate some feedback on that :)
## Summary
This PR attempts to improve `builtin-attribute-shadowing` (`A003`), a
rule which has been repeatedly criticized, but _does_ have value (just
not in the current form).
Historically, this rule would flag cases like:
```python
class Class:
    id: int
```
This led to an increasing number of exceptions and special-cases to the
rule over time to try and improve its specificity (e.g., ignore
`TypedDict`, ignore `@override`).
The crux of the issue is that given the above, referencing `id` will
never resolve to `Class.id`, so the shadowing is actually fine. There's
one exception, however:
```python
class Class:
    id: int

    def do_thing() -> id:
        pass
```
Here, `id` actually resolves to the `id` attribute on the class, not the
`id` builtin.
So this PR completely reworks the rule around this _much_ more targeted
case, which will almost always be a mistake: when you reference a class
member from within the class, and that member shadows a builtin.
Closes https://github.com/astral-sh/ruff/issues/6524.
Closes https://github.com/astral-sh/ruff/issues/7806.
Fixes #8721
## Summary
This implements the rule proposed in #8721, as RUF021. `and` always
binds more tightly than `or` when chaining the two together.
(This should definitely be autofixable, but I'm leaving that to a
follow-up PR for now.)
## Test Plan
`cargo test` / `cargo insta review`
## Summary
This PR enables Ruff to remove redefined imports, as in:
```python
import os
import os
print(os)
```
Previously, Ruff would flag `F811` on the second `import os`, but
couldn't fix it.
For now, this fix is marked as safe, but only available in preview.
Closes https://github.com/astral-sh/ruff/issues/3477.
## Summary
On `main`, we flag redefinitions in cases like:
```python
import os
x = 1
if x > 0:
    import os
```
That is, we consider these to be in the "same branch", since they're not
in disjoint branches. This matches Flake8's behavior, but it seems to
lead to false positives.
## Summary
Given a docstring like:
```python
def func(x: int, args: tuple[int]):
"""Toggle the gizmo.
Args:
x: Some argument.
args: Some other arguments.
"""
```
We were considering the `args:` descriptor to be an indented docstring
section header (since `Args:` is a valid header name). This led to very
confusing diagnostics.
This PR makes the parsing a bit more lax in this case, such that if we
see a nested header that's more deeply indented than the preceding
header, and the preceding section allows sub-items (like `Args:`), we
avoid treating the nested item as a section header.
Closes https://github.com/astral-sh/ruff/issues/9426.
## Summary
After we apply fixes, the source code might be transformed. And yet,
we're using the _unmodified_ source code to compute locations in some
cases (e.g., for displaying parse errors, or Jupyter Notebook cells).
This can lead to subtle errors in reporting, or even panics. This PR
modifies the linter to use the _transformed_ source code for such
computations.
Closes https://github.com/astral-sh/ruff/issues/9407.
I just fixed this false negative in flake8-pyi
(https://github.com/PyCQA/flake8-pyi/pull/460), and then realised ruff
has the exact same bug! Luckily it's a very easy fix.
(The bug is that unused protocols go undetected if they're generic.)
## Summary
Ensures that any lint rules that include line locations render them as
relative to the cell (and include the cell number) when inside a Jupyter
notebook.
Closes https://github.com/astral-sh/ruff/issues/6672.
## Test Plan
`cargo test`
## Summary
This PR adds an autofix for the newly added PYI058 rule (added in
#9313). ~~The PR's current implementation is that the fix is only
available if the fully qualified name of `Generator` or `AsyncGenerator`
is being used:~~
- ~~`-> typing.Generator` is converted to `-> typing.Iterator`;~~
- ~~`-> collections.abc.AsyncGenerator[str, Any]` is converted to `->
collections.abc.AsyncIterator[str]`;~~
- ~~but `-> Generator` is _not_ converted to `-> Iterator`. (It would
require more work to figure out if `Iterator` was already imported or
not. And if it wasn't, where should we import it from? `typing`,
`typing_extensions`, or `collections.abc`? It seems much more
complicated.)~~
The fix is marked as always safe for `__iter__` or `__aiter__` methods
in `.pyi` files, but unsafe for all such methods in `.py` files that
have more than one statement in the method body.
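For reference, a sketch of what the fix does to a simple `__iter__` method:
```python
from collections.abc import Generator, Iterator

class Before:
    def __iter__(self) -> Generator[int, None, None]:  # PYI058
        yield from range(3)

class After:
    def __iter__(self) -> Iterator[int]:  # what the fix produces
        yield from range(3)
```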
This felt slightly fiddly to accomplish, but I couldn't _see_ any
utilities in
https://github.com/astral-sh/ruff/tree/main/crates/ruff_linter/src/fix
that would have made it simpler to implement. Lmk if I'm missing
something, though -- my first time implementing an autofix! :)
## Test Plan
`cargo test` / `cargo insta review`.
## Summary
We had an early `continue` in this loop, and we weren't setting
`comparator = next;` when continuing... This PR removes the early
continue altogether for clarity.
Closes https://github.com/astral-sh/ruff/issues/9374.
## Test Plan
`cargo test`
## Summary
Changes message from `"Relative imports are banned"` to `"Prefer
absolute imports over relative imports from parent modules"`.
Closes #9363.
## Test Plan
`cargo test`
## Summary
This PR modifies our `Cargo.toml` files to use workspace dependencies
for _all_ dependencies, rather than the status quo of sporadically
trying to use workspace dependencies for those dependencies that are
used across multiple crates. I find the current situation more confusing
and harder to manage, since we have a mix of workspace and crate-local
dependencies, whereas this setup consistently uses the same approach for
all dependencies.
The "wsl" crate was last touched in 2019, whereas the "is-wsl" crate was
last updated in 2023. Additionally, it is unclear whether the "wsl"
crate supports both WSL1 and WSL2 (which was announced in 2019), whereas
the "is-wsl" crate explicitly supports both WSL1 and WSL2.
The required code changes are minimal, since both crates provide only
an `is_wsl() -> bool` function.
Paramiko's `set_missing_host_key_policy` has a mandatory positional
argument. The [current
documentation](https://docs.astral.sh/ruff/rules/ssh-no-host-key-verification/#example)
leads to code that doesn't run:
```
>>> from paramiko import client
>>> ssh_client = client.SSHClient()
>>> ssh_client.set_missing_host_key_policy()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: SSHClient.set_missing_host_key_policy() missing 1 required positional argument: 'policy'
```
This PR modifies the documentation to set the policy to the [default
`RejectPolicy`](https://docs.paramiko.org/en/latest/api/client.html#paramiko.client.SSHClient.set_missing_host_key_policy).
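A sketch of what the corrected example might look like (the exact snippet in
the docs may differ):
```python
from paramiko import client

ssh_client = client.SSHClient()
# Explicitly rejecting unknown host keys keeps the example runnable and safe.
ssh_client.set_missing_host_key_policy(client.RejectPolicy())
```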
Signed-off-by: Mikael Arguedas <mikael.arguedas@gmail.com>
## Summary
Remove the period from a couple short messages, for consistency with all
other short messages.
All other short rule messages lack such a period, except for long
messages made of multiple sentences.
## Test Plan
Tests modified accordingly.
Not sure if this would qualify as a breaking change because user-visible
messages are modified.
## Summary
This PR implements Y058 from flake8-pyi -- this is a new flake8-pyi rule
that was released as part of `flake8-pyi 23.11.0`. I've followed the
Python implementation as closely as possible (see
858c0918a8),
except that for the Ruff version, the rule applies to `.py` files as
well as `.pyi` files. (For `.py` files, we only emit the
diagnostic in very specific situations, however, as there's a much
higher likelihood of emitting false positives when applying this rule to
a `.py` file.)
## Test Plan
`cargo test`/`cargo insta review`
## Summary
Historically, we encoded this list by extracting the `__all__`. I went
to update it, but... is there really any value in it? Seems easier to
just treat `typing_extensions` as an alias for `typing`.
Closes https://github.com/astral-sh/ruff/issues/9334.
## Summary
This is a non-behavior-changing refactor to follow-up
https://github.com/astral-sh/ruff/pull/9321 by modifying
`DisplayParseError` to use owned data and make it usable as a
standalone error type (rather than using references and implementing
`Display`). I don't feel very strongly either way. I thought it was
awkward that the `FormatCommandError` had two branches in the display
path, and wanted to represent the `Parse` vs. other cases as a separate
enum, so here we are.
## Summary
The logic that detects continuations assumed that tokens themselves
cannot span multiple lines. However, strings _can_ -- even single-quoted
strings.
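For example, a single-quoted string token can span two physical lines via a
backslash-escaped newline:
```python
x = 'a string that \
continues on the next line'
```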
Closes https://github.com/astral-sh/ruff/issues/9323.
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies to instead pass the path directly in those cases.
## Summary
This PR modifies the semantics of `runtime-evaluated-decorators` to
respect decorators on both classes _and_ functions. Historically, this
only respected classes, since the common use-case is (e.g.)
`pydantic.BaseModel` -- but functions are equally valid.
Closes https://github.com/astral-sh/ruff/issues/9312.
## Test Plan
`cargo test`
## Summary
Given:
```python
from somewhere import get_cfg
def lookup_cfg(cfg_description):
cfg = get_cfg(cfg_description)
if cfg is not None:
return cfg
raise AttributeError(f"No cfg found matching {cfg_description}")
```
We were analyzing the method from last-to-first statement. So we saw the
`raise`, then assumed the method _always_ raised. In reality, though, it
_might_ return. This PR improves the branch analysis to respect these
mixed cases.
Closes https://github.com/astral-sh/ruff/issues/9269.
Closes https://github.com/astral-sh/ruff/issues/9304.
## Summary
Remove the `:` from the PLR0917 message so that all PLR09XX messages look the same:
```
PLR0904 Too many public methods (21 > 20)
PLR0911 Too many return statements (16 > 6)
PLR0912 Too many branches (13 > 12)
PLR0913 Too many arguments in function definition (10 > 5)
PLR0915 Too many statements (118 > 50)
PLR0917 Too many positional arguments: (15/5)
```
## Test Plan
---------
Signed-off-by: Mikael Arguedas <mikael.arguedas@gmail.com>
## Summary
Part of #970.
This adds Pylint's [R2044
empty-comment](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/empty-comment.html)
lint as well as an always-safe fix.
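A minimal hypothetical snippet of what the rule flags:
```python
x = 1  # a real comment
y = 2  #
#
z = 3
```
The fix removes the empty trailing comment after `y = 2` and empties the
comment-only line, while the real comment is left untouched.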
## Test Plan
The included snapshot verifies the following:
- A line containing only a non-empty comment is not changed
- A line containing leading whitespace before a non-empty comment is not
changed
- A line containing only an empty comment has its content deleted
- A line containing only leading whitespace before an empty comment has
its content deleted
- A line containing only leading and trailing whitespace on an empty
comment has its content deleted
- A line containing trailing whitespace after a non-empty comment is not
changed
- A line containing only a single newline character (i.e. a blank line)
is not changed
- A line containing code followed by a non-empty comment is not changed
- A line containing code followed by an empty comment has its content
deleted after the last non-whitespace character
- Lines containing code and no comments are not changed
- Empty comment lines within block comments are ignored
- Empty comments within triple-quoted sections are ignored
## Comparison to Pylint
Running Ruff and Pylint 3.0.3 with Python 3.12.0 against the
`empty_comment.py` file added in this PR, we see the following:
* Identical behavior:
  * empty_comment.py:3:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:4:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:5:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:18:0: R2044: Line with empty comment (empty-comment)
* Differing behavior:
  * Pylint doesn't ignore empty comments in block comments commonly used
    for visual spacing; I decided these were fine in this implementation
    since many projects use these and likely do not want them removed.
    * empty_comment.py:28:0: R2044: Line with empty comment (empty-comment)
  * Pylint detects "empty comments" within the triple-quoted section at
    the bottom of the file, which is arguably a bug in the Pylint
    implementation since these are not truly comments. These are ignored
    by this implementation.
    * empty_comment.py:37:0: R2044: Line with empty comment (empty-comment)
    * empty_comment.py:38:0: R2044: Line with empty comment (empty-comment)
    * empty_comment.py:39:0: R2044: Line with empty comment (empty-comment)
## Summary
Hey there 👋 thanks for this great project!
On Python code like the following:
```python
import yaml
from yaml.loader import SafeLoader

with MY_FILE_PATH.open("r") as my_file:
    my_data = yaml.load(my_file, Loader=SafeLoader)
```
ruff reports this error:
```
S506 Probable use of unsafe loader `SafeLoader` with `yaml.load`. Allows instantiation of arbitrary objects. Consider `yaml.safe_load`.
```
This PR is an attempt to support `SafeLoader` being imported from either
`yaml` or `yaml.loader`.
Disclaimer:
I am not familiar with Rust, so this is likely not the best way of doing
it. I'm interested in hearing how to adapt this PR to provide the same
behavior in a better way.
## Test Plan
The S506.py file was updated accordingly to cover the use cases, and the
tests were confirmed to pass with this change.
## Summary
If `RUF100` is ignored via `per-file-ignores`, we need to avoid raising
it. `RUF100` has special "self-ignore" logic, since the rule itself
deals with `# noqa` directives. This PR wires up `per-file-ignores` to
that "self-ignore" logic.
Closes https://github.com/astral-sh/ruff/issues/9297.
We should avoid adding `-> None` to stubs in `.pyi` files, along with a
few other cases. (We already ignore abstract methods.)
Closes https://github.com/astral-sh/ruff/issues/9270.
## Summary
This PR adds some helper structs to the linter paths to enable passing
in the pre-computed tokens and parsed source code during benchmarking,
to remove lexing and parsing from the overall linter benchmark
measurement. We already remove parsing for the formatter, and we have
separate benchmarks for the lexer and the parser, so this should make it
much easier to measure linter performance changes.
## Summary
Adds a rule to detect unions that include `typing.NoReturn` or
`typing.Never`. In such cases, the use of the bottom type is redundant.
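A sketch of the kind of annotation this flags (hypothetical function):
```python
from typing import NoReturn, Union

def fetch() -> Union[int, NoReturn]:  # flagged: the union is equivalent to `int`
    ...
```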
Closes https://github.com/astral-sh/ruff/issues/9113.
## Test Plan
`cargo test`
## Summary
Fixes #6956; details are in the issue.
Following advice in
https://github.com/astral-sh/ruff/issues/6956#issuecomment-1817672585,
this change separates expressions into 3 levels of "constant likelihood":
* literals, empty dicts and tuples... (definitely constant, level 2)
* CONSTANT_CASE vars (probably constant, level 1)
* all other expressions (level 0)

A comparison is marked as a yoda condition if the level is strictly
higher on its left-hand side.
Following
https://github.com/astral-sh/ruff/issues/6956#issuecomment-1697107822,
compound expressions of literals (e.g. `60 * 60`) are marked as
constants; this changes the current behaviour on
`SomeClass().settings.SOME_CONSTANT_VALUE > (60 * 60)` in the fixture
from error to ok.
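Applying those levels, roughly (hypothetical `delay` / `MAX_DELAY`):
```python
delay = 10
MAX_DELAY = 60

if 60 * 60 == delay:    # flagged: level 2 on the left vs. level 0 on the right
    ...
if MAX_DELAY == delay:  # flagged: level 1 vs. level 0
    ...
if delay == 60 * 60:    # ok: the left-hand side is not "more constant"
    ...
```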
## Summary
Given a function like:
```python
def func(x: int):
    if not x:
        raise ValueError
    else:
        raise TypeError
```
We now correctly use `NoReturn` as the return type, rather than `None`.
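That is, the suggested signature for the example above is now along these
lines:
```python
from typing import NoReturn

def func(x: int) -> NoReturn:
    if not x:
        raise ValueError
    else:
        raise TypeError
```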
Closes https://github.com/astral-sh/ruff/issues/9201.
## Summary
Part of #8771. flake8-pyi will emit a Y018 error for unused TypeVars,
ParamSpecs or TypeVarTuples; Ruff currently only emits PYI018 for unused
TypeVars.
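In stub terms, a sketch of what's now covered:
```python
from typing import ParamSpec, TypeVar, TypeVarTuple

_T = TypeVar("_T")         # already flagged when unused
_P = ParamSpec("_P")       # now also flagged when unused
_Ts = TypeVarTuple("_Ts")  # now also flagged when unused
```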
This is my first "proper" Ruff PR -- let me know if there's a better way
of doing this! Not sure if the repeated calls to `match_typing_expr()`
are ideal.
## Test Plan
I manually updated the fixtures to add some unused ParamSpecs and
TypeVarTuples, and then updated the snapshots using `cargo insta
review`. All tests then passed when run using `cargo test`.
## Summary
Given `Callable[[Callable[_P, _R]], Callable[_P, _R]]` from the
originating issue, when quoting `Callable`, we quoted the inner
`[Callable[_P, _R]]`, and then created a separate edit for the outer
`Callable`. Since there's an extra level of nesting in the subscript,
the edit for `[Callable[_P, _R]]` correctly did _not_ expand to the
entire expression. However, in this case, we should discard the inner
edit, since the expression is getting quoted by the outer edit anyway.
Closes https://github.com/astral-sh/ruff/issues/9162.
## Summary
A common mistake is to add quotes around one member in an `X | Y`-style
type union, as in:
```python
contract_versions_list: list[ContractVersion] | 'QuerySet[ContractVersion]' | None = None
```
However, doing so will lead to a runtime error if the annotation is
runtime-evaluated. This PR lints against such patterns.
Closes https://github.com/astral-sh/ruff/issues/9139.
## Summary
Fix dropped union expressions for piped non-types in the `PYI055` autofix.
If you had `type[int] | type[str] | str`, it would have dropped the
`str`, which breaks the type!
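A sketch of the bug, assuming the fix collapses the `type[...]` members into a
single `type[...]`:
```python
x: type[int] | type[str] | str

# Buggy autofix (the `str` member was dropped):
x: type[int | str]

# Corrected autofix:
x: type[int | str] | str
```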
Closes #9156
## Test Plan
`cargo test`
Fix #9080
Example, where `[]` is a 2 byte non-breaking space:
```
def f():
""" Docstring header
^^^^ Real indentation is 4 chars
docstring body, over-indented
^^^^^^ Over-indentation is 6 - 4 = 2 chars due to this line
[] [] docstring body 2, further indented
^^^^^ We take these 4 chars/5 bytes to match the docstring ...
^^^ ... and these 2 chars/3 bytes to remove the `over_indented_size` ...
^^ ... but preserve this real indent
```
Given:
```python
x: DataFrame[
    int
] = 1
```
We currently wrap the annotation in single quotes, which leads to a
syntax error:
```python
x: "DataFrame[
    int
]" = 1
```
There are a few options for what to suggest for users here... Use triple
quotes:
```python
x: """DataFrame[
    int
]""" = 1
```
Or, use an implicit string concatenation (which may require
parentheses):
```python
x: ("DataFrame["
"int"
"]") = 1
```
The solution I settled on here is to use the `Generator`, which
effectively means we write it out on a single line, like:
```python
x: "DataFrame[int]" = 1
```
It's kind of the "least opinionated" solution, but it does mean we'll
expand to a very long line in some cases.
Closes https://github.com/astral-sh/ruff/issues/9136.
This PR adds an `as_slice` method to all the string nodes, which returns
all the parts of the node as a slice. This will be useful in the next PR
to split the string formatting to use this method to extract the _single
node_ or _implicitly concatenated nodes_.
## Summary
Adds support for SARIF v2.1.0 output to the CLI, usable via the
`output-format` parameter:
`ruff . --output-format=sarif`
Includes a few changes I wasn't sure of, namely:
* Adds a few derives for Clone & Copy, which I think could be removed
with a little extra work as well.
## Test Plan
I built and ran this against several large open source projects and
verified that the output SARIF was valid, using [Microsoft's SARIF
validator tool](https://sarifweb.azurewebsites.net/Validation).
I've also attached an output of the sarif generated by this version of
ruff on the main branch of django at commit: b287af5dc9
[django_main_b287af5dc9_sarif.json](https://github.com/astral-sh/ruff/files/13626222/django_main_b287af5dc9_sarif.json)
Note: this needs to be regenerated with the latest changes and
confirmed.
## Open Points
- [ ] Convert to just using all Rules all the time
- [ ] Fix the issue with getting the file URI when compiling for
WebAssembly
## Summary
This allows us to fix usages like:
```python
from pandas import DataFrame
def baz() -> DataFrame:
    ...
```
By quoting the `DataFrame` in `-> DataFrame`. Without quotes, moving
`from pandas import DataFrame` into an `if TYPE_CHECKING:` block will
fail at runtime, since Python tries to evaluate the annotation to add it
to the function's `__annotations__`.
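With quoting enabled, the example above can then be rewritten along these
lines:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from pandas import DataFrame

def baz() -> "DataFrame":
    ...
```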
Unfortunately, this does require us to split our "annotation kind" flags
into three categories, rather than two:
- `typing-only`: The annotation is only evaluated at type-checking-time.
- `runtime-evaluated`: Python will evaluate the annotation at runtime
(like above) -- but we're willing to quote it.
- `runtime-required`: Python will evaluate the annotation at runtime
(like above), and some library (like Pydantic) needs it to be available
at runtime, so we _can't_ quote it.
This functionality is gated behind a setting
(`flake8-type-checking.quote-annotations`).
Closes https://github.com/astral-sh/ruff/issues/5559.
## Summary
Adds `find_assigned_value`, a function which gets the `&Expr` assigned
to a given `id`, if one exists in the semantic model.
Open TODOs:
- [ ] Handle `binding.kind.is_unpacked_assignment()`: I am a bit confused
by this one. The snippet from its documentation does not appear to be
counted as an unpacked assignment and the only ones I could find for
which that was true were invalid Python like:
```python
x, y = 1
```
- [ ] How to handle `AugAssign`. Can we combine statements like:
```python
(a, b) = [(1, 2, 3), (4,)]
a += (6, 7)
```
to get the full value for `a`? The code currently just returns `None`
for these assignment types.
- [ ] Multi-target assigns:
```python
m_c = (m_d, m_e) = (0, 0)
trio.sleep(m_c) # OK
trio.sleep(m_d) # TRIO115
trio.sleep(m_e) # TRIO115
```
## Test Plan
Used the function in two rules:
- `TRIO115`
- `PERF101`
Expanded both of their fixtures with explicit multi-target checks.
## Summary
A fairly common pattern which triggers F841 is unused variables from
tuple assignments, e.g.:
user, created = User.objects.get_or_create(...)
^ F841: Local variable `created` is assigned to but never used
This error is currently not auto-fixable.
This PR adds support for fixing the error automatically by renaming the
unused variable to have a leading underscore (i.e. `_created`) **iff**
the `dummy-variable-rgx` setting would match it.
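Concretely (assuming the default `dummy-variable-rgx`, which matches
`_`-prefixed names), the fix turns the example above into:
```python
user, _created = User.objects.get_or_create(...)
```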
I considered using `renamers::Renamer` here, but because, by the nature
of the error, there should be no references to the variable, that seemed like
overkill. Also note that the fix might break by shadowing the new name
if it is already used elsewhere in the scope. I left it as is because
1. the renamed variable matches the "unused" regex, so it should
hopefully not already be used,
2. the fix is marked as unsafe so it should be reviewed manually
anyways, and
3. I'm not actually sure how to check the scope for the new variable
name 😅