## Summary
Update the name of `ASYNC109` to match
[upstream](https://flake8-async.readthedocs.io/en/latest/rules.html).
Also update the functionality to match upstream by supporting
additional context managers from `asyncio` and `anyio`. This doesn't
change any of the detection logic, but the rule now recommends
additional context managers from `asyncio` or `anyio` depending on
context.
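For illustration, here's a sketch of the pattern the rule flags and the
kind of context manager it now recommends in an `asyncio` context (the
function names are hypothetical):
```python
import asyncio

# Flagged (ASYNC109): async function that takes a manual `timeout` parameter
async def fetch(url, timeout: float = 10.0): ...

# Recommended instead: drop the parameter and use a timeout context
# manager at the call site (`asyncio.timeout` requires Python 3.11+)
async def main():
    async with asyncio.timeout(10.0):
        await fetch("https://example.com")
```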
Part of https://github.com/astral-sh/ruff/issues/12039.
## Test Plan
Added a fixture for the `asyncio` recommendation.
## Summary
Implement mutable-contextvar-default (B039) which was added to
flake8-bugbear in https://github.com/PyCQA/flake8-bugbear/pull/476.
This rule is similar to [mutable-argument-default
(B006)](https://docs.astral.sh/ruff/rules/mutable-argument-default) and
[function-call-in-default-argument
(B008)](https://docs.astral.sh/ruff/rules/function-call-in-default-argument),
except that it checks the `default` keyword argument to
`contextvars.ContextVar`.
```
B039.py:19:26: B039 Do not use mutable data structures for ContextVar defaults
|
18 | # Bad
19 | ContextVar("cv", default=[])
| ^^ B039
20 | ContextVar("cv", default={})
21 | ContextVar("cv", default=list())
|
= help: Replace with `None`; initialize with `.set()` after checking for `None`
```
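Following the help text, the suggested replacement pattern looks roughly
like this (a sketch, not an autofix):
```python
from contextvars import ContextVar

cv: ContextVar[list | None] = ContextVar("cv", default=None)

# ... later, at the point of use:
if cv.get() is None:
    cv.set([])
```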
In the upstream flake8 plugin, this rule is written expressly as a
corollary to B008 and shares much of its logic. Likewise, this
implementation reuses the logic of the Ruff implementation of B008,
namely
f765d19402/crates/ruff_linter/src/rules/flake8_bugbear/rules/function_call_in_argument_default.rs (L104-L106)
and
f765d19402/crates/ruff_linter/src/rules/flake8_bugbear/rules/mutable_argument_default.rs (L106)
Thus, this rule deliberately replicates B006's and B008's heuristics.
For example, this rule assumes that all functions are mutable unless
otherwise qualified. If improvements are to be made to B039 heuristics,
they should probably be made to B006 and B008 as well (whilst trying to
match the upstream implementation).
This rule does not have an autofix because it is unknown where the
`ContextVar` is next used (and it might not be within the same file).
Closes #12054
## Test Plan
`cargo nextest run`
## Summary
This PR updates the `F811` rule to include assignments as possible
shadowed bindings. This fixes #11828.
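For example, a minimal sketch of the case this now catches:
```python
foo = 1          # an assignment binds `foo` but it's never used


def foo(): ...   # F811: redefinition of unused `foo`
```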
## Test Plan
Added a test file, `F811_30.py`, which includes a redefinition after an
assignment, along with a verified snapshot file.
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser asks the token
source for the next token, and only then does the lexer continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When `next_token` is called, the current token is
updated with the newly lexed token.
The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context. This is going to be really useful for
making the parser error resilient. For example, currently the emitted
tokens remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in
a non-parenthesized context. These different kinds of newlines are also
used to emit the indentation tokens, which are important for the parser
as they're used to determine the start and end of a block.
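For example, given the following code, the comments show which kind of
newline token the lexer emits (a simplified sketch):
```python
x = (
    1 +  # `NonLogicalNewline`: we're inside parentheses
    2    # `NonLogicalNewline`
)        # `Newline`: the statement ends here
y = 2    # `Newline`
```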
Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.
This raises the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which contains only the subset of token kinds that
carry an owned value. It is stored on the lexer and requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.
* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For the `match` soft keyword, there are certain
cases that can't be classified based on lookahead alone (see the example
after this list). The idea here is to create a checkpoint and keep
parsing. Based on whether the parsing was successful and which tokens
are ahead, we can classify the remaining cases. Refer to #11443 for more
details.
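To make the ambiguity concrete, here's the classic pair of cases where
`match` can only be classified by parsing ahead:
```python
match(x)    # a call to a function named `match`: here it's an identifier

match (x):  # a `match` statement with the parenthesized subject `(x)`
    case _:
        pass
```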
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason being that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser are working together, the API
corresponding to (2) can no longer exist. The final API is described in
the PR description: https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens. These include:
1. `up_to_first_unknown` which replaces the `tokenize` function
2. `in_range` and `after`, which replace the `lex_starts_at` function:
the former returns the tokens within the given range while the latter
returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens, but it can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for `tokens_and_ranges` function.
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all outputs were the same.**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and provides Ruff the possibility to lint even if the
source code contains syntax errors.
* Potentially resolves #11619 (nondeterministic hashmap order across
different architectures) in F401 by replacing a hashmap that has
nondeterministic traversal order with an ordered mapping.
I'm not sure how to test this with our CI/CD. I don't have an s390x
machine at home. Should I try it in Qemu?
## Summary
This PR ensures that if a variable is bound via `global`, and then the
`global` is read, the originating variable is also marked as read. It's
not perfect, in that it won't detect _rebindings_, like:
```python
from app import redis_connection
def func():
    global redis_connection

    redis_connection = 1
    redis_connection()
```
So, above, `redis_connection` is still marked as unused.
But it does avoid flagging `redis_connection` as unused in:
```python
from app import redis_connection
def func():
    global redis_connection

    redis_connection()
```
Closes https://github.com/astral-sh/ruff/issues/11518.
## Summary
This PR updates the semantic model to detect attribute docstrings.
Refer to [PEP 258](https://peps.python.org/pep-0258/#attribute-docstrings)
for the definition of an attribute docstring.
This PR doesn't add full support for it but only considers string
literals as attribute docstrings in the following cases:
1. A string literal following an assignment statement in the **global
scope**.
2. A global class attribute
For an assignment statement, it's considered an attribute docstring only
if the target expression is a name expression (`x = 1`). So, chained
assignment, multiple assignment or unpacking, and starred expression,
which are all valid in the target position, aren't considered here.
In an `__init__` method, an assignment to the `self` variable like `self.x = 1`
is also a candidate for an attribute docstring. **This PR does not
support this position.**
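For example, a sketch of the two supported positions:
```python
x = 1
"""This string literal is an attribute docstring for the global `x`."""


class Foo:
    y = 2
    """This string literal is an attribute docstring for `Foo.y`."""
```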
## Test Plan
I used the following source code along with a print statement to verify
that the attribute docstring detection is correct.
Refer to the PR description for the code snippet.
I'll add this in the follow-up PR
(https://github.com/astral-sh/ruff/pull/11302) which uses this method.
(Supersedes #9152, authored by @LaBatata101)
## Summary
This PR replaces the current parser, generated from LALRPOP, with a
hand-written recursive descent parser.
It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.
For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.
## Test Plan
Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.
## Acknowledgements
- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)
---------
Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
If the user is analyzing a script (i.e., we have no module path), it
seems reasonable to use the script name when trying to identify paths to
objects defined _within_ the script.
Closes https://github.com/astral-sh/ruff/issues/10960.
## Test Plan
Ran:
```shell
ruff check --isolated --select=B008 \
    --config 'lint.flake8-bugbear.extend-immutable-calls=["test.A"]' \
    test.py
```
On:
```python
class A: pass
def f(a=A()):
    pass
```
## Summary
Fixes#3011.
Type checkers currently allow forward references in all contexts in stub
files, and stubs frequently make use of this capability (although it
doesn't actually seem to be spec'd anywhere -- neither in PEP 484, nor
https://typing.readthedocs.io/en/latest/source/stubs.html#id6, nor the
CPython typing docs). Implementing it so that Ruff allows forward
references in _all contexts_ in stub files seems non-trivial, however
(or at least, I couldn't figure out how to do it easily), so this PR
does not do that. Perhaps it _should_; if we think this approach isn't
principled enough, I'm happy to close it and postpone changing anything
here.
However, this does reduce the number of F821 errors Ruff emits on
typeshed down from 76 to 2, which would mean that we could enable the
rule at typeshed. The remaining 2 F821 errors can be trivially fixed at
typeshed by moving definitions around; forward references in class bases
were really the only remaining places where there was a real _use case_
for forward references in stub files that Ruff wasn't yet allowing.
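For example, the class-bases case that this PR newly allows in stub
files (a minimal sketch):
```python
# example.pyi
class Child(Parent): ...  # forward reference in a class base: no F821

class Parent: ...
```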
## Test plan
`cargo test`. I also ran this PR branch on typeshed to check to see if
there were any new false positives caused by the changes here; there
were none.
## Summary
This PR adds a new semantic model flag to indicate that the checker is
inside an f-string replacement field. This will be used to ignore
certain checks if the target version doesn't support a specific feature
like PEP 701.
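For context, the replacement field is the brace-delimited part of an
f-string, e.g.:
```python
f"result: {value!r:>10}"
#         ^^^^^^^^^^^^^ replacement field: the new flag is set while the
#                       checker is inside this region
```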
fixes: #10761
## Test Plan
Add a test case from the raised issue.
## Summary
In https://github.com/astral-sh/ruff/pull/10341, we fixed some false
positives in `.pyi` files, but introduced others. This PR effectively
reverts the change in #10341 and fixes it in a slightly different way.
Instead of changing the _bindings_ we generate in the semantic model in
`.pyi` files, we instead change how we _resolve_ them.
Closes https://github.com/astral-sh/ruff/issues/10509.
## Summary
Given `del X`, we'll typically add a `BindingKind::Deletion` to `X` to
shadow the current binding. However, if the deletion is inside of a
conditional operation, we _won't_, as in:
```python
def f():
    global X

    if X > 0:
        del X
```
We will, however, track it as a reference to the binding. This PR adds
the expression context to those resolved references, so that we can
detect that the `X` in `global X` was "assigned to".
Closes https://github.com/astral-sh/ruff/issues/10397.
## Summary
When you try to remove an internal representation leaking into another
type and end up rewriting a simple version of `smallvec`.
The goal of this PR is to replace the `Box<[&'a str]>` with
`Box<QualifiedName>` to avoid leaking the internal `QualifiedName`
representation (and it gives us a nicer API too). However, doing this
while `QualifiedName` uses `SmallVec` internally gives us all sorts of
funny lifetime errors. I was lost, but @BurntSushi came to my rescue.
He figured out that `smallvec` has a variance problem which is already
tracked in https://github.com/servo/rust-smallvec/issues/146
To fix the variance problem, I could either use the smallvec 2 alpha 4
release or implement our own small vec. I went with implementing our own small vec
for this specific problem. It obviously isn't as sophisticated as
smallvec (only uses safe code), e.g. it doesn't perform any size
optimizations, but it does its job.
Other changes:
* Removed `Imported::qualified_name` (the version that returns a
`String`). This can be replaced by calling `ToString` on the qualified
name.
* Renamed `Imported::call_path` to `qualified_name` and changed its
return type to `&QualifiedName`.
* Renamed `QualifiedName::imported` to `user_defined`, which is the more
common term when distinguishing builtins from user-defined functions.
## Test plan
`cargo test`
The expression types in our AST are called `ExprYield`, `ExprAwait`,
`ExprStringLiteral` etc., except `ExprNamedExpr`, `ExprIfExpr` and
`ExprGeneratorExpr`. This seems to align with [Python AST's
naming](https://docs.python.org/3/library/ast.html) but feels
inconsistent and excessive.
This PR removes the `Expr` postfix from `ExprNamedExpr`, `ExprIfExpr`,
and `ExprGeneratorExpr`.
## Summary
Charlie can probably explain this better than I but it turns out,
`CallPath` is used for two different things:
* To represent unqualified names like `version` where `version` can be a
local variable or imported (e.g. `from sys import version`, where the
fully qualified name is `sys.version`)
* To represent resolved, fully qualified names
This PR splits `CallPath` into two types to make this distinction clear.
> Note: I haven't renamed all `call_path` variables to `qualified_name`
or `unqualified_name`. I can do that if that's welcomed but I first want
to get feedback on the approach and naming overall.
## Test Plan
`cargo test`
## Summary
This PR changes the `CallPath` type alias to a newtype wrapper.
A newtype wrapper allows us to limit the API and to experiment with
alternative ways to implement matching on `CallPath`s.
## Test Plan
`cargo test`
## Summary
Allows, e.g.:
```python
import os
os.environ["WORLD_SIZE"] = "1"
os.putenv("CUDA_VISIBLE_DEVICES", "4")
import torch
```
For now, this is only allowed in preview.
Closes https://github.com/astral-sh/ruff/issues/10059
## Summary
This PR introduces a new semantic model flag `DOCSTRING` which suggests
that the model is currently in a module / class / function docstring.
This is the first step in eliminating the docstring detection state
machine which is prone to bugs as stated in #7595.
## Test Plan
~TODO: Is there a way to add a test case for this?~
I tested this using the following code snippet and adding a print
statement in the `string_like` analyzer to print if we're currently in a
docstring or not.
<details><summary>Test code snippet:</summary>
<p>
```python
"Docstring" ", still a docstring"
"Not a docstring"
def foo():
    "Docstring"
    "Not a docstring"
    if foo:
        "Not a docstring"
        pass


class Foo:
    "Docstring"
    "Not a docstring"

    foo: int
    "Unofficial variable docstring"

    def method():
        "Docstring"
        "Not a docstring"
        pass


def bar():
    "Not a docstring".strip()


def baz():
    _something_else = 1
    """Not a docstring"""
```
</p>
</details>
## Summary
Implement [implicit readlines
(FURB129)](https://github.com/dosisod/refurb/blob/master/refurb/checks/iterable/implicit_readlines.py)
lint.
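A sketch of what the rule flags and the preferred form:
```python
with open("file.txt") as f:
    for line in f.readlines():  # FURB129
        ...

# Preferred: file objects are already iterable by line
with open("file.txt") as f:
    for line in f:
        ...
```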
## Notes
I need some help/an opinion on the suggested implementation.
This implementation differs from the original one in `refurb` in the
following way: it syntactically checks for a call to a method named
`readlines()` inside a `for` loop or generator expression. The
implementation from refurb also
[checks](https://github.com/dosisod/refurb/blob/master/refurb/checks/iterable/implicit_readlines.py#L43)
that the callee is a variable of type `io.TextIOWrapper` or
`io.BufferedReader`.
- I do not see a simple way to implement the same logic.
- The best I can have is something like
```rust
checker
    .semantic()
    .binding(checker.semantic().resolve_name(attr_expr.value.as_name_expr()?)?)
    .statement(checker.semantic())
```
and analyze the cases. But this would not be about types; it would be
about guessing the type from the assignment (or `with`) expression.
- Also, this logic has several false negatives when the callee is not a
variable but the result of a function call (e.g. `open(...)`).
- On the other hand, maybe it is good to lint this on other objects
where this suggestion is not safe, and push the developers to change
their interfaces to be less surprising compared with the standard
library.
- Anyway, while the current implementation has false positives (I
mentioned some of them in the tests), I marked the fixes as unsafe.
## Summary
I was surprised to learn that we treat `x` in `[_ for x in y]` as an
"assignment" binding kind, rather than a dedicated comprehension
variable.
## Summary
This PR renames the semantic model flag `MODULE_DOCSTRING` to
`MODULE_DOCSTRING_BOUNDARY`. The main reason is for readability and for
the new semantic model flag `DOCSTRING` which tracks that the model is
in a module / class / function docstring.
I got confused earlier with the name until I looked at the use case. The
`_BOUNDARY` suffix is more appropriate for this use-case and is
consistent with other flags.
## Summary
This is a simple idea to avoid unnecessary work in the linter,
especially for rules that run on all name and/or all attribute nodes.
Imagine a rule like the NumPy deprecation check. If the user never
imported `numpy`, we should be able to skip that rule entirely --
whereas today, we do a `resolve_call_path` check on _every_ name in the
file. It turns out that there's basically a finite set of modules that
we care about, so we now track imports on those modules as explicit
flags on the semantic model. In rules that can _only_ ever trigger if
those modules were imported, we add a dedicated and extremely cheap
check to the top of the rule.
We could consider generalizing this to all modules, but I would expect
that not to be much faster than `resolve_call_path`, which is just a
hash map lookup on `TextSize` anyway.
It would also be nice to make this declarative, such that rules could
declare the modules they care about, and the analyzers could call the
rules as appropriate. But I don't think such a design should block
merging this.
Implements SIM113 from #998.
Added tests.
Limitations:
- No fix yet
- Only flags cases where the index variable immediately precedes the `for` loop
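A sketch of the flagged pattern (`items` is a hypothetical iterable):
```python
idx = 0
for item in items:
    print(idx, item)
    idx += 1  # SIM113: use `enumerate` instead of a manual counter

# Preferred:
for idx, item in enumerate(items):
    print(idx, item)
```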
@charliermarsh please review and let me know any improvements
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR attempts to improve `builtin-attribute-shadowing` (`A003`), a
rule which has been repeatedly criticized, but _does_ have value (just
not in the current form).
Historically, this rule would flag cases like:
```python
class Class:
    id: int
```
This led to an increasing number of exceptions and special cases in the
rule over time to try and improve its specificity (e.g., ignore
`TypedDict`, ignore `@override`).
The crux of the issue is that given the above, referencing `id` will
never resolve to `Class.id`, so the shadowing is actually fine. There's
one exception, however:
```python
class Class:
    id: int

    def do_thing() -> id:
        pass
```
Here, `id` actually resolves to the `id` attribute on the class, not the
`id` builtin.
So this PR completely reworks the rule around this _much_ more targeted
case, which will almost always be a mistake: when you reference a class
member from within the class, and that member shadows a builtin.
Closes https://github.com/astral-sh/ruff/issues/6524.
Closes https://github.com/astral-sh/ruff/issues/7806.
## Summary
On `main`, we flag redefinitions in cases like:
```python
import os
x = 1
if x > 0:
    import os
```
That is, we consider these to be in the "same branch", since they're not
in disjoint branches. This matches Flake8's behavior, but it seems to
lead to false positives.
## Summary
Historically, we encoded this list by extracting the `__all__`. I went
to update it, but... is there really any value in it? Seems easier to
just treat `typing_extensions` as an alias for `typing`.
Closes https://github.com/astral-sh/ruff/issues/9334.
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies those cases to instead pass the path directly.
## Summary
Given:
```python
from somewhere import get_cfg
def lookup_cfg(cfg_description):
    cfg = get_cfg(cfg_description)
    if cfg is not None:
        return cfg
    raise AttributeError(f"No cfg found matching {cfg_description}")
```
We were analyzing the method from last-to-first statement. So we saw the
`raise`, then assumed the method _always_ raised. In reality, though, it
_might_ return. This PR improves the branch analysis to respect these
mixed cases.
Closes https://github.com/astral-sh/ruff/issues/9269.
Closes https://github.com/astral-sh/ruff/issues/9304.
## Summary
Adds a rule to detect unions that include `typing.NoReturn` or
`typing.Never`. In such cases, the use of the bottom type is redundant.
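A sketch of what the rule flags:
```python
from typing import NoReturn, Union


def f() -> Union[int, NoReturn]:  # flagged: equivalent to just `int`
    ...
```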
Closes https://github.com/astral-sh/ruff/issues/9113.
## Test Plan
`cargo test`
## Summary
This allows us to fix usages like:
```python
from pandas import DataFrame
def baz() -> DataFrame:
    ...
```
By quoting the `DataFrame` in `-> DataFrame`. Without quotes, moving
`from pandas import DataFrame` into an `if TYPE_CHECKING:` block will
fail at runtime, since Python tries to evaluate the annotation to add it
to the function's `__annotations__`.
Unfortunately, this does require us to split our "annotation kind" flags
into three categories, rather than two:
- `typing-only`: The annotation is only evaluated at type-checking-time.
- `runtime-evaluated`: Python will evaluate the annotation at runtime
(like above) -- but we're willing to quote it.
- `runtime-required`: Python will evaluate the annotation at runtime
(like above), and some library (like Pydantic) needs it to be available
at runtime, so we _can't_ quote it.
This functionality is gated behind a setting
(`flake8-type-checking.quote-annotations`).
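With the setting enabled, the resulting fix looks roughly like:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from pandas import DataFrame


def baz() -> "DataFrame":
    ...
```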
Closes https://github.com/astral-sh/ruff/issues/5559.
## Summary
Adds `find_assigned_value` a function which gets the `&Expr` assigned to
a given `id` if one exists in the semantic model.
Open TODOs:
- [ ] Handle `binding.kind.is_unpacked_assignment()`: I am a bit
confused by this one. The snippet from its documentation does not appear
to be counted as an unpacked assignment, and the only ones I could find
for which that was true were invalid Python like:
```python
x, y = 1
```
- [ ] How to handle AugAssign. Can we combine statements like:
```python
(a, b) = [(1, 2, 3), (4,)]
a += (6, 7)
```
to get the full value for `a`? The code currently just returns `None`
for these assignment types.
- [ ] Multi target assigns
```python
m_c = (m_d, m_e) = (0, 0)
trio.sleep(m_c) # OK
trio.sleep(m_d) # TRIO115
trio.sleep(m_e) # TRIO115
```
## Test Plan
Used the function in two rules:
- `TRIO115`
- `PERF101`
Expanded both of their fixtures with explicit multi-target assignment checks.
This PR allows `matplotlib.use` calls to be interspersed between imports
without triggering `E402`. This is a pragmatic choice as it's common to require
`matplotlib.use` calls prior to importing from within `matplotlib`
itself.
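For example, this no longer triggers `E402` on the second import (a
minimal sketch):
```python
import matplotlib

matplotlib.use("Agg")  # allowed between imports

import matplotlib.pyplot as plt
```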
Closes https://github.com/astral-sh/ruff/issues/9091.
## Summary
It's common to interleave a `sys.path` modification between imports at
the top of a file. This is a frequent cause of `# noqa: E402` false
positives, as seen in the ecosystem checks. This PR modifies E402 to
omit such modifications when determining the "import boundary".
(We could consider linting against `sys.path` modifications, but that
should be a separate rule.)
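For example (a minimal sketch; `vendored_package` is hypothetical):
```python
import sys

sys.path.insert(0, "./vendor")  # no longer ends the import boundary

import vendored_package  # previously a common E402 false positive
```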
Closes: https://github.com/astral-sh/ruff/issues/5557.
Rebase of #6365 authored by @davidszotten.
## Summary
This PR updates the AST structure for f-string elements.
The main **motivation** behind this change is to have a dedicated node
for the string part of an f-string. Previously, the existing
`ExprStringLiteral` node was used for this purpose, which isn't exactly
correct: the `ExprStringLiteral` node should include the quotes in its
range, but the f-string literal element doesn't include any quotes, as
it's a specific part within an f-string. For example,
```python
f"foo {x}"
# ^^^^
# This is the literal part of an f-string
```
The introduction of the `FStringElement` enum helps here: it represents
either the literal part or the expression part of an f-string.
### Rule Updates
This means that there'll be two nodes representing a string, depending
on the context: one for a normal string literal and one for a string
literal within an f-string. The AST checker is updated to accommodate
this change, and the rules which work on string literals are updated to
check the literal parts of f-strings as well.
#### Notes
1. The `Expr::is_literal_expr` method would check for
`ExprStringLiteral` and return true if so. But now that we don't
represent the literal part of an f-string using that node, this improves
the method's behavior and confines it to actual expressions. We do have
the `FStringElement::is_literal` method.
2. We avoid checking if we're in an f-string context before adding to
`string_type_definitions` because the f-string literal is now a
dedicated node and not part of `Expr`.
3. Annotations cannot use f-strings, so we avoid changing any rules
which work on annotations and check for `ExprStringLiteral`.
## Test Plan
- All references of `Expr::StringLiteral` were checked to see if any of
the rules require updating to account for the f-string literal element
node.
- New test cases are added for rules which check against the literal
part of an f-string.
- Check the ecosystem results and ensure it remains unchanged.
## Performance
There's a performance penalty in the parser. The reason for this remains
unknown; it seems that the generated assembly code is now different for
the `__reduce154` function. The reduce function body just pops the
`ParenthesizedExpr` on top of the stack and pushes it back with the new
location.
- The size of `FStringElement` enum is the same as `Expr` which is what
it replaces in `FString::format_spec`
- The size of `FStringExpressionElement` is the same as
`ExprFormattedValue` which is what it replaces
I tried reducing the `Expr` enum from 80 bytes to 72 bytes but it hardly
resulted in any performance gain. The difference can be seen here:
- Original profile: https://share.firefox.dev/3Taa7ES
- Profile after boxing some node fields:
https://share.firefox.dev/3GsNXpD
### Backtracking
I tried backtracking the changes to see if any isolated change produced
this regression. The problem here is that the overall change is so small
that there's only a single checkpoint where I can backtrack, and that
checkpoint results in the same regression. This checkpoint is to revert
the `FString::format_spec` field back to using `Expr`. After this point,
the change reverts to the original implementation.
## Review process
The review process is similar to #7927. The first set of commits update
the node structure, parser, and related AST files. Then, further commits
update the linter and formatter part to account for the AST change.
---------
Co-authored-by: David Szotten <davidszotten@gmail.com>
This PR renames the semantic model flag `LITERAL` to `TYPING_LITERAL` to
better reflect its purpose. The main motivation behind this change is to
avoid any confusion with the "literal" terminology used in the AST for
literal nodes like string, bytes, numbers, etc.
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.
The update itself introduced a new clippy lint about superfluous hashes
in raw strings, which got removed.
I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
`"forbid"` because the latter now actually bans unsafe code:
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> crates/ruff_source_file/src/newlines.rs:62:17
|
62 | #[allow(unsafe_code)]
| ^^^^^^^^^^^ overruled by previous forbid
|
= note: `forbid` lint level was set on command line
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
We already support inserting imports for `I002` -- this PR just adds the
same fix for `FA102`, which is explicitly about `from __future__ import
annotations`.
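For example, a sketch of the fix on a target version without native PEP
604 support:
```python
# Before: FA102, since `int | None` needs `from __future__ import annotations`
def f(x: int | None) -> None: ...

# After the fix:
from __future__ import annotations


def f(x: int | None) -> None: ...
```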
Closes https://github.com/astral-sh/ruff/issues/8682.
## Summary
This PR adds (unsafe) fixes to the flake8-annotations rules that enforce
missing return types, offering to automatically insert type annotations
for functions with literal return values. The logic is smart enough to
generate simplified unions (e.g., `float` instead of `int | float`) and
deal with implicit returns (`return` without a value).
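For example, a sketch of the new fix behavior:
```python
def func(x: int):
    if x > 0:
        return 1  # an `int` ...
    return 0.5    # ... and a `float`: the fix adds `-> float`, the
                  # simplified form of `int | float`
```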
Closes https://github.com/astral-sh/ruff/issues/1640 (though we could
open a separate issue for inferring parameter types).
Closes https://github.com/astral-sh/ruff/issues/8213.
## Test Plan
`cargo test`
## Summary
An assignment can be _both_ (e.g.) a loop variable _and_ assigned via
unpacking. In other words, unpacking is a quality of an assignment, not
a _kind_.
## Summary
This brings Ruff's behavior in line with what `pep8-naming` already does
and thus closes #8397.
I had initially implemented this to look at the last segment of a dotted
path only when the entry in the `*-decorators` setting started with a
`.`, but in the end I thought it better to remain consistent with
`pep8-naming` and match against the last segment of the decorator name
in every case.
If you prefer to diverge from this in favor of less ambiguity in the
configuration, let me know and I'll change it so you would need to put
e.g. `.expression` in the `classmethod-decorators` list.
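For example, with `classmethod-decorators = ["expression"]` (reusing the
`expression` example from above; the class and module names are
hypothetical):
```python
class Model:
    @expression
    def a(cls): ...  # matched directly

    @orm.expression
    def b(cls): ...  # also matched, via the last segment of the dotted path
```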
## Test Plan
Tested against the file in the issue linked below, plus the new testcase
added in this PR.
~Improves detection of types imported from `typing_extensions`. Removes
the hard-coded list of supported types in `typing_extensions`; instead
assuming all types could be imported from `typing`, `_typeshed`, or
`typing_extensions`.~
~The typing extensions package appears to re-export types even if they
do not need modification.~
Adds detection of `if typing_extensions.TYPE_CHECKING` blocks. Avoids
inserting a new `if TYPE_CHECKING` block and `from typing import
TYPE_CHECKING` if `typing_extensions.TYPE_CHECKING` is used (closes
https://github.com/astral-sh/ruff/issues/8427)
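For example (a minimal sketch):
```python
import typing_extensions

if typing_extensions.TYPE_CHECKING:  # now recognized as a type-checking block
    import expensive_module
```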
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
We have this pattern in a bunch of places, where we find the _only_
binding to a name (and return `None`) if it's bound multiple times. This
PR DRYs it up into a method on `SemanticModel`.
## Summary
This PR splits the `Constant` enum as individual literal nodes. It
introduces the following new nodes for each variant:
* `ExprStringLiteral`
* `ExprBytesLiteral`
* `ExprNumberLiteral`
* `ExprBooleanLiteral`
* `ExprNoneLiteral`
* `ExprEllipsisLiteral`
The main motivation behind this refactor is to introduce the new AST
node for implicit string concatenation in the coming PR. The elements of
that node will be either a string literal, bytes literal, or an f-string,
which can be implemented using an enum. This means that a string or
bytes literal cannot be represented by `Constant::Str` /
`Constant::Bytes` which creates an inconsistency.
This PR avoids that inconsistency by splitting the constant nodes into
their own literal nodes, literal being the more appropriate naming
convention from a static analysis tool perspective.
This also makes working with literals in the linter and formatter much
more ergonomic. For example, if one wants to check whether an expression
is a string literal, it can be done easily using
`Expr::is_string_literal_expr` or by matching against
`Expr::StringLiteral`, as opposed to matching against `ExprConstant` and
the `Constant` enum. A few AST helper methods can be simplified as well,
which will be done in a follow-up PR.
This introduces a new `Expr::is_literal_expr` method which is the same
as `Expr::is_constant_expr`. There are also intermediary changes related
to implicit string concatenation, which are kept quite minimal to avoid
making this already huge PR any bigger.
## Test Plan
1. Verify and update all of the existing snapshots (parser, visitor)
2. Verify that the ecosystem check output remains **unchanged** for both
the linter and formatter
### Formatter ecosystem check
#### `main`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
#### `dhruv/constant-to-literal`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
In https://github.com/astral-sh/ruff/pull/7968, I introduced a
regression whereby we started to treat imports used _only_ in type
annotation bounds (with `__future__` annotations) as unused.
The root of the issue is that I started using `visit_annotation` for
these bounds. So we'd queue up the bound in the list of deferred type
parameters, then when visiting, we'd further queue it up in the list of
deferred type annotations... Which we'd then never visit, since deferred
type annotations are visited _before_ deferred type parameters.
Anyway, the better solution here is to use a dedicated flag for these,
since they have slightly different behavior than type annotations.
I've also fixed what I _think_ is a bug whereby we previously failed to
resolve `Callable` in:
```python
type RecordCallback[R: Record] = Callable[[R], None]
from collections.abc import Callable
```
IIUC, the values in type aliases should be evaluated lazily, like type
parameters.
Closes https://github.com/astral-sh/ruff/issues/8017.
## Test Plan
`cargo test`
## Summary
Check that the sequence type is a list, set, dict, or tuple before
recommending replacing the `enumerate(...)` call with `range(len(...))`.
Document the behaviour so users are aware of the type inference
limitation leading to false negatives.
Closes #7656.
## Summary
This wasn't necessary in the past, since we _only_ applied this rule to
bodies that contained two statements, one of which was a `pass`. Now
that it applies to any `pass` in a block with multiple statements, we
can run into situations in which we remove both passes, and so we need
to apply the fixes in isolation.
See:
https://github.com/astral-sh/ruff/issues/7455#issuecomment-1741107573.
## Summary
This is a follow-up to #7469 that attempts to achieve similar gains, but
without introducing malachite. Instead, this PR removes the `BigInt`
type altogether, instead opting for a simple enum that allows us to
store small integers directly and only allocate for values greater than
`i64`:
```rust
/// A Python integer literal. Represents both small (fits in an `i64`) and large integers.
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct Int(Number);

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum Number {
    /// A "small" number that can be represented as an `i64`.
    Small(i64),
    /// A "large" number that cannot be represented as an `i64`.
    Big(Box<str>),
}

impl std::fmt::Display for Number {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Number::Small(value) => write!(f, "{value}"),
            Number::Big(value) => write!(f, "{value}"),
        }
    }
}
```
We typically don't care about numbers greater than `isize` -- our only
uses are comparisons against small constants (like `1`, `2`, `3`, etc.),
so there's no real loss of information, except in one or two rules where
we're now a little more conservative (with the worst-case being that we
don't flag, e.g., an `itertools.pairwise` that uses an extremely large
value for the slice start constant). For simplicity, a few diagnostics
now show a dedicated message when they see integers that are out of the
supported range (e.g., `outdated-version-block`).
An additional benefit here is that we get to remove a few dependencies,
especially `num-bigint`.
## Test Plan
`cargo test`
## Summary
Implement
[`no-ignored-enumerate-items`](https://github.com/dosisod/refurb/blob/master/refurb/checks/builtin/no_ignored_enumerate.py)
as `unnecessary-enumerate` (`FURB148`).
The auto-fix considers if a `start` argument is passed to the
`enumerate()` function. If only the index is used, then the suggested
fix is to pass the `start` value to the `range()` function. So,
```python
for i, _ in enumerate(xs, 1):
    ...
```
becomes
```python
for i in range(1, len(xs)):
    ...
```
If the index is ignored and only the value is used, and a start value
greater than zero is passed to `enumerate()`, the rule doesn't produce a
suggestion. I couldn't find a unanimously accepted best way to iterate
over a collection while skipping the first n elements. The rule still
triggers, however.
Related to #1348.
## Test Plan
`cargo test`
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>