For now, we're just going to avoid flagging this for `elif` blocks, following the same reasoning as for ternaries. We can handle all of these cases, but we'll knock out the TODOs as a pair, and this avoids producing broken code in the meantime.
Closes #2007.
This PR adds a new check that turns expressions such as `[1, 2, 3] + foo` into `[1, 2, 3, *foo]`, since the latter is easier to read and faster:
```
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3] + b'
5000000 loops, best of 5: 81.4 nsec per loop
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3, *b]'
5000000 loops, best of 5: 66.2 nsec per loop
```
However, there are a couple of gotchas:
* This felt like a `simplify` rule, so I borrowed an unused `SIM` code, even though the upstream `flake8-simplify` doesn't do this transform. If it should be assigned some other code, let me know 😄
* **More importantly**, this transform could be unsafe if the other operand of the `+` operation has overridden `__add__` to do something else (see the sketch after this list). What's the `ruff` policy around potentially unsafe operations? (I think some of the suggestions other ported rules give could be semantically different from the original code, but I'm not sure.)
* I'm not a very established Rustacean, so there's no doubt my code isn't quite idiomatic. (For instance, is there a neater way to write that four-way `match` statement?)
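To make that second point concrete, here's a hypothetical operand (not from this PR) whose right-hand addition hook diverges from plain iteration, so the two spellings give different results:

```python
class Widget:
    """Hypothetical type whose `+` behavior differs from simple iteration."""

    def __init__(self, items):
        self.items = items

    def __iter__(self):
        return iter(self.items)

    def __radd__(self, other):
        # Invoked for `[1, 2, 3] + Widget(...)`: list.__add__ returns
        # NotImplemented for a non-list operand, so Python falls back here.
        return ["surprise"] + list(other)


w = Widget([6, 5, 4])
print([1, 2, 3] + w)    # ['surprise', 1, 2, 3]
print([1, 2, 3, *w])    # [1, 2, 3, 6, 5, 4]
```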
Thanks for `ruff`, by the way! :)
This is slightly buggy due to Instagram/LibCST#855; it will complain `[ERROR] Failed to fix nested with: Failed to extract CST from source` when trying to fix nested parenthesized `with` statements that lack trailing commas. But presumably people who write parenthesized `with` statements already know that they don't need to nest them.
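For reference, this is roughly the shape of code that hits the issue, per my reading of the report (my own sketch, Python 3.10+ syntax):

```python
# Nested, parenthesized `with` statements with no trailing comma inside the
# parentheses -- the fixer reports "Failed to extract CST from source" here.
with (
    open("a") as a,
    open("b") as b
):
    with (
        open("c") as c
    ):
        pass
```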
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
If a `try` block has multiple statements, a compound statement, or
control flow, rewriting it with `contextlib.suppress` would obfuscate
the fact that the exception still short-circuits further statements in
the block.
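A minimal illustration of the hazard (my own example; `mapping` and `process` are hypothetical names):

```python
# With more than one statement in the `try` body, a KeyError raised by the
# first statement skips the second. Rewriting this with
# `contextlib.suppress(KeyError)` would preserve that short-circuit, but make
# it much easier to miss when reading.
try:
    value = mapping["key"]
    process(value)  # never runs if the lookup raises
except KeyError:
    pass
```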
Fixes #1947.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Since our binding tracking is somewhat limited, I opted to favor false negatives over false positives. So, e.g., this won't trigger SIM115:
```py
with contextlib.ExitStack():
    f = exit_stack.enter_context(open("filename"))
```
(Notice that `exit_stack` is unbound.)
The alternative strategy would've required us to incorrectly trigger SIM115 on this:
```py
with contextlib.ExitStack() as exit_stack:
    exit_stack_ = exit_stack
    f = exit_stack_.enter_context(open("filename"))
```
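For reference, the basic pattern SIM115 targets looks like this (my own minimal example); the `ExitStack` cases above are about not flagging handles that are handed off to a context manager:

```python
# Flagged: the file object isn't managed by a `with` block.
f = open("filename")
data = f.read()
f.close()

# Preferred:
with open("filename") as f:
    data = f.read()
```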
Closes #1945.
I'm open to any suggestions. By the way, I have one doubt: I've checked, and all the flake8-pie rules can be auto-fixed by ruff, but is it necessary for this one to be automatically fixable as well?
rel #1543
Implements [flake8-commas](https://github.com/PyCQA/flake8-commas). Fixes #1058.
The plugin is mostly redundant with Black (and also deprecated upstream), but very useful for projects which can't/won't use an auto-formatter.
This linter works on tokens. Before porting to Rust, I cleaned up the Python code ([link](https://gist.github.com/bluetech/7c5dcbdec4a73dd5a74d4bc09c72b8b9)) and made sure the tests pass. In the Rust version I tried to add explanatory comments, to the best of my understanding of the original logic.
Some changes I did make:
- Got rid of rule C814 - "missing trailing comma in Python 2". Ruff doesn't support Python 2.
- Merged rules C815 - "missing trailing comma in Python 3.5+" and C816 - "missing trailing comma in Python 3.6+" into C812 - "missing trailing comma". These Python versions are outdated, and I didn't think it was worth the complication.
- Added autofixes for C812 and C819.
Autofix is missing for C818 - "trailing comma on bare tuple prohibited". It would need to turn e.g. `x = 1,` into `x = (1, )`, which is a bit difficult to do with tokens only, so I skipped it for now.
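To make the two autofix situations concrete, a small illustration (my own, not from the test fixtures):

```python
# C812: missing trailing comma after the last element of a multi-line
# literal; the autofix appends one after `2`.
foo = [
    1,
    2
]

# C818: trailing comma on a bare (unparenthesized) tuple is prohibited.
# No autofix yet, since the fix would also need to add parentheses.
x = 1,
```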
I ran the rules on cpython/Lib and on a big internal code base and it works as intended (though I only sampled the diffs).
This PR makes the following changes to improve `SIM117`:
- Avoid emitting `SIM117` multiple times within the same `with`
  statement.
- Adjust the error range.
## Example
```python
with A() as a: # SIM117
    with B() as b:
        with C() as c:
            print("hello")
```
### Current
```
resources/test/fixtures/flake8_simplify/SIM117.py:5:1: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
5 | / with A() as a: # SIM117
6 | |     with B() as b:
7 | |         with C() as c:
8 | |             print("hello")
  | |__________________________^ SIM117
  |
resources/test/fixtures/flake8_simplify/SIM117.py:6:5: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
6 |     with B() as b:
  | _____^
7 | |         with C() as c:
8 | |             print("hello")
  | |__________________________^ SIM117
  |
```
### Improved
```
resources/test/fixtures/flake8_simplify/SIM117.py:5:1: SIM117 Use a single `with` statement with multiple contexts instead of nested `with` statements
  |
5 | / with A() as a: # SIM117
6 | |     with B() as b:
7 | |         with C() as c:
  | |______________________^ SIM117
  |
```
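For context, the change `SIM117` ultimately nudges you toward is a single `with` statement combining the context managers (a sketch of the end state, not the literal autofix output):

```python
with A() as a, B() as b, C() as c:
    print("hello")
```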
Signed-off-by: harupy <hkawamura0130@gmail.com>
This PR refactors our import-tracking logic to leverage our existing
logic for tracking bindings. It's a significant simplification and a
significant improvement (as we can now track reassignments), and it
closes out a bunch of subtle bugs.
Though the AST tracks all bindings (e.g., when parsing `import os as
foo`, we bind the name `foo` to a `BindingKind::Importation` that points
to the `os` module), when I went to implement import tracking (e.g., to
ensure that if the user references `List`, it's actually `typing.List`),
I added a parallel system specifically for this use-case.
That was a mistake, for a few reasons:
1. It didn't track reassignments, so if you had `from typing import
List`, but `List` was later overridden, we'd still consider any
reference to `List` to be `typing.List`.
2. It required a bunch of extra logic, including complex logic to try
and optimize the lookups, since it's such a hot codepath.
3. There were a few bugs in the implementation that were just hard to
correct under the existing abstractions (e.g., if you did `from typing
import Optional as Foo`, we'd treat any reference to `Foo` _or_
`Optional` as `typing.Optional`, even though, in that case, `Optional`
was really unbound).
The new implementation goes through our existing binding tracking: when
we get a reference, we find the appropriate binding given the current
scope stack, and normalize it back to its original target.
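As an illustrative (Python-side) example of the reassignment case from point 1: resolving references through the binding stack means the second annotation below no longer normalizes to `typing.List`:

```python
from typing import List

x: List[int] = []  # resolves to `typing.List`

List = dict  # rebinds the name, shadowing the import

y: List = {}  # should no longer be treated as `typing.List`
```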
Closes #1690.
Closes #1790.
This PR adds support for `SIM110` and `SIM111` simplifications of the
form:
```py
def f():
    # SIM110
    for x in iterable:
        if check(x):
            return True
    else:
        return False
```
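For reference, the rewrite `SIM110` points toward collapses the loop-and-`else` into a single `any()` call (a sketch of the suggested form):

```python
def f():
    # Equivalent to the for/else above: True if any element passes the check.
    return any(check(x) for x in iterable)
```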
This PR implements `reverse-relative` from isort, but renames it to
`relative-imports-order`, with the supported values `closest-to-furthest`
and `furthest-to-closest` (the latter being the default).
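For illustration (my own example; the module names are made up, and I'm reading the option values literally):

```python
# relative-imports-order = "furthest-to-closest" (the default):
from ..config import settings
from .utils import helper

# relative-imports-order = "closest-to-furthest":
from .utils import helper
from ..config import settings
```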
Closes #1813.
This PR implements `W505` (`DocLineTooLong`), which is similar to `E501`
(`LineTooLong`) but confined to doc lines.
I based the "doc line" definition on pycodestyle, which defines a doc
line as a standalone comment or string statement. Our definition is a
bit more liberal, since we consider any string statement a doc line
(even if it's part of a multi-line statement) -- but that seems fine to
me.
Note that, unusually, this rule requires custom extraction from both the
token stream (to find standalone comments) and the AST (to find string
statements).
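A rough illustration (my own; assumes a maximum doc line length has been configured, as with pycodestyle's `--max-doc-length`):

```python
# W505 would flag this standalone comment if it runs past the configured maximum documentation line length, as this one does.


def add(a, b):
    """Add two numbers.

    An overlong line inside this docstring would also be flagged, since a string statement counts as a doc line.
    """
    return a + b  # an overlong *code* line, by contrast, remains E501's concern
```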
Closes #1784.