Fairly mechanical. Did a few of the simple cases manually to make sure things were working, and I think the rest will be easily achievable via a quick `fastmod` command.
ref #1871
Rule described here: https://www.flake8rules.com/rules/E101.html
I tried to follow the contributing guidelines closely; I've never worked with Rust before. I stumbled across Ruff a few days ago and would like to use it in our project, but we use a bunch of flake8 rules that aren't yet implemented in ruff, so I decided to give it a go.
This PR adds a new check that turns expressions such as `[1, 2, 3] + foo` into `[1, 2, 3, *foo]`, since the latter is easier to read and faster:
```
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3] + b'
5000000 loops, best of 5: 81.4 nsec per loop
~ $ python3.11 -m timeit -s 'b = [6, 5, 4]' '[1, 2, 3, *b]'
5000000 loops, best of 5: 66.2 nsec per loop
```
However, there are a couple of gotchas:
* This felt like a `simplify` rule, so I borrowed an unused `SIM` code, even though the upstream `flake8-simplify` doesn't do this transform. If it should be assigned some other code, let me know 😄
* **More importantly**, this transform could be unsafe if the other operand of the `+` operation overrides `__add__`/`__radd__` to do something other than concatenation (see the sketch after this list). What's the `ruff` policy around potentially unsafe fixes? (I think some of the suggestions other ported rules give could be semantically different from the original code, but I'm not sure.)
* I'm not a very established Rustacean, so there's no doubt my code isn't quite idiomatic. (For instance, is there a neater way to write that four-way `match` statement?)
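To make the second bullet concrete, here's a hypothetical example (the `Tracker` class is invented for illustration, not taken from any fixture) where the rewrite changes behaviour, because Python falls back to the right operand's reflected addition:
```
class Tracker:
    """Illustrative only: a container that customizes reflected addition."""

    def __init__(self, items):
        self.items = list(items)

    def __radd__(self, other):
        # `[1, 2, 3] + Tracker(...)` ends up here, because list.__add__
        # returns NotImplemented for non-list operands.
        return Tracker(other + self.items)

    def __iter__(self):
        return iter(self.items)


t = Tracker([6, 5, 4])
print([1, 2, 3] + t)    # a Tracker instance, via Tracker.__radd__
print([1, 2, 3, *t])    # a plain list: [1, 2, 3, 6, 5, 4]
```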
Thanks for `ruff`, by the way! :)
# This commit has been generated via the following Python script:
# (followed by `cargo +nightly fmt` and `cargo dev generate-all`)
# For the reasoning see the previous commit(s).
import re
import sys
for path in (
    'src/violations.rs',
    'src/rules/flake8_tidy_imports/banned_api.rs',
    'src/rules/flake8_tidy_imports/relative_imports.rs',
):
    with open(path) as f:
        text = ''
        while line := next(f, None):
            if line.strip() != 'fn message(&self) -> String {':
                text += line
                continue
            text += '    #[derive_message_formats]\n' + line
            body = next(f)
            while (line := next(f)) != '    }\n':
                body += line
            # body = re.sub(r'(?<!code\| |\.push\()format!', 'format!', body)
            body = re.sub(
                r'("[^"]+")\s*\.to_string\(\)', r'format!(\1)', body,
                flags=re.DOTALL,
            )
            body = re.sub(
                r'(r#".+?"#)\s*\.to_string\(\)', r'format!(\1)', body,
                flags=re.DOTALL,
            )
            text += body + '    }\n'
            # Copy everything up to the placeholder(), then drop its body.
            while (line := next(f)).strip() != 'fn placeholder() -> Self {':
                text += line
            while (line := next(f)) != '    }\n':
                pass
    with open(path, 'w') as f:
        f.write(text)
The idea is nice and simple: we replace
fn placeholder() -> Self;
with
fn message_formats() -> &'static [&'static str];
So e.g. if a Violation implementation defines:
fn message(&self) -> String {
    format!("Local variable `{name}` is assigned to but never used")
}
it would also have to define:
fn message_formats() -> &'static [&'static str] {
    &["Local variable `{name}` is assigned to but never used"]
}
Since we obviously do not want to duplicate all of our format strings,
we simply introduce a new procedural macro attribute,
#[derive_message_formats], that can be added to the message method
declaration in order to automatically derive the message_formats
implementation.
This commit implements the macro. The following and final commit
updates violations.rs to use the macro. (The changes have been separated
because the next commit is autogenerated via a Python script.)
ruff_dev::generate_rules_table previously documented which rules are
autofixable via DiagnosticKind::fixable. Since the DiagnosticKind was
obtained via Rule::kind (and Violation::placeholder), both of which we
want to get rid of, we have to obtain the autofixability another way.
This commit implements that by adding an AUTOFIX associated constant
to the Violation trait. The constant has the type Option<AutofixKind>;
AutofixKind is a new struct containing an Availability enum
{ Sometimes, Always }, letting us additionally document that some
autofixes are only available sometimes (which previously wasn't
documented). We intentionally introduce this information as a struct
so that we can easily add further autofix metadata in the future, such
as autofix applicability[1].
[1]: https://doc.rust-lang.org/stable/nightly-rustc/rustc_errors/enum.Applicability.html
While ruff displays the string returned by Violation::message in its
output for detected violations, the messages displayed in the README
and in the `--explain <code>` output previously used the
DiagnosticKind::summary() function, which for some verbose messages
provided shorter descriptions.
This commit removes DiagnosticKind::summary and moves the more
extensive documentation into doc comments. These are not yet displayed
to the user, but doing that is very much planned.
This is slightly buggy due to Instagram/LibCST#855; it will complain `[ERROR] Failed to fix nested with: Failed to extract CST from source` when trying to fix nested parenthesized `with` statements lacking trailing commas. But presumably people who write parenthesized `with` statements already knew that they don’t need to nest them.
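For reference, a minimal shape that seems to trigger it (assumed from the error message, not copied from the LibCST issue) is a nested parenthesized `with` whose last context manager has no trailing comma:
```
from contextlib import nullcontext

# Requires Python 3.10+ for parenthesized context managers; note the
# missing trailing comma after the last manager on both levels.
with (
    nullcontext(1) as a,
    nullcontext(2) as b
):
    with (
        nullcontext(3) as c,
        nullcontext(4) as d
    ):
        print(a, b, c, d)
```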
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
I'm open to any suggestions. By the way, I have a question: I've checked and all the other flake8-pie rules can be autofixed by ruff, but is it necessary for this one to be fixed automatically as well?
rel #1543
The idea is to follow the Rust naming convention for lints[1]:
> the lint name should make sense when read as
> "allow lint-name" or "allow lint-name items"
Following that convention, prefixing "Banned" is redundant, as it
could be prefixed to any lint name.
[1]: https://rust-lang.github.io/rfcs/0344-conventions-galore.html#lints
Implements [flake8-commas](https://github.com/PyCQA/flake8-commas). Fixes #1058.
The plugin is mostly redundant with Black (and also deprecated upstream), but very useful for projects which can't/won't use an auto-formatter.
This linter works on tokens. Before porting to Rust, I cleaned up the Python code ([link](https://gist.github.com/bluetech/7c5dcbdec4a73dd5a74d4bc09c72b8b9)) and made sure the tests pass. In the Rust version I tried to add explanatory comments, to the best of my understanding of the original logic.
Some changes I did make:
- Got rid of rule C814 - "missing trailing comma in Python 2". Ruff doesn't support Python 2.
- Merged rules C815 - "missing trailing comma in Python 3.5+" and C816 - "missing trailing comma in Python 3.6+" into C812 - "missing trailing comma". These Python versions are outdated, so I didn't think it was worth the complication.
- Added autofixes for C812 and C819.
Autofix is missing for C818 - "trailing comma on bare tuple prohibited". It needs to turn e.g. `x = 1,` into `x = (1,)`, which is a bit difficult to do with tokens only, so I skipped it for now (see the example below).
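To illustrate the C812 and C818 cases above (invented snippets, not taken from the test fixtures):
```
foo = [
    1,
    2  # C812: missing trailing comma before the closing bracket
]

x = 1,    # C818: trailing comma on bare tuple prohibited
x = (1,)  # the intended fix, which needs more than token-level edits
```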
I ran the rules on cpython/Lib and on a big internal code base and it works as intended (though I only sampled the diffs).
This PR refactors our import-tracking logic to leverage our existing
logic for tracking bindings. It's a significant simplification and a
significant improvement (as we can now track reassignments), and it
closes out a bunch of subtle bugs.
Though the AST tracks all bindings (e.g., when parsing `import os as
foo`, we bind the name `foo` to a `BindingKind::Importation` that points
to the `os` module), when I went to implement import tracking (e.g., to
ensure that if the user references `List`, it's actually `typing.List`),
I added a parallel system specifically for this use-case.
That was a mistake, for a few reasons:
1. It didn't track reassignments, so if you had `from typing import
List`, but `List` was later overridden, we'd still consider any
reference to `List` to be `typing.List` (see the snippets after this
list).
2. It required a bunch of extra logic, including complex logic to try
and optimize the lookups, since it's such a hot codepath.
3. There were a few bugs in the implementation that were just hard to
correct under the existing abstractions (e.g., if you did `from typing
import Optional as Foo`, then we'd treat any reference to `Foo` _or_
`Optional` as `typing.Optional`, even though, in that case, `Optional`
was really unbound).
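Hypothetical snippets (not from the PR's fixtures) illustrating the first and third cases:
```
# Case 1: `List` is rebound, so later references are not `typing.List`.
from typing import List

List = list
x: List[int] = []

# Case 3: only `Foo` is bound here; a bare `Optional` is actually unbound.
from typing import Optional as Foo

y: Foo[int] = None
```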
The new implementation goes through our existing binding tracking: when
we get a reference, we find the appropriate binding given the current
scope stack, and normalize it back to its original target.
Closes #1690.
Closes #1790.
This PR implements `W505` (`DocLineTooLong`), which is similar to `E501`
(`LineTooLong`) but confined to doc lines.
I based the "doc line" definition on pycodestyle, which defines a doc
line as a standalone comment or string statement. Our definition is a
bit more liberal, since we consider any string statement a doc line
(even if it's part of a multi-line statement) -- but that seems fine to
me.
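For example (invented; assume a configured maximum doc length shorter than these lines):
```
# A standalone comment like this one is a doc line, so W505 can flag it.
"""A module docstring is a string statement, so it is a doc line too."""

# The string below is part of an assignment, not a string statement,
# so only E501's regular line-length limit applies to it.
x = "some long value"
```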
Note that, unusually, this rule requires custom extraction from both the
token stream (to find standalone comments) and the AST (to find string
statements).
Closes #1784.
Ref #998
- Implements SIM401 with fix
- Added tests
Notes:
- only recognize simple ExprKind::Name variables in expr patterns for
now
- bug-fix over the reference implementation: we check that all three
components (dict key, target variable, dict name) match between the
branches; `flake8_simplify` only tests the first two (and only the
first in the second pattern); see the example below
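A minimal illustration of the pattern and the suggested fix (names invented):
```
config = {"theme": "dark"}

# SIM401 pattern: the dict name, the key, and the assignment target must
# line up across the test and both branches.
if "lang" in config:
    lang = config["lang"]
else:
    lang = "en"

# Suggested fix
lang = config.get("lang", "en")
```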