## Summary
I'll write up a more detailed description tomorrow, but in short, this
PR removes our regex-based implementation in favor of "manual" parsing.
I tried a couple different implementations. In the benchmarks below:
- `Directive/Regex` is our implementation on `main`.
- `Directive/Find` just uses `text.find("noqa")`, which is insufficient,
since it doesn't cover case-insensitive variants like `NOQA`, and
doesn't handle multiple `noqa` matches in a single line, like ` # Here's
a noqa comment # noqa: F401`. But it's kind of a baseline.
- `Directive/Memchr` uses three `memchr` finders (one each for
`noqa`, `NOQA`, and `NoQA`).
- `Directive/AhoCorasick` is roughly the variant checked-in here.
The raw results:
```
Directive/Regex/# noqa: F401
time: [273.69 ns 274.71 ns 276.03 ns]
change: [+1.4467% +1.8979% +2.4243%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
3 (3.00%) low mild
8 (8.00%) high mild
4 (4.00%) high severe
Directive/Find/# noqa: F401
time: [66.972 ns 67.048 ns 67.132 ns]
change: [+2.8292% +2.9377% +3.0540%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) low severe
3 (3.00%) low mild
8 (8.00%) high mild
3 (3.00%) high severe
Directive/AhoCorasick/# noqa: F401
time: [76.922 ns 77.189 ns 77.536 ns]
change: [+0.4265% +0.6862% +0.9871%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) low mild
3 (3.00%) high mild
4 (4.00%) high severe
Directive/Memchr/# noqa: F401
time: [62.627 ns 62.654 ns 62.679 ns]
change: [-0.1780% -0.0887% -0.0120%] (p = 0.03 < 0.05)
Change within noise threshold.
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) low severe
5 (5.00%) low mild
3 (3.00%) high mild
2 (2.00%) high severe
Directive/Regex/# noqa: F401, F841
time: [321.83 ns 322.39 ns 322.93 ns]
change: [+8602.4% +8623.5% +8644.5%] (p = 0.00 < 0.05)
Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low severe
2 (2.00%) low mild
1 (1.00%) high mild
1 (1.00%) high severe
Directive/Find/# noqa: F401, F841
time: [78.618 ns 78.758 ns 78.896 ns]
change: [+1.6909% +1.8771% +2.0628%] (p = 0.00 < 0.05)
Performance has regressed.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
Directive/AhoCorasick/# noqa: F401, F841
time: [87.739 ns 88.057 ns 88.468 ns]
change: [+0.1843% +0.4685% +0.7854%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) low mild
3 (3.00%) high mild
3 (3.00%) high severe
Directive/Memchr/# noqa: F401, F841
time: [80.674 ns 80.774 ns 80.860 ns]
change: [-0.7343% -0.5633% -0.4031%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) low severe
9 (9.00%) low mild
1 (1.00%) high mild
Directive/Regex/# noqa time: [194.86 ns 195.93 ns 196.97 ns]
change: [+11973% +12039% +12103%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
5 (5.00%) low mild
1 (1.00%) high mild
Directive/Find/# noqa time: [25.327 ns 25.354 ns 25.383 ns]
change: [+3.8524% +4.0267% +4.1845%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
6 (6.00%) high mild
3 (3.00%) high severe
Directive/AhoCorasick/# noqa
time: [34.267 ns 34.368 ns 34.481 ns]
change: [+0.5646% +0.8505% +1.1281%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high mild
Directive/Memchr/# noqa time: [21.770 ns 21.818 ns 21.874 ns]
change: [-0.0990% +0.1464% +0.4046%] (p = 0.26 > 0.05)
No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
4 (4.00%) low mild
4 (4.00%) high mild
2 (2.00%) high severe
Directive/Regex/# type: ignore # noqa: E501
time: [278.76 ns 279.69 ns 280.72 ns]
change: [+7449.4% +7469.8% +7490.5%] (p = 0.00 < 0.05)
Performance has regressed.
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) low mild
1 (1.00%) high mild
1 (1.00%) high severe
Directive/Find/# type: ignore # noqa: E501
time: [67.791 ns 67.976 ns 68.184 ns]
change: [+2.8321% +3.1735% +3.5418%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
5 (5.00%) high mild
1 (1.00%) high severe
Directive/AhoCorasick/# type: ignore # noqa: E501
time: [75.908 ns 76.055 ns 76.210 ns]
change: [+0.9269% +1.1427% +1.3955%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Directive/Memchr/# type: ignore # noqa: E501
time: [72.549 ns 72.723 ns 72.957 ns]
change: [+1.5881% +1.9660% +2.3974%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
10 (10.00%) high mild
5 (5.00%) high severe
Directive/Regex/# type: ignore # nosec
time: [66.967 ns 67.075 ns 67.207 ns]
change: [+1713.0% +1715.8% +1718.9%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low severe
3 (3.00%) low mild
2 (2.00%) high mild
4 (4.00%) high severe
Directive/Find/# type: ignore # nosec
time: [18.505 ns 18.548 ns 18.597 ns]
change: [+1.3520% +1.6976% +2.0333%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
Directive/AhoCorasick/# type: ignore # nosec
time: [16.162 ns 16.206 ns 16.252 ns]
change: [+1.2919% +1.5587% +1.8430%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
Directive/Memchr/# type: ignore # nosec
time: [39.192 ns 39.233 ns 39.276 ns]
change: [+0.5164% +0.7456% +0.9790%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
2 (2.00%) low severe
4 (4.00%) low mild
3 (3.00%) high mild
4 (4.00%) high severe
Directive/Regex/# some very long comment that # is interspersed with characters but # no directive
time: [81.460 ns 81.578 ns 81.703 ns]
change: [+2093.3% +2098.8% +2104.2%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) low mild
2 (2.00%) high mild
Directive/Find/# some very long comment that # is interspersed with characters but # no directive
time: [26.284 ns 26.331 ns 26.387 ns]
change: [+0.7554% +1.1027% +1.3832%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
5 (5.00%) high mild
1 (1.00%) high severe
Directive/AhoCorasick/# some very long comment that # is interspersed with characters but # no direc...
time: [28.643 ns 28.714 ns 28.787 ns]
change: [+1.3774% +1.6780% +2.0028%] (p = 0.00 < 0.05)
Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
Directive/Memchr/# some very long comment that # is interspersed with characters but # no directive
time: [55.766 ns 55.831 ns 55.897 ns]
change: [+1.5802% +1.7476% +1.9021%] (p = 0.00 < 0.05)
Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) low mild
```
While memchr is faster than aho-corasick in some of the common cases
(like `# noqa: F401`), the latter is way, way faster when there _isn't_
a match (like 2x faster -- see the last two cases). Since most comments
_aren't_ `noqa` comments, this felt like the right tradeoff. Note that
all implementations are significantly faster than the regex version.
(I know I originally reported a 10x speedup, but I ended up improving
the regex version a bit in some prior PRs, so it got unintentionally
faster via some refactors.)
There's also one behavior change in here: we now allow a variable number
of spaces, e.g., `#noqa` or `# noqa`. Previously, we required exactly
one space. This closes #5177.
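To make the change concrete, here's a rough sampling of comment forms the
new parser should now recognize (case-insensitive, with a variable number
of spaces, and with multiple `#` segments); the rule codes are illustrative:
```python
x = 1  # noqa
x = 1  # NOQA: F401
x = 1  #noqa: F401, F841
x = 1  # Here's a noqa comment  # noqa: F401
```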
## Summary
The following code was previously leading to unstable formatting:
```python
try:
try:
pass
finally:
print(1) # issue7208
except A:
pass
```
The comment was being formatted as a trailing comment of the `try`, which
is unstable because an end-of-line comment picks up two extra spaces.
This was originally found in
99b00efd5e/Lib/getpass.py (L68-L91)
## Test Plan
I added a regression test
## Summary
It's a bit simpler to let the API just take the text itself, plus an
offset (to make the returned `TextRange` absolute, rather than
relative).
## Summary
Adds a `--case-sensitive` setting/flag to isort (default: `false`)
which, when set to `true`, sorts imports case-sensitively instead of
case-insensitively.
Tests and docs can be improved; I can do that if the general idea of the
implementation is in order.
First `isort` edit so any and all feedback is welcomed even more than
usual.
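To illustrate the intended difference, here's a rough sketch (the module
names are placeholders, and I'm assuming plain ASCII ordering in the
case-sensitive mode):
```python
# Case-insensitive sorting (the default): names compare by lowercased form.
import abc
import Xyz

# Case-sensitive sorting (with the new setting enabled): uppercase sorts
# before lowercase.
import Xyz
import abc
```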
## Test Plan
Added a fixture with an assortment of imports in various cases.
## Issue links
Closes: https://github.com/astral-sh/ruff/issues/5514
## Summary
This PR enables us to resolve attribute accesses within files, at least
for static and class methods. For example, we can now detect that this
is a function access (and avoid a false-positive):
```python
class Class:
@staticmethod
def error():
return ValueError("Something")
# OK
raise Class.error()
```
Closes #5487.
Closes #5416.
## Summary
Implement Pylint `typevar-double-variance` (`C0131`) as
`type-bivariance` (`PLC0131`). Includes documentation. Related to #970.
Renamed the rule to be more clear (it's not immediately obvious what
'double' means, IMO).
The Pylint implementation checks only `TypeVar`, but this PR checks
`ParamSpec` as well.
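For reference, a minimal sketch of what the rule is meant to catch; the
flagged lines are intentionally invalid (CPython's `typing` module also
rejects bivariant type variables at runtime):
```python
from typing import ParamSpec, TypeVar

T = TypeVar("T", covariant=True, contravariant=True)  # PLC0131: bivariant
P = ParamSpec("P", covariant=True, contravariant=True)  # also flagged per this PR

T_co = TypeVar("T_co", covariant=True)  # OK: at most one variance marker
```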
## Test Plan
Added tests.
`cargo test`
## Summary
This adds a `ruff rule --all` switch that prints out a human-readable
Markdown or a machine-readable JSON document of the lint rules known to
Ruff.
I needed a machine-readable document of the rules [for a
project](https://github.com/astral-sh/ruff/discussions/5078), and
figured it could be useful for other people – or tooling! – to be able
to interrogate Ruff about its arcane knowledge.
The JSON output is an array of the same objects printed by `ruff rule
--format=json`.
## Test Plan
I ran `ruff rule --all --format=json`. I think more might be needed, but
maybe a snapshot test is overkill?
## Summary
Implement Pylint `typevar-name-mismatch` (`C0132`) as
`type-param-name-mismatch` (`PLC0132`). Includes documentation. Related
to #970.
The Pylint implementation checks only `TypeVar`, but this PR checks
`TypeVarTuple`, `ParamSpec`, and `NewType` as well. This seems to better
represent the Pylint rule's [intended
behaviour](https://github.com/pylint-dev/pylint/issues/5224).
Full disclosure: I am not a fan of the translated name and think it
should probably be different.
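A minimal sketch of the mismatch being flagged (the assigned name doesn't
match the name passed to the constructor):
```python
from typing import NewType, ParamSpec, TypeVar

T = TypeVar("U")  # PLC0132: assigned to `T`, but declared as "U"
P = ParamSpec("Q")  # also flagged per this PR
UserId = NewType("Id", int)  # also flagged per this PR

T2 = TypeVar("T2")  # OK: names match
```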
## Test Plan
`cargo test`
## Summary
This makes the output of `check-formatter-stability` more concise by
removing extraneous newlines. It also adds a `--error-file` option to
that script that allows creating a file with just the errors (without
the status messages) to share with others.
## Test Plan
I ran it over CPython and looked at the output. I then added the
`--error-file` option and looked at the contents of the file
## Summary
Format import statements in all their variants. Specifically, this
implements formatting for `StmtImport`, `StmtImportFrom`, and `Alias`.
## Test Plan
I added some custom snapshots, even though this has been covered well by
Black's tests.
## Summary
As discussed on ~IRC~ Discord, this will make it easier for e.g. the
docs generation stuff to get all rules for a linter (using
`all_rules()`) instead of just non-nursery ones, and it also makes it
more Explicit Is Better Than Implicit to iterate over linter rules.
Grepping for `Item = Rule` reveals some remaining implicit
`IntoIterator`s that I didn't feel were necessarily in scope for this
(and honestly, iterating over a `RuleSet` makes sense).
## Summary
If a comma-separated list has only one entry, Black will respect the
magic trailing comma, but it will not add a new one.
The following code will remain as is:
```python
b1 = [
aksjdhflsakhdflkjsadlfajkslhfdkjsaldajlahflashdfljahlfksajlhfajfjfsaahflakjslhdfkjalhdskjfa
]
b2 = [
aksjdhflsakhdflkjsadlfajkslhfdkjsaldajlahflashdfljahlfksajlhfajfjfsaahflakjslhdfkjalhdskjfa,
]
b3 = [
aksjdhflsakhdflkjsadlfajkslhfdkjsaldajlahflashdfljahlfksajlhfajfjfsaahflakjslhdfkjalhdskjfa,
aksjdhflsakhdflkjsadlfajkslhfdkjsaldajlahflashdfljahlfksajlhfajfjfsaahflakjslhdfkjalhdskjfa
]
```
## Test Plan
This was first discovered in
7eeadc82c2/django/contrib/admin/checks.py (L674-L681),
which I've minimized into a call test.
I've added tests for the three cases (one entry + no comma, one entry +
comma, more than one entry) to the list tests.
The diffs from the Black tests get smaller.
## Summary
Removing some false positives based on running over `zulip`.
`PERF401` now also detects cases like:
```py
original = list(range(10000))
filtered = []
for i in original:
filtered.append(i * i)
```
Previously, these were caught by the list-copy rule, but these too should
be rewritten as comprehensions.
## Summary
This PR applies the fix in #5478 to a variety of other call sites, and
fixes some other range-hygiene issues in the rules that were modified.
## Summary
Change the generator formatting dummy to include `NOT_YET_IMPLEMENTED`.
This makes it easier to correctly identify the output as a dummy.
## Test Plan
This is a dummy change
## Summary
Adds `PERF401` and `PERF402` mirroring `W8401` and `W8402` from
https://github.com/tonybaloney/perflint
The implementation is not super smart, but it should be at parity with
the upstream implementation, judging by:
c07391c176/perflint/comprehension_checker.py (L42-L73)
It essentially checks:
- If the body of a for-loop is just one statement
- If that statement is an `if` and the if-statement contains a call to
`append()`, we flag `PERF401` and suggest a list comprehension
- If that statement is a plain call to `append()` or `insert()`, we flag
`PERF402` and suggest `list()` or `list.copy()`
I've set the violation to only flag the first `append` call in a long
`if`-`else` statement for `PERF401` (see the sketch below). Happy to
change this to some other location or make it multiple violations if
that makes more sense.
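Here's the sketch, with the suggested rewrites paraphrased in comments:
```python
items = ["apple", "banana", "cherry"]

# PERF401: the loop body is a single `if` guarding an `append` -- suggest a
# list comprehension, e.g. `result = [i for i in items if i.startswith("b")]`.
result = []
for i in items:
    if i.startswith("b"):
        result.append(i)

# PERF402: the loop body is a bare `append` (or `insert`) -- suggest `list()`
# or `list.copy()`, e.g. `copy = list(items)`.
copy = []
for i in items:
    copy.append(i)
```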
## Test Plan
Fixtures were added with the relevant scenarios for both rules
## Issue Links
Refers: https://github.com/astral-sh/ruff/issues/4789
## Summary
Implements flake8-pyi checks 002, 003, 004, 005. The logic is a bit
complex, as you can see in the [original
code](57921813c1/pyi.py (L1403C18-L1403C18)).
ref: #848
## Test Plan
Updated snapshot tests. Ran flake8 to double check lints, and ran ruff
with all PYI lints enabled to check for incorrect overlapping lint
errors.
Support for `let…else` formatting was just merged to nightly
(rust-lang/rust#113225). Rerun `cargo fmt` with Rust nightly 2023-07-02
to pick this up. Followup to #939.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
## Summary
This PR reduces the noise from `DJ012` by emitting a single violation
when you have multiple consecutive violations of the same "type".
For example, given:
```py
class MultipleConsecutiveFields(models.Model):
"""Model that contains multiple out-of-order field definitions in a row."""
class Meta:
verbose_name = "test"
first_name = models.CharField(max_length=32)
last_name = models.CharField(max_length=32)
```
It's convenient to only error on `first_name`, and not `last_name`,
since we're really flagging that the _section_ is out-of-order.
Closes #5465.
## Summary
Given a docstring like:
```py
def f(a: int, b: int) -> int:
"""Showcase function.
Parameters
----------
a : int
_description_
b : int
_description_
Returns
-------
int
_description
"""
```
We were failing to identify `Returns` as a section, because the previous
line was neither empty nor ended with punctuation. This was causing a
false negative, whereby we weren't flagging a missing line before
`Returns`. So, the very reason for the rule (no blank line) was causing
us to fail to catch it.
Note that we did have a test case for this, which was working properly:
```py
def f() -> int:
"""Showcase function.
Parameters
----------
Returns
-------
"""
```
...because the line before `Returns` "ends in a punctuation mark" (`-`).
Closes #5442.
## Summary
This PR normalizes line endings inside of strings to `\n` as required by the printer.
## Test Plan
I added a new test using `\r\n` and ran the ecosystem check. There are no remaining end of line panics.
https://gist.github.com/MichaReiser/8f36b1391ca7b48475b3a4f592d74ff4
## Summary
This PR uses rayon to parallelize the stability check by scheduling each project as its own task.
## Test Plan
I ran the ecosystem check. It now makes use of all cores (except at the end, there are some large projects).
## Performance
The check now completes in minutes where it took about 30 minutes before.
## Summary
This PR fixes an issue where the binary expression formatting removed parentheses around the left hand side of an expression.
## Test Plan
I added a new regression test and re-ran the ecosystem check. It brings the `check-formatter-stability` output down from a 3.4MB file to 900KB.
## Summary
This PR makes E731 a "manual" fix in one other context: when the lambda
is shadowing another variable in the scope. Function declarations (with
shadowing) cause issues for type checkers, and so rewriting the
assignment, e.g., in branches of an `if` statement, can lead to failures.
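As a hypothetical example of the shadowing case: rewriting both
assignments below to `def f(x): ...` would redeclare `f` in each branch,
which type checkers tend to reject, so we only offer a "manual" fix here:
```python
import random

if random.random() > 0.5:
    f = lambda x: x + 1  # E731, but the fix is now "manual"
else:
    f = lambda x: x - 1  # E731; a `def` here would shadow the other branch
```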
Closes https://github.com/astral-sh/ruff/issues/5421.
## Summary
Completes the documentation for the `flake8-logging-format` rules.
Related to #2646.
I included both the `flake8-logging-format` recommendation to use the
`extra` keyword and the Pylint recommendation to pass format values as
parameters so that formatting is done lazily, as #970 suggests the
Pylint logging rules are covered by this ruleset. Using lazy formatting
via parameters is probably more common than avoiding formatting entirely
in favour of the `extra` argument, regardless.
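For illustration only, the two recommended styles look roughly like this:
```python
import logging

logger = logging.getLogger(__name__)
user = "alice"

# Flagged: the f-string is formatted eagerly, even if the record is dropped.
logger.info(f"User {user} logged in")

# Pylint-style recommendation: pass format values as parameters (lazy formatting).
logger.info("User %s logged in", user)

# flake8-logging-format recommendation: attach values via the `extra` keyword.
logger.info("User logged in", extra={"user": user})
```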
## Test Plan
`python scripts/check_docs_formatted.py`
## Summary
This PR extracts a bunch of complex logic from `add_binding`, instead
running the shadowing rules in the deferred handler, thereby
decoupling the binding phase (during which we build up the semantic
model) from the analysis phase, and generally making `add_binding` much
more focused.
This was made possible by improving the semantic model to better handle
deletions -- previously, we'd "lose track" of bindings if they were
deleted, which made this kind of refactor impossible.
## Test Plan
We have good automated coverage for this, but I want to benchmark it
separately.
## Summary
This PR fixes a silent failure that manifested itself in
https://github.com/astral-sh/ruff-vscode/issues/238. In short, if the
user provided invalid arguments to Ruff in the VS Code extension (like
`"ruff.args": ["a"]`), then we generated something like the following
command:
```console
/path/to/ruff --force-exclude --no-cache --no-fix --format json - --fix a --stdin-filename /path/to/file.py
```
Since this contains both `-` and `a` as the "input files", Ruff would
treat this as if we're linting the files named `-` and `a`, rather than
linting standard input.
This PR modifies our standard input detection to force standard input
when `--stdin-filename` is present, or at least one file is `-`. (We
then warn on and ignore the others.)
## Summary
This was just an oversight -- the last remaining `todo!()` that I never
filled in. We clearly don't have any test coverage for it yet, but this
mimics the Pyright implementation.
## Summary
This PR adds some snapshot tests for the resolver based on executing
resolutions within a "mock" of the Airflow repo (that is: a folder that
contains a subset of the repo's files, but all empty, and with an
only-partially-complete virtual environment). It's intended to act as a
lightweight integration test, to enable us to test resolutions on a
"real" project without adding a dependency on Airflow itself.
## Summary
Currently the URL at the bottom of the `ruff rule SLOT00x` output points
to Python 3.7 docs.
Given that Python 3.7 is now end-of-life (as of yesterday), let's
instead point users to the current Python docs.
## Test Plan
## Summary
Consider Jupyter index for code frames (`--show-source`).
This solves two problems as mentioned in the linked issue:
> Omit any contents from adjoining cells
If the Jupyter index is present, we'll use that to check if the
surrounding lines belong to the same cell as the content line. If not,
we'll skip that line until we either reach one that does or we reach the
content line.
> code frame line number
If the Jupyter index is present, we'll use that to get the actual start
line corresponding to the computed start index.
## Test Plan
`cargo run --bin ruff -- check --no-cache --isolated --select=ALL --show-source /path/to/notebook.ipynb`
fixes: #5395
## Summary
The `Y053` rule of `flake8-pyi` ignores docstrings; it only triggers on
other string literals.
The separate `Y021/PYI021` rule exists to disallow docstrings.
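For context, a rough illustration of the distinction (assuming `PYI053`'s
usual complaint about overly long string literals in stubs):
```python
def f() -> None:
    """This docstring is comfortably longer than fifty characters, but Y053 ignores it."""

X: str = "this non-docstring literal is also comfortably longer than fifty characters"  # PYI053
```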
## Test Plan
Added some `# OK` test cases to `PYI053.py(i)` files.
## Summary
Implement Pylint rule `single-string-used-for-slots` (`C0205`) as
`single-string-slots` (`PLC0205`). This rule checks for single strings
being assigned to `__slots__`. For example
```python
class Foo:
__slots__: str = "bar"
def __init__(self, bar: str) -> None:
self.bar = bar
```
should be
```python
class Foo:
__slots__: tuple[str, ...] = ("bar",)
def __init__(self, bar: str) -> None:
self.bar = bar
```
Related to #970. Includes documentation.
## Test Plan
`cargo test`
## Summary
Replace an equals-sign underline of the same length with a dash underline
in D407.
Do we want to update the message and autofix title to reflect this
change?
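Roughly, the fix now rewrites an equals-sign underline of matching length
into the dashed underline that `D407` expects (a sketch; the exact fix
output may differ):
```python
# Before
def f():
    """Do something.

    Returns
    =======
    int
    """

# After the fix
def f():
    """Do something.

    Returns
    -------
    int
    """
```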
## Test Plan
Added test cases for:
- Equal line length == dash line length
- Equal line length != dash line length
fixes: #5378
## Summary
This PR contains the first step towards enabling robust first-party,
third-party, and standard library import resolution in Ruff (including
support for `typeshed`, stub files, native modules, etc.) by porting
Pyright's import resolver to Rust.
The strategy taken here was to start with a more-or-less direct port of
Pyright's TypeScript resolver. The code is intentionally similar,
and the test suite is effectively a superset of Pyright's test suite for
its own resolver. Due to the nature of the port, the code is very, very
non-idiomatic for Rust. The code is also entirely unused outside of the
test suite, and no effort has been made to integrate it with the rest of
the codebase.
Future work will include:
- Refactoring the code (now that it works) to match Rust and Ruff
idioms.
- Further testing, in practice, to ensure that the resolver can resolve
imports in a complex project, when provided with a virtual environment
path.
- Caching, to minimize filesystem lookups and redundant resolutions.
- Integration into Ruff itself (use Ruff's existing settings, find rules
that can make use of robust resolution, etc.)
`ruff_dev repeat` recently broke (I think with the cargo update?):
> thread 'main' panicked at 'Command repeat: Short option names must be
unique for each argument, but '-n' is in use by both 'no_cache' and
'repeat''
This PR fixes that by removing the short argument.
## Summary
This formats call expressions with magic trailing comma and parentheses
behaviour, but without call chaining.
## Test Plan
Lots of new test fixtures, including some that don't work yet
## Summary
Implements PERF203 from #4789, which flags a `try`/`except` block inside
a loop. Not sure if we want to extend the diagnostic to the `except` as
well, but I thought that that may get a little messy. We may also want
to just flag the word `try` - open to suggestions though.
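A minimal instance of the pattern being flagged (exact diagnostic
placement aside):
```python
values = ["1", "2", "oops"]
parsed = []
for value in values:
    try:  # PERF203: try/except inside a loop adds per-iteration overhead
        parsed.append(int(value))
    except ValueError:
        parsed.append(0)
```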
## Test Plan
`cargo test`
## Summary
Experimental release for Jupyter Notebook integration.
Currently, this requires a user to explicitly opt in using the
[include](https://beta.ruff.rs/docs/settings/#include) configuration:
```toml
[tool.ruff]
include = ["*.py", "*.pyi", "**/pyproject.toml", "*.ipynb"]
```
Or, a user can pass in the file directly:
```sh
ruff check path/to/notebook.ipynb
```
For known limitations, please refer to #5188.
## Test Plan
The following command should work without the `--all-features` flag:
```sh
cargo dev round-trip /path/to/notebook.ipynb
```
The following command should work with the above config file along with
`select = ["ALL"]`:
```sh
cargo run --bin ruff -- check --no-cache --config=../test-repos/openai-cookbook/pyproject.toml --fix ../test-repos/openai-cookbook/
```
Passing the Jupyter notebook directly:
```sh
cargo run --bin ruff -- check --no-cache --isolated --select=ALL --fix ../test-repos/openai-cookbook/examples/Classification_using_embeddings.ipynb
```
## Summary
Add documentation to the `D1XX` rules that flag missing docstrings.
The examples are quite long and docstring practices vary a lot between
projects, so I thought it would be best that the documentation for these
rules be their own PR, separate from the other `pydocstyle` rules.
Related to #2646.
## Test Plan
`python scripts/check_docs_formatted.py`
## Summary
This PR extends the string formatting to respect the configured quote style.
## Test Plan
Extended the string test with new cases and set it up to run twice: once with `quote_style: Double` and once with `quote_style: Single`.
## Summary
This PR adds tests that verify that the magic trailing comma is not respected if disabled in the formatter options.
Our test setup now allows creating a `<fixture-name>.options.json` file that contains an array of configurations that should be tested.
## Test Plan
It's all about tests :)
## Summary
This PR adds a new `PyFormatOptions` struct that stores the Python formatter options.
The new options aren't used yet, with the exception of magical trailing commas and the options passed to the printer.
I'll follow up with more PRs that use the new options (e.g. `QuoteStyle`).
## Test Plan
`cargo test`. I'll follow up with a new PR that adds support for overriding the options in our fixture tests.
## Motivation
Previously,
```python
x = (
a1
.a2
# a
. # b
# c
a3
)
```
got formatted as
```python
x = a1.a2
# a
. # b
# c
a3
```
which is invalid syntax. This fixes that.
## Summary
This implements a basic form of attribute chaining
(<https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#call-chains>)
by checking if any inner attribute access contains an own line comment,
and if this is the case, adds parentheses around the outermost attribute
access while disabling parentheses for all inner attribute expressions.
We want to replace this with an implementation that uses recursion or a
stack while formatting instead of in `needs_parentheses`, and also
includes calls sooner rather than later, but I'm fixing this now because
I'm uncomfortable with having known invalid syntax generation in the
formatter.
## Test Plan
I added new fixtures.
## Summary
Add documentation to the `D3XX` rules that check for issues with
docstring quotes. Related to #2646.
## Test Plan
`python scripts/check_docs_formatted.py`
## Summary
Ignore type aliases for RUF013 to avoid flagging false positives:
```python
from typing import Optional
MaybeInt = Optional[int]
def f(arg: MaybeInt = None):
pass
```
But, at the expense of having false negatives:
```python
Text = str | bytes
def f(arg: Text = None):
pass
```
## Test Plan
`cargo test`
fixes: #5295
## Summary
This is a small refactoring to reuse the code that detects the magic
trailing comma across functions. I make this change now to avoid copying
code in a later PR. @MichaReiser is planning on making a larger
refactoring later that integrates with the join nodes builder
## Test Plan
No functional changes. The magic trailing comma behaviour is checked by
the fixtures.
## Summary
When visiting AugAssign in evaluation order, the AugAssign `target`
should be visited after its `value`. Based on my testing, the pseudo
code for `a += b` is effectively:
```python
tmp = a
a = tmp.__iadd__(b)
```
That is, an ideal traversal order would look something like this:
1. load a
2. b
3. op
4. store a
But, there is only a single AST node which captures `a` in the statement
`a += b`, so it cannot be traversed both before and after the traversal
of `b` and the `op`.
Nonetheless, I think traversing `a` after `b` and the `op` makes the
most sense for a number of reasons:
1. All the other assignment expressions traverse their `value`s before
their `target`s. Having `AugAssign` traverse in the same order would be
more consistent.
2. Within the AST, the `ctx` of the `target` for an `AugAssign` is
`Store` (though technically this is a `Load` and `Store` operation, the
AST only indicates it as a `Store`). Since the the store portion of the
`AugAssign` occurs last, I think it makes sense to traverse the `target`
last as well.
The effect of this is marginal, but it may have an impact on the
behavior of #5271.
## Summary
And remove cached files that we haven't seen for a certain period of
time, currently 30 days.
For the last-seen timestamp we actually use a `u64`: it's smaller on
disk than `SystemTime` (whose size is OS-dependent) and fits in an
`AtomicU64`, which we can use to update it without locks.
## Test Plan
Added a new unit test, run by `cargo test`.
In the following code, the comment used to get wrongly associated with
the `if False` since it looked like an `elif`. This fixes it by checking
the indentation, and adds a regression test.
```python
if True:
pass
else: # Comment
if False:
pass
pass
```
Originally found in
1570b94a02/gradio/external.py (L478)
## Summary
This PR implements formatting for non-f-string strings that do not use implicit concatenation.
Docstring formatting is out of the scope of this PR.
## Test Plan
I added a few tests for simple string literals.
## Performance
Ouch. This is hitting performance somewhat hard. This is probably because we now iterate each string a couple of times:
1. To detect if it is an implicit string continuation
2. To detect if the string contains any new lines
3. To detect the preferred quote
4. To normalize the string
Edit: I integrated the detection of newlines into the preferred quote detection so that we only iterate the string three times.
We can probably do better by merging the implicit string continuation with the quote detection and newline detection by iterating to the end of the string part and returning the offset. We then use our simple tokenizer to skip over any comments or whitespace until we find the first non-trivia token. From there we continue doing this in a loop until we reach the end of the string. I'll leave this improvement for later.
## Summary
This PR adds basic formatting for compare operations.
The implementation currently breaks differently when nesting binary-like expressions. I haven't yet figured out what Black's logic is in that case, but I think that this by itself is already an improvement worth merging.
## Test Plan
I added a few new tests
## Summary
The `Visitor` and `preorder::Visitor` traits provide some convenience
functions, `visit_annotation` and `visit_format_spec`, for handling
annotation and format spec expressions respectively. Both of these
functions accept an `&Expr` and have a default implementation which
delegates to `walk_expr`. The problem with this approach is that any
custom handling done in `visit_expr` will be skipped for annotations and
format specs. Instead, to capture any custom logic implemented in
`visit_expr`, both of these functions' default implementations should
delegate to `visit_expr` instead of `walk_expr`.
## Example
Consider the below `Visitor` implementation:
```rust
impl<'a> Visitor<'a> for Example<'a> {
fn visit_expr(&mut self, expr: &'a Expr) {
match expr {
Expr::Name(ExprName { id, .. }) => println!("Visiting {:?}", id),
_ => walk_expr(self, expr),
}
}
}
```
Run on the following Python snippet:
```python
a: b
```
I would expect such a visitor to print the following:
```
Visiting b
Visiting a
```
But it instead prints the following:
```
Visiting a
```
Our custom `visit_expr` handler is not invoked for the annotation.
## Test Plan
Tests added in #5271 caught this behavior.
## Summary
Move `collection-literal-concatenation` markdown documentation to the
correct place.
Fixes error in #5262.
## Test Plan
`python scripts/check_docs_formatted.py`
## Summary
Adds PERF101 which checks for unnecessary casts to `list` in for loops.
NOTE: This is not fully equal to its upstream implementation, as this
implementation does not flag based on type annotations
(e.g.):
```python
def foo(x: List[str]):
for y in list(x):
...
```
With the current set-up it's quite hard to get the annotation for a
function arg from its binding. That problem is best considered more
broadly than in this implementation.
## Test Plan
Added fixture.
## Issue links
Refers: https://github.com/astral-sh/ruff/issues/4789
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This contains three changes:
* repos in `check_ecosystem.py` are stored as `org:name` instead of
`org/name` to create a flat directory layout
* `check_ecosystem.py` performs a maximum of 50 parallel jobs at the
same time to avoid consuming too much RAM
* `check-formatter-stability` gets a new option `--multi-project` so
it's possible to do `cargo run --bin ruff_dev --
check-formatter-stability --multi-project target/checkouts`
With these three changes it becomes easy to check the formatter
stability over a larger number of repositories. This is part of
integrating formatter regression checks into the ecosystem checks.
## Test Plan
```shell
python scripts/check_ecosystem.py --checkouts target/checkouts --projects github_search.jsonl -v $(which true) $(which true)
cargo run --bin ruff_dev -- check-formatter-stability --multi-project target/checkouts
```
## Summary
Remove recommendations to replace
`typing_extensions.dataclass_transform` and
`typing_extensions.SupportsIndex` with their `typing` library
counterparts.
Closes #5112.
## Test Plan
Added extra checks to the test fixture.
`cargo test`
## Summary
This snippet used to panic because the formatter expected to see a comma
or something similar after the `2`, but instead encountered the closing
parenthesis, which is not part of the range:
```python
a = {
1: (2),
# comment
3: True,
}
```
Originally found in
636a717ef0/testing/marionette/client/marionette_driver/geckoinstance.py (L109)
This snippet is also the test plan.
This solves an instability when formatting CPython. It also introduces
another one, but I think it's still a worthwhile change for now.
There's no proper testing since this is just a dummy.