## Summary
I initially wanted this category to be more general and decoupled from
the plugin, but I got some feedback that the titling felt inconsistent
with the others.
## Summary
This is a proper fix for the issue patched over in
https://github.com/astral-sh/ruff/pull/5229, thanks to an extremely
helpful repro from @tlambert03 in that thread. It looks like we were
using the keys of `package_roots` rather than the values to initialize
the cache -- but it's a map from package to package root.
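A minimal sketch of the mix-up, with hypothetical paths (the real map lives in the cache-initialization code):
```python
# Hypothetical map from package directory to its resolved package root.
package_roots = {
    "repo/src/pkg_a": "repo/src",
    "repo/src/pkg_b": "repo/src",
}

# Buggy: seeds the cache from the package directories (the keys)...
caches = {path: {} for path in package_roots.keys()}
# ...instead of from the package roots (the values):
caches = {root: {} for root in package_roots.values()}
```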
## Test Plan
Reverted #5229, then ran through the plan that @tlambert03 included in
https://github.com/astral-sh/ruff/pull/5229#issuecomment-1599723226.
Verified that the panic occurred before, but not after, this change.
## Summary
This PR reverts #4971 (aba073a791). It
turns out that `f"{str(x)}"` and `f"{x}"` are often, but not always,
equivalent, and performing that conversion automatically can lead to
subtle bugs. See the discussion in
https://github.com/astral-sh/ruff/issues/4958.
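One concrete way they can diverge: `f"{x}"` calls `x.__format__`, while `f"{str(x)}"` always goes through `__str__`:
```python
class Widget:
    def __str__(self) -> str:
        return "via __str__"

    def __format__(self, spec: str) -> str:
        return "via __format__"

w = Widget()
print(f"{str(w)}")  # via __str__
print(f"{w}")       # via __format__ -- not equivalent!
```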
## Summary
I haven't been able to determine why / when this is happening, but in
some cases, users are reporting that this `unwrap()` is causing a panic.
It's fine to just return `None` here and fall back to "No cache",
certainly better than panicking (while we figure out the edge case).
Closes #5225.
Closes #5228.
## Summary
I'm looking into the Black stability tests, and here's one failing case.
We split `assert a and (b and c)` into:
```python
assert a
assert (b and c)
```
We fail to split `assert (b and c)` due to the parentheses. But Black
then removes them, and when running Ruff again, we get:
```python
assert a
assert b
assert c
```
This PR just enables us to apply this fix in one pass.
## Summary
A new CLI option (`-o`/`--output-file`) to write output to a file
instead of stdout.
The major change is to remove the lock acquired on stdout. The argument
is that the output is buffered, so the lock is acquired only when
writing a block (8 KB). As per the benchmark below, there is a slight
performance penalty.
Reference:
https://rustmagazine.org/issue-3/javascript-compiler/#printing-is-slow
## Benchmarks
_Output is truncated to only contain useful information:_
Command: `check --isolated --no-cache --select=ALL --show-source ./test-repos/cpython`
Latest HEAD (361d45f2b2) with and without
the manual lock on stdout:
```console
Benchmark 1: With lock
Time (mean ± σ): 5.687 s ± 0.075 s [User: 17.110 s, System: 0.486 s]
Range (min … max): 5.615 s … 5.860 s 10 runs
Benchmark 2: Without lock
Time (mean ± σ): 5.719 s ± 0.064 s [User: 17.095 s, System: 0.491 s]
Range (min … max): 5.640 s … 5.865 s 10 runs
Summary
(1) ran 1.01 ± 0.02 times faster than (2)
```
This PR:
```console
Benchmark 1: This PR
Time (mean ± σ): 5.855 s ± 0.058 s [User: 17.197 s, System: 0.491 s]
Range (min … max): 5.786 s … 5.987 s 10 runs
Benchmark 2: Latest HEAD with lock
Time (mean ± σ): 5.645 s ± 0.033 s [User: 16.922 s, System: 0.495 s]
Range (min … max): 5.600 s … 5.712 s 10 runs
Summary
(2) ran 1.04 ± 0.01 times faster than (1)
```
## Test Plan
Run all of the commands which give output with and without the
`--output-file=ruff.out` option:
* `--show-settings`
* `--show-files`
* `--show-fixes`
* `--diff`
* `--select=ALL`
* `--select=ALL --show-source`
* `--watch` (only stdout allowed)
resolves: #4754
## Summary
Black supports four layouts when it comes to breaking binary expressions:
```rust
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
enum BinaryLayout {
    /// Put each operand on their own line if either side expands
    Default,

    /// Try to expand the left to make it fit. Add parentheses if the left or right don't fit.
    ///
    /// ```python
    /// [
    ///     a,
    ///     b
    /// ] & c
    /// ```
    ExpandLeft,

    /// Try to expand the right to make it fit. Add parentheses if the left or right don't fit.
    ///
    /// ```python
    /// a & [
    ///     b,
    ///     c
    /// ]
    /// ```
    ExpandRight,

    /// Both the left and right side can be expanded. Try in the following order:
    /// * expand the right side
    /// * expand the left side
    /// * expand both sides
    ///
    /// to make the expression fit
    ///
    /// ```python
    /// [
    ///     a,
    ///     b
    /// ] & [
    ///     c,
    ///     d
    /// ]
    /// ```
    ExpandRightThenLeft,
}
```
Our current implementation only handles `ExpandRight` and `Default` correctly. `ExpandLeft` turns out to be surprisingly hard. This PR adds a new `BestFittingMode` parameter to `BestFitting` to support `ExpandLeft`.
There are 3 variants that `ExpandLeft` must support:
**Variant 1**: Everything fits on the line (easy)
```python
[a, b] + c
```
**Variant 2**: Left breaks, but right fits on the line. Doesn't need parentheses
```python
[
    a,
    b
] + c
```
**Variant 3**: The left breaks, but there's still not enough space for the right hand side. Parenthesize the whole expression:
```python
(
    [
        a,
        b
    ]
    + ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
)
```
Solving Variants 1 and 2 on their own is straightforward. The printer gives us this behavior by nesting right inside of the group of left:
```
group(&format_args![
    if_group_breaks(&text("(")),
    soft_block_indent(&group(&format_args![
        left,
        soft_line_break_or_space(),
        op,
        space(),
        group(&right)
    ])),
    if_group_breaks(&text(")"))
])
```
The fundamental problem is that the outer group, which adds the parentheses, always breaks if the left side breaks. That means we end up with
```python
(
    [
        a,
        b
    ] + c
)
```
which is not what we want (we only want parentheses if the right side doesn't fit).
Okay, so nesting groups don't work because of the outer parentheses. Sequencing groups doesn't work because it results in a right-to-left breaking which is the opposite of what we want.
Could we use best fitting? Almost!
```
best_fitting![
    // All flat
    format_args![left, space(), op, space(), right],
    // Break left
    format_args![group(&left).should_expand(true), space(), op, space(), right],
    // Break all
    format_args![
        text("("),
        block_indent(&format_args![
            left,
            hard_line_break(),
            op,
            space(),
            right
        ]),
        text(")")
    ]
]
```
I hope I managed to write this up correctly. The problem is that the printer never reaches the 3rd variant because the second variant always fits:
* The `group(&left).should_expand(true)` changes the group so that all `soft_line_breaks` are turned into hard line breaks. This is necessary because we want to test if the content fits if we break after the `[`.
* Now, the whole idea of `best_fitting` is that you can pretend that some content fits on the line when it actually does not. The way this works is that the printer **only** tests if all the content of the variant **up to** the first line break fits on the line (we insert that line break by using `should_expand(true)`). The printer doesn't care whether the rest `a\n, b\n ] + c` all fits on (multiple?) lines.
Why does breaking right work but not breaking the left? The difference is that we can make the decision whether to parenthesize the expression based on the left expression alone. We can't do this when breaking the left, because the decision whether to insert parentheses would depend on a lookahead: will the right side break? We simply don't know this yet when printing the parentheses (it would work for the right parenthesis, but not for the left parenthesis and the indent).
What we kind of want here is to tell the printer: Look, what comes here may or may not fit on a single line but we don't care. Simply test that what comes **after** fits on a line.
This PR adds a new `BestFittingMode` that has a new `AllLines` option that gives us the desired behavior of testing all content and not just up to the first line break.
## Test Plan
I added a new example to `BestFitting::with_mode`.
## Summary
Completes the documentation for the `flake8-bugbear` ruleset. Related to
#2646.
## Test Plan
`python scripts/check_docs_formatted.py`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Now that all identifiers include ranges (#5194), we can remove a ton of
the "custom lexing" code that we used to sketchily extract identifier
ranges from the source.
## Test Plan
`cargo test`
## Summary
In https://github.com/astral-sh/RustPython-Parser/pull/8, we modified
RustPython to include ranges for any identifiers that aren't
`Expr::Name` (which already carries a range).
For example, the `e` in `except ValueError as e` was previously
un-ranged. To extract its range, we had to do some lexing of our own.
This change should improve performance and let us remove a bunch of
code.
## Test Plan
`cargo test`
## Summary
Open cache files in parallel (again), bringing the performance back to roughly equal to the old implementation.
## Test Plan
Existing tests should keep working.
## Summary
Handle the trailing newline in a Jupyter Notebook's JSON string similarly
to how `black` does it.
## Test Plan
Add test cases for when the JSON string for a notebook ends with and
without a newline.
resolves: #5190
## Summary
Fixes #4404.
Consider this file:
```python
if True:
    x = 1; \
<space><space><space>
```
The current implementation of W293 removes the 3 spaces on the final line. This
fix changes the file to:
```python
if True:
    x = 1; \
```
A file can't end in a `\`, according to Python's [lexical
analysis](https://docs.python.org/3/reference/lexical_analysis.html), so
subsequent iterations of the autofixer fail (the AST-based ones
specifically, since they depend on a valid syntax tree and get
re-parsed).
This patch examines the line before the line flagged by `W293`. If that
line ends in a `\`, the patch extends the diagnostic's fix range
backwards to cover the trailing `\` as well; that is, it deletes all
whitespace and any trailing `\`s up to the last remaining
non-whitespace character on the previous line.
## Test Plan
Ran `cargo run -p ruff_cli -- ~/Downloads/aa.py --fix --select W293,D100
--no-cache` against the above file. This resulted in:
```
/Users/evan/Downloads/aa.py:1:1: D100 Missing docstring in public module
Found 2 errors (1 fixed, 1 remaining).
```
The file's contents, after the fix:
```python
if True:
    x = 1;<space>
```
The `\` was removed, leaving the terminal space. The space should be
handled by `Rule::TrailingWhitespace`, not `BlankLineWithWhitespace`.
## Summary
This PR modifies our statement deletion logic to delete any preceding
continuation lines.
For example, given:
```py
x = 1; \
import os
```
We'll now rewrite to:
```py
x = 1;
```
In addition, the logic can now handle multiple preceding continuations
(which is unlikely, but valid).
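For example, the deletion logic now walks back through stacked continuations:
```py
x = 1; \
\
import os
```
Removing `import os` again yields:
```py
x = 1;
```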
## Summary
This PR upgrades RustPython to pull in the changes to `Arguments` (zip
defaults with their identifiers) and all the renames to `CmpOp` and
friends.
## Summary
I want to include URLs to the rule documentation in the LSP (the LSP has
a native `code_description` field for this, which, if specified, causes
the source to be rendered as a link to the docs). This PR exposes the
URL to the documentation in the Ruff JSON output.
## Summary
Maintain consistency while serializing Jupyter notebooks to JSON. The
following changes were made:
1. Use a string array to store the source value, as that's the default
(5781720423/nbformat/v4/nbjson.py (L56-L57))
2. Remove unused structs and enums
3. Reorder the keys in alphabetical order as that's the default.
(5781720423/nbformat/v4/nbjson.py (L51))
### Side effect
Removing the `preserve_order` feature means that the order of keys in
JSON output (`--format json`) will be in alphabetical order. This is
because the value is represented using `serde_json::Value` which
internally is a `BTreeMap`, thus sorting it as per the string key. For
posterity if this turns out to be not ideal, then we could define a
struct representing the JSON object and the order of struct fields will
determine the order in the JSON string.
## Test Plan
Add a test case to assert the raw JSON string.
## Summary
This changes the caching design from one cache file per source file, to
one cache file per package. This greatly reduces the number of cache
files that are opened and written, while maintaining roughly the same
(combined) size, as bincode is very compact.
Below are some decidedly unscientific performance tests. They use the
following projects/sources:
* small.py: a single 31-byte Python file with 2 errors.
* test.py: a single 43k Python file with 8 errors.
* fastapi: the FastAPI repo, 1134 files checked, 0 errors.
Source | Before # files | After # files | Before size | After size
-------|-------|-------|-------|-------
small.py | 1 | 1 | 20 K | 20 K
test.py | 1 | 1 | 60 K | 60 K
fastapi | 1134 | 518 | 4.5 M | 2.3 M
One question that might come up: why does fastapi still have 518 cache
files and not one? That's because this uses the existing package
resolution, which sees examples, docs, etc. as separate from the "main"
source code (the fastapi directory in the repo). In the future it might
be worth considering a switch to one cache file per repo.
This new design is not perfect and does have a number of known issues.
First, like the old design it doesn't remove the cache for a source file
that has been (re)moved until `ruff clean` is called.
Second, this currently uses a large mutex around the mutation of the
package cache (e.g. inserting result). This could be (or become) a
bottleneck. It's future work to test and improve this (if needed).
Third, the packages are currently opened and stored in a sequential
loop; this could be done in parallel. This is also future work.
## Test Plan
Run `ruff check` (with caching enabled) twice on any Python source code
and it should produce the same results.
## Summary
We want to ensure that once formatted content stays the same when
formatted again, which is known as formatter stability or formatter
idempotency, and that the formatter prints syntactically valid code. As
our test cases cover only a limited amount of code, this allows checking
entire repositories.
This adds a new subcommand to `ruff_dev` which can be invoked as `cargo
run --bin ruff_dev -- check-formatter-stability <repo>`. While initially
only intended to check stability, it has also found cases where the
formatter printed invalid syntax or panicked.
## Test Plan
Running this on cpython is already identifying bugs
(https://github.com/astral-sh/ruff/pull/5089)
This documentation change improves the section on dangling comments in
the formatter.
---------
Co-authored-by: David Szotten <davidszotten@gmail.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This adds a new subcommand that can be used as
```shell
cargo build --bin ruff_dev --profile=release-debug
perf record -g -F 999 target/release-debug/ruff_dev repeat --repeat 30 --exit-zero --no-cache path/to/cpython > /dev/null
flamegraph --perfdata perf.data
```
## Test Plan
This is a Ruff-internal script. I successfully used it to profile
cpython with the instructions above.
## Summary
We weren't resetting the `allow_ellipsis` flag properly, which
ultimately caused us to treat the semicolon as "unnecessary" rather than
"creating a multi-statement line".
Closes #5154.
## Summary
Given:
```python
\
import os
```
Deleting `import os` leaves a syntax error: a file can't end in a
continuation. We have code to handle this case, but it failed to pick up
continuations at the _very start_ of a file.
Closes #5156.
## Summary
This PR moves the "unconventional import alias" rule (which enforces,
e.g., that `pandas` is imported as `pd`) to the "dead scopes" phase,
after the main linter pass. This (1) avoids an allocation since we no
longer need to create the qualified name in the linter pass; and (2)
will allow us to autofix it, since we'll have access to all references.
## Test Plan
`cargo test` -- all changes are to ranges (which are improvements IMO).
## Summary
Format `continue` statement.
## Test Plan
`continue` is used already in some tests, but if a new test is needed I
could add it.
---------
Co-authored-by: konstin <konstin@mailbox.org>
This tackles three problems:
* pre-commit was slow because it ran cargo commands
* it wasn't clear what you need to run to get your PR to pass CI (and
those commands weren't fast)
* you had to compile and run `cargo dev generate-all` separately, which
was slow
The first change is to remove all cargo commands except running ruff
itself from pre-commit. With `cargo run --bin ruff` already compiled it
takes about 7s on my machine. It would make sense to also use the ruff
pre-commit action here even if we're then lagging a release behind for
checking ruff on ruff.
The contributing guide is now clear about what you need to run:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings # Linting...
RUFF_UPDATE_SCHEMA=1 cargo test # Testing and updating ruff.schema.json
pre-commit run --all-files # rust and python formatting, markdown and python linting, etc.
```
Example timings from my machine:
`cargo clippy --workspace --all-targets --all-features -- -D warnings`:
23s
`RUFF_UPDATE_SCHEMA=1 cargo test`: 2min (recompiling), 1min (no code
changes, this is mainly doc tests)
`pre-commit run --all-files`: 7s
The exact numbers don't matter so much as the approximate experience (6s
is easier to just wait out than 1 min, especially if you need to fix and
rerun). The biggest remaining block seems to be doc tests; I'm surprised
I didn't find any solution to speeding them up (nextest simply doesn't
run them at all). Also note that the formatter has its own tests, which
are much faster since they avoid linking ruff (`cargo test
ruff_python_formatter`).
The third change is to enable `cargo test` to update the schema. Similar
to `INSTA_UPDATE=always`, I've added `RUFF_UPDATE_SCHEMA=1` (name open
to bikeshedding), so `RUFF_UPDATE_SCHEMA=1 cargo test` updates the
schema, while `cargo test` still fails as expected if the repo isn't
up-to-date.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Completes the documentation for the `flake8-blind-except` and
`flake8-raise` rules.
Related to #2646.
## Test Plan
`python scripts/check_docs_formatted.py`
## Summary
After #5140, I audited the codebase for similar patterns (defining a
list of `CallPath` entities in a static vector, then looping over them
to pattern-match). This PR migrates all other such cases to use `match`
and `matches!` where possible.
There are a few benefits to this:
1. It more clearly denotes the intended semantics (branches are
exclusive).
2. The compiler can help deduplicate the patterns and detect unreachable
branches.
3. Performance: in the benchmark below, the all-rules performance is
increased by nearly 10%...
## Benchmarks
I decided to benchmark against a large file in the Airflow repository
with a lot of type annotations
([`views.py`](https://raw.githubusercontent.com/apache/airflow/f03f73100e8a7d6019249889de567cb00e71e457/airflow/www/views.py)):
```
linter/default-rules/airflow/views.py
time: [10.871 ms 10.882 ms 10.894 ms]
thrpt: [19.739 MiB/s 19.761 MiB/s 19.781 MiB/s]
change:
time: [-2.7182% -2.5687% -2.4204%] (p = 0.00 < 0.05)
thrpt: [+2.4805% +2.6364% +2.7942%]
Performance has improved.
linter/all-rules/airflow/views.py
time: [24.021 ms 24.038 ms 24.062 ms]
thrpt: [8.9373 MiB/s 8.9461 MiB/s 8.9527 MiB/s]
change:
time: [-8.9537% -8.8516% -8.7527%] (p = 0.00 < 0.05)
thrpt: [+9.5923% +9.7112% +9.8342%]
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
```
The impact is dramatic -- nearly a 10% improvement for `all-rules`.
## Summary
This PR fixes a small quirk in the semantic model. Typically, when we
see an import, like `import foo`, we create a `BindingKind::Importation`
for it. However, if `foo` has been declared as a `global`, then we
propagate the kind forward. So given:
```python
global foo
import foo
```
We'd create two bindings for `foo`, both with type `global`.
This was originally borrowed from Pyflakes, and it exists to help avoid
false-positives like:
```python
def f():
global foo
# Don't mark `foo` as "assigned but unused"! It's a global!
foo = 1
```
This PR removes that behavior, and instead tracks "Does this binding
refer to a global?" as a flag. This is much cleaner, since it means we
don't "lose" the identity of various bindings.
As a very strange example of why this matters, consider:
```python
def foo():
global Member
from module import Member
x: Member = 1
```
`Member` is only used in a typing context, so we should flag it and say
"move it to a `TYPE_CHECKING` block". However, when we go to analyze
`from module import Member`, it has `BindingKind::Global`. So we don't
even know that it's an import!
## Summary
In #5074, we introduced an abstraction to support local symbol renames
("local" here refers to "within a module"). However, that abstraction
didn't support `global` and `nonlocal` symbols. This PR extends it to
those cases.
Broadly, there are two considerations.
First, we may be renaming a symbol in a scope in which it is declared
`global` or `nonlocal`. For example, given:
```python
x = 1
def foo():
global x
```
Then when renaming `x` in `foo`, we need to detect that it's `global`
and instead perform the rename starting from the module scope.
Second, when renaming a symbol, we need to determine the scopes in which
it is declared `global` or `nonlocal`. This is effectively the inverse
of the above: when renaming `x` in the module scope, we need to detect
that we should _also_ rename `x` in `foo`.
To support these cases, the renaming algorithm was adjusted as follows:
- When we start a rename in a scope, determine whether the symbol is
declared `global` or `nonlocal` by looking for a `global` or `nonlocal`
binding. If it is, start the rename in the defining scope. (This
requires storing the defining scope on the `nonlocal` binding, which is
new.)
- We then perform the rename in the defining scope.
- We then check whether the symbol was declared as `global` or
`nonlocal` in any scopes, and perform the rename in those scopes too.
(Thankfully, this doesn't need to be done recursively.)
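As a Python-level illustration of why renames must propagate in both directions: a `global` declaration aliases the module-level binding, so a rename in either scope has to touch both.
```python
x = 1

def foo():
    global x
    x = 2

foo()
print(x)  # 2 -- renaming `x` in `foo` but not at module level (or vice
          # versa) would change this program's behavior
```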
Closes #5092.
## Test Plan
Added some additional snapshot tests.
## Summary
This PR enables autofix behavior for the `flake8-pyi` rule that asks you
to alias `Set` to `AbstractSet` when importing `collections.abc.Set`.
It's not the most important rule, but it's a good isolated test-case for
local symbol renaming.
The renaming algorithm is outlined in-detail in the `renamer.rs` module.
But to demonstrate the behavior, here's the diff when running this fix
over a complex file that exercises a few edge cases:
```diff
--- a/foo.pyi
+++ b/foo.pyi
@@ -1,16 +1,16 @@
 if True:
-    from collections.abc import Set
+    from collections.abc import Set as AbstractSet
 else:
-    Set = 1
+    AbstractSet = 1

-x: Set = set()
+x: AbstractSet = set()

-x: Set
+x: AbstractSet

-del Set
+del AbstractSet

 def f():
-    print(Set)
+    print(AbstractSet)

 def Set():
     pass
```
Making this work required resolving a bunch of edge cases in the
semantic model that were causing us to "lose track" of references. For
example, the above wasn't possible with our previous approach to
handling deletions (#5071). Similarly, the `x: Set` "delayed annotation"
tracking was enabled via #5070. And many of these edits would've failed
if we hadn't changed `BindingKind` to always match the identifier range
(#5090). So it's really the culmination of a bunch of changes over the
course of the week.
The main outstanding TODO is that this doesn't support `global` or
`nonlocal` usages. I'm going to take a look at that tonight, but I'm
comfortable merging this as-is.
Closes #1106.
Closes #5091.
## Summary
I noticed that we have a few hot comparisons that involve calling
`s.to_lowercase()`. We can avoid an allocation by comparing characters
directly.
## Summary
@konstin mentioned that in profiling, this function accounted for a
non-trivial amount of time (0.33% of total execution, the most of any
rule). This PR attempts to rewrite it as a match statement for better
performance over a looping comparison.
## Test Plan
`cargo test`
## Summary
If you `import __future__`, it's not subject to the same rules as `from
__future__ import feature` -- i.e., this is fine:
```python
x = 1
import __future__
```
It doesn't really make sense to treat these as `__future__` imports
(though I can't imagine anyone ever does this anyway).
## Summary
At present, when we store a binding, we include a `TextRange` alongside
it. The `TextRange` _sometimes_ matches the exact range of the
identifier to which the `Binding` is linked, but... not always.
For example, given:
```python
x = 1
```
The binding we create _will_ use the range of `x`, because the left-hand
side is an `Expr::Name`, which has a valid range on it.
However, given:
```python
try:
pass
except ValueError as e:
pass
```
When we create a binding for `e`, we don't have a `TextRange`... The AST
doesn't give us one. So we end up extracting it via lexing.
This PR extends that pattern to the rest of the binding kinds, to ensure
that whenever we create a binding, we always use the range of the bound
name. This leads to better diagnostics in cases like pattern matching,
whereby the diagnostic for "unused variable `x`" here used to include
`*x`, instead of just `x`:
```python
def f(provided: int) -> int:
    match provided:
        case [_, *x]:
            pass
```
This is _also_ required for symbol renames, since we track writes as
bindings -- so we need to know the ranges of the bound symbols.
By storing these bindings precisely, we can also remove the
`binding.trimmed_range` abstraction -- since bindings already use the
"trimmed range".
To implement this behavior, I took some of our existing utilities (like
the code we had for `except ValueError as e` above), migrated them from
a full lexer to a zero-allocation lexer that _only_ identifies
"identifiers", and moved the behavior into a trait, so we can now do
`stmt.identifier(locator)` to get the range for the identifier.
Honestly, we might end up discarding much of this if we decide to put
ranges on all identifiers
(https://github.com/astral-sh/RustPython-Parser/pull/8). But even if we
do, this will _still_ be a good change, because the lexer introduced
here is useful beyond names (e.g., we use it find the `except` keyword
in an exception handler, to find the `else` after a `for` loop, and so
on). So, I'm fine committing this even if we end up changing our minds
about the right approach.
Closes #5090.
## Benchmarks
No significant change, with one statistically significant improvement
(-2.1654% on `linter/all-rules/large/dataset.py`):
```
linter/default-rules/numpy/globals.py
time: [73.922 µs 73.955 µs 73.986 µs]
thrpt: [39.882 MiB/s 39.898 MiB/s 39.916 MiB/s]
change:
time: [-0.5579% -0.4732% -0.3980%] (p = 0.00 < 0.05)
thrpt: [+0.3996% +0.4755% +0.5611%]
Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
4 (4.00%) low severe
1 (1.00%) low mild
1 (1.00%) high mild
linter/default-rules/pydantic/types.py
time: [1.4909 ms 1.4917 ms 1.4926 ms]
thrpt: [17.087 MiB/s 17.096 MiB/s 17.106 MiB/s]
change:
time: [+0.2140% +0.2741% +0.3392%] (p = 0.00 < 0.05)
thrpt: [-0.3380% -0.2734% -0.2136%]
Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
linter/default-rules/numpy/ctypeslib.py
time: [688.97 µs 691.34 µs 694.15 µs]
thrpt: [23.988 MiB/s 24.085 MiB/s 24.168 MiB/s]
change:
time: [-1.3282% -0.7298% -0.1466%] (p = 0.02 < 0.05)
thrpt: [+0.1468% +0.7351% +1.3461%]
Change within noise threshold.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) low mild
2 (2.00%) high mild
12 (12.00%) high severe
linter/default-rules/large/dataset.py
time: [3.3872 ms 3.4032 ms 3.4191 ms]
thrpt: [11.899 MiB/s 11.954 MiB/s 12.011 MiB/s]
change:
time: [-0.6427% -0.2635% +0.0906%] (p = 0.17 > 0.05)
thrpt: [-0.0905% +0.2642% +0.6469%]
No change in performance detected.
Found 20 outliers among 100 measurements (20.00%)
1 (1.00%) low severe
2 (2.00%) low mild
4 (4.00%) high mild
13 (13.00%) high severe
linter/all-rules/numpy/globals.py
time: [148.99 µs 149.21 µs 149.42 µs]
thrpt: [19.748 MiB/s 19.776 MiB/s 19.805 MiB/s]
change:
time: [-0.7340% -0.5068% -0.2778%] (p = 0.00 < 0.05)
thrpt: [+0.2785% +0.5094% +0.7395%]
Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high severe
linter/all-rules/pydantic/types.py
time: [3.0362 ms 3.0396 ms 3.0441 ms]
thrpt: [8.3779 MiB/s 8.3903 MiB/s 8.3997 MiB/s]
change:
time: [-0.0957% +0.0618% +0.2125%] (p = 0.45 > 0.05)
thrpt: [-0.2121% -0.0618% +0.0958%]
No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) low severe
3 (3.00%) low mild
5 (5.00%) high mild
2 (2.00%) high severe
linter/all-rules/numpy/ctypeslib.py
time: [1.6879 ms 1.6894 ms 1.6909 ms]
thrpt: [9.8478 MiB/s 9.8562 MiB/s 9.8652 MiB/s]
change:
time: [-0.2279% -0.0888% +0.0436%] (p = 0.18 > 0.05)
thrpt: [-0.0435% +0.0889% +0.2284%]
No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
4 (4.00%) low mild
1 (1.00%) high severe
linter/all-rules/large/dataset.py
time: [7.1520 ms 7.1586 ms 7.1654 ms]
thrpt: [5.6777 MiB/s 5.6831 MiB/s 5.6883 MiB/s]
change:
time: [-2.5626% -2.1654% -1.7780%] (p = 0.00 < 0.05)
thrpt: [+1.8102% +2.2133% +2.6300%]
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
```
## Summary
This fixes a number of problems in the formatter that showed up with
various files in the [cpython](https://github.com/python/cpython)
repository. These problems surfaced as unstable formatting and invalid
code. This is not the entirety of problems discovered through cpython,
but a big enough chunk to separate it. Individual fixes are generally
individual commits. They were discovered with #5055, which I update as I
work through the output.
## Test Plan
I added regression tests with links to cpython for each entry, except
for the two stubs that also got comment stubs since they'll be
implemented properly later.
## Summary
This PR runs `rustfmt` with a few nightly options as a one-time fix to
catch some malformatted comments. I ended up just running with:
```toml
condense_wildcard_suffixes = true
edition = "2021"
max_width = 100
normalize_comments = true
normalize_doc_attributes = true
reorder_impl_items = true
unstable_features = true
use_field_init_shorthand = true
```
These all seem like reasonable things to fix, so may as well while
I'm here.
## Summary
Small update to leverage `get_or_import_symbol` to fix `UP017` in more
cases (e.g., when we need to import `UTC`, or access it from an alias or
something).
## Test Plan
Check out the updated snapshot.
## Summary
This PR consistently uses `matches!` for static `CallPath` comparisons.
In some cases, we can significantly reduce the number of cases or
checks.
## Test Plan
`cargo test`
## Summary
As discussed in Discord, and similar to oxc, we're going to refer to
this as `.semantic()` everywhere.
While I was auditing usages of `model: &SemanticModel`, I also changed
as many function signatures as I could find to consistently take the
model as the _last_ argument, rather than the first.
## Summary
This PR tackles a corner case that we'll need to support local symbol
renaming. It relates to a nuance in how we want to handle annotations
(i.e., `AnnAssign` statements with no value, like `x: int` in a function
body).
When we see a statement like:
```python
x: int
```
We create a `BindingKind::Annotation` for `x`. This is a special
`BindingKind` that the resolver isn't allowed to return. For example,
given:
```python
x: int
print(x)
```
The second line will yield an `undefined-name` error.
So why does this `BindingKind` exist at all? In Pyflakes, to support the
`unused-annotation` lint:
```python
def f():
    x: int  # unused-annotation
```
If we don't track `BindingKind::Annotation`, we can't lint for unused
variables that are only "defined" via annotations.
There are a few other wrinkles to `BindingKind::Annotation`. One is
that, if a binding already exists in the scope, we actually just discard
the `BindingKind`. So in this case:
```python
x = 1
x: int
```
When we go to create the `BindingKind::Annotation` for the second
statement, we notice that (1) we're creating an annotation but (2) the
scope already has a binding for the name -- so we just drop the binding on
the floor. This has the nice property that annotations aren't considered
to "shadow" another binding, which is important in a bunch of places
(e.g., if we have `import os; os: int`, we still consider `os` to be an
import, as we should). But it also means that these "delayed"
annotations are one of the few remaining references that we don't track
anywhere in the semantic model.
This PR adds explicit support for these via a new `delayed_annotations`
attribute on the semantic model. These should be extremely rare, but we
do need to track them if we want to support local symbol renaming.
### This isn't the right way to model this
Here's an alternative:
- Remove `BindingKind::Annotation`, and treat annotations as their own,
separate concept.
- Instead of storing a map from name to `BindingId` on each `Scope`,
store a map from name to... `SymbolId`.
- Introduce a `Symbol` abstraction, where a symbol can point to a
current binding, and a list of annotations, like:
```rust
pub struct Symbol {
    binding: Option<BindingId>,
    annotations: Vec<AnnotationId>,
}
```
If we did this, we could appropriately model the semantics described
above. When we go to resolve a binding, we ignore annotations (always).
When we try to find unused variables, we look through the list of
symbols, and have sufficient information to discriminate between
annotations and bound variables. Etc.
The main downside of this `Symbol`-based approach is that it's going to
take a lot more work to implement, and it'll be less performant (we'll
be storing more data per symbol, and our binding lookups will have an
added layer of indirection).
## Summary
We now _always_ generate fixes, so `FixMode::None` and
`FixMode::Generate` are redundant. We can also remove the TODO around
`--fix-dry-run`, since that's our default behavior.
Closes #5081.
## Summary
Small follow-up to #4888 to add a dedicated `ResolvedRead` case for
unbound locals, mostly for clarity and documentation purposes (no
behavior changes).
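For reference, a sketch of the kind of "unbound local" read this case models:
```python
def f():
    print(x)  # `x` is local to `f` (it's assigned below), but unbound here
    x = 1

f()  # raises UnboundLocalError
```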
## Test Plan
`cargo test`
## Summary
Our current mechanism for handling deletions (e.g., `del x`) is to
remove the symbol from the scope's `bindings` table. This "does the
right thing", in that if we then reference a deleted symbol, we're able
to determine that it's unbound -- but it causes a variety of problems,
mostly in that it makes certain bindings and references unreachable
after-the-fact.
Consider:
```python
x = 1
print(x)
del x
```
If we analyze this code _after_ running the semantic model over the AST,
we'll have no way of knowing that `x` was ever introduced in the scope,
much less that it was bound to a value, read, and then deleted --
because we effectively erased `x` from the model entirely when we hit
the deletion.
In practice, this will make it impossible for us to support local symbol
renames. It also means that certain rules that we want to move out of
the model-building phase and into the "check dead scopes" phase wouldn't
work today, since we'll have lost important information about the source
code.
This PR introduces two new `BindingKind` variants to model deletions:
- `BindingKind::Deletion`, which represents `x = 1; del x`.
- `BindingKind::UnboundException`, which represents:
```python
try:
    1 / 0
except Exception as e:
    pass
```
In the latter case, `e` gets unbound after the exception handler
(assuming it's triggered), so we want to handle it similarly to a
deletion.
The main challenge here is auditing all of our existing `Binding` and
`Scope` usages to understand whether they need to accommodate deletions
or otherwise behave differently. If you look one commit back on this
branch, you'll see that the code is littered with `NOTE(charlie)`
comments that describe the reasoning behind changing (or not) each of
those call sites. I've also augmented our test suite in preparation for
this change over a few prior PRs.
### Alternatives
As an alternative, I considered introducing a flag to `BindingFlags`,
like `BindingFlags::UNBOUND`, and setting that at the appropriate time.
This turned out to be a much more difficult change, because we tend to
match on `BindingKind` all over the place (e.g., we have a bunch of code
blocks that only run when a `BindingKind` is
`BindingKind::Importation`). As a result, introducing these new
`BindingKind` variants requires only a few changes at the client sites.
Adding a flag would've required a much wider-reaching change.
## Summary
This behavior dates back to a Pyflakes commit (5fc37cbd), which was used
to allow this test to pass:
```py
from __future__ import annotations
T: object
def f(t: T): pass
def g(t: 'T'): pass
```
But, I think this is an error. Mypy and Pyright don't accept it -- you
can only use variables as type annotations if they're type aliases
(i.e., annotated with `TypeAlias`), in which case, there has to be an
assignment on the right-hand side (see: [PEP
613](https://peps.python.org/pep-0613/)).
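For contrast, the sanctioned PEP 613 form, which type checkers do accept (requires Python 3.10+ for `typing.TypeAlias`):
```python
from typing import TypeAlias

T: TypeAlias = object  # an explicit alias, with a right-hand side

def f(t: T): ...
def g(t: "T"): ...
```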
## Summary
This PR corrects a misunderstanding I had related to Python's handling
of bound exceptions.
Previously, I thought this code ran without error:
```py
def f():
    x = 1
    try:
        1 / 0
    except Exception as x:
        pass
    print(x)
```
My understanding was that `except Exception as x` bound `x` within the
`except` block, but then restored the `x = 1` binding after exiting the
block.
In practice, however, this throws an `UnboundLocalError`, because
`x` becomes "unbound" after exiting the exception handler. It's similar
to a `del` statement in this way.
This PR removes our behavior to "restore" the previous binding. This
could lead to faulty analysis in conditional blocks due to our lack of
control flow analysis, but those same problems already exist for `del`
statements.
## Summary
In a dataclass:
```py
from dataclasses import dataclass
@dataclass
class X:
    class_var = {}
    x: int
```
`class_var` isn't actually a dataclass attribute, since it's
unannotated. This PR removes such attributes from RUF008
(`mutable-dataclass-default`), but it does enforce them in RUF012
(`mutable-class-default`), since those should be annotated with
`ClassVar` like any other mutable class attribute.
Closes #5043.
## Summary
This adds `json-lines` (https://jsonlines.org/ or http://ndjson.org/) as
an output format.
I'm sure you already know, but
* JSONL is more greppable (each record is a single line) than pretty-printed JSON
* JSONL is faster to ingest piecewise (and/or in parallel) than JSON
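A sketch of the piecewise-ingestion upside (the filename is hypothetical):
```python
import json

# Hypothetical file produced via `--format json-lines`.
with open("diagnostics.jsonl") as f:
    for line in f:
        record = json.loads(line)  # each line is a complete JSON record
        print(record)
```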
## Test Plan
Snapshot test in the new module :)
I've written down my condensed learnings from working on the formatter
so that others can have an easier start working on it.
This is a pure docs change.
## Summary
Per the [API
reference](https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.fail),
`reason` was added in version 7, and is equivalent to `msg` (but
preferred going forward).
I also grepped for `msg` usages in `flake8_pytest_style`, but found no
others (apart from those that reference `unittest` APIs).
Closes #3387.
## Summary
This PR adds an autofixer for rule ISC001 in cases where both string
literals are of the same kind and use the same quotes (double / single).
Fixes #4829
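A sketch of the cases in scope:
```python
# Flagged by ISC001: implicitly concatenated string literals on one line.
x = "Hello, " "world"

# After the autofix: a single literal.
x = "Hello, world"

# Fixable only if kinds and quotes match; this mixed-quote pair is not.
y = "Hello, " 'world'
```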
## Test Plan
I added test cases with different combinations of string literals.
## Summary
This PR (1) avoids flagging `TypedDict` and `NamedTuple` conversions
when attributes are dunder methods, like `__dict__`, and (2) avoids
flagging the `A003` shadowed-attribute rule for `TypedDict` classes at
all, where it doesn't really apply (since those attributes are only
accessed via subscripting anyway).
Closes #5027.
## Summary
A few of our rules look at the parentheses that follow a class
definition (e.g., `class Foo(object):`) and attempt to modify those
parentheses. Neither of those rules were behaving properly in the
presence of decorators, which were recently added to the statement
range.
## Test Plan
`cargo test` with a variety of new fixture tests.
## Summary
Add a rule to disallow implicit optional, with autofix.
Currently, I've added it under the `RUF` category.
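The basic case the rule targets looks like this (the fix is sketched with `Optional`; depending on settings it could use PEP 604's `int | None` instead):
```python
# Flagged: the `None` default makes the parameter implicitly optional.
def foo(arg: int = None):
    pass

# After the autofix:
from typing import Optional

def foo(arg: Optional[int] = None):
    pass
```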
### Limitation
Type aliases could result in false positives:
```python
from typing import Optional
StrOptional = Optional[str]
def foo(arg: StrOptional = None):
    pass
```
## Test Plan
`cargo test`
resolves: #1983
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Improves the `ruff_parse_simple` fuzz harness by adding checks for
parsed locations to ensure they all lie on UTF-8 character boundaries.
This will allow for faster identification of issues like #5004.
This also adds additional details for Apple M1 users and clarifies the
importance of using `init-fuzzer.sh` (thanks for the feedback,
@jasikpark 🙂).
## Summary
Previously, the rule for SIM117 explicitly ignored `async with`
statements, as it would incorrectly suggest merging `async with` and
regular `with` statements, as reported in issue #1902.
This partially reverts the fix for that (commit
396be5edea) by enabling the rules for
`async with` statements again, but with a check ensuring that the
statements are both of the same kind, i.e. both `async with` or both
(just) `with` statements.
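A sketch of the new behavior (`contextlib.nullcontext` is used only to make the snippet self-contained; the async case needs Python 3.10+):
```python
import contextlib

a = contextlib.nullcontext()
b = contextlib.nullcontext()

# Flagged again after this change: both statements are plain `with`.
with a:
    with b:
        pass

# Still not flagged: an `async with` nested in a `with` can't be merged.
async def mixed():
    with a:
        async with b:
            pass
```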
Closes #3025.
## Test Plan
Updated an existing test and added a new test case from #3025.
## Summary
Add support for applying auto-fixes in Jupyter Notebook.
### Solution
Cell offsets are the boundaries for each cell in the concatenated source
code, represented using `TextSize`. They include the start and end
offsets as well, thus creating a range for each cell. These offsets are
updated using the `SourceMap` markers.
### SourceMap
`SourceMap` contains markers, constructed from each edit, which track
the original source-code positions against the transformed positions.
The following drawing might make it clear:
[diagram: dotted marker lines mapping original source positions to transformed positions]
The center column, where the dotted lines are present, represents the
markers included in the `SourceMap`. The `Notebook` looks at these
markers and updates the cell offsets after each linter loop. If you look
closely, the destination takes into account all of the markers before it.
The index is constructed only when required, as it's only used to render
the diagnostics. So, a `OnceCell` is used for this purpose. The cell
offsets, cell content, and the index will be updated after each iteration
of linting, in the mentioned order. The order is important here, as the
content is updated as per the new offsets, and the index is updated as
per the new content.
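A rough sketch of how such markers can shift a downstream offset (the names and marker shape are hypothetical, not Ruff's actual `SourceMap` API):
```python
def remap(offset: int, markers: list[tuple[int, int]]) -> int:
    """Map a pre-edit offset to its post-edit position.

    `markers` are (source, dest) pairs sorted by source offset; the last
    marker at or before `offset` tells us how far the text has shifted.
    """
    shift = 0
    for source, dest in markers:
        if source > offset:
            break
        shift = dest - source
    return offset + shift

# After the edit at offset 10 the text has shifted by 5 characters;
# after the edit at offset 40, the cumulative shift is 8.
markers = [(10, 15), (40, 48)]
print(remap(30, markers))  # 35: shifted by the first edit only
print(remap(50, markers))  # 58: shifted by both edits
```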
## Limitations
### 1
Styling rules such as the ones in `pycodestyle` will not be applicable
everywhere in Jupyter notebook, especially at the cell boundaries. Let's
take an example where a rule suggests having 2 blank lines before a
function, and the cells contain the following code:
```python
import something
# ---
def first():
    pass
def second():
    pass
```
(Again, the comment is only to visualize cell boundaries.)
In the concatenated source code, the 2 blank lines will be added, but
they shouldn't actually be added when we look at it in terms of the
Jupyter notebook. It's as if the function `first` is at the start of a file.
`nbqa` solves this by recording newlines before and after running
`autopep8`, then running the tool and restoring the newlines at the end
(refer https://github.com/nbQA-dev/nbQA/pull/807).
## Test Plan
Three commands were run, in order, with common flags (`--select=ALL
--no-cache --isolated`) to isolate the stage in which any problem occurs:
1. Only diagnostics
2. Fix with diff (`--fix --diff`)
3. Fix (`--fix`)
### https://github.com/facebookresearch/segment-anything
```
-------------------------------------------------------------------------------
Jupyter Notebooks 3 0 0 0 0
|- Markdown 3 98 0 94 4
|- Python 3 513 468 4 41
(Total) 611 468 98 45
-------------------------------------------------------------------------------
```
```console
$ cargo run --all-features --bin ruff -- check --no-cache --isolated --select=ALL /path/to/segment-anything/**/*.ipynb --fix
...
Found 180 errors (89 fixed, 91 remaining).
```
### https://github.com/openai/openai-cookbook
```
-------------------------------------------------------------------------------
Jupyter Notebooks 65 0 0 0 0
|- Markdown 64 3475 12 2507 956
|- Python 65 9700 7362 1101 1237
(Total) 13175 7374 3608 2193
===============================================================================
```
```console
$ cargo run --all-features --bin ruff -- check --no-cache --isolated --select=ALL /path/to/openai-cookbook/**/*.ipynb --fix
error: Failed to parse /path/to/openai-cookbook/examples/vector_databases/Using_vector_databases_for_embeddings_search.ipynb:cell 4:29:18: unexpected token '-'
...
Found 4227 errors (2165 fixed, 2062 remaining).
```
### https://github.com/tensorflow/docs
```
-------------------------------------------------------------------------------
Jupyter Notebooks 150 0 0 0 0
|- Markdown 1 55 0 46 9
|- Python 1 402 289 60 53
(Total) 457 289 106 62
-------------------------------------------------------------------------------
```
```console
$ cargo run --all-features --bin ruff -- check --no-cache --isolated --select=ALL /path/to/tensorflow-docs/**/*.ipynb --fix
error: Failed to parse /path/to/tensorflow-docs/site/en/guide/extension_type.ipynb:cell 80:1:1: unexpected token Indent
error: Failed to parse /path/to/tensorflow-docs/site/en/r1/tutorials/eager/custom_layers.ipynb:cell 20:1:1: unexpected token Indent
error: Failed to parse /path/to/tensorflow-docs/site/en/guide/data.ipynb:cell 175:5:14: unindent does not match any outer indentation level
error: Failed to parse /path/to/tensorflow-docs/site/en/r1/tutorials/representation/unicode.ipynb:cell 30:1:1: unexpected token Indent
...
Found 12726 errors (5140 fixed, 7586 remaining).
```
### https://github.com/tensorflow/models
```
-------------------------------------------------------------------------------
Jupyter Notebooks 46 0 0 0 0
|- Markdown 1 11 0 6 5
|- Python 1 328 249 19 60
(Total) 339 249 25 65
-------------------------------------------------------------------------------
```
```console
$ cargo run --all-features --bin ruff -- check --no-cache --isolated --select=ALL /path/to/tensorflow-models/**/*.ipynb --fix
...
Found 4856 errors (2690 fixed, 2166 remaining).
```
resolves: #1218
fixes: #4556
This implements formatting ExprTuple, including magic trailing comma. I
intentionally didn't change the settings mechanism but just added a
dummy global const flag.
Besides the snapshots, I added custom breaking/joining tests and a
deeply nested test case. The diffs look better than previously; proper
black compatibility depends on parentheses handling.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This implements PYI044. The rule flags `from __future__ import
annotations` in stub files, as it has no effect there: type checkers
automatically treat stubs as having those semantics.
Updates https://github.com/astral-sh/ruff/issues/848
## Test Plan
Added a test case and snapshots.
## Summary
Ignore `pyproject.toml` files when adding noqa directives via `--add-noqa`.
## Test Plan
`cargo run --bin ruff -- check --add-noqa .`
fixes: #5012
## Summary
Add copyright notice detection to enforce the presence of copyright
headers in Python files.
Configurable settings include: the relevant regular expression, the
author name, and the minimum file size, similar to
[flake8-copyright](https://github.com/savoirfairelinux/flake8-copyright).
Closes https://github.com/charliermarsh/ruff/issues/3579
---------
Signed-off-by: ryan <ryang@waabi.ai>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The `RET504` rule, which looks for unnecessary assignments before return
statements, is a frequent source of issues (#4173, #4236, #4242, #1606,
#2950). Over time, we've tried to refine the logic to handle more cases.
For example, we now avoid analyzing any functions that contain any
function calls or attribute assignments, since those operations can
contain side effects (and so we mark them as a "read" on all variables
in the function -- we could do a better job with code graph analysis to
handle this limitation, but that'd be a more involved change). We also
avoid flagging any variables that are the target of multiple
assignments. Ultimately, though, I'm not happy with the implementation
-- we just can't do sufficiently reliable analysis of arbitrary code
flow given the limited logic herein, and the existing logic is very hard
to reason about and maintain.
This PR refocuses the rule to only catch cases of the form:
```py
def f():
    x = 1
    return x
```
That is, we now only flag returns that are immediately preceded by an
assignment to the returned variable. While this is more limiting, in
some ways, it lets us flag more cases vis-a-vis the previous
implementation, since we no longer "fully eject" when functions contain
function calls and other effect-ful operations.
Closes #4173.
Closes #4236.
Closes #4242.
## Summary
We use `.trim()` and friends in a bunch of places, to strip whitespace
from source code. However, not all Unicode whitespace characters are
considered "whitespace" in Python, which only supports the standard
space, tab, and form-feed characters.
This PR audits our usages of `.trim()`, `.trim_start()`, `.trim_end()`,
and `char::is_whitespace`, and replaces them as appropriate with new
`.trim_whitespace()` analogues, powered by a `PythonWhitespace` trait.
In general, the only place that should continue to use `.trim()` is
content within docstrings, which don't need to adhere to Python's
semantic definitions of whitespace.
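For example, U+00A0 (no-break space) is Unicode whitespace but not Python whitespace, so a Unicode-aware trim mishandles source code containing it:
```python
line = "\u00a0x = 1"
print(repr(line.strip()))  # 'x = 1' -- Unicode-aware trimming removes U+00A0

try:
    compile(line, "<example>", "exec")
except SyntaxError as err:
    print(err)  # ...but Python itself rejects U+00A0 in source code
```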
Closes #4991.
## Summary
`ruff_newlines` becomes `ruff_python_whitespace`, and includes the
existing "universal newline" handlers alongside the Python
whitespace-specific utilities.
* Implement StmtPass
This implements StmtPass as `pass`.
The snapshot diff is small because `pass` mainly occurs in function (#4951) and if/for bodies.
* Implement StmtReturn
This implements StmtReturn as `return` or `return {value}`.
The snapshot diff is small because return occurs in functions (#4951).
* A basic StmtAssign formatter and better dummies for expressions
The goal of this PR was formatting StmtAssign, since many nodes in the black tests (and in Python in general) come after an assignment. This caused unstable formatting: the spacing around the power operator depends on the types of the two involved expressions, but each expression was formatted as a dummy string and re-parsed as an ExprName, so in the second round the different rules of ExprName were applied, causing unstable formatting.
This PR does not necessarily bring us closer to black's style, but it unlocks a good porting of black's test suite and is a basis for implementing the Expr nodes.
* fmt
* Review
* Use phf for confusables to reduce llvm lines
## Summary
This replaces the FxHashMap for the confusables with a perfect hash map from the [phf crate](https://github.com/rust-phf/rust-phf) to reduce the generated llvm instructions.
A perfect hash function is one that doesn't have any collisions. We can build one because we know all keys at compile time. This improves hashmap efficiency, even though that's likely not noticeable in our case (unless someone has a large non-English codebase to test on).
The original hashmap contained a lot of duplicates, which I had to remove when `phf_map` complained; I did so by sorting the keys.
The important part is that it reduces the llvm instructions generated (#3808, `RUSTFLAGS="-Csymbol-mangling-version=v0" cargo llvm-lines -p ruff --lib | head -20`):
```
Lines Copies Function name
----- ------ -------------
1740502 38973 (TOTAL)
27423 (1.6%, 1.6%) 1 (0.0%, 0.0%) ruff[cef4c65d96248843]::rules::ruff::rules::confusables::CONFUSABLES::{closure#0}
10193 (0.6%, 2.2%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::codes::RuleCodePrefix>::iter
8107 (0.5%, 2.6%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::codes::Rule>::noqa_code
7345 (0.4%, 3.0%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::checkers::ast::Checker as ruff_python_ast[3778b140caf21545]::visitor::Visitor>::visit_stmt
6412 (0.4%, 3.4%) 1 (0.0%, 0.0%) <<ruff[cef4c65d96248843]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::spanned::SpannedDeserializer<toml_edit[7e3a6c5e67260672]::de::value::ValueDeserializer>>
6412 (0.4%, 3.8%) 1 (0.0%, 0.0%) <<ruff[cef4c65d96248843]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::table::TableMapAccess>
6409 (0.4%, 4.2%) 1 (0.0%, 0.0%) <<ruff[cef4c65d96248843]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::datetime::DatetimeDeserializer>
5696 (0.3%, 4.5%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::checkers::ast::Checker as ruff_python_ast[3778b140caf21545]::visitor::Visitor>::visit_expr
4448 (0.3%, 4.7%) 1 (0.0%, 0.0%) ruff[cef4c65d96248843]::flake8_to_ruff::converter::convert
3702 (0.2%, 4.9%) 1 (0.0%, 0.0%) <&ruff[cef4c65d96248843]::registry::Linter as core[da82827a87f140f9]::iter::traits::collect::IntoIterator>::into_iter
3349 (0.2%, 5.1%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::registry::Linter>::code_for_rule
3132 (0.2%, 5.3%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::codes::Rule as core[da82827a87f140f9]::fmt::Debug>::fmt
3130 (0.2%, 5.5%) 1 (0.0%, 0.0%) <&str as core[da82827a87f140f9]::convert::From<&ruff[cef4c65d96248843]::codes::Rule>>::from
3130 (0.2%, 5.7%) 1 (0.0%, 0.0%) <&str as core[da82827a87f140f9]::convert::From<ruff[cef4c65d96248843]::codes::Rule>>::from
3130 (0.2%, 5.9%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::codes::Rule as core[da82827a87f140f9]::convert::AsRef<str>>::as_ref
3128 (0.2%, 6.0%) 1 (0.0%, 0.0%) <ruff[cef4c65d96248843]::codes::RuleIter>::get
2669 (0.2%, 6.2%) 1 (0.0%, 0.0%) <<ruff[cef4c65d96248843]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_seq::<toml_edit[7e3a6c5e67260672]::de::array::ArraySeqAccess>
```
After:
```
Lines Copies Function name
----- ------ -------------
1710487 38900 (TOTAL)
10193 (0.6%, 0.6%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::codes::RuleCodePrefix>::iter
8107 (0.5%, 1.1%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::codes::Rule>::noqa_code
7345 (0.4%, 1.5%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::checkers::ast::Checker as ruff_python_ast[5588cd60041c8605]::visitor::Visitor>::visit_stmt
6412 (0.4%, 1.9%) 1 (0.0%, 0.0%) <<ruff[52408f46d2058296]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::spanned::SpannedDeserializer<toml_edit[7e3a6c5e67260672]::de::value::ValueDeserializer>>
6412 (0.4%, 2.2%) 1 (0.0%, 0.0%) <<ruff[52408f46d2058296]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::table::TableMapAccess>
6409 (0.4%, 2.6%) 1 (0.0%, 0.0%) <<ruff[52408f46d2058296]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_map::<toml_edit[7e3a6c5e67260672]::de::datetime::DatetimeDeserializer>
5696 (0.3%, 3.0%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::checkers::ast::Checker as ruff_python_ast[5588cd60041c8605]::visitor::Visitor>::visit_expr
4448 (0.3%, 3.2%) 1 (0.0%, 0.0%) ruff[52408f46d2058296]::flake8_to_ruff::converter::convert
3702 (0.2%, 3.4%) 1 (0.0%, 0.0%) <&ruff[52408f46d2058296]::registry::Linter as core[da82827a87f140f9]::iter::traits::collect::IntoIterator>::into_iter
3349 (0.2%, 3.6%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::registry::Linter>::code_for_rule
3132 (0.2%, 3.8%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::codes::Rule as core[da82827a87f140f9]::fmt::Debug>::fmt
3130 (0.2%, 4.0%) 1 (0.0%, 0.0%) <&str as core[da82827a87f140f9]::convert::From<&ruff[52408f46d2058296]::codes::Rule>>::from
3130 (0.2%, 4.2%) 1 (0.0%, 0.0%) <&str as core[da82827a87f140f9]::convert::From<ruff[52408f46d2058296]::codes::Rule>>::from
3130 (0.2%, 4.4%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::codes::Rule as core[da82827a87f140f9]::convert::AsRef<str>>::as_ref
3128 (0.2%, 4.5%) 1 (0.0%, 0.0%) <ruff[52408f46d2058296]::codes::RuleIter>::get
2669 (0.2%, 4.7%) 1 (0.0%, 0.0%) <<ruff[52408f46d2058296]::settings::options::Options as serde[d89b1b632568f5a3]::de::Deserialize>::deserialize::__Visitor as serde[d89b1b632568f5a3]::de::Visitor>::visit_seq::<toml_edit[7e3a6c5e67260672]::de::array::ArraySeqAccess>
2659 (0.2%, 4.9%) 1 (0.0%, 0.0%) <&ruff[52408f46d2058296]::codes::Pylint as core[da82827a87f140f9]::iter::traits::collect::IntoIterator>::into_iter
```
I'd assume this has a positive effect on both compile time and runtime, but I don't know the actual effect on compile times and can't really measure it.
## Test Plan
Check CI for any performance regressions.
This should fix #3808 if we merge it.
* clippy
* Update update_ambiguous_characters.py
## Summary
This PR replaces the `verbatim_text` builder with a `not_yet_implemented` builder that emits `NOT_YET_IMPLEMENTED_<NodeKind>` for not yet implemented nodes.
The motivation for this change is that partially formatting compound statements can result in incorrectly indented code, which is a syntax error:
```python
def func_no_args():
    a; b; c
    if True: raise RuntimeError
    if False: ...
    for i in range(10):
        print(i)
        continue
```
gets reformatted to
```python
def func_no_args():
    a; b; c
    if True: raise RuntimeError
    if False: ...
    for i in range(10):
    print(i)
    continue
```
because our formatter does not yet support `for` statements and just inserts the text from the source.
## Downsides
Using an identifier will not work in all situations. For example, an identifier is invalid in an `Arguments` position. That's why I kept `verbatim_text` around and e.g. use it in the `Arguments` formatting logic, where incorrect indentation is impossible (to my knowledge). Meaning, we can opt in to `verbatim_text` when we want to iterate quickly on nodes for which we don't want to provide a full implementation yet and where using an identifier would be invalid.
## Upsides
Running this on main discovered stability issues with the newline handling that were previously "hidden" because of the verbatim formatting. I guess that's an upside :)
## Test Plan
None?
## Summary
This PR fixes the removal of empty lines between a leading comment and the previous statement:
```python
a = 20

# leading comment
b = 10
```
Ruff removed the empty line between `a` and `b` because:
* The leading comments formatting does not preserve leading newlines (to avoid adding new lines at the top of a body)
* The `JoinNodesBuilder` counted the lines before `b`, which is 1 -> Doesn't insert a new line
This is fixed by changing the `JoinNodesBuilder` to instead count the lines *after* the last node. This correctly gives 1, and the leading-comment formatting then inserts the empty lines between the comment and any preceding leading comment or node.
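For intuition, counting "lines after" can be sketched like this (simplified; the real helper also has to skip over comments and handle `\r\n`):
```rust
/// Count the newlines between `offset` and the next non-whitespace
/// character that follows it.
fn lines_after(source: &str, offset: usize) -> usize {
    source[offset..]
        .chars()
        .take_while(|c| c.is_whitespace())
        .filter(|&c| c == '\n')
        .count()
}
```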
## Test Plan
I added a new test for empty lines.
According to https://docs.python.org/3/library/ast.html#ast-helpers, we expect `type_ignores` to always be empty, so this adds a debug assertion.
Test plan: I confirmed that the assertion holds for the file below and for all the Black tests, which include a number of `type: ignore` comments.
```python
# type: ignore
if 1:
    print("1")  # type: ignore
    # elsebranch
    # type: ignore
else:  # type: ignore
    print("2")  # type: ignore
while 1:
    print()
    # type: ignore
```
* Implement module formatting using JoinNodesBuilder
This uses JoinNodesBuilder to implement module formatting for #4800
See the snapshots for the changed behaviour. See one PR up for a CLI that I used to verify the trailing newline behaviour
* Add a formatter CLI for debugging
This adds a `ruff_python_formatter` CLI modelled after `rustfmt` that I use for debugging
* clippy
* Add print IR and print comments options
Tested with `cargo run --bin ruff_python_formatter -- --print-ir --print-comments scratch.py`
* Abstract codegen with stylist into a CodegenStylist trait
Replace all duplicate invocations of
```rust
let mut state = CodegenState {
    default_newline: &stylist.line_ending(),
    default_indent: stylist.indentation(),
    ..CodegenState::default()
};
tree.codegen(&mut state);
state.to_string()
```
with
```rust
tree.codegen_stylist(&stylist);
```
No functional changes.
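A sketch of the trait shape (assuming the `Codegen` trait that provides the `codegen` method used above; the concrete `CodegenState` and `Stylist` types come from the existing codegen module, so this is illustrative rather than the exact definition):
```rust
trait CodegenStylist {
    fn codegen_stylist(&self, stylist: &Stylist) -> String;
}

/// Blanket implementation: anything that can `codegen` into a state can
/// also codegen directly from a `Stylist`.
impl<T: Codegen> CodegenStylist for T {
    fn codegen_stylist(&self, stylist: &Stylist) -> String {
        let mut state = CodegenState {
            default_newline: &stylist.line_ending(),
            default_indent: stylist.indentation(),
            ..CodegenState::default()
        };
        self.codegen(&mut state);
        state.to_string()
    }
}
```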
## Summary
And more custom logic around comments in bodies... uff.
Let's say we have the following code
```python
if x == y:
    pass  # trailing comment of pass
else:  # trailing comment of `else`
    print("I have no comments")
```
Right now, the formatter attaches the trailing comment of the `else` as a trailing comment of `pass`, because it doesn't "see" that there's an `else` keyword in between (the `else` body is just a `Vec` and not a node).
This PR adds custom logic that attaches the trailing comments after the `else` as dangling comments to the `if` statement. The if statement must then split the dangling comments by `comments.text_position()`:
* All comments up to the first end-of-line comment are leading comments of the `else` keyword.
* All end-of-line comments coming after are `trailing` comments for the `else` keyword.
## Test Plan
I added new unit tests.
## Summary
This PR adds custom logic to handle end-of-line comments of the last statement in a body.
For example:
```python
while True:
    if something.changed:
        do.stuff()  # trailing comment
b
```
The `# trailing comment` is a trailing comment of the `do.stuff()` expression statement. We incorrectly attached the comment as a trailing comment of the enclosing `while` statement, because the comment falls between the end of the `while` statement (which ends right after `do.stuff()`) and the `b` statement.
This PR fixes the placement to correctly attach these comments to the last statement in a body (recursively).
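The recursion can be sketched with a toy AST (stand-in types, not ruff's actual node definitions):
```rust
/// Toy AST: a statement is either simple or compound with a body.
enum Stmt {
    Simple,
    Compound(Vec<Stmt>),
}

/// Descend into the last child of each compound statement to find the
/// statement that actually ends the enclosing block.
fn last_statement(stmt: &Stmt) -> &Stmt {
    match stmt {
        Stmt::Compound(body) if !body.is_empty() => last_statement(body.last().unwrap()),
        _ => stmt,
    }
}
```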
## Test Plan
I reviewed the snapshots and they now look correct. This may appear odd because a lot of comments have now disappeared. This is the expected result, because we use `verbatim` formatting for the block statements (like `while`), which means that it only formats the inner content of the block, but not any trailing comments. The comments were visible before because they were associated with the block statement (e.g. `while`).
* Document codes.rs
* Refactor codes.rs before merging
Helper script:
```python
# %%
from pathlib import Path
codes = Path("crates/ruff/src/codes.rs").read_text().splitlines()
rules = Path("a.txt").read_text().strip().splitlines()
rule_map = {i.split("::")[-1]: i for i in rules}
# %%
codes_new = []
for line in codes:
    if ", Rule::" in line:
        left, right = line.split(", Rule::")
        right = right[:-2]
        line = left + ", " + rule_map[right] + "),"
    codes_new.append(line)
# %%
Path("crates/ruff/src/codes.rs").write_text("\n".join(codes_new))
```
Co-authored-by: Jonathan Plasse <13716151+JonathanPlasse@users.noreply.github.com>
* Add Format for Stmt
* Implement basic module formatting
This implements formatting each statement in a module with a hard line break in between, so that we can start formatting statements.
Basic testing is done by the snapshots
* Use dummy verbatim formatter for all nodes
* Use new formatter infrastructure in CLI and test
* Expose the new formatter in the CLI
* Merge import blocks
This adds a new rule `InvalidPyprojectToml` that lints pyproject.toml by checking whether https://github.com/PyO3/pyproject-toml-rs can parse it. This means the linting is currently very basic; e.g., we don't check whether the name is actually a valid Python project name or appropriately normalized. It does catch errors, e.g., invalid dependency requirements or problems with the license specification. It is open to be extended in the future (validate name, SPDX expressions, classifiers, ...), either in ruff or in pyproject-toml-rs.
Test plan:
```
scripts/ecosystem_all_check.sh check --select RUF200
```
This led to a bunch of
```
RUF200 Failed to parse pyproject.toml: missing field `name`
```
(e.g. https://github.com/amitsk/fastapi-todos/blob/main/pyproject.toml) which is indeed invalid (https://packaging.python.org/en/latest/specifications/declaring-project-metadata/#specification).
Filtering those out, the following other problems were found by `cd target/ecosystem_all_results/ && rg RUF200`:
```
UCL-ARC:rred-reports.stdout.txt
1:pyproject.toml:27:16: RUF200 Failed to parse pyproject.toml: Version specifier `>='3.9'` doesn't match PEP 440 rules
EndlessTrax:python-start-project.stdout.txt
1:pyproject.toml:14:16: RUF200 Failed to parse pyproject.toml: Expected package name starting with an alphanumeric character, found '#'
redjax:gardening-api.stdout.txt
1:pyproject.toml:7:11: RUF200 Failed to parse pyproject.toml: Version `` doesn't match PEP 440 rules
ajslater:codex.stdout.txt
2: 3:17 RUF200 Failed to parse pyproject.toml: invalid type: sequence, expected a string
LDmitriy7:404_AvatarsBot.stdout.txt
1:pyproject.toml:3:11: RUF200 Failed to parse pyproject.toml: Version `` doesn't match PEP 440 rules
ajslater:comicbox.stdout.txt
1:pyproject.toml:3:17: RUF200 Failed to parse pyproject.toml: invalid type: sequence, expected a string
manueldevillena:forecast-earnings.stdout.txt
1:pyproject.toml:24:12: RUF200 Failed to parse pyproject.toml: Expected one of `@`, `(`, `<`, `=`, `>`, `~`, `!`, `;`, found `^`
redjax:ohio_utility_scraper.stdout.txt
1:pyproject.toml:11:11: RUF200 Failed to parse pyproject.toml: Version `` doesn't match PEP 440 rules
agronholm:typeguard.stdout.txt
1:pyproject.toml:40:8: RUF200 Failed to parse pyproject.toml: Expected a valid marker name, found 'python_implementation'
cyuss:decathlon-turnover.stdout.txt
1:pyproject.toml:7:12: RUF200 Failed to parse pyproject.toml: invalid type: string "Youcef", expected a table with 'name' and 'email' keys
ajslater:boilerplate.stdout.txt
1:pyproject.toml:3:17: RUF200 Failed to parse pyproject.toml: invalid type: sequence, expected a string
kaparoo:lightning-project-template.stdout.txt
1:pyproject.toml:56:16: RUF200 Failed to parse pyproject.toml: You can't mix a >= operator with a local version (`+cu117`)
dijital20:pytexas2023-decorators.stdout.txt
1:pyproject.toml:5:11: RUF200 Failed to parse pyproject.toml: Version `` doesn't match PEP 440 rules
pfouque:django-anymail-history.stdout.txt
1:pyproject.toml:137:12: RUF200 Failed to parse pyproject.toml: Version specifier `> = 1.2.0` doesn't match PEP 440 rules
pfouque:django-fakemessages.stdout.txt
1:pyproject.toml:130:12: RUF200 Failed to parse pyproject.toml: Version specifier `> = 1.2.0` doesn't match PEP 440 rules
pypa:build.stdout.txt
1:tests/packages/test-invalid-requirements/pyproject.toml:2:12: RUF200 Failed to parse pyproject.toml: Expected one of `@`, `(`, `<`, `=`, `>`, `~`, `!`, `;`, found `i`
4:tests/packages/test-no-requires/pyproject.toml:1:1: RUF200 Failed to parse pyproject.toml: missing field `requires`
UnoYakshi:FRAAND.stdout.txt
2: 3:11 RUF200 Failed to parse pyproject.toml: Version `` doesn't match PEP 440 rules
DHolmanCoding:python-template.stdout.txt
1:pyproject.toml:22:1: RUF200 Failed to parse pyproject.toml: missing field `requires`
```
Overall, this emitted errors in 43 out of 3408 projects (`rg -c RUF200 target/ecosystem_all_results/ | wc -l`).
Co-authored-by: Micha Reiser <micha@reiser.io>
Add Checker::any_enabled shortcut
## Summary
Akin to #4625, this is a refactoring that shortens a bunch of code by replacing `checker.settings.rules.any_enabled` with `checker.any_enabled`.
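The shortcut itself is tiny; roughly this shape (with ruff's `Checker` and `Rule` types, and the `RuleTable::any_enabled` signature assumed from the call sites):
```rust
impl Checker<'_> {
    /// Returns `true` if any of the given rules are enabled; simply
    /// forwards to the rule table in the settings, so call sites can
    /// write `checker.any_enabled(&[...])`.
    pub fn any_enabled(&self, rules: &[Rule]) -> bool {
        self.settings.rules.any_enabled(rules)
    }
}
```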
## Test Plan
`cargo clippy`
* Refactor and fix task trigger for dependent jobs in other repos
I have confirmed (https://github.com/konstin/ruff-pre-commit/actions/runs/5056928280/jobs/9075029868) that this does dispatch the workflow when running with act (`owner: 'konstin'`, `needs` commented out, and a personal access token). I can't properly test the actual release workflow, and I'm unsure how to best handle the next release after this is merged (should we do a beta release, or will this break everything that assumes we only do stable releases?)
The command for act is
```
act -j update-dependents -s RUFF_PRE_COMMIT_PAT=<...>
```
* delete old file
* Update maturin to 1.0
A 1.0 release for maturin 🎉
* Create dummy format CLI
* Hide format from clap, too
Missed that this is a separate option from `#[doc(hidden)]`
* Remove cargo feature and replace with warning
* No-alloc files parameter matching
* beta warning: warn -> warn_user_once
* Rephrase warning
I noticed in the byte-offsets refactor that the `JsonEmitter` uses one-indexed column numbers for the diagnostic start and end locations, but not for `edits`.
This PR changes the `JsonEmitter` to emit one-indexed column numbers for edits too, as we already do for `Message::location` and `Message::end_location`.
## Open questions
~We'll need to change the LSP to subtract 1 from the columns in `_parse_fix`~
6e44fadf8a/ruff_lsp/server.py (L129-L150)
~@charliermarsh is there a way to get the ruff version in that method? If not, then I recommend adding a `version` that we increment whenever we make incompatible changes to the serialized message. We can then use it in the LSP to correctly compute the column offset.~
I'll use the presence of the `Fix::applicability` field to detect whether the Ruff version uses one- or zero-based column indices.
See https://github.com/charliermarsh/ruff-lsp/pull/103
* Add basic jupyter notebook support behind a feature flag
* Address review comments
* Rename in separate commit to make both git and clippy happy
* cfg(feature = "jupyter_notebook") another test
* Address more review comments
* Address more review comments
* and clippy and windows
* More review comment
## Summary
In the rare event that a docstring contains an implicit string concatenation, we currently have the potential to panic, because we assume that if a string starts with triple quotes, it _ends_ with triple quotes. But with implicit concatenation, that's not the case: a single `Expr` could start and end with different quote styles, because it can contain multiple string tokens.
Supporting these "properly" is pretty hard. In some cases it's hard to even know what the "right" behavior is. So for now, I'm just detecting and warning, which is better than a panic.
Closes #3543.
Closes #3585.
* Infer target-version from project metadata
* Fix requires-python with ">=3.8.16"
* Load requires-python at runtime
* Use upstream VersionSpecifiers
* Add debug information when parsing ruff.toml
* Display debug only if target_version is not set
* Bump pep440-rs to add impl Error for Pep440Error
## Summary
We need to support CR line endings (as opposed to LF and CRLF line endings, which are already supported). They're rare, but they do appear in Python code, and we tend to panic on any file that uses them.
Our `Locator` abstraction now supports CR line endings. However, Rust's `str#lines` implementation does _not_.
This PR adds a `UniversalNewlineIterator` implementation that respects all of CR, LF, and CRLF line endings, and plugs it into most of the `.lines()` call sites.
As an alternative design, it could be nice if we could leverage `Locator` for this. We've already computed all of the line endings, so we could probably iterate much more efficiently?
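A minimal sketch of such an iterator (stripped down; the real `UniversalNewlineIterator` has more to it):
```rust
/// Iterate over lines, treating any of "\r\n", "\r", and "\n" as a
/// line terminator, unlike `str::lines`.
struct UniversalNewlines<'a> {
    rest: &'a str,
}

impl<'a> Iterator for UniversalNewlines<'a> {
    type Item = &'a str;

    fn next(&mut self) -> Option<&'a str> {
        if self.rest.is_empty() {
            return None;
        }
        match self.rest.find(|c: char| c == '\r' || c == '\n') {
            None => {
                let line = self.rest;
                self.rest = "";
                Some(line)
            }
            Some(i) => {
                let line = &self.rest[..i];
                // Consume "\r\n" as a single terminator.
                let skip = if self.rest[i..].starts_with("\r\n") { 2 } else { 1 };
                self.rest = &self.rest[i + skip..];
                Some(line)
            }
        }
    }
}
```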
## Test Plan
Largely relying on automated testing; however, I also ran this over some known failure cases, like #3404.
## Summary
This PR moves `Diagnostic`, `DiagnosticKind`, and `Fix` into their own crate, which will enable us to further split up Ruff, since sub-linter crates (which need to implement functions that return `Diagnostic`) can now depend on `ruff_diagnostics` rather than Ruff.
This PR productionizes @MichaReiser's suggestion in https://github.com/charliermarsh/ruff/issues/1820#issuecomment-1440204423, by creating a separate crate for the `ast` module (`ruff_python_ast`). This will enable us to further split up the `ruff` crate, as we'll be able to create (e.g.) separate sub-linter crates that have access to these common AST utilities.
This was mostly a straightforward copy (with adjustments to module imports), as the few dependencies that _did_ require modifications were handled in #3366, #3367, and #3368.
In hindsight, `ruff_python` is too general. A good giveaway is that it's actually a prefix of some other crates. The intent of this crate is to reimplement pieces of the Python standard library and CPython itself, so `ruff_python_stdlib` feels appropriate.
This PR introduces a new `CacheKey` trait for types that can be used as a cache key.
I'm not entirely sure if this is worth the "overhead", but I was surprised to find `HashableHashSet` and got scared when I looked at the time complexity of the `hash` function. These implementations must be extremely slow in hashed collections.
I then searched for usages and quickly realized that only the cache uses these `Hash` implementations, where performance is less sensitive.
This PR introduces a new `CacheKey` trait to communicate the difference between a hash and computing a key for the cache. The new trait can be implemented for types that don't implement `Hash` for performance reasons, and we can define additional constraints on the implementation: For example, we'll want to enforce portability when we add remote caching support. Using a different trait further allows us not to implement it for types without stable identities (e.g. pointers) or use other implementations than the standard hash function.
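To make the distinction concrete, a minimal sketch of the trait (names illustrative, not the exact ruff definitions):
```rust
use std::hash::Hasher;

/// A dedicated trait for "hash this into the cache key", kept separate
/// from `std::hash::Hash` so the two can diverge.
trait CacheKey {
    fn cache_key<H: Hasher>(&self, state: &mut H);
}

impl CacheKey for str {
    fn cache_key<H: Hasher>(&self, state: &mut H) {
        state.write(self.as_bytes());
    }
}

impl CacheKey for u32 {
    fn cache_key<H: Hasher>(&self, state: &mut H) {
        state.write_u32(*self);
    }
}
```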
Currently the quote style of the first string in a file is used for autodetecting what to use when rewriting code for fixes. This is an okay heuristic, but often the first line in a file is a docstring, rather than a string constant, and it's not uncommon for pre-Black code to have different quoting styles for those.
For example, in the Google style guide:
https://google.github.io/styleguide/pyguide.html
> Be consistent with your choice of string quote character within a file. Pick ' or " and stick with it. ... Docstrings must use """ regardless.
This branch adjusts the logic to instead skip over any `"""` triple-double-quote string tokens. The default, if there are no single-quoted strings, is still to use double quotes as the style.
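A minimal sketch of the adjusted heuristic (operating on token source text; the real logic works on lexer tokens, and string prefixes are ignored here):
```rust
/// Pick the quote style of the first string token that isn't a
/// docstring-style triple-double-quoted string; default to double quotes.
fn detect_quote<'a>(string_tokens: impl Iterator<Item = &'a str>) -> char {
    for tok in string_tokens {
        // Skip docstring-style triple-double-quoted strings.
        if tok.starts_with("\"\"\"") {
            continue;
        }
        match tok.chars().next() {
            Some(q @ ('\'' | '"')) => return q,
            _ => continue,
        }
    }
    // Default: double quotes.
    '"'
}
```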
Implement PYI006 "bad version info comparison"
## What it does
Ensures that only `<` and `>=` are used for version-info comparisons with
`sys.version_info` in `.pyi` files. All other comparisons, such as
`>`, `<=`, and `==`, are banned.
## Why is this bad?
```python
>>> import sys
>>> print(sys.version_info)
sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
>>> print(sys.version_info > (3, 8))
True
>>> print(sys.version_info == (3, 8))
False
>>> print(sys.version_info <= (3, 8))
False
>>> print(sys.version_info in (3, 8))
False
```
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This prevents the UP034 autofix from simultaneously stripping the
parentheses from generators in the same linter pass, which causes
a SyntaxError.
Closes #3234.
With this fix:
```python
$ cat test.py
the_first_one = next(
(i for i in range(10) if i // 2 == 0)
)
$ cargo run --bin ruff check test.py --no-cache --select UP034,COM812 --fix
Finished dev [unoptimized + debuginfo] target(s) in 0.08s
Running `target/debug/ruff check test.py --no-cache --select UP034,COM812 --fix`
Found 1 error (1 fixed, 0 remaining).
$ cat test.py
the_first_one = next(
i for i in range(10) if i // 2 == 0
)
```
* Use format
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Renames the following rules that stood out to me at a glance as needing better names:
- `or-true` to `expr-or-true`
- `and-false` to `expr-and-false`
- `a-or-not-a` to `expr-or-not-expr`
- `a-and-not-a` to `expr-and-not-expr`
Related to #2902.
PYI009 and PYI010 are very similar: both enforce always using `...` in function and class bodies in stubs.
PYI021 bans docstrings in stubs.
I think all of these rules should be relatively straightforward to implement autofixes for, but we can do that later once we get all the other rules added.
rel: https://github.com/charliermarsh/ruff/issues/848
In ruff-lsp (https://github.com/charliermarsh/ruff-lsp/pull/76) we want to add a "Disable \<rule\> for this line" quickfix. However, finding the correct line into which the `noqa` comment should be inserted is non-trivial (multi-line strings for example).
Ruff already has this info, so expose it in the JSON output for use by ruff-lsp.
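Sketched shape of the addition (struct and field names here are illustrative, not ruff's exact serialization code):
```rust
use serde::Serialize;

/// The JSON message, extended with the row on which a `# noqa`
/// suppression comment would have to be placed.
#[derive(Serialize)]
struct ExpandedMessage<'a> {
    code: &'a str,
    message: &'a str,
    noqa_row: usize,
}
```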
This PR enables us to apply the proper quotation marks, including support for escapes. There are some significant TODOs, especially around implicit concatenations like:
```py
(
"abc"
"def"
)
```
Which are represented as a single AST node, which requires us to tokenize _within_ the formatter to identify all the individual string parts.
I manually changed these in #3080 and #3083 to get the tests passing (with notes around the deviations) -- but that's no longer necessary, now that we have proper testing that takes deviations into account.
This just re-formats all the `.py.expect` files with Black, both to add a trailing newline and be doubly-certain that they're correctly formatted.
I also ensured that we add a hard line break after each statement, and that we avoid including an extra newline in the generated Markdown (since the code should contain the exact expected newlines).
This PR changes the testing infrastructure to run all black tests and:
* Pass if Ruff and Black generate the same formatting
* Fail and write a markdown snapshot that shows the input code, the differences between Black and Ruff, Ruff's output, and Black's output
This is achieved by introducing a new `fixture` macro (open to better name suggestions) that "duplicates" the attributed test for every file that matches the specified glob pattern. Creating a new test for each file over having a test that iterates over all files has the advantage that you can run a single test, and that test failures indicate which case is failing.
The `fixture` macro also makes it straightforward to e.g. set up our own spec tests that test very specific formatting by creating a new folder and using insta to assert the formatted output.
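For illustration, a hypothetical test using the macro could look like this (the attribute name and argument syntax are assumptions based on the description above, not the macro's exact API):
```rust
// One test is generated per file matching the glob, so a failure names
// the offending fixture directly.
#[fixture(pattern = "resources/test/fixtures/black/**/*.py")]
#[test]
fn black_compatibility(input_path: &std::path::Path) {
    // Format `input_path` with Ruff and compare against the output
    // that Black produced for the same file.
}
```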
I worked on #2993 and ran into issues that the formatter tests are failing on Windows because `writeln!` emits `\n` as line terminator on all platforms, but `git` on Windows converted the line endings in the snapshots to `\r\n`.
I then tried to replicate the issue on my Windows machine and was surprised that all linter snapshot tests are failing on my machine. I figured out after some time that it is due to my global git config keeping the input line endings rather than converting to `\r\n`.
Luckily, I've been made aware of #2033 which introduced an "override" for the `assert_yaml_snapshot` macro that normalizes new lines, by splitting the formatted string using the platform-specific newline character. This is a clever approach and gives nice diffs for multiline fixes but makes assumptions about the setup contributors use and requires special care whenever we use line endings inside of tests.
I recommend that we remove the special new line handling and use `.gitattributes` to enforce the use of `LF` on all platforms [guide](https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings). This gives us platform agnostic tests without having to worry about line endings in our tests or different git configurations.
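For reference, a one-line `.gitattributes` along these lines would enforce `LF` on checkout (the exact patterns would need discussion):
```
* text=auto eol=lf
```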
## Note
It may be necessary for Windows contributors to run the following command to update the line endings of their files
```bash
git rm --cached -r .
git reset --hard
```
When creating a dict with string keys, some prefer to call dict instead of writing a dict literal.
For example: `dict(a=1, b=2, c=3)` instead of `{"a": 1, "b": 2, "c": 3}`.
This extends the autofix for TID252 to work for relative imports without `module` (i.e. `from .. import`). Tested with `matplotlib` and `bokeh`.
(Previously, it would panic on unwrap of the module.)
Note that pandas has [replaced](6057d7a93e) `absolufy-imports` with `ruff` now!
## Summary
This allows users to do things like:
```py
# ruff: noqa: F401
```
...to ignore all `F401` directives in a file. It's equivalent to `per-file-ignores`, but allows users to specify the behavior inline.
Note that Flake8 does _not_ support this, so we _don't_ respect `# flake8: noqa: F401`. (Flake8 treats that as equivalent to `# flake8: noqa`, so ignores _all_ errors in the file. I think all of [these usages](https://cs.github.com/?scopeName=All+repos&scope=&q=%22%23+flake8%3A+noqa%3A+%22) are probably mistakes!)
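A rough sketch of recognizing the directive (the regex and function are illustrative; ruff's actual parsing is more careful about codes and whitespace):
```rust
use regex::Regex;

/// Extract the codes from a file-level exemption comment like
/// `# ruff: noqa: F401, F841`, if the line is one.
fn file_level_codes(line: &str) -> Option<Vec<&str>> {
    let re = Regex::new(r"^#\s*ruff\s*:\s*noqa\s*:\s*(.+)$").unwrap();
    let caps = re.captures(line.trim())?;
    Some(caps.get(1)?.as_str().split(',').map(str::trim).collect())
}
```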
A couple notes on the details:
- If a user has `# ruff: noqa: F401` in the file, but also `# noqa: F401` on a line that would legitimately trigger an `F401` violation, we _do_ mark that as "unused" for `RUF100` purposes. This may be the wrong choice. The `noqa` is legitimately unused, but it's also not "wrong". It's just redundant.
- If a user has `# ruff: noqa: F401`, and runs `--add-noqa`, we _won't_ add `# noqa: F401` to any lines (which seems like the obvious right choice to me).
Closes #1054 (which has some extra pieces that I'll carve out into a separate issue).
Closes #2446.
- Implement N999 (following flake8-module-naming) in pep8_naming
- Refactor pep8_naming: split rules.rs into file per rule
- Documentation for majority of the violations
Closes https://github.com/charliermarsh/ruff/issues/2734
This rule guards against `asyncio.create_task` usages of the form:
```py
asyncio.create_task(coordinator.ws_connect()) # Error
```
...which can lead to unexpected bugs due to the lack of a strong reference to the created task. See Will McGugan's blog post for reference: https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/.
Note that we can't detect issues like:
```py
def f():
    # Stored as `task`, but never used...
    task = asyncio.create_task(coordinator.ws_connect())
```
So that would be a false negative. But this catches the common case of failing to assign the task in any way.
Closes #2809.
For example:
```console
$ ruff check --select=EM<Tab>
EM     -- flake8-errmsg
EM1    --
EM10   --
EM101  -- raw-string-in-exception
EM102  -- f-string-in-exception
EM103  -- dot-format-in-exception
```
(You will need to enable autocompletion as described
in the Autocompletion section in the README.)
Fixes #2808.
(The `--help` help change in the README is due to a clap bug,
for which I already submitted a fix:
https://github.com/clap-rs/clap/pull/4710.)
## Summary
This PR contains the code for the autoformatter proof-of-concept.
## Crate structure
The primary formatting hook is the `fmt` function in `crates/ruff_python_formatter/src/lib.rs`.
The current formatter approach is outlined in `crates/ruff_python_formatter/src/lib.rs`, and is structured as follows:
- Tokenize the code using the RustPython lexer.
- In `crates/ruff_python_formatter/src/trivia.rs`, extract a variety of trivia tokens from the token stream. These include comments, trailing commas, and empty lines.
- Generate the AST via the RustPython parser.
- In `crates/ruff_python_formatter/src/cst.rs`, convert the AST to a CST structure. As of now, the CST is nearly identical to the AST, except that every node gets a `trivia` vector. But we might want to modify it further.
- In `crates/ruff_python_formatter/src/attachment.rs`, attach each trivia token to the corresponding CST node. The logic for this is mostly in `decorate_trivia` and is ported almost directly from Prettier (given each token, find its preceding, following, and enclosing nodes, then attach the token to the appropriate node in a second pass).
- In `crates/ruff_python_formatter/src/newlines.rs`, normalize newlines to match Black’s preferences. This involves traversing the CST and inserting or removing `TriviaToken` values as we go.
- Call `format!` on the CST, which delegates to type-specific formatter implementations (e.g., `crates/ruff_python_formatter/src/format/stmt.rs` for `Stmt` nodes, and similar for `Expr` nodes; the others are trivial). Those type-specific implementations delegate to kind-specific functions (e.g., `format_func_def`).
## Testing and iteration
The formatter is being developed against the Black test suite, which was copied over in-full to `crates/ruff_python_formatter/resources/test/fixtures/black`.
The Black fixtures had to be modified to create [`insta`](https://github.com/mitsuhiko/insta)-compatible snapshots, which now exist in the repo.
My approach thus far has been to try and improve coverage by tackling fixtures one-by-one.
## What works, and what doesn’t
- *Most* nodes are supported at a basic level (though there are a few stragglers at time of writing, like `StmtKind::Try`).
- Newlines are properly preserved in most cases.
- Magic trailing commas are properly preserved in some (but not all) cases.
- Trivial leading and trailing standalone comments mostly work (although maybe not at the end of a file).
- Inline comments, and comments within expressions, often don’t work -- they work in a few cases, but it’s one-off right now. (We’re probably associating them with the “right” nodes more often than we are actually rendering them in the right place.)
- We don’t properly normalize string quotes. (At present, we just repeat any constants verbatim.)
- We’re mishandling a bunch of wrapping cases (if we treat Black as the reference implementation). Here are a few examples (demonstrating Black's stable behavior):
```py
# In some cases, if the end expression is "self-closing" (functions,
# lists, dictionaries, sets, subscript accesses, and any length-two
# boolean operations that end in these elements), Black
# will wrap like this...
if some_expression and f(
    b,
    c,
    d,
):
    pass

# ...whereas we do this:
if (
    some_expression
    and f(
        b,
        c,
        d,
    )
):
    pass

# If function arguments can fit on a single line, then Black will
# format them like this, rather than exploding them vertically.
if f(
    a, b, c, d, e, f, g, ...
):
    pass
```
- We don’t properly preserve parentheses in all cases. Black preserves parentheses in some but not all cases.
This PR removes the dependency on `ruff_rowan` (i.e., Rome's fork of rust-analyzer's `rowan`), and in turn, trims out a lot of code in `ruff_formatter` that isn't necessary (or isn't _yet_ necessary) to power the autoformatter.
We may end up pulling some of this back in -- TBD. For example, the autoformatter has its own comment representation right now, but we may eventually want to use the `comments.rs` data structures defined in `rome_formatter`.
Given our current parser abstractions, we need the ability to tell `ruff_formatter` to print a pre-defined slice from a fixed string of source code, which we've introduced here as `FormatElement::StaticTextSlice`.
The Ruff autoformatter is going to be based on an intermediate representation (IR) formatted via [Wadler's algorithm](https://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf). This is architecturally similar to [Rome](https://github.com/rome/tools), Prettier, [Skip](https://github.com/skiplang/skip/blob/master/src/tools/printer/printer.sk), and others.
This PR adds a fork of the `rome_formatter` crate from [Rome](https://github.com/rome/tools), renamed here to `ruff_formatter`, which provides generic definitions for a formatter IR as well as a generic IR printer. (We've also pulled in `rome_rowan`, `rome_text_size`, and `rome_text_edit`, though some of these will be removed in future PRs.)
Why fork? `rome_formatter` contains code that's specific to Rome's AST representation (e.g., it relies on a fork of rust-analyzer's `rowan`), and we'll likely want to support different abstractions and formatting capabilities (there are already a few changes coming in future PRs). Once we've dropped `ruff_rowan` and trimmed down `ruff_formatter` to the code we currently need, it's also not a huge surface area to maintain and update.
In 28c9263722 I introduced automatic
linkification of option references in rule documentation,
which automatically converted the following:
```
## Options

* `namespace-packages`
```
to:
```
## Options

* [`namespace-packages`]

[`namespace-packages`]: ../../settings#namespace-packages
```
While the above is a correct CommonMark[1] link definition,
what I was missing was that we used mkdocs for our documentation
generation, which as it turns out uses a non-CommonMark-compliant
Markdown parser, namely Python-Markdown, which contrary to CommonMark
doesn't support link definitions containing code tags.
This commit fixes the broken links via a regex hack.
[1]: https://commonmark.org/
Currently the define_rule_mapping! macro generates both the Rule enum as
well as the RuleCodePrefix enum and the mapping between the two. After
this commit series the macro will only generate the Rule enum; the
RuleCodePrefix enum and the mapping will be generated by a new map_codes
proc macro, so we rename the macro now to fit its new purpose.
Same reasoning as for the previous commit ... one &'static str
becomes two &'static str because we split the RuleCodePrefix enum.
Note that the .unwrap() we have to add now will actually
be removed in the 6th commit.
Rule::noqa_code previously returned a single &'static str,
which was possible because we had one enum listing all
rule code prefixes. This commit series will however split up
the RuleCodePrefix enum into several enums ... so we'll end up
with two &'static str ... this commit wraps the return type
of Rule::noqa_code into a newtype so that we can easily change
it to return two &'static str in the 6th commit of this series.
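A sketch of that newtype (illustrative):
```rust
/// Wraps what used to be a bare &'static str, so the return type can
/// later grow to two parts (prefix + suffix) without touching callers
/// that just display it.
pub struct NoqaCode(&'static str);

impl std::fmt::Display for NoqaCode {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(self.0)
    }
}
```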
Post this commit series, several codes can be mapped to a single rule;
this commit therefore renames Rule::code to Rule::noqa_code,
which is the code that --add-noqa will add to ignore a rule.
Previously the rule documentation referenced configuration options
via full https:// URLs, which was bad for several reasons:
* changing the website would mean you'd have to change all URLs
* the links didn't work when building mkdocs locally
* the URLs showed up in the `ruff rule` output
* broken references weren't detected by our CI
This commit solves all of these problems by post-processing the
Markdown, recognizing sections such as:
```
## Options

* `flake8-tidy-imports.ban-relative-imports`
```
`cargo dev generate-all` will automatically linkify such references
and panic if the referenced option doesn't exist.
Note that the option can also be linked in the other Markdown sections
via e.g. [`flake8-tidy-imports.ban-relative-imports`] since
the post-processing code generates a CommonMark link definition.
Resolves #2766.
The motivating issue here is of the following form:
```py
try:
    raise Exception("We want to hide this error message")
except Exception:
    try:
        raise Exception("We want to show this")
    except Exception as exc:
        raise exc from None
```
However, I think we should avoid this if _any_ cause is present, since causes require a named exception.
Closes#2814.
The synopsis is as follows.
List all top-level config keys:
```console
$ ruff config
allowed-confusables
builtins
cache-dir
... etc.
```
List all config keys in a specific section:
```console
$ ruff config mccabe
max-complexity
```
Describe a specific config option:
```console
$ ruff config mccabe.max-complexity
The maximum McCabe complexity to allow before triggering `C901` errors.
Default value: 10
Type: int
```
Example usage:
```toml
# Flag errors (`C901`) whenever the complexity level exceeds 5.
max-complexity = 5
```
The new `ruff rule` output format introduced in
551b810aeb doesn't print Markdown but
rather some rich text with escape sequences for colors and links,
it's actually the "text" format that prints Markdown, so naming the new
format "markdown" is very confusing. This commit therefore renames it to
"pretty".
This isn't a breaking change since there hasn't been a release yet.
This was just an oversight and misunderstanding on my part. We had some helpful tests, but I misunderstood the "right" behavior, so I thought they were passing.
Closes #2761.
`if_all_same(codes.values().cloned()).unwrap_or_default()`
was quite unreadable, because it wasn't obvious that `codes.values()` are
the prefixes. It's better to introduce another Map rather than having
Maps within Maps.
It's not only `.pyi` files that should be exempt from this, but also, for example, scripts without an extension that are explicitly passed as command-line args.
```console
❯ cargo run rule B017
Finished dev [unoptimized + debuginfo] target(s) in 0.13s
Running `target/debug/ruff rule B017`
no-assert-raises-exception
Code: B017 (flake8-bugbear)
### What it does
Checks for `self.assertRaises(Exception)`.
## Why is this bad?
`assertRaises(Exception)` can lead to your test passing even if the
code being tested is never executed due to a typo.
Either assert for a more specific exception (builtin or custom), use
`assertRaisesRegex` or the context manager form of `assertRaises`.
```
This PR adds a configuration option to inhibit ANN* violations for functions that have no annotations at all, for easier gradual typing of a large codebase.