This PR productionizes @MichaReiser's suggestion in https://github.com/charliermarsh/ruff/issues/1820#issuecomment-1440204423, by creating a separate crate for the `ast` module (`ruff_python_ast`). This will enable us to further split up the `ruff` crate, as we'll be able to create (e.g.) separate sub-linter crates that have access to these common AST utilities.
This was mostly a straightforward copy (with adjustments to module imports), as the few dependencies that _did_ require modifications were handled in #3366, #3367, and #3368.
In hindsight, `ruff_python` is too general. A good giveaway is that it's actually a prefix of some other crates. The intent of this crate is to reimplement pieces of the Python standard library and CPython itself, so `ruff_python_stdlib` feels appropriate.
This PR introduces a new `CacheKey` trait for types that can be used as a cache key.
I'm not entirely sure if this is worth the "overhead", but I was surprised to find `HashableHashSet` and got scared when I looked at the time complexity of the `hash` function. These implementations must be extremely slow in hashed collections.
I then searched for usages and quickly realized that only the cache uses these `Hash` implementations, where performance is less sensitive.
This PR introduces a new `CacheKey` trait to communicate the difference between computing a hash and computing a key for the cache. The new trait can be implemented for types that don't implement `Hash` for performance reasons, and we can define additional constraints on the implementation: for example, we'll want to enforce portability when we add remote caching support. Using a different trait also allows us to avoid implementing it for types without stable identities (e.g., pointers), or to use implementations other than the standard hash function.
Currently, the quote style of the first string in a file is used to autodetect which style to use when rewriting code for fixes. This is an okay heuristic, but the first string in a file is often a docstring rather than a string constant, and it's not uncommon for pre-Black code to use different quoting styles for the two.
For example, in the Google style guide:
https://google.github.io/styleguide/pyguide.html
> Be consistent with your choice of string quote character within a file. Pick ' or " and stick with it. ... Docstrings must use """ regardless.
This branch adjusts the logic to instead skip over any `"""` triple-double-quote string tokens. The default, if there are no single-quoted strings, is still to use double quotes as the style.
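For example, with this change a file like the following (hypothetical contents, not from the PR) is detected as preferring single quotes, because the leading triple-double-quoted docstring token is skipped:
```python
"""Module docstring, which uses triple double quotes per the style guide."""

NAME = 'world'  # the remaining strings consistently use single quotes
MESSAGE = 'hello, ' + NAME
```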
Implement PYI006 "bad version info comparison"
## What it does
Ensures that you only use `<` and `>=` for version info comparisons with
`sys.version_info` in `.pyi` files. All other comparisons, such as `>`,
`<=`, and `==`, are banned.
## Why is this bad?
```python
>>> import sys
>>> print(sys.version_info)
sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
>>> print(sys.version_info > (3, 8))
True
>>> print(sys.version_info == (3, 8))
False
>>> print(sys.version_info <= (3, 8))
False
>>> print(sys.version_info in (3, 8))
False
```
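As a rough illustration (hypothetical `.pyi` contents, not taken from the PR), the first two branches below would be allowed and the third flagged:
```python
import sys

if sys.version_info >= (3, 9):  # OK: `>=`
    def f() -> int: ...

if sys.version_info < (3, 9):  # OK: `<`
    def f() -> str: ...

if sys.version_info == (3, 9):  # PYI006: exact comparisons are banned
    def f() -> bytes: ...
```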
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This prevents a conflict with the UP034 autofix, which simultaneously
strips the parentheses from generators in the same linter pass; the
combination previously caused a SyntaxError.
Closes #3234.
With this fix:
```console
$ cat test.py
the_first_one = next(
    (i for i in range(10) if i // 2 == 0)
)
$ cargo run --bin ruff check test.py --no-cache --select UP034,COM812 --fix
Finished dev [unoptimized + debuginfo] target(s) in 0.08s
Running `target/debug/ruff check test.py --no-cache --select UP034,COM812 --fix`
Found 1 error (1 fixed, 0 remaining).
$ cat test.py
the_first_one = next(
    i for i in range(10) if i // 2 == 0
)
```
* Use format
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Renames the following rules that stood out to me at a glance as needing better names:
- `or-true` to `expr-or-true`
- `and-false` to `expr-and-false`
- `a-or-not-a` to `expr-or-not-expr`
- `a-and-not-a` to `expr-and-not-expr`
Related to #2902.
PYI009 and PYI010 are very similar: both enforce the use of `...` in function and class bodies in stubs.
PYI021 bans docstrings in stubs.
I think autofixes for all of these rules should be relatively straightforward to implement, but we can do that later, once all the other rules are added.
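As a rough sketch (hypothetical stub contents, not from the PR), these rules would flag bodies like the following:
```python
# example.pyi
def ok() -> int: ...  # OK: body is `...`

def uses_pass() -> int:
    pass  # PYI009: prefer `...` over `pass` in stub bodies

def has_code() -> int:
    print("hello")  # PYI010: stub bodies should contain only `...`

def has_docstring() -> int:
    """Compute a number."""  # PYI021: docstrings don't belong in stubs
```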
rel: https://github.com/charliermarsh/ruff/issues/848
In ruff-lsp (https://github.com/charliermarsh/ruff-lsp/pull/76) we want to add a "Disable \<rule\> for this line" quickfix. However, finding the correct line into which the `noqa` comment should be inserted is non-trivial (multi-line strings for example).
Ruff already has this info, so expose it in the JSON output for use by ruff-lsp.
This PR enables us to apply the proper quotation marks, including support for escapes. There are some significant TODOs, especially around implicit concatenations like:
```py
(
"abc"
"def"
)
```
These are represented as a single AST node, which requires us to tokenize _within_ the formatter to identify all the individual string parts.
I manually changed these in #3080 and #3083 to get the tests passing (with notes around the deviations) -- but that's no longer necessary, now that we have proper testing that takes deviations into account.
This just re-formats all the `.py.expect` files with Black, both to add a trailing newline and be doubly-certain that they're correctly formatted.
I also ensured that we add a hard line break after each statement, and that we avoid including an extra newline in the generated Markdown (since the code should contain the exact expected newlines).
This PR changes the testing infrastructure to run all black tests and:
* Pass if Ruff and Black generate the same formatting
* Fail and write a Markdown snapshot that shows the input code, the differences between Black and Ruff, Ruff's output, and Black's output
This is achieved by introducing a new `fixture` macro (open to better name suggestions) that "duplicates" the attributed test for every file matching the specified glob pattern. Creating a new test for each file, rather than a single test that iterates over all files, has the advantage that you can run an individual test and that failures indicate which case is failing.
The `fixture` macro also makes it straightforward to, e.g., set up our own spec tests that exercise very specific formatting, by creating a new folder and using insta to assert the formatted output.
I worked on #2993 and ran into the issue that the formatter tests fail on Windows because `writeln!` emits `\n` as the line terminator on all platforms, but `git` on Windows converts the line endings in the snapshots to `\r\n`.
I then tried to replicate the issue on my Windows machine and was surprised that all linter snapshot tests fail there, too. After some time, I figured out that this is due to my global git config keeping the input line endings rather than converting them to `\r\n`.
Luckily, I was made aware of #2033, which introduced an "override" for the `assert_yaml_snapshot` macro that normalizes newlines by splitting the formatted string on the platform-specific newline character. This is a clever approach and gives nice diffs for multi-line fixes, but it makes assumptions about the setup contributors use and requires special care whenever we use line endings inside tests.
I recommend that we remove the special newline handling and use `.gitattributes` to enforce the use of `LF` on all platforms ([guide](https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings)). This gives us platform-agnostic tests without having to worry about line endings in our tests or different git configurations.
## Note
It may be necessary for Windows contributors to run the following commands to update the line endings of their files:
```bash
git rm --cached -r .
git reset --hard
```
When creating a dict with string keys, some prefer to call `dict` instead of writing a dict literal.
For example: `dict(a=1, b=2, c=3)` instead of `{"a": 1, "b": 2, "c": 3}`.
This extends the autofix for TID252 to work for relative imports without a `module` (i.e. `from .. import`). Tested with `matplotlib` and `bokeh`.
(Previously, it would panic on unwrapping the module.)
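For illustration, with a hypothetical package layout, the autofix now also handles the bare-relative form:
```python
# Before, in mypkg/sub/module.py (hypothetical layout):
from .. import helpers  # TID252

# After the autofix:
from mypkg import helpers
```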
Note that pandas has [replaced](6057d7a93e) `absolufy-imports` with `ruff` now!
# Summary
This allows users to do things like:
```py
# ruff: noqa: F401
```
...to ignore all `F401` directives in a file. It's equivalent to `per-file-ignores`, but allows users to specify the behavior inline.
Note that Flake8 does _not_ support this, so we _don't_ respect `# flake8: noqa: F401`. (Flake8 treats that as equivalent to `# flake8: noqa`, so it ignores _all_ errors in the file. I think all of [these usages](https://cs.github.com/?scopeName=All+repos&scope=&q=%22%23+flake8%3A+noqa%3A+%22) are probably mistakes!)
A couple notes on the details:
- If a user has `# ruff: noqa: F401` in the file, but also `# noqa: F401` on a line that would legitimately trigger an `F401` violation, we _do_ mark the latter as "unused" for `RUF100` purposes (see the example below). This may be the wrong choice. The `noqa` is legitimately unused, but it's also not "wrong". It's just redundant.
- If a user has `# ruff: noqa: F401`, and runs `--add-noqa`, we _won't_ add `# noqa: F401` to any lines (which seems like the obvious right choice to me).
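To make the first point concrete, here's an illustrative example (hypothetical file contents):
```python
# ruff: noqa: F401

import os  # noqa: F401
# RUF100 flags the line-level `noqa` above as unused: the file-level directive
# already suppresses F401, even though `os` is genuinely unused here.
```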
Closes #1054 (which has some extra pieces that I'll carve out into a separate issue).
Closes #2446.
- Implement N999 (following flake8-module-naming) in pep8_naming
- Refactor pep8_naming: split rules.rs into file per rule
- Documentation for the majority of the violations
Closes https://github.com/charliermarsh/ruff/issues/2734
This rule guards against `asyncio.create_task` usages of the form:
```py
asyncio.create_task(coordinator.ws_connect()) # Error
```
...which can lead to unexpected bugs due to the lack of a strong reference to the created task. See Will McGugan's blog post for reference: https://textual.textualize.io/blog/2023/02/11/the-heisenbug-lurking-in-your-async-code/.
Note that we can't detect issues like:
```py
def f():
    # Stored as `task`, but never used...
    task = asyncio.create_task(coordinator.ws_connect())
```
So that would be a false negative. But this catches the common case of failing to assign the task in any way.
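For reference, the pattern recommended by the `asyncio` documentation (and the blog post above) is to keep a strong reference to the task until it completes, e.g.:
```python
import asyncio

background_tasks = set()


async def connect(coordinator):
    task = asyncio.create_task(coordinator.ws_connect())
    # Hold a strong reference so the task can't be garbage-collected
    # before it finishes, then drop it once the task completes.
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
```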
Closes #2809.
For example:
    $ ruff check --select=EM<Tab>
    EM     -- flake8-errmsg
    EM10   EM1    --
    EM101  -- raw-string-in-exception
    EM102  -- f-string-in-exception
    EM103  -- dot-format-in-exception
(You will need to enable autocompletion as described
in the Autocompletion section in the README.)
Fixes #2808.
(The --help help change in the README is due to a clap bug,
for which I already submitted a fix:
https://github.com/clap-rs/clap/pull/4710.)
# Summary
This PR contains the code for the autoformatter proof-of-concept.
## Crate structure
The primary formatting hook is the `fmt` function in `crates/ruff_python_formatter/src/lib.rs`.
The current formatter approach is outlined in `crates/ruff_python_formatter/src/lib.rs`, and is structured as follows:
- Tokenize the code using the RustPython lexer.
- In `crates/ruff_python_formatter/src/trivia.rs`, extract a variety of trivia tokens from the token stream. These include comments, trailing commas, and empty lines.
- Generate the AST via the RustPython parser.
- In `crates/ruff_python_formatter/src/cst.rs`, convert the AST to a CST structure. As of now, the CST is nearly identical to the AST, except that every node gets a `trivia` vector. But we might want to modify it further.
- In `crates/ruff_python_formatter/src/attachment.rs`, attach each trivia token to the corresponding CST node. The logic for this is mostly in `decorate_trivia` and is ported almost directly from Prettier (given each token, find its preceding, following, and enclosing nodes, then attach the token to the appropriate node in a second pass).
- In `crates/ruff_python_formatter/src/newlines.rs`, normalize newlines to match Black’s preferences. This involves traversing the CST and inserting or removing `TriviaToken` values as we go.
- Call `format!` on the CST, which delegates to type-specific formatter implementations (e.g., `crates/ruff_python_formatter/src/format/stmt.rs` for `Stmt` nodes, and similar for `Expr` nodes; the others are trivial). Those type-specific implementations delegate to kind-specific functions (e.g., `format_func_def`).
## Testing and iteration
The formatter is being developed against the Black test suite, which was copied over in full to `crates/ruff_python_formatter/resources/test/fixtures/black`.
The Black fixtures had to be modified to create [`insta`](https://github.com/mitsuhiko/insta)-compatible snapshots, which now exist in the repo.
My approach thus far has been to try and improve coverage by tackling fixtures one-by-one.
## What works, and what doesn’t
- *Most* nodes are supported at a basic level (though there are a few stragglers at time of writing, like `StmtKind::Try`).
- Newlines are properly preserved in most cases.
- Magic trailing commas are properly preserved in some (but not all) cases.
- Trivial leading and trailing standalone comments mostly work (although maybe not at the end of a file).
- Inline comments, and comments within expressions, often don’t work -- they work in a few cases, but it’s one-off right now. (We’re probably associating them with the “right” nodes more often than we are actually rendering them in the right place.)
- We don’t properly normalize string quotes. (At present, we just repeat any constants verbatim.)
- We’re mishandling a bunch of wrapping cases (if we treat Black as the reference implementation). Here are a few examples (demonstrating Black's stable behavior):
```py
# In some cases, if the end expression is "self-closing" (functions,
# lists, dictionaries, sets, subscript accesses, and any length-two
# boolean operations that end in these elements), Black
# will wrap like this...
if some_expression and f(
    b,
    c,
    d,
):
    pass

# ...whereas we do this:
if (
    some_expression
    and f(
        b,
        c,
        d,
    )
):
    pass

# If function arguments can fit on a single line, then Black will
# format them like this, rather than exploding them vertically.
if f(
    a, b, c, d, e, f, g, ...
):
    pass
```
- We don’t properly preserve parentheses in all cases. Black preserves parentheses in some but not all cases.
This PR removes the dependency on `ruff_rowan` (i.e., Rome's fork of rust-analyzer's `rowan`), and in turn, trims out a lot of code in `ruff_formatter` that isn't necessary (or isn't _yet_ necessary) to power the autoformatter.
We may end up pulling some of this back in -- TBD. For example, the autoformatter has its own comment representation right now, but we may eventually want to use the `comments.rs` data structures defined in `rome_formatter`.
Given our current parser abstractions, we need the ability to tell `ruff_formatter` to print a pre-defined slice from a fixed string of source code, which we've introduced here as `FormatElement::StaticTextSlice`.
The Ruff autoformatter is going to be based on an intermediate representation (IR) formatted via [Wadler's algorithm](https://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf). This is architecturally similar to [Rome](https://github.com/rome/tools), Prettier, [Skip](https://github.com/skiplang/skip/blob/master/src/tools/printer/printer.sk), and others.
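To make the IR idea concrete, here is a minimal, hypothetical Python sketch of a Wadler-style document IR and printer (names and structure are illustrative only, not `ruff_formatter`'s actual API):
```python
from dataclasses import dataclass
from typing import Union


# A tiny Wadler-style document IR: formatters build a tree of these elements,
# and a generic printer decides where to insert line breaks.
@dataclass
class Text:
    s: str


@dataclass
class Line:
    """A space when printed flat, a newline plus indentation when broken."""


@dataclass
class Concat:
    parts: list


@dataclass
class Nest:
    """Increase indentation for the nested document."""
    amount: int
    doc: "Doc"


@dataclass
class Group:
    """Print flat if the content fits in the remaining width, else break."""
    doc: "Doc"


Doc = Union[Text, Line, Concat, Nest, Group]


def flat(doc: Doc) -> str:
    """Render `doc` on a single line (used to measure whether a group fits)."""
    if isinstance(doc, Text):
        return doc.s
    if isinstance(doc, Line):
        return " "
    if isinstance(doc, Concat):
        return "".join(flat(p) for p in doc.parts)
    return flat(doc.doc)  # Nest, Group


def pretty(doc: Doc, width: int = 88, indent: int = 0, broken: bool = True) -> str:
    if isinstance(doc, Text):
        return doc.s
    if isinstance(doc, Line):
        return "\n" + " " * indent if broken else " "
    if isinstance(doc, Concat):
        return "".join(pretty(p, width, indent, broken) for p in doc.parts)
    if isinstance(doc, Nest):
        return pretty(doc.doc, width, indent + doc.amount, broken)
    # Group: measure the flat layout and only break if it doesn't fit.
    # (A real implementation also tracks the current column, not just indent.)
    return pretty(doc.doc, width, indent, broken=len(flat(doc)) > width - indent)


# Example: a call whose arguments break across lines only when the width is small.
args = Concat([Text("a,"), Line(), Text("b,"), Line(), Text("c")])
call = Group(Concat([Text("f("), Nest(4, Concat([Line(), args])), Line(), Text(")")]))

print(pretty(call, width=80))  # f( a, b, c )
print(pretty(call, width=8))   # f(\n    a,\n    b,\n    c\n)
```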
This PR adds a fork of the `rome_formatter` crate from [Rome](https://github.com/rome/tools), renamed here to `ruff_formatter`, which provides generic definitions for a formatter IR as well as a generic IR printer. (We've also pulled in `rome_rowan`, `rome_text_size`, and `rome_text_edit`, though some of these will be removed in future PRs.)
Why fork? `rome_formatter` contains code that's specific to Rome's AST representation (e.g., it relies on a fork of rust-analyzer's `rowan`), and we'll likely want to support different abstractions and formatting capabilities (there are already a few changes coming in future PRs). Once we've dropped `ruff_rowan` and trimmed down `ruff_formatter` to the code we currently need, it's also not a huge surface area to maintain and update.
In 28c9263722 I introduced automatic
linkification of option references in rule documentation,
which automatically converted the following:
    ## Options

    * `namespace-packages`

to:

    ## Options

    * [`namespace-packages`]

    [`namespace-packages`]: ../../settings#namespace-packages
While the above is a correct CommonMark[1] link definition,
what I was missing was that we use mkdocs for our documentation
generation, which, as it turns out, uses a non-CommonMark-compliant
Markdown parser, namely Python-Markdown, which, unlike CommonMark,
doesn't support link definitions containing code tags.
This commit fixes the broken links via a regex hack.
[1]: https://commonmark.org/