The "wsl" crate was last touched in 2019, whereas the "is-wsl" crate was
last updated in 2023. Additionally, it is unclear whether the "wsl"
crate supports both WSL1 and WSL2 (which was announced in 2019), whereas
the "is-wsl" crate explicitly supports both WSL1 and WSL2.
The required code changes are minimal, since both crates provide only an
`is_wsl() -> bool` function.
paramiko's `set_missing_host_key_policy` has a mandatory positional argument.
The [current
documentation](https://docs.astral.sh/ruff/rules/ssh-no-host-key-verification/#example)
leads to code that does not run:
```
>>> from paramiko import client
>>> ssh_client = client.SSHClient()
>>> ssh_client.set_missing_host_key_policy()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: SSHClient.set_missing_host_key_policy() missing 1 required positional argument: 'policy'
```
This PR modifies the documentation to set the policy to the [default
`RejectPolicy`](https://docs.paramiko.org/en/latest/api/client.html#paramiko.client.SSHClient.set_missing_host_key_policy).
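For reference, a corrected snippet along these lines (a sketch, not necessarily the exact wording used in the docs) is:

```python
from paramiko import client

ssh_client = client.SSHClient()
# Passing the documented default `RejectPolicy` keeps the example runnable
# and does not silently accept unknown host keys.
ssh_client.set_missing_host_key_policy(client.RejectPolicy())
```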
Signed-off-by: Mikael Arguedas <mikael.arguedas@gmail.com>
## Summary
Remove the period from a couple of short messages, for consistency with
all other short messages.
Other short rule messages lack a trailing period; only long messages
made up of multiple sentences end with one.
## Test Plan
Tests modified accordingly.
Not sure if this qualifies as a breaking change, since user-visible
messages are modified.
## Summary
This PR implements Y058 from flake8-pyi -- this is a new flake8-pyi rule
that was released as part of `flake8-pyi 23.11.0`. I've followed the
Python implementation as closely as possible (see
858c0918a8),
except that for the Ruff version, the rule applies to `.py` files as
well as `.pyi` files. (For `.py` files, we only emit the
diagnostic in very specific situations, however, as there's a much
higher likelihood of emitting false positives when applying this rule to
a `.py` file.)
## Test Plan
`cargo test`/`cargo insta review`
## Summary
Historically, we encoded this list by extracting the `__all__`. I went
to update it, but... is there really any value in it? Seems easier to
just treat `typing_extensions` as an alias for `typing`.
Closes https://github.com/astral-sh/ruff/issues/9334.
## Summary
We stopped releasing this a while ago and no longer advertise it
anywhere. IMO, we should remove it so that we stop paying the cost of
maintaining it. If we want to revive it, we can always do so from Git.
## Summary
This is a non-behavior-changing refactor to follow-up
https://github.com/astral-sh/ruff/pull/9321 by modifying
`DisplayParseError` to use owned data and make it usable as a
standalone error type (rather than using references and implementing
`Display`). I don't feel very strongly either way. I thought it was
awkward that the `FormatCommandError` had two branches in the display
path, and wanted to represent the `Parse` vs. other cases as a separate
enum, so here we are.
## Summary
The logic that detects continuations assumed that tokens themselves
cannot span multiple lines. However, strings _can_ -- even single-quoted
strings.
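For example, a single-quoted string still spans two physical lines when it contains a backslash line continuation:

```python
# The string token below crosses a line boundary, so the second physical
# line is a continuation of the first logical line.
message = "this single-quoted string continues \
onto a second physical line"
print(message)
```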
Closes https://github.com/astral-sh/ruff/issues/9323.
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR instead passes the path directly in those cases.
## Summary
This PR modifies the semantics of `runtime-evaluated-decorators` to
respect decorators on both classes _and_ functions. Historically, this
only respected classes, since the common use-case is (e.g.)
`pydantic.BaseModel` -- but functions are equally valid.
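For example, with a hypothetical setting of `runtime-evaluated-decorators = ["app.register"]`, annotations on a decorated function are now treated as runtime-required too:

```python
from app import register     # hypothetical decorator listed in the setting
from models import Request   # hypothetical type used only in annotations


@register
def handler(request: Request) -> None:
    # Because `handler` is decorated with a runtime-evaluated decorator, the
    # `Request` import is no longer suggested for an `if TYPE_CHECKING:` block.
    ...
```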
Closes https://github.com/astral-sh/ruff/issues/9312.
## Test Plan
`cargo test`
## Summary
This helps a bit with (but does not close) the issues described in
https://github.com/astral-sh/ruff/issues/9311. E.g., now, we at least
see: `error: Failed to format main.py: source contains syntax errors:
invalid syntax. Got unexpected token '=' at byte offset 20`.
## Summary
Given:
```python
from somewhere import get_cfg
def lookup_cfg(cfg_description):
    cfg = get_cfg(cfg_description)
    if cfg is not None:
        return cfg
    raise AttributeError(f"No cfg found matching {cfg_description}")
```
We were analyzing the function from the last statement to the first. So
we saw the `raise`, then assumed the function _always_ raised. In reality, though, it
_might_ return. This PR improves the branch analysis to respect these
mixed cases.
Closes https://github.com/astral-sh/ruff/issues/9269.
Closes https://github.com/astral-sh/ruff/issues/9304.
## Summary
Remove the `:` from PLR0917 to make all PLR09XX messages look the same:
```
PLR0904 Too many public methods (21 > 20)
PLR0911 Too many return statements (16 > 6)
PLR0912 Too many branches (13 > 12)
PLR0913 Too many arguments in function definition (10 > 5)
PLR0915 Too many statements (118 > 50)
PLR0917 Too many positional arguments: (15/5)
```
## Test Plan
---------
Signed-off-by: Mikael Arguedas <mikael.arguedas@gmail.com>
## Summary
Part of #970.
This adds Pylint's [R2044
empty_comment](https://pylint.pycqa.org/en/latest/user_guide/messages/refactor/empty-comment.html)
lint as well as an always-safe fix.
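A minimal sketch of the behavior (illustrative, not copied verbatim from the fixture):

```python
# A non-empty comment like this one is left alone.
#
x = 1  #
```

The fix deletes the bare `#` on its own line and strips the trailing `#` after `x = 1`, while the non-empty comment is untouched.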
## Test Plan
The included snapshot verifies the following:
- A line containing only a non-empty comment is not changed
- A line containing leading whitespace before a non-empty comment is not
changed
- A line containing only an empty comment has its content deleted
- A line containing only leading whitespace before an empty comment has
its content deleted
- A line containing only leading and trailing whitespace on an empty
comment has its content deleted
- A line containing trailing whitespace after a non-empty comment is not
changed
- A line containing only a single newline character (i.e. a blank line)
is not changed
- A line containing code followed by a non-empty comment is not changed
- A line containing code followed by an empty comment has its content
deleted after the last non-whitespace character
- Lines containing code and no comments are not changed
- Empty comment lines within block comments are ignored
- Empty comments within triple-quoted sections are ignored
## Comparison to Pylint
Running Ruff and Pylint 3.0.3 with Python 3.12.0 against the
`empty_comment.py` file added in this PR, we see the following:
* Identical behavior:
  * empty_comment.py:3:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:4:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:5:0: R2044: Line with empty comment (empty-comment)
  * empty_comment.py:18:0: R2044: Line with empty comment (empty-comment)
* Differing behavior:
  * Pylint doesn't ignore empty comments in block comments commonly used
    for visual spacing; I decided these were fine in this implementation
    since many projects use these and likely do not want them removed.
    * empty_comment.py:28:0: R2044: Line with empty comment (empty-comment)
  * Pylint detects "empty comments" within the triple-quoted section at
    the bottom of the file, which is arguably a bug in the Pylint
    implementation since these are not truly comments. These are ignored
    by this implementation.
    * empty_comment.py:37:0: R2044: Line with empty comment (empty-comment)
    * empty_comment.py:38:0: R2044: Line with empty comment (empty-comment)
    * empty_comment.py:39:0: R2044: Line with empty comment (empty-comment)
## Summary
If a rule ends with a trailing placeholder (like "Use {target}"), that
gets interpreted as an HTML attribute, adding `target="target"` to the
node. This PR escapes such cases. In reality, they're rare, since we
almost always wrap placeholders in backticks, which avoids this problem
-- but in some cases, they are in fact correct to be un-backticked.
Closes https://github.com/astral-sh/ruff/issues/9288.
## Test Plan
<img width="673" alt="Screen Shot 2023-12-28 at 9 33 40 AM"
src="https://github.com/astral-sh/ruff/assets/1309177/14aaa168-c802-4258-b82d-217a85b42ebf">
## Summary
Hey there 👋 thanks for this great project!
On Python code looking like the following
```python
import yaml
from yaml.loader import SafeLoader

with MY_FILE_PATH.open("r") as my_file:
    my_data = yaml.load(my_file, Loader=SafeLoader)
```
ruff reports this error:
```
S506 Probable use of unsafe loader `SafeLoader` with `yaml.load`. Allows instantiation of arbitrary objects. Consider `yaml.safe_load`.
```
This PR is an attempt to support `SafeLoader` being imported from either
`yaml` or `yaml.loader`.
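After this change, both import forms should be treated as safe:

```python
from yaml import SafeLoader          # treated as a safe loader
from yaml.loader import SafeLoader   # also treated as a safe loader with this change
```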
Disclaimer:
I am not familiar with Rust, so this is likely not the best way of doing
it. I'm interested in hearing how to adapt this PR to provide the same
behavior in a better way.
## Test Plan
The S506.py file was updated accordingly to cover the use cases, and the
tests were confirmed to pass with this change.
## Summary
If `RUF100` is ignored via `per-file-ignores`, we need to avoid raising
it. `RUF100` has special "self-ignore" logic, since the rule itself
deals with `# noqa` directives. This PR wires up `per-file-ignores` to
that "self-ignore" logic.
Closes https://github.com/astral-sh/ruff/issues/9297.
We should avoid adding `-> None` to stubs in `.pyi` files, along with a
few other cases. (We already ignore abstract methods.)
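For instance, in a hypothetical `connection.pyi` stub:

```python
class Connection:
    def close(self): ...  # the fix should not insert `-> None` here
```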
Closes https://github.com/astral-sh/ruff/issues/9270.
## Summary
This PR adds some helper structs to the linter paths to enable passing
in the pre-computed tokens and parsed source code during benchmarking,
to remove lexing and parsing from the overall linter benchmark
measurement. We already remove parsing for the formatter, and we have
separate benchmarks for the lexer and the parser, so this should make it
much easier to measure linter performance changes.
## Summary
Adds a rule to detect unions that include `typing.NoReturn` or
`typing.Never`. In such cases, the use of the bottom type is redundant.
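An illustrative example (not taken from the fixtures):

```python
from typing import NoReturn, Union


def parse(value: str) -> Union[int, NoReturn]:  # flagged: equivalent to `-> int`
    if not value:
        raise ValueError("empty input")
    return int(value)
```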
Closes https://github.com/astral-sh/ruff/issues/9113.
## Test Plan
`cargo test`
## Summary
Fixes #6956
Details are in the issue.
Following the advice in
https://github.com/astral-sh/ruff/issues/6956#issuecomment-1817672585,
this change separates expressions into three levels of "constant likelihood":
* literals, empty dict and tuples... (definitely constant, level 2)
* CONSTANT_CASE vars (probably constant, 1)
* all other expressions (0)
A comparison is marked as a Yoda condition if the level is strictly
higher on its left-hand side.
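Illustrative (hypothetical) snippets under this scheme:

```python
MAX_DELAY = 3600  # CONSTANT_CASE name: level 1
elapsed = 10.0    # ordinary variable: level 0

if 60 * 60 > elapsed:    # level 2 vs. level 0: flagged as a Yoda condition
    ...
if MAX_DELAY > elapsed:  # level 1 vs. level 0: flagged
    ...
if elapsed > MAX_DELAY:  # level 0 vs. level 1: not flagged
    ...
```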
Following
https://github.com/astral-sh/ruff/issues/6956#issuecomment-1697107822,
compound expressions of literals (e.g. `60 * 60`) are marked as
constants. This changes the current behaviour on
`SomeClass().settings.SOME_CONSTANT_VALUE > (60 * 60)` in the fixture
from an error to OK.
## Summary
Given a function like:
```python
def func(x: int):
    if not x:
        raise ValueError
    else:
        raise TypeError
```
We now correctly use `NoReturn` as the return type, rather than `None`.
Closes https://github.com/astral-sh/ruff/issues/9201.
## Summary
Part of #8771. flake8-pyi will emit a Y018 error for unused TypeVars,
ParamSpecs or TypeVarTuples; Ruff currently only emits PYI018 for unused
TypeVars.
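For example, in a hypothetical stub, all three of the following should now be flagged when unused:

```python
# _private.pyi
from typing import ParamSpec, TypeVar, TypeVarTuple

_T = TypeVar("_T")         # already flagged by PYI018
_P = ParamSpec("_P")       # now also flagged
_Ts = TypeVarTuple("_Ts")  # now also flagged
```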
This is my first "proper" Ruff PR -- let me know if there's a better way
of doing this! Not sure if the repeated calls to `match_typing_expr()`
are ideal.
## Test Plan
I manually updated the fixtures to add some unused ParamSpecs and
TypeVarTuples, and then updated the snapshots using `cargo insta
review`. All tests then passed when run using `cargo test`.
Bumps [wasm-bindgen-test](https://github.com/rustwasm/wasm-bindgen) from
0.3.38 to 0.3.39.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/rustwasm/wasm-bindgen/commits">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Summary
New messages for "format" mode.
Fixes #9132
## Test Plan
I ran the tests specified in `CONTRIBUTING.md`
```bash
cargo run -p ruff_cli -- check /path/to/some_files.py --no-cache
cargo run -p ruff_cli -- format --check /path/to/some_files.py --no-cache
cargo clippy --workspace --all-targets --all-features -- -D warnings
RUFF_UPDATE_SCHEMA=1 cargo test
pre-commit run --all-files --show-diff-on-failure
```
**Note:** If no files are detected (whether correctly formatted,
changed, or unchanged), no message is displayed. Wouldn't it be better
to show some message in this case?
## Summary
Given `Callable[[Callable[_P, _R]], Callable[_P, _R]]` from the
originating issue, when quoting `Callable`, we quoted the inner
`[Callable[_P, _R]]`, and then created a separate edit for the outer
`Callable`. Since there's an extra level of nesting in the subscript,
the edit for `[Callable[_P, _R]]` correctly did _not_ expand to the
entire expression. However, in this case, we should discard the inner
edit, since the expression is getting quoted by the outer edit anyway.
Closes https://github.com/astral-sh/ruff/issues/9162.
## Summary
A common mistake is to add quotes around one member in an `X | Y`-style
type union, as in:
```python
contract_versions_list: list[ContractVersion] | 'QuerySet[ContractVersion]' | None = None
```
However, doing so will lead to a runtime error if the annotation is
runtime-evaluated. This PR lints against such patterns.
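A minimal reproduction of the runtime failure, using `list[int]` as a stand-in for the annotation above:

```python
try:
    list[int] | "QuerySet[int]"  # evaluates `type | str`
except TypeError as err:
    print(f"runtime-evaluated union failed: {err}")
```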
Closes https://github.com/astral-sh/ruff/issues/9139.
## Summary
Fix dropped union expressions for piped non-types in the `PYI055` autofix.
If you had `type[int] | type[str] | str`, it would have dropped the
`str`, which breaks the type!
Closes #9156
## Test Plan
`cargo test`
Fixes #9080
Example, where `[]` is a 2-byte non-breaking space:
```
def f():
""" Docstring header
^^^^ Real indentation is 4 chars
docstring body, over-indented
^^^^^^ Over-indentation is 6 - 4 = 2 chars due to this line
[] [] docstring body 2, further indented
^^^^^ We take these 4 chars/5 bytes to match the docstring ...
^^^ ... and these 2 chars/3 bytes to remove the `over_indented_size` ...
^^ ... but preserve this real indent
```
The example below used to panic because we tried to split at 2 bytes
within the 4-byte character `转`.
```python
def sample_func(xx):
"""
转置 (transpose)
"""
return xx.T
```
Fixes #9145
Fixes https://github.com/astral-sh/ruff-vscode/issues/362
The second commit is a small test refactoring.
This sets `lto = "thin"` instead of using "fat" LTO, and sets
`codegen-units = 16`. These are the defaults for Cargo's `release`
profile, and I think it may give us faster iteration times, especially
when benchmarking. The point of this PR is to see what kind of impact
this has on benchmarks. It is expected that benchmarks may regress to
some extent.
I did some quick ad hoc experiments to quantify this change in compile
times. Namely, I ran:

    cargo build --profile release -p ruff_cli

Then I ran

    touch crates/ruff_python_formatter/src/expression/string/docstring.rs

(because that's where I've been working lately) and re-ran

    cargo build --profile release -p ruff_cli

This last command is what I timed, since it reflects how much time one
has to wait between making a change and getting a compiled artifact.
Here are my results:
* With status quo `release` profile, build takes 77s
* with `release` but `lto = "thin"`, build takes 41s
* with `release`, but `lto = false`, build takes 19s
* with `release`, but `lto = false` **and** `codegen-units = 16`, build
takes 7s
* with `release`, but `lto = "thin"` **and** `codegen-units = 16`, build
takes 16s (I believe this is the default `release` configuration)
This PR represents the last option. It's not the fastest to compile, but
it's nearly a whole minute faster! The idea is that with `codegen-units
= 16`, we still make use of parallelism, but keep _some_ level of LTO on
to try and re-gain what we lose by increasing the number of codegen
units.
## Summary
Adds new test cases for `with_item` and `match` sequences that demonstrate how long headers break.
Removes one use of `optional_parentheses` in a position where it is known that the parentheses always need to be added.
## Test Plan
`cargo test`
## Summary
Adds some more documentation to `can_omit_optional_parentheses` because it is really hard to understand.
Restricts the `Attribute` and `None` `OperatorPrecedence` branches to ensure they only get applied to the intended nodes.
## Test Plan
Ecosystem check reports no differences. The compatibility index remains unchanged.
Given:
```python
x: DataFrame[
    int
] = 1
```
We currently wrap the annotation in single quotes, which leads to a
syntax error:
```python
x: "DataFrame[
    int
]" = 1
```
There are a few options for what to suggest for users here... Use triple
quotes:
```python
x: """DataFrame[
    int
]""" = 1
```
Or, use an implicit string concatenation (which may require
parentheses):
```python
x: ("DataFrame["
"int"
"]") = 1
```
The solution I settled on here is to use the `Generator`, which
effectively means we write it out on a single line, like:
```python
x: "DataFrame[int]" = 1
```
It's kind of the "least opinionated" solution, but it does mean we'll
expand to a very long line in some cases.
Closes https://github.com/astral-sh/ruff/issues/9136.
This PR splits the string formatting code in the formatter to be handled
by the respective nodes.
Previously, the string formatting was done through a single
`FormatString` interface. Now, the nodes themselves are responsible for
formatting.
The following changes were made:
1. Remove `StringLayout::ImplicitStringConcatenationInBinaryLike` and
inline the call to `FormatStringContinuation`. After the refactor, the
binary like formatting would delegate to `FormatString` which would then
delegate to `FormatStringContinuation`. This removes the intermediary
steps.
2. Add formatter implementation for `FStringPart` which delegates it to
the respective string literal or f-string node.
3. Add `ExprStringLiteralKind` which is either `String` or `Docstring`.
If it's a docstring variant, then the string expression would not be
implicitly concatenated. This is guaranteed by the
`DocstringStmt::try_from_expression` constructor.
4. Add `StringLiteralKind` which is either a `String`, `Docstring` or
`InImplicitlyConcatenatedFString`. The last variant is for when the
string literal is implicitly concatenated with an f-string (`"foo" f"bar
{x}"`).
5. Remove `FormatString`.
6. Extract the f-string quote detection as a standalone function which
is public to the crate. This is used to detect the quote to be used for
an f-string at the expression level (`ExprFString` or
`FormatStringContinuation`).
### Formatter ecosystem result
**This PR**
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |
**main**
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |
This fixes a bug where the current indent level was not calculated
correctly for doctests. Namely, it didn't account for the extra indent
level (in terms of ASCII spaces) used by the PS1 (`>>> `) and PS2
(`... `) prompts. As a result, lines could extend up to 4 spaces beyond
the configured line length limit.
We fix that by passing the `CodeExampleKind` to the `format` routine
instead of just the code itself. In this way, `format` can query whether
there will be any extra indent added _after_ formatting the code and
take that into account for its line length setting.
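For illustration (a hypothetical doctest, with `frobnicate` standing in for any call): code after the `>>> ` prompt starts four columns to the right of the docstring's own indent, so those four columns must come out of the line-length budget when reformatting the snippet.

```python
def example():
    """Frobnicate things.

    >>> frobnicate(first_long_argument, second_long_argument, third_extra_long_argument)
    """
```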
We add a few regression tests, taken directly from @stinodego's
examples.
Fixes#9126
This PR does the plumbing to make a new formatting option,
`docstring-code-format`, available in the configuration for end users.
It is disabled by default (opt-in). It is opt-in at least initially to
reflect a conservative posture. The intent is to make it opt-out at some
point in the future.
This was split out from #8811 in order to make #8811 easier to merge.
Namely, once this is merged, docstring code snippet formatting will
become available to end users. (See comments below for how we arrived at
the name.)
Closes #7146
## Test Plan
Other than the standard test suite, I ran the formatter over the CPython
and polars projects to ensure both that the result looked sensible and
that tests still passed. At time of writing, one issue that currently
appears is that reformatting code snippets trips the long line lint:
https://github.com/BurntSushi/polars/actions/runs/7006619426/job/19058868021
This PR adds an `as_slice` method to all the string nodes, which returns
all the parts of the node as a slice. This will be useful in the next
PR, which splits the string formatting and uses this method to extract
the _single node_ or _implicitly concatenated nodes_.
## Summary
Adds support for SARIF v2.1.0 output to the CLI, usable via the
`output-format` parameter.
`ruff . --output-format=sarif`
Includes a few changes I wasn't sure of, namely:
* Adds a few derives for Clone & Copy, which I think could be removed
with a little extra work as well.
## Test Plan
I built and ran this against several large open source projects and
verified that the output SARIF was valid, using [Microsoft's SARIF
validator tool](https://sarifweb.azurewebsites.net/Validation)
I've also attached an output of the sarif generated by this version of
ruff on the main branch of django at commit: b287af5dc9
[django_main_b287af5dc9_sarif.json](https://github.com/astral-sh/ruff/files/13626222/django_main_b287af5dc9_sarif.json)
Note: this needs to be regenerated with the latest changes and
confirmed.
## Open Points
- [ ] Convert to just using all Rules all the time
- [ ] Fix the issue with getting the file URI when compiling for
WebAssembly
## Summary
This allows us to fix usages like:
```python
from pandas import DataFrame
def baz() -> DataFrame:
    ...
```
By quoting the `DataFrame` in `-> DataFrame`. Without quotes, moving
`from pandas import DataFrame` into an `if TYPE_CHECKING:` block will
fail at runtime, since Python tries to evaluate the annotation to add it
to the function's `__annotations__`.
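With quoting, the fix can instead produce something along these lines (a sketch of the intended output):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from pandas import DataFrame


def baz() -> "DataFrame":
    ...
```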
Unfortunately, this does require us to split our "annotation kind" flags
into three categories, rather than two:
- `typing-only`: The annotation is only evaluated at type-checking-time.
- `runtime-evaluated`: Python will evaluate the annotation at runtime
(like above) -- but we're willing to quote it.
- `runtime-required`: Python will evaluate the annotation at runtime
(like above), and some library (like Pydantic) needs it to be available
at runtime, so we _can't_ quote it.
This functionality is gated behind a setting
(`flake8-type-checking.quote-annotations`).
Closes https://github.com/astral-sh/ruff/issues/5559.
## Summary
Adds `find_assigned_value`, a function which gets the `&Expr` assigned
to a given `id`, if one exists in the semantic model.
Open TODOs:
- [ ] Handle `binding.kind.is_unpacked_assignment()`: I am a bit confused
by this one. The snippet from its documentation does not appear to be
counted as an unpacked assignment and the only ones I could find for
which that was true were invalid Python like:
```python
x, y = 1
```
- [ ] How to handle AugAssign. Can we combine statements like:
```python
(a, b) = [(1, 2, 3), (4,)]
a += (6, 7)
```
to get the full value for `a`? The code currently just returns `None`
for these assignment types.
- [ ] Multi-target assigns
```python
m_c = (m_d, m_e) = (0, 0)
trio.sleep(m_c) # OK
trio.sleep(m_d) # TRIO115
trio.sleep(m_e) # TRIO115
```
## Test Plan
Used the function in two rules:
- `TRIO115`
- `PERF101`
Expanded both their fixtures with explicit multi-target checks.