## Summary
When we format the trailing comments on a clause body, we check if there
are any newlines after the last statement; if not, we insert one.
This logic didn't take into account that the last statement could itself
have trailing comments, as in:
```python
if True:
    pass
    # comment
else:
    pass
```
We were thus inserting a newline after the comment, like:
```python
if True:
    pass
    # comment

else:
    pass
```
In the context of function definitions, this led to an instability,
since we insert a newline _after_ a function, which would in turn lead
to the bug above appearing in the second formatting pass.
Closes https://github.com/astral-sh/ruff/issues/7465.
## Test Plan
`cargo test`
Small improvement in `transformers`, but no regressions.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99956 | 2587 | 404 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| **transformers** | **0.99957** | **2587** | **402** |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
## Summary
If a function has no parameters (and no comments within the parameters'
`()`), we're supposed to wrap the return annotation _whenever_ it
breaks. However, our `empty_parameters` test didn't properly account for
the case in which the parameters include a newline (but no other
content), like:
```python
def get_dashboards_hierarchy(
) -> Dict[Type['BaseDashboard'], List[Type['BaseDashboard']]]:
    """Get hierarchy of dashboards classes.

    Returns:
        Dict of dashboards classes.
    """
    dashboards_hierarchy = {}
```
This PR fixes that detection. Instead of lexing, it now checks whether the
parameters themselves are empty (or whether they contain comments).
Closes https://github.com/astral-sh/ruff/issues/7457.
## Stack Summary
This stack splits `Settings` into `FormatterSettings` and `LinterSettings` and moves it into `ruff_workspace`. This change is necessary to add the `FormatterSettings` to `Settings` without adding `ruff_python_formatter` as a dependency to `ruff_linter` (and the linter should not contain the formatter settings).
A quick overview of our settings structs at play:
* `Options`: 1:1 representation of the options in the `pyproject.toml` or `ruff.toml`. Used for deserialization.
* `Configuration`: Resolved `Options`, potentially merged from multiple configurations (when using `extend`). The representation is very close if not identical to the `Options`.
* `Settings`: The resolved configuration that uses a data format optimized for reading. Optional fields are initialized with their default values. Initialized by `Configuration::into_settings`.
The goal of this stack is to split `Settings` into tool-specific resolved `Settings` structs that are independent of each other. This has the advantage that the individual crates don't need to know anything about the other tools. The downside is that information gets duplicated between the `Settings` structs. Right now the duplication is minimal (`line-length`, `tab-width`), but we may need to come up with a solution if more expensive data needs to be shared.
This stack focuses on `Settings`. Splitting `Configuration` into some smaller structs is something I'll follow up on later.
## PR Summary
This PR moves the `ResolverSettings` and `Settings` struct to `ruff_workspace`. `LinterSettings` remains in `ruff_linter` because it gets passed to lint rules, the `Checker` etc.
## Test Plan
`cargo test`
## PR Summary
This PR extracts the linter-specific settings into a new `LinterSettings` struct and adds it as a `linter` field to the `Settings` struct. This is in preparation for moving `Settings` from `ruff_linter` to `ruff_workspace`
## Test Plan
`cargo test`
## PR Summary
This PR extracts a `ResolverSettings` struct that holds all the resolver-relevant fields (uninteresting for the `Formatter` or `Linter`). This will allow us to move the `ResolverSettings` out of `ruff_linter` further up in the stack.
## Test Plan
`cargo test`
(I'll do more extensive testing at the top of this stack)
## Summary
This PR fixes the way the NoQA range is inserted into the `NoqaMapping`.
Previously, the way the mapping insertion logic worked was as follows:
1. If the range which is to be inserted _touched_ the previous range, meaning
that the end of the previous range was the same as the start of the new
range, then the new range was added in addition to the previous range.
2. Else, if the new range intersected the previous range, then the previous
range was replaced with the new _intersection_ of the two ranges.
The problem with this logic is that it does not work for the following case:
```python
assert foo, \
"""multi-line
string"""
```
Now, a comment cannot be added to a line that ends with a continuation
character, so the `NoQA` directive has to be added to the next line. But the
next line starts a triple-quoted string, so the directive for that line in
turn needs to move to the line on which the string ends. This creates a
**union** pattern instead of an **intersection** pattern.
Union alone doesn't suffice, though, because (1) means that in the edge case
where the ranges touch only at the end, the union won't take place.
### Solution
1. Replace `<=` with `<` to have a _strict_ insertion case
2. Use union instead of intersection (sketched below)
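Below is a minimal sketch of the revised insertion logic, written in Python
for brevity; the real `NoqaMapping` is a Rust struct, and the function and
types here are purely illustrative, assuming ranges arrive in source order:
```python
def push_mapping(ranges: list[range], new: range) -> None:
    """Insert `new`, merging it with the previous range when they overlap or touch."""
    if ranges and new.start <= ranges[-1].stop:
        # Overlapping or merely touching: replace the previous range with the
        # union, so a continuation line followed by a multi-line string maps
        # the NoQA directive to a single final line.
        prev = ranges[-1]
        ranges[-1] = range(min(prev.start, new.start), max(prev.stop, new.stop))
    else:
        # Strictly past the previous range (prev.stop < new.start): append it.
        ranges.append(new)
```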
## Test Plan
Add a new test case. Run the test suite to ensure that nothing is broken.
### Integration
1. Make a `test.py` file with the following contents:
```python
assert foo, \
"""multi-line
string"""
```
2. Run the following command:
```console
$ cargo run --bin ruff -- check --isolated --no-cache --select=F821 test.py
/Users/dhruv/playground/ruff/test.py:1:8: F821 Undefined name `foo`
Found 1 error.
```
3. Use `--add-noqa`:
```console
$ cargo run --bin ruff -- check --isolated --no-cache --select=F821 --add-noqa test.py
Added 1 noqa directive.
```
4. Check that the NoQA directive was added in the correct position:
```python
assert foo, \
"""multi-line
string""" # noqa: F821
```
5. Run the `check` command to ensure that the NoQA directive is respected:
```console
$ cargo run --bin ruff -- check --isolated --no-cache --select=F821 test.py
```
fixes: #7530
## Summary
Implement
[`no-ignored-enumerate-items`](https://github.com/dosisod/refurb/blob/master/refurb/checks/builtin/no_ignored_enumerate.py)
as `unnecessary-enumerate` (`FURB148`).
The auto-fix considers if a `start` argument is passed to the
`enumerate()` function. If only the index is used, then the suggested
fix is to pass the `start` value to the `range()` function. So,
```python
for i, _ in enumerate(xs, 1):
...
```
becomes
```python
for i in range(1, len(xs)):
...
```
If the index is ignored and only the value is used, and if a start
value greater than zero is passed to `enumerate()`, the rule doesn't
produce a suggestion. I couldn't find a unanimously accepted best way to
iterate over a collection whilst skipping the first n elements. The rule
still triggers, however.
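For illustration, this is the shape of the case described above (a
hypothetical snippet, not taken from the fixtures):
```python
xs = ["a", "b", "c"]

# Only the value is bound and a nonzero start is passed: FURB148 still flags
# the `enumerate()` call, but no automatic fix is offered.
for _, x in enumerate(xs, 1):
    print(x)
```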
Related to #1348.
## Test Plan
`cargo test`
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
**Summary** Instead of emitting a bogus token per character, we now emit
only a single trailing bogus token. This leads to much more concise output.
**Test Plan** Updated fixtures
Closes #5497
Needs MkDocs 1.5 to be released.
- [x] https://github.com/mkdocs/mkdocs/milestone/15
## Summary
Uses MkDocs' `not_in_nav` config to hide spam about files in
`docs/rules/` not being in nav.
Closes #7479
Handling of `@override` was already implemented.
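A minimal sketch of how I read the behavior (the specific naming rule and
the names below are assumptions on my part, based on the issue's
`BadName()` example):
```python
# Requires Python 3.12; on older versions, `override` comes from `typing_extensions`.
from typing import override


class Parent:
    def BadName(self):  # the one place the non-conforming name is reported
        ...


class Child(Parent):
    @override
    def BadName(self):  # marked as an override, so the inherited name is not re-flagged
        ...
```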
## Test Plan
Tested the code in the issue. After removing all the noqa's, only one
occurrence of `BadName()` raised a violation.
Added a fixture
## Summary
This PR implements a new rule for the `flake8-logging` plugin that checks
for uses of `logging.exception()` with `exc_info` set to `False` or another
falsy value. It suggests using `logging.error` in these cases instead.
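For example (a hypothetical snippet showing the pattern the rule targets and
the suggested rewrite):
```python
import logging

try:
    1 / 0
except ZeroDivisionError:
    # Flagged: with a falsy `exc_info`, `logging.exception` records no
    # traceback, so it behaves like (and reads clearer as) `logging.error`.
    logging.exception("division failed", exc_info=False)

    # Suggested replacement:
    logging.error("division failed")
```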
I am unsure about the name. Open to suggestions there, went with the
most explicit name I could think of in the meantime.
Refer https://github.com/astral-sh/ruff/issues/7248
## Test Plan
Added new fixture cases and ran `cargo test`
## Summary
This PR updates the `W191` (`tab-indentation`) rule from a line-based to
a token-based rule.
Earlier, the rule used the `triple_quoted_string_ranges` from the indexer to
skip over any lines _inside_ a triple-quoted string. This was the only use of
those ranges, and the ranges were themselves derived from the tokens, so we
can instead use the newline tokens directly to perform the check.
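A small, hypothetical example of why those ranges were needed at all: the tab
below is string content, not indentation, and must not trigger `W191`:
```python
def f():
    # The line below starts with a literal tab, but it lies inside the
    # triple-quoted string, so it is data rather than indentation.
    return """
	tab-prefixed line inside the string
    """
```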
This also means that we could remove `triple_quoted_string_ranges` from the
indexer, but I'll hold off on that until we have a better idea on #7326. I
don't think removing it would be a problem, though.
This will also fix #7379 once the PEP 701 changes are merged.
## Test Plan
`cargo test`
## Summary
This PR implements a new rule for the `flake8-logging` plugin that checks for
`logging.getLogger` calls with either `__file__` or `__cached__` as the first
argument and generates a suggested fix to use `__name__` instead.
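For illustration, a hypothetical before/after:
```python
import logging

# Flagged: `__file__` (or `__cached__`) produces a filesystem-path logger name
# that varies between machines and installs.
logger = logging.getLogger(__file__)

# Suggested fix: use the module's dotted name instead.
logger = logging.getLogger(__name__)
```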
Refer: #7248
## Test Plan
Add test cases and `cargo test`
## Summary
The tokenizer was split into a forward and a backwards tokenizer. The
backwards tokenizer uses the same names as the forwards ones (e.g.
`next_token`). The backwards tokenizer gets the comment ranges that we
already built to skip comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
We're planning to move the documentation from
[https://beta.ruff.rs/docs](https://beta.ruff.rs/docs) to
[https://docs.astral.sh/ruff](https://docs.astral.sh/ruff), for a few
reasons:
1. We want to remove the `beta` from the domain, as Ruff is no longer
considered beta software.
2. We want to migrate to a structure that could accommodate multiple
future tools living under one domain.
The docs are actually already live at
[https://docs.astral.sh/ruff](https://docs.astral.sh/ruff), but later
today, I'll add a permanent redirect from the previous to the new
domain. **All existing links will continue to work, now and in
perpetuity.**
This PR contains the code changes necessary for the updated
documentation. As part of this effort, I moved the playground and
documentation from my personal Cloudflare account to our team Cloudflare
account (hence the new `--project-name` references). After merging, I'll
also update the secrets on this repo.
## Summary
Given a trailing operator comment in a unary expression, like:
```python
if (
not # comment
a):
...
```
We were attaching these to the operand (`a`), but formatting them in the
unary operator via special handling. Parents shouldn't format the
comments of their children, so this instead attaches them as dangling
comments on the unary expression. (No intended change in formatting.)
## Summary
Adds the maximum of 320 for the line-length setting to the JSON schema
for better integration with IDEs.
Related https://github.com/astral-sh/ruff/pull/6873
## Summary
The previous reference was “CWE-78: Improper Neutralization of Special
Elements used in an OS Command ('OS Command Injection')”, which
describes another issue. The new reference is “CWE-426: Untrusted Search
Path”, which describes exactly the problem that this rule should warn
about.
## Test Plan
The change was not tested, as it only changes two numbers in the
documentation.
## Summary
Adds `LOG009` from
[flake8-logging](https://github.com/adamchainz/flake8-logging). Also
adds the boilerplate for a new plugin.
Checks for usages of the undocumented `logging.WARN` constant and suggests
replacing it with `logging.WARNING`.
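A hypothetical before/after of what the rule flags:
```python
import logging

logger = logging.getLogger(__name__)

logger.setLevel(logging.WARN)     # flagged: `WARN` is an undocumented alias
logger.setLevel(logging.WARNING)  # suggested replacement
```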
## Test Plan
`cargo test` with fresh fixture
## Issue links
Refers: https://github.com/astral-sh/ruff/issues/7248
## Summary
Extends UP040 to support moving type variables with bounds, constraints, or
variance that are used in type aliases to the PEP 695 syntax.
Part of #4617.
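For illustration, a hedged before/after sketch of the kind of alias now
covered (the exact output of the fix may differ in details):
```python
from typing import TypeAlias, TypeVar

T = TypeVar("T", bound=int)

# Before: flagged by UP040, since the alias uses a bounded type variable.
IntLikeList: TypeAlias = list[T]

# After (PEP 695 syntax, Python 3.12+), roughly:
# type IntLikeList[T: int] = list[T]
```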
## Test Plan
The existing tests added by #6314 already cover the relevant cases.
Rules like D209 and D205 are only intended to apply to multi-line
docstrings. If a docstring is single-quoted (i.e., not triple-quoted) but
spans multiple physical lines via a backslash continuation, it should be
excluded (it'll be flagged by another rule anyway).
Closes https://github.com/astral-sh/ruff/issues/7058.
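To illustrate the excluded case (hypothetical example):
```python
# The string spans two physical lines via the backslash, yet it is not a
# triple-quoted multi-line docstring, so D205/D209 should leave it alone.
def f():
    "Summary line that continues \
onto a second physical line."
```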
## Summary
At some point, we removed these so that they wouldn't be autocompleted
for users, since we wanted to discourage usage of `ALL`. But given that
they're valid values, I think that was a bad idea -- it leads to an even
more confusing experience whereby JSON Schema validators tell you that
you have an error, when you don't.
Closes https://github.com/astral-sh/ruff/issues/7261.
The rule selector is not useful because `--select PREVIEW` only targets
Ruff developers and `--ignore PREVIEW` has no effect due to its low
specificity. We may restore it later if useful.
`ComparableExpr` includes the `ExprContext` field on an expression, so,
e.g., the two tuples in `(a, b) = (a, b)` won't be considered equal.
Similarly, the tuples in `[(a, b) for (a, b) in c]` _also_ wouldn't be
considered equal. I find this behavior surprising, since
`ComparableExpr` is intended to allow you to compare two ASTs, but
`ExprContext` is really encoding information about the broader context
for the expression.
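To see the distinction concretely, here's a small illustration using
CPython's `ast` module (rather than Ruff's own AST):
```python
import ast

# In `(a, b) = (a, b)`, the two tuples differ only in their expression
# context: the assignment target is a Store, the right-hand side a Load.
module = ast.parse("(a, b) = (a, b)")
assignment = module.body[0]
target, value = assignment.targets[0], assignment.value
print(type(target.ctx).__name__, type(value.ctx).__name__)  # Store Load
```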
Bumps [shlex](https://github.com/comex/rust-shlex) from 1.1.0 to 1.2.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/comex/rust-shlex/blob/master/CHANGELOG.md">shlex's
changelog</a>.</em></p>
<blockquote>
<h1>1.2.0</h1>
<ul>
<li>Adds <code>bytes</code> module to support operating directly on byte
strings.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/comex/rust-shlex/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Given a statement like:
```python
result = (
f(111111111111111111111111111111111111111111111111111111111111111111111111111111111)
+ 1
)()
```
When we go to parenthesize the target of the assignment, we use
`maybe_parenthesize_expression` with `Parenthesize::IfBreaks`. This then
checks if the call on the right-hand side needs to be parenthesized, the
implementation of which looks like:
```rust
impl NeedsParentheses for ExprCall {
fn needs_parentheses(
&self,
_parent: AnyNodeRef,
context: &PyFormatContext,
) -> OptionalParentheses {
if CallChainLayout::from_expression(self.into(), context.source())
== CallChainLayout::Fluent
{
OptionalParentheses::Multiline
} else if context.comments().has_dangling(self) {
OptionalParentheses::Always
} else {
self.func.needs_parentheses(self.into(), context)
}
}
}
```
Checking for `self.func.needs_parentheses(self.into(), context)` is
problematic, since, as in the example above, `self.func` may _already_
be parenthesized -- in which case, we _don't_ want to parenthesize the
entire expression. If we do, we end up with this non-ideal formatting:
```python
result = (
(
f(
111111111111111111111111111111111111111111111111111111111111111111111111111111111
)
+ 1
)()
)
```
This PR modifies the `NeedsParentheses` implementations for call chain
expressions to return `Never` if the inner expression has its own
parentheses, in which case, the formatting implementations for those
expressions will preserve them anyway.
Closes https://github.com/astral-sh/ruff/issues/7370.
## Test Plan
Zulip improves a bit, everything else is unchanged.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| zulip | 0.99956 | 1437 | 23 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| **zulip** | **0.99962** | **1437** | **22** |
## Summary
When fixing `reversed(sorted(x, reverse=False))`, we rewrite as
`sorted(x, reverse=True)`. However, if the `reverse` argument isn't
`True` or `False`, we leave it as-is, which is incorrect.
Now, given `reversed(sorted(x, reverse=y))`, we rewrite as `sorted(x,
reverse=not y)`.
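A quick check of why negating the argument is the right rewrite (hypothetical
example; the elements are distinct, so sort stability doesn't matter):
```python
xs = [3, 1, 2]

for y in (True, False):
    # Dropping the outer `reversed()` while keeping `reverse=y` flips the
    # order; negating the argument preserves the original semantics.
    assert list(reversed(sorted(xs, reverse=y))) == sorted(xs, reverse=not y)
```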
## Summary
Adds warnings for cases where:
- A selector does not include any rules because preview is disabled
- A nursery rule is selected without the preview flag
## Test Plan
Add integration tests
Moves the new rule from nursery to preview for the upcoming release.
Adds new test coverage for selection of a single preview rule and fixes
a bug where preview rules were incorrectly selectable with exact codes.