**Summary** Instead of emitting a bogus token per char, we now emit only a
single final bogus token. This leads to much more concise output.
**Test Plan** Updated fixtures
Closes #5497
Needs MkDocs 1.5 to be released.
- [x] https://github.com/mkdocs/mkdocs/milestone/15
## Summary
Uses MkDocs' `not_in_nav` config to hide spam about files in
`docs/rules/` not being in nav.
Closes #7479
Support for `@override` was already implemented.
## Test Plan
Tested the code in the issue. After removing all the noqa's, only one
occurrence of `BadName()` raised a violation.
Added a fixture
## Summary
This PR implements a new rule for `flake8-logging` plugin that checks
for uses of `logging.exception()` with `exc_info` set to `False` or a
falsy value. It suggests using `logging.error` in these cases instead.
I am unsure about the name. Open to suggestions there, went with the
most explicit name I could think of in the meantime.
Refer https://github.com/astral-sh/ruff/issues/7248
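A minimal sketch of the pattern in question (the `risky` helper is purely illustrative, not from the PR):
```python
import logging


def risky() -> None:
    raise ValueError("boom")


try:
    risky()
except ValueError:
    # Flagged: exc_info is falsy, so no traceback is attached anyway
    logging.exception("risky failed", exc_info=False)
    # Suggested replacement
    logging.error("risky failed")
```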
## Test Plan
Added new fixture cases and ran `cargo test`
## Summary
This PR updates the `W191` (`tab-indentation`) rule from a line-based to
a token-based rule.
Earlier, the rule used the `triple_quoted_string_ranges` from the
indexer to skip over any lines _inside_ a triple-quoted string. This was the only
use of the ranges. These ranges were extracted through the tokens, so instead
we can directly use the newline tokens to perform the check.
This also means that we could remove `triple_quoted_string_ranges` from
the indexer, but I'll hold off on that until we have a better idea on #7326,
though I don't think removing it would be a problem.
This will also fix #7379 once the PEP 701 changes are merged.
## Test Plan
`cargo test`
## Summary
This PR implements a new rule for `flake8-logging` plugin that checks
for
`logging.getLogger` calls with either `__file__` or `__cached__` as the
first
argument and generates a suggested fix to use `__name__` instead.
Refer: #7248
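For illustration, a sketch of the flagged pattern and the suggested fix:
```python
import logging

# Flagged: __file__ (or __cached__) yields an unstable, path-based logger name
logger = logging.getLogger(__file__)

# Suggested fix
logger = logging.getLogger(__name__)
```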
## Test Plan
Add test cases and `cargo test`
## Summary
The tokenizer was split into a forward and a backwards tokenizer. The
backwards tokenizer uses the same method names as the forwards one (e.g.
`next_token`). The backwards tokenizer reuses the comment ranges that we
already built in order to skip comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
We're planning to move the documentation from
[https://beta.ruff.rs/docs](https://beta.ruff.rs/docs) to
[https://docs.astral.sh/ruff](https://docs.astral.sh/ruff), for a few
reasons:
1. We want to remove the `beta` from the domain, as Ruff is no longer
considered beta software.
2. We want to migrate to a structure that could accommodate multiple
future tools living under one domain.
The docs are actually already live at
[https://docs.astral.sh/ruff](https://docs.astral.sh/ruff), but later
today, I'll add a permanent redirect from the previous to the new
domain. **All existing links will continue to work, now and in
perpetuity.**
This PR contains the code changes necessary for the updated
documentation. As part of this effort, I moved the playground and
documentation from my personal Cloudflare account to our team Cloudflare
account (hence the new `--project-name` references). After merging, I'll
also update the secrets on this repo.
## Summary
Given a trailing operator comment in a unary expression, like:
```python
if (
not # comment
a):
...
```
We were attaching these to the operand (`a`), but formatting them in the
unary operator via special handling. Parents shouldn't format the
comments of their children, so this instead attaches them as dangling
comments on the unary expression. (No intended change in formatting.)
## Summary
Adds the maximum of 320 for the line-length setting to the JSON schema
for better integration with IDEs.
Related https://github.com/astral-sh/ruff/pull/6873
## Test Plan
## Summary
The previous reference was “CWE-78: Improper Neutralization of Special
Elements used in an OS Command ('OS Command Injection')”, which
describes another issue. The new reference is “CWE-426: Untrusted Search
Path”, which describes exactly the problem that this rule should warn
about.
## Test Plan
The change was not tested, as it only changes two numbers in the
documentation.
## Summary
Adds `LOG009` from
[flake8-logging](https://github.com/adamchainz/flake8-logging). Also
adds the boilerplate for a new plugin.
The rule checks for usages of the undocumented `logging.WARN` constant and
suggests replacing it with `logging.WARNING`.
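A minimal sketch of what the rule flags and suggests:
```python
import logging

# Flagged: logging.WARN is an undocumented alias
logging.basicConfig(level=logging.WARN)

# Suggested replacement
logging.basicConfig(level=logging.WARNING)
```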
## Test Plan
`cargo test` with fresh fixture
## Issue links
Refers: https://github.com/astral-sh/ruff/issues/7248
## Summary
Extends UP040 to support moving type variables with
bounds/constraints/variance that are used in type aliases to use PEP-695
syntax.
Part of #4617.
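A sketch of the kind of rewrite this covers (illustrative names; the rewritten form requires Python 3.12+ syntax, so it's shown as a comment):
```python
from typing import TypeAlias, TypeVar

T = TypeVar("T", bound=int)

IntList: TypeAlias = list[T]

# After the UP040 fix (PEP 695, Python 3.12+):
# type IntList[T: int] = list[T]
```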
## Test Plan
The existing tests added by #6314 already cover the relevant cases.
Rules like D209 and D205 are only intended to apply to multi-line
docstrings. If a docstring is single-quoted, but extends via a
continuation, it should be excluded (it'll be flagged by another rule
anyway). Closes https://github.com/astral-sh/ruff/issues/7058.
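A sketch of the excluded case, assuming a docstring that spans physical lines only via a backslash continuation:
```python
def example():
    "A single-quoted docstring that spans physical lines \
only because of a backslash continuation."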
## Summary
At some point, we removed these so that they wouldn't be autocompleted
for users, since we wanted to discourage usage of `ALL`. But given that
they're valid values, I think that was a bad idea -- it leads to an even
more confusing experience whereby JSON Schema validators tell you that
you have an error, when you don't.
Closes https://github.com/astral-sh/ruff/issues/7261.
The rule selector is not useful because `--select PREVIEW` only targets
Ruff developers and `--ignore PREVIEW` has no effect due to its low
specificity. We may restore it later if useful.
`ComparableExpr` includes the `ExprContext` field on an expression, so,
e.g., the two tuples in `(a, b) = (a, b)` won't be considered equal.
Similarly, the tuples in `[(a, b) for (a, b) in c]` _also_ wouldn't be
considered equal. I find this behavior surprising, since
`ComparableExpr` is intended to allow you to compare two ASTs, but
`ExprContext` is really encoding information about the broader context
for the expression.
Bumps [shlex](https://github.com/comex/rust-shlex) from 1.1.0 to 1.2.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/comex/rust-shlex/blob/master/CHANGELOG.md">shlex's
changelog</a>.</em></p>
<blockquote>
<h1>1.2.0</h1>
<ul>
<li>Adds <code>bytes</code> module to support operating directly on byte
strings.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/comex/rust-shlex/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Given a statement like:
```python
result = (
f(111111111111111111111111111111111111111111111111111111111111111111111111111111111)
+ 1
)()
```
When we go to parenthesize the target of the assignment, we use
`maybe_parenthesize_expression` with `Parenthesize::IfBreaks`. This then
checks if the call on the right-hand side needs to be parenthesized, the
implementation of which looks like:
```rust
impl NeedsParentheses for ExprCall {
fn needs_parentheses(
&self,
_parent: AnyNodeRef,
context: &PyFormatContext,
) -> OptionalParentheses {
if CallChainLayout::from_expression(self.into(), context.source())
== CallChainLayout::Fluent
{
OptionalParentheses::Multiline
} else if context.comments().has_dangling(self) {
OptionalParentheses::Always
} else {
self.func.needs_parentheses(self.into(), context)
}
}
}
```
Checking for `self.func.needs_parentheses(self.into(), context)` is
problematic, since, as in the example above, `self.func` may _already_
be parenthesized -- in which case, we _don't_ want to parenthesize the
entire expression. If we do, we end up with this non-ideal formatting:
```python
result = (
(
f(
111111111111111111111111111111111111111111111111111111111111111111111111111111111
)
+ 1
)()
)
```
This PR modifies the `NeedsParentheses` implementations for call chain
expressions to return `Never` if the inner expression has its own
parentheses, in which case, the formatting implementations for those
expressions will preserve them anyway.
Closes https://github.com/astral-sh/ruff/issues/7370.
## Test Plan
Zulip improves a bit, everything else is unchanged.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| zulip | 0.99956 | 1437 | 23 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| **zulip** | **0.99962** | **1437** | **22** |
## Summary
When fixing `reversed(sorted(x, reverse=False))`, we rewrite as
`sorted(x, reverse=True)`. However, if the `reverse` argument isn't
`True` or `False`, we leave it as-is, which is incorrect.
Now, given `reversed(sorted(x, reverse=y))`, we rewrite as `sorted(x,
reverse=not y)`.
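A small sketch of the before/after behavior of the fix (illustrative values):
```python
x = [3, 1, 2]
y = True

# Original code, which the rule flags:
result = reversed(sorted(x, reverse=y))

# The previous fix left `reverse=y` untouched; the fix now produces:
result = sorted(x, reverse=not y)
```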
## Summary
Adds warnings for cases where:
- A selector does not include any rules because preview is disabled
- A nursery rule is selected without the preview flag
## Test Plan
Add integration tests
Moves the new rule from nursery to preview for the upcoming release.
Adds new test coverage for selection of a single preview rule and fixes
a bug where preview rules were incorrectly selectable with exact codes.
## Summary
This PR bumps the pyproject-toml crate to 0.7.0. The only difference is that it now depends on indexmap 2. I reviewed the indexmap 2 changes and they don't seem relevant to us.
I used this opportunity to remove the default features from `serde_with`, which removes our indexmap 1 dependency (and some other unused dependencies).
## Test Plan
`cargo test`
## Motivation
The `ast::Arguments` for call arguments are split into positional
arguments (`args`) and keyword arguments (`keywords`). We currently assume
that a call consists of args first and then keywords, which is generally
the case, but not always:
```python
f(*args, a=2, *args2, **kwargs)
class A(*args, a=2, *args2, **kwargs):
pass
```
The consequence is accidentally reordering arguments
(https://github.com/astral-sh/ruff/pull/7268).
## Summary
`Arguments::args_and_keywords` returns an iterator of an `ArgOrKeyword`
enum that yields args and keywords in the correct order. I've fixed the
obvious `args` and `keywords` usages, but there might be some cases with
wrong assumptions remaining.
## Test Plan
The generator got new test cases; otherwise, the stacked PR
(https://github.com/astral-sh/ruff/pull/7268) is what uncovered this.
## Summary
This PR updates the `FileCache` to include an optional `NotebookIndex`
to support caching for Jupyter Notebooks.
We only require the index to compute the diagnostics and thus we don't
really need to store the entire `Notebook` on the `Diagnostics` struct.
This means we only need the index to be stored in the cache to
reconstruct the `Diagnostics`.
## Test Plan
Updated an existing test case to run over the fixtures under the
`ruff_notebook` crate, where there are multiple Jupyter Notebooks.
Locally, the following commands were run in order:
1. Remove the cache: `rm -rf .ruff_cache`
2. Run without cache: `cargo run --bin ruff -- check --isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb
--no-cache`
3. Run with cache: `cargo run --bin ruff -- check --isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb`
4. Check whether the `.ruff_cache` directory was created or not
5. Run with cache again and verify: `cargo run --bin ruff -- check
--isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb`
## Benchmarks
https://github.com/astral-sh/ruff/pull/6863#issuecomment-1715675186
fixes: #6671
## Summary
Closes #6958.
If a method has the `override` decorator, there is nothing you can do
about incorrect dunder methods, so they should be ignored.
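An illustrative sketch (hypothetical class names; `typing.override` requires Python 3.12+):
```python
from typing import override  # typing_extensions.override on older versions


class Base:
    def __mthod__(self) -> None:  # misspelled dunder defined upstream
        ...


class Child(Base):
    @override
    def __mthod__(self) -> None:  # no longer flagged: the name is dictated by Base
        ...
```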
## Test Plan
An overridden incorrect dunder method was added to the tests to verify that
Ruff doesn't flag it when evaluating the file. The snapshot changes are all
just line-number changes.
## Summary
This PR updates the lexer test snapshots to include the range value as
well. This is mainly a mechanical refactor.
### Motivation
The main motivation is so that we can verify that the ranges are valid
and do not overlap.
## Test Plan
`cargo test`
## Summary
This PR updates the remaining lexer test cases to use the snapshots.
This is mainly a mechanical refactor.
## Motivation
The main motivation is so that when we add the token range values to the
test case output, it's easier to update the test cases.
The reason they weren't using snapshots before was the usage of the
`test_case` macro. The macro is mainly used for the different EOL test cases. If
we just generate the snapshots directly, then the snapshot name would be
suffixed with `-1`, `-2`, etc., as the test function is still the same. So, we
create the snapshots ourselves with the platform name for the respective EOL
test cases.
## Test Plan
`cargo test`
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
Extends work in #7046 (some relevant discussion there)
Changes:
- All nursery rules are now referred to as preview rules
- Documentation for the nursery is updated to describe preview
- Adds a "PREVIEW" selector for preview rules
- This is primarily to allow `--preview --ignore PREVIEW --extend-select
FOO001,BAR200`
- Using `--preview` enables preview rules that match selectors
Notable decisions:
- Preview rules are not selectable by their rule code without enabling
preview
- Retains the "NURSERY" selector for backwards compatibility
- Nursery rules are selectable by their rule code for backwards
compatibility
Additional work:
- Selection of preview rules without the "--preview" flag should display
a warning
- Use of deprecated nursery selection behavior should display a warning
- Nursery selection should be removed after some time
## Test Plan
<!-- How was it tested? -->
Manual confirmation (i.e., we don't have any preview rules yet, just
nursery rules, so I added a preview rule for manual testing)
New unit tests
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Another statement on the same line as the docstring would previously make
the D204 (newline after docstring) fix fail:
```python
class StatementOnSameLineAsDocstring:
"After this docstring there's another statement on the same line separated by a semicolon." ;priorities=1
def sort_services(self):
pass
```
The fix handles this case manually:
```python
class StatementOnSameLineAsDocstring:
"After this docstring there's another statement on the same line separated by a semicolon."
priorities=1
def sort_services(self):
pass
```
Fixes #7088
## Test Plan
Added a new `D` test case
## Summary
Fixes all but one of the empty-line differences with the Black preview style in
typeshed. The remaining differences involve line breaking with type comments and
trailing commas in function definitions.
I compared the empty line differences with the preview mode of black
since stable has some oddities that would have been hard to replicate
(https://github.com/psf/black/issues/3861). Additionally, it assumes the
style proposed in https://github.com/psf/black/issues/3862.
An edge case that also surfaced with typeshed is newlines before
trailing module comments.
**main**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99978 | 3496 | **2173** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
**PR**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99983 | 3496 | **18** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
Closes #6723
## Test Plan
The main driver was the typeshed diff. I added new test cases for all
kinds of possible empty-line combinations in stub files, as well as test cases
for newlines before trailing module comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR adds the `--preview` and `--no-preview` options to the `format` command (hidden for now) and passes the setting through to the formatter.
## Test Plan
I added the `dbg(f.options().preview())` statement in `FormatNodeRule::fmt` and verified that the option gets correctly passed to the formatter.
Show header for formatter comment decoration info
**Summary** Show a header in the formatter comment decoration debug
output that shows which node is preceding/following/enclosing
(https://github.com/astral-sh/ruff/pull/6813#issuecomment-1708119550). I
kept this intentionally condensed to make it easy to use this in a small
sidebar without vertical scrolling.
```console
$ cargo run --bin ruff_python_formatter -- --emit stdout --print-comments scratch.py
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
17..20, Some((ParameterWithDefault, 6..10)), None, (Parameters, 5..22), "# a"
44..47, Some((StmtExpr, 28..39)), Some((StmtExpr, 52..60)), (StmtFunctionDef, 0..60), "# b"
77..80, None, None, (ExprList, 71..82), "# c"
{
Node {
kind: ParameterWithDefault,
range: 6..10,
source: `x=[]`,
}: {
...
```
**Test Plan** It's debug output.
**Summary** The comment visitor used to rebuild the locator for every
comment. Instead, we now keep the locator on the builder. Follow-up to
#6813.
**Test Plan** No formatting changes.
## Summary
Add a configuration option to extend the list of names that can be
accessed without triggering SLF001.
Fixes issue #7018
## Test Plan
Manually tested by creating a python file (`test.py`):
```python
def foo(obj):
obj._meta
```
and a `ruff.toml` file:
```toml
select = ["SLF"]
[flake8-self]
extend-ignore-names = ["_meta"]
```
Then running `cargo run -p ruff_cli -- check test.py --no-cache` (once
with the `extend-ignore-names` line commented out) to see if the
configuration option works.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR updates the revision of the `LibCST` dependency to 9c263aa897
in order to fix https://github.com/astral-sh/ruff/issues/4899
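For reference, a minimal example of the kind of code F504 flags (the actual fixture additionally embeds a carriage return, which isn't reproduced here):
```python
# F504: the %-format string never uses the "b" key
print("%(a)s" % {"a": 1, "b": 2})
```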
## Test Plan
A test case including the carriage return (`\r`) character was added for
`F504`, followed by `cargo test`.
fixes: #4899
If a user has `import collections, functools, operator`, and we try to
import from `functools` and `operator`, we end up adding two identical
synthetic edits to preserve that import statement. We need to dedupe
them.
Closes https://github.com/astral-sh/ruff/issues/7059.
## Summary
This PR moves `ruff/jupyter` into its own `ruff_notebook` crate. Beyond
the move itself, there were a few challenges:
1. `ruff_notebook` relies on the source map abstraction. I've moved the
source map into `ruff_diagnostics`, since it doesn't have any
dependencies on its own and is used alongside diagnostics.
2. `ruff_notebook` has a couple tests for end-to-end linting and
autofixing. I had to leave these tests in `ruff` itself.
3. We had code in `ruff/jupyter` that relied on Python lexing, in order
to provide a more targeted error message in the event that a user saves
a `.py` file with a `.ipynb` extension. I removed this in order to avoid
a dependency on the parser; it didn't feel worth retaining just for that
dependency.
## Test Plan
`cargo test`
## Summary
I think the fallthrough here for some branches is a little confusing.
Now each branch either runs a command that returns `Result<ExitStatus>`,
or runs a command that returns `Result<()>` and then explicitly returns
`Ok(ExitStatus::SUCCESS)`.
## Summary
This PR refactors the error-handling cases around Jupyter notebooks to
use errors rather than `Box<Diagnostics>`, which creates some oddities
in the downstream handling. So, instead of formatting errors as
diagnostics _eagerly_ (in the notebook methods), we now return errors
and convert those errors to diagnostics at the last possible moment (in
`diagnostics.rs`). This is more ergonomic, as errors can be composed and
reported-on in different ways, whereas diagnostics require a `Printer`,
etc.
See, e.g.,
https://github.com/astral-sh/ruff/pull/7013#discussion_r1311136301.
## Test Plan
Ran `cargo run` over a Python file labeled with a `.ipynb` suffix, and
saw:
```
foo.ipynb:1:1: E999 SyntaxError: Expected a Jupyter Notebook, which must be internally stored as JSON, but found a Python source file: expected value at line 1 column 1
```
## Summary
This PR modifies our between-statement comment handling such that
comments that are not separated from the following statement by any newlines
continue to be treated as leading comments on that statement, but comments that
_are_ separated are instead formatted as trailing comments on the
preceding statement.
See, e.g., the originating snippet:
```python
DEFAULT_TEMPLATE = "flatpages/default.html"
# This view is called from FlatpageFallbackMiddleware.process_response
# when a 404 is raised, which often means CsrfViewMiddleware.process_view
# has not been called even if CsrfViewMiddleware is installed. So we need
# to use @csrf_protect, in case the template needs {% csrf_token %}.
# However, we can't just wrap this view; if no matching flatpage exists,
# or a redirect is required for authentication, the 404 needs to be returned
# without any CSRF checks. Therefore, we only
# CSRF protect the internal implementation.
def flatpage(request, url):
pass
```
Here, we need to ensure that the `def flatpage` is preceded by two empty
lines. However, we want those two empty lines to be enforced from the
_end_ of the comment block, _unless_ the comments are directly atop the
`def flatpage`.
I played with this a bit, and I think the simplest conceptual model and
implementation is to instead treat those as trailing comments on the
preceding node. The main difficulty with this approach is that, in order
to be fully compatible with Black, we'd sometimes need to insert
newlines _between_ the preceding node and its trailing comments. See,
e.g.:
```python
def func():
...
# comment
x = 1
```
In this case, we'd need to insert two blank lines between `def func():
...` and `# comment`, but `# comment` is a trailing comment on `def
func(): ...`. So, we'd need to take this case into account in the
various nodes that _require_ newlines after them: functions, classes,
and imports. After some discussion, we've opted _not_ to support this,
and just treat these as trailing comments -- so we won't insert newlines
there. This means our handling is still identical to Black's on
Black-formatted code, but avoids moving such trailing comments on
unformatted code.
I dislike that the empty-line handling is so complex, and that it's split
between so many different nodes, but this is really tricky. Continuing
to treat these as leading comments is very difficult too, since we'd
need to do similar tricks for the leading comment handling in those
nodes, and influencing leading comments is even harder, since they're
all formatted _before_ the node itself.
Closes https://github.com/astral-sh/ruff/issues/6761.
## Test Plan
`cargo test`
Surprisingly, it doesn't change the similarity at all (apart from a
0.00001 change in CPython), but I manually confirmed that it did fix the
originating issue in Django.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76082 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76081 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
## Summary
This PR modifies a few of our rules related to which statements (and how
many) are allowed in function bodies within `.pyi` files, to improve
compatibility with flake8-pyi and improve the interplay dynamics between
them. Each change fixes a deviation from flake8-pyi:
- We now always trigger the multi-statement rule (PYI048) regardless of
whether one of the statements is a docstring.
- We no longer trigger the `...` rule (PYI010) if the single statement
is a docstring or a `pass` (since those are covered by other rules).
- We no longer trigger the `...` rule (PYI010) if the function body
contains multiple statements (since that's covered by PYI048).
Closes https://github.com/astral-sh/ruff/issues/7021.
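A sketch of the updated behavior on a hypothetical stub file:
```python
# example.pyi

def docstring_plus_body() -> None:
    """Docstring."""
    x = 1  # PYI048 now fires even though one of the statements is a docstring

def docstring_only() -> None:
    """Docstring."""  # no PYI010 here; the docstring case is covered by other rules

def several_statements() -> None:
    x = 1
    y = 2  # no PYI010 here either; the multi-statement case is PYI048's job
```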
## Test Plan
`cargo test`
## Summary
This PR attempts to address a problem in the parser related to the
ranges of `WithItem` nodes in certain contexts -- specifically,
`WithItem` nodes in parentheses that do not have an `as` token after
them.
For example,
[here](https://play.ruff.rs/71be2d0b-2a04-4c7e-9082-e72bff152679):
```python
with (a, b):
pass
```
The range of the `WithItem` `a` is set to the range of `(a, b)`, as is
the range of the `WithItem` `b`. In other words, when we have this kind
of sequence, we use the range of the entire parenthesized context,
rather than the ranges of the items themselves.
Note that this also applies to cases
[like](https://play.ruff.rs/c551e8e9-c3db-4b74-8cc6-7c4e3bf3713a):
```python
with (a, b, c as d):
pass
```
You can see the issue in the parser here:
```rust
#[inline]
WithItemsNoAs: Vec<ast::WithItem> = {
<location:@L> <all:OneOrMore<Test<"all">>> <end_location:@R> => {
all.into_iter().map(|context_expr| ast::WithItem { context_expr, optional_vars: None, range: (location..end_location).into() }).collect()
},
}
```
Fixing this issue is... very tricky. The naive approach is to use the
range of the `context_expr` as the range for the `WithItem`, but that
range will be incorrect when the `context_expr` is itself parenthesized.
For example, _that_ solution would fail here, since the range of the
first `WithItem` would be that of `a`, rather than `(a)`:
```python
with ((a), b):
pass
```
The `with` parsing in general is highly precarious due to ambiguities in
the grammar. Changing it in _any_ way seems to lead to an ambiguous
grammar that LALRPOP fails to translate. Consensus seems to be that we
don't really understand _why_ the current grammar works (i.e., _how_ it
avoids these ambiguities as-is).
The solution implemented here is to avoid changing the grammar itself,
and instead change the shape of the nodes returned by various rules in
the grammar. Specifically, everywhere that we return `Expr`, we instead
return `ParenthesizedExpr`, which includes a parenthesized range and the
underlying `Expr` itself. (If an `Expr` isn't parenthesized, the ranges
will be equivalent.) In `WithItemsNoAs`, we can then use the
parenthesized range as the range for the `WithItem`.
Per discussion at https://github.com/astral-sh/ruff/discussions/6998
## Summary
Adds a `--preview` and `--no-preview` option to the CLI for `ruff check`
and corresponding settings. The CLI options are hidden for now.
Available in the settings as `preview = true` or `preview = false`.
Does not include environment variable configuration, although we may add
it in the future.
## Test Plan
`cargo build`
Future work will build on this setting, such as toggling the mode during
a test.
## Summary
This PR adds comment handling for comments between the `=` and the
`value` for keywords, as in the following cases:
```python
func(
x # dangling
= # dangling
# dangling
1,
** # dangling
y
)
```
(Comments after the `**` were already handled in some cases, but I've
unified the handling with the `=` handling.)
Note that, previously, comments between the `**` and its value were
rendered as trailing comments on the value (so they'd appear after `y`).
This struck me as odd since it effectively re-ordered the comment with
respect to its closest AST node (the value). I've made them leading
comments, though I don't know that that's a significant improvement. I
could also imagine us leaving them where they are.
## Summary
Attempt at a small improvement to two `perflint` rules using the new
type inference capabilities to only flag `PERF401` and `PERF402` for
values we infer to be lists. This makes the rule more conservative, as
it only flags values that we _know_ to be lists, but it's overall a
desirable change, as it favors false negatives over false positives for
a "nice-to-have" rule.
Closes https://github.com/astral-sh/ruff/issues/6995.
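A sketch of the narrowed behavior, using hypothetical names:
```python
def collect(items, sink):
    result = []  # inferred to be a list, so the rule can still fire here
    for item in items:
        result.append(item)

    for item in items:
        sink.append(item)  # `sink` has an unknown type, so this is no longer flagged
    return result
```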
## Test Plan
Add non-list value cases and make sure all old cases are still caught.
## Summary
Rewriting the `if` comparison to focus on the meaning of rule S101 (`assert`).
Fixes #6984
## Test Plan
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Ensures that we use the same error types and messages. Also renames
those structs to `FormatCommand*` for consistency, and removes the
`FormatCommandResult::Skipped` variant in favor of skipping in the
iterator directly.
## Summary
Returns an exit code of 1 if any files would be reformatted:
```
ruff on charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0
❯ cargo run -p ruff_cli -- format foo.py --check
Compiling ruff_cli v0.0.286 (/Users/crmarsh/workspace/ruff/crates/ruff_cli)
Finished dev [unoptimized + debuginfo] target(s) in 1.69s
Running `target/debug/ruff format foo.py --check`
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended only for experimentation.
1 file would be reformatted
ruff on charlie/format-check:main [$?⇡] is 📦 v0.0.286 via 🐍 v3.11.2 via 🦀 v1.72.0 took 2s
❯ echo $?
1
```
Closes #6966.
## Summary
This is similar to `commands::check` vs. `commands::check_stdin`, and
gets the logic out of the parent file (`lib.rs`). It also ensures that
we avoid formatting files that should be excluded when `--force-exclude`
is provided.
## Summary
This PR adds a new helper method on the `Cursor` called `eat_char2`
which is similar to `eat_char` but accepts 2 characters instead of 1. It'll
`bump` the cursor twice if both characters are found on lookahead.
## Test Plan
`cargo test`
- Use `Option` instead of `Result` everywhere.
- Use `field` instead of `property` (to match the nomenclature of
`NamedTuple` and `TypedDict`).
- Put the violation function at the top of the file, rather than the
bottom.
## Summary
The `typename` argument to `NamedTuple` and `TypedDict` is a required
positional argument. We assumed as much, but panicked if it was provided
as a keyword argument or otherwise omitted. This PR handles the case
gracefully.
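A sketch of the kind of input that previously triggered the panic (not necessarily valid at runtime):
```python
from typing import NamedTuple

# `typename` supplied as a keyword rather than positionally
Point = NamedTuple(typename="Point", fields=[("x", int), ("y", int)])

# `typename` omitted entirely
Broken = NamedTuple()
```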
Closes https://github.com/astral-sh/ruff/issues/6953.
## Summary
As a small quality-of-life improvement, the locator can now slice like
`locator.slice(stmt)` instead of requiring
`locator.slice(stmt.range())`.
## Test Plan
`cargo test`
## Summary
This PR adds a higher-level enum (`SourceType`) around `PySourceType` to
allow us to use the same detection path to handle TOML files. Right now,
we have ad hoc `is_pyproject_toml` checks littered around, and some
codepaths are omitting that logic altogether (like `add_noqa`). Instead,
we should always be required to check the source type and handle TOML
files as appropriate.
This PR will also help with our pre-commit capabilities. If we add
`toml` to pre-commit (to support `pyproject.toml`), pre-commit will
start to pass _other_ files to Ruff (along with `poetry.lock` and
`Pipfile` -- see
[identify](b59996304f/identify/extensions.py (L355))).
By detecting those files and handling those cases, we avoid attempting
to parse them as Python files, which would lead to pre-commit errors.
(We tried to add `toml` to pre-commit here
(https://github.com/astral-sh/ruff-pre-commit/pull/44), but had to
revert here (https://github.com/astral-sh/ruff-pre-commit/pull/45) as it
led to the pre-commit hook attempting to parse `poetry.lock` files as
Python files.)
## Summary
This PR fixes a bug that sends the lexer into an infinite loop for invalid input.
The code in question is `[1` where the nesting is never finished. This means
that the lexer will keep emitting the `Err` token forever.
## Test Plan
Add a test case which collects all the tokens from the lexer. This just
makes sure that it doesn't go into an infinite loop.
## Summary
Just making the formatter CLI more consistent with the linter -- e.g.,
we now use stdin on invocations like `cat foo.py | cargo run -p ruff_cli
-- format -- --stdin-filename=foo.py`, instead of _only_ relying on the
`-` file (and use the same helper as the linter to facilitate this).