## Summary
This PR addresses some additional feedback on #19053:
- Renaming the `syntax_error` methods to `invalid_syntax` to match the
lint id
- Moving the standalone `diagnostic_from_violation` function to
`Violation::into_diagnostic`
- Removing the `Ord` and `PartialOrd` implementations from `Diagnostic`
in favor of `Diagnostic::start_ordering`
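For illustration, here's a toy sketch of what call sites can look like once the blanket `Ord`/`PartialOrd` impls are gone (the exact `start_ordering` signature is an assumption here, not the real API):
```rust
use std::cmp::Ordering;

// Simplified stand-in for the real type.
struct Diagnostic {
    start: usize,
}

impl Diagnostic {
    // Assumed shape of the dedicated comparison helper.
    fn start_ordering(&self, other: &Self) -> Ordering {
        self.start.cmp(&other.start)
    }
}

fn main() {
    let mut diagnostics = vec![Diagnostic { start: 10 }, Diagnostic { start: 3 }];
    // Sorting now goes through the explicit helper instead of a blanket `Ord` impl.
    diagnostics.sort_by(|a, b| a.start_ordering(b));
    assert_eq!(diagnostics[0].start, 3);
}
```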
## Test Plan
Existing tests
## Additional Follow-ups
Besides these, I also put the following comments on my todo list, but
they seemed like they might be big enough to have their own PRs:
- [Use `LintId::IOError` for IO
errors](https://github.com/astral-sh/ruff/pull/19053#discussion_r2189425922)
- [Move `Fix` and
`Edit`](https://github.com/astral-sh/ruff/pull/19053#discussion_r2189448647)
- [Avoid so many
unwraps](https://github.com/astral-sh/ruff/pull/19053#discussion_r2189465980)
## Summary
This PR is a collaboration with @AlexWaygood from our pairing session
last Friday.
The main goal here is removing `ruff_linter::message::OldDiagnostic` in
favor of
using `ruff_db::diagnostic::Diagnostic` directly. This involved a few
major steps:
- Transferring the fields
- Transferring the methods and trait implementations, where possible
- Converting some constructor methods to free functions
- Moving the `SecondaryCode` struct
- Updating the method names
I'm hoping that some of the methods, especially those in the
`expect_ruff_*`
family, won't be necessary long-term, but I avoided trying to replace
them
entirely for now to keep the already-large diff a bit smaller.
### Related refactors
Alex and I noticed a few refactoring opportunities while looking at the
code,
specifically the very similar implementations for
`create_parse_diagnostic`,
`create_unsupported_syntax_diagnostic`, and
`create_semantic_syntax_diagnostic`.
We combined these into a single generic function, which I then copied
into
`ruff_linter::message` with some small changes and a TODO to combine
them in the
future.
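For a rough idea of the shape such a shared helper can take (the name, parameters, and `Diagnostic` fields below are simplified assumptions, not the actual `ruff_db` API):
```rust
use std::fmt::Display;
use std::ops::Range;

// Simplified stand-in for the real diagnostic type.
struct Diagnostic {
    message: String,
    range: Range<usize>,
}

// One generic constructor instead of three near-identical ones.
fn create_syntax_error_diagnostic(message: impl Display, range: Range<usize>) -> Diagnostic {
    Diagnostic {
        message: message.to_string(),
        range,
    }
}
```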
I also deleted the `DisplayParseErrorType` and `TruncateAtNewline` types
for
reporting parse errors. These were added in #4124, I believe to work
around the
error messages from LALRPOP. Removing these didn't affect any tests, so
I think they're unnecessary now that we fully control the error messages
from the parser.
On a more minor note, I factored out some calls to the
`OldDiagnostic::filename`
(now `Diagnostic::expect_ruff_filename`) function to avoid repeatedly
allocating
`String`s in some places.
### Snapshot changes
The `show_statistics_syntax_errors` integration test changed because the
`OldDiagnostic::name` method used `syntax-error` instead of
`invalid-syntax`
like in ty. I think this (`--statistics`) is one of the only places we
actually
use this name for syntax errors, so I hope this is okay. An alternative
is to
use `syntax-error` in ty too.
The other snapshot changes are from removing this code, as discussed on
[Discord](https://discord.com/channels/1039017663004942429/1228460843033821285/1388252408848847069):
34052a1185/crates/ruff_linter/src/message/mod.rs (L128-L135)
I think both of these are technically breaking changes, but they only
affect
syntax errors and are very narrow in scope, while also pretty
substantially
simplifying the refactor, so I hope they're okay to include in a patch
release.
## Test plan
Existing tests, with the adjustments mentioned above
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
I think this should be the last step before combining `OldDiagnostic`
and `ruff_db::Diagnostic`. We can't store a `NoqaCode` on
`ruff_db::Diagnostic`, so I converted the `noqa_code` field to an
`Option<String>` and then propagated this change to all of the callers.
I tried to use `&str` everywhere it was possible, so I think the
remaining `to_string` calls are necessary. I spent some time trying to
convert _everything_ to `&str` but ran into lifetime issues, especially
in the `FixTable`. Maybe we can take another look at that if it causes a
performance regression, but hopefully these paths aren't too hot. We
also avoid some `to_string` calls, so it might even out a bit too.
## Test Plan
Existing tests
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR removes the last two places we were using `NoqaCode::rule` in
`linter.rs` (see
https://github.com/astral-sh/ruff/pull/18391#discussion_r2154637329 and
https://github.com/astral-sh/ruff/pull/18391#discussion_r2154649726) by
checking whether fixes are actually desired before adding them to a
`DiagnosticGuard`. I implemented this by storing a `Violation`'s `Rule`
on the `DiagnosticGuard` so that we could check if it was enabled in the
embedded `LinterSettings` when trying to set a fix.
All of the corresponding `set_fix` methods on `OldDiagnostic` were now
unused (except in tests where I just set `.fix` directly), so I moved
these to the guard instead of keeping both sets.
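A toy sketch of the "check before attaching a fix" idea (all of the types below are simplified stand-ins for the real `DiagnosticGuard`, `Rule`, and `LinterSettings`):
```rust
// Simplified stand-ins for the real types.
#[derive(PartialEq)]
enum Rule {
    UnusedImport,
}

struct LinterSettings {
    // Rules whose fixes are enabled.
    fixable: Vec<Rule>,
}

struct Fix;

struct DiagnosticGuard<'a> {
    rule: Rule,
    settings: &'a LinterSettings,
    fix: Option<Fix>,
}

impl DiagnosticGuard<'_> {
    fn set_fix(&mut self, fix: Fix) {
        // Only attach the fix if a fix for this rule is actually desired.
        if self.settings.fixable.contains(&self.rule) {
            self.fix = Some(fix);
        }
    }
}
```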
The very last place where we were using `NoqaCode::rule` was in the
cache. I just reverted this to parsing the `Rule` from the name. I had
forgotten to update the comment there anyway. Hopefully this doesn't
cause too much of a perf hit.
In terms of binary size, we're back down almost to where `main` was two
days ago
(https://github.com/astral-sh/ruff/pull/18391#discussion_r2155034320):
```
41,559,344 bytes for main 2 days ago
41,669,840 bytes for #18391
41,653,760 bytes for main now (after #18391 merged)
41,602,224 bytes for this branch
```
Only 43 kb up, but that shouldn't all be me this time :)
## Test Plan
Existing tests and benchmarks on this PR
Summary
--
This PR unifies the remaining differences between `OldDiagnostic` and
`Message` (`OldDiagnostic` was only missing an optional `noqa_offset`
field) and
replaces `Message` with `OldDiagnostic`.
The biggest functional difference is that the combined `OldDiagnostic`
kind no
longer implements `AsRule` for an infallible conversion to `Rule`. This
was
pretty easy to work around with `is_some_and` and `is_none_or` in the
few places
it was needed. In `LintContext::report_diagnostic_if_enabled` we can
just use
the new `Violation::rule` method, which takes care of most cases.
Most of the interesting changes are in [this
range](8156992540)
before I started renaming.
Test Plan
--
Existing tests
Future Work
--
I think it's time to start shifting some of these fields to the new
`Diagnostic`
kind. I believe we want `Fix` for sure, but I'm less sure about the
others. We
may want to keep a thin wrapper type here anyway to implement a `rule`
method,
so we could leave some of these fields on that too.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/18602 by:
1. Avoiding a fix when `*args` are present
2. Inserting the `Generic` base class right before the first keyword
argument, if one is present
In an intermediate commit, I also had special handling to avoid a fix in
the `**kwargs` case, but this is treated (roughly) as a normal keyword,
and I believe handling it properly falls out of the other keyword fix.
I also updated the `add_argument` utility function to insert new
arguments right before the keyword argument list instead of at the very
end of the argument list. This changed a couple of snapshots unrelated
to `PYI059`, but there shouldn't be any functional changes to other
rules because all other calls to `add_argument` were adding a keyword
argument anyway.
## Test Plan
Existing PYI059 cases, plus new tests based on the issue
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
As the title says, this PR removes the `Message::to_rule` method by
replacing related uses of `Rule` with `NoqaCode` (or the rule's name in
the case of the cache). Where it seemed a `Rule` was really needed, we
convert back to the `Rule` by parsing either the rule name (with
`str::parse`) or the `NoqaCode` (with `Rule::from_code`).
I thought this was kind of like cheating and that it might not resolve
this part of Micha's
[comment](https://github.com/astral-sh/ruff/pull/18391#issuecomment-2933764275):
> because we can't add Rule to Diagnostic or **have it anywhere in our
shared rendering logic**
but after looking again, the only remaining `Rule` conversion in
rendering code is for the SARIF output format. The other two non-test
`Rule` conversions are for caching and writing a fix summary, which I
don't think fall into the shared rendering logic. That leaves the SARIF
format as the only real problem, but maybe we can delay that for now.
The motivation here is that we won't be able to store a `Rule` on the
new `Diagnostic` type, but we should be able to store a `NoqaCode`,
likely as a string.
## Test Plan
Existing tests
## [Benchmarks](https://codspeed.io/astral-sh/ruff/branches/brent%2Fremove-to-rule)
Almost no perf regression, only -1% on
`linter/default-rules[large/dataset.py]`.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Summary
--
This is the last main difference between the `OldDiagnostic` and
`Message`
types, so attaching a `SourceFile` to `OldDiagnostic` should make
combining the
two types almost trivial.
Initially I updated the remaining rules without access to a `Checker` to
take a
`&SourceFile` directly, but after Micha's suggestion in
https://github.com/astral-sh/ruff/pull/18356#discussion_r2113281552, I
updated all of these calls to take a
`LintContext` instead. This new type is a thin wrapper around a
`RefCell<Vec<OldDiagnostic>>` and a `SourceFile`, and the
`report_diagnostic` method that returns a `DiagnosticGuard` now lives on
it rather than on `Checker`.
This allows the same `Drop`-based implementation to be used in cases
without a `Checker` and also avoids a lot of intermediate allocations of
`Vec<OldDiagnostic>`s.
`Checker` now also contains a `LintContext`, which it defers to for its
`report_diagnostic` methods, which I preserved for convenience.
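A toy model of the `Drop`-based guard pattern described above (heavily simplified; the real `LintContext` and `DiagnosticGuard` carry much more information):
```rust
use std::cell::RefCell;

struct Diagnostic {
    message: String,
}

struct LintContext {
    diagnostics: RefCell<Vec<Diagnostic>>,
}

struct DiagnosticGuard<'a> {
    context: &'a LintContext,
    diagnostic: Option<Diagnostic>,
}

impl Drop for DiagnosticGuard<'_> {
    // The diagnostic is committed to the context when the guard goes out of scope.
    fn drop(&mut self) {
        if let Some(diagnostic) = self.diagnostic.take() {
            self.context.diagnostics.borrow_mut().push(diagnostic);
        }
    }
}

impl LintContext {
    fn report_diagnostic(&self, message: &str) -> DiagnosticGuard<'_> {
        DiagnosticGuard {
            context: self,
            diagnostic: Some(Diagnostic { message: message.to_string() }),
        }
    }
}

fn main() {
    let context = LintContext { diagnostics: RefCell::new(Vec::new()) };
    {
        // The guard could be configured further here; the diagnostic is pushed on drop.
        let _guard = context.report_diagnostic("unused import");
    }
    assert_eq!(context.diagnostics.borrow().len(), 1);
}
```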
Test Plan
--
Existing tests
Summary
--
It's a bit late in the refactoring process, but I think there are still
a couple of PRs left before getting rid of this type entirely, so I
thought it would still be worth doing.
This PR is just a quick rename with no other changes.
Test Plan
--
Existing tests
## Summary
This PR unifies the ruff `Message` enum variants for syntax errors and
rule violations into a single `Message` struct consisting of a shared
`db::Diagnostic` and some additional, optional fields used for some rule
violations.
This version of `Message` is nearly a drop-in replacement for
`ruff_diagnostics::Diagnostic`, which is the next step I have in mind
for the refactor.
I think this is also a useful checkpoint because we could possibly add
some of these optional fields to the new `Diagnostic` type. I think
we've previously discussed wanting support for `Fix`es, but the other
fields seem less relevant, so we may just need to preserve the `Message`
wrapper for a bit longer.
## Test plan
Existing tests
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR deletes the `DiagnosticKind` type by inlining its three fields
(`name`, `body`, and `suggestion`) into three other diagnostic types:
`Diagnostic`, `DiagnosticMessage`, and `CacheMessage`.
Instead of deferring to an internal `DiagnosticKind`, both `Diagnostic`
and `DiagnosticMessage` now have their own macro-generated `AsRule`
implementations.
This should make both https://github.com/astral-sh/ruff/pull/18051 and
another follow-up PR changing the type of `name` on `CacheMessage`
easier since its type will be able to change separately from
`Diagnostic` and `DiagnosticMessage`.
## Test Plan
Existing tests
## Summary
I decided to disable the new
[`needless_continue`](https://rust-lang.github.io/rust-clippy/master/index.html#needless_continue)
rule because I often found the explicit `continue` more readable than an
empty block or having to invert the condition of another branch.
## Test Plan
`cargo test`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Summary
--
This PR updates `check_path` in the `ruff_linter` crate to return a
`Vec<Message>` instead of a `Vec<Diagnostic>`. The main motivation for
this is to make it easier to convert semantic syntax errors directly
into `Message`s rather than `Diagnostic`s in #16106. However, this also
has the benefit of keeping the preview check on unsupported syntax
errors in `check_path`, as suggested in
https://github.com/astral-sh/ruff/pull/16429#discussion_r1974748024.
All of the interesting changes are in the first commit. The second
commit just renames variables like `diagnostics` to `messages`, and the
third commit is a tiny import fix.
I also updated the `ExpandedMessage::location` field name, which caused
a few extra commits tidying up the playground code. I thought it was
nicely symmetric with `end_location`, but I'm happy to revert that too.
Test Plan
--
Existing tests. I also tested the playground and server manually.
## Summary
Right now, these are applied in an arbitrary order: if we have two
`RedefinitionWhileUnused` fixes, we just take whichever was generated
first (whereas the next comparator in the sort here orders by location),
which means we frequently have to re-run!
## Summary
Resolves #12616.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
...and remove periods from messages that don't span more than a single
sentence.
This is more consistent with how we present user-facing messages in uv
(which has a defined style guide).
## Summary
Extends deletions for RUF100 to delete trailing text from `noqa`
directives while preserving any comments that follow on the same line.
In cases where it deletes a comment up to another comment on the same
line, the whitespace between them is now shown as part of the autofix in
the diagnostic as well; leading whitespace before the removed comment is
not, though.
Fixes #12251
## Test Plan
`cargo test`
This adds the `fast-api-unused-path-parameter` lint rule, as described
in #12632.
I'm still pretty new to Rust, so the code can probably be improved; feel
free to tell me if there are any changes I should make.
Also, I needed to add the `add_parameter` edit function; I'm not sure if
it was in the scope of this PR or if I should've made another one.
## Summary
This adds a fix for the `duplicate-bases` rule that removes the
duplicate base from the class definition.
## Test Plan
`cargo nextest run duplicate_bases`, `cargo insta review`.
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source for the next token, and only then will the lexer continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When `next_token` is called, the current token is
updated with the newly lexed token.
The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context. This is going to be really useful for
making the parser error resilient. For example, currently the emitted
tokens remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a `NonLogicalNewline`
in a parenthesized context but a normal `Newline` in a non-parenthesized
context. These different kinds of newlines are also used to emit the
indentation tokens, which are important for the parser to determine the
start and end of a block.
Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint (see the toy
sketch after this list).
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.
This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that
carry an owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
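A toy illustration of the checkpoint - rewind idea from point 1 above (this is not Ruff's actual lexer API, just the shape of the mechanism):
```rust
// A tiny "lexer" over a fixed token list; a checkpoint records its position
// and rewinding restores it after a failed speculative parse.
#[derive(Clone, Copy)]
struct Checkpoint {
    pos: usize,
}

struct TokenSource<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> TokenSource<'a> {
    fn checkpoint(&self) -> Checkpoint {
        Checkpoint { pos: self.pos }
    }

    fn rewind(&mut self, checkpoint: Checkpoint) {
        self.pos = checkpoint.pos;
    }

    fn next_token(&mut self) -> Option<&'a str> {
        let token = self.tokens.get(self.pos).copied();
        if token.is_some() {
            self.pos += 1;
        }
        token
    }
}

fn main() {
    let mut source = TokenSource { tokens: &["match", "x", ":"], pos: 0 };
    let checkpoint = source.checkpoint();
    // Speculatively consume a token...
    assert_eq!(source.next_token(), Some("match"));
    // ...then rewind as if the speculative parse had failed.
    source.rewind(checkpoint);
    assert_eq!(source.next_token(), Some("match"));
}
```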
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.
* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not sensitive
to the context (which is correct). This means that the parser API needs
to be updated to not allow any access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR description: https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR, i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch after this
list). These include:
1. `up_to_first_unknown`, which replaces the `tokenize` function
2. `in_range` and `after`, which replace the `lex_starts_at` function,
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags`, which is a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
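A toy sketch of the `Tokens` wrapper idea from point 1 above (real tokens carry a `TokenKind` and flags, and the actual method signatures differ; this only shows the query shape):
```rust
#[derive(Clone, Copy)]
struct Token {
    start: u32,
    end: u32,
}

struct Tokens(Vec<Token>);

impl Tokens {
    /// Tokens that fall entirely within the given range.
    fn in_range(&self, start: u32, end: u32) -> impl Iterator<Item = &Token> + '_ {
        self.0
            .iter()
            .filter(move |token| token.start >= start && token.end <= end)
    }

    /// All tokens starting at or after the given offset (tokens are sorted by start).
    fn after(&self, offset: u32) -> &[Token] {
        let first = self.0.partition_point(|token| token.start < offset);
        &self.0[first..]
    }
}

fn main() {
    let tokens = Tokens(vec![
        Token { start: 0, end: 3 },
        Token { start: 4, end: 5 },
        Token { start: 6, end: 7 },
    ]);
    assert_eq!(tokens.in_range(4, 7).count(), 2);
    assert_eq!(tokens.after(4).len(), 2);
}
```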
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all outputs were the same.**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the source code
contains syntax errors.
Follow-up to #11168 and resolves #10391
# User facing changes
* F401 now recommends a fix to add unused import bindings to
`__all__` if a single `__all__` list or tuple is found in `__init__.py`.
* If no `__all__` is found in the file, fall back to recommending
redundant aliases.
* If there are multiple `__all__`s, or only one but of the wrong type
(not a list or tuple), then diagnostics are generated without fixes.
* `fix_title` is updated to reflect what the fix/recommendation is.
Subtlety: For a renamed import such as `import foo as bees`, we can
generate a fix to add `bees` to `__all__` but cannot generate a fix to
produce a redundant import (because that would break uses of the binding
`bees`).
# Implementation changes
* Add `name` field to `ImportBinding` to contain the name of the
_binding_ we want to add to `__all__` (important for the `import foo as
bees` case). It previously only contained the `AnyImport` which can give
us information about the import but not the binding.
* Add `binding` field to `UnusedImport` to contain the same. (Naming
note: the `name` field already existed on `UnusedImport` and contains
the qualified name of the imported symbol/module.)
* Change `fix_by_reexporting` to branch on the size of `dunder_all:
Vec<&Expr>` (sketched after this list)
* For length 0, call the edit-producing function `make_redundant_alias`.
* For length 1, call the edit-producing function `add_to_dunder_all`.
* Otherwise, produce no fix.
* Implement the edit-producing function `add_to_dunder_all` and add unit
tests.
* Implement several fixture tests: empty `__all__ = []`, nonempty
`__all__ = ["foo"]`, mis-typed `__all__ = None`, plus-eq `__all__ +=
["foo"]`
* `UnusedImportContext::Init` variant now has two fields: whether the
fix is in `__init__.py` and how many `__all__` were found.
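A toy model of the branching described in the list above (the names mirror the description; everything else is simplified):
```rust
// Which fix (if any) to offer, based on how many `__all__` definitions were found.
#[derive(Debug, PartialEq)]
enum FixKind {
    MakeRedundantAlias, // no `__all__`: fall back to a redundant alias
    AddToDunderAll,     // exactly one list/tuple `__all__`: append the binding to it
    NoFix,              // multiple (or wrongly typed) `__all__`: diagnostic only
}

fn choose_fix(dunder_all_count: usize) -> FixKind {
    match dunder_all_count {
        0 => FixKind::MakeRedundantAlias,
        1 => FixKind::AddToDunderAll,
        _ => FixKind::NoFix,
    }
}

fn main() {
    assert_eq!(choose_fix(0), FixKind::MakeRedundantAlias);
    assert_eq!(choose_fix(1), FixKind::AddToDunderAll);
    assert_eq!(choose_fix(2), FixKind::NoFix);
}
```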
# Other changes
* Remove a spurious pattern match and instead use field lookups b/c the
addition of a field would have required changing the unrelated pattern.
* Tweak input type of `make_redundant_alias`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Resolves #10390 and starts to address #10391
# Changes to behavior
* In `__init__.py` we now offer some fixes for unused imports.
* If the import binding is first-party this PR suggests a fix to turn it
into a redundant alias.
* If the import binding is not first-party, this PR suggests a fix to
remove it from the `__init__.py`.
* The fix-titles are specific to these new suggested fixes.
* The `checker.settings.ignore_init_module_imports` setting is
deprecated/ignored. There is probably a documentation change needed to
make that complete, which I haven't done.
---
<details><summary>Old description of implementation changes</summary>
# Changes to the implementation
* In the body of the loop over import statements that contain unused
bindings, the bindings are partitioned into `to_reexport` and
`to_remove` (according to how we want to resolve the fact they're
unused) with the following predicate:
```rust
in_init && is_first_party(checker, &import.qualified_name().to_string())
// true means make it a reexport
```
* Instead of generating a single fix per import statement, we now
generate up to two fixes per import statement:
```rust
(fix_by_removing_imports(checker, node_id, &to_remove, in_init).ok(),
fix_by_reexporting(checker, node_id, &to_reexport, dunder_all).ok())
```
* The `to_remove` fixes are unsafe when `in_init`.
* The `to_explicit` fixes are safe. Currently, until a future PR, we
make them redundant aliases (e.g. `import a` would become `import a as
a`).
## Other changes
* `checker.settings.ignore_init_module_imports` is deprecated/ignored.
Instead, all fixes are gated on `checker.settings.preview.is_enabled()`.
* Got rid of the pattern match on the import-binding bound by the inner
loop because it seemed less readable than referencing fields on the
binding.
* [x] `// FIXME: rename "imports" to "bindings"` if reviewer agrees (see
code)
* [x] `// FIXME: rename "node_id" to "import_statement"` if reviewer
agrees (see code)
<details>
<summary><h2>Scope cut until a future PR</h2></summary>
* (Not implemented) The `to_explicit` fixes will be added to `__all__`
if it exists. When `__all__` doesn't exist, they're resolved by
converting to redundant aliases (e.g. `import a` would become `import
a as a`).
---
</details>
# Test plan
* [x] `crates/ruff_linter/resources/test/fixtures/pyflakes/F401_24`
contains an `__init__.py` with*out* `__all__` that exercises the
features in this PR, but it doesn't pass.
* [x]
`crates/ruff_linter/resources/test/fixtures/pyflakes/F401_25_dunder_all`
contains an `__init__.py` *with* `__all__` that exercises the features
in this PR, but it doesn't pass.
* [x] Write unit tests for the new edit functions in
`fix::edits::make_redundant_alias`.
</details>
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR adds an override to the fixer to ensure that we apply any
`redefined-while-unused` fixes prior to `unused-import`.
Closes https://github.com/astral-sh/ruff/issues/10905.
## Summary
Currently, [this
line](716688d44e/crates/ruff_linter/src/fix/edits.rs (L101))
assumes that the `noqa` comment begins with an octothorpe followed by a
space (`# `). With arbitrary user code, this of course is not always
true.
When there's a multi-byte character after the leading octothorpe, such
as
[`\u0085`](https://www.fileformat.info/info/unicode/char/85/index.htm),
we try slicing from within the character, causing a panic.
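A concrete illustration of the panic (the input is hypothetical, but the byte-index behavior is exactly Rust's):
```rust
fn main() {
    // '#' is one byte and U+0085 is two bytes, so byte index 2 falls in the
    // middle of the U+0085 character rather than on a char boundary.
    let comment = "#\u{85}noqa: F401";
    // Slicing off a fixed "# " prefix amounts to this, which panics with
    // "byte index 2 is not a char boundary".
    let _ = &comment[2..];
}
```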
To fix this, the logic has been changed to remove unused `noqa`
directives and keep any trailing comments, or to remove the whole
comment if it consists only of the unused `noqa`.
Fixes #10097.
## Test Plan
`cargo test`
## Summary
This is a follow-up to https://github.com/astral-sh/ruff/pull/10238 to
offer fixes for the f-string rule regardless of the line length of the
resulting fix. To quote Alex in the linked PR:
> Yes, from the user's perspective I'd rather have a fix that may lead
to line length issues than have to fix them myself :-) Cleaning up line
lengths is easier than changing from `"".format()` to `f""`
I agree with this position, which is that if we're going to offer a
diagnostic, we should really be offering the user the ability to fix it
-- otherwise, we're just inconveniencing them.
## Summary
Charlie can probably explain this better than I can, but it turns out
`CallPath` is used for two different things:
* To represent unqualified names like `version` where `version` can be a
local variable or imported (e.g. `from sys import version` where the
fully qualified name is `sys.version`)
* To represent resolved, fully qualified names
This PR splits `CallPath` into two types to make this distinction clear.
> Note: I haven't renamed all `call_path` variables to `qualified_name`
or `unqualified_name`. I can do that if that's welcomed but I first want
to get feedback on the approach and naming overall.
## Test Plan
`cargo test`
Fixes #6611
## Summary
This lint rule spots comments that are _intended_ to suppress or enable
the formatter, but will be ignored by the Ruff formatter.
We borrow some functions the formatter uses for determining comment
placement / putting them in context within an AST.
The analysis function uses an AST visitor to visit each comment and
attach it to the AST. It then uses that context to check:
1. Is this comment in an expression?
2. Does this comment have bad placement? (e.g. a `# fmt: skip` above a
function instead of at the end of a line)
3. Is this comment redundant?
4. Does this comment actually suppress any code?
5. Does this comment have ambiguous placement? (e.g. a `# fmt: off`
above an `else:` block)
If any of these are true, a violation is thrown. The reported reason
depends on the order of the above check-list: in other words, a `# fmt:
skip` comment on its own line within a list expression will be reported
as being in an expression, since that reason takes priority.
The lint suggests removing the comment as an unsafe fix, regardless of
the reason.
## Test Plan
A snapshot test has been created.
## Summary
Often, when fixing, we need to dedent a block of code (e.g., if we
remove an `if` and dedent its body). Today, we use LibCST to parse and
adjust the indentation, which is really expensive -- but this is only
really necessary if the block contains a multiline string, since naively
adjusting the indentation for such a string can change the whitespace
_within_ the string.
This PR uses a simple dedent implementation for cases in which the block
doesn't intersect with a multi-line string (or an f-string, since we
don't support tracking multi-line strings for f-strings right now).
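A minimal sketch of what such a simple dedent can look like (illustrative only, not the actual implementation):
```rust
/// Strip up to `indent` leading spaces from every line of `block`.
/// Only safe when the block doesn't intersect a multiline string, since this
/// would otherwise change whitespace inside the string.
fn dedent(block: &str, indent: usize) -> String {
    block
        .lines()
        .map(|line| {
            let leading_spaces = line.len() - line.trim_start_matches(' ').len();
            &line[leading_spaces.min(indent)..]
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let body = "    x = 1\n    if x:\n        y = 2";
    assert_eq!(dedent(body, 4), "x = 1\nif x:\n    y = 2");
}
```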
We could improve this even further by using the ranges to guide the
dedent function, such that we don't apply the dedent if the line starts
within a multiline string. But that would also need to take f-strings
into account, which is a little tricky.
## Test Plan
`cargo test`
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies those cases to instead pass the path directly.
## Summary
- Adds `add_argument`, similar to the existing `remove_argument` utility,
to safely add arguments to functions.
- Adds autofix for `PLW1514` as per specs requested in
https://github.com/astral-sh/ruff/issues/8883 as a test
## Test Plan
Checks on existing fixtures, as well as an additional test and fixture
for the Python 3.9-and-lower fix
## Issue Link
Closes: https://github.com/astral-sh/ruff/issues/8883
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.
The update itself introduced a new clippy lint about superfluous hashes
in raw strings, which got removed.
I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
"forbid" because the latter now actually bans unsafe code:
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> crates/ruff_source_file/src/newlines.rs:62:17
|
62 | #[allow(unsafe_code)]
| ^^^^^^^^^^^ overruled by previous forbid
|
= note: `forbid` lint level was set on command line
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>