## Summary
During https://github.com/astral-sh/ruff/pull/15209, additional spaces
were accidentally added to the rule
`airflow.operators.latest_only.LatestOnlyOperator`. This PR fixes that
issue.
## Test Plan
A test fixture has been included for the rule.
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They have been deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR adds rules for the
following.
* Removed class attribute
* `airflow.providers_manager.ProvidersManager.dataset_factories` →
`airflow.providers_manager.ProvidersManager.asset_factories`
* `airflow.providers_manager.ProvidersManager.dataset_uri_handlers` →
`airflow.providers_manager.ProvidersManager.asset_uri_handlers`
*
`airflow.providers_manager.ProvidersManager.dataset_to_openlineage_converters`
→
`airflow.providers_manager.ProvidersManager.asset_to_openlineage_converters`
* `airflow.lineage.hook.DatasetLineageInfo.dataset` →
`airflow.lineage.hook.AssetLineageInfo.asset`
* Removed class method (subclasses in airflow should also be checked)
* `airflow.secrets.base_secrets.BaseSecretsBackend.get_conn_uri` →
`airflow.secrets.base_secrets.BaseSecretsBackend.get_conn_value`
* `airflow.secrets.base_secrets.BaseSecretsBackend.get_connections` →
`airflow.secrets.base_secrets.BaseSecretsBackend.get_connection`
* `airflow.hooks.base.BaseHook.get_connections` → use `get_connection`
* `airflow.datasets.BaseDataset.iter_datasets` →
`airflow.sdk.definitions.asset.BaseAsset.iter_assets`
* `airflow.datasets.BaseDataset.iter_dataset_aliases` →
`airflow.sdk.definitions.asset.BaseAsset.iter_asset_aliases`
* Removed constructor args (subclasses in airflow should also be checked)
* argument `filename_template`
in `airflow.utils.log.file_task_handler.FileTaskHandler`
* in `BaseOperator`
* `sla`
* `task_concurrency` → `max_active_tis_per_dag`
* in `BaseAuthManager`
* `appbuilder`
* Removed class variable (subclasses anywhere should be checked)
* in `airflow.plugins_manager.AirflowPlugin`
* `executors` (from #43289)
* `hooks`
* `operators`
* `sensors`
* Replaced names
* `airflow.hooks.base_hook.BaseHook` → `airflow.hooks.base.BaseHook`
* `airflow.operators.dagrun_operator.TriggerDagRunLink` →
`airflow.operators.trigger_dagrun.TriggerDagRunLink`
* `airflow.operators.dagrun_operator.TriggerDagRunOperator` →
`airflow.operators.trigger_dagrun.TriggerDagRunOperator`
* `airflow.operators.python_operator.BranchPythonOperator` →
`airflow.operators.python.BranchPythonOperator`
* `airflow.operators.python_operator.PythonOperator` →
`airflow.operators.python.PythonOperator`
* `airflow.operators.python_operator.PythonVirtualenvOperator` →
`airflow.operators.python.PythonVirtualenvOperator`
* `airflow.operators.python_operator.ShortCircuitOperator` →
`airflow.operators.python.ShortCircuitOperator`
* `airflow.operators.latest_only_operator.LatestOnlyOperator` →
`airflow.operators.latest_only.LatestOnlyOperator`
In addition to the changes above, this PR also adds utility functions
and improves docstrings.
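As a rough illustration, this is the kind of code the new checks flag (the replacement path is taken from the mapping above):
```python
# Flagged: this import path is removed in Airflow 3.0
from airflow.operators.latest_only_operator import LatestOnlyOperator

# Suggested replacement
from airflow.operators.latest_only import LatestOnlyOperator
```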
## Test Plan
A test fixture is included in the PR.
Fixes: #15176
## Summary
Neither of these rules makes any sense in stub files. Technically, TC007
should already not have triggered due to the typing-only context of the
binding, but it's better to be explicit.
Keeping TC008 enabled, on the other hand, makes sense to me, although we
could probably be more aggressive with unquoting in a typing runtime
context.
## Test Plan
`cargo nextest run`
## Summary
Closes #14975 by modifying the docstring of the `InvalidPyprojectToml`
rule. Previously, the docs incorrectly stated that author names and
emails must be individual items in the `authors` list, rather than part of
a single object for each respective author.
## Test Plan
This was a docstring change, no tests needed.
## Summary
This PR fixes an issue where Ruff's `D403` rule
(`first-word-uncapitalized`) was not detecting some single-word edge
cases that are picked up by `pydocstyle`.
The change involves extracting the first word of the docstring by
identifying the first whitespace character. This is consistent with
`pydocstyle` which uses `.split()` - see
8d0cdfc93e/src/pydocstyle/checker.py (L581C13-L581C64)
## Example
Here is a playground example -
https://play.ruff.rs/eab9ea59-92cf-4e44-b1a9-b54b7f69b178
```py
def example1():
    """foo"""

def example2():
    """foo

    Hello world!
    """

def example3():
    """foo bar

    Hello world!
    """

def example4():
    """
    foo
    """

def example5():
    """
    foo bar
    """
```
`pydocstyle` detects all five cases:
```bash
$ pydocstyle test.py --select D403
dev/test.py:2 in public function `example1`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:5 in public function `example2`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:11 in public function `example3`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:17 in public function `example4`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
dev/test.py:22 in public function `example5`:
D403: First word of the first line should be properly capitalized ('Foo', not 'foo')
```
Ruff (`0.8.4`) fails to catch example2 and example4.
## Test Plan
* Added two new test cases to cover the previously missed single-word
docstring cases.
## Summary
Fix #11482. Applies
https://github.com/adamchainz/flake8-comprehensions/pull/205 to Ruff.
`C416` should be skipped if the comprehension contains unpacking. Here's an
example:
```python
list_of_lists = [[1, 2], [3, 4]]
# ruff suggests `list(list_of_lists)` here, but that would change the result.
# `list(list_of_lists)` is not `[(1, 2), (3, 4)]`
a = [(x, y) for x, y in list_of_lists]
# This is equivalent to `list(list_of_lists)`
b = [x for x in list_of_lists]
```
## Test Plan
Existing checks
---------
Signed-off-by: harupy <hkawamura0130@gmail.com>
## Summary
Resolves #14883
This PR removes the known limitation section in the documentation of
`eq-without-hash`. It is not actually a limitation: a subclass
overriding the `__eq__` method has its `__hash__` implicitly set to `None`,
so the user should explicitly inherit the `__hash__` method
from the parent class.
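A minimal sketch of the behaviour described above (class names are illustrative only):
```python
class Parent:
    def __eq__(self, other):
        return isinstance(other, Parent)

    def __hash__(self):
        return 1


class Child(Parent):
    # Overriding __eq__ implicitly sets Child.__hash__ to None...
    def __eq__(self, other):
        return isinstance(other, Child)

    # ...so the subclass should explicitly re-inherit the parent's __hash__.
    __hash__ = Parent.__hash__
```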
## Test Plan
<img width="619" alt="Screenshot 2024-12-20 at 2 02 47 PM"
src="https://github.com/user-attachments/assets/552defcd-25e1-4153-9ab9-e5b9d5fbe8cc"
/>
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They have been deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR deprecates the
following names and adds a function for removed methods.
* `airflow.datasets.manager.DatasetManager.register_dataset_change` →
`airflow.assets.manager.AssetManager.register_asset_change`
* `airflow.datasets.manager.DatasetManager.create_datasets` →
`airflow.assets.manager.AssetManager.create_assets`
* `airflow.datasets.manager.DatasetManager.notify_dataset_created` →
`airflow.assets.manager.AssetManager.notify_asset_created`
* `airflow.datasets.manager.DatasetManager.notify_dataset_changed` →
`airflow.assets.manager.AssetManager.notify_asset_changed`
* `airflow.datasets.manager.DatasetManager.notify_dataset_alias_created`
→ `airflow.assets.manager.AssetManager.notify_asset_alias_created`
*
`airflow.providers.amazon.auth_manager.aws_auth_manager.AwsAuthManager.is_authorized_dataset`
→
`airflow.providers.amazon.auth_manager.aws_auth_manager.AwsAuthManager.is_authorized_asset`
* `airflow.lineage.hook.HookLineageCollector.create_dataset` →
`airflow.lineage.hook.HookLineageCollector.create_asset`
* `airflow.lineage.hook.HookLineageCollector.add_input_dataset` →
`airflow.lineage.hook.HookLineageCollector.add_input_asset`
* `airflow.lineage.hook.HookLineageCollector.add_output_dataset` →
`airflow.lineage.hook.HookLineageCollector.add_output_asset`
* `airflow.lineage.hook.HookLineageCollector.collected_datasets` →
`airflow.lineage.hook.HookLineageCollector.collected_assets`
*
`airflow.providers_manager.ProvidersManager.initialize_providers_dataset_uri_resources`
→
`airflow.providers_manager.ProvidersManager.initialize_providers_asset_uri_resources`
## Test Plan
A test fixture is included in the PR.
## Summary
A follow-up PR to https://github.com/astral-sh/ruff/issues/14991.
Ruff currently ignores hardcoded passwords assigned to typed (annotated)
variables. This PR adds a rule to catch such passwords in typed code bases.
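A sketch of the gap being closed (the variable name and value are illustrative):
```python
password = "s3cr3t"       # already detected: unannotated assignment
password: str = "s3cr3t"  # previously ignored because of the annotation; now flagged
```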
## Test Plan
Includes two more tests for typed variables.
Closes #14000
## Summary
For typing-context bindings, we know that they won't be available at
runtime, so we shouldn't recommend a fix that would result in name errors
at runtime.
## Test Plan
`cargo nextest run`
Fixes #15012.
```python
def f():
    # panics when the code can't find the loop variable
    values = [1, 2, 3]
    result = []
    for i in values:
        result.append(i + 1)
    del i
```
I'm not sure exactly why this test case panics, but I suspect the `del
i` removes the binding from the semantic model's symbols.
I changed the code to search for the correct binding by directly
iterating through the bindings. Since we know exactly which binding we
want, this should find the loop variable without any complications.
This PR introduces three changes to `D403`, which has to do with
capitalizing the first word in a docstring.
1. The diagnostic and fix now skip leading whitespace when determining
what counts as "the first word".
2. The name has been changed to `first-word-uncapitalized` from
`first-line-capitalized`, for both clarity and compliance with our rule
naming policy.
3. The diagnostic message and documentation have been modified slightly
to reflect this.
Closes #14890
Fixes #14969.
The issue was that this line:
```rust
let from_assign_to_loop = TextRange::new(binding_stmt.end(), for_stmt.start());
```
was not safe if the binding was after the target. The only way (at least
that I can think of) this can happen is if they are in different scopes,
so it now checks for that before checking if there are usages between
the two.
## Summary
The summary is misleading, as are the
`whitespace-after-open-bracket` and `whitespace-before-close-bracket`
names: the rules cover not only brackets but also parentheses and braces.
This PR doesn't change the names, but aligns the documentation with the
actual behaviour.
## Test Plan
No test (documentation).
## Summary
Many core Airflow features have been deprecated and moved to Airflow
Providers. Since users might need to install an additional package (e.g.,
`apache-airflow-providers-fab==1.0.0`), a separate rule (AIR303) is
created for this.
As some of the changes only relate to a module/package being moved, instead
of listing out all the functions, variables, and classes in a module or
a package, the rule warns the user to import from the new path instead of the
specific name.
The following are the ones that have been moved to
`apache-airflow-providers-fab==1.0.0`:
* module moved
* `airflow.api.auth.backend.basic_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.basic_auth`
* `airflow.api.auth.backend.kerberos_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.kerberos_auth`
* `airflow.auth.managers.fab.api.auth.backend.kerberos_auth` →
`airflow.providers.fab.auth_manager.api.auth.backend.kerberos_auth`
* `airflow.auth.managers.fab.security_manager.override` →
`airflow.providers.fab.auth_manager.security_manager.override`
* names (e.g., functions, classes) moved
* `airflow.www.security.FabAirflowSecurityManagerOverride` →
`airflow.providers.fab.auth_manager.security_manager.override.FabAirflowSecurityManagerOverride`
* `airflow.auth.managers.fab.fab_auth_manager.FabAuthManager` →
`airflow.providers.fab.auth_manager.security_manager.FabAuthManager`
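A rough sketch of what the rule reports (import paths taken from the list above):
```python
# Flagged: moved to the FAB provider in Airflow 3.0
from airflow.www.security import FabAirflowSecurityManagerOverride

# Suggested import path instead
from airflow.providers.fab.auth_manager.security_manager.override import (
    FabAirflowSecurityManagerOverride,
)
```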
## Test Plan
A test fixture has been included for the rule.
## Summary
Closes https://github.com/astral-sh/ruff/issues/14892, by adding
`sqlmodel.SQLModel` to the list of classes with default copy semantics.
## Test Plan
Added a test into `RUF012.py` containing the example from the original
issue.
## Summary
`PTH210` renamed to `invalid-pathlib-with-suffix` and extended to check for `.with_suffix(".")`. This caused the fix availability to be downgraded to "Sometimes", since there is no fix offered in this case.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Dylan <53534755+dylwil3@users.noreply.github.com>
## Summary
Add replacement fixes for deprecated arguments of a DAG.
Ref: #14582, #14626
## Test Plan
Diff was verified and snapshots were updated.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Part 1 of the big change introduced in #14828. This temporarily causes
all fixes for `round(...)` to be considered unsafe, but they will
eventually be enhanced.
## Test Plan
`cargo nextest run` and `cargo insta test`.
## Summary
Closes #11243. Fix `pytest-parametrize-names-wrong-type (PT006)` to edit
both `argnames` and `argvalues` if both of them are single-element
tuples/lists.
```python
# Before fix
@pytest.mark.parametrize(("x",), [(1,), (2,)])
def test_foo(x):
    ...


# After fix:
@pytest.mark.parametrize("x", [1, 2])
def test_foo(x):
    ...
```
## Test Plan
New test cases
This PR introduces three changes to the diagnostic and fix behavior
(still under preview) for [boolean-chained-comparison
(PLR1716)](https://docs.astral.sh/ruff/rules/boolean-chained-comparison/#boolean-chained-comparison-plr1716).
1. We now offer a _fix_ in the case of parenthesized expressions like
`(a < b) and b < c`. The fix will merge the chains of comparisons and
then balance parentheses by _adding_ parentheses to one side of the
expression.
2. We now trigger a diagnostic (and fix) in the case where some
comparisons have multiple comparators like `a < b < c and c < d`.
3. When adjacent comparators are parenthesized, we prefer the left
parenthesization and apply the replacement to the whole parenthesized
range. So, for example, `a < (b) and ((b)) < c` becomes `a < (b) < c`.
While these seem like somewhat disconnected changes, they are actually
related. If we only offered (1), then we would see the following fix
behavior:
```diff
- (a < b) and b < c and ((c < d))
+ (a < b < c) and ((c < d))
```
This is because the fix which adds parentheses to the first pair of
comparisons overlaps with the fix that removes the `and` between the
second two comparisons. So the latter fix is deferred. However, the
latter fix does not get a second chance because, upon the next lint
iteration, there is no violation of `PLR1716`.
Upon adopting (2), however, both fixes occur by the time ruff completes
several iterations and we get:
```diff
- (a < b) and b < c and ((c < d))
+ ((a < b < c < d))
```
Finally, (3) fixes a previously unobserved bug wherein the autofix for
`a < (b) and b < c` used to result in `a<(b<c` which gives a syntax
error. It could in theory have been fixed in a separate PR, but seems to
be on theme here.
----------
- Closes #13524
- (1), (2), and (3) are implemented in separate commits for ease of
review and modification.
- Technically a user can trigger an error in ruff (by reaching max
iterations) if they have a humongous boolean chained comparison with
differing parentheses levels.
## Summary
Minor change to the documentation of the COM818 rule. There was a block
called “In the event that a tuple is intended”, but the suggested change
did not produce a tuple.
## Test Plan
```python
>>> import json
>>> (json.dumps({"bar": 1}),) # this is a tuple
('{"bar": 1}',)
>>> (json.dumps({"bar": 1})) # not a tuple
'{"bar": 1}'
```
Improves error messages for [except*](https://peps.python.org/pep-0654/)
(Rules: B025, B029, B030, B904)
Example python snippet:
```python
try:
    a = 1
except* ValueError:
    a = 2
except* ValueError:
    a = 2

try:
    pass
except* ():
    pass

try:
    pass
except* 1:  # error
    pass

try:
    raise ValueError
except* ValueError:
    raise UserWarning
```
Error messages
Before:
```
$ ruff check --select=B foo.py
foo.py:6:9: B025 try-except block with duplicate exception `ValueError`
foo.py:11:1: B029 Using `except ():` with an empty tuple does not catch anything; add exceptions to handle
foo.py:16:9: B030 `except` handlers should only be exception classes or tuples of exception classes
foo.py:22:5: B904 Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling
Found 4 errors.
```
After:
```
$ ruff check --select=B foo.py
foo.py:6:9: B025 try-except* block with duplicate exception `ValueError`
foo.py:11:1: B029 Using `except* ():` with an empty tuple does not catch anything; add exceptions to handle
foo.py:16:9: B030 `except*` handlers should only be exception classes or tuples of exception classes
foo.py:22:5: B904 Within an `except*` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling
Found 4 errors.
```
Closes https://github.com/astral-sh/ruff/issues/14791
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They have been deprecated in 2.x, but the removal causes
incompatibilities that we want to detect. This PR deprecates the
following names.
* in `DAG`
* `sla_miss_callback` was removed
* in `airflow.operators.trigger_dagrun.TriggerDagRunOperator`
* `execution_date` was removed
* in `airflow.operators.weekday.DayOfWeekSensor`,
`airflow.operators.datetime.BranchDateTimeOperator` and
`airflow.operators.weekday.BranchDayOfWeekOperator`
* `use_task_execution_day` was removed in favor of
`use_task_logical_date`
The full list of rules we will extend is tracked in
https://github.com/apache/airflow/issues/44556
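As a rough sketch (argument names are taken from the list above; the callback and dates are placeholders):
```python
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(dag_id="example", sla_miss_callback=lambda *args: None) as dag:  # flagged: removed in 3.0
    TriggerDagRunOperator(
        task_id="trigger",
        trigger_dag_id="other_dag",
        execution_date="2024-01-01",  # flagged: removed in 3.0
    )
```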
## Test Plan
A test fixture is included in the PR.
Closes: #14676
I think the consensus generally was to keep the rule as-is, but expand
the docs.
## Summary
Expands the docs for TC006 with an explanation for why the type
expression is always quoted, including mention of another potential
benefit to this style.
When fixing an invalid escape sequence in an f-string, each f-string
element is analyzed for valid escape characters prior to creating the
diagnostic and fix. This allows us to safely prefix with `r` to create a
raw string if no valid escape characters were found anywhere in the
f-string, and otherwise insert backslashes.
This fixes a bug in the original implementation: each "f-string part"
was treated separately, so it was not possible to tell whether a valid
escape character was or would be used elsewhere in the f-string.
Progress towards #11491 but format specifiers are not handled in this
PR.
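A sketch of the resulting behaviour (the fix output is paraphrased, not the exact Ruff message):
```python
x = 1

f"\d{x}"      # no valid escape anywhere in the f-string -> fix can prepend `r`: rf"\d{x}"
f"\d \n {x}"  # `\n` is a valid escape elsewhere -> fix inserts a backslash: f"\\d \n {x}"
```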
## Summary
This PR makes changes to the `AIR001` rule as per
https://github.com/astral-sh/ruff/pull/14627#discussion_r1860212307.
Additionally,
* Avoid returning the `Diagnostic` and update the checker in the rule
logic for consistency
* Remove test case for different keyword position (I don't think it's
required here)
## Test Plan
Add test cases for multiple operators from various modules.
## Summary
Just some minor follow-ups to the recently merged RUF052 rule, which was
added in bf0fd04:
- Some small tweaks to the docs
- A minor code-style nit
- Some more tests for my peace of mind, just to check that the new
methods on the semantic model are working correctly
I'm adding the "internal" label as this doesn't deserve a changelog
entry. RUF052 is a new rule that hasn't been released yet.
## Test Plan
`cargo test -p ruff_linter`
This PR extends the Decimal parsing used in [verbose-decimal-constructor
(FURB157)](https://docs.astral.sh/ruff/rules/verbose-decimal-constructor/)
to better handle non-finite `Decimal` objects, avoiding some false
negatives.
Closes #14587
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
- Check if `hashlib` and `crypt` imports have been seen for `FURB181`
and `S324`
- Mark the fix for `FURB181` as safe: I think it was accidentally marked
as unsafe in the first place. The rule does not support user-defined
classes as the "fix safety" section suggests.
- Removed `hashlib._Hash`, as it's not part of the `hashlib` module.
## Test Plan
Updated the test snapshots
## Summary
This PR implements a new rule discussed
[here](https://github.com/astral-sh/ruff/discussions/14449).
In short, it searches for assert messages which were unintentionally
used as an expression to be matched against.
## Test Plan
`cargo test` and review of `ruff-ecosystem`
## Summary
Fix #14525
## Test Plan
New test cases
---------
Signed-off-by: harupy <hkawamura0130@gmail.com>
## Summary
Resolves #14289
The documentation for B028 (`no-explicit-stacklevel`) is updated to be
clearer.
---------
Co-authored-by: dylwil3 <dylwil3@gmail.com>
This PR adds a sometimes-available, safe autofix for [unraw-re-pattern
(RUF039)](https://docs.astral.sh/ruff/rules/unraw-re-pattern/#unraw-re-pattern-ruf039),
which prepends an `r` prefix. It is used only when the string in
question has no backslashes (and also does not have a `u` prefix, since
that would cause a syntax error).
Closes #14527
Notes:
- Test fixture unchanged, but snapshot changed to include fix messages.
- This fix is automatically available only in preview, since the rule
itself is in preview
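A sketch of when the new fix applies (illustrative patterns only):
```python
import re

re.compile("foo|bar")  # flagged; fix available: re.compile(r"foo|bar")
re.compile("\\d+")     # flagged; contains a backslash, so no automatic fix
```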
## Summary
- Expand some docs where they're unclear about the motivation, or assume
some knowledge that hasn't been introduced yet
- Add more links to external docs
- Rename PYI063 from `PrePep570PositionalArgument` to
`Pep484StylePositionalOnlyParameter`
- Rename the file `parenthesize_logical_operators.rs` to
`parenthesize_chained_operators.rs`, since the rule is called
`ParenthesizeChainedOperators`, not `ParenthesizeLogicalOperators`
## Test Plan
`cargo test`
## Summary
These rules were implemented in January, have been very stable, and have
no open issues about them. They were highly requested by the community
prior to being implemented. Let's stabilise them!
## Test Plan
Ecosystem check on this PR.
## Summary
Follow-up to #14371, this PR simplifies the visitor logic for list
expressions to remove the state management. We just need to make sure
that we visit the nested expressions using the `QuoteAnnotator` and not
the `Generator`. This is similar to what's being done for binary
expressions.
As per the
[grammar](https://typing.readthedocs.io/en/latest/spec/annotations.html#grammar-token-expression-grammar-annotation_expression),
list expressions can be present which can contain other type expressions
(`Callable`):
```
| <Callable> '[' <Concatenate> '[' (type_expression ',')+
      (name | '...') ']' ',' type_expression ']'
  (where name must be a valid in-scope ParamSpec)
| <Callable> '[' '[' maybe_unpacked (',' maybe_unpacked)*
      ']' ',' type_expression ']'
```
## Test Plan
`cargo insta test`
## Summary
Resolves #12616.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Resolves #14378.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Implements `redundant-bool-literal`
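Assuming the rule targets boolean `Literal` annotations, a minimal sketch of what it flags:
```python
from typing import Literal

def f(flag: Literal[True, False]) -> None:  # flagged; `bool` is equivalent
    ...
```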
## Test Plan
`cargo test`
The ecosystem results are all correct, but for `Airflow` the rule is not
relevant due to the use of overloading (and is marked as unsafe
correctly).
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR adds autofix for `redundant-numeric-union` (`PYI041`)
There are some comments below to explain the reasoning behind some
choices that might help review.
Resolves part of https://github.com/astral-sh/ruff/issues/14185.
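A rough sketch of the fix, assuming the standard `int | float` → `float` simplification (signatures are illustrative):
```python
# Before the fix (flagged: `float` already accepts `int`)
def scale(x: int | float) -> float:
    return x * 2.0

# After the autofix
def scale(x: float) -> float:
    return x * 2.0
```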
## Test Plan
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This PR corrects the handling of list expressions in the `Visitor`
implementation of `QuoteAnnotator` in `flake8_type_checking::helpers`.
Closes #14368
Fix: #13934
## Summary
The current implementation has a bug when the annotation contains a
string with both single and double quotes.
TL;DR: I think these cases are rarer than other use cases of `Literal`,
so instead of fixing them we skip the fix in those cases.
One of the problematic cases:
```
from typing import Literal
from third_party import Type


def error(self, type1: Type[Literal["'"]]):
    pass
```
The outcome is:
```
- def error(self, type1: Type[Literal["'"]]):
+ def error(self, type1: "Type[Literal[''']]"):
```
While it should be:
```
"Type[Literal['\'']"
```
The solution in this case is to check whether the string contains any
quotes matching the quote style we want to use for this `Literal`
parameter, and escape them.
Also this case is not uncommon to have:
<https://grep.app/search?current=2&q=Literal["'>
But this can get more complicated for example in case of:
```
- def error(self, type1: Type[Literal["\'"]]):
+ def error(self, type1: "Type[Literal[''']]"):
```
Here we escaped the inner quote, but in the generated annotation it gets
removed. Then we flip the quote style of the `Literal` parameter and the
formatting is wrong.
In this case the solution is more complicated.
1. When generating the string of the source code, preserve the backslash.
2. After we have the annotation, check that there isn't any escaped quote
of the same type we want to use for the `Literal` parameter. In this case,
check if we have any `'` without `\` before them. This can get more
complicated since there can be multiple backslashes, so checking for only
`\'` won't be enough.
Another problem is when the string contains `\n`. In case of
`Type[Literal["\n"]]` we generate `'Type[Literal["\n"]]'` and both
pyright and mypy reject this annotation.
https://pyright-play.net/?code=GYJw9gtgBALgngBwJYDsDmUkQWEMoAySMApiAIYA2AUAMaXkDOjUAKoiQNqsC6AXFAB0w6tQAmJYLBKMYAfQCOAVzCk5tMChjlUjOQCNytANaMGjABYAKRiUrAANLA4BGAQHJ2CLkVIVKnABEADoogTw87gCUfNRQ8VAITIyiElKksooqahpaOih6hiZmTNa29k7w3m5sHJy%2BZFRBoeE8MXEJScxAA
## Test Plan
I added test cases for the original code in the reported issue and two
more cases for backslash and new line.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Follow-up to #14287: when checking that `name` is the same as `as_name`
in `import name as as_name`, we do not need to first do an early return
if `'.'` is found in `name`.
This PR handles a panic that occurs when applying unsafe fixes if a user
inserts a required import (I002) that has a "useless alias" in it, like
`import numpy as numpy`, and also selects PLC0414 (useless-import-alias).
In this case, the fixes alternate between adding the required import
statement, then removing the alias, until the recursion limit is
reached. See linked issue for an example.
Closes #14283
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Resolves #13217.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR improves the fix for `PYI055` to be able to handle nested and
mixed type unions.
It also marks the fix as unsafe when comments are present.
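A sketch of the kind of union the improved fix can now rewrite (the annotations are illustrative):
```python
from typing import Union

# Before: nested/mixed unions of `type[...]`
def f(x: Union[type[int], type[str]] | type[float]) -> None: ...

# After the fix: a single `type[...]` over the union
def f(x: type[int | str | float]) -> None: ...
```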
## Test Plan
## Summary
`pytest-raises-too-broad (PT011)` should be raised when
`expected_exception` is provided as a keyword argument.
```python
def test_foo():
    with pytest.raises(ValueError):  # raises PT011
        raise ValueError("Can't divide 1 by 0")

    # This is minor but a valid pytest.raises call
    with pytest.raises(expected_exception=ValueError):  # doesn't raise PT011 but should
        raise ValueError("Can't divide 1 by 0")
```
`pytest.raises` doc:
https://docs.pytest.org/en/8.3.x/reference/reference.html#pytest.raises
## Test Plan
Unit tests
Signed-off-by: harupy <hkawamura0130@gmail.com>
## Summary
Related to #970. Implement [`shallow-copy-environ /
W1507`](https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/shallow-copy-environ.html).
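For reference, a sketch of the pattern the upstream rule flags:
```python
import copy
import os

env = copy.copy(os.environ)  # flagged: copy.copy returns a shallow copy of os.environ
env = os.environ.copy()      # preferred: returns a plain dict copy
```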
## Test Plan
Unit test
---------
Co-authored-by: Simon Brugman <sbrugman@users.noreply.github.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The implicit namespace package rule currently fails to detect cases like
the following:
```text
foo/
├── __init__.py
└── bar/
    └── baz/
        └── __init__.py
```
The problem is that we detect a root at `foo`, and then an independent
root at `baz`. We _would_ detect that `bar` is an implicit namespace
package, but it doesn't contain any files! So we never check it, and
have no place to raise the diagnostic.
This PR adds detection for these kinds of nested packages, and augments
the `INP` rule to flag the `__init__.py` file above with a specialized
message. As a side effect, I've introduced a dedicated `PackageRoot`
struct which we can pass around in lieu of Yet Another `Path`.
For now, I'm only enabling this in preview (and the approach doesn't
affect any other rules). It's a bug fix, but it may end up expanding the
rule.
Closes https://github.com/astral-sh/ruff/issues/13519.
## Summary
It's only safe to enforce the `x in "1234567890"` case if `x` is exactly
one character, since the set on the right has been reordered as compared
to `string.digits`. We can't know if `x` is exactly one character unless
it's a literal. And if it's a literal, well, it's kind of silly code in
the first place?
Closes https://github.com/astral-sh/ruff/issues/13802.
## Summary
Fix `await-outside-async` to allow `await` at the top-level scope of a
notebook.
```python
# foo.ipynb
await asyncio.sleep(1) # should be allowed
```
## Test Plan
A unit test
## Summary
Resolves #13833.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This PR accounts for further subtleties in `Decimal` parsing:
- Strings which are empty modulo underscores and surrounding whitespace
are skipped
- `Decimal("-0")` is skipped
- `Decimal("{integer literal that is longer than 640 digits}")` is
skipped (see linked issue for explanation)
NB: The snapshot did not need to be updated since the new test cases are
"Ok" instances and added below the diff.
Closes#14204
## Summary
Implementation for one of the rules in
https://github.com/astral-sh/ruff/issues/1348
Refurb deals only with classes with a single base; however, the rule
is valid for any number of bases.
(`str, Enum` is common prior to `StrEnum`.)
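A sketch of the pattern in question, assuming the rule suggests `StrEnum` on Python 3.11+ (class names are illustrative):
```python
from enum import Enum, StrEnum

class Color(str, Enum):  # flagged: `str` + `Enum` bases
    RED = "red"

class Shade(StrEnum):    # the modern equivalent
    DARK = "dark"
```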
## Test Plan
`cargo test`
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Flake8-builtins provides two checks for arguments (really, parameters)
of a function shadowing builtins: A002 checks function definitions, and
A006 checks lambda expressions. This PR ensures that A002 is restricted
to functions rather than lambda expressions.
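A small sketch of the split in behaviour after this change:
```python
def rename(list, dict):      # A002: function parameters shadowing builtins
    return list, dict

handler = lambda type: type  # reported as A006 (lambda), not A002
```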
Closes #14135.
FURB157 suggests replacing expressions like `Decimal("123")` with
`Decimal(123)`. This PR extends the rule to cover cases where the input
string to `Decimal` can be easily transformed into an integer literal.
For example:
```python
Decimal("1__000") # fix: `Decimal(1000)`
```
Note: we do not implement the full decimal parsing logic from CPython on
the grounds that certain acceptable string inputs to the `Decimal`
constructor may be presumed purposeful on the part of the developer. For
example, as in the linked issue, `Decimal("١٢٣")` is valid and equal to
`Decimal(123)`, but we do not suggest a replacement in this case.
Closes #13807
## Summary
The `commented-out-code` rule (ERA001) from `eradicate` is currently
flagging a very common idiom that marks Python strings as another
language, to help with syntax highlighting:

This PR adds this idiom to the list of allowed exceptions to the rule.
## Test Plan
I've added some additional test cases.
## Summary
Like https://github.com/astral-sh/ruff/pull/14063, but ensures that we
catch cases like `{1, True}` in which the items hash to the same value
despite not being identical.
## Summary
Closes https://github.com/astral-sh/ruff/issues/13944
## Test Plan
Standard snapshot testing
flake8-simplify surprisingly only has a single test case
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR updates the fix generation logic for auto-quoting an annotation
to generate an edit even when there's a quote character present.
The logic uses the visitor pattern, maintaining its state on where it
is and generating the string value one node at a time. This can be
considered a specialized form of `Generator`. The state we need to
maintain is whether we're currently inside a `typing.Literal` or
`typing.Annotated`, because the string value in those types should not be
un-quoted, i.e., `Generic[Literal["int"]]` should become
`"Generic[Literal['int']]"`; the quotes inside the `Literal` should be
preserved.
Fixes: https://github.com/astral-sh/ruff/issues/9137
## Test Plan
Add various test cases to validate this change, validate the snapshots.
There are no ecosystem changes to go through.
---------
Signed-off-by: Shaygan <hey@glyphack.com>
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Remove unnecessary uses of `.as_ref()`, `.iter()`, `&**` and similar, mostly in situations when iterating over variables. Many of these changes are only possible following #13826, when we bumped our MSRV to 1.80: several useful implementations on `&Box<[T]>` were only stabilised in Rust 1.80. Some of these changes we could have done earlier, however.
## Summary
This pull request resolves some rule thrashing identified in #12427 by
allowing for unused arguments when raising `NotImplementedError` with a
variable message, per [this
comment](https://github.com/astral-sh/ruff/issues/12427#issuecomment-2384727468).
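A sketch of the pattern that is now permitted (names are illustrative):
```python
class Base:
    def run(self, config):  # `config` is unused, but no longer flagged
        msg = "subclasses must implement run()"
        raise NotImplementedError(msg)
```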
**Note**
This feels a little heavy-handed / edge-case-prone. So, to be clear, I'm
happy to scrap this code and just update the docs to communicate that
`abstractmethod` and friends should be used in this scenario (or
similar). Just let me know what you'd like done!
fixes: #12427
## Test Plan
I added a test-case to the existing `ARG.py` file and ran...
```sh
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/flake8_unused_arguments/ARG.py --no-cache --preview --select ARG002
```
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Treat async generators as "await" in ASYNC100.
Fixes #13637
## Test Plan
Updated snapshot
## Summary
Resolves https://github.com/astral-sh/ruff/issues/9962 by allowing a
configuration setting `allowed-unused-imports`
TODO:
- [x] Figure out the correct name and place for the setting; currently,
I have added it top level.
- [x] The comparison is pretty naive. I tried using `glob::Pattern` but
couldn't get it to work in the configuration.
- [x] Add tests
- [x] Update documentations
## Test Plan
`cargo test`
Closes https://github.com/astral-sh/ruff/issues/13545
As described in the issue, we move comments before the inner `if`
statement to before the newly constructed `elif` statement (previously
`else`).
## Summary
Fix #13602
Currently, `UP043` only applies to `typing.Generator`, but it should also
support `collections.abc.Generator`.
This update ensures `UP043` correctly handles both
`collections.abc.Generator` and `collections.abc.AsyncGenerator`.
### UP043
> `UP043`
> Python 3.13 introduced the ability for type parameters to specify
> default values. As such, the default type arguments for some types in
> the standard library (e.g., Generator, AsyncGenerator) are now optional.
> Omitting type parameters that match the default values can make the
> code more concise and easier to read.
```py
Generator[int, None, None] -> Generator[int]
```
## Summary
...and remove periods from messages that don't span more than a single
sentence.
This is more consistent with how we present user-facing messages in uv
(which has a defined style guide).
## Summary
There was a typo in the links of the docs of PTH116, where Path.stat
used to link to Path.group.
Another rule, PTH202, does it correctly:
ec72e675d9/crates/ruff_linter/src/rules/flake8_use_pathlib/rules/os_path_getsize.rs (L33)
This PR only fixes a one-word typo.
## Test Plan
I did not test that the doc generation framework picked up these
changes; I assume it will do so successfully.
Related to https://github.com/astral-sh/ruff/issues/13524
Doesn't offer a valid fix, opting to instead just not offer a fix at
all. If someone points me to a good way to handle parentheses here I'm
down to try to fix the fix separately, but it looks quite hard.
Closes https://github.com/astral-sh/ruff/issues/13266
Avoids false negatives for shadowed bindings that aren't actually
references to the loop variable. There are some shadowed bindings we
need to support still, e.g., `del` requires the loop variable to exist.
## Summary
This PR adds an experimental Ruff subcommand to generate dependency
graphs based on module resolution.
A few highlights:
- You can generate either dependency or dependent graphs via the
`--direction` command-line argument.
- Like Pants, we also provide an option to identify imports from string
literals (`--detect-string-imports`).
- Users can also provide additional dependency data via the
`include-dependencies` key under `[tool.ruff.import-map]`. This map uses
file paths as keys, and lists of strings as values. Those strings can be
file paths or globs.
The dependency resolution uses the red-knot module resolver which is
intended to be fully spec compliant, so it's also a chance to expose the
module resolver in a real-world setting.
The CLI is, e.g., `ruff graph build ../autobot`, which will output a
JSON map from file to files it depends on for the `autobot` project.
## Summary
Follow-up from #13268, this PR updates the test case to use
`assert_snapshot` now that the output is limited to only include the
rules with diagnostics.
## Test Plan
`cargo insta test`
## Summary
Follow-up to #13147, this PR implements the `AstNode` for `Identifier`.
This makes it easier to create the `NodeKey` in red knot because it uses
a generic method to construct the key from `AnyNodeRef` and is important
for definitions that are created only on identifiers instead of
`ExprName`.
## Test Plan
`cargo test` and `cargo clippy`
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [quick-junit](https://redirect.github.com/nextest-rs/quick-junit) |
workspace.dependencies | minor | `0.4.0` -> `0.5.0` |
---
### Release Notes
<details>
<summary>nextest-rs/quick-junit (quick-junit)</summary>
###
[`v0.5.0`](https://redirect.github.com/nextest-rs/quick-junit/blob/HEAD/CHANGELOG.md#050---2024-09-01)
[Compare
Source](https://redirect.github.com/nextest-rs/quick-junit/compare/quick-junit-0.4.0...quick-junit-0.5.0)
##### Changed
- The `Output` type, which strips invalid XML characters from a string,
has been renamed to
`XmlString`.
- All internal storage now uses `XmlString` rather than `String`.
</details>
---
### Configuration
📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
The `SequenceIndexVisitor` currently does not recurse into
subexpressions of subscripts when searching for subscript accesses that
would trigger this rule. That means that we don't currently detect
violations of the rule on snippets like this:
```py
data = {"a": 1, "b": 2}
column_names = ["a", "b"]
for index, column_name in enumerate(column_names):
    _ = data[column_names[index]]
```
Fixes #13183
## Test Plan
`cargo test -p ruff_linter`
## Summary
Extends deletions for RUF100 to delete trailing text from `noqa`
directives, while preserving any subsequent comments on the same line.
In cases where it deletes a comment up to another comment on the same
line, the whitespace between them is now shown as part of the autofix in
the diagnostic as well. Leading whitespace before the removed comment is
not, though.
Fixes #12251
## Test Plan
`cargo test`
## Summary
Now that Ruff provides a formatter, there is no need to rely on Black to
check that the docs are formatted correctly in
`check_docs_formatted.py`. This PR swaps out Black for the Ruff
formatter and updates inconsistencies between the two.
This PR will be a precursor to another PR
([branch](https://github.com/calumy/ruff/tree/format-pyi-in-docs)),
updating the `check_docs_formatted.py` script to check for pyi files,
fixing #11568.
## Test Plan
- CI to check that the docs are formatted correctly using the updated
script.
## Summary
As suggested by @MichaReiser in
https://github.com/astral-sh/ruff/pull/12886#pullrequestreview-2237679793,
this adds an exemption to `RUF027` for FastAPI paths, which require
template strings rather than eagerly evaluated f-strings.
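A sketch of the false positive being exempted (assuming a standard FastAPI app; the route and names are illustrative):
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")  # a path template, not a missing f-string prefix
async def read_item(item_id: int):
    return {"item_id": item_id}
```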
## Test Plan
I added a fixture that causes Ruff to emit a false-positive error on
`main` but no longer does with this PR.
## Summary
This PR is a pure refactor to simplify some of the logic for `RUF027`.
This will make it easier to file some followup PRs to help reduce the
false positives from this rule. I'm separating the refactor out into a
separate PR so it's easier to review, and so I can double-check from the
ecosystem report that this doesn't have any user-facing impact.
## Test Plan
`cargo test -p ruff_linter --lib`
This adds the `fast-api-unused-path-parameter` lint rule, as described
in #12632.
I'm still pretty new to Rust, so the code can probably be improved; feel
free to tell me if there are any changes I should make.
Also, I needed to add the `add_parameter` edit function; not sure if it
was in the scope of the PR or if I should've made another one.
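A rough sketch of what the new rule flags (route and names are illustrative):
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")  # flagged: `item_id` does not appear in the signature
async def read_item():
    return {"msg": "no item id available here"}
```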
List and set comprehensions using `async for` cannot be replaced with
underlying generators; this PR modifies C419 to skip such
comprehensions.
Closes #12891.
## Summary
Occasionally, we receive bug reports that imports in `src` directories
aren't correctly detected. The root of the problem is that we default to
`src = ["."]`, so users have to set `src = ["src"]` explicitly. This PR
extends the default to cover _both_ of them: `src = [".", "src"]`.
Closes https://github.com/astral-sh/ruff/issues/12454.
## Test Plan
I replicated the structure described in
https://github.com/astral-sh/ruff/issues/12453, and verified that the
imports were considered sorted, but that adding `src = ["."]` showed an
error.
## Summary
This PR adds support for VS Code specific cell metadata to consider when
collecting valid code cells.
For context, Ruff only runs on valid code cells. These are the code
cells that don't contain cell magics. Previously, Ruff only used the
notebook's metadata to determine whether it's a Python notebook. But, in
VS Code, a notebook's preferred language might be Python but it could
still contain code cells for other languages. This can be determined
with the `metadata.vscode.languageId` field.
### References:
* https://code.visualstudio.com/docs/languages/identifiers
* e6c009a3d4/extensions/ipynb/src/serializers.ts (L104-L107)
*
e6c009a3d4/extensions/ipynb/src/serializers.ts (L117-L122)
This brings us one step closer to fixing #12281.
## Test Plan
Add test cases for `is_valid_python_code_cell` and an integration test
case which showcase running it end to end. The test notebook contains a
JavaScript code cell and a Python code cell.
## Summary
This PR fixes a bug in the semantic model where it would evaluate the
default parameter value in the type parameter scope. For example,
```py
def foo[T1: int](a = T1):
    pass
```
Here, the `T1` in `a = T1` is undefined but Ruff doesn't flag it
(https://play.ruff.rs/ba2f7c2f-4da6-417e-aa2a-104aa63e6d5e).
The fix here is to evaluate the default parameter value in the
_enclosing_ scope instead.
## Test Plan
Add a test case which includes the above code under `F821`
(`undefined-name`) and validate the snapshot.
## Summary
See #12703. This only addresses the first bullet point, adding a space
after the comma in the suggested fix from list/tuple to string.
## Test Plan
Updated the snapshots and compared.
In most cases we should suggest a ternary operator, but there are three
edge cases where a binary operator is more appropriate.
Given an if-else block of the form
```python
if test:
    target_var = body_value
else:
    target_var = else_value
```
This PR updates the check for SIM108 to the following:
- If `test == body_value` and preview enabled, suggest to replace with
`target_var = test or else_value`
- If `test == not body_value` and preview enabled, suggest to replace
with `target_var = body_value and else_value`
- If `not test == body_value` and preview enabled, suggest to replace
with `target_var = body_value and else_value`
- Otherwise, suggest to replace with `target_var = body_value if test
else else_value`
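A concrete sketch of the first case above (preview behaviour; names are illustrative):
```python
foo, bar = 0, 42

# Before
if foo:
    x = foo
else:
    x = bar

# Suggested in preview, since the test and the body value are identical:
x = foo or bar
```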
Closes #12189.
## Summary
Adding parentheses to a tuple in a subscript with elements that include
slice expressions causes a syntax error. For example, `d[(1,2,:)]` is a
syntax error.
So, when `lint.ruff.parenthesize-tuple-in-subscript = true` and the
tuple includes a slice expression, we skip this check and fix.
Closes #12766.
> ~Builtins are also more efficient than `for` loops.~
Let's not promise performance because this code transformation does not
deliver.
Benchmark written by @dcbaker
> `any()` seems to be about 1/3 as fast (Python 3.11.9, NixOS):
```python
loop = 'abcdef'.split()
found = 'f'
nfound = 'g'


def test1():
    for x in loop:
        if x == found:
            return True
    return False


def test2():
    return any(x == found for x in loop)


def test3():
    for x in loop:
        if x == nfound:
            return True
    return False


def test4():
    return any(x == nfound for x in loop)


if __name__ == "__main__":
    import timeit

    print('for loop (found)    :', timeit.timeit(test1))
    print('for loop (not found):', timeit.timeit(test3))
    print('any() (found)       :', timeit.timeit(test2))
    print('any() (not found)   :', timeit.timeit(test4))
```
```
for loop (found) : 0.051076093994197436
for loop (not found): 0.04388196699437685
any() (found) : 0.15422860698890872
any() (not found) : 0.15568504799739458
```
I have retested with longer lists and on multiple Python versions with
similar results.
Implements the new fixable lint rule `RUF031` which checks for the use or omission of parentheses around tuples in subscripts, depending on the setting `lint.ruff.parenthesize-tuple-in-getitem`. By default, the use of parentheses is considered a violation.
## Summary
Resolves #12636
Consider docstrings which begin with the word "Returns" as having
satisfactorily documented their returns. For example,
```python
def f():
"""Returns 1."""
return 1
```
is valid.
## Test Plan
Added example to test fixture.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Removes set comprehension as a violation for `sum` when checking `C419`,
because set comprehension may de-duplicate entries in a generator,
thereby modifying the value of the sum.
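A small worked example of why the set-comprehension case is excluded:
```python
values = [1, 2, 2, 3]

sum([x for x in values])  # 8 -> safe to rewrite as sum(x for x in values)
sum({x for x in values})  # 6 -- the set drops the duplicate 2, so rewriting to a
                          # generator would change the result to 8
```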
Closes #12690.
## Summary
Make it a violation of `C409` to call `tuple` with a list or set
comprehension, and
implement the (unsafe) fix of calling the `tuple` with the underlying
generator instead.
Closes #12648.
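A sketch of the new violation and its (unsafe) fix:
```python
# Flagged
tuple([x * 2 for x in range(3)])

# After the fix: pass the underlying generator instead
tuple(x * 2 for x in range(3))
```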
## Test Plan
Test fixture updated, cargo test, docs checked for updated description.
## Summary
Adds autofix for `RUF007`
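For context, a sketch of the kind of rewrite RUF007 (`zip-instead-of-pairwise`) suggests, with illustrative data:
```python
import itertools

letters = ["a", "b", "c", "d"]

pairs = zip(letters, letters[1:])    # flagged
pairs = itertools.pairwise(letters)  # what the new autofix produces
```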
## Test Plan
`cargo test`, however I get errors for `test resolver::tests::symlink
... FAILED` which seems to not be my fault
## Summary
Fixes #12630.
DOC501 and DOC502 now understand functions with constructs like this to
be explicitly raising `TypeError` (which should be documented in a
function's docstring):
```py
try:
    foo()
except TypeError:
    ...
    raise
```
I made an exception for `Exception` and `BaseException`, however.
Constructs like this are reasonably common, and I don't think anybody
would say that it's worth putting in the docstring that it raises "some
kind of generic exception":
```py
try:
    foo()
except BaseException:
    do_some_logging()
    raise
```
## Test Plan
`cargo test -p ruff_linter --lib`
## Summary
Please see
https://github.com/astral-sh/ruff/pull/12605#discussion_r1699957443 for
a description of the issue.
The way I fixed it is to get the *last* timeout item in the `with`, and
if it's an `async with` and there are items after it, then don't trigger
the lint.
## Test Plan
Updated the fixture with some more cases.
## Summary
There's still a problem here. Given:
```python
class Class():
    pass
# comment
# another comment
a = 1
```
We only add one newline before `a = 1` on the first pass, because
`max_preceding_blank_lines` is 1... We then add the second newline on
the second pass, so it ends up in the right state, but the logic is
clearly wonky.
Closes https://github.com/astral-sh/ruff/issues/11508.
## Summary
Extend `flake8-builtins` to imports, lambda arguments, and modules, to be
consistent with the original checker
[flake8_builtins](https://github.com/gforcada/flake8-builtins/blob/main/flake8_builtins.py).
Closes #12540
## Details
- Implement builtin-import-shadowing (A004)
- Stop tracking imports shadowing in builtin-variable-shadowing (A001)
in preview mode.
- Implement builtin-lambda-argument-shadowing (A005)
- Implement builtin-module-shadowing (A006)
- Add new option `linter.flake8_builtins.builtins_allowed_modules`
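A sketch of the kinds of shadowing the new checks target (the module name is hypothetical):
```python
# An import that shadows a builtin (builtin-import-shadowing)
from some_module import compile

# A lambda parameter that shadows a builtin (builtin-lambda-argument-shadowing)
first = lambda list: list[0]
```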
## Test Plan
cargo test
## Summary
If an import is marked as "required", we should never flag it as unused.
In practice, this is rare, since required imports are typically used for
`__future__` annotations, which are always considered "used".
Closes https://github.com/astral-sh/ruff/issues/12458.
## Summary
Right now, in the isort comment model, there's nowhere for trailing
comments on the _statement_ to go, as in:
```python
from mylib import (
    MyClient,
    MyMgmtClient,
)  # some comment
```
If the comment is on the _alias_, we do preserve it, because we attach
it to the alias, as in:
```python
from mylib import (
    MyClient,
    MyMgmtClient,  # some comment
)
```
Similarly, if the comment is trailing on an import statement
(non-`from`), we again attach it to the alias, because it can't be
parenthesized, as in:
```python
import foo # some comment
```
This PR adds logic to track and preserve those trailing comments.
We also no longer drop several other comments, like:
```python
from mylib import (
    # some comment
    MyClient
)
```
Closes https://github.com/astral-sh/ruff/issues/12487.
## Summary
This PR fixes a bug so that a syntax error is raised when an unparenthesized
generator expression is used as an argument to a call that has
more than one argument.
For reference, the grammar is:
```
primary:
    | ...
    | primary genexp
    | primary '(' [arguments] ')'
    | ...

genexp:
    | '(' ( assignment_expression | expression !':=') for_if_clauses ')'
```
The `genexp` requires the parentheses, as mentioned in the grammar. So,
the grammar for a call expression is either a name followed by a
generator expression or a name followed by a list of arguments. In the
former case, the parentheses are excluded because the generator
expression provides them, while in the latter case, the parentheses are
explicitly provided for a list of arguments, which means that the
generator expression requires its own parentheses.
This was discovered in https://github.com/astral-sh/ruff/issues/12420.
## Test Plan
Add test cases for valid and invalid syntax.
Make sure that the parser from CPython also raises this at the parsing
step:
```console
$ python3.13 -m ast parser/_.py
  File "parser/_.py", line 1
    total(1, 2, x for x in range(5), 6)
                ^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
$ python3.13 -m ast parser/_.py
  File "parser/_.py", line 1
    sum(x for x in range(10), 10)
        ^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
```
## Summary
Fix the panic reported in #12428, where a string would sometimes get split
within a character boundary. This bypasses the need to split the string.
This does not guarantee the correct formatting of the docstring, but
neither did the previous implementation.
Resolves#12428
## Test Plan
Test case added to fixture
## Summary
These are the first rules implemented as part of #458, but I plan to
implement more.
Specifically, this implements `docstring-missing-exception` which checks
for raised exceptions not documented in the docstring, and
`docstring-extraneous-exception` which checks for exceptions in the
docstring not present in the body.
## Test Plan
Test fixtures added for both google and numpy style.
## Summary
This PR updates the D301 rule to allow including escaped docstring quotes,
e.g. `\"""Foo.\"""` or `\"\"\"Bar.\"\"\"`, within a docstring.
Related issue: #12152
## Test Plan
Add more test cases to D301.py and update the snapshot file.
## Summary
This PR allows us to fix both expressions in `foo == "a" or foo == "b"
or ("c" != bar and "d" != bar)`, but limits the rule to consecutive
comparisons, following https://github.com/astral-sh/ruff/issues/7797.
I think this logic was _probably_ added because of
https://github.com/astral-sh/ruff/pull/12368 -- the intent being that
we'd replace the _entire_ expression.
## Summary
Add and implement a new rule for unnecessary default type arguments
under the `UP` category (`UP043`).
```py
# < py313
Generator[int, None, None]

# >= py313
Generator[int]
```
I think that as Python 3.13 develops, there might be more default type
arguments added besides `Generator` and `AsyncGenerator`. So, I made
this more flexible to accommodate future changes.
related issue: #12286
## Test Plan
snapshot included..!
## Summary
Pretty sure this should still be an error, but also, I think I added
this because of ecosystem CI? So want to see what pops up.
Closes https://github.com/astral-sh/ruff/issues/12164.
## Summary
This is the _intended_ default that PEP 597 _wants_, but it's not
backwards compatible. The fix is already unsafe, so it's better for us
to recommend the desired and expected behavior.
Closes https://github.com/astral-sh/ruff/issues/12069.
## Summary
I believe these should always bind more tightly -- e.g., in:
```python
for _ in bar(baz for foo in [1]):
pass
```
The inner `baz` and `foo` should be considered comprehension variables,
not for loop bindings.
We need to revisit this more holistically. In some of these cases,
`BindingKind` should probably be a flag, not an enum, since the values
aren't mutually exclusive. Separately, we should probably be more
precise in how we set it (e.g., by passing down from the parent rather
than sniffing in `handle_node_store`).
Closes https://github.com/astral-sh/ruff/issues/12339
## Summary
I don't know that there's more to do here. We could consider not raising
the violation at all for arguments, but that would have some false
negatives and could also be surprising to users.
Closes https://github.com/astral-sh/ruff/issues/12267.
## Summary
Ensures that, e.g., the following is not considered a
redefinition-without-use:
```python
import contextlib
foo = None
with contextlib.suppress(ImportError):
    from some_module import foo
```
Closes https://github.com/astral-sh/ruff/issues/12309.
## Summary
Closes https://github.com/astral-sh/ruff/issues/12291.
## Test Plan
```shell
❯ cargo run check ../uv/foo --select INP
/Users/crmarsh/workspace/uv/foo/bar/baz.py:1:1: INP001 File `/Users/crmarsh/workspace/uv/foo/bar/baz.py` is part of an implicit namespace package. Add an `__init__.py`.
Found 1 error.
```
## Summary
I don't fully understand the purpose of this. In #7905, it was just
copied over from the previous non-preview implementation. But it means
that (e.g.) we don't treat `type(self.foo)` as a type -- which is wrong.
Closes https://github.com/astral-sh/ruff/issues/12290.
## Summary
Update the name of `ASYNC109` to match
[upstream](https://flake8-async.readthedocs.io/en/latest/rules.html).
Also update the functionality to match upstream by supporting
additional context managers from `asyncio` and `anyio`. This doesn't
change any of the detection functionality, but recommends additional
context managers from `asyncio` and `anyio` depending on context.
Part of https://github.com/astral-sh/ruff/issues/12039.
## Test Plan
Added fixture for asyncio recommendation
## Summary
S113 exists because `requests` doesn't have a default timeout, so a
request without a timeout may hang indefinitely.
> B113: Test for missing requests timeout
This plugin test checks for requests or httpx calls without a timeout
specified.
>
> Nearly all production code should use this parameter in nearly all
requests, **Failure to do so can cause your program to hang
indefinitely.**
But `httpx` has a default timeout of 5 seconds, so raising S113 for an
`httpx` request without a `timeout` argument is a false positive; the
only case worth flagging would be `timeout=None`.
https://www.python-httpx.org/advanced/timeouts/
> HTTPX is careful to enforce timeouts everywhere by default.
>
> The default behavior is to raise a TimeoutException after 5 seconds of
network inactivity.
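To illustrate the distinction with hypothetical calls:
```python
import httpx

# Not a real hang risk: httpx applies its 5-second default timeout
httpx.get("https://example.com")

# The only genuinely risky case: the timeout is explicitly disabled
httpx.get("https://example.com", timeout=None)
```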
## Test Plan
snap updated
## Summary
This is the implementation of the new `pycodestyle` rule E204. It
follows the guidelines described in the contributing docs: it adds a new
file named `whitespace_after_decorator.rs`, a new test file called
`E204.py`, and invokes the check from the AST statement checker for
functions and for methods in classes. Linking #2402 because it tracks
all the pycodestyle rules.
## Test Plan
The file `E204.py` defines a decorator called `wrapper`, which is used
in two cases: first, on a function that has the decorator applied in the
file, and second, on two methods defined in a class, each with the
decorator attached to it.
Test file:
``` python
def foo(fun):
    def wrapper():
        print('before')
        fun()
        print('after')
    return wrapper

# No error
@foo
def bar():
    print('bar')

# E204
@ foo
def baz():
    print('baz')

class Test:
    # No error
    @foo
    def bar(self):
        print('bar')

    # E204
    @ foo
    def baz(self):
        print('baz')
```
I am still new to Rust and any suggestions are appreciated, especially
with the way I'm using native Ruff utilities.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Bandit now also reports `B113` on `httpx`
(https://github.com/PyCQA/bandit/pull/1060). This PR implements the same
logic, to detect missing or `None` timeouts for `httpx` alongside
`requests`.
## Test Plan
Snapshot tests.
## Summary
This PR updates the linter, specifically the token-based rules, to work
on the tokens that come after a syntax error.
For context, the token-based rules previously only diagnosed the tokens
up to the first lexical error. This PR builds up error resilience by
introducing a `TokenIterWithContext`, which updates the `nesting` level
and tries to keep it in sync with what the lexer is seeing. This isn't
100% accurate because if the parser recovered from an unclosed
parenthesis in the middle of a line, the context won't reduce the
nesting level until it sees the newline token at the end of that line.
resolves: #11915
## Test Plan
* Add test cases for a bunch of rules that are affected by this change.
* Run the fuzzer for a long time, making sure to fix any other bugs.
## Summary
This PR updates Ruff to **not** generate auto-fixes if the source code
contains syntax errors as determined by the parser.
The main motivation behind this is to avoid an infinite autofix loop when
the token-based rules are run over any source with syntax errors in
#11950.
Although even after this, it's not certain that there won't be an
infinite autofix loop because the logic might be incorrect. For example,
https://github.com/astral-sh/ruff/issues/12094 and
https://github.com/astral-sh/ruff/pull/12136.
This requires updating the test infrastructure to not validate for fix
availability status when the source contained syntax errors. This is
required because otherwise the fuzzer might fail as it uses the test
function to run the linter and validate the source code.
resolves: #11455
## Test Plan
`cargo insta test`
## Summary
This PR updates various places in the linter to compute the line width
by summing the width of each `char` in a `str` instead of computing the
width of the `str` itself.
Refer to #12133 for more details.
fixes: #12130
## Test Plan
Add a file with null (`\0`) character which is zero-width. Run this test
case on `main` to make sure it panics and switch over to this branch to
make sure it doesn't panic now.
## Summary
Use the following to reproduce this:
```console
$ cargo run -- check --select=E275,E203 --preview --no-cache ~/playground/ruff/src/play.py --fix
debug error: Failed to converge after 100 iterations in `/Users/dhruv/playground/ruff/src/play.py` with rule codes E275:---
yield,x
---
/Users/dhruv/playground/ruff/src/play.py:1:1: E275 Missing whitespace after keyword
|
1 | yield,x
| ^^^^^ E275
|
= help: Added missing whitespace after keyword
Found 101 errors (100 fixed, 1 remaining).
[*] 1 fixable with the `--fix` option.
```
## Test Plan
Add a test case and run `cargo insta test`.
## Summary
This patch inverts the defaults for
[pytest-fixture-incorrect-parentheses-style
(PT001)](https://docs.astral.sh/ruff/rules/pytest-fixture-incorrect-parentheses-style/)
and [pytest-incorrect-mark-parentheses-style
(PT023)](https://docs.astral.sh/ruff/rules/pytest-incorrect-mark-parentheses-style/)
to prefer dropping superfluous parentheses.
Presently, Ruff defaults to adding superfluous parentheses on pytest
mark and fixture decorators for the documented purpose of consistency; for
example,
```diff
import pytest
-@pytest.mark.foo
+@pytest.mark.foo()
def test_bar(): ...
```
This behaviour is counter to the official pytest recommendation and
diverges from the flake8-pytest-style plugin as of version 2.0.0 (see
https://github.com/m-burst/flake8-pytest-style/issues/272). Seeing as
either default satisfies the documented benefit of consistency across a
codebase, it makes sense to change the behaviour to be consistent with
pytest and the flake8 plugin as well.
This change is breaking, so is gated behind preview (at least under my
understanding of Ruff versioning). The implementation of this gating
feature is a bit hacky, but seemed to be the least disruptive solution
without performing invasive surgery on the `#[option()]` macro.
Related to #8796.
### Caveat
Whilst updating the documentation, I sought to reference the pytest
recommendation to drop superfluous parentheses, but couldn't find any
official instruction beyond it being a revealed preference within the
pytest documentation code examples (as well as the linked issues from a
core pytest developer). Thus, the wording of the preference is
deliberately timid; it's to cohere with pytest rather than follow an
explicit guidance.
## Test Plan
`cargo nextest run`
I also ran
```sh
cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/flake8_pytest_style/PT001.py --no-cache --diff --select PT001
```
and compared against it with `--preview` to verify that the default does
change under preview (I also repeated this with `echo
'[tool.ruff]\npreview = true' > pyproject.toml` to verify that it works
with a configuration file).
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Implement mutable-contextvar-default (B039) which was added to
flake8-bugbear in https://github.com/PyCQA/flake8-bugbear/pull/476.
This rule is similar to [mutable-argument-default
(B006)](https://docs.astral.sh/ruff/rules/mutable-argument-default) and
[function-call-in-default-argument
(B008)](https://docs.astral.sh/ruff/rules/function-call-in-default-argument),
except that it checks the `default` keyword argument to
`contextvars.ContextVar`.
```
B039.py:19:26: B039 Do not use mutable data structures for ContextVar defaults
|
18 | # Bad
19 | ContextVar("cv", default=[])
| ^^ B039
20 | ContextVar("cv", default={})
21 | ContextVar("cv", default=list())
|
= help: Replace with `None`; initialize with `.set()` after checking for `None`
```
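A minimal sketch of the pattern the help text points to (this is not an autofix the rule offers):
```python
from contextvars import ContextVar

# Flagged by B039: the same mutable list is shared as the default value
cache = ContextVar("cache", default=[])

# Suggested pattern: default to `None` and initialize with `.set()` after checking
cache = ContextVar("cache", default=None)
if cache.get() is None:
    cache.set([])
```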
In the upstream flake8-plugin, this rule is written expressly as a
corollary to B008 and shares much of its logic. Likewise, this
implementation reuses the logic of the Ruff implementation of B008,
namely
f765d19402/crates/ruff_linter/src/rules/flake8_bugbear/rules/function_call_in_argument_default.rs (L104-L106)
and
f765d19402/crates/ruff_linter/src/rules/flake8_bugbear/rules/mutable_argument_default.rs (L106)
Thus, this rule deliberately replicates B006's and B008's heuristics.
For example, this rule assumes that all functions are mutable unless
otherwise qualified. If improvements are to be made to B039 heuristics,
they should probably be made to B006 and B008 as well (whilst trying to
match the upstream implementation).
This rule does not have an autofix as it is unknown where the
`ContextVar` is next used (and it might not be within the same file).
Closes #12054
## Test Plan
`cargo nextest run`
## Summary
This adds a fix for the `duplicate-bases` rule that removes the
duplicate base from the class definition.
## Test Plan
`cargo nextest run duplicate_bases`, `cargo insta review`.
## Summary
Follow-up to #11902
This PR simplifies the `LinterResult` struct by avoiding the generic and
not storing the `ParseError`.
This is possible because the callers already have access to the
`ParseError` via the `Parsed` output. This also means that we can
simplify the return type of `check_path` and avoid the generic `T` on
`LinterResult`.
## Test Plan
`cargo insta test`
## Summary
Follow-up to #11901
This PR avoids displaying the syntax errors as a log message now that the
`E999` diagnostic cannot be disabled.
For context on why this was added, refer to
https://github.com/astral-sh/ruff/pull/2505. Basically, we would allow
ignoring the syntax error diagnostic because certain syntax features,
like the `match` statement, weren't supported back then. And, if a user
ignored `E999`, Ruff would give no feedback when the source code
contained a syntax error. So, this log message was a way to surface the
syntax error to the user even if `E999` was disabled.
The current state of the parser is such that (a) it matches with the
latest grammar and (b) it's easy to add support for any new syntax.
**Note:** This PR doesn't remove the `DisplayParseError` struct because
it's still being used by the formatter.
## Test Plan
Update existing snapshots from the integration tests.
## Summary
This PR updates the way syntax errors are handled throughout the linter.
The main change is that it's now not considered as a rule which involves
the following changes:
* Update `Message` to be an enum with two variants - one for diagnostic
message and the other for syntax error message
* Provide methods on the new message enum to query information required
by downstream usages
This means that the syntax errors cannot be hidden / disabled via any
disablement methods. These are:
1. Configuration via `select`, `ignore`, `per-file-ignores`, and their
`extend-*` variants
```console
$ cargo run -- check ~/playground/ruff/src/lsp.py --extend-select=E999
--no-preview --no-cache
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.10s
Running `target/debug/ruff check /Users/dhruv/playground/ruff/src/lsp.py
--extend-select=E999 --no-preview --no-cache`
warning: Rule `E999` is deprecated and will be removed in a future
release. Syntax errors will always be shown regardless of whether this
rule is selected or not.
/Users/dhruv/playground/ruff/src/lsp.py:1:8: F401 [*] `abc` imported but
unused
|
1 | import abc
| ^^^ F401
2 | from pathlib import Path
3 | import os
|
= help: Remove unused import: `abc`
```
3. Command-line flags via `--select`, `--ignore`, `--per-file-ignores`,
and their `--extend-*` variants
```console
$ cargo run -- check ~/playground/ruff/src/lsp.py --no-cache
--config=~/playground/ruff/pyproject.toml
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.11s
Running `target/debug/ruff check /Users/dhruv/playground/ruff/src/lsp.py
--no-cache --config=/Users/dhruv/playground/ruff/pyproject.toml`
warning: Rule `E999` is deprecated and will be removed in a future
release. Syntax errors will always be shown regardless of whether this
rule is selected or not.
/Users/dhruv/playground/ruff/src/lsp.py:1:8: F401 [*] `abc` imported but
unused
|
1 | import abc
| ^^^ F401
2 | from pathlib import Path
3 | import os
|
= help: Remove unused import: `abc`
```
This also means that the **output format** needs to be updated:
1. The `code`, `noqa_row`, and `url` fields in the JSON output are
optional (`null` for syntax errors)
2. Other formats are changed accordingly
For each format, a new test case specific to syntax errors has been
added. Please refer to the snapshot output for the exact format of the
syntax error message.
The output of the `--statistics` flag will have a blank entry for syntax
errors:
```
315 F821 [ ] undefined-name
119 [ ] syntax-error
103 F811 [ ] redefined-while-unused
```
The **language server** is updated to consider the syntax errors by
converting them into the LSP diagnostic format separately.
### Preview
There are no quick fixes provided to disable syntax errors. This will
automatically work for `ruff-lsp` because the `noqa_row` field will be
`null` in that case.
<img width="772" alt="Screenshot 2024-06-26 at 14 57 08"
src="https://github.com/astral-sh/ruff/assets/67177269/aaac827e-4777-4ac8-8c68-eaf9f2c36774">
Even with `noqa` comment, the syntax error is displayed:
<img width="763" alt="Screenshot 2024-06-26 at 14 59 51"
src="https://github.com/astral-sh/ruff/assets/67177269/ba1afb68-7eaf-4b44-91af-6d93246475e2">
Rule documentation page:
<img width="1371" alt="Screenshot 2024-06-26 at 16 48 07"
src="https://github.com/astral-sh/ruff/assets/67177269/524f01df-d91f-4ac0-86cc-40e76b318b24">
## Test Plan
- [x] Disablement methods via config shows a warning
- [x] `select`, `extend-select`
- [ ] ~`ignore`~ _doesn't show any message_
- [ ] ~`per-file-ignores`, `extend-per-file-ignores`~ _doesn't show any
message_
- [x] Disablement methods via command-line flag shows a warning
- [x] `--select`, `--extend-select`
- [ ] ~`--ignore`~ _doesn't show any message_
- [ ] ~`--per-file-ignores`, `--extend-per-file-ignores`~ _doesn't show
any message_
- [x] File with syntax errors should exit with code 1
- [x] Language server
- [x] Should show diagnostics for syntax errors
- [x] Should not recommend a quick fix edit for adding `noqa` comment
- [x] Same for `ruff-lsp`
resolves: #8447
The motivation for this rule is solid; it's been in preview for a long
time; the implementation and tests seem sound; there are no open issues
regarding it, and as far as I can tell there never have been any.
The only issue I see is that the docs don't really describe the rule
accurately right now; I fix that in this PR.
## Summary
This rule removes `PLR1701` and redirects it to `SIM101`.
In addition to that, the `SIM101` autofix has been fixed to add padding
if required.
### `PLR1701` has bugs
It also seems that the implementation of `PLR1701` is incorrect in
multiple scenarios. For example, the following code snippet:
```py
# There are two _different_ variables `a` and `b`
if isinstance(a, int) or isinstance(b, bool) or isinstance(a, float):
    pass
# There's another condition `or 1`
if isinstance(self.k, int) or isinstance(self.k, float) or 1:
    pass
```
is fixed to:
```py
# Fixed to only considering variable `a`
if isinstance(a, (float, int)):
    pass
# The additional condition is not present in the fix
if isinstance(self.k, (float, int)):
    pass
```
Playground: https://play.ruff.rs/6cfbdfb7-f183-43b0-b59e-31e728b34190
## Documentation Preview
### `PLR1701`
<img width="1397" alt="Screenshot 2024-06-25 at 11 14 40"
src="https://github.com/astral-sh/ruff/assets/67177269/779ee84d-7c4d-4bb8-a3a4-c2b23a313eba">
## Test Plan
Remove the test cases for `PLR1701`, port the padding test case to
`SIM101` and update the snapshot.
## Summary
Right now, it's inconsistent... We sometimes match against the name, and
sometimes against the alias (`asname`). I could see a case for always
matching against the name, but matching against both seems fine too,
since the rule is really about the combination of the two?
Closes https://github.com/astral-sh/ruff/issues/12031.
## Summary
This PR fixes a bug where Ruff would raise `E203` for f-string debug
expressions. This isn't valid because whitespace is significant in debug
expressions.
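For example (illustrative):
```python
x = 42

# The space before the format-spec colon is part of the debug expression and
# is reproduced in the output, so stripping it would change what gets printed.
print(f"{x = :d}")  # prints "x = 42"
print(f"{x =:d}")   # prints "x =42"
```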
fixes: #12023
## Test Plan
Add test case and make sure there are no snapshot changes.
## Summary
This PR updates the `F811` rule to include assignments as possible
shadowed bindings. This fixes issue #11828.
## Test Plan
Add a test file, F811_30.py, which includes a redefinition after an
assignment and a verified snapshot file.
## Summary
Addresses #11974 to add a `RUF` rule to replace `print` expressions in
`assert` statements with the inner message.
An autofix is available, but is considered unsafe as it changes
behaviour of the execution, notably:
- removal of the printout in `stdout`, and
- `AssertionError` instance containing a different message.
While the detection of the condition is a straightforward matter,
deciding how to resolve the print arguments into a string literal can be
a relatively subjective matter. The implementation of this PR chooses to
be as tolerant as possible, and will attempt to reformat any number of
`print` arguments containing single or concatenated strings or variables
into either a string literal, or a f-string if any variables or
placeholders are detected.
## Test Plan
`cargo test`.
## Examples
For ease of discussion, this is the diff for the tests:
```diff
# Standard Case
# Expects:
# - single StringLiteral
-assert True, print("This print is not intentional.")
+assert True, "This print is not intentional."
# Concatenated string literals
# Expects:
# - single StringLiteral
-assert True, print("This print" " is not intentional.")
+assert True, "This print is not intentional."
# Positional arguments, string literals
# Expects:
# - single StringLiteral concatenated with " "
-assert True, print("This print", "is not intentional")
+assert True, "This print is not intentional"
# Concatenated string literals combined with Positional arguments
# Expects:
# - single stringliteral concatenated with " " only between `print` and `is`
-assert True, print("This " "print", "is not intentional.")
+assert True, "This print is not intentional."
# Positional arguments, string literals with a variable
# Expects:
# - single FString concatenated with " "
-assert True, print("This", print.__name__, "is not intentional.")
+assert True, f"This {print.__name__} is not intentional."
# Mixed brackets string literals
# Expects:
# - single StringLiteral concatenated with " "
-assert True, print("This print", 'is not intentional', """and should be removed""")
+assert True, "This print is not intentional and should be removed"
# Mixed brackets with other brackets inside
# Expects:
# - single StringLiteral concatenated with " " and escaped brackets
-assert True, print("This print", 'is not "intentional"', """and "should" be 'removed'""")
+assert True, "This print is not \"intentional\" and \"should\" be 'removed'"
# Positional arguments, string literals with a separator
# Expects:
# - single StringLiteral concatenated with "|"
-assert True, print("This print", "is not intentional", sep="|")
+assert True, "This print|is not intentional"
# Positional arguments, string literals with None as separator
# Expects:
# - single StringLiteral concatenated with " "
-assert True, print("This print", "is not intentional", sep=None)
+assert True, "This print is not intentional"
# Positional arguments, string literals with variable as separator, needs f-string
# Expects:
# - single FString concatenated with "{U00A0}"
-assert True, print("This print", "is not intentional", sep=U00A0)
+assert True, f"This print{U00A0}is not intentional"
# Unnecessary f-string
# Expects:
# - single StringLiteral
-assert True, print(f"This f-string is just a literal.")
+assert True, "This f-string is just a literal."
# Positional arguments, string literals and f-strings
# Expects:
# - single FString concatenated with " "
-assert True, print("This print", f"is not {'intentional':s}")
+assert True, f"This print is not {'intentional':s}"
# Positional arguments, string literals and f-strings with a separator
# Expects:
# - single FString concatenated with "|"
-assert True, print("This print", f"is not {'intentional':s}", sep="|")
+assert True, f"This print|is not {'intentional':s}"
# A single f-string
# Expects:
# - single FString
-assert True, print(f"This print is not {'intentional':s}")
+assert True, f"This print is not {'intentional':s}"
# A single f-string with a redundant separator
# Expects:
# - single FString
-assert True, print(f"This print is not {'intentional':s}", sep="|")
+assert True, f"This print is not {'intentional':s}"
# Complex f-string with variable as separator
# Expects:
# - single FString concatenated with "{U00A0}", all placeholders preserved
condition = "True is True"
maintainer = "John Doe"
-assert True, print("Unreachable due to", condition, f", ask {maintainer} for advice", sep=U00A0)
+assert True, f"Unreachable due to{U00A0}{condition}{U00A0}, ask {maintainer} for advice"
# Empty print
# Expects:
# - `msg` entirely removed from assertion
-assert True, print()
+assert True
# Empty print with separator
# Expects:
# - `msg` entirely removed from assertion
-assert True, print(sep=" ")
+assert True
# Custom print function that actually returns a string
# Expects:
@@ -100,4 +100,4 @@
# Use of `builtins.print`
# Expects:
# - single StringLiteral
-assert True, builtins.print("This print should be removed.")
+assert True, "This print should be removed."
```
## Known Issues
The current implementation resolves all arguments and separators of the
`print` expression into a single string, be it
`StringLiteralValue::single` or a `FStringValue::single`. This:
- potentially joins together strings well beyond the ideal character
limit for each line, and
- does not preserve multi-line strings in their original format, in
favour of a single line `"...\n...\n..."` format.
These are purely formatting issues only occurring in unusual scenarios.
Additionally, the autofix will tolerate `print` calls that were
previously invalid:
```python
assert True, print("this", "should not be allowed", sep=42)
```
This will be transformed into
```python
assert True, f"this{42}should not be allowed"
```
which some could argue is an alteration of behaviour.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Documentation mentions:
> PEP 563 enabled the use of a number of convenient type annotations,
such as `list[str]` instead of `List[str]`
but it meant [PEP 585](https://peps.python.org/pep-0585/) instead.
[PEP 563](https://peps.python.org/pep-0563/) is the one defining `from
__future__ import annotations`.
## Test Plan
No automated test required, just verify that
https://peps.python.org/pep-0585/ is the correct reference.
## Summary
This PR removes the duplication around `is_trivia` functions.
There are two of them in the codebase:
1. In `pycodestyle`, it's for newline, indent, dedent, non-logical
newline and comment
2. In the parser, it's for non-logical newline and comment
The `TokenKind::is_trivia` method used (1) but that's not correct in
that context. So, this PR introduces a new `is_non_logical_token` helper
method for the `pycodestyle` crate and updates the
`TokenKind::is_trivia` implementation with (2).
This also means we can remove `Token::is_trivia` method and the
standalone `token_source::is_trivia` function and use the one on
`TokenKind`.
## Test Plan
`cargo insta test`
## Summary
This PR updates the logical line rules entry-point function to only run
the logic if any of the rules within that group is enabled.
Although this shouldn't really give any performance improvements, it's
better not to do additional work if we can. This is also consistent with
how other rules are run.
## Test Plan
`cargo insta test`
## Summary
This PR updates the linter to show all the parse errors as diagnostics
instead of just the first one.
Note that this doesn't affect the parse error displayed as error log
message. This will be removed in a follow-up PR.
### Breaking?
I don't think this is a breaking change even though this might give more
diagnostics. The main reason is that this shouldn't affect any users
because it'll only give additional diagnostics in the case of multiple
syntax errors.
## Test Plan
Add an integration test case which would raise more than one parse
error.
## Summary
related to https://github.com/astral-sh/ruff/issues/5306
The check right now only checks in the first 1024 bytes, and that's
really not enough when there's a docstring at the beginning of a file.
A more proper fix might be needed, which might be more complex (and I
don't have the `rust` skills to implement that). But this temporary
"fix" might enable more users to use this.
Context: We want to use this rule in
https://github.com/scikit-learn/scikit-learn/ and we got blocked because
of this hardcoded limit (which TBH took us quite a while to figure out,
since the limit isn't documented).
## Test Plan
This is already kinda tested, modified the test for the new byte number.
## Summary
This PR removes most of the syntax errors from the test cases. This
would create noise when https://github.com/astral-sh/ruff/pull/11901 is
complete. These syntax errors are also just noise for the test itself.
## Test Plan
Update the snapshots and verify that they're still the same.
## Summary
This PR updates the parser to remove building the `CommentRanges` and
instead it'll be built by the linter and the formatter when it's
required.
For the linter, it'll be built and owned by the `Indexer` while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.
## Test Plan
`cargo insta test`
## Summary
The fix for E203 now produces the same result as ruff format in cases
where a slice ends on a colon and the closing square bracket is on the
following line.
Refers to https://github.com/astral-sh/ruff/issues/10973
## Test Plan
The minimal reproduction case in the ticket was added as a test case
producing no error. Additional cases with multiple spaces or a tab
before the colon were added to make sure that the rule still finds
these.
## Summary
This PR removes the `result-like` dependency and instead implement the
required functionality. The motivation being that `noqa.is_enabled()` is
easier to read than `noqa.into()`.
For context, I was just trying to understand the syntax error workflow
and I saw these flags which were being converted via `into`. I always
find `into` confusing because you never know what's it being converted
into unless you know the type. Later realized that it's just a boolean
flag. After removing the usages from these two flags, it turns out that
the dependency is only being used in one rule so I thought to remove
that as well.
## Test Plan
`cargo insta test`
## Summary
This PR implements the [consider dict
items](https://pylint.pycqa.org/en/latest/user_guide/messages/convention/consider-using-dict-items.html)
rule from Pylint. Enabling this rule flags:
```python
ORCHESTRA = {
    "violin": "strings",
    "oboe": "woodwind",
    "tuba": "brass",
    "gong": "percussion",
}

for instrument in ORCHESTRA:
    print(f"{instrument}: {ORCHESTRA[instrument]}")

for instrument in ORCHESTRA.keys():
    print(f"{instrument}: {ORCHESTRA[instrument]}")

for instrument in (inline_dict := {"foo": "bar"}):
    print(f"{instrument}: {inline_dict[instrument]}")
```
for not using `items()` to extract the value out of the dict. We ignore
the case of an assignment, as you can't modify the underlying dict
through the value in the list of tuples returned.
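A sketch of the rewrite the rule suggests, reusing the `ORCHESTRA` dict from the example above:
```python
for instrument, section in ORCHESTRA.items():
    print(f"{instrument}: {section}")
```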
## Test Plan
`cargo test`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR is a follow-up to #11740 to restrict access to the `Parsed`
output by replacing the `parsed` API function with a more specific one.
Currently, that is `comment_ranges` but the linked PR exposes a `tokens`
method.
The main motivation is so that there's no way to get incorrect
information from the checker. It also encapsulates the source of the
comment ranges and the tokens themselves. This way it becomes easier to
just update the checker if the source of this information changes in the
future.
## Test Plan
`cargo insta test`
## Summary
This PR fixes a bug where the checker would require the tokens for an
invalid offset w.r.t. the source code.
Taking the source code from the linked issue as an example:
```py
relese_version :"0.0is 64"
```
Now, this isn't really a valid type annotation but that's what this PR
is fixing. Regardless of whether it's valid or not, Ruff shouldn't
panic.
The checker would visit the parsed type annotation (`0.0is 64`) and try
to detect any violations. Certain rule logic requests the tokens for the
same range, but that would fail because the lexer would only have the
`String` token when considering the original source code. This worked
before because the lexer was invoked again for each rule's logic.
The solution is to store the parsed type annotation on the checker if
it's in a typing context and use the tokens from that instead if it's
available. This is enforced by creating a new API on the checker to get
the tokens.
But, this means that there are two ways to get the tokens via the
checker API. I want to restrict this in a follow-up PR (#11741) to only
expose `tokens` and `comment_ranges` as methods and restrict access to
the parsed source code.
fixes: #11736
## Test Plan
- [x] Add a test case for `F632` rule and update the snapshot
- [x] Check all affected rules
- [x] No ecosystem changes
## Summary
This PR updates the return type of `parse_type_annotation` from `Expr`
to `Parsed<ModExpression>`. This is to allow accessing the tokens for
the parsed sub-expression in the follow-up PR.
## Test Plan
`cargo insta test`
## Summary
Ensures that we respect per-file ignores and exemptions for these rules.
Specifically, we allow:
```python
# ruff: noqa: PGH004
```
...to ignore `PGH004`.
## Summary
Should resolve https://github.com/astral-sh/ruff/issues/11454.
This is my first PR to `ruff`, so I may have missed something.
If I understood the suggestion in the issue correctly, rule `PGH004`
should be set to `Preview` again.
## Test Plan
Created two fixtures derived from the issue.
## Summary
This PR updates the logic for parsing type annotation to accept a
`ExprStringLiteral` node instead of the string value and the range.
The main motivation of this change is to simplify the implementation of
`parse_type_annotation` function with:
* Use the `opener_len` and `closer_len` from the string flags to get the
raw contents range instead of extracting it via
* `str::leading_quote(expression).unwrap().text_len()`
* `str::trailing_quote(expression).unwrap().text_len()`
* Avoid comparing the string content if we already know that it's
implicitly concatenated
## Test Plan
`cargo insta test`
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.
The main motivation to make the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful to make the
parser error resilience. For example, currently the emitted tokens
remains the same even if the parser can recover from an unclosed
parenthesis. This is important because the lexer emits a
`NonLogicalNewline` in parenthesized context while a normal `Newline` in
non-parenthesized context. This different kinds of newline is also used
to emit the indentation tokens which is important for the parser as it's
used to determine the start and end of a block.
Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.
This brings up a question as to how the parser will get the owned value
which was stored on `Tok`. This will be solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that have
the owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.
* **Context:** The transformer needs to know the position of the lexer
between it being at a statement position or a simple statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is mentioned in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason being that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR description: https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references of the
lexer and `Tok` enum. This was done in two-parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and add
methods to query a certain subset of tokens. These include:
1. `up_to_first_unknown` which replaces the `tokenize` function
2. `in_range` and `after` which replaces the `lex_starts_at` function
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is only limited to
any string type token but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for `tokens_and_ranges` function.
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all output were same**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and provides Ruff the possibility to lint even if the
source code contains syntax errors.
## Summary
Implement support for RDJson output for `ruff check`, as requested in
#8655.
## Test Plan
Tested using a snapshot test. Same approach as for e.g. the JSON output
formatter.
## Additional info
I tried to keep the implementation close to the JSON implementation.
I had to deviate a bit to make the `suggestions` key work: If there are
no suggestions, then setting `suggestions` to `null` is invalid
according to the JSONSchema. Therefore, I opted for a slightly more
complex implementation, that skips the `suggestions` key entirely if
there are no fixes available for the given diagnostic. Maybe it would
have been easier to set `"suggestions": []`, but I ended up doing it
this way.
I didn't consider notebooks, as I _think_ that RDJson doesn't work with
notebooks. This should be confirmed, and if so, there should be some
form of warning or error emitted when trying to output diagnostics for a
notebook.
I also didn't consider `ruff format`, as this comment:
https://github.com/astral-sh/ruff/issues/8655#issuecomment-1811446160
suggests that that wouldn't be compatible.
I'm new to Rust, any feedback is appreciated. 🙂 I
implemented this in order to have a productive rainy Saturday afternoon,
I'm not knowledgeable about RDJson beyond the sources linked in the
issue.
## Summary
This PR implements the rule B901, which is part of the opinionated rules
of `flake8-bugbear`.
This rule seems to be desired in `ruff` as per
https://github.com/astral-sh/ruff/issues/3758 and
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976.
## Test Plan
As this PR was made closely following the
[CONTRIBUTING.md](8a25531a71/CONTRIBUTING.md),
it uses the snapshot testing approach that is described there.
## Sources
The implementation is inspired by [the original implementation in the
`flake8-bugbear`
repository](d1aec4cbef/bugbear.py (L1092)).
The error message and [test
file](d1aec4cbef/tests/b901.py)
were also copied from there.
I came up with the documentation on my own and it needs improvement. Maybe
the example given in
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976
could be used, but maybe they are too complex, I'm not sure.
## Open Questions
- [ ] Documentation. (See above.)
- [x] Can I access the parent in a visitor?
The [original
implementation](d1aec4cbef/bugbear.py (L1100))
references the `yield` statement's parent to check if it is an
expression statement. I didn't find a way to do this in `ruff` and used
the `is_expresssion_statement` field on the visitor instead. What are
your thoughts on this? Is it possible and / or desired to access the
parent node here?
- [x] Is `Option::is_some(...)` -> `...unwrap()` the right thing to do?
Referring to [this piece of
code](9d5a280f71/crates/ruff_linter/src/rules/flake8_bugbear/rules/return_x_in_generator.rs?plain=1#L91-L96).
From my understanding, the `.unwrap()` is safe, because it is checked
that `return_` is not `None`. However, I feel like I missed a more
elegant solution that does both in one.
## Other
I don't know a lot about this rule; I just implemented it because I
found it under
https://github.com/astral-sh/ruff/labels/good%20first%20issue.
I'm new to Rust, so any constructive criticism is appreciated.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
* Potentially resolves #11619 (nondeterministic hashmap order across
different architectures) in F401 by replacing a hashmap that has
nondeterministic traversal order with an ordered mapping.
I'm not sure how to test this with our CI/CD. I don't have an s390x
machine at home. Should I try it in Qemu?
## Summary
In an `__init__.py` file, it's not uncommon to lack a logical indent
(since it may just contain imports). In such cases, we were always
falling back to four-space indent. This PR adds detection for indents
within import groups.
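My reading of the change, as a sketch: an `__init__.py` like the following (hypothetical names) has no logical indent of its own, but the continuation lines inside the parenthesized import group reveal the indentation style to reuse:
```python
# __init__.py: no logical indent of its own, but the import group below is
# indented with two spaces, which the heuristic can now pick up.
from .submodule import (
  first_thing,
  second_thing,
)

__all__ = ["first_thing", "second_thing"]
```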
Closes https://github.com/astral-sh/ruff/issues/11606.
## Summary
This PR aims to close #10095 by adding an option
`init-allow-undef-export` to the `pyflakes` settings. This option is
currently set to `true` such that behavior is kept identical.
But setting this option to `false` will lead to `F822` warnings being
shown in all files, **including** `__init__.py` files.
As I've mentioned on #10095, I think `init-allow-undef-export=false`
would be the more user-friendly default option, as it creates fewer
surprises. @charliermarsh what do you think about making that the
default?
With this option in place, it's a single line fix for people that rely
on the old behavior.
And thinking longer term, for future major releases, one could probably
consider deprecating the option and eventually having people just `noqa`
these warnings if they are not wanted.
## Test Plan
I've added a `test_init_f822_enabled` test which repeats the test that
is done in the `init` test but this time with
`init-allow-undef-export=false` and the snap file correctly shows that
ruff will then trigger the otherwise suppressed F822 warning.
Closes #10095
## Summary
Removed a stray space in a sample code snippet that was against Ruff's
own default formatting rules.
This documentation appears on
https://docs.astral.sh/ruff/rules/unused-import/
## Test Plan
This is a trivially obvious change, verifiable with `ruff format
--check`
## Summary
- Implements `Y066` from `flake8-pyi` as `PYI066`
- Fixes `PYI006` not being raised for `elif` clauses. This would have
conflicted with PYI006's implementation, so decided to do it in the same
PR.
## Test Plan
`cargo test` / `cargo insta review`
## Summary
This PR ensures that if a variable is bound via `global`, and then the
`global` is read, the originating variable is also marked as read. It's
not perfect, in that it won't detect _rebindings_, like:
```python
from app import redis_connection
def func():
    global redis_connection
    redis_connection = 1
    redis_connection()
```
So, above, `redis_connection` is still marked as unused.
But it does avoid flagging `redis_connection` as unused in:
```python
from app import redis_connection
def func():
    global redis_connection
    redis_connection()
```
Closes https://github.com/astral-sh/ruff/issues/11518.
## Summary
Follow up to https://github.com/astral-sh/ruff/pull/11521
Removes the extra added complexity for catch all match cases. This
matches the implementation of plain `else` statements.
## Test Plan
Added new test cases.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
This PR brings back the functionality to remove empty strings when
converting to an f-string in `UP032`.
For context, https://github.com/astral-sh/ruff/pull/8712 added this
functionality to remove _trailing_ empty strings but it got removed in
https://github.com/astral-sh/ruff/pull/8697 possibly unexpectedly so.
There's one difference which is that this PR will remove _any_ empty
strings and not just trailing ones. For example,
```diff
--- /Users/dhruv/playground/ruff/src/UP032.py
+++ /Users/dhruv/playground/ruff/src/UP032.py
@@ -1,7 +1,5 @@
(
- "{a}"
- ""
- "{b}"
- ""
-).format(a=1, b=1)
+ f"{1}"
+ f"{1}"
+)
```
## Test Plan
Run `cargo insta test` and update the snapshots.
## Summary
This PR updates the sequence sorting (`RUF022` and `RUF023`) to avoid
using the owned data from the string token. Instead, we will directly
use the reference to the data on the AST. This does introduce a lot of
lifetimes but that's required.
The main motivation for this is to allow removing the `lex_starts_at`
usage easily.
### Alternatives
1. Extract the raw string content (stripping the prefix and quotes)
using the `Locator` and use that for comparison
2. Build up an
[`IndexVec`](3e30962077/crates/ruff_index/src/vec.rs)
and use the newtype index in place of the string value itself. This also
does require lifetimes so we might as well just use the method in this
PR.
## Test Plan
`cargo insta test` and no ecosystem changes
## Summary
Concurrent GitLab runners clone projects into separate directories, e.g.
`{builds_dir}/$RUNNER_TOKEN_KEY/$CONCURRENT_ID/$NAMESPACE/$PROJECT_NAME`.
Since the fingerprint uses the full path to the file, the fingerprints
calculated by Ruff are different depending on which concurrent runner it
executes on, so often an MR will appear to remove all existing issues
and add them with new fingerprints.
I've adjusted the fingerprint function to use the project relative path,
which fixes this. Unfortunately this is a breaking change for any
current users of this output: the fingerprints will change and appear in
GitLab as if all linting messages had been fixed and then newly created.
## Test Plan
`cargo nextest run`
Running `ruff check --output-format gitlab` in a git repo, moving the
repo and running again, verifying no diffs between the outputs
Hi!
Some of the functions removed in NumPy 2.0 were left out of the
migration rule:
- `np.alltrue`
- `np.anytrue`
- `np.cumproduct`
- `np.product`
Addressing: https://github.com/numpy/numpy/issues/26493
## Summary
Current doc says `sys.version[0]` will select the first digit of a major
version number (correct) then as an example says
> e.g., `"3.10"` would evaluate to `"1"`
(would actually evaluate to `"3"`). Changed the example version to a
two-digit number to make the problem more clear.
## Test Plan
ran the following:
- `cargo run -p ruff -- check
crates/ruff_linter/resources/test/fixtures/flake8_2020/YTT301.py
--no-cache`
- `cargo insta review`
- `cargo test`
which all passed.
## Summary
Rule `logging-warn` (`G010`) prescribes a change from `warn` to
`warning` and has a corresponding autofix, but the autofix is mistakenly
titled ```"Convert to `warn`"``` instead of ```"Convert to `warning`"```
(the latter is what the autofix actually does). Seems to be a plain
typo.
## Summary
It turns out that `singledispatch` does end up evaluating all arguments,
even though only the first is used to dispatch.
Closes https://github.com/astral-sh/ruff/issues/11520.
## Summary
Addresses #8451 by implementing rule 116, adding an unsafe fix that
suggests sleeping forever when `sleep` is used with an interval longer
than 24 hours.
This rule is added under the async category instead, as my understanding
was that these trio rules would be moved to async anyway.
There are a couple of TODOs, which address further extending the rule by
adding support for lookups and evaluations, and also supporting `anyio`.
## Summary
This PR updates the `FA102` rule logic to use the `Importer` which is
available on the `Checker`.
The main motivation is that this would make updating the `Importer` to
use the `Tokens` struct which will be required to remove the
`lex_starts_at` usage in `Insertion::start_of_block` method.
## Test Plan
`cargo insta test`
## Summary
Similar to #11414, this PR extends `UP037` to flag quoted annotations
that are located in positions that won't be evaluated at runtime.
For example, the quotes on `Tuple` are unnecessary in:
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from typing import Tuple

def foo():
    x: "Tuple[int, int]" = (0, 0)

foo()
```
## Summary
This PR moves the `has_comments` function from `Indexer` to
`CommentRanges`. The main motivation is that the `CommentRanges` will
now be built by the parser which is shared between the linter and the
formatter. Thus, the `CommentRanges` will be removed from the `Indexer`.
## Test Plan
`cargo test`
## Summary
Matching Pylint, we now omit the `try` body itself from branch counting.
Each `except` counts as a branch, as does the `else` and the `finally`.
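Under my reading of the summary, the counting for a simple function would look like this (illustrative only):
```python
def load(path):
    try:
        data = open(path).read()  # the `try` body itself is not counted
    except FileNotFoundError:     # counts as a branch
        data = None
    except PermissionError:       # counts as a branch
        raise
    else:                         # counts as a branch
        print(len(data))
    finally:                      # counts as a branch
        print("done")
    return data
```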
Closes https://github.com/astral-sh/ruff/issues/11205.
The wording 'negative comparison' is a rather vague description of the
'is not' operation and does not describe what the 'not in' operation
does (potentially copied from 'is not'). This was replaced with more
precise language to describe the operators taken from the official
python docs[1].
Both rules didn't have a strong reasoning besides 'it's bad, use the
other'. The origin of these rules seems to be PEP8[2] which prefers 'is
not' over 'not ... is' for readability. This is now reflected in the
description.
[1]:
https://docs.python.org/3/reference/expressions.html#membership-test-operations
[2]: https://peps.python.org/pep-0008/#programming-recommendations
## Summary
If an annotation won't be evaluated at runtime, we don't need to flag
`from __future__ import annotations` as required. This applies both to
quoted annotations and annotations outside of runtime-evaluated
positions, like:
```python
def main() -> None:
    a_list: list[str] | None = []
    a_list.append("hello")
```
Closes https://github.com/astral-sh/ruff/issues/11397.
## Summary
* Update documentation for F401 following recent PRs
* #11168
* #11314
* Deprecate `ignore_init_module_imports`
* Add a deprecation pragma to the option and a "warn user once" message
when the option is used.
* Restore the old behavior for stable (non-preview) mode:
* When `ignore_init_module_imports` is set to `true` (default) there are
no `__init__.py` fixes (but we get nice fix titles!).
* When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.
* When preview mode is enabled, it overrides
`ignore_init_module_imports`.
* Fixed a bug in fix titles where `import foo as bar` would recommend
reexporting `bar as bar`. It now says to reexport `foo as foo`. (In this
case we don't issue a fix, fwiw; it was just a fix title bug.)
## Test plan
Added new fixture tests that reuse the existing fixtures for
`__init__.py` files. Each of the three situations listed above has
fixture tests. The F401 "stable" tests cover:
> * When `ignore_init_module_imports` is set to `true` (default) there
are no `__init__.py` fixes (but we get nice fix titles!).
The F401 "deprecated option" tests cover:
> * When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.
These complement existing "preview" tests that show the new behavior
which recommends fixes in `__init__.py` according to whether the import
is 1st party and other circumstances (for more on that behavior see:
#11314).
## Summary
This is a follow-up PR to #11445 to update the `E27` rules to consider soft
keywords as well.
## Test Plan
Add test cases consisting of soft keywords and update the snapshot.
## Summary
We weren't treating the escaped newline as a valid condition to trigger
the safer fix (add an extra backslash before each invalid escape
sequence).
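A rough sketch of the situation, assuming `\d` as the invalid escape sequence; the string also contains an escaped newline (line continuation), which should trigger the safer, per-sequence fix:
```python
# Before: `\d` is an invalid escape sequence, and the literal ends a line
# with an escaped newline.
pattern = "value: \d \
(continued)"

# Safer fix: escape just the invalid sequence.
pattern = "value: \\d \
(continued)"
```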
Closes https://github.com/astral-sh/ruff/issues/11461.
## Summary
This PR updates the `TokenKind::is_keyword` check to include soft
keywords. To account for this change, it adds a new
`is_non_soft_keyword` method.
The usages in logical line rules were updated to use the
`is_non_soft_keyword` method, but they'll be updated to use `is_keyword` in
a follow-up PR (#11446).
Meanwhile, the parser usages were kept as-is. Because of that, the
snapshots for two test cases were updated in a better direction.
## Test Plan
`cargo insta test`
## Summary
We already have handling for "references that get quoted within our
quoted references", but we were assuming a specific ordering in the way
edits were generated.
Closes https://github.com/astral-sh/ruff/issues/11449.
## Summary
As discussed in issue #11408, PLR0912 has a broader definition of
"branches" than I expected. This updates the documentation to include
this definition.
I also updated the example to include several different types of
branches, while still maintaining dictionary lookup as an alternative
solution. (Crafting a realistic example was quite a challenge 😅).
Closes https://github.com/astral-sh/ruff/issues/11408.
## Summary
This moves the string-prefix enumerations in `ruff_python_ast` to a
separate submodule. I think this helps clarify that these prefixes are
purely abstract: they only depend on each other, and do not depend on
any of the other code in `nodes.rs` in any way. Moreover, while various
AST nodes _use_ them, they're not really nodes themselves, so they feel
slightly out of place in `nodes.rs`.
I considered moving all of them to `str.rs`, but it felt like enough
code that it could be a separate submodule.
## Test Plan
`cargo test`
Follow-up on #11168; resolves #10391
# User facing changes
* F401 now recommends a fix to add unused import bindings to
`__all__` if a single `__all__` list or tuple is found in `__init__.py`.
* If no `__all__` is found in the file, fall back to recommending
redundant aliases.
* If there are multiple `__all__`s, or only one but of the wrong type (not a
list or tuple), then diagnostics are generated without fixes.
* `fix_title` is updated to reflect what the fix/recommendation is.
Subtlety: For a renamed import such as `import foo as bees`, we can
generate a fix to add `bees` to `__all__` but cannot generate a fix to
produce a redundant import (because that would break uses of the binding
`bees`).
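A sketch of that subtlety (module and binding names are illustrative):
```python
# __init__.py (before)
import json as bees  # F401: imported but unused

__all__ = ["something_else"]

# __init__.py (after the suggested fix): the *binding* name is appended,
# whereas a "redundant alias" fix would have to rename the binding
# (`import json as json`), breaking any existing uses of `bees`.
__all__ = ["something_else", "bees"]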
# Implementation changes
* Add `name` field to `ImportBinding` to contain the name of the
_binding_ we want to add to `__all__` (important for the `import foo as
bees` case). It previously only contained the `AnyImport` which can give
us information about the import but not the binding.
* Add `binding` field to `UnusedImport` to contain the same. (Naming
note: the field `name` already existed on `UnusedImport` and
contains the qualified name of the imported symbol/module.)
* Change `fix_by_reexporting` to branch on the size of `dunder_all:
Vec<&Expr>`
* For length 0 call the edit-producing function `make_redundant_alias`.
* For length 1 call edit-producing function `add_to_dunder_all`.
* Otherwise, produce no fix.
* Implement the edit-producing function `add_to_dunder_all` and add unit
tests.
* Implement several fixture tests: empty `__all__ = []`, nonempty
`__all__ = ["foo"]`, mis-typed `__all__ = None`, plus-eq `__all__ +=
["foo"]`
* `UnusedImportContext::Init` variant now has two fields: whether the
fix is in `__init__.py` and how many `__all__` were found.
# Other changes
* Remove a spurious pattern match and instead use field lookups, because the
addition of a field would have required changing the unrelated pattern.
* Tweak input type of `make_redundant_alias`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR follows up from #11420 to move `UP034` to use `TokenKind`
instead of `Tok`.
The main reason to have a separate PR is so that the reviewing is easy.
This required a lot more updates because the rule used an index (`i`) to
keep track of the current position in the token vector. Now, as it's
just an iterator, we just use `next` to move the iterator forward and
extract the relevant information.
This is part of https://github.com/astral-sh/ruff/issues/11401
## Test Plan
`cargo test`
## Summary
This PR moves the following rules to use `TokenKind` instead of `Tok`:
* `PLE2510`, `PLE2512`, `PLE2513`, `PLE2514`, `PLE2515`
* `E701`, `E702`, `E703`
* `ISC001`, `ISC002`
* `COM812`, `COM818`, `COM819`
* `W391`
I've paused here because the next set of rules
(`pyupgrade::rules::extraneous_parentheses`) indexes into the token
slice but we only have an iterator implementation. So, I want to isolate
that change to make sure the logic is still the same when I move to
using the iterator approach.
This is part of #11401
## Test Plan
`cargo test`
## Summary
Alternative to #11237
This PR adds a new `Tokens` struct which is a newtype wrapper around a
vector of lexer output. This allows us to add a `kinds` method which
returns an iterator over the corresponding `TokenKind`. This iterator is
implemented as a separate `TokenKindIter` struct to allow using the type
and provide additional methods like `peek` directly on the iterator.
This exposes the linter to access the stream of `TokenKind` instead of
`Tok`.
Edit: I've made the necessary downstream changes and plan to merge the
entire stack at once.
## Summary
This PR adds a newtype wrapper around `Vec<FStringElement>` that derefs
to a `&Vec<FStringElement>`.
Both f-string and format specifier are made up of `Vec<FStringElement>`.
By creating a newtype wrapper around it, we can share the methods for
both parent types.
## Summary
This PR adds support to iterate over each part of a string-like
expression.
This is similar to the one in the formatter:
128414cd95/crates/ruff_python_formatter/src/string/any.rs (L121-L125)
Although I don't think it's a 1-1 replacement for the formatter's version,
because the one implemented in the formatter carries additional information
for certain variants (as can be seen for `FString`).
The main motivation for this is to avoid duplication for rules which
work only on the parts of the string and don't require any information
from the parent node. Here, the parent node is the expression node,
which could be an implicitly concatenated string.
This PR also updates certain rule implementation to make use of this and
avoids logic duplication.
## Summary
This PR renames `AnyStringKind` to `AnyStringFlags` and `AnyStringFlags`
to `AnyStringFlagsInner`.
The main motivation is consistent usage of "kind" and "flags".
Each string kind has its own "flags" type (`StringLiteralFlags`,
`BytesLiteralFlags`, and `FStringFlags`), but the "any" variant was named
`AnyStringKind`.
## Summary
Changes `future-rewritable-type-annotation` (`FA100`) message to be less
confusing. Uses phrasing from the rule documentation to be consistent.
For example,
```
from_typing_import.py:5:13: FA100 Add `from __future__ import annotations` to rewrite `typing.List` more succinctly
```
Closes #10573.
## Test Plan
`cargo nextest run`
## Summary
Should this consider the decorator only if the name is actually a
property or is the logic in this PR correct?
fixes: #11358
## Test Plan
Add test case.
## Summary
This PR fixes a bug where the auto-fix for `TCH005` would delete the
entire `if` statement.
The fix in this PR is to not consider it a violation if there are any
`elif`/`else` blocks. This also matches the behavior of the original
plugin.
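A rough sketch of the case that is no longer flagged (names are illustrative):
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    pass  # empty type-checking block
else:
    import json as fast_json  # deleting the whole `if` would drop this too
```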
fixes: #11368
## Test plan
Add test cases.
## Summary
`--add-noqa` now runs in two stages: first, the linter finds all
diagnostics that need noqa comments and generates edits on a per-line
basis. Second, these edits are applied, in order, to the document.
A public-facing function, `generate_noqa_edits`, has also been
introduced, which returns noqa edits generated on a per-diagnostic
basis. This will be used by `ruff server` for noqa comment quick-fixes.
## Test Plan
Unit tests have been updated.
## Summary
This PR updates the semantic model to detect attribute docstrings.
Refer to [PEP 258](https://peps.python.org/pep-0258/#attribute-docstrings)
for the definition of an attribute docstring.
This PR doesn't add full support for it but only considers string
literals as attribute docstring for the following cases:
1. A string literal following an assignment statement in the **global
scope**.
2. A global class attribute
For an assignment statement, it's considered an attribute docstring only
if the target expression is a name expression (`x = 1`). So, chained
assignment, multiple assignment or unpacking, and starred expression,
which are all valid in the target position, aren't considered here.
In `__init__` method, an assignment to the `self` variable like `self.x = 1`
is also a candidate for an attribute docstring. **This PR does not
support this position.**
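For illustration, the two supported positions look roughly like this (names are hypothetical):
```python
CONFIG_PATH = "config.toml"
"""Module-level attribute docstring: documents `CONFIG_PATH`."""


class Settings:
    timeout = 30
    """Class attribute docstring: documents `Settings.timeout`."""
```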
## Test Plan
I used the following source code along with a print statement to verify
that the attribute docstring detection is correct.
Refer to the PR description for the code snippet.
I'll add this in the follow-up PR
(https://github.com/astral-sh/ruff/pull/11302) which uses this method.
## Summary
Resolves #11263
Detect `pathlib.Path.open` calls which do not specify a file encoding.
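For example (the file name is illustrative):
```python
from pathlib import Path

# Flagged: no explicit encoding, so the result is platform-dependent
with Path("notes.txt").open() as f:
    ...

# Preferred
with Path("notes.txt").open(encoding="utf-8") as f:
    ...
```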
## Test Plan
Test cases added to fixture.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Resolves https://github.com/astral-sh/ruff/issues/11313
## Summary
PLR0912 (too-many-branches) did not count branches inside `with` blocks.
With this fix, branches inside `with` statements are also counted.
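A minimal sketch of what now counts (illustrative only; the rule still only fires above its branch threshold):
```python
def report(x):
    with open("log.txt", "w") as fh:
        # These branches are now counted toward the too-many-branches limit
        if x > 0:
            fh.write("positive")
        elif x < 0:
            fh.write("negative")
        else:
            fh.write("zero")
```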
## Test Plan
Added a new test case.
## Summary
While I was here, I also updated the rule to use
`function_type::classify` rather than hard-coding `staticmethod` and
friends.
Per Carl:
> Enum instances are already referred to by the class, forming a cycle
that won't get collected until the class itself does. At which point the
`lru_cache` itself would be collected, too.
Closes https://github.com/astral-sh/ruff/issues/9912.
## Summary
Historically, we only ignored `flake8-blind-except` if you re-raised or
logged the exception as a _direct_ child statement; but it could be
nested somewhere. This was just a known limitation at the time of adding
the previous logic.
Closes https://github.com/astral-sh/ruff/issues/11289.
## Summary
In #9218 `Rule::NeverUnion` was partially removed from a
`checker.any_enabled` call. This makes the change consistent.
## Test Plan
`cargo test`
Resolves #10390 and starts to address #10391
# Changes to behavior
* In `__init__.py` we now offer some fixes for unused imports.
* If the import binding is first-party this PR suggests a fix to turn it
into a redundant alias.
* If the import binding is not first-party, this PR suggests a fix to
remove it from the `__init__.py`.
* The fix-titles are specific to these new suggested fixes.
* `checker.settings.ignore_init_module_imports` setting is
deprecated/ignored. There is probably a documentation change to make
that complete which I haven't done.
---
<details><summary>Old description of implementation changes</summary>
# Changes to the implementation
* In the body of the loop over import statements that contain unused
bindings, the bindings are partitioned into `to_reexport` and
`to_remove` (according to how we want to resolve the fact they're
unused) with the following predicate:
```rust
in_init && is_first_party(checker, &import.qualified_name().to_string())
// true means make it a reexport
```
* Instead of generating a single fix per import statement, we now
generate up to two fixes per import statement:
```rust
(fix_by_removing_imports(checker, node_id, &to_remove, in_init).ok(),
fix_by_reexporting(checker, node_id, &to_reexport, dunder_all).ok())
```
* The `to_remove` fixes are unsafe when `in_init`.
* The `to_explicit` fixes are safe. Currently, until a future PR, we
make them redundant aliases (e.g. `import a` would become `import a as
a`).
## Other changes
* `checker.settings.ignore_init_module_imports` is deprecated/ignored.
Instead, all fixes are gated on `checker.settings.preview.is_enabled()`.
* Got rid of the pattern match on the import-binding bound by the inner
loop because it seemed less readable than referencing fields on the
binding.
* [x] `// FIXME: rename "imports" to "bindings"` if reviewer agrees (see
code)
* [x] `// FIXME: rename "node_id" to "import_statement"` if reviewer
agrees (see code)
<details>
<summary><h2>Scope cut until a future PR</h2></summary>
* (Not implemented) The `to_explicit` fixes will be added to `__all__`
unless it doesn't exist. When `__all__` doesn't exist they're resolved
by converting to redundant aliases (e.g. `import a` would become `import
a as a`).
---
</details>
# Test plan
* [x] `crates/ruff_linter/resources/test/fixtures/pyflakes/F401_24`
contains an `__init__.py` with*out* `__all__` that exercises the
features in this PR, but it doesn't pass.
* [x]
`crates/ruff_linter/resources/test/fixtures/pyflakes/F401_25_dunder_all`
contains an `__init__.py` *with* `__all__` that exercises the features
in this PR, but it doesn't pass.
* [x] Write unit tests for the new edit functions in
`fix::edits::make_redundant_alias`.
</details>
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR removes the `ImportMap` implementation and all its routing
through ruff.
The import map was added in https://github.com/astral-sh/ruff/pull/3243
but we then never ended up using it to do cross file analysis.
We are now working on adding multifile analysis to ruff and will revisit
import resolution as part of it.
```
hyperfine --warmup 10 --runs 20 --setup "./target/release/ruff clean" \
"./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I" \
"./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I"
Benchmark 1: ./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
Time (mean ± σ): 37.6 ms ± 0.9 ms [User: 52.2 ms, System: 63.7 ms]
Range (min … max): 35.8 ms … 39.8 ms 20 runs
Benchmark 2: ./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
Time (mean ± σ): 36.0 ms ± 0.7 ms [User: 50.3 ms, System: 58.4 ms]
Range (min … max): 34.5 ms … 37.6 ms 20 runs
Summary
./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I ran
1.04 ± 0.03 times faster than ./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
```
I suspect that the performance improvement should even be more
significant for users that otherwise don't have any diagnostics.
```
hyperfine --warmup 10 --runs 20 --setup "cd ../ecosystem/airflow && ../../ruff/target/release/ruff clean" \
"./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I" \
"./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I"
Benchmark 1: ./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I
Time (mean ± σ): 53.7 ms ± 1.8 ms [User: 68.4 ms, System: 63.0 ms]
Range (min … max): 51.1 ms … 58.7 ms 20 runs
Benchmark 2: ./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I
Time (mean ± σ): 50.8 ms ± 1.4 ms [User: 50.7 ms, System: 60.9 ms]
Range (min … max): 48.5 ms … 55.3 ms 20 runs
Summary
./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I ran
1.06 ± 0.05 times faster than ./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I
```
## Test Plan
`cargo test`
## Summary
I think the check included here does make sense, but I don't see why we
would allow it if a value is provided for the attribute -- since, in
that case, isn't it _not_ abstract?
Closes: https://github.com/astral-sh/ruff/issues/11208.
## Summary
This PR adds an override to the fixer to ensure that we apply any
`redefined-while-unused` fixes prior to `unused-import`.
Closes https://github.com/astral-sh/ruff/issues/10905.
## Summary
Implement duplicate code detection as part of `RUF100`, mirroring the
behavior of `flake8-noqa` (`NQA005`) mentioned in #850. The idea to
merge the rule into `RUF100` was suggested by @MichaReiser
https://github.com/astral-sh/ruff/pull/10325#issuecomment-2025535444.
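A sketch of the newly detected pattern (assuming duplicated codes on a single suppression comment):
```python
# The duplicated `F401` code is now reported as part of RUF100
import os  # noqa: F401, F401
```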
## Test Plan
Test cases were added to the fixture.
This syntax wasn't "deprecated" in Python 3; it was removed.
I started looking at this rule because I was curious how Ruff could even
detect this without a Python 2 parser. Then I realized that
"print >> f, x" is actually valid Python 3 syntax: it creates a tuple
containing a right-shifted version of the print function.
## Summary
Based on discussion in #10850.
As it stands today `RUF100` will attempt to replace code redirects with
their target codes even though this is not the "goal" of `RUF100`. This
behavior is confusing and inconsistent, since code redirects which don't
otherwise violate `RUF100` will not be updated. The behavior is also
undocumented. Additionally, users who want to use `RUF100` but do not
want to update redirects have no way to opt out.
This PR explicitly detects redirects with a new rule `RUF101` and
patches `RUF100` to keep original codes in fixes and reporting.
## Test Plan
Added fixture.
## Summary
Resolves#11102
The error stems from these lines
f5c7a62aa6/crates/ruff_linter/src/noqa.rs (L697-L702)
I don't really understand the purpose of incrementing the last index,
but it makes the resulting range invalid for indexing into `contents`.
For now I just detect if the index is too high in `blanket_noqa` and
adjust it if necessary.
## Test Plan
Created fixture from issue example.
## Summary
This allows `raise from` in BLE001.
```python
try:
    ...
except Exception as e:
    raise ValueError from e
```
Fixes #10806
## Test Plan
Test case added.
## Summary
This PR refactors unary expression parsing with the following changes:
* Ability to get `OperatorPrecedence` from a unary operator (`UnaryOp`)
* Implement methods on `TokenKind`
* Add `as_unary_operator` which returns an `Option<UnaryOp>`
* Add `as_unary_arithmetic_operator` which returns an `Option<UnaryOp>`
(used for pattern parsing)
* Rename `is_unary` to `is_unary_arithmetic_operator` (used in the
linter)
resolves: #10752
## Test Plan
Verify that the existing test cases pass, no ecosystem changes, run the
Python based fuzzer on 3000 random inputs and run it on dozens of
open-source repositories.
## Summary
Fixes#10463
Add `FURB192` which detects violations like this:
```python
# Bad
a = sorted(l)[0]
# Good
a = min(l)
```
There is a caveat that @Skylion007 has pointed out, which is that
violations with `reverse=True` technically aren't compatible with this
change, in the edge case where the unstable behavior is intended. For
example:
```python
from operator import itemgetter
data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]
min(data, key=itemgetter(0)) # ('blue', 1)
sorted(data, key=itemgetter(0))[0] # ('blue', 1)
sorted(data, key=itemgetter(0), reverse=True)[-1] # ('blue', 2)
```
This seems like a rare edge case, but I can make the `reverse=True`
fixes unsafe if that's best.
## Test Plan
This is unit tested.
## References
https://github.com/dosisod/refurb/pull/333/files
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The behavior of `operator.itemgetter` changes when there's more than one
argument: `operator.itemgetter(0)` yields `r[0]`, rather than
`(r[0],)`.
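A quick illustration of that difference:
```python
from operator import itemgetter

row = ("red", 1, "large")
print(itemgetter(0)(row))     # 'red'       -- a single item, not a 1-tuple
print(itemgetter(0, 1)(row))  # ('red', 1)  -- multiple args yield a tuple
```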
Closes https://github.com/astral-sh/ruff/issues/11075.
## Summary
There is no class `integer` in Python, nor is there a type `integer`, so
I updated the docs to remove the backticks on these references, so that
they read as the representation of an integer rather than a code reference.
## Summary
Move `blanket-noqa` rule from the token checker to the noqa checker.
This allows us to make use of the line directives already computed in
the noqa checker.
## Test Plan
Verified test results are unchanged.
Resolves #10187
<details>
<summary>Old PR description; accurate through commit e86dd7d; probably
best to leave this fold closed</summary>
## Description of change
In the case of a printf-style format string with only one %-placeholder
and a variable at right (e.g. `"%s" % var`):
* The new behavior attempts to dereference the variable and then match
on the bound expression to distinguish between a 1-tuple (fix), n-tuple
(bug 🐛), or a non-tuple (fix). Dereferencing is via
`analyze::typing::find_binding_value`.
* If the variable cannot be dereferenced, then the type-analysis routine
is called to distinguish only tuple (no-fix) or non-tuple (fix). Type
analysis is via `analyze::typing::is_tuple`.
* If any of the above fails, the rule still fires, but no fix is
offered.
## Alternatives
* If the reviewers think that singling out the 1-tuple case is too
complicated, I will remove that.
* The ecosystem results show that no new fixes are detected. So I could
probably delete all the variable dereferencing code and code that tries
to generate fixes, tbh.
## Changes to existing behavior
**All the previous rule-firings and fixes are unchanged except for** the
"false negatives" in
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_1.py`. Those
previous "false negatives" are now true positives and so I moved them to
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py`.
<details>
<summary>Existing false negatives that are now true positives</summary>
```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:134:1: UP031 Use format specifiers instead of percent format
|
133 | # UP031 (no longer false negatives)
134 | 'Hello %s' % bar
| ^^^^^^^^^^^^^^^^ UP031
135 |
136 | 'Hello %s' % bar.baz
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:136:1: UP031 Use format specifiers instead of percent format
|
134 | 'Hello %s' % bar
135 |
136 | 'Hello %s' % bar.baz
| ^^^^^^^^^^^^^^^^^^^^ UP031
137 |
138 | 'Hello %s' % bar['bop']
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:138:1: UP031 Use format specifiers instead of percent format
|
136 | 'Hello %s' % bar.baz
137 |
138 | 'Hello %s' % bar['bop']
| ^^^^^^^^^^^^^^^^^^^^^^^ UP031
|
= help: Replace with format specifiers
```
One of them newly offers a fix.
```
# UP031 (no longer false negatives)
-'Hello %s' % bar
+'Hello {}'.format(bar)
```
This fix occurs because the new code dereferences `bar` to where it was
defined earlier in the file as a non-tuple:
```python
bar = {"bar": y}
```
---
</details>
## Behavior requiring new tests
Additionally, we now handle a few cases that we didn't previously test.
These cases are when a string has a single %-placeholder and the
righthand operand to the modulo operator is a variable **which can be
dereferenced.** One of those was shown in the previous section (the
"dereference non-tuple" case).
<details>
<summary>New cases handled</summary>
```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:126:1: UP031 [*] Use format specifiers instead of percent format
|
125 | t1 = (x,)
126 | "%s" % t1
| ^^^^^^^^^ UP031
127 | # UP031: deref t1 to 1-tuple, offer fix
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:130:1: UP031 Use format specifiers instead of percent format
|
129 | t2 = (x,y)
130 | "%s" % t2
| ^^^^^^^^^ UP031
131 | # UP031: deref t2 to n-tuple, this is a bug
|
= help: Replace with format specifiers
```
One of these offers a fix.
```
t1 = (x,)
-"%s" % t1
+"{}".format(t1[0])
# UP031: deref t1 to 1-tuple, offer fix
```
The other doesn't offer a fix because it's a bug.
---
</details>
---
</details>
## Changes to existing behavior
In the case of a string with a single %-placeholder and a single
ambiguous righthand argument to the modulo operator, (e.g. `"%s" % var`)
the rule now fires and offers a fix. We explain about this in the "fix
safety" section of the updated documentation.
## Documentation changes
I swapped the order of the "known problems" and the "examples" sections
so that the examples which describe the rule are first, before the
exceptions to the rule are described. I also tweaked the language to be
more explicit, as I had trouble understanding the documentation at
first. The "known problems" section is now "fix safety" but the content
is largely similar.
The diff of the documentation changes looks a little difficult unless
you look at the individual commits.
## Summary
I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.
@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.
So boxing type-params also on functions reduces the overall size of
`Stmt`.
## Test Plan
The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
Occasionally you intentionally have iterables of differing lengths. The
rule permits this by explicitly adding `strict=False`, but this was not
documented.
## Summary
The rule does not currently document how to avoid it when having
differing length iterables is intentional. This PR adds that to the rule
documentation.
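A sketch of the `zip(..., strict=False)` opt-out that the docs now describe (names are illustrative):
```python
names = ["ada", "grace", "alan"]
scores = [90, 85]

# Differing lengths are intentional here, so opt out explicitly:
pairs = list(zip(names, scores, strict=False))
```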
Add pylint rule invalid-hash-returned (PLE0309)
See https://github.com/astral-sh/ruff/issues/970 for rules
Test Plan: `cargo test`
TBD: from the description: "Strictly speaking `bool` is a subclass of
`int`, thus returning `True`/`False` is valid. To be consistent with
other rules (e.g.
[PLE0305](https://github.com/astral-sh/ruff/pull/10962)
invalid-index-returned), ruff will raise, compared to pylint which will
not raise."
(Supersedes #9152, authored by @LaBatata101)
## Summary
This PR replaces the current parser generated from LALRPOP to a
hand-written recursive descent parser.
It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.
For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.
## Test Plan
Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.
## Acknowledgements
- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)
---------
Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
Add pylint rule invalid-length-returned (PLE0303)
See https://github.com/astral-sh/ruff/issues/970 for rules
Test Plan: `cargo test`
TBD: from the description: "Strictly speaking `bool` is a subclass of
`int`, thus returning `True`/`False` is valid. To be consistent with
other rules (e.g.
[PLE0305](https://github.com/astral-sh/ruff/pull/10962)
invalid-index-returned), ruff will raise, compared to pylint which will
not raise."
## Summary
If the user is analyzing a script (i.e., we have no module path), it
seems reasonable to use the script name when trying to identify paths to
objects defined _within_ the script.
Closes https://github.com/astral-sh/ruff/issues/10960.
## Test Plan
Ran:
```shell
ruff check --isolated --select=B008 \
--config 'lint.flake8-bugbear.extend-immutable-calls=["test.A"]' \
test.py
```
On:
```python
class A: pass


def f(a=A()):
    pass
```
## Summary
This PR switches more callsites of `SemanticModel::is_builtin` to move
over to the new methods I introduced in #10919, which are more concise
and more accurate. I missed these calls in the first PR.
## Summary
This change adds a rule to detect functions declared `async` but lacking
any of `await`, `async with`, or `async for`. This resolves #9951.
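Illustrative positive and negative cases (function names are hypothetical):
```python
async def read_config():  # flagged: `async` but no await / async with / async for
    with open("config.toml") as f:
        return f.read()


async def fetch(client, url):  # not flagged: awaits
    return await client.get(url)
```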
## Test Plan
This change was tested by following
https://docs.astral.sh/ruff/contributing/#rule-testing-fixtures-and-snapshots
and adding positive and negative cases for each of `await` vs nothing,
`async with` vs `with`, and `async for` vs `for`.
## Summary
This PR moves the `Q003` rule to AST checker.
This is the final rule that used the docstring detection state machine
and thus this PR removes it as well.
resolves: #7595
resolves: #7808
## Test Plan
- [x] `cargo test`
- [x] Make sure there are no changes in the ecosystem
## Summary
Adds more aggressive logic to PLR1730, `if-stmt-min-max`
Closes #10907
## Test Plan
`cargo test`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Hi! 👋
Thanks for sharing ruff as software libre — it helps me keep Python code
quality up with pre-commit, both locally and CI 🙏
While studying the examples at
https://docs.astral.sh/ruff/rules/function-uses-loop-variable/#example I
noticed that the last of the examples had a bug: prior to this fix, `i`
was passed to the lambda for `x` rather than for `i`; the two were
mixed up. The reason it's easy to overlook is that addition is a
commutative operation, so `x + i` and `i + x` give the same result
(at least with integers), despite the mix-up. For proof, let me demo
the relevant part with before and after:
```python
In [1]: from functools import partial
In [2]: [partial(lambda x, i: (x, i), i)(123) for i in range(3)]
Out[2]: [(0, 123), (1, 123), (2, 123)]
In [3]: [partial(lambda x, i: (x, i), i=i)(123) for i in range(3)]
Out[3]: [(123, 0), (123, 1), (123, 2)]
```
Does that make sense?
## Test Plan
Was manually tested using IPython.
CC @r4f @grandchild
## Summary
If `RUF100` was included in a per-file-ignore, we respected it on cases
like `# noqa: F401`, but not the blanket variant (`# noqa`).
Closes https://github.com/astral-sh/ruff/issues/10906.
## Summary
Implement new rule: prefer augmented assignment (#8877). It checks for
assignment statements of the form `<expr> = <expr>
<binary-operator> …` and offers an unsafe fix to use augmented assignment
instead.
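For example:
```python
count = 0
count = count + 1  # flagged; the (unsafe) fix rewrites it as:
count += 1
```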
## Test Plan
1. Snapshot test is included in the PR.
2. Manually test with playground.
## Summary
This PR adds the implementation for the current
[flake8-bugbear](https://github.com/PyCQA/flake8-bugbear)'s B038 rule.
The B038 rule checks for mutation of loop iterators in the body of a for
loop and alerts when found.
Rationale:
Editing the loop iterator can lead to undesired behavior and is probably
a bug in most cases.
Closes #9511.
Note there will be a second iteration of B038 implemented in
`flake8-bugbear` soon, and this PR currently only implements the weakest
form of the rule.
I'd be happy to also implement the further improvements to B038 here in
ruff 🙂
See https://github.com/PyCQA/flake8-bugbear/issues/454 for more
information on the planned improvements.
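A minimal example of the pattern the rule flags (list contents are illustrative):
```python
fruits = ["apple", "banana", "cherry"]
for fruit in fruits:
    fruits.remove(fruit)  # mutating the iterable mid-loop: items get skipped
```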
## Test Plan
Re-using the same test file that I've used for `flake8-bugbear`, which
is included in this PR (look for the `B038.py` file).
Note: this is my first time using `rust` (beside `rustlings`) - I'd be
very happy about thorough feedback on what I could've done better
🙂 - Bring it on 😀
## Summary
Code cleanup for per-file ignores; use a struct instead of a tuple.
Named the structs for individual ignores and the list of ignores
`CompiledPerFileIgnore` and `CompiledPerFileIgnoreList`. Name choice is
because we already have a `PerFileIgnore` struct for a
pre-compiled-matchers form of the config. Name bikeshedding welcome.
## Test Plan
Refactor, should not change behavior; existing tests pass.
---------
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
## Summary
Implement `write-whole-file` (`FURB103`), part of #1348. This is largely
a copy and paste of `read-whole-file` #7682.
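A rough before/after sketch (file name and contents are illustrative):
```python
from pathlib import Path

contents = "hello\n"

# Flagged: open/write used only to write a whole file
with open("out.txt", "w") as f:
    f.write(contents)

# Preferred
Path("out.txt").write_text(contents)
```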
## Test Plan
Test fixture added.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
Improve `blanket-noqa` error message in cases where codes are provided
but not detected due to formatting issues. Namely `# noqa X100` (missing
colon) or `noqa : X100` (space before colon). The behavior is similar to
`NQA002` and `NQA003` from `flake8-noqa` mentioned in #850. The idea to
merge the rules into `PGH004` was suggested by @MichaReiser
https://github.com/astral-sh/ruff/pull/10325#issuecomment-2025535444.
## Test Plan
Test cases added to fixture.
Fixes #3172
## Summary
Allow prefixing [extend-]per-file-ignores patterns with `!` to negate
the pattern; listed rules / prefixes will be ignored in all files that
don't match the pattern.
## Test Plan
Added tests for the feature.
Rendered docs and checked rendered output.
## Summary
Came across this code while digging into the semantic model with
@AlexWaygood, and found it confusing because of how it splits
`push_scope` from the paired `pop_scope` (took me a few minutes to even
figure out if/where we were popping the pushed scope). Since this
"cleanup" is already totally split by node type, there doesn't seem to
be any gain in having it as a separate "step" rather than just
incorporating it into the traversal clauses for those node types.
I left the equivalent cleanup step alone for the expression case,
because in that case it is actually generic across several different
node types, and due to the use of the common `visit_generators` utility
there isn't a clear way to keep the pushes and corresponding pops
localized.
Feel free to just reject this if I've missed a good reason for it to
stay this way!
## Test Plan
Tests and clippy.
## Summary
Fixes#3011.
Type checkers currently allow forward references in all contexts in stub
files, and stubs frequently make use of this capability (although it
doesn't actually seem to be specc'd anywhere -- neither in PEP 484, nor
https://typing.readthedocs.io/en/latest/source/stubs.html#id6, nor the
CPython typing docs). Implementing it so that Ruff allows forward
references in _all contexts_ in stub files seems non-trivial, however
(or at least, I couldn't figure out how to do it easily), so this PR
does not do that. Perhaps it _should_; if we think this approach isn't
principled enough, I'm happy to close it and postpone changing anything
here.
However, this does reduce the number of F821 errors Ruff emits on
typeshed down from 76 to 2, which would mean that we could enable the
rule at typeshed. The remaining 2 F821 errors can be trivially fixed at
typeshed by moving definitions around; forward references in class bases
were really the only remaining places where there was a real _use case_
for forward references in stub files that Ruff wasn't yet allowing.
## Test plan
`cargo test`. I also ran this PR branch on typeshed to check to see if
there were any new false positives caused by the changes here; there
were none.
## Summary
`Path.read_bytes()` does not support any keyword arguments, so `FURB101`
should not be triggered if the file is opened in `rb` mode with any
keyword arguments.
## Test Plan
Move erroneous test to "Non-error" section of fixture.
## Summary
Historically, given:
```python
__all__ = [ # noqa: F822
"Bernoulli",
"Beta",
"Binomial",
]
```
The F822 violations would be attached to the `__all__`, so this `# noqa`
would be enforced for _all_ definitions in the list. This changed in
https://github.com/astral-sh/ruff/pull/10525 for the better, in that we
now use the range of each string. But these `# noqa` directives stopped
working.
This PR sets the `__all__` as a parent range in the diagnostic, so that
these directives are respected once again.
Closes https://github.com/astral-sh/ruff/issues/10795.
## Test Plan
`cargo test`
## Summary
Add new rule `pyupgrade - UP042` (I picked next available number).
Closes https://github.com/astral-sh/ruff/discussions/3867
Closes https://github.com/astral-sh/ruff/issues/9569
It should warn + provide a fix `class A(str, Enum)` -> `class
A(StrEnum)` for py311+.
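A sketch of the rewrite (class names are illustrative; requires Python 3.11+ for `StrEnum`):
```python
from enum import Enum, StrEnum


class ColorBefore(str, Enum):  # flagged on Python 3.11+
    RED = "red"


class ColorAfter(StrEnum):  # suggested replacement
    RED = "red"
```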
## Test Plan
Added UP042.py test.
## Notes
I did not find a way to call `remove_argument` 2 times consecutively, so
the automatic fixing works only for classes that inherit exactly `str,
Enum` (regardless of the order).
I also plan to extend this rule to support IntEnum in a follow-up PR.
## Summary
This PR adds a new semantic model flag to indicate that the checker is
inside an f-string replacement field. This will be used to ignore
certain checks if the target version doesn't support a specific feature
like PEP 701.
fixes: #10761
## Test Plan
Add a test case from the raised issue.
Fixes #3259
## Summary
Renames `UnnecessaryComprehensionAnyAll` to
`UnnecessaryComprehensionInCall` and extends the check to `sum`, `min`,
and `max`, in addition to `any` and `all`.
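For example, the extended check now also covers cases like this (names are illustrative):
```python
values = [1, 2, 3]

total = sum([v * 2 for v in values])  # flagged: unnecessary list comprehension
total = sum(v * 2 for v in values)    # preferred

smallest = min([v for v in values])   # flagged
smallest = min(v for v in values)     # preferred
```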
## Test Plan
Updated snapshot test.
Built docs locally and verified the docs for this rule still render
correctly.
## Summary
We may not have had access to this in the past, but in short, if the
diagnostic is related to a specific section of a docstring, it seems
better to highlight the section (via the header) than the _entire_
docstring.
This should be completely compatible with existing `# noqa` since it's
always inside of a multi-line string anyway, and in such cases the `#
noqa` is always placed at the end of the multiline string.
Closes https://github.com/astral-sh/ruff/issues/10736.
## Summary
We lost the per-rule ignores when these were migrated to the AST, so if
_any_ `Q` rule is enabled, they're now all enabled.
Closes https://github.com/astral-sh/ruff/issues/10724.
## Test Plan
Ran:
```shell
ruff check . --isolated --select Q --ignore Q000
ruff check . --isolated --select Q --ignore Q001
ruff check . --isolated --select Q --ignore Q002
ruff check . --isolated --select Q --ignore Q000,Q001
ruff check . --isolated --select Q --ignore Q000,Q002
ruff check . --isolated --select Q --ignore Q001,Q002
```
...against:
```python
'''
bad docsting
'''
a = 'single'
b = '''
bad multi line
'''
```
## Summary
An annotated lambda assignment within a class scope is often
intentional. For example, within a dataclass or Pydantic model, these
are treated as fields rather than methods (and so can be passed values
in constructors).
I originally wrote this to special-case dataclasses and Pydantic
models... But was left feeling like we'd see more false positives here
for little gain (an annotated lambda within a `class` is likely
intentional?). Open to opinions, though.
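A sketch of the kind of intentional pattern this avoids flagging (the dataclass is illustrative):
```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Sorter:
    # Annotated lambda assignment in a class body: treated as a field,
    # so callers can override it via the constructor.
    key: Callable[[int], int] = lambda x: x
```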
Closes https://github.com/astral-sh/ruff/issues/10718.
## Summary
Currently, [this
line](716688d44e/crates/ruff_linter/src/fix/edits.rs (L101))
assumes that the `noqa` comment begins with an octothorpe followed by a
space. (`# `) With anyone's random code, this of course is not always
true.
When there's a multi-byte character after the leading octothorpe, such
as
[`\u0085`](https://www.fileformat.info/info/unicode/char/85/index.htm),
we try slicing from within the character, causing a panic.
To fix this, the logic has been changed to remove unused `noqa`
directives while keeping any trailing comments, or to remove the whole
comment if it consists only of the unused `noqa`.
Fixes #10097.
## Test Plan
`cargo test`
## Summary
Implement FURB164 from issue #1348.
The relevant Refurb docs are here:
https://github.com/dosisod/refurb/blob/v2.0.0/docs/checks.md#furb164-no-from-float
I've changed the name from `no-from-float` to
`verbose-decimal-fraction-construction`.
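A rough before/after sketch, per the Refurb docs linked above (values are illustrative):
```python
from decimal import Decimal
from fractions import Fraction

# Flagged
d = Decimal.from_float(0.25)
f = Fraction.from_float(0.25)

# Preferred: the constructors accept floats directly
d = Decimal(0.25)
f = Fraction(0.25)
```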
## Test Plan
I've written it in the `FURB164.py`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
When `relative-imports-order = "closest-to-furthest"` is set, we should
_still_ put non-relative imports after relative imports. It's rare for
them to be in the same section, but _possible_ if you use
`known-local-folder`.
Closes https://github.com/astral-sh/ruff/issues/10655.
## Test Plan
New tests.
Also sorted this file:
```python
from ..models import ABC
from .models import Question
from .utils import create_question
from django_polls.apps.polls.models import Choice
```
With both:
- `isort view.py`
- `ruff check view.py --select I --fix`
And the following `pyproject.toml`:
```toml
[tool.ruff.lint.isort]
order-by-type = false
relative-imports-order = "closest-to-furthest"
known-local-folder = ["django_polls"]
[tool.isort]
profile = "black"
reverse_relative = true
known_local_folder = ["django_polls"]
```
I verified that Ruff and isort gave the same result, and that they
_still_ gave the same result when removing the relevant setting:
```toml
[tool.ruff.lint.isort]
order-by-type = false
known-local-folder = ["django_polls"]
[tool.isort]
profile = "black"
known_local_folder = ["django_polls"]
```
## Summary
Add a setting `extend-allowed-calls` to allow users to define their own
list of calls which allow boolean traps.
Resolves #10485.
Resolves #10356.
## Test Plan
Extended text fixture and added setting test.
## Summary
This PR fixes the bug in the `DTZ007` rule where it didn't check for the
presence of `%z` in f-strings. It also considers the string parts of
implicitly concatenated f-strings, for which I want to find a better
solution (#10308).
fixes: #10601
## Test Plan
Add test cases and update the snapshots.
## Summary
This PR updates the warning message for rule S305 to accurately reflect
the security concern over using ECB mode in block ciphers, which is
considered insecure compared to other modes like CBC or CTR. The
previous message incorrectly mentioned AES as a [block cipher
mode](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation),
which has been corrected to avoid confusion.
Ref:
c85576d903/bandit/blacklists/calls.py (L99-L102)
825fd7c990/crates/ruff_linter/src/rules/flake8_bandit/rules/suspicious_function_call.rs (L187-L216)
## Test Plan
No testing required as the change is limited to a minor change of
warning message update.
## Summary
The example for tab-after-comma (E242):
```python
a = 4,\t5
```
Use instead:
```python
a = 4, 3
```
is confusing since both the whitespace and the numbers are changed.
Change so the examples use the same numbers before/after.
## Test Plan
Untested.
- Clearly state in the documentation that passing `tz=None` is just as bad as not passing a `tz=` argument, from the perspective of these rules.
- Clearly state in the error messages exactly what the user is doing wrong, if the user is passing `tz=None` rather than failing to pass a `tz=` argument at all.
- Make error messages more concise, and separate out the suggested remedy from the thing that the user is identified as doing wrong.
Co-authored-by: Christian Clauss <cclauss@me.com>
## Summary
Similar to #10419, there was a case where there is a collision of C401
and C416 (as discussed in #10101).
Fixed this by implementing short-circuit for the comprehension of the
form `{x for x in foo}`.
## Test Plan
Extended `C401.py` with the case where `set` is not builtin function,
and divided the case where the short-circuit should occur.
Removed the last testcase of `print(f"{ {set(a for a in 'abc')} }")`
test as this is invalid as a python code, but should I keep this?
## Summary
This is just a nitpicky improvement, but I thought it'd be a good
opportunity to look at the ruff source.
> The rules list in the documentation is generated using the registry
order. Currently, flake8-logging is separated from the rest of the
flake8 plugins. This patch puts it next to them.
https://docs.astral.sh/ruff/rules/
If it makes sense, we could alternatively just sort the linters in
https://github.com/astral-sh/ruff/blob/main/crates/ruff_dev/src/generate_rules_table.rs.
Signed-off-by: Filipe Laíns <lains@riseup.net>
## Summary
This is not the holistic solution but just to fix that issue.
fixes: #10546
## Test Plan
Add a regression test for it and check the snapshots.
## Summary
Fixed a false positive in the rule `PLW1641`, where an explicit
assignment to `__hash__` was not counted as a definition of
`__hash__`. (Discussed in #10557.)
Also, added one new testcase.
## Test Plan
Checked on `cargo test` in `eq_without_hash.py`.
Before the change, for the assignment into `__hash__`, only `__hash__ =
None` was counted as an explicit definition of `__hash__` method.
Probably any assignment into `__hash__` property could be counted as an
explicit definition of hash, so I removed `value.is_none_literal_expr()`
check.
## Summary
Closes #10228
The PR makes the blank lines rules keep track of the cell status when
running on a notebook, and makes the rules not trigger when the line is
the first of the cell.
## Test Plan
The example given in #10228 is added as a fixture, along with a few
tests from the main blank lines fixtures.
## Summary
Continuing with #7595, this PR moves the `Q004` rule to the AST checker.
## Test Plan
- [x] Existing test cases should pass
- [x] No ecosystem updates
## Summary
PEP 420 says [nested namespace
packages](https://peps.python.org/pep-0420/#nested-namespace-packages)
are allowed, i.e. marking a directory as a namespace package marks all
subdirectories in the subtree as namespace packages.
`is_package` is modified to use `Path::starts_with` and the order of
checks is reversed to do in-memory checks first before hitting the disk.
## Test Plan
Added unit tests. Previously all tests were run with `namespace_packages
== &[]`. Verified that one of the tests was failing before changing the
implementation.
## Future Improvements
The `is_package_with_cache` can probably be rewritten to avoid repeated
calls to `Path::starts_with`, by caching all directories up to the
`namespace_root`:
```rust
let namespace_root = namespace_packages
    .iter()
    .filter(|namespace_package| path.starts_with(namespace_package))
    .min();
```
## Summary
Closes #10337.
I've fixed the code that counts usages of a variable.
The usage count inside a block is reset when it is followed by one of
these statements:
- continue
- break
- return
## Test Plan
Add test case.
## Summary
The fix for PYI025 is currently marked as unsafe in non-global scopes
for both `.py` and `.pyi` files, on the grounds that all global-scope
symbols in Python are implicitly exported from the module, so changing
the name of something in the global scope could break other modules that
import the module we're fixing. Unlike in `.py` files, however, imported
symbols are never implicitly re-exported from stub files. Symbols are
only understood by static analysis tools as being re-exported from stubs
if they are marked as explicit re-exports, which take three forms:
```py
from foo import * # all symbols from foo are re-exported from the stub
# the "redundant" alias marks it as an explicit re-export
# (note that the alias needs to be identical to the symbol's "actual" name
# in order for it to be a re-export)
from bar import barrr as barrr
# inclusion in __all__ also marks it as an explicit re-export,
# just like in `.py` files
from baz import bazzz
__all__ = ["bazzz"]
```
This is [specc'd in PEP
484](https://peps.python.org/pep-0484/#stub-files), and means that we
can mark the fix for PYI025 as safe in more cases for `.pyi` files.
## Test Plan
`cargo test`. An existing test case goes from being an unsafe fix to a
safe fix in a `.pyi` fixture. I also added a new fixture so we have
coverage of global-scope imports that are marked as re-exports using
"redundant" `from collections.abc import Set as Set` aliases.
## Summary
Continuing with https://github.com/astral-sh/ruff/issues/7595, this PR
moves the `Q001`, `Q002`, `Q003` rules to the AST based checker.
## Test Plan
Make sure all of the existing test cases pass and verify there are no
ecosystem changes.
## Summary
This error was found browsing
https://github.com/qarmin/Automated-Fuzzer/actions/runs/8396966850.
Which failed when trying to autofix the PT014 violation in the following
code:
```python
@pytest.mark.parametrize('data, spec', [(1.0, 1.0), (1.0, 1.0)])
def test_numbers(data, spec):
    ...
```
Investigation revealed that the implementation was not properly tested,
when the duplicate value was also the last in the list. In particular
the following function, which is in charge of finding the comma
following an element to create the suggested fix,
0a99bd84ce/crates/ruff_linter/src/rules/flake8_pytest_style/rules/parametrize.rs (L647-L651)
would find the next comma even if it was outside the list itself leading
to a lot of code being deleted.
This PR fixes that.
## Test Plan
Added misbehaving code to the test fixture.
## Summary
Ensures that we use the raw identifier as provided in the source code,
rather than the normalized Unicode identifier.
This _does_ mean that we treat these as two separate identifiers, and
_don't_ merge them, even though Python will treat them as the same
symbol:
```python
import numpy as ℂℇℊℋℌℍℎℐℑℒℓℕℤΩℨKÅℬℭℯℰℱℹℴ
import numpy as CƐgHHHhIILlNZΩZKÅBCeEFio
```
I think that's fine, this is super rare anyway and would likely be
confusing for users.
Closes https://github.com/astral-sh/ruff/issues/10528.
## Test Plan
`cargo test`
## Summary
Adds commas as an accepted separator between copyright years by default,
which is actually documented in one spot, but not currently accurate.
Fixes #9477.
## Summary
In https://github.com/astral-sh/ruff/pull/10341, we fixed some false
positives in `.pyi` files, but introduced others. This PR effectively
reverts the change in #10341 and fixes it in a slightly different way.
Instead of changing the _bindings_ we generate in the semantic model in
`.pyi` files, we instead change how we _resolve_ them.
Closes https://github.com/astral-sh/ruff/issues/10509.
- Improve clarity over the motivation for some rules
- Improve links to external references. In particular, reduce links to PEPs, as PEPs are generally historical documents rather than pieces of living documentation. Where possible, it's better to link to the official typing spec, the other docs at typing.readthedocs.io/en/latest, or the docs at docs.python.org/3/library/typing.html.
- Use more concise language in a few places
## Summary
Fix `E231` bug: Inconsistent catch compared to pycodestyle, such as when
dict nested in list. Resolves #10113.
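For example (values are illustrative):
```python
prices = [{"apple":1}, {"pear":2}]   # flagged: missing whitespace after ':'
prices = [{"apple": 1}, {"pear": 2}]  # fixed
```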
## Test Plan
Example from #10113 added to test fixture.
Fix a typo in the error message of rule C400
With the latest version of Ruff (0.3.3) if I have a `scratch.py` script
like that:
```python
from typing import Dict, List, Tuple
def generate_samples(test_cases: Dict) -> List[Tuple]:
    return list(
        (input, expected)
        for input, expected in zip(test_cases["input_value"], test_cases["expected_value"])
    )
```
and I run ruff
```shell
>>> ruff check scratch.py --select C400
>>> scratch.py:5:12: C400 Unnecessary generator (rewrite using `list()`
```
This PR fixes the error message from _"(rewrite using `list()`"_ to
_"(rewrite using `list()`)"_, and it fixes also the doc.
Related question: why I have this error message? The rule is not correct
in this case. Should I open an issue for that?
## Summary
This PR fixes a panic in the linter for `W605`.
Consider the following f-string:
```python
f"{{}}ab"
```
The `FStringMiddle` token would contain `{}ab`. Notice that the escaped
braces have _reduced_ the string. This means we cannot use the text
value from the token to determine the location of the escape sequence
but need to extract it from the source code.
fixes: #10434
## Test Plan
Add new test cases and update the snapshots.
## Summary
We're seeing failures in https://github.com/astral-sh/ruff/issues/10470
because `resolve_qualified_import_name` isn't guaranteed to return a
specific import if a symbol is accessible in two ways (e.g., you have
both `import logging` and `from logging import error` in scope, and you
want `logging.error`). This PR breaks up the failing tests such that the
imports aren't in the same scope.
Closes https://github.com/astral-sh/ruff/issues/10470.
## Test Plan
I added a `bindings.reverse()` to `resolve_qualified_import_name` to
ensure that the tests pass regardless of the binding order.
## Summary
I used `cargo-shear` (see
[tweet](https://twitter.com/boshen_c/status/1770106165923586395)) to
remove some unused dependencies that `cargo udeps` wasn't reporting.
## Test Plan
`cargo test`
## Summary
This adds automatic fixes for the `PT007` rule.
I am currently reviewing and adding Ruff rules to Home Assistant. One
rule is PT007, which has multiple hundred occurrences in the codebase,
but no automatic fix, and this is not fun to do manually, especially
because using Regexes are not really possible with this.
My knowledge of the Ruff codebase and Rust in general is not good and
this is my first PR here, so I hope it is not too bad.
One thing where I need help is: How can I have the transformed code to
be formatted automatically, instead of it being minimized as it does it
now?
## Test Plan
Using the existing fixtures and updated snapshots.
## Summary
In issue https://github.com/astral-sh/ruff/issues/6785 it is reported
that a docstring in the form of `''"assert" ' SAM macro definitions '''`
is autocorrected to `"""assert" ' SAM macro definitions '''` (note the
triple quotes one only one side), which breaks the python program due
`undetermined string lateral`.
* `Q002`: Not only would docstrings in the form of `''"assert" ' SAM
macro definitions '''` (single quotes) be autofixed wrongly, but also
e.g. `""'assert' ' SAM macro definitions '''` (double quotes). The bug
is present for docstrings in all scopes (e.g. module docstrings, class
docstrings, function docstrings)
* `Q000`: The autofix error is not only present for `Q002` (docstrings),
but also for inline strings (`Q000`). Therefore `s = ''"assert" ' SAM
macro definitions '''` will also be wrongly autofixed.
Note that situation in which the first string is non-empty can be fixed,
e.g. `'123'"assert" ' SAM macro definitions '''` -> `"123""assert" ' SAM
macro definitions '''` is valid.
## What
* Change FixAvailability of `Q000` and `Q002` to `Sometimes`
* Changed both rules such that docstrings/inline strings that cannot be
fixed are still reported as bad quotes via diagnostics, but no fix is
provided
## Test Plan
* For `Q002`: Add docstrings in different scopes that (partially) would
have been autofixed wrongly
* For `Q000`: Add inline strings that (partially) would have been
autofixed wrongly
Closes https://github.com/astral-sh/ruff/issues/6785
## Summary
The upstream category check here
fd26b29986/crates/ruff_linter/src/upstream_categories.rs (L54-L65)
was not working because the code is actually "E0001" not "PLE0001", I
changed it so it will detect the upstream category correctly.
I also sorted the upstream categories alphabetically, so that the
document generation will be deterministic.
## Test Plan
I compared the diff before and after the change.
Fixes #10426
## Summary
Fix rule B030 giving a false positive with Tuple operations like `+`.
[Playground](https://play.ruff.rs/17b086bc-cc43-40a7-b5bf-76d7d5fce78a)
```python
try:
    ...
except (ValueError,TypeError) + (EOFError,ArithmeticError):
    ...
```
## Reviewer notes
This is a little more convoluted than I was expecting: because we can
have valid nested tuples with operations applied to them, the flattening
logic has become a bit more complex (see the sketch below).
Shall I guard this behind `--preview`?
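For context, a sketch (my own example, not from the issue) of the kind of valid nested-tuple expression the flattening logic has to handle without flagging:
```python
try:
    ...
except (ValueError,) + ((TypeError,) + (OSError, RuntimeError)):
    # The handler still evaluates to a tuple of exception types at runtime,
    # so B030 has to recurse through the `+` operations instead of flagging
    # the binary expression itself.
    ...
```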
## Test Plan
Unit tested.
## Summary
Implement `singledispatchmethod-function` from pylint, part of #970.
This is essentially a copy-paste of #8934 for the `@singledispatchmethod`
decorator.
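As a rough illustration of what the rule flags, assuming it mirrors pylint's `singledispatchmethod-function` (my own example, not the fixture):
```python
from functools import singledispatchmethod


class Converter:
    # Presumably flagged: a staticmethod is a plain function, so
    # `functools.singledispatch` (not `singledispatchmethod`) is the
    # appropriate decorator here.
    @singledispatchmethod
    @staticmethod
    def convert(value): ...

    # The intended use of `@singledispatchmethod`: an instance method.
    @singledispatchmethod
    def render(self, value): ...
```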
## Test Plan
Test fixture added.
## Summary
Short-circuit implementation mentioned in #10403.
I implemented this by extending C400:
- Made `UnnecessaryGeneratorList` carry information about whether the
short-circuiting occurred (to emit the appropriate diagnostic)
- Added an additional check in the `unnecessary_generator_list`
function for whether the short circuit applies.
Please give me suggestions if you think this isn't the best way to
handle this :)
## Test Plan
Extended `C400.py` a little, adding cases where:
- The code could be collapsed into a single `list()` call, e.g.
`list(x for x in range(3))` -> `list(range(3))`
- The code couldn't be collapsed into a single `list()` call, e.g.
`list(2 * x for x in range(3))` -> `[2 * x for x in range(3)]`
- The `list` function is not the builtin, so the code should not be
modified in any way.
## Summary
Trailing ellipses in objects defined in `typing.TYPE_CHECKING` blocks
might be meaningful (they might be declaring a stub). Thus, we should
skip the `unnecessary-placeholder` (`PIE790`) rule in such contexts.
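For example (my own sketch of the pattern that should now be skipped):
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    def fetch(url: str) -> bytes:
        """Stub declaration that exists only for type checking."""
        ...  # a meaningful stub marker here, not an unnecessary placeholder
```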
Closes #10358.
## Test Plan
`cargo nextest run`
## Summary
Given `del X`, we'll typically add a `BindingKind::Deletion` to `X` to
shadow the current binding. However, if the deletion is inside of a
conditional operation, we _won't_, as in:
```python
def f():
    global X
    if X > 0:
        del X
```
We will, however, track it as a reference to the binding. This PR adds
the expression context to those resolved references, so that we can
detect that the `X` in `global X` was "assigned to".
Closes https://github.com/astral-sh/ruff/issues/10397.
## Summary
The previous documentation sounded as if typing a mutable default as a
`ClassVar` were optional. However, it is not, as not doing so causes a
`ValueError`. The snippet below was tested in Python's interactive
shell:
```
>>> from dataclasses import dataclass
>>> @dataclass
... class A:
...     mutable_default: list[int] = []
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'list'> for field mutable_default is not allowed: use default_factory
>>>
```
This behavior is also documented in Python's docs, see
[here](https://docs.python.org/3/library/dataclasses.html#mutable-default-values):
> [...] the
[dataclass()](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass)
decorator will raise a
[ValueError](https://docs.python.org/3/library/exceptions.html#ValueError)
if it detects an unhashable default parameter. The assumption is that if
a value is unhashable, it is mutable. This is a partial solution, but it
does protect against many common errors.
And
[here](https://docs.python.org/3/library/dataclasses.html#class-variables)
it is documented why it works if it is typed as a `ClassVar`:
> One of the few places where
[dataclass()](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass)
actually inspects the type of a field is to determine if a field is a
class variable as defined in [PEP
526](https://peps.python.org/pep-0526/). It does this by checking if the
type of the field is typing.ClassVar. If a field is a ClassVar, it is
excluded from consideration as a field and is ignored by the dataclass
mechanisms. Such ClassVar pseudo-fields are not returned by the
module-level
[fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields)
function.
In this PR I have changed the documentation to make it a little bit
clearer that not using `ClassVar` makes the code invalid.
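For contrast, the variant that does work, with the mutable default annotated as a `ClassVar`:
```python
from dataclasses import dataclass
from typing import ClassVar


@dataclass
class A:
    # Annotated as ClassVar, the attribute is excluded from the dataclass
    # fields, so the mutable default no longer triggers the ValueError.
    mutable_default: ClassVar[list[int]] = []
```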
## Summary
Ignoring all lines until the first logical line does not match
pycodestyle's behavior. This PR therefore removes the `if
state.is_not_first_logical_line` guard that skipped the checks before
the first logical line, and applies it only to `E302`.
For example, in the snippet below a rule violation should be detected on
the second comment and on the import.
```python
# first comment
# second comment
import foo
```
Fixes #10374
## Test Plan
Add test cases, update the snapshots, and verify the ecosystem check output.
## Summary
This PR updates the `StringLike::FString` variant to use `ExprFString`
instead of `FStringLiteralElement`.
For context, the reason it used `FStringLiteralElement` is that the node
is actually the string part of an f-string ("foo" in `f"foo{x}"`). But,
this is inconsistent with other variants where the captured value is the
_entire_ string.
This is also problematic w.r.t. implicitly concatenated strings. Any
rule that works with `StringLike::FString` doesn't account for the
string parts of implicitly concatenated f-strings. For example, we
don't flag the confusable character in the first part of `"𝐁ad" f"𝐁ad
string"`, but only in the second part
(https://play.ruff.rs/16071c4c-a1dd-4920-b56f-e2ce2f69c843).
### Update `PYI053`
_This is included in this PR because otherwise it requires a temporary
workaround to be compatible with the old logic._
This PR also updates the `PYI053` (`string-or-bytes-too-long`) rule for
f-strings to consider _all_ the visible characters in an f-string,
including the ones that are implicitly concatenated. This is consistent
with how implicitly concatenated strings and bytes are handled.
For example,
```python
def foo(
    # We count all the characters here
    arg1: str = '51 character ' 'stringgggggggggggggggggggggggggggggggg',
    # But not here because of the `{x}` replacement field which _breaks_ them up into two chunks
    arg2: str = f'51 character {x} stringgggggggggggggggggggggggggggggggggggggggggggg',
) -> None: ...
```
This PR fixes it to consider all _visible_ characters inside an f-string,
which includes expressions as well.
fixes: #10310
fixes: #10307
## Test Plan
Add new test cases and update the snapshots.
## Review
To facilitate the review process, the changes have been split into two
commits: one with the code change and the other with the test cases and
updated snapshots.
Re-implementation of https://github.com/astral-sh/ruff/pull/5845, but
instead of deprecating the option I toggle the default. Now users can
_opt in_ via the setting, which will give them an unsafe fix to remove
the import. Otherwise, we raise violations but do not offer a fix. The
setting is a bit of a misnomer in either case; maybe we'll still want
to remove it someday.
As discussed there, I think the safe fix should be to import it as an
alias, but I'm not sure; ideal behavior would probably require support
for offering multiple fixes. I think we should remove the fix entirely
and consider it separately.
Closes https://github.com/astral-sh/ruff/issues/5697
Supersedes https://github.com/astral-sh/ruff/pull/5845
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR adds methods on `FString` to iterate over the two different kinds
of elements it can have: literals and expressions. This is similar to
the methods we have on `ExprFString`.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Fix "TRIO115 false positive with with sleep(var) where var starts as 0"
#9935 based on the discussion in the issue.
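For reference, a rough reconstruction of the pattern from the issue (my own sketch, not the fixture code):
```python
import trio


async def poll():
    delay = 0
    for _ in range(3):
        # Previously flagged by TRIO115 because `delay` starts as 0, but it is
        # reassigned below, so this is not always a zero-duration sleep.
        await trio.sleep(delay)
        delay += 1
```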
## Test Plan
Issue code added to fixture
## Summary
I used `codespell` and `gramma` to identify misspellings and grammar
errors throughout the codebase and fixed them. I tried not to make any
controversial changes, but feel free to revert as you see fit.
## Summary
We had a report of a test failure on a specific architecture, and
looking into it, I think the test assumes that the hash keys are
iterated in a specific order. This PR thus adds a variant to our
settings display macro specifically for maps and sets. Like `CacheKey`,
it sorts the keys when printing.
Closes https://github.com/astral-sh/ruff/issues/10359.