Compare commits

...

170 Commits
0.14.8 ... main

Author SHA1 Message Date
Phong Do b0bc990cbf
[`pyupgrade`] Fix parsing named Unicode escape sequences (`UP032`) (#21901)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/19771

Fixes incorrect parsing of Unicode named escape sequences like `Hey
\N{snowman}` in `FormatString`, which were being incorrectly split into
separate literal and field parts instead of being treated as a single
literal unit.

## Problem

The `FormatString` parser incorrectly handles Unicode named escape
sequences:
- **Current**: `Hey \N{snowman}` is parsed into 2 parts `Literal("Hey
\N")` & `Field("snowman")`
- **Expected**: `Hey \N{snowman}` should be parsed into 1 part
`Literal("Hey \N{snowman}")`

This affects f-string conversion rules when fixing `UP032` that rely on
proper format string parsing.

## Solution

I modified `parse_literal` to detect and handle Unicode named escape
sequences before parsing single characters:
- Introduced a flag to track when a backslash is "available" to escape
something.
- When the flag is `true`, and the text starts with `N{`, try to parse
the complete Unicode escape sequence as one unit, and set the flag to
`false` after parsing successfully.
- Set the flag to `false` when the backslash is already consumed.
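
A minimal Python sketch of the detection idea (hypothetical helper; the actual fix lives in the Rust `FormatString` parser):

```py
# Hypothetical sketch: treat `\N{...}` as a single literal unit when scanning,
# rather than splitting it into a literal and a field at the `{`.
import re

NAMED_ESCAPE = re.compile(r"\\N\{[^}]*\}")

def is_named_escape_start(text: str, i: int) -> bool:
    """True if position `i` starts a Unicode named escape like `\\N{snowman}`."""
    return NAMED_ESCAPE.match(text, i) is not None

s = r"Hey \N{snowman} {x}"
assert is_named_escape_start(s, 4)       # `\N{snowman}` stays one literal unit
assert not is_named_escape_start(s, 16)  # `{x}` is still a real replacement field
```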

## Manual Verification

`"\N{angle}AOB = {angle}°".format(angle=180)` 

**Result**

```bash
 def foo():
-    "\N{angle}AOB = {angle}°".format(angle=180)
+    f"\N{angle}AOB = {180}°"

Would fix 1 error.
```

`"\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\N{snowman} {snowman}".format(snowman=1)
+    f"\N{snowman} {1}"

Would fix 1 error.
```

`"\\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\\N{snowman} {snowman}".format(snowman=1)
+    f"\\N{1} {1}"

Would fix 1 error.
```

## Test Plan

- Added test cases (happy case, invalid case, edge case) for
`FormatString` when parsing Unicode escape sequence.
- Updated snapshots.
2025-12-16 16:33:39 -05:00
Aria Desires ad3de4e488
[ty] Improve rendering of default values for function args (#22010)
## Summary

We're actually quite good at computing this but the main issue is just
that we compute it at the type-level and so wrap it in `Literal[...]`.
So just special-case the rendering of these to omit `Literal[...]` and
fall back to `...` in cases where the thing we'll show is probably
useless (i.e. `x: str = str`).
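
For example (hypothetical hover rendering, just to illustrate the change):

```py
def connect(host: str = "localhost", retries: int = 3) -> None: ...

# Hypothetical signature rendering:
#   before: (host: str = Literal["localhost"], retries: int = Literal[3]) -> None
#   after:  (host: str = "localhost", retries: int = 3) -> None
# A default whose rendering would be useless (e.g. `x: str = str`) falls back to `...`.
```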

Fixes https://github.com/astral-sh/ty/issues/1882
2025-12-16 13:39:19 -05:00
Douglas Creager 2214a46139
[ty] Don't use implicit superclass annotation when converting a class constructor into a `Callable` (#22011)
This fixes a bug @zsol found running ty against pyx. His original repro
is:

```py
class Base:
    def __init__(self) -> None: pass

class A(Base):
    pass

def foo[T](callable: Callable[..., T]) -> T:
    return callable()

a: A = foo(A)
```

The call at the bottom would fail, since we would infer `() -> Base` as
the callable type of `A`, when it should be `() -> A`.

The issue was how we add implicit annotations to `self` parameters.
Typically, we turn it into `self: Self`. But in cases where we don't
need to introduce a full typevar, we turn it into `self: [the class
itself]` — in this case, `self: Base`. Then, when turning the class
constructor into a callable, we would see this non-`Self` annotation and
think that it was important and load-bearing.

The fix is that we skip all implicit annotations when determining
whether the `self` annotation should take precedence in the callable's
return type.
2025-12-16 13:37:11 -05:00
Douglas Creager c02bd11b93
[ty] Infer typevar specializations for `Callable` types (#21551)
This is a first stab at solving
https://github.com/astral-sh/ty/issues/500, at least in part, with the
old solver. We add a new `TypeRelation` that lets us opt into using
constraint sets to describe when a typevar is assignable to some
type, and then use that to calculate a constraint set that describes
when two callable types are assignable. If the callable types contain
typevars, that constraint set will describe their valid specializations.
We can then walk through all of the ways the constraint set can be
satisfied, and record a type mapping in the old solver for each one.
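
A hedged illustration of the kind of call this helps with (hypothetical names; exact inferred types may differ):

```py
from typing import Callable

def apply[T](f: Callable[[int], T]) -> T:
    return f(0)

def describe(n: int) -> str:
    return str(n)

# Assignability of `(int) -> str` to `Callable[[int], T]` now yields a
# constraint set whose solutions include the specialization `T = str`.
reveal_type(apply(describe))
```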

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-16 09:16:49 -08:00
Carl Meyer eeaaa8e9fe
[ty] propagate classmethod-ness through decorators returning Callables (#21958)
Fixes https://github.com/astral-sh/ty/issues/1787

## Summary

Allow method decorators returning Callables to presumptively propagate
"classmethod-ness" in the same way that they already presumptively
propagate "function-like-ness". We can't actually be sure that this is
the case, based on the decorator's annotations, but (along with other
type checkers) we heuristically assume it to be the case for decorators
applied via decorator syntax.
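
A minimal sketch of the pattern this covers (hypothetical decorator whose annotations only promise a `Callable`):

```py
from typing import Any, Callable

def passthrough(f: Any) -> Callable[..., Any]:
    # Returns the decorated object unchanged, but the return annotation
    # erases its classmethod-ness at the type level.
    return f

class Config:
    @passthrough
    @classmethod
    def default(cls) -> "Config":
        return cls()

# Heuristically still treated as a classmethod, so this call is accepted:
Config.default()
```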

## Test Plan

Added mdtest.
2025-12-16 09:16:40 -08:00
Rob Hand 7f7485d608
[ty] Fixed benchmark markdown auto-linenumbers (#22008) 2025-12-16 16:38:11 +01:00
Aria Desires d755f3b522
[ty] Improve syntax-highlighting of constants (#22006)
## Summary

BLAH1 getting different highlighting/classification from BLAH is very
distracting and undesirable.
2025-12-16 09:23:40 -05:00
Aria Desires 83168a1bb1
[ty] highlight special type syntax in hovers as xml (#22005)
## Summary

These types look better rendered as XML than python

## Test Plan

<img width="532" height="299" alt="Screenshot 2025-12-16 at 8 40 56 AM"
src="https://github.com/user-attachments/assets/42d9abfa-3f4a-44ba-b6b4-6700ab06832d"
/>
2025-12-16 14:20:35 +00:00
Andrew Gallant 0f373603eb [ty] Suppress keyword argument completions unless we're in the "arguments"
Otherwise, given a case like this:

```
(lambda foo: (<CURSOR> + 1))(2)
```

we'll offer _argument_ completions for `foo` at the cursor position.
While we do actually want to offer completions for `foo` in this
context, it is currently difficult to do so. But we definitely don't
want to offer completions for `foo` as an argument to a function here,
which is what we were doing.

We also add an end-to-end test here to verify that the actual label we
offer in completion suggestions includes the `=` suffix.

Closes https://github.com/astral-sh/ruff/pull/21970
2025-12-16 08:00:04 -05:00
Andrew Gallant cc23af944f [ty] Tweak how we show qualified auto-import completions
Specifically, we make two changes:

1. We only show `import ...` when there is an actual import edit.
2. We now show the text we will insert. This means that when we
insert a qualified symbol, the qualification will show in the
completions suggested.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant 0589700ca1 [ty] Prefer unqualified imports over qualified imports
It seems like this is perhaps a better default:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790

For me personally, I think I'd prefer the qualified
variant. But I can easily see this changing based on
the specific scenario. I think the thing that pushes
me toward prioritizing the unqualified variant is that
the user could have typed the qualified variant themselves,
but they didn't. So we should perhaps prioritize the
form they typed, which is unqualified.
2025-12-16 08:00:04 -05:00
Andrew Gallant 43d983ecae [ty] Add tests for how qualified auto-imports are shown
Specifically, we want to test that something like `import typing`
should only be shown when we are actually going to insert an import.
*And* that when we insert a qualified name, then we should show it
as such in the completion suggestions.
2025-12-16 08:00:04 -05:00
Andrew Gallant 5c69bb564c [ty] Add a test capturing sub-optimal auto-import heuristic
Specifically, here, we'd probably like to add `TypedDict` to
the existing `from typing import ...` statement instead of
using the fully qualified `typing.TypedDict` form.

To test this, we add another snapshot mode for including imports
that a completion will insert when selected.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant 89fed85a8d [ty] Switch completion tests to use "insertion" text in snapshots
I think this gives us better tests overall, since it
tells us what is actually going to be inserted. Plus,
we'll want this in the next commit.
2025-12-16 08:00:04 -05:00
Zanie Blue 051f6896ac
[ty] Remove extra headings and split examples in the `overrides` configuration docs (#21994)
Having these as markdown headings ends up being weird in the reference
documentation, e.g., before:

<img width="1071" height="779" alt="Screenshot 2025-12-15 at 8 45 25 PM"
src="https://github.com/user-attachments/assets/2118d4f1-f557-46f3-a4b6-56c406cf9aca"
/>
2025-12-16 06:57:06 -06:00
David Peter 5b1d3ac9b9
[ty] Document `TY_CONFIG_FILE` (#22001) 2025-12-16 13:15:24 +01:00
Micha Reiser b2b0ad38ea
[ty] Cache `KnownClass::to_class_literal` (#22000) 2025-12-16 13:04:12 +01:00
Micha Reiser 01c0a3e960
[ty] Fix benchmark assertion (#22003) 2025-12-16 12:24:54 +01:00
Zanie Blue 5c942119f8
Add uv and ty to the Ruff README (#21996)
Matching the style of the uv README
2025-12-16 10:37:15 +01:00
David Peter 2acf1cc0fd
[ty] Infer precise types for `isinstance(…)` calls involving typevars (#21999)
## Summary

Infer `Literal[True]` for `isinstance(x, C)` calls when `x: T` and `T`
has a bound `B` that satisfies the `isinstance` check against `C`.
Similar for constrained typevars.
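
A hedged sketch of the kind of case this covers:

```py
class Animal: ...
class Dog(Animal): ...

def greet[T: Dog](pet: T) -> None:
    # `T` is bounded by `Dog`, and every `Dog` is an `Animal`, so the check
    # can be inferred as `Literal[True]`.
    reveal_type(isinstance(pet, Animal))
```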

closes https://github.com/astral-sh/ty/issues/1895

## Test Plan

* New Markdown tests
* Verified that the example in the linked ticket checks without errors
2025-12-16 10:34:30 +01:00
Micha Reiser 4fdbe26445
[ty] Use `FxHashMap` in `Signature::has_relation_to` (#21997) 2025-12-16 10:10:45 +01:00
Charlie Marsh 682d29c256
[ty] Avoid enforcing standalone expression for tests in f-strings (#21967)
## Summary

Based on what we do elsewhere and my understanding of "standalone"
here...

Closes https://github.com/astral-sh/ty/issues/1865.
2025-12-15 22:31:04 -05:00
Zanie Blue 8e13765b57
[ty] Use `title` for configuration code fences in ty reference documentation (#21992)
Part of https://github.com/astral-sh/ty/pull/1904
2025-12-15 16:36:08 -05:00
Douglas Creager 7d3b7c5754
[ty] Consistent ordering of constraint set specializations, take 2 (#21983)
In https://github.com/astral-sh/ruff/pull/21957, we tried to use
`union_or_intersection_elements_ordering` to provide a stable ordering
of the union and intersection elements that are created when determining
which type a typevar should specialize to. @AlexWaygood [pointed
out](https://github.com/astral-sh/ruff/pull/21551#discussion_r2616543762)
that this won't work, since that provides a consistent ordering within a
single process run, but does not provide a stable ordering across runs.

This is an attempt to produce a proper stable ordering for constraint
sets, so that we end up with consistent diagnostic and test output.

We do this by maintaining a new `source_order` field on each interior
BDD node, which records when that node's constraint was added to the
set. Several of the BDD operators (`and`, `or`, etc) now have
`_with_offset` variants, which update each `source_order` in the rhs to
be larger than any of the `source_order`s in the lhs. This is what
causes that field to be in line with (a) when you add each constraint to
the set, and (b) the order of the parameters you provide to `and`, `or`,
etc. Then we sort by that new field before constructing the
union/intersection types when creating a specialization.
2025-12-15 14:24:08 -05:00
RasmusNygren d6a5bbd91c
[ty] Remove invalid statement-keyword completions in for-statements (#21979)
In `for x in <CURSOR>` statements it's only valid to provide expressions
that eventually evaluate to an iterable. While it's extremely difficult
to know if something can evaluate to an iterable in the general case,
there are some suggestions we know can never lead to an iterable. Most
keywords fall into that category, so we remove them here.

## Summary
This suppresses statement-keywords from auto-complete suggestions in
`for x in <CURSOR>` statements where we know they can never be valid, as
whatever is typed has to (at some point) evaluate to an iterable.

It handles the core issue from
https://github.com/astral-sh/ty/issues/1774, but there are a lot of related
cases that will probably have to be handled piecewise.

## Test Plan
New tests and verifying in the playground.
2025-12-15 12:56:34 -05:00
Micha Reiser 1df6544ad8
[ty] Avoid caching trivial is-redundant-with calls (#21989) 2025-12-15 18:45:03 +01:00
Dylan 4e1cf5747a
Fluent formatting of method chains (#21369)
This PR implements a modification (in preview) to fluent formatting for
method chains: We break _at_ the first call instead of _after_.

For example, we have the following diff between `main` and this PR (with
`line-length=8` so I don't have to stretch out the text):

```diff
 x = (
-    df.merge()
+    df
+    .merge()
     .groupby()
     .agg()
     .filter()
 )
```

## Explanation of current implementation

Recall that we traverse the AST to apply formatting. A method chain,
while read left-to-right, is stored in the AST "in reverse". So if we
start with something like

```python
a.b.c.d().e.f()
```

then the first syntax node we meet is essentially `.f()`. So we have to
peek ahead. And we actually _already_ do this in our current fluent
formatting logic: we peek ahead to count how many calls we have in the
chain to see whether we should be using fluent formatting or not.

In this implementation, we actually _record_ this number inside the enum
for `CallChainLayout`. That is, we make the variant `Fluent` hold an
`AttributeState`. This state can either be:

- The number of call-like attributes preceding the current attribute
- The state `FirstCallOrSubscript` which means we are at the first
call-like attribute in the chain (reading from left to right)
- The state `BeforeFirstCallOrSubscript` which means we are in the
"first group" of attributes, preceding that first call.

In our example, here's what it looks like at each attribute:

```
a.b.c.d().e.f @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d().e @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d @ Fluent(FirstCallOrSubscript)
a.b.c @ Fluent(BeforeFirstCallOrSubscript)
a.b @ Fluent(BeforeFirstCallOrSubscript)
```

Now, as we descend down from the parent expression, we pass along this
little piece of state and modify it as we go to track where we are. This
state doesn't do anything except when we are in `FirstCallOrSubscript`,
in which case we add a soft line break.

Closes #8598

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-15 09:29:50 -06:00
Douglas Creager cbfecfaf41
[ty] Avoid stack overflow when calculating inferable typevars (#21971)
When we calculate which typevars are inferable in a generic context, the
result might include more than the typevars bound by the generic
context. The canonical example is a generic method of a generic class:

```py
class C[A]:
    def method[T](self, t: T): ...
```

Here, the inferable typevar set of `method` contains `Self` and `T`, as
you'd expect. (Those are the typevars bound by the method.) But it also
contains `A@C`, since the implicit `Self` typevar is defined as `Self:
C[A]`. That means when we call `method`, we need to mark `A@C` as
inferable, so that we can determine the correct mapping for `A@C` at the
call site.

Fixes https://github.com/astral-sh/ty/issues/1874
2025-12-15 10:25:33 -05:00
Aria Desires 8f530a7ab0
[ty] Add "qualify ..." code fix for undefined references (#21968)
## Summary

If `import warnings` exists in the file, we will suggest an edit of
`deprecated -> warnings.deprecated` as "qualify warnings.deprecated"
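
For instance (hypothetical snippet), the code fix would produce:

```diff
 import warnings

-@deprecated("use new_api instead")
+@warnings.deprecated("use new_api instead")
 def old_api() -> None: ...
```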

## Test Plan

Should test more cases...
2025-12-15 10:14:36 -05:00
Micha Reiser 5372bb3440
[ty] Use jemalloc on linux (#21975)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 16:04:34 +01:00
Micha Reiser d08e414179
Update MSRV to 1.90 (#21987) 2025-12-15 14:29:11 +01:00
Alex Waygood 0b918ae4d5
[ty] Improve check enforcing that an overloaded function must have an implementation (#21978)
## Summary

- Treat `if TYPE_CHECKING` blocks the same as stub files (the feature
requested in https://github.com/astral-sh/ty/issues/1216); see the sketch
after this list
- We currently only allow `@abstractmethod`-decorated methods to omit
the implementation if they're methods in classes that have _exactly_
`ABCMeta` as their metaclass. That seems wrong -- `@abstractmethod` has
the same semantics if a class has a subclass of `ABCMeta` as its
metaclass. This PR fixes that too. (I'm actually not _totally_ sure we
should care what the class's metaclass is at all -- see discussion in
https://github.com/astral-sh/ty/issues/1877#issue-3725937441... but the
change this PR is making seems less wrong than what we have currently,
anyway.)
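
A small sketch of the first change (hypothetical names):

```py
from typing import TYPE_CHECKING, overload

if TYPE_CHECKING:
    # Inside an `if TYPE_CHECKING` block, overloads may omit the
    # implementation, just as they can in a stub file.
    @overload
    def parse(x: int) -> int: ...
    @overload
    def parse(x: str) -> str: ...
```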

Fixes https://github.com/astral-sh/ty/issues/1216

## Test Plan

Mdtests and snapshots
2025-12-15 08:56:35 +00:00
renovate[bot] 9838f81baf
Update actions/checkout digest to 8e8c483 (#21982)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-15 06:52:52 +00:00
Dhruv Manilawala ba47349c2e
[ty] Use `ParamSpec` without the attr for inferable check (#21934)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1820

## Test Plan

Add new mdtests.

Ecosystem changes removes all false positives.
2025-12-15 11:04:28 +05:30
Bhuminjay Soni 04f9949711
[ty] Emit diagnostic when a type variable with a default is followed by one without a default (#21787)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-14 19:35:37 +00:00
Leandro Braga 8bc753b842
[ty] Fix callout syntax in configuration mkdocs (#1875) (#21961) 2025-12-14 10:21:54 +01:00
Peter Law c7eea1f2e3
Update debug_assert which pointed at missing method (#21969)
## Summary

I assume that the class has been renamed or split since this assertion
was created.

## Test Plan

Compiled locally, nothing more. Relying on CI given the triviality of
this change.
2025-12-13 17:56:59 -05:00
Charlie Marsh be8eb92946
[ty] Add support for `__qualname__` and other implicit class attributes (#21966)
## Summary

Closes https://github.com/astral-sh/ty/issues/1873
2025-12-13 17:10:25 -05:00
Simon Lamon a544c59186
[ty] Emit a diagnostic when frozen dataclass inherits a non-frozen dataclass and the other way around (#21962)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-13 20:59:26 +00:00
Alex Waygood bb464ed924
[ty] Use unqualified names for displays of `TypeAliasType`s and unbound `ParamSpec`s/`TypeVar`s (#21960) 2025-12-13 20:23:16 +00:00
Alex Waygood f57917becd
fix typo in `fuzz/README.md` (#21963) 2025-12-13 18:21:46 +00:00
David Peter 82a7598aa8
[ty] Remove now-unnecessary Divergent check (#21935)
## Summary

This check is not necessary thanks to
https://github.com/astral-sh/ruff/pull/21906.
2025-12-13 16:32:09 +01:00
Micha Reiser e2ec2bc306
Use datatest for formatter tests (#21933) 2025-12-13 08:02:22 +00:00
Douglas Creager b413a6dec4
[ty] Allow gradual lower/upper bounds in a constraint set (#21957)
We now allow the lower and upper bounds of a constraint to be gradual.
Before, we would take the top/bottom materializations of the bounds.
This required us to pass in whether the constraint was intended for a
subtyping check or an assignability check, since that would control
whether we took the "restrictive" or "permissive" materializations,
respectively.

Unfortunately, doing so means that we lost information about whether the
original query involves a non-fully-static type. This would cause us to
create specializations like `T = object` for the constraint `T ≤ Any`,
when it would be nicer to carry through the gradual type and produce `T
= Any`.
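
Roughly the kind of case this describes (hypothetical example):

```py
from typing import Any

def first[T](items: list[T]) -> T:
    return items[0]

def f(xs: list[Any]) -> None:
    # Carrying the gradual bound through lets us specialize `T = Any`
    # rather than the less useful `T = object`.
    reveal_type(first(xs))
```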

We're not currently using constraint sets for subtyping checks, nor are
we going to in the very near future. So for now, we're going to assume
that constraint sets are always used for assignability checks, and allow
the lower/upper bounds to not be fully static. Once we get to the point
where we need to use constraint sets for subtyping checks, we will
consider how best to record this information in constraints.
2025-12-12 22:18:30 -05:00
Shunsuke Shibayama e19c050386
[ty] disallow explicit specialization of type variables themselves (#21938)
## Summary

This PR makes explicit specialization of a type variable itself an
error, and the result of the specialization is `Unknown`.

The change also fixes https://github.com/astral-sh/ty/issues/1794.

## Test Plan

mdtests updated
new corpus test

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-12 15:49:20 -08:00
Alex Waygood 5a2aba237b
[ty] Improve diagnostics for unsupported binary operations and unsupported augmented assignments (#21947)
## Summary

This PR takes the improvements we made to unsupported-comparison
diagnostics in https://github.com/astral-sh/ruff/pull/21737, and extends
them to other `unsupported-operator` diagnostics.

## Test Plan

Mdtests and snapshots
2025-12-12 21:53:29 +00:00
Aria Desires ca5f099481
[ty] update implicit root docs (#21955)
## Summary

./tests is now no longer an implicit root, per
https://github.com/astral-sh/ruff/pull/21817
2025-12-12 16:30:23 -05:00
Alex Waygood a722df6a73
[ty] Enable even more goto-definition on inlay hints (#21950)
## Summary

Working on py-fuzzer recently (AKA, a Python project!) reminded me how
cool our "inlay hint goto-definition feature" is. So this PR adds a
bunch more of that!

I also made a couple of other minor changes to type display. For
example, in the playground, this snippet:

```py
def f(): ...
reveal_type(f.__get__)
```

currently leads to this diagnostic:

```
Revealed type: `<method-wrapper `__get__` of `f`>` (revealed-type) [Ln 2, Col 13]
```

But the fact that we have backticks both around the type display and
inside the type display isn't _great_ there. This PR changes it to

```
Revealed type: `<method-wrapper '__get__' of function 'f'>` (revealed-type) [Ln 2, Col 13]
```

which avoids the nested-backticks issue in diagnostics, and is more
similar to our display for various other `Type` variants such as
class-literal types (`<class 'Foo'>`, etc., not ``<class `Foo`>``).

## Test Plan

inlay snapshots added; mdtests updated
2025-12-12 12:57:38 -05:00
Brent Westbrook dec4154c8a
Document known lambda formatting deviations from Black (#21954)
Summary
--

Following #8179, we now format long lambda expressions a bit more like
Black, preferring to keep long parameter lists on a single line, but we
go one step further to break the body itself across multiple lines and
parenthesize it if it's still too long. This PR documents both the
stable deviation that breaks parameters across multiple lines, and the
new preview deviation that breaks the body instead.

I also fixed a couple of typos in the section immediately above my
addition.

Test Plan
--

I tested all of the snippets here against `main` for the preview
behavior, our playground for the stable behavior, and Black's playground
for their behavior
2025-12-12 12:57:09 -05:00
Carl Meyer 69d1bfbebc
[ty] fix hover type on named expression target (#21952)
## Summary

What it says on the tin.

## Test Plan

Added hover test.
2025-12-12 09:30:50 -08:00
Micha Reiser 90b29c9e87
Bump benchmark dependencies (#21951) 2025-12-12 17:05:57 +00:00
Brent Westbrook 0ebdebddd8
Keep lambda parameters on one line and parenthesize the body if it expands (#21385)
## Summary

This PR makes two changes to our formatting of `lambda` expressions:
1. We now parenthesize the body expression if it expands
2. We now try to keep the parameters on a single line

The latter of these fixes #8179:

Black formatting and this PR's formatting:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        ),
    )
```

Stable Ruff formatting

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self,
        *args,
        **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
    )
```

We don't parenthesize the body expression here because the call to
`aaaa...` has its own parentheses, but adding a binary operator shows
the new parenthesization:

```diff
@@ -3,7 +3,7 @@
         c,
         d,
         e,
-        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
-            *args, **kwargs
-        ) + 1,
+        f=lambda self, *args, **kwargs: (
+            aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
+        ),
     )
```

This is actually a new divergence from Black, which formats this input
like this:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        )
        + 1,
    )
```

But I think this is an improvement, unlike the case from #8179.

One other, smaller benefit is that because we now add parentheses to
lambda bodies, we also remove redundant parentheses:

```diff
 @pytest.mark.parametrize(
     "f",
     [
-        lambda x: (x.expanding(min_periods=5).cov(x, pairwise=True)),
-        lambda x: (x.expanding(min_periods=5).corr(x, pairwise=True)),
+        lambda x: x.expanding(min_periods=5).cov(x, pairwise=True),
+        lambda x: x.expanding(min_periods=5).corr(x, pairwise=True),
     ],
 )
 def test_moment_functions_zero_length_pairwise(f):
```

## Test Plan

New tests taken from #8465 and probably a few more I should grab from
the ecosystem results.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-12 12:02:25 -05:00
Aria Desires d5546508cf
[ty] Improve resolution of absolute imports in tests (#21817)
By teaching desperate resolution to try every possible ancestor that
doesn't have an `__init__.py(i)` when resolving absolute imports.

* Fixes https://github.com/astral-sh/ty/issues/1782
2025-12-12 11:59:06 -05:00
Andrew Gallant 3ac58b47bd [ty] Support `__all__ += submodule.__all__`
... and also `__all__.extend(submodule.__all__)`.

I originally left out support for this since I was unclear on whether
we'd really need it. But it turns out this is used somewhat frequently.
For example, in `numpy`.

See the comments on the new `Imports` type for how we approach this.
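
Both forms are now understood when computing a package's re-exports (illustrative module names):

```py
# package/__init__.py
from package import submodule

__all__ = ["package_level_name"]
__all__ += submodule.__all__
__all__.extend(submodule.__all__)
```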
2025-12-12 10:11:04 -05:00
Andrew Gallant a2b138e789 [ty] Change frequency of invalid `__all__` debug message
This was being emitted for every symbol we checked, which
is clearly too frequent. This switches to emitting it once
per module.
2025-12-12 10:11:04 -05:00
Alex Waygood ff0ed4e752
[ty] Add `KnownUnion::to_type()` (#21948) 2025-12-12 14:06:35 +00:00
Micha Reiser bc8efa2fd8
[ty] Classify `cls` as class parameter (#21944) 2025-12-12 13:54:37 +01:00
Micha Reiser 4249736d74
[ty] Stabilize rename (#21940) 2025-12-12 13:52:47 +01:00
Andrew Gallant 0181568fb5 [ty] Ignore `__all__` for document and workspace symbol requests
We also ignore names introduced by import statements, which seems to
match pylance behavior.

Fixes astral-sh/ty#1856
2025-12-12 07:29:29 -05:00
Micha Reiser 8cc7c993de
[ty] Attach db to background request handler task (#21941) 2025-12-12 11:31:13 +00:00
Micha Reiser 315bf80eed
[ty] Fix outdated version in publish diagnostics after `didChange` (#21943) 2025-12-12 11:30:56 +00:00
Carl Meyer 0138cd238a
[ty] avoid fixpoint unioning of types containing current-cycle Divergent (#21910)
Partially addresses https://github.com/astral-sh/ty/issues/1732

## Summary

Don't union the previous type in fixpoint iteration if the previous type
contains a `Divergent` from the current cycle and the latest type does
not. The theory here, as outlined by @mtshiba at
https://github.com/astral-sh/ty/issues/1732#issuecomment-3609937420, is
that oscillation can't occur by removing and then reintroducing a
`Divergent` type repeatedly, since `Divergent` types are only introduced
at the start of fixpoint iteration.

## Test Plan

Removes a `Divergent` type from the added mdtest, doesn't otherwise
regress any tests.
2025-12-11 19:52:34 -08:00
Shunsuke Shibayama 5e42926eee
[ty] improve bad specialization results & error messages (#21840)
## Summary

This PR includes the following changes:

* When attempting to specialize a non-generic type (or a type that is
already specialized), the result is `Unknown`. Also, the error message
is improved.
* When an implicit type alias is incorrectly specialized, the result is
`Unknown`. Also, the error message is improved.
* When only some of the type alias bounds and constraints are not
satisfied, not all substitutions are `Unknown`.
* Double specialization is prohibited. e.g. `G[int][int]`

Furthermore, after applying this PR, the fuzzing tests for seeds 1052
and 4419, which panic in main, now pass.
This is because the false recursions on type variables have been
removed.

```python
# name_2[0] => Unknown
class name_1[name_2: name_2[0]]:
    def name_4(name_3: name_2, /):
        if name_3:
            pass

#  (name_5 if unique_name_0 else name_1)[0] => Unknown
def name_4[name_5: (name_5 if unique_name_0 else name_1)[0], **name_1](): ...
```

## Test Plan

New corpus test
mdtest files updated
2025-12-11 19:21:34 -08:00
Jack O'Connor ddb7645e9d
[ty] support `NewType`s of `float` and `complex` (#21886)
Fixes https://github.com/astral-sh/ty/issues/1818.
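
A minimal example of what is now supported:

```py
from typing import NewType

Probability = NewType("Probability", float)
Impedance = NewType("Impedance", complex)

p = Probability(0.25)
z = Impedance(3 + 4j)
```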
2025-12-12 00:43:09 +00:00
Amethyst Reese 3f63ea4b50
Prepare 0.14.9 release (#21927)
- **Changelog and docs**
- **metadata**
2025-12-11 13:17:52 -08:00
Douglas Creager c8851ecf70
[ty] Defer all parameter and return type annotations (#21906)
As described in astral-sh/ty#1729, we previously had a salsa cycle when
inferring the signature of many function definitions.

The most obvious case happened when (a) the function was decorated, (b)
it had no PEP-695 type params, and (c) annotations were not always
deferred (e.g. in a stub file). We currently evaluate and apply function
decorators eagerly, as part of `infer_function_definition`. Applying a
decorator requires knowing the signature of the function being
decorated. There were two places where signature construction called
`infer_definition_types` cyclically.

The simpler case was that we were looking up the generic context and
decorator list of the function to determine whether it has an implicit
`self` parameter. Before, we used `infer_definition_types` to determine
that information. But since we're in the middle of signature
construction for the function, we can just thread the information
through directly.

The harder case is that signature construction requires knowing the
inferred parameter and return type annotations. When (b) and (c) hold,
those type annotations are inferred in `infer_function_definition`! (In
theory, we've already finished that by the time we start applying
decorators, but signature construction doesn't know that.)

If annotations are deferred, the params/return annotations are inferred
in `infer_deferred_types`; if there are PEP-695 type params, they're
inferred in `infer_function_type_params`. Both of those are different
salsa queries, and don't induce this cycle.

So the quick fix here is to always defer inference of the function
params/return, so that they are always inferred under a different salsa
query.

A more principled fix would be to apply decorators lazily, just like we
construct signatures lazily. But that is a more invasive fix.

Fixes astral-sh/ty#1729

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-11 15:00:18 -05:00
Micha Reiser d442433e93
[ty] Fix workspace symbols to return members too (#21926) 2025-12-11 20:22:21 +01:00
Amethyst Reese c055d665ef
Document range suppressions, reorganize suppression docs (#21884)
- **Reorganize suppression documentation, document range suppressions**
- **Note preview mode requirement**

Issue #21874, #3711
2025-12-11 11:16:36 -08:00
Amethyst Reese 7a578ce833
Ignore ruff:isort like ruff:noqa in new suppressions (#21922)
## Summary

Ignores `#ruff:isort` when parsing suppressions similar to `#ruff:noqa`.
Should clear up ecosystem issues in #21908

## Test Plan

cargo tests
2025-12-11 11:04:28 -08:00
Micha Reiser 34f7a04ef7
[ty] Handle `Definition`s in `SemanticModel::scope` (#21919) 2025-12-11 18:04:57 +00:00
Micha Reiser c9fe4e2703
[ty] Attach salsa db when running ide tests for easier debugging (#21917) 2025-12-11 19:03:52 +01:00
Micha Reiser fbeeb050af
[ty] Don't show hover for expressions with no inferred type (#21924) 2025-12-11 18:55:32 +01:00
Carl Meyer 4fdb4e8219
[ty] avoid unions of generic aliases of the same class in fixpoint (#21909)
Partially addresses https://github.com/astral-sh/ty/issues/1732
Fixes https://github.com/astral-sh/ty/issues/1800

## Summary

At each fixpoint iteration, we union the "previous" and "current"
iteration types, to ensure that the type can only widen at each
iteration. This prevents oscillation and ensures convergence.

But some unions triggered by this behavior (in particular, unions of
differently-specialized generic-aliases of the same class) never
simplify, and cause spurious errors. Since we haven't seen examples of
oscillating types involving class-literal or generic-alias types, just
don't union those.

There may be more thorough/principled ways to avoid undesirable unions
in fixpoint iteration, but this narrow change seems like it results in
strict improvement.

## Test Plan

Removes two false positive `unsupported-class-base` in mdtests, and
several in the ecosystem, without causing other regression.
2025-12-11 09:53:43 -08:00
Andrew Gallant c548ef2027 [ty] Squash false positive logs for failing to find `builtins` as a real module
I recently started noticing this showing up in the logs for every scope
based completion request:

```
2025-12-11 11:25:35.704329935 DEBUG request{id=29 method="textDocument/completion"}:map_stub_definition: Module `builtins` not found while looking in parent dirs
```

And in particular, it was repeated several times. This was confusing to
me because, well, of course `builtins` should resolve.

This particular code path comes from looking for the docstrings
of completion items. This involves a spelunking that ultimately
tries to resolve a "real" module if the stub doesn't have available
docstrings. But I guess there is no "real" `builtins` module, so
`resolve_real_module` fails. Which is fine, but the noisy logs were
annoying since this is an expected case.

So here, we carve out a short circuit for `builtins` and also improve
the log message.
2025-12-11 12:50:08 -05:00
Luca Chiodini 5a9d6a91ea
[ty] Uniformly use "not supported" in diagnostics (#21916) 2025-12-11 15:03:55 +00:00
Micha Reiser c9155d5e72
[ty] Reduce size of ty-ide snapshots (#21915) 2025-12-11 13:36:16 +00:00
Andrew Gallant 8647844572 [ty] Adjust scope completions to use all reachable symbols
Fixes astral-sh/ty#1294
2025-12-11 08:26:15 -05:00
Andrew Gallant 1dcb7f89f1 [ty] Rename `all_members_of_scope` to `all_end_of_scope_members`
This reflects more precisely its behavior based on how it uses the
use-def map.
2025-12-11 08:26:15 -05:00
Andrew Gallant c1c45a6a13 [ty] Remove `all_` prefix from some routines on UseDefMap
These routines don't return *all* symbols/members, but rather,
only *for* a particular scope. We do specifically want to add
some routines that return *all* symbols/members, and this naming
scheme made that confusing. It was also inconsistent with other
routines like `all_end_of_scope_symbol_declarations` which *do*
return *all* symbols.
2025-12-11 08:26:15 -05:00
Brent Westbrook c51727708a
Enable `--document-private-items` for `ruff_python_formatter` (#21903) 2025-12-11 08:23:10 -05:00
Denys Zhak 27912d46b1
Remove `BackwardsTokenizer` based `parenthesized_range` references in `ruff_linter` (#21836)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-11 13:04:57 +01:00
David Peter 71540c03b6
[ty] Revert "Do not infer types for invalid binary expressions in annotations" (#21914)
See discussion here:
https://github.com/astral-sh/ruff/pull/21911#discussion_r2610155157
2025-12-11 11:57:45 +00:00
Micha Reiser aa27925e87
Skip over trivia tokens after re-lexing (#21895) 2025-12-11 10:45:18 +00:00
Charlie Marsh 5c320990f7
[ty] Avoid inferring types for invalid binary expressions in string annotations (#21911)
## Summary

Closes https://github.com/astral-sh/ty/issues/1847.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-11 09:40:19 +01:00
Dhruv Manilawala 24ed28e314
[ty] Improve overload call resolution tracing (#21913)
This PR improves the overload call resolution tracing messages as:
- Use `trace` level instead of `debug` level
- Add a `trace_span` which contains the call arguments and signature
- Remove the signature from individual tracing messages
2025-12-11 12:28:45 +05:30
Carl Meyer 2d0681da08
[ty] fix missing heap_size on Salsa query (#21912) 2025-12-10 18:34:00 -08:00
Ibraheem Ahmed 29bf2cd201
[ty] Support implicit type of `cls` in signatures (#21771)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20517 to support the
implicit type of `cls` in `@classmethod` signatures. Part of
https://github.com/astral-sh/ty/issues/159.
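
A hedged example of the behavior (hypothetical revealed types):

```py
class Widget:
    @classmethod
    def create(cls):  # `cls` is implicitly `type[Self]`
        return cls()

class Button(Widget): ...

reveal_type(Widget.create())  # hypothetically: Widget
reveal_type(Button.create())  # hypothetically: Button
```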
2025-12-10 16:56:20 -05:00
Jack O'Connor 1b44d7e2a7
[ty] add `SyntheticTypedDictType` and implement `normalized` and `is_equivalent_to` (#21784) 2025-12-10 20:36:36 +00:00
Ibraheem Ahmed a2fb2ee06c
[ty] Fix disjointness checks with type-of `@final` classes (#21770)
## Summary

We currently perform a subtyping check, similar to what we were doing
for `@final` instances before
https://github.com/astral-sh/ruff/pull/21167, which is incorrect, e.g.
we currently consider `type[X[Any]]` and `type[X[T]]` disjoint (where
`X` is `@final`).
2025-12-10 15:15:10 -05:00
Douglas Creager 3e00221a6c
[ty] Fix negation upper bounds in constraint sets (#21897)
This fixes the logic error that @sharkdp
[found](https://github.com/astral-sh/ruff/pull/21871#discussion_r2605755588)
in the constraint set upper bound normalization logic I introduced in
#21871.

I had originally claimed that `(T ≤ α & ~β)` should simplify into `(T ≤
α) ∧ ¬(T ≤ β)`. But that also suggests that `T ≤ ~β` should simplify to
`¬(T ≤ β)` on its own, and that's not correct.

The correct simplification is that `~α` is an "atomic" type, not an
"intersection" for the purposes of our upper bound simplifcation. So `(T
≤ α & ~β)` should simplify to `(T ≤ α) ∧ (T ≤ ~β)`. That is, break apart
the elements of a (proper) intersection, regardless of whether each
element is negated or not.

This PR fixes the logic, adds a test case, and updates the comments to
be hopefully more clear and accurate.
2025-12-10 15:07:50 -05:00
Ibraheem Ahmed 5dc0079e78
[ty] Fix disjointness checks on `@final` class instances (#21769)
## Summary

This was left unfinished in
https://github.com/astral-sh/ruff/pull/21167. This is required to fix
our disjointness checks with type-of a final class, which is currently
broken, and blocking https://github.com/astral-sh/ty/issues/159.
2025-12-10 14:17:22 -05:00
Micha Reiser f7528bd325
[ty] Checking files without extension (#21867) 2025-12-10 16:47:41 +00:00
Avasam 59b92b3522
Document `*.pyw` is included by default in preview (#21885)
Document `*.pyw` is included by default in preview mode.
Originally requested in https://github.com/astral-sh/ruff/issues/13246
and added in https://github.com/astral-sh/ruff/pull/20458

Co-authored-by: Amethyst Reese <amethyst@n7.gg>
2025-12-10 16:43:55 +00:00
Micha Reiser 9ceec359a0
[ty] Add mypy primer check comparing same revisions (#21864) 2025-12-10 16:37:17 +00:00
Micha Reiser 2dd412c89a
Update README to remove production warning (#21899) 2025-12-10 17:25:41 +01:00
Carl Meyer 951766d1fb
[ty] default-specialize class-literal types in assignment to generic-alias types (#21883)
Fixes https://github.com/astral-sh/ty/issues/1832, fixes
https://github.com/astral-sh/ty/issues/1513

## Summary

A class object `C` (for which we infer an unspecialized `ClassLiteral`
type) should always be assignable to the type `type[C]` (which is
default-specialized, if `C` is generic). We already implemented this for
most cases, but we missed the case of a generic final type, where we
simplify `type[C]` to the `GenericAlias` type for the default
specialization of `C`. So we also need to implement this assignability
of generic `ClassLiteral` types as-if default-specialized.
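
A small illustration of the case that was missed (hypothetical class):

```py
from typing import final

@final
class Box[T]:
    def __init__(self, value: T) -> None:
        self.value = value

# `Box` (an unspecialized class literal) should be assignable to `type[Box]`,
# which for a final generic class simplifies to the generic alias for its
# default specialization.
factory: type[Box] = Box
```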

## Test Plan

Added mdtests that failed before this PR.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-10 17:18:08 +01:00
David Peter 7bf50e70a7
[ty] Generics: Respect typevar bounds when matching against a union (#21893)
## Summary

Respect typevar bounds and constraints when matching against a union.
For example:

```py
def accepts_t_or_int[T_str: str](x: T_str | int) -> T_str:
    raise NotImplementedError

reveal_type(accepts_t_or_int("a"))  # ok, reveals `Literal["a"]`
reveal_type(accepts_t_or_int(1))  # ok, reveals `Unknown`

class Unrelated: ...

# error: [invalid-argument-type] "Argument type `Unrelated` does not
# satisfy upper bound `str` of type variable `T_str`"
accepts_t_or_int(Unrelated())
```

Previously, the last call succeeded without any errors. Worse than that,
we also incorrectly solved `T_str = Unrelated`, which often led to
downstream errors.

closes https://github.com/astral-sh/ty/issues/1837

## Ecosystem impact

Looks good!

* Lots of removed false positives, often because we previously selected
a wrong overload for a generic function (because we didn't respect the
typevar bound in an earlier overload).
* We now understand calls to functions accepting an argument of type
`GenericPath: TypeAlias = AnyStr | PathLike[AnyStr]`. Previously, we
would incorrectly match a `Path` argument against the `AnyStr` typevar
(violating its constraints), but now we match against `PathLike`.

## Performance

Another regression on `colour`. This package uses `numpy` heavily. And
`numpy` is the codebase that originally led me to this bug. The fix
here allows us to infer more precise `np.array` types in some cases, so
it's reasonable that we just need to perform more work.

The fix here also requires us to look at more union elements when we
would previously short-circuit incorrectly, so some more work needs to
be done in the solver.

## Test Plan

New Markdown tests
2025-12-10 14:58:57 +01:00
Ibraheem Ahmed ff7086d9ad
[ty] Infer type of implicit `cls` parameter in method bodies (#21685)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20922 to infer
unannotated `cls` parameters as `type[Self]` in method bodies.

Part of https://github.com/astral-sh/ty/issues/159.
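
A small sketch of the body-side inference (hypothetical revealed type):

```py
class Registry:
    @classmethod
    def make(cls):
        # Unannotated `cls` is now inferred as `type[Self]` in the body.
        reveal_type(cls)  # hypothetically: type[Self]
        return cls()
```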
2025-12-10 10:31:28 +01:00
Charlie Marsh d2aabeaaa2
[ty] Respect `kw_only` from parent class (#21820)
## Summary

Closes https://github.com/astral-sh/ty/issues/1769.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-10 10:12:18 +01:00
Dhruv Manilawala 8293afe2ae
Remove hack about unknown options warning (#21887)
This hack was introduced to reduce the amount of warnings that users
would get while transitioning to the new settings format
(https://github.com/astral-sh/ruff/pull/19787) but now that we're near
the beta release, it would be good to remove this.
2025-12-10 07:09:31 +00:00
Jack O'Connor aaadf16b1b
[ty] bump dependencies to pull in Salsa support for `ordermap` (#21854) 2025-12-09 19:08:03 -08:00
Douglas Creager c343e94ac5
[ty] Simplify union lower bounds and intersection upper bounds in constraint sets (#21871)
In a constraint set, it's not useful for an upper bound to be an
intersection type, or for a lower bound to be a union type. Both of
those can be rewritten as simpler BDDs:

```
T ≤ α & β  ⇒ (T ≤ α) ∧ (T ≤ β)
T ≤ α & ¬β ⇒ (T ≤ α) ∧ ¬(T ≤ β)
α | β ≤ T  ⇒ (α ≤ T) ∧ (β ≤ T)
```

We were seeing performance issues on #21551 when _not_ performing this
simplification. For instance, `pandas` was producing some constraint
sets involving intersections of 8-9 different types. Our sequent map
calculation was timing out calculating all of the different permutations
of those types:

```
t1 & t2 & t3 → t1
t1 & t2 & t3 → t2
t1 & t2 & t3 → t3
t1 & t2 & t3 → t1 & t2
t1 & t2 & t3 → t1 & t3
t1 & t2 & t3 → t2 & t3
```

(and then imagine what that looks like for 9 types instead of 3...)

With this change, all of those permutations are now encoded in the BDD
structure itself, which is very good at simplifying that kind of thing.

Pulling this out of #21551 for separate review.
2025-12-09 19:49:17 -05:00
Douglas Creager 270b8d1d14
[ty] Collapse `never` paths in constraint set BDDs (#21880)
#21744 fixed some non-determinism in our constraint set implementation
by switching our BDD representation from being "fully reduced" to being
"quasi-reduced". We still deduplicate identical nodes (via salsa
interning), but we removed the logic to prune redundant nodes (one with
identical outgoing true and false edges). This ensures that the BDD
"remembers" all of the individual constraints that it was created with.

However, that comes at the cost of creating larger BDDs, and on #21551
that was causing performance issues. `scikit-learn` was producing a
function signature with dozens of overloads, and we were trying to
create a constraint set that would map a return type typevar to any of
those overload's return types. This created a combinatorial explosion in
the BDD, with by far most of the BDD paths leading to the `never`
terminal.

This change updates the quasi-reduction logic to prune nodes that are
redundant _because both edges lead to the `never` terminal_. In this
case, we don't need to "remember" that constraint, since no assignment
to it can lead to a valid specialization. So we keep the "memory" of our
quasi-reduced structure, while still pruning large unneeded portions of
the BDD structure.

Pulling this out of https://github.com/astral-sh/ruff/pull/21551 for
separate review.
2025-12-09 18:22:54 -05:00
Brent Westbrook f3714fd3c1
Fix leading comment formatting for lambdas with multiple parameters (#21879)
## Summary

This is a follow-up to #21868. As soon as I started merging #21868 into
#21385, I realized that I had missed a test case with `**kwargs` after
the `*args` parameter. Such a case is supposed to be formatted on one
line like:

```py
# input
(
    lambda
    # comment
    *x,
    **y: x
)

# output
(
    lambda
    # comment
    *x, **y: x
)
```

which you can still see on the
[playground](https://play.ruff.rs/bd88d339-1358-40d2-819f-865bfcb23aef?secondary=Format),
but on `main` after #21868, this was formatted as:

```py
(
    lambda
    # comment
    *x,
    **y: x
)
```

because the leading comment on the first parameter caused the whole
group around the parameters to break.

Instead of making these comments leading comments on the first
parameter, this PR makes them leading comments on the parameters list as
a whole.

## Test Plan

New tests, and I will also try merging this into #21385 _before_ opening
it for review this time.

<hr>

(labeling `internal` since #21868 should not be released before some
kind of fix)
2025-12-09 18:15:12 -05:00
David Peter a9be810c38
[ty] Type inference for `@asynccontextmanager` (#21876)
## Summary

This PR adds special handling for `asynccontextmanager` calls as a
temporary solution for https://github.com/astral-sh/ty/issues/1804. We
will be able to remove this soon once we have support for generic
protocols in the solver.

closes https://github.com/astral-sh/ty/issues/1804
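
A small example of the kind of code this special-cases (hypothetical names):

```py
from contextlib import asynccontextmanager
from typing import AsyncIterator

class Connection: ...

@asynccontextmanager
async def connect() -> AsyncIterator[Connection]:
    conn = Connection()
    try:
        yield conn
    finally:
        pass  # close the connection here

async def main() -> None:
    async with connect() as conn:
        reveal_type(conn)  # hypothetically: Connection
```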

## Ecosystem

```diff
+ tests/test_downloadermiddleware.py:305:56: error[invalid-argument-type] Argument to bound method `download` is incorrect: Expected `Spider`, found `Unknown | Spider | None`
+ tests/test_downloadermiddleware.py:305:56: warning[possibly-missing-attribute] Attribute `spider` may be missing on object of type `Crawler | None`
```
These look like true positives

```diff
+ pymongo/asynchronous/database.py:1021:35: error[invalid-assignment] Object of type `(AsyncClientSession & ~AlwaysTruthy & ~AlwaysFalsy) | (_ServerMode & ~AlwaysFalsy) | Unknown | Primary` is not assignable to `_ServerMode | None`
+ pymongo/asynchronous/database.py:1025:17: error[invalid-argument-type] Argument to bound method `_conn_for_reads` is incorrect: Expected `_ServerMode`, found `_ServerMode | None`
```

Known problems or true positives, just caused by the new type for
`session`

```diff
- src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[Unknown, None] | Connection | AsyncConnection`
+ src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[AsyncConnection, None] | Connection | AsyncConnection`
```

Just a more concrete type

```diff
- src/prefect/flow_engine.py:1277:24: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/server/api/server.py:696:49: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/task_engine.py:1426:24: error[missing-argument] No argument provided for required parameter `cls`
```

Good

## Test Plan

* Adapted and newly added Markdown tests
* Tested on internal codebase
2025-12-09 22:49:00 +01:00
Brent Westbrook 0bec5c0362
Fix comment placement in lambda parameters (#21868)
Summary
--

This PR makes two changes to comment placement in lambda parameters. First,
we now insert a line break if the first parameter has a leading comment:

```py
# input
(
    lambda
    * # comment 2
    x:
    x
)

# main
(
    lambda # comment 2
    *x: x
)

# this PR
(
    lambda
	# comment 2
    *x: x
)
```

Note the missing space in the output from main. This case is currently
unstable on main. Also note that the new formatting is more consistent with
our stable formatting in cases where the lambda has its own dangling comment:

```py
# input
(
    lambda # comment 1
    * # comment 2
    x:
    x
)

# output
(
    lambda  # comment 1
    # comment 2
    *x: x
)
```

and when a parameter without a comment precedes the split `*x`:

```py
# input
(
    lambda y,
    * # comment 2
    x:
    x
)

# output
(
    lambda y,
    # comment 2
    *x: x
)
```

This does change the stable formatting, but I think such cases are rare
(expecting zero hits in the ecosystem report); it also fixes an existing
instability, and it should not change any code we've previously formatted.

Second, this PR modifies the comment placement such that `# comment 2` in
these outputs is still a leading comment on the parameter. This is also not
the case on main, where it becomes a [dangling lambda
comment](https://play.ruff.rs/3b29bb7e-70e4-4365-88e0-e60fe1857a35?secondary=Comments).
This doesn't cause any instability that I'm aware of on main, but it does
cause problems when trying to adjust the placement of dangling lambda
comments in #21385. Changing the placement in this way should not affect any
formatting here.

Test Plan
--

New lambda tests, plus existing tests covering the cases above with multiple
comments around the parameters (see lambda.py 122-143, and 122-205 or so more
broadly)

I also checked manually that the comments are now leading on the
parameter:

```shell
❯ cargo run --bin ruff_python_formatter -- --emit stdout --target-version 3.10 --print-comments <<EOF
(
    lambda
        # comment 2
    *x: x
)
EOF
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
     Running `target/debug/ruff_python_formatter --emit stdout --target-version 3.10 --print-comments`
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
21..32, None, Some((Parameters, 37..39)), (ExprLambda, 6..42), "# comment 2"
{
    Node {
        kind: Parameter,
        range: 37..39,
        source: `*x`,
    }: {
        "leading": [
            SourceComment {
                text: "# comment 2",
                position: OwnLine,
                formatted: true,
            },
        ],
        "dangling": [],
        "trailing": [],
    },
}
(
    lambda
    # comment 2
    *x: x
)
```

But I didn't see a great place to put a test like this. Is there
somewhere I can assert this comment placement since it doesn't affect
any formatting yet? Or is it okay to wait until we use this in #21385?
2025-12-09 14:07:48 -05:00
Loïc Riegel 9490fbf1e1
[`pylint`] Detect subclasses of builtin exceptions (`PLW0133`) (#21382)

## Summary

Closes #17347

Goal is to detect the useless exception statement not just for builtin
exceptions but also custom (user defined) ones.

## Test Plan

I added test cases in the rule fixture and updated the insta snapshot.
Note that I first moved a test case which was at the bottom up to the
correct "violation category".
I wasn't sure if I should create new test cases or just insert inside
those tests. I know that ideally each test case should test only one
thing, but here, duplicating 12 test cases twice seemed very verbose,
and actually less maintainable in the future. The drawback is that the
diff in the snapshot is hard to review, sorry. But you can see that the
snapshot gives 38 diagnostics, which is what we expect.

Alternatively, I also created this file for manual testing.
```py
# tmp/test_error.py

class MyException(Exception):
    ...
class MyBaseException(BaseException):
    ...
class MyValueError(ValueError):
    ...
class MyExceptionCustom(Exception):
    ...
class MyBaseExceptionCustom(BaseException):
    ...
class MyValueErrorCustom(ValueError):
    ...
class MyDeprecationWarning(DeprecationWarning):
    ...
class MyDeprecationWarningCustom(MyDeprecationWarning):
    ...
class MyExceptionGroup(ExceptionGroup):
    ...
class MyExceptionGroupCustom(MyExceptionGroup):
    ...
class MyBaseExceptionGroup(ExceptionGroup):
    ...
class MyBaseExceptionGroupCustom(MyBaseExceptionGroup):
    ...


def foo():
    Exception("...")
    BaseException("...")
    ValueError("...")
    RuntimeError("...")
    DeprecationWarning("...")
    GeneratorExit("...")
    SystemExit("...")
    ExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    BaseExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    MyException("...")
    MyBaseException("...")
    MyValueError("...")
    MyExceptionCustom("...")
    MyBaseExceptionCustom("...")
    MyValueErrorCustom("...")
    MyDeprecationWarning("...")
    MyDeprecationWarningCustom("...")
    MyExceptionGroup("...")
    MyExceptionGroupCustom("...")
    MyBaseExceptionGroup("...")
    MyBaseExceptionGroupCustom("...")

```

and you can run this to check the PR:
```sh
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py310
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py314
```
2025-12-09 13:49:55 -05:00
Carl Meyer 8727a7b179
Fix stack overflow with recursive generic protocols (depth limit) (#21858)
## Summary

This fixes https://github.com/astral-sh/ty/issues/1736 where recursive
generic protocols with growing specializations caused a stack overflow.

The issue occurred with protocols like:
```python
class C[T](Protocol):
    a: 'C[set[T]]'
```

When checking `C[set[int]]` against e.g. `C[Unknown]`, member `a`
requires checking `C[set[set[int]]]`, which requires
`C[set[set[set[int]]]]`, etc. Each level has different type
specializations, so the existing cycle detection (using full types as
cache keys) didn't catch the infinite recursion.

This fix adds a simple recursion depth limit (64) to the CycleDetector.
When the depth exceeds the limit, we return the fallback value (assume
compatible) to safely terminate the recursion.

This is a bit of a blunt hammer, but it should be broadly effective at
preventing stack overflow in any nested-relation case, and it's hard to
imagine that non-recursive nested relation comparisons of depth > 64
exist much in the wild.

## Test Plan

Added mdtest.
2025-12-09 09:05:18 -08:00
Amethyst Reese 4e4d018344
New diagnostics for unused range suppressions (#21783)
Issue #3711
2025-12-09 08:30:27 -08:00
Andrew Gallant a9899af98a [ty] Use default settings in completion tests
This makes it so the test and production environments match.

Ref https://github.com/astral-sh/ruff/pull/21851#discussion_r2601579316
2025-12-09 10:42:46 -05:00
David Peter aea2bc2308
[ty] Infer type variables within generic unions (#21862)
## Summary

This PR allows our generics solver to find a solution for `T` in cases
like the following:
```py
def extract_t[T](x: P[T] | Q[T]) -> T:
    raise NotImplementedError

reveal_type(extract_t(P[int]()))  # revealed: int
reveal_type(extract_t(Q[str]()))  # revealed: str
```

closes https://github.com/astral-sh/ty/issues/1772
closes https://github.com/astral-sh/ty/issues/1314

## Ecosystem

The impact here looks very good!

It took me a long time to figure this out, but the new diagnostics on
bokeh are actually true positives. I should have tested with another
type-checker immediately, I guess. All other type checkers also emit
errors on these `__init__` calls. MRE
[here](https://play.ty.dev/5c19d260-65e2-4f70-a75e-1a25780843a2) (no
error on main, diagnostic on this branch)

A lot of false positives on home-assistant go away for calls to
functions like
[`async_listen`](180053fe98/homeassistant/core.py (L1581-L1587))
which take a `event_type: EventType[_DataT] | str` parameter. We can now
solve for `_DataT` here, which was previously falling back to its
default value, and then caused problems because it was used as an
argument to an invariant generic class.
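
A reduced sketch of that pattern (hypothetical names, not the real
home-assistant API):

```py
from collections.abc import Callable

class EventType[_DataT](str): ...

def async_listen[_DataT](
    event_type: EventType[_DataT] | str,
    listener: Callable[[_DataT], None],
) -> None: ...

def on_state_change(data: dict[str, int]) -> None: ...

event: EventType[dict[str, int]] = EventType("state_changed")
# `_DataT` can now be solved as `dict[str, int]` from the `EventType[_DataT]`
# member of the union parameter, rather than being left unsolved.
async_listen(event, on_state_change)
```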

## Test Plan

New Markdown tests
2025-12-09 16:22:59 +01:00
Dhruv Manilawala c35bf8f441
[ty] Fix overload filtering to prefer more "precise" match (#21859)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1809

I took this chance to add some debug level tracing logs for overload
call evaluation similar to Doug's implementation in `constraints.rs`.

## Test Plan

- Add new mdtests
- Tested it against `sqlalchemy.select` in pyx which results in the
correct overload being matched
2025-12-09 20:29:34 +05:30
Andrew Gallant 426125f5c0 [ty] Stabilize auto-import
While auto-import is still under development, it's far enough along now
that we think it's worth enabling by default. This should also help give
us feedback on how it behaves.

This PR adds a "completion settings" grouping similar to inlay hints. We
only have an auto-import setting there now, but I expect we'll add more
options to configure completion behavior in the future.

Closes astral-sh/ty#1765
2025-12-09 09:40:38 -05:00
Micha Reiser a0b18bc153
[ty] Fix reveal-type E2E test (#21865) 2025-12-09 14:08:22 +01:00
Micha Reiser 11901384b4
[ty] Use concise message for LSP clients not supporting related diagnostic information (#21850) 2025-12-09 13:18:30 +01:00
Micha Reiser dc2f0a86fd
Include more details in Tokens 'offset is inside token' panic message (#21860) 2025-12-09 11:12:35 +01:00
Amethyst Reese 4e67a219bb
apply range suppressions to filter diagnostics (#21623)
Builds on range suppressions from
https://github.com/astral-sh/ruff/pull/21441

Filters diagnostics based on parsed valid range suppressions.

Issue: #3711
2025-12-08 16:11:59 -08:00
Aria Desires 8ea18966cf
[ty] followup: add-import action for `reveal_type` too (#21668) 2025-12-08 22:44:17 +00:00
Rasmus Nygren e548ce1ca9 [ty] Enrich function argument auto-complete suggestions with annotated types 2025-12-08 14:19:44 -05:00
Rasmus Nygren eac8a90cc4 [ty] Add autocomplete suggestions for function arguments
This adds autocomplete suggestions for function arguments. For example,
`okay` in:

```python
def foo(okay=None):

foo(o<CURSOR>
```

This also ensures that we don't suggest a keyword argument if it has
already been used.

Closes astral-sh/issues#1550
2025-12-08 14:19:44 -05:00
Loïc Riegel 2d3466eccf
[`flake8-bugbear`] Accept immutable slice default arguments (`B008`) (#21823)
Closes issue #21565

## Summary

As pointed out in the issue, slices are currently flagged by B008 but
this behavior is incorrect because slices are immutable.

## Test Plan

Added a test case in the "B006_B008.py" fixture. Sorry for the diff in
the snapshots; the only thing that changes in those files is the line
numbers, though.

You can also test this manually with this file:
```py
# test_slice.py
def c(d=slice(0, 3)): ...
```

```sh
> target/debug/ruff check tmp/test_slice.py --no-cache --select B008
All checks passed!
```
2025-12-08 14:00:43 -05:00
Phong Do 45fb3732a4
[`pydocstyle`] Suppress `D417` for parameters with `Unpack` annotations (#21816)

## Summary

Fixes https://github.com/astral-sh/ruff/issues/8774

This PR fixes `pydocstyle` incorrectly flagging a missing-argument
violation for arguments with an `Unpack` type annotation, and extracts
the `**kwargs` `D417` suppression logic into a helper function that
future rules can reuse.

## Problem Statement

The below example was incorrectly triggering `D417` error for missing
`**kwargs` doc.

```python
class User(TypedDict):
    id: int
    name: str

def do_something(some_arg: str, **kwargs: Unpack[User]):
    """Some doc
    
    Args:
        some_arg: Some argument
    """
```

<img width="1135" height="276" alt="image"
src="https://github.com/user-attachments/assets/42fa4bb9-61a5-4a70-a79c-0c8922a3ee66"
/>

`**kwargs: Unpack[User]` indicates the function expects keyword
arguments that will be unpacked. Ideally, the individual fields of the
`User` `TypedDict` should be documented, not the `**kwargs` parameter
itself: `**kwargs` acts as a semantic grouping rather than a parameter
requiring documentation.

## Solution

As discussed in the linked issue, it makes sense to suppress `D417`
for parameters with an `Unpack` annotation. I extracted a helper
function that solely checks whether `D417` should be suppressed for a
`**kwargs: Unpack[T]` parameter; this function can be unit tested
independently and reduces the complexity of the current `missing_args`
check function. It also makes it easier to add additional rules in the
future.

_✏️ Note:_ This is my first PR in this repo and I've learned a ton from
it; please call out anything that could be improved. Thanks for making
this excellent tool 👏

## Test Plan

Add 2 test cases in `D417.py` and update snapshots.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-08 19:00:05 +00:00
Micha Reiser 0ab8521171
[ty] Remove legacy `concise_message` fallback behavior (#21847) 2025-12-08 16:19:01 +00:00
Alex Waygood 0ccd84136a
[ty] Make Python-version subdiagnostics less verbose (#21849) 2025-12-08 15:58:23 +00:00
Aria Desires 3981a23ee9
[ty] Suppress inlay hints when assigning a trivial initializer call (#21848)
## Summary

By taking a purely syntactic approach to the problem of trivial
initializer calls we can suppress `x: T = T()`, `x: T = x.y.T()` and `x:
MyNewType = MyNewType(0)` but still display `x: T[U] = T()`.

The place where we drop the ball is that this does not compose with our
analysis for suppressing `x = (0, "hello")`, so `x = (0, T())` and `x =
(T(), T())` will still get inlay hints (I don't think this is a huge
deal).
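
For illustration (hypothetical classes; the comments describe which
assignments would get a hint):

```py
class Widget: ...

class Box[T]:
    def __init__(self, value: T | None = None) -> None: ...

w = Widget()                 # trivial initializer call: no `w: Widget` hint
b = Box()                    # inferred type isn't spelled in the source: hint still shown
pair = (Widget(), Widget())  # tuple case: still gets a hint (known limitation)
```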

* fixes https://github.com/astral-sh/ty/issues/1516

## Test Plan

Existing snapshots cover this well.
2025-12-08 10:54:30 -05:00
Charlie Marsh 385dd2770b
[ty] Avoid double-inference on non-tuple argument to `Annotated` (#21837)
## Summary

If you pass a non-tuple to `Annotated`, we end up running inference on
it twice. I _think_ the only case here is `Annotated[]`, where we insert
a (fake) empty `Name` node in the slice.

Closes https://github.com/astral-sh/ty/issues/1801.
2025-12-08 10:24:05 -05:00
Alex Waygood 7519f6c27b
Print Python version and Python platform in the fuzzer output when fuzzing fails (#21844) 2025-12-08 14:35:36 +00:00
David Peter 4686111681
[ty] More SQLAlchemy test updates (#21846)
Minor updates to the SQLAlchemy test suite. I verified all expected
results using pyright.
2025-12-08 15:22:55 +01:00
Micha Reiser 4364ffbdd3
[ty] Don't create a related diagnostic for the primary annotation of sub-diagnostics (#21845) 2025-12-08 14:22:11 +00:00
Charlie Marsh b845e81c4a
Use `memchr` for computing line indexes (#21838)
## Summary

Some benchmarks with Claude's help:

| File                | Size  | Baseline             | Optimized            | Speedup |
|---------------------|-------|----------------------|----------------------|---------|
| numpy/globals.py    | 3 KB  | 1.48 µs (1.95 GiB/s) | 740 ns (3.89 GiB/s)  | 2.0x    |
| unicode/pypinyin.py | 4 KB  | 2.04 µs (2.01 GiB/s) | 1.18 µs (3.49 GiB/s) | 1.7x    |
| pydantic/types.py   | 26 KB | 13.1 µs (1.90 GiB/s) | 5.88 µs (4.23 GiB/s) | 2.2x    |
| numpy/ctypeslib.py  | 17 KB | 8.45 µs (1.92 GiB/s) | 3.94 µs (4.13 GiB/s) | 2.1x    |
| large/dataset.py    | 41 KB | 21.6 µs (1.84 GiB/s) | 11.2 µs (3.55 GiB/s) | 1.9x    |

I think that I originally thought we _had_ to iterate
character-by-character here because we needed to do the ASCII check, but
the ASCII check can be vectorized by LLVM (and the "search for newlines"
can be done with `memchr`).
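
A Python analogue of the idea, for illustration only (the actual change
is in Rust and uses the `memchr` crate): search directly for the newline
byte and record where each line starts, instead of inspecting every
character.

```py
def line_starts(text: str) -> list[int]:
    data = text.encode("utf-8")
    starts = [0]  # the first line always starts at offset 0
    pos = data.find(b"\n")
    while pos != -1:
        starts.append(pos + 1)  # the next line begins right after the newline
        pos = data.find(b"\n", pos + 1)
    return starts

assert line_starts("a\nbb\nccc\n") == [0, 2, 5, 9]
```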
2025-12-08 08:50:51 -05:00
David Peter c99e10eedc
[ty] Increase SQLAlchemy test coverage (#21843)
## Summary

Increase our SQLAlchemy test coverage to make sure we understand
`Session.scalar`, `Session.scalars`, `Session.execute` (and their async
equivalents), as well as `Result.tuples`, `Result.one_or_none`,
`Row._tuple`.
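
A small sketch of the kind of code these tests are meant to cover
(assumes SQLAlchemy 2.x declarative models; illustrative, not the actual
test suite):

```py
from sqlalchemy import select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase): ...

class User(Base):
    __tablename__ = "user"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

def user_names(session: Session) -> list[str]:
    # `Session.scalars` should be understood as returning `ScalarResult[str]`
    # here, so the list below is `list[str]`.
    return [name for name in session.scalars(select(User.name))]
```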
2025-12-08 14:36:13 +01:00
Dhruv Manilawala a364195335
[ty] Avoid diagnostic when `typing_extensions.ParamSpec` uses `default` parameter (#21839)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1798

## Test Plan

Add mdtest.
2025-12-08 12:34:30 +00:00
David Peter dfd6ed0524
[ty] mdtests with external dependencies (#20904)
## Summary

This PR adds the possibility to write mdtests that specify external
dependencies in a `project` section of TOML blocks. For example, here is
a test that makes sure that we understand Pydantic's dataclass-transform
setup:

````markdown
```toml
[environment]
python-version = "3.12"
python-platform = "linux"

[project]
dependencies = ["pydantic==2.12.2"]
```

```py
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

user = User(id=1, name="Alice")
reveal_type(user.id)  # revealed: int
reveal_type(user.name)  # revealed: str

# error: [missing-argument] "No argument provided for required parameter `name`"
invalid_user = User(id=2)
```
````

## How?

Using the `python-version` and the `dependencies` fields from the
Markdown section, we generate a `pyproject.toml` file, write it to a
temporary directory, and use `uv sync` to install the dependencies into
a virtual environment. We then copy the Python source files from that
venv's `site-packages` folder to a corresponding directory structure in
the in-memory filesystem. Finally, we configure the search paths
accordingly, and run the mdtest as usual.

I fully understand that there are valid concerns here:
* Doesn't this require network access? (yes, it does)
* Is this fast enough? (`uv` caching makes this almost unnoticeable,
actually)
* Is this deterministic? ~~(probably not, package resolution can depend
on the platform you're on)~~ (yes, hopefully)

For this reason, this first version is opt-in, locally. ~~We don't even
run these tests in CI (even though they worked fine in a previous
iteration of this PR).~~ You need to set `MDTEST_EXTERNAL=1`, or use the
new `-e/--enable-external` command line option of the `mdtest.py`
runner. For example:
```bash
# Skip mdtests with external dependencies (default):
uv run crates/ty_python_semantic/mdtest.py

# Run all mdtests, including those with external dependencies:
uv run crates/ty_python_semantic/mdtest.py -e

# Only run the `pydantic` tests. Use `-e` to make sure it is not skipped:
uv run crates/ty_python_semantic/mdtest.py -e pydantic
```

## Why?

I believe that this can be a useful addition to our testing strategy,
which lies somewhere between ecosystem tests and normal mdtests.
Ecosystem tests cover much more code, but they have the disadvantage
that we only see second- or third-order effects via diagnostic diffs. If
we unexpectedly gain or lose type coverage somewhere, we might not even
notice (assuming the gradual guarantee holds, and ecosystem code is
mostly correct). Another disadvantage of ecosystem checks is that they
only test checked-in code that is usually correct. However, we also want
to test what happens on wrong code, like the code that is momentarily
written in an editor, before fixing it. On the other end of the spectrum
we have normal mdtests, which have the disadvantage that they do not
reflect the reality of complex real-world code. We experience this
whenever we're surprised by an ecosystem report on a PR.

That said, these tests should not be seen as a replacement for either of
these things. For example, we should still strive to write detailed
self-contained mdtests for user-reported issues. But we might use this
new layer for regression tests, or simply as a debugging tool. It can
also serve as a tool to document our support for popular third-party
libraries.

## Test Plan

* I've been locally using this for a couple of weeks now.
* `uv run crates/ty_python_semantic/mdtest.py -e`
2025-12-08 11:44:20 +01:00
Dhruv Manilawala ac882f7e63
[ty] Handle various invalid explicit specializations for `ParamSpec` (#21821)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1788

## Test Plan

Add new mdtests.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-08 05:20:41 +00:00
Alex Waygood 857fd4f683
[ty] Add test case for fixed panic (#21832) 2025-12-07 15:58:11 +00:00
Charlie Marsh 285d6410d3
[ty] Avoid double-analyzing tuple in `Final` subscript (#21828)
## Summary

As-is, a single-element tuple gets destructured via:

```rust
let arguments = if let ast::Expr::Tuple(tuple) = slice {
    &*tuple.elts
} else {
    std::slice::from_ref(slice)
};
```

But then, because it's a single element, we call
`infer_annotation_expression_impl`, passing in the tuple, rather than
the first element.

Closes https://github.com/astral-sh/ty/issues/1793.
Closes https://github.com/astral-sh/ty/issues/1768.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-07 14:27:14 +00:00
Prakhar Pratyush cbff09b9af
[flake8-bandit] Fix false positive when using non-standard `CSafeLoader` path (S506). (#21830) 2025-12-07 11:40:46 +01:00
Louis Maddox 6e0e49eda8
Add minimal-size build profile (#21826)
This PR adds the same `minimal-size` profile as `uv` repo workspace has

```toml
# Profile to build a minimally sized binary for uv-build
[profile.minimal-size]
inherits = "release"
opt-level = "z"
# This will still show a panic message, we only skip the unwind
panic = "abort"
codegen-units = 1
```
but removes its `panic = "abort"` setting

- As discussed in #21825

Compared to the ones pre-built via `uv tool install`, this builds 35%
smaller ruff and 24% smaller ty binaries
(as measured
[here](https://github.com/lmmx/just-pre-commit/blob/master/refresh_binaries.sh))
2025-12-06 13:19:04 -05:00
Charlie Marsh ef45c97dab
[ty] Allow `tuple[Any, ...]` to assign to `tuple[int, *tuple[int, ...]]` (#21803)
## Summary

Closes https://github.com/astral-sh/ty/issues/1750.
2025-12-05 19:04:23 +00:00
Micha Reiser 9714c589e1
[ty] Support renaming import aliases (#21792) 2025-12-05 19:12:13 +01:00
Micha Reiser b2fb421ddd
[ty] Add redeclaration LSP tests (#21812) 2025-12-05 18:02:34 +00:00
Shunsuke Shibayama 2f05ffa2c8
[ty] more detailed description of "Size limit on unions of literals" in mdtest (#21804) 2025-12-05 17:34:39 +00:00
Dhruv Manilawala b623189560
[ty] Complete support for `ParamSpec` (#21445)
## Summary

Closes: https://github.com/astral-sh/ty/issues/157

This PR adds support for the following capabilities involving a
`ParamSpec` type variable:
- Representing `P.args` and `P.kwargs` in the type system
- Matching against a callable containing `P` to create a type mapping
- Specializing `P` against the stored parameters

The value of a `ParamSpec` type variable is being represented using
`CallableType` with a `CallableTypeKind::ParamSpecValue` variant. This
`CallableTypeKind` is expanded from the existing `is_function_like`
boolean flag. An `enum` is used as these variants are mutually
exclusive.

For context, an initial iteration attempted to expand `Specialization`
to use a `TypeOrParameters` enum representing that a type variable can
specialize into either a `Type` or `Parameters`, but that increased the
complexity of the code, as all downstream usages would need to handle
both variants appropriately. Additionally, we'd also have needed to
establish an invariant that a regular type variable always maps to a
`Type` while a ParamSpec type variable always maps to a `Parameters`.

I've intentionally left out checking and raising diagnostics when the
`ParamSpec` type variable and its components are not used correctly, to
avoid increasing the scope; it can easily be done as a follow-up. This
would also include the scoping rules, which I don't think regular type
variables implement either.
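
As a minimal end-to-end illustration of these capabilities (the revealed
type in the comment is the expected shape, not a verbatim snapshot):

```py
from typing import Callable, ParamSpec, reveal_type

P = ParamSpec("P")

def log_calls(func: Callable[P, int]) -> Callable[P, int]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> int:
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a: int, b: int) -> int:
    return a + b

reveal_type(add)  # (a: int, b: int) -> int
```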

## Test Plan

Add new mdtest cases and update existing test cases.

Ran this branch on pyx, no new diagnostics.

### Ecosystem analysis

There's a case involving an annotated assignment like:
```py
type CustomType[P] = Callable[...]

def value[**P](...): ...

def another[**P](...):
	target: CustomType[P] = value
```
The type of `value` is a callable with a ParamSpec bound to `value`;
`CustomType` is a type alias that's a callable, and the `P` used in its
specialization is bound to `another`. Now, ty infers the type of
`target` to be the same as `value` and does not use the declared type
`CustomType[P]`. [This is the
assignment](0980b9d9ab/src/async_utils/gen_transform.py (L108))
that I'm referring to, which then leads to errors in downstream usage.
Pyright and mypy do seem to use the declared type.

There are multiple diagnostics in `dd-trace-py` that require support
for `cls`.

I'm seeing a `Divergent` type for an example like the following, which
~~I'm not sure why, I'll look into it tomorrow~~ is because of a cycle
as mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974:
```py
from typing import Callable

def decorator[**P](c: Callable[P, int]) -> Callable[P, str]: ...

@decorator
def func(a: int) -> int: ...

# ((a: int) -> str) | ((a: Divergent) -> str)
reveal_type(func)
```

I ~~need to look into why are the parameters not being specialized
through multiple decorators in the following code~~ think this is also
because of the cycle mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974 and
the fact that we don't support `staticmethod` properly:
```py
from contextlib import contextmanager

class Foo:
    @staticmethod
    @contextmanager
    def method(x: int):
        yield

foo = Foo()
# ty: Revealed type: `() -> _GeneratorContextManager[Unknown, None, None]` [revealed-type]
reveal_type(foo.method)
```

There's some issue related to `Protocol`s that are generic over a
`ParamSpec` in `starlette`, which might be related to
https://github.com/astral-sh/ty/issues/1635, but I'm not sure. Here's a
minimal example to reproduce it:

<details><summary>Code snippet:</summary>
<p>

```py
from collections.abc import Awaitable, Callable, MutableMapping
from typing import Any, Callable, ParamSpec, Protocol

P = ParamSpec("P")

Scope = MutableMapping[str, Any]
Message = MutableMapping[str, Any]
Receive = Callable[[], Awaitable[Message]]
Send = Callable[[Message], Awaitable[None]]

ASGIApp = Callable[[Scope, Receive, Send], Awaitable[None]]

_Scope = Any
_Receive = Callable[[], Awaitable[Any]]
_Send = Callable[[Any], Awaitable[None]]

# Since `starlette.types.ASGIApp` type differs from `ASGIApplication` from `asgiref`
# we need to define a more permissive version of ASGIApp that doesn't cause type errors.
_ASGIApp = Callable[[_Scope, _Receive, _Send], Awaitable[None]]


class _MiddlewareFactory(Protocol[P]):
    def __call__(
        self, app: _ASGIApp, *args: P.args, **kwargs: P.kwargs
    ) -> _ASGIApp: ...


class Middleware:
    def __init__(
        self, factory: _MiddlewareFactory[P], *args: P.args, **kwargs: P.kwargs
    ) -> None:
        self.factory = factory
        self.args = args
        self.kwargs = kwargs


class ServerErrorMiddleware:
    def __init__(
        self,
        app: ASGIApp,
        value: int | None = None,
        flag: bool = False,
    ) -> None:
        self.app = app
        self.value = value
        self.flag = flag

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ...


# ty: Argument to bound method `__init__` is incorrect: Expected `_MiddlewareFactory[(...)]`, found `<class 'ServerErrorMiddleware'>` [invalid-argument-type]
Middleware(ServerErrorMiddleware, value=500, flag=True)
```

</p>
</details> 

### Conformance analysis

> ```diff
> -constructors_callable.py:36:13: info[revealed-type] Revealed type:
`(...) -> Unknown`
> +constructors_callable.py:36:13: info[revealed-type] Revealed type:
`(x: int) -> Unknown`
> ```

Requires return type inference i.e.,
https://github.com/astral-sh/ruff/pull/21551

> ```diff
> +constructors_callable.py:194:16: error[invalid-argument-type]
Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown
| str]`
> +constructors_callable.py:194:22: error[invalid-argument-type]
Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown
| str]`
> +constructors_callable.py:195:4: error[invalid-argument-type] Argument
is incorrect: Expected `list[T@__init__]`, found `list[Unknown | int]`
> +constructors_callable.py:195:9: error[invalid-argument-type] Argument
is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> ```

I might need to look into why this is happening...

> ```diff
> +generics_defaults.py:79:1: error[type-assertion-failure] Type
`type[Class_ParamSpec[(str, int, /)]]` does not match asserted type
`<class 'Class_ParamSpec'>`
> ```

which is on the following code
```py
DefaultP = ParamSpec("DefaultP", default=[str, int])

class Class_ParamSpec(Generic[DefaultP]): ...

assert_type(Class_ParamSpec, type[Class_ParamSpec[str, int]])
```

It's occurring because there's no equivalence relationship defined
between `ClassLiteral` and `KnownInstanceType::TypeGenericAlias` which
is what these types are.

Everything else looks good to me!
2025-12-05 22:00:06 +05:30
Micha Reiser f29436ca9e
[ty] Update benchmark dependencies (#21815) 2025-12-05 17:23:18 +01:00
Douglas Creager e42cdf8495
[ty] Carry generic context through when converting class into `Callable` (#21798)
When converting a class (whether specialized or not) into a `Callable`
type, we should carry through any generic context that the constructor
has. This includes both the generic context of the class itself (if it's
generic) and of the constructor methods (if they are separately
generic).

To help test this, this also updates the `generic_context` extension
function to work on `Callable` types and unions; and adds a new
`into_callable` extension function that works just like
`CallableTypeOf`, but on value forms instead of type forms.

Pulled this out of #21551 for separate review.
2025-12-05 08:57:21 -05:00
Alex Waygood 71a7a03ad4
[ty] Add more tests for renamings (#21810) 2025-12-05 12:41:31 +00:00
Alex Waygood 48f7f42784
[ty] Minor improvements to `assert_type` diagnostics (#21811) 2025-12-05 12:33:30 +00:00
Micha Reiser 3deb7e1b90
[ty] Add some attribute/method renaming test cases (#21809) 2025-12-05 11:56:28 +01:00
mahiro 5df8a959f5
Update mkdocs-material to 9.7.0 (Insiders now free) (#21797) 2025-12-05 08:53:08 +01:00
Dhruv Manilawala 6f03afe318
Remove unused whitespaces in test cases (#21806)
These aren't used in the tests themselves. There are more instances of
them in other files, but those require code changes, so I've left them
as is.
2025-12-05 12:51:40 +05:30
Shunsuke Shibayama 1951f1bbb8
[ty] fix panic when instantiating a type variable with invalid constraints (#21663) 2025-12-04 18:48:38 -08:00
Shunsuke Shibayama 10de342991
[ty] fix build failure caused by conflicts between #21683 and #21800 (#21802) 2025-12-04 18:20:24 -08:00
Shunsuke Shibayama 3511b7a06b
[ty] do nothing with `store_expression_type` if `inner_expression_inference_state` is `Get` (#21718)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1688

## Test Plan

N/A
2025-12-04 18:05:41 -08:00
Shunsuke Shibayama f3e5713d90
[ty] increase the limit on the number of elements in a non-recursively defined literal union (#21683)
## Summary

Closes https://github.com/astral-sh/ty/issues/957

As explained in https://github.com/astral-sh/ty/issues/957, literal
union types for recursively defined values can be widened early to
speed up the convergence of fixed-point iterations.
This PR achieves this by embedding a marker in `UnionType` that
distinguishes whether a value is recursively defined.

This also allows us to identify values that are not recursively
defined, so I've increased the limit on the number of elements in a
literal union type for such values.

Edit: while this PR doesn't provide the significant performance
improvement initially hoped for, it does have the benefit of allowing
the number of elements in a literal union to be raised above the salsa
limit, and indeed mypy_primer results revealed that a literal union of
220 elements was actually being used.
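
A tiny example of a "recursively defined" value in this sense: the
literal union inferred for `x` grows on each fixed-point iteration until
it is widened.

```py
def count(items: list[str]) -> int:
    x = 0
    for _ in items:
        # Without early widening, inference would keep accumulating
        # `Literal[0] | Literal[1] | ...` for `x`; widening to `int` lets the
        # fixed-point iteration converge quickly.
        x = x + 1
    return x
```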

## Test Plan

`call/union.md` has been updated
2025-12-04 18:01:48 -08:00
Carl Meyer a9de6b5c3e
[ty] normalize typevar bounds/constraints in cycles (#21800)
Fixes https://github.com/astral-sh/ty/issues/1587

## Summary

Perform cycle normalization on typevar bounds and constraints (similar
to how it was already done for typevar defaults) in order to ensure
convergence in cyclic cases.

There might be another fix here that could avoid the cycle in many more
cases, where we don't eagerly evaluate typevar bounds/constraints on
explicit specialization, but just accept the given specialization and
later evaluate to see whether we need to emit a diagnostic on it. But
the current fix here is sufficient to solve the problem and matches the
patterns we use to ensure cycle convergence elsewhere, so it seems good
for now; left a TODO for the other idea.

This fix is sufficient to make us not panic, but not sufficient to get
the semantics fully correct; see the TODOs in the tests. I have ideas
for fixing that as well, but it seems worth at least getting this in to
fix the panic.

## Test Plan

Test that previously panicked now does not.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-04 15:17:57 -08:00
Andrew Gallant 06415b1877 [ty] Update completion eval to include modules
Our parsing and confirming of symbol names is highly suspect, but
I think it's fine for now.
2025-12-04 17:37:37 -05:00
Andrew Gallant 518d11b33f [ty] Add modules to auto-import
This makes auto-import include modules in suggestions.

In this initial implementation, we permit this to include submodules as
well. This is in contrast to what we do in `import ...` completions.
It's easy to change this behavior, but I think it'd be interesting to
run with this for now to see how well it works.
2025-12-04 17:37:37 -05:00
Andrew Gallant da94b99248 [ty] Add support for module-only import requests
The existing importer functionality always required
an import request with a module and a member in that
module. But we want to be able to insert import statements
for a module itself and not any members in the module.

This is basically changing `member: &str` to an
`Option<&str>` and fixing the fallout in a way that
makes sense for module-only imports.
2025-12-04 17:37:37 -05:00
Andrew Gallant 3c2cf49f60 [ty] Refactor auto-import symbol info
This just encapsulates the representation so that
we can make changes to it more easily.
2025-12-04 17:37:37 -05:00
Andrew Gallant fdcb5a7e73 [ty] Clarify the use of `SymbolKind` in auto-import 2025-12-04 13:21:26 -05:00
Andrew Gallant 6a025d1925 [ty] Redact ranking of completions from e2e LSP tests
I think changes to this value are generally noise. It's hard to tell
what it means and it isn't especially actionable. We already have an
eval running in CI for completion ranking, so I don't think it's
terribly important to care about ranking here in e2e tests _generally_.
2025-12-04 13:21:26 -05:00
Andrew Gallant f054e7edf8 [ty] Tweaks tests to use clearer language
A completion lacking a module reference doesn't necessarily mean that
the symbol is defined within the current module. I believe the intent
here is that it means that no import is required to use it.
2025-12-04 13:21:26 -05:00
Andrew Gallant e154efa229 [ty] Update evaluation results
These are all improvements here with one slight regression on
`reveal_type` ranking. The previous completions offered were:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: _sqlite3)
SupportsGetItemViewable (module: _typeshed)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 8/10)
_remove_original_values (module: _osx_support)
_remove_universal_flags (module: _osx_support)
-----
found 10 completions
```

And now they are:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3.dbapi2)
removeHandler (module: unittest)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 9/9)
-----
found 9 completions
```

Some completions were removed (because they are now considered
unexported) and some were added (likely due to better re-export
support).

This particular case probably warrants more special attention anyway.
So I think this is fine. (It's only a one-ranking regression.)
2025-12-04 13:21:26 -05:00
Andrew Gallant 32f400a457 [ty] Make auto-import ignore symbols in modules starting with a `_`
This applies recursively. So if *any* component of a module name starts
with a `_`, then symbols from that module are excluded from auto-import.

The exception is when it's a module within first party code. Then we
want to include it in auto-import.
2025-12-04 13:21:26 -05:00
Andrew Gallant 2a38395bc8 [ty] Add some tests for re-exports and `__all__` to completions
Note that the `Deprecated` symbols from `importlib.metadata` are no
longer offered because 1) `importlib.metadata` defined `__all__` and 2)
the `Deprecated` symbols aren't in it. These seem to not be a part of
its public API according to the docs, so this seems right to me.
2025-12-04 13:21:26 -05:00
Andrew Gallant 8c72b296c9 [ty] Add support for re-exports and `__all__` to auto-import
This commit (mostly) re-implements the support for `__all__` in
ty-proper, but inside the auto-import AST scanner.

When `__all__` isn't present in a module, we fall back to conventions to
determine whether a symbol is exported or not:
https://docs.python.org/3/library/index.html

However, in keeping with current practice for non-auto-import
completions, we continue to provide sunder and dunder names as
re-exports.

When `__all__` is present, we respect it strictly. That is, a symbol is
exported *if and only if* it's in `__all__`. This is somewhat stricter
than pylance seemingly is. I felt like it was a good idea to start here,
and we can relax it based on user demand (perhaps through a setting).
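
For example, given a third-party module like the following
(hypothetical), only `public_helper` would be offered by auto-import:

```py
# somepackage/helpers.py (hypothetical third-party module)
__all__ = ["public_helper"]

def public_helper() -> None: ...

def undeclared_helper() -> None: ...  # defined, but not in `__all__`: not offered

def _private_helper() -> None: ...  # would be excluded by convention anyway
```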
2025-12-04 13:21:26 -05:00
Andrew Gallant 086f1e0b89 [ty] Skip over expressions in auto-import AST scanning 2025-12-04 13:21:26 -05:00
Andrew Gallant 5da45f8ec7 [ty] Simplify auto-import AST visitor slightly and add tests
This simplifies the existing visitor by DRYing it up slightly.
We also add tests for the existing functionality. In particular,
we want to add support for re-export conventions, and that
warrants more careful testing.
2025-12-04 13:21:26 -05:00
Andrew Gallant 62f20b1e86 [ty] Re-arrange imports in symbol extraction
I like using a qualified `ast::` prefix for things from
`ruff_python_ast`, so switch over to that convention.
2025-12-04 13:21:26 -05:00
Aria Desires cccb0bbaa4
[ty] Add tests for implicit submodule references (#21793)
## Summary

I realized we don't really test `DefinitionKind::ImportFromSubmodule` in
the IDE at all, so here's a bunch of them, just recording our current
behaviour.

## Test Plan

*stares at the camera*
2025-12-04 15:46:23 +00:00
484 changed files with 40810 additions and 16991 deletions

View File

@ -75,14 +75,6 @@
matchManagers: ["cargo"],
enabled: false,
},
{
// `mkdocs-material` requires a manual update to keep the version in sync
// with `mkdocs-material-insider`.
// See: https://squidfunk.github.io/mkdocs-material/insiders/upgrade/
matchManagers: ["pip_requirements"],
matchPackageNames: ["mkdocs-material"],
enabled: false,
},
{
groupName: "pre-commit dependencies",
matchManagers: ["pre-commit"],

View File

@ -24,6 +24,8 @@ env:
PACKAGE_NAME: ruff
PYTHON_VERSION: "3.14"
NEXTEST_PROFILE: ci
# Enable mdtests that require external dependencies
MDTEST_EXTERNAL: "1"
jobs:
determine_changes:
@ -296,7 +298,7 @@ jobs:
# sync, not just public items. Eventually we should do this for all
# crates; for now add crates here as they are warning-clean to prevent
# regression.
- run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db --document-private-items
- run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db -p ruff_python_formatter --document-private-items
env:
# Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025).
RUSTDOCFLAGS: "-D warnings"
@ -779,8 +781,6 @@ jobs:
name: "mkdocs"
runs-on: ubuntu-latest
timeout-minutes: 10
env:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
@ -788,11 +788,6 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"
run: rustup show
- name: Install uv
@ -800,11 +795,7 @@ jobs:
with:
python-version: 3.13
activate-environment: true
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: uv pip install -r docs/requirements-insiders.txt
- name: "Install dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: uv pip install -r docs/requirements.txt
- name: "Update README File"
run: python scripts/transform_readme.py --target mkdocs
@ -812,12 +803,8 @@ jobs:
run: python scripts/generate_mkdocs.py
- name: "Check docs formatting"
run: python scripts/check_docs_formatted.py
- name: "Build Insiders docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.public.yml
run: mkdocs build --strict -f mkdocs.yml
check-formatter-instability-and-black-similarity:
name: "formatter instabilities and black similarity"

View File

@ -47,6 +47,7 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
shared-key: "mypy-primer"
workspaces: "ruff"
- name: Install Rust toolchain
@ -86,6 +87,7 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
workspaces: "ruff"
shared-key: "mypy-primer"
- name: Install Rust toolchain
run: rustup show
@ -105,3 +107,54 @@ jobs:
with:
name: mypy_primer_memory_diff
path: mypy_primer_memory.diff
# Runs mypy_primer twice against the same ty revision to catch any non-deterministic behavior (ideally).
# The job is disabled for now because there are some non-deterministic diagnostics.
mypy_primer_same_revision:
name: Run mypy_primer on same revision
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-32' || 'ubuntu-latest' }}
timeout-minutes: 20
# TODO: Enable once we fixed the non-deterministic diagnostics
if: false
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: ruff
fetch-depth: 0
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
workspaces: "ruff"
shared-key: "mypy-primer"
- name: Install Rust toolchain
run: rustup show
- name: Run determinism check
env:
BASE_REVISION: ${{ github.event.pull_request.head.sha }}
PRIMER_SELECTOR: crates/ty_python_semantic/resources/primer/good.txt
CLICOLOR_FORCE: "1"
DIFF_FILE: mypy_primer_determinism.diff
run: |
cd ruff
scripts/mypy_primer.sh
- name: Check for non-determinism
run: |
# Remove ANSI color codes for checking
sed -e 's/\x1b\[[0-9;]*m//g' mypy_primer_determinism.diff > mypy_primer_determinism_clean.diff
# Check if there are any differences (non-determinism)
if [ -s mypy_primer_determinism_clean.diff ]; then
echo "ERROR: Non-deterministic output detected!"
echo "The following differences were found when running ty twice on the same commit:"
cat mypy_primer_determinism_clean.diff
exit 1
else
echo "✓ Output is deterministic"
fi

View File

@ -20,8 +20,6 @@ on:
jobs:
mkdocs:
runs-on: ubuntu-latest
env:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
@ -59,23 +57,12 @@ jobs:
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV"
echo "timestamp=$timestamp" >> "$GITHUB_ENV"
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: pip install -r docs/requirements-insiders.txt
- name: "Install dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: pip install -r docs/requirements.txt
- name: "Copy README File"
@ -83,13 +70,8 @@ jobs:
python scripts/transform_readme.py --target mkdocs
python scripts/generate_mkdocs.py
- name: "Build Insiders docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.public.yml
run: mkdocs build --strict -f mkdocs.yml
- name: "Clone docs repo"
run: git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs

View File

@ -60,7 +60,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@ -123,7 +123,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@ -174,7 +174,7 @@ jobs:
outputs:
val: ${{ steps.host.outputs.manifest }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive
@ -250,7 +250,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: false
submodules: recursive

View File

@ -1,5 +1,47 @@
# Changelog
## 0.14.9
Released on 2025-12-11.
### Preview features
- \[`ruff`\] New `RUF100` diagnostics for unused range suppressions ([#21783](https://github.com/astral-sh/ruff/pull/21783))
- \[`pylint`\] Detect subclasses of builtin exceptions (`PLW0133`) ([#21382](https://github.com/astral-sh/ruff/pull/21382))
### Bug fixes
- Fix comment placement in lambda parameters ([#21868](https://github.com/astral-sh/ruff/pull/21868))
- Skip over trivia tokens after re-lexing ([#21895](https://github.com/astral-sh/ruff/pull/21895))
- \[`flake8-bandit`\] Fix false positive when using non-standard `CSafeLoader` path (S506). ([#21830](https://github.com/astral-sh/ruff/pull/21830))
- \[`flake8-bugbear`\] Accept immutable slice default arguments (`B008`) ([#21823](https://github.com/astral-sh/ruff/pull/21823))
### Rule changes
- \[`pydocstyle`\] Suppress `D417` for parameters with `Unpack` annotations ([#21816](https://github.com/astral-sh/ruff/pull/21816))
### Performance
- Use `memchr` for computing line indexes ([#21838](https://github.com/astral-sh/ruff/pull/21838))
### Documentation
- Document `*.pyw` is included by default in preview ([#21885](https://github.com/astral-sh/ruff/pull/21885))
- Document range suppressions, reorganize suppression docs ([#21884](https://github.com/astral-sh/ruff/pull/21884))
- Update mkdocs-material to 9.7.0 (Insiders now free) ([#21797](https://github.com/astral-sh/ruff/pull/21797))
### Contributors
- [@Avasam](https://github.com/Avasam)
- [@MichaReiser](https://github.com/MichaReiser)
- [@charliermarsh](https://github.com/charliermarsh)
- [@amyreese](https://github.com/amyreese)
- [@phongddo](https://github.com/phongddo)
- [@prakhar1144](https://github.com/prakhar1144)
- [@mahiro72](https://github.com/mahiro72)
- [@ntBre](https://github.com/ntBre)
- [@LoicRiegel](https://github.com/LoicRiegel)
## 0.14.8
Released on 2025-12-04.

View File

@ -331,13 +331,6 @@ you addressed them.
## MkDocs
> [!NOTE]
>
> The documentation uses Material for MkDocs Insiders, which is closed-source software.
> This means only members of the Astral organization can preview the documentation exactly as it
> will appear in production.
> Outside contributors can still preview the documentation, but there will be some differences. Consult [the Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/insiders/benefits/#features) for which features are exclusively available in the insiders version.
To preview any changes to the documentation locally:
1. Install the [Rust toolchain](https://www.rust-lang.org/tools/install).
@ -351,11 +344,7 @@ To preview any changes to the documentation locally:
1. Run the development server with:
```shell
# For contributors.
uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.public.yml
# For members of the Astral org, which has access to MkDocs Insiders via sponsorship.
uvx --with-requirements docs/requirements-insiders.txt -- mkdocs serve -f mkdocs.insiders.yml
uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.yml
```
The documentation should then be available locally at

97
Cargo.lock generated
View File

@ -254,6 +254,21 @@ dependencies = [
"syn",
]
[[package]]
name = "bit-set"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7"
[[package]]
name = "bitflags"
version = "1.3.2"
@ -944,6 +959,18 @@ dependencies = [
"parking_lot_core",
]
[[package]]
name = "datatest-stable"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a867d7322eb69cf3a68a5426387a25b45cb3b9c5ee41023ee6cea92e2afadd82"
dependencies = [
"camino",
"fancy-regex",
"libtest-mimic 0.8.1",
"walkdir",
]
[[package]]
name = "derive-where"
version = "1.6.0"
@ -1016,7 +1043,7 @@ dependencies = [
"libc",
"option-ext",
"redox_users",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@ -1108,7 +1135,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@ -1138,6 +1165,17 @@ dependencies = [
"windows-sys 0.61.0",
]
[[package]]
name = "fancy-regex"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
dependencies = [
"bit-set",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "fastrand"
version = "2.3.0"
@ -1238,9 +1276,9 @@ dependencies = [
[[package]]
name = "get-size-derive2"
version = "0.7.2"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff47daa61505c85af126e9dd64af6a342a33dc0cccfe1be74ceadc7d352e6efd"
checksum = "ab21d7bd2c625f2064f04ce54bcb88bc57c45724cde45cba326d784e22d3f71a"
dependencies = [
"attribute-derive",
"quote",
@ -1249,14 +1287,15 @@ dependencies = [
[[package]]
name = "get-size2"
version = "0.7.2"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac7bb8710e1f09672102be7ddf39f764d8440ae74a9f4e30aaa4820dcdffa4af"
checksum = "879272b0de109e2b67b39fcfe3d25fdbba96ac07e44a254f5a0b4d7ff55340cb"
dependencies = [
"compact_str",
"get-size-derive2",
"hashbrown 0.16.1",
"indexmap",
"ordermap",
"smallvec",
]
@ -1624,7 +1663,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46fdb647ebde000f43b5b53f773c30cf9b0cb4300453208713fa38b2c70935a0"
dependencies = [
"console 0.15.11",
"globset",
"once_cell",
"pest",
"pest_derive",
@ -1632,7 +1670,6 @@ dependencies = [
"ron",
"serde",
"similar",
"walkdir",
]
[[package]]
@ -1763,7 +1800,7 @@ dependencies = [
"portable-atomic",
"portable-atomic-util",
"serde_core",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@ -1918,6 +1955,18 @@ dependencies = [
"threadpool",
]
[[package]]
name = "libtest-mimic"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5297962ef19edda4ce33aaa484386e0a5b3d7f2f4e037cbeee00503ef6b29d33"
dependencies = [
"anstream",
"anstyle",
"clap",
"escape8259",
]
[[package]]
name = "linux-raw-sys"
version = "0.11.0"
@ -2233,9 +2282,9 @@ checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
[[package]]
name = "ordermap"
version = "0.5.12"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b100f7dd605611822d30e182214d3c02fdefce2d801d23993f6b6ba6ca1392af"
checksum = "ed637741ced8fb240855d22a2b4f208dab7a06bcce73380162e5253000c16758"
dependencies = [
"indexmap",
"serde",
@ -2859,7 +2908,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.14.8"
version = "0.14.9"
dependencies = [
"anyhow",
"argfile",
@ -3117,7 +3166,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.14.8"
version = "0.14.9"
dependencies = [
"aho-corasick",
"anyhow",
@ -3277,6 +3326,7 @@ dependencies = [
"anyhow",
"clap",
"countme",
"datatest-stable",
"insta",
"itertools 0.14.0",
"memchr",
@ -3346,8 +3396,10 @@ dependencies = [
"bitflags 2.10.0",
"bstr",
"compact_str",
"datatest-stable",
"get-size2",
"insta",
"itertools 0.14.0",
"memchr",
"ruff_annotate_snippets",
"ruff_python_ast",
@ -3473,7 +3525,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.14.8"
version = "0.14.9"
dependencies = [
"console_error_panic_hook",
"console_log",
@ -3571,7 +3623,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@ -3589,7 +3641,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [
"boxcar",
"compact_str",
@ -3600,6 +3652,7 @@ dependencies = [
"indexmap",
"intrusive-collections",
"inventory",
"ordermap",
"parking_lot",
"portable-atomic",
"rustc-hash",
@ -3613,12 +3666,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
[[package]]
name = "salsa-macros"
version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0#59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [
"proc-macro2",
"quote",
@ -3972,7 +4025,7 @@ dependencies = [
"getrandom 0.3.4",
"once_cell",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]
@ -4308,7 +4361,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fe242ee9e646acec9ab73a5c540e8543ed1b107f0ce42be831e0775d423c396"
dependencies = [
"ignore",
"libtest-mimic",
"libtest-mimic 0.7.3",
"snapbox",
]
@ -4337,6 +4390,7 @@ dependencies = [
"ruff_python_trivia",
"salsa",
"tempfile",
"tikv-jemallocator",
"toml",
"tracing",
"tracing-flame",
@ -4557,6 +4611,7 @@ dependencies = [
"anyhow",
"camino",
"colored 3.0.0",
"dunce",
"insta",
"memchr",
"path-slash",
@ -5025,7 +5080,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.61.0",
]
[[package]]

View File

@ -5,7 +5,7 @@ resolver = "2"
[workspace.package]
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.89"
rust-version = "1.90"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@ -81,6 +81,7 @@ compact_str = "0.9.0"
criterion = { version = "0.7.0", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
datatest-stable = { version = "0.3.3" }
dir-test = { version = "0.4.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
@ -88,7 +89,7 @@ etcetera = { version = "0.11.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
get-size2 = { version = "0.7.0", features = [
get-size2 = { version = "0.7.3", features = [
"derive",
"smallvec",
"hashbrown",
@ -129,7 +130,7 @@ memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
notify = { version = "8.0.0" }
ordermap = { version = "0.5.0" }
ordermap = { version = "1.0.0" }
path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" }
pathdiff = { version = "0.2.1" }
@ -146,7 +147,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "59aa1075e837f5deb0d6ffb24b68fedc0f4bc5e0", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "55e5e7d32fa3fc189276f35bb04c9438f9aedbd1", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",
@ -272,6 +273,12 @@ large_stack_arrays = "allow"
lto = "fat"
codegen-units = 16
# Profile to build a minimally sized binary for ruff/ty
[profile.minimal-size]
inherits = "release"
opt-level = "z"
codegen-units = 1
# Some crates don't change as much but benefit more from
# more expensive optimization passes, so we selectively
# decrease codegen-units in some cases.

View File

@ -57,8 +57,11 @@ Ruff is extremely actively developed and used in major open-source projects like
...and [many more](#whos-using-ruff).
Ruff is backed by [Astral](https://astral.sh). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff),
or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
Ruff is backed by [Astral](https://astral.sh), the creators of
[uv](https://github.com/astral-sh/uv) and [ty](https://github.com/astral-sh/ty).
Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff), or the
original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).
## Testimonials
@ -147,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.8/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.8/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.9/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.9/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@ -181,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.8
rev: v0.14.9
hooks:
# Run the linter.
- id: ruff-check

View File

@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.8"
version = "0.14.9"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@ -10,7 +10,7 @@ use anyhow::bail;
use clap::builder::Styles;
use clap::builder::styling::{AnsiColor, Effects};
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{Parser, Subcommand, command};
use clap::{Parser, Subcommand};
use colored::Colorize;
use itertools::Itertools;
use path_absolutize::path_dedot;

View File

@ -9,7 +9,7 @@ use std::sync::mpsc::channel;
use anyhow::Result;
use clap::CommandFactory;
use colored::Colorize;
use log::{error, warn};
use log::error;
use notify::{RecursiveMode, Watcher, recommended_watcher};
use args::{GlobalConfigArgs, ServerCommand};

View File

@ -1440,6 +1440,78 @@ def function():
Ok(())
}
#[test]
fn ignore_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
import os # noqa: F401
# ruff: disable[F401]
import sys
"#,
)?;
// without --ignore-noqa
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py"),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
");
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--preview"]),
@r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
// with --ignore-noqa --preview
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--ignore-noqa", "--preview"]),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:2:8: F401 [*] `os` imported but unused
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 2 errors.
[*] 2 fixable with the `--fix` option.
----- stderr -----
");
Ok(())
}
#[test]
fn add_noqa() -> Result<()> {
let fixture = CliTest::new()?;
@ -1632,6 +1704,100 @@ def unused(x): # noqa: ANN001, ARG001, D103
Ok(())
}
#[test]
fn add_noqa_existing_file_level_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: noqa F401
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: noqa F401
import os
");
Ok(())
}
#[test]
fn add_noqa_existing_range_suppression() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: disable[F401]
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: disable[F401]
import os
");
Ok(())
}
#[test]
fn add_noqa_multiline_comment() -> Result<()> {
let fixture = CliTest::new()?;

View File

@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13000,
13100,
);
static TANJUN: Benchmark = Benchmark::new(
@ -223,7 +223,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
950,
1100,
);
#[track_caller]

View File

@ -166,28 +166,8 @@ impl Diagnostic {
/// Returns the primary message for this diagnostic.
///
/// A diagnostic always has a message, but it may be empty.
///
/// NOTE: At present, this routine will return the first primary
/// annotation's message as the primary message when the main diagnostic
/// message is empty. This is meant to facilitate an incremental migration
/// in ty over to the new diagnostic data model. (The old data model
/// didn't distinguish between messages on the entire diagnostic and
/// messages attached to a particular span.)
pub fn primary_message(&self) -> &str {
if !self.inner.message.as_str().is_empty() {
return self.inner.message.as_str();
}
// FIXME: As a special case, while we're migrating ty
// to the new diagnostic data model, we'll look for a primary
// message from the primary annotation. This is because most
// ty diagnostics are created with an empty diagnostic
// message and instead attach the message to the annotation.
// Fixing this will require touching basically every diagnostic
// in ty, so we do it this way for now to match the old
// semantics. ---AG
self.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default()
self.inner.message.as_str()
}
/// Introspects this diagnostic and returns what kind of "primary" message
@ -199,18 +179,6 @@ impl Diagnostic {
/// contains *essential* information or context for understanding the
/// diagnostic.
///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
@ -224,11 +192,10 @@ impl Diagnostic {
.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) {
(false, true) => ConciseMessage::MainDiagnostic(main),
(true, false) => ConciseMessage::PrimaryAnnotation(annotation),
(false, false) => ConciseMessage::Both { main, annotation },
(true, true) => ConciseMessage::Empty,
if annotation.is_empty() {
ConciseMessage::MainDiagnostic(main)
} else {
ConciseMessage::Both { main, annotation }
}
}
@ -693,18 +660,6 @@ impl SubDiagnostic {
/// contains *essential* information or context for understanding the
/// diagnostic.
///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
@ -714,11 +669,10 @@ impl SubDiagnostic {
.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) {
(false, true) => ConciseMessage::MainDiagnostic(main),
(true, false) => ConciseMessage::PrimaryAnnotation(annotation),
(false, false) => ConciseMessage::Both { main, annotation },
(true, true) => ConciseMessage::Empty,
if annotation.is_empty() {
ConciseMessage::MainDiagnostic(main)
} else {
ConciseMessage::Both { main, annotation }
}
}
}
@ -888,6 +842,10 @@ impl Annotation {
pub fn hide_snippet(&mut self, yes: bool) {
self.hide_snippet = yes;
}
pub fn is_primary(&self) -> bool {
self.is_primary
}
}
/// Tags that can be associated with an annotation.
@ -1508,28 +1466,10 @@ pub enum DiagnosticFormat {
pub enum ConciseMessage<'a> {
/// A diagnostic contains a non-empty main message and an empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
MainDiagnostic(&'a str),
/// A diagnostic contains an empty main message and a non-empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "old" data model.
PrimaryAnnotation(&'a str),
/// A diagnostic contains a non-empty main message and a non-empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
Both { main: &'a str, annotation: &'a str },
/// A diagnostic contains an empty main message and an empty
/// primary annotation message.
///
/// This indicates that the diagnostic is probably using the old
/// model.
Empty,
/// A custom concise message has been provided.
Custom(&'a str),
}
@ -1540,13 +1480,9 @@ impl std::fmt::Display for ConciseMessage<'_> {
ConciseMessage::MainDiagnostic(main) => {
write!(f, "{main}")
}
ConciseMessage::PrimaryAnnotation(annotation) => {
write!(f, "{annotation}")
}
ConciseMessage::Both { main, annotation } => {
write!(f, "{main}: {annotation}")
}
ConciseMessage::Empty => Ok(()),
ConciseMessage::Custom(message) => {
write!(f, "{message}")
}

View File

@ -144,8 +144,8 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
if let Some(deprecated) = &field.deprecated {
output.push_str("> [!WARN] \"Deprecated\"\n");
output.push_str("> This option has been deprecated");
output.push_str("!!! warning \"Deprecated\"\n");
output.push_str(" This option has been deprecated");
if let Some(since) = deprecated.since {
write!(output, " in {since}").unwrap();
@ -166,8 +166,9 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
let _ = writeln!(output, "**Type**: `{}`", field.value_type);
output.push('\n');
output.push_str("**Example usage** (`pyproject.toml`):\n\n");
output.push_str("**Example usage**:\n\n");
output.push_str(&format_example(
"pyproject.toml",
&format_header(
field.scope,
field.example,
@ -179,11 +180,11 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n');
}
fn format_example(header: &str, content: &str) -> String {
fn format_example(title: &str, header: &str, content: &str) -> String {
if header.is_empty() {
format!("```toml\n{content}\n```\n",)
format!("```toml title=\"{title}\"\n{content}\n```\n",)
} else {
format!("```toml\n{header}\n{content}\n```\n",)
format!("```toml title=\"{title}\"\n{header}\n{content}\n```\n",)
}
}

View File

@ -39,7 +39,7 @@ impl Edit {
/// Creates an edit that replaces the content in `range` with `content`.
pub fn range_replacement(content: String, range: TextRange) -> Self {
debug_assert!(!content.is_empty(), "Prefer `Fix::deletion`");
debug_assert!(!content.is_empty(), "Prefer `Edit::deletion`");
Self {
content: Some(Box::from(content)),

View File

@ -337,7 +337,7 @@ macro_rules! best_fitting {
#[cfg(test)]
mod tests {
use crate::prelude::*;
use crate::{FormatState, SimpleFormatOptions, VecBuffer, write};
use crate::{FormatState, SimpleFormatOptions, VecBuffer};
struct TestFormat;
@ -385,8 +385,8 @@ mod tests {
#[test]
fn best_fitting_variants_print_as_lists() {
use crate::Formatted;
use crate::prelude::*;
use crate::{Formatted, format, format_args};
// The second variant below should be selected when printing at a width of 30
let formatted_best_fitting = format!(

View File

@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.8"
version = "0.14.9"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@ -28,9 +28,11 @@ yaml.load("{}", SafeLoader)
yaml.load("{}", yaml.SafeLoader)
yaml.load("{}", CSafeLoader)
yaml.load("{}", yaml.CSafeLoader)
yaml.load("{}", yaml.cyaml.CSafeLoader)
yaml.load("{}", NewSafeLoader)
yaml.load("{}", Loader=SafeLoader)
yaml.load("{}", Loader=yaml.SafeLoader)
yaml.load("{}", Loader=CSafeLoader)
yaml.load("{}", Loader=yaml.CSafeLoader)
yaml.load("{}", Loader=yaml.cyaml.CSafeLoader)
yaml.load("{}", Loader=NewSafeLoader)

View File

@ -199,6 +199,9 @@ def bytes_okay(value=bytes(1)):
def int_okay(value=int("12")):
pass
# Allow immutable slice()
def slice_okay(value=slice(1,2)):
pass
# Allow immutable complex() value
def complex_okay(value=complex(1,2)):
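The added `slice()` case sits alongside the other immutable call defaults in this fixture. A short illustration of the line the rule draws, consistent with the B006 snapshots later in this diff; the function names are illustrative:

```python
# Calls that return immutable values are accepted as defaults and not flagged.
def windowed(view=slice(1, 2), origin=complex(1, 2)):
    ...

# A mutable default is still flagged, and the suggested fix replaces it with `None`.
def collect(items=[]):  # B006
    ...
```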

View File

@ -218,3 +218,26 @@ def should_not_fail(payload, Args):
Args:
The other arguments.
"""
# Test cases for Unpack[TypedDict] kwargs
from typing import TypedDict
from typing_extensions import Unpack
class User(TypedDict):
id: int
name: str
def function_with_unpack_args_should_not_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs.
Args:
query: some arg
"""
def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs but missing query arg documentation.
Args:
**kwargs: keyword arguments
"""

View File

@ -2,15 +2,40 @@ from abc import ABC, abstractmethod
from contextlib import suppress
class MyError(Exception):
...
class MySubError(MyError):
...
class MyValueError(ValueError):
...
class MyUserWarning(UserWarning):
...
# Violation test cases with builtin errors: PLW0133
# Test case 1: Useless exception statement
def func():
AssertionError("This is an assertion error") # PLW0133
MyError("This is a custom error") # PLW0133
MySubError("This is a custom error") # PLW0133
MyValueError("This is a custom value error") # PLW0133
# Test case 2: Useless exception statement in try-except block
def func():
try:
Exception("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
except Exception as err:
pass
@ -19,6 +44,9 @@ def func():
def func():
if True:
RuntimeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 4: Useless exception statement in class
@ -26,12 +54,18 @@ def func():
class Class:
def __init__(self):
TypeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 5: Useless exception statement in function
def func():
def inner():
IndexError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
inner()
@ -40,6 +74,9 @@ def func():
def func():
while True:
KeyError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 7: Useless exception statement in abstract class
@ -48,27 +85,58 @@ def func():
@abstractmethod
def method(self):
NotImplementedError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 8: Useless exception statement inside context manager
def func():
with suppress(AttributeError):
with suppress(Exception):
AttributeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 9: Useless exception statement in parentheses
def func():
(RuntimeError("This is an exception")) # PLW0133
(MyError("This is an exception")) # PLW0133
(MySubError("This is an exception")) # PLW0133
(MyValueError("This is an exception")) # PLW0133
# Test case 10: Useless exception statement in continuation
def func():
x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
x = 1; (MyError("This is an exception")); y = 2 # PLW0133
x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
# Test case 11: Useless warning statement
def func():
UserWarning("This is an assertion error") # PLW0133
UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Test case 12: Useless exception statement at module level
import builtins
builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
PythonFinalizationError("Added in Python 3.13") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Non-violation test cases: PLW0133
@ -119,10 +187,3 @@ def func():
def func():
with suppress(AttributeError):
raise AttributeError("This is an exception") # OK
import builtins
builtins.TypeError("still an exception even though it's an Attribute")
PythonFinalizationError("Added in Python 3.13")

View File

@ -132,7 +132,6 @@ async def c():
# Non-errors
###
# False-negative: RustPython doesn't parse the `\N{snowman}`.
"\N{snowman} {}".format(a)
"{".format(a)
@ -276,3 +275,6 @@ if __name__ == "__main__":
number = 0
string = "{}".format(number := number + 1)
print(string)
# Unicode escape
"\N{angle}AOB = {angle}°".format(angle=180)

View File

@ -0,0 +1,88 @@
def f():
# These should both be ignored by the range suppression.
# ruff: disable[E741, F841]
I = 1
# ruff: enable[E741, F841]
def f():
# These should both be ignored by the implicit range suppression.
# Should also generate an "unmatched suppression" warning.
# ruff:disable[E741,F841]
I = 1
def f():
# Neither warning is ignored, and an "unmatched suppression"
# should be generated.
I = 1
# ruff: enable[E741, F841]
def f():
# One should be ignored by the range suppression, and
# the other logged to the user.
# ruff: disable[E741]
I = 1
# ruff: enable[E741]
def f():
# Test interleaved range suppressions. The first and last
# lines should each log a different warning, while the
# middle line should be completely silenced.
# ruff: disable[E741]
l = 0
# ruff: disable[F841]
O = 1
# ruff: enable[E741]
I = 2
# ruff: enable[F841]
def f():
# Neither of these are ignored and warnings are
# logged to user
# ruff: disable[E501]
I = 1
# ruff: enable[E501]
def f():
# These should both be ignored by the range suppression,
# and an unused noqa diagnostic should be logged.
# ruff:disable[E741,F841]
I = 1 # noqa: E741,F841
# ruff:enable[E741,F841]
def f():
# TODO: Duplicate codes should be counted as duplicate, not unused
# ruff: disable[F841, F841]
foo = 0
def f():
# Overlapping range suppressions, one should be marked as used,
# and the other should trigger an unused suppression diagnostic
# ruff: disable[F841]
# ruff: disable[F841]
foo = 0
def f():
# Multiple codes but only one is used
# ruff: disable[E741, F401, F841]
foo = 0
def f():
# Multiple codes but only two are used
# ruff: disable[E741, F401, F841]
I = 0
def f():
# Multiple codes but none are used
# ruff: disable[E741, F401, F841]
print("hello")

View File

@ -437,6 +437,15 @@ impl<'a> Checker<'a> {
}
}
/// Returns the [`Tokens`] for the parsed source file.
///
/// Unlike [`Self::tokens`], this method always returns
/// the tokens for the current file, even when within a parsed type annotation.
pub(crate) fn source_tokens(&self) -> &'a Tokens {
self.parsed.tokens()
}
/// The [`Locator`] for the current file, which enables extraction of source code from byte
/// offsets.
pub(crate) const fn locator(&self) -> &'a Locator<'a> {

View File

@ -12,17 +12,20 @@ use crate::fix::edits::delete_comment;
use crate::noqa::{
Code, Directive, FileExemption, FileNoqaDirectives, NoqaDirectives, NoqaMapping,
};
use crate::preview::is_range_suppressions_enabled;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::rules::pygrep_hooks;
use crate::rules::ruff;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA};
use crate::settings::LinterSettings;
use crate::suppression::Suppressions;
use crate::{Edit, Fix, Locator};
use super::ast::LintContext;
/// RUF100
#[expect(clippy::too_many_arguments)]
pub(crate) fn check_noqa(
context: &mut LintContext,
path: &Path,
@ -31,6 +34,7 @@ pub(crate) fn check_noqa(
noqa_line_for: &NoqaMapping,
analyze_directives: bool,
settings: &LinterSettings,
suppressions: &Suppressions,
) -> Vec<usize> {
// Identify any codes that are globally exempted (within the current file).
let file_noqa_directives =
@ -40,7 +44,7 @@ pub(crate) fn check_noqa(
let mut noqa_directives =
NoqaDirectives::from_commented_ranges(comment_ranges, &settings.external, path, locator);
if file_noqa_directives.is_empty() && noqa_directives.is_empty() {
if file_noqa_directives.is_empty() && noqa_directives.is_empty() && suppressions.is_empty() {
return Vec::new();
}
@ -60,11 +64,19 @@ pub(crate) fn check_noqa(
continue;
}
// Apply file-level suppressions first
if exemption.contains_secondary_code(code) {
ignored_diagnostics.push(index);
continue;
}
// Apply ranged suppressions next
if is_range_suppressions_enabled(settings) && suppressions.check_diagnostic(diagnostic) {
ignored_diagnostics.push(index);
continue;
}
// Apply end-of-line noqa suppressions last
let noqa_offsets = diagnostic
.parent()
.into_iter()
@ -107,6 +119,9 @@ pub(crate) fn check_noqa(
}
}
// Diagnostics for unused/invalid range suppressions
suppressions.check_suppressions(context, locator);
// Enforce that the noqa directive was actually used (RUF100), unless RUF100 was itself
// suppressed.
if context.is_rule_enabled(Rule::UnusedNOQA)
@ -128,8 +143,13 @@ pub(crate) fn check_noqa(
Directive::All(directive) => {
if matches.is_empty() {
let edit = delete_comment(directive.range(), locator);
let mut diagnostic = context
.report_diagnostic(UnusedNOQA { codes: None }, directive.range());
let mut diagnostic = context.report_diagnostic(
UnusedNOQA {
codes: None,
kind: ruff::rules::UnusedNOQAKind::Noqa,
},
directive.range(),
);
diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
diagnostic.set_fix(Fix::safe_edit(edit));
}
@ -224,6 +244,7 @@ pub(crate) fn check_noqa(
.map(|code| (*code).to_string())
.collect(),
}),
kind: ruff::rules::UnusedNOQAKind::Noqa,
},
directive.range(),
);

View File

@ -3,14 +3,13 @@
use anyhow::{Context, Result};
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{self, Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Expr, ExprList, Parameters, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent_to;
use ruff_python_trivia::{
CommentRanges, PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content,
is_python_whitespace,
PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content, is_python_whitespace,
};
use ruff_source_file::{LineRanges, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@ -209,7 +208,7 @@ pub(crate) fn remove_argument<T: Ranged>(
arguments: &Arguments,
parentheses: Parentheses,
source: &str,
comment_ranges: &CommentRanges,
tokens: &Tokens,
) -> Result<Edit> {
// Partition into arguments before and after the argument to remove.
let (before, after): (Vec<_>, Vec<_>) = arguments
@ -224,7 +223,7 @@ pub(crate) fn remove_argument<T: Ranged>(
.context("Unable to find argument")?;
let parenthesized_range =
parenthesized_range(arg.value().into(), arguments.into(), comment_ranges, source)
token::parenthesized_range(arg.value().into(), arguments.into(), tokens)
.unwrap_or(arg.range());
if !after.is_empty() {
@ -270,25 +269,14 @@ pub(crate) fn remove_argument<T: Ranged>(
///
/// The new argument will be inserted before the first existing keyword argument in `arguments`, if
/// there are any present. Otherwise, the new argument is added to the end of the argument list.
pub(crate) fn add_argument(
argument: &str,
arguments: &Arguments,
comment_ranges: &CommentRanges,
source: &str,
) -> Edit {
pub(crate) fn add_argument(argument: &str, arguments: &Arguments, tokens: &Tokens) -> Edit {
if let Some(ast::Keyword { range, value, .. }) = arguments.keywords.first() {
let keyword = parenthesized_range(value.into(), arguments.into(), comment_ranges, source)
.unwrap_or(*range);
let keyword = parenthesized_range(value.into(), arguments.into(), tokens).unwrap_or(*range);
Edit::insertion(format!("{argument}, "), keyword.start())
} else if let Some(last) = arguments.arguments_source_order().last() {
// Case 1: existing arguments, so append after the last argument.
let last = parenthesized_range(
last.value().into(),
arguments.into(),
comment_ranges,
source,
)
.unwrap_or(last.range());
let last = parenthesized_range(last.value().into(), arguments.into(), tokens)
.unwrap_or(last.range());
Edit::insertion(format!(", {argument}"), last.end())
} else {
// Case 2: no arguments. Add argument, without any trailing comma.
@ -298,12 +286,7 @@ pub(crate) fn add_argument(
/// Generic function to add a (regular) parameter to a function definition.
pub(crate) fn add_parameter(parameter: &str, parameters: &Parameters, source: &str) -> Edit {
if let Some(last) = parameters
.args
.iter()
.filter(|arg| arg.default.is_none())
.next_back()
{
if let Some(last) = parameters.args.iter().rfind(|arg| arg.default.is_none()) {
// Case 1: at least one regular parameter, so append after the last one.
Edit::insertion(format!(", {parameter}"), last.end())
} else if !parameters.args.is_empty() {
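The doc comment on `add_argument` above spells out where the new argument text is placed. A hedged Python-level illustration of the resulting edits, borrowing `strict=False` and `stacklevel=2` from rules updated later in this diff; the specific call sites are illustrative:

```python
import warnings

a, b = [1, 2], [3, 4]

# No keyword arguments: the new argument is appended after the last existing one.
zip(a, b)                                   # edit produces: zip(a, b, strict=False)

# A keyword argument is present: the new argument is inserted before the first keyword.
warnings.warn("msg", category=UserWarning)  # edit produces: warnings.warn("msg", stacklevel=2, category=UserWarning)
```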

View File

@ -32,6 +32,7 @@ use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};
pub(crate) mod float;
@ -128,6 +129,7 @@ pub fn check_path(
source_type: PySourceType,
parsed: &Parsed<ModModule>,
target_version: TargetVersion,
suppressions: &Suppressions,
) -> Vec<Diagnostic> {
// Aggregate all diagnostics.
let mut context = LintContext::new(path, locator.contents(), settings);
@ -339,6 +341,7 @@ pub fn check_path(
&directives.noqa_line_for,
parsed.has_valid_syntax(),
settings,
suppressions,
);
if noqa.is_enabled() {
for index in ignored.iter().rev() {
@ -400,6 +403,9 @@ pub fn add_noqa_to_path(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics, ignoring any existing `noqa` directives.
let diagnostics = check_path(
path,
@ -414,6 +420,7 @@ pub fn add_noqa_to_path(
source_type,
&parsed,
target_version,
&suppressions,
);
// Add any missing `# noqa` pragmas.
@ -427,6 +434,7 @@ pub fn add_noqa_to_path(
&directives.noqa_line_for,
stylist.line_ending(),
reason,
&suppressions,
)
}
@ -461,6 +469,9 @@ pub fn lint_only(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@ -475,6 +486,7 @@ pub fn lint_only(
source_type,
&parsed,
target_version,
&suppressions,
);
LinterResult {
@ -566,6 +578,9 @@ pub fn lint_fix<'a>(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@ -580,6 +595,7 @@ pub fn lint_fix<'a>(
source_type,
&parsed,
target_version,
&suppressions,
);
if iterations == 0 {
@ -769,6 +785,7 @@ mod tests {
use crate::registry::Rule;
use crate::settings::LinterSettings;
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::test::{TestedNotebook, assert_notebook_path, test_contents, test_snippet};
use crate::{Locator, assert_diagnostics, directives, settings};
@ -944,6 +961,7 @@ mod tests {
&locator,
&indexer,
);
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
let mut diagnostics = check_path(
path,
None,
@ -957,6 +975,7 @@ mod tests {
source_type,
&parsed,
target_version,
&suppressions,
);
diagnostics.sort_by(Diagnostic::ruff_start_ordering);
diagnostics

View File

@ -20,12 +20,14 @@ use crate::Locator;
use crate::fs::relativize_path;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::suppression::Suppressions;
/// Generates an array of edits that matches the length of `messages`.
/// Each potential edit in the array is paired, in order, with the associated diagnostic.
/// Each edit will add a `noqa` comment to the appropriate line in the source to hide
/// the diagnostic. These edits may conflict with each other and should not be applied
/// simultaneously.
#[expect(clippy::too_many_arguments)]
pub fn generate_noqa_edits(
path: &Path,
diagnostics: &[Diagnostic],
@ -34,11 +36,19 @@ pub fn generate_noqa_edits(
external: &[String],
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
suppressions: &Suppressions,
) -> Vec<Option<Edit>> {
let file_directives = FileNoqaDirectives::extract(locator, comment_ranges, external, path);
let exemption = FileExemption::from(&file_directives);
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
build_noqa_edits_by_diagnostic(comments, locator, line_ending, None)
}
@ -725,6 +735,7 @@ pub(crate) fn add_noqa(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> Result<usize> {
let (count, output) = add_noqa_inner(
path,
@ -735,6 +746,7 @@ pub(crate) fn add_noqa(
noqa_line_for,
line_ending,
reason,
suppressions,
);
fs::write(path, output)?;
@ -751,6 +763,7 @@ fn add_noqa_inner(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> (usize, String) {
let mut count = 0;
@ -760,7 +773,14 @@ fn add_noqa_inner(
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
let edits = build_noqa_edits_by_line(comments, locator, line_ending, reason);
@ -859,6 +879,7 @@ fn find_noqa_comments<'a>(
exemption: &'a FileExemption,
directives: &'a NoqaDirectives,
noqa_line_for: &NoqaMapping,
suppressions: &'a Suppressions,
) -> Vec<Option<NoqaComment<'a>>> {
// List of noqa comments, ordered to match up with `messages`
let mut comments_by_line: Vec<Option<NoqaComment<'a>>> = vec![];
@ -875,6 +896,12 @@ fn find_noqa_comments<'a>(
continue;
}
// Apply ranged suppressions next
if suppressions.check_diagnostic(message) {
comments_by_line.push(None);
continue;
}
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent) = message.parent() {
if let Some(directive_line) =
@ -1253,6 +1280,7 @@ mod tests {
use crate::rules::pycodestyle::rules::{AmbiguousVariableName, UselessSemicolon};
use crate::rules::pyflakes::rules::UnusedVariable;
use crate::rules::pyupgrade::rules::PrintfStringFormatting;
use crate::suppression::Suppressions;
use crate::{Edit, Violation};
use crate::{Locator, generate_noqa_edits};
@ -2848,6 +2876,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, format!("{contents}"));
@ -2872,6 +2901,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: F841\n");
@ -2903,6 +2933,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@ -2934,6 +2965,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, "x = 1 # noqa");
@ -2956,6 +2988,7 @@ print(
let messages = [PrintfStringFormatting
.into_diagnostic(TextRange::new(12.into(), 79.into()), &source_file)];
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@ -2964,6 +2997,7 @@ print(
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,
@ -2987,6 +3021,7 @@ bar =
[UselessSemicolon.into_diagnostic(TextRange::new(4.into(), 5.into()), &source_file)];
let noqa_line_for = NoqaMapping::default();
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@ -2995,6 +3030,7 @@ bar =
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,

View File

@ -9,6 +9,11 @@ use crate::settings::LinterSettings;
// Rule-specific behavior
// https://github.com/astral-sh/ruff/pull/21382
pub(crate) const fn is_custom_exception_checking_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/15541
pub(crate) const fn is_suspicious_function_reference_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
@ -286,3 +291,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/21623
pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}

View File

@ -91,8 +91,8 @@ pub(crate) fn fastapi_redundant_response_model(checker: &Checker, function_def:
response_model_arg,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::unsafe_edit)
});

View File

@ -75,6 +75,7 @@ pub(crate) fn unsafe_yaml_load(checker: &Checker, call: &ast::ExprCall) {
qualified_name.segments(),
["yaml", "SafeLoader" | "CSafeLoader"]
| ["yaml", "loader", "SafeLoader" | "CSafeLoader"]
| ["yaml", "cyaml", "CSafeLoader"]
)
})
{

View File

@ -74,12 +74,7 @@ pub(crate) fn map_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
checker
.report_diagnostic(MapWithoutExplicitStrict, call.range())
.set_fix(Fix::applicable_edit(
add_argument(
"strict=False",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
),
add_argument("strict=False", &call.arguments, checker.tokens()),
Applicability::Unsafe,
));
}

View File

@ -3,7 +3,7 @@ use std::fmt::Write;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_docstring_stmt;
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, ParameterWithDefault};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::function_type::is_stub;
@ -166,12 +166,7 @@ fn move_initialization(
return None;
}
let range = match parenthesized_range(
default.into(),
parameter.into(),
checker.comment_ranges(),
checker.source(),
) {
let range = match parenthesized_range(default.into(), parameter.into(), checker.tokens()) {
Some(range) => range,
None => default.range(),
};
@ -194,13 +189,8 @@ fn move_initialization(
"{} = {}",
parameter.parameter.name(),
locator.slice(
parenthesized_range(
default.into(),
parameter.into(),
checker.comment_ranges(),
checker.source()
)
.unwrap_or(default.range())
parenthesized_range(default.into(), parameter.into(), checker.tokens())
.unwrap_or(default.range())
)
);
} else {

View File

@ -92,12 +92,7 @@ pub(crate) fn no_explicit_stacklevel(checker: &Checker, call: &ast::ExprCall) {
}
let mut diagnostic = checker.report_diagnostic(NoExplicitStacklevel, call.func.range());
let edit = add_argument(
"stacklevel=2",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
);
let edit = add_argument("stacklevel=2", &call.arguments, checker.tokens());
diagnostic.set_fix(Fix::unsafe_edit(edit));
}

View File

@ -70,12 +70,7 @@ pub(crate) fn zip_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
checker
.report_diagnostic(ZipWithoutExplicitStrict, call.range())
.set_fix(Fix::applicable_edit(
add_argument(
"strict=False",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
),
add_argument("strict=False", &call.arguments, checker.tokens()),
Applicability::Unsafe,
));
}

View File

@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
280 | ):
281 | pass
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ):
281 | pass
282 |
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52
--> B006_B008.py:287:52
|
284 | def single_line_func_wrong(value: dict[str, str] = {}):
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
285 | """Docstring"""
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
281 | pass
282 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
284 | pass
285 |
286 |
287 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52
--> B006_B008.py:291:52
|
288 | def single_line_func_wrong(value: dict[str, str] = {}):
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
289 | """Docstring"""
290 | ...
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
285 | """Docstring"""
286 |
287 |
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring"""
290 | ...
291 |
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52
--> B006_B008.py:296:52
|
293 | def single_line_func_wrong(value: dict[str, str] = {}):
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
294 | """Docstring"""; ...
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
290 | ...
291 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
293 | ...
294 |
295 |
296 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52
--> B006_B008.py:300:52
|
297 | def single_line_func_wrong(value: dict[str, str] = {}):
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
298 | """Docstring"""; \
299 | ...
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
294 | """Docstring"""; ...
295 |
296 |
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \
299 | ...
300 |
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52
--> B006_B008.py:305:52
|
302 | def single_line_func_wrong(value: dict[str, str] = {
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
303 | | # This is a comment
304 | | }):
306 | | # This is a comment
307 | | }):
| |_^
305 | """Docstring"""
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
299 | ...
300 |
301 |
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
302 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring"""
304 |
305 |
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52
--> B006_B008.py:311:52
|
308 | def single_line_func_wrong(value: dict[str, str] = {}) \
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
309 | : \
310 | """Docstring"""
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52
--> B006_B008.py:316:52
|
313 | def single_line_func_wrong(value: dict[str, str] = {}):
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
314 | """Docstring without newline"""
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
310 | """Docstring"""
311 |
312 |
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline"""
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior

View File

@ -53,39 +53,39 @@ B008 Do not perform function call in argument defaults; instead, perform the cal
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:239:31
--> B006_B008.py:242:31
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
B008 Do not perform function call `map` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:245:22
--> B006_B008.py:248:22
|
243 | # Don't flag nested B006 since we can't guarantee that
244 | # it isn't made mutable by the outer operation.
245 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
246 | # Don't flag nested B006 since we can't guarantee that
247 | # it isn't made mutable by the outer operation.
248 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 | pass
249 | pass
|
B008 Do not perform function call `random.randint` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:19
--> B006_B008.py:253:19
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:37
--> B006_B008.py:253:37
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|

View File

@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
280 | ):
281 | pass
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ):
281 | pass
282 |
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52
--> B006_B008.py:287:52
|
284 | def single_line_func_wrong(value: dict[str, str] = {}):
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
285 | """Docstring"""
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
281 | pass
282 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
284 | pass
285 |
286 |
287 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52
--> B006_B008.py:291:52
|
288 | def single_line_func_wrong(value: dict[str, str] = {}):
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
289 | """Docstring"""
290 | ...
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
285 | """Docstring"""
286 |
287 |
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring"""
290 | ...
291 |
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52
--> B006_B008.py:296:52
|
293 | def single_line_func_wrong(value: dict[str, str] = {}):
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
294 | """Docstring"""; ...
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
290 | ...
291 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
293 | ...
294 |
295 |
296 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52
--> B006_B008.py:300:52
|
297 | def single_line_func_wrong(value: dict[str, str] = {}):
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
298 | """Docstring"""; \
299 | ...
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
294 | """Docstring"""; ...
295 |
296 |
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \
299 | ...
300 |
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52
--> B006_B008.py:305:52
|
302 | def single_line_func_wrong(value: dict[str, str] = {
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
303 | | # This is a comment
304 | | }):
306 | | # This is a comment
307 | | }):
| |_^
305 | """Docstring"""
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
299 | ...
300 |
301 |
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
302 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring"""
304 |
305 |
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52
--> B006_B008.py:311:52
|
308 | def single_line_func_wrong(value: dict[str, str] = {}) \
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
309 | : \
310 | """Docstring"""
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52
--> B006_B008.py:316:52
|
313 | def single_line_func_wrong(value: dict[str, str] = {}):
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
314 | """Docstring without newline"""
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
310 | """Docstring"""
311 |
312 |
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline"""
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior
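
The snapshot above only shows the autofix half of the B006 suggestion (swapping the mutable default for `None`); the "initialize within function" half is left to the author. A minimal sketch of the full recommended pattern, with an invented function name rather than one from the fixture:

```python
# Flagged by B006: the dict is created once at definition time and shared
# across calls, so mutations leak between callers.
def append_wrong(key, value, cache={}):
    cache[key] = value
    return cache


# The shape the help text recommends after applying the fix:
# default to None, then initialize inside the function body.
def append_fixed(key, value, cache=None):
    if cache is None:
        cache = {}
    cache[key] = value
    return cache
```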

View File

@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@ -142,13 +142,9 @@ pub(crate) fn unnecessary_generator_list(checker: &Checker, call: &ast::ExprCall
if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses.
let range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range());
let range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses.
let generator = checker.generator().expr(argument);
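
For context on why the fix range has to look past the generator's own parentheses, here is a rough illustration of the call shapes `unnecessary_generator_list` rewrites (the variable names are made up):

```python
nums = [1, 2, 3]

# Flagged: list() called on a generator expression...
squares = list(x * x for x in nums)

# ...including when the argument carries extra, redundant parentheses,
# which is why the replacement range must cover the surrounding parentheses.
squares_extra_parens = list(((x * x for x in nums)))

# Preferred replacement: a list comprehension.
squares_fixed = [x * x for x in nums]
```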

View File

@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@ -147,13 +147,9 @@ pub(crate) fn unnecessary_generator_set(checker: &Checker, call: &ast::ExprCall)
if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses.
let range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range());
let range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses.
let generator = checker.generator().expr(argument);

View File

@ -1,7 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;
@ -89,13 +89,9 @@ pub(crate) fn unnecessary_list_comprehension_set(checker: &Checker, call: &ast::
// If the list comprehension is parenthesized, remove the parentheses in addition to
// removing the brackets.
let replacement_range = parenthesized_range(
argument.into(),
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or_else(|| argument.range());
let replacement_range =
parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
.unwrap_or_else(|| argument.range());
let span = argument.range().add_start(one).sub_end(one);
let replacement =

View File

@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, Operator};
use ruff_python_trivia::is_python_whitespace;
use ruff_source_file::LineRanges;
@ -88,13 +88,7 @@ pub(crate) fn explicit(checker: &Checker, expr: &Expr) {
checker.report_diagnostic(ExplicitStringConcatenation, expr.range());
let is_parenthesized = |expr: &Expr| {
parenthesized_range(
expr.into(),
bin_op.into(),
checker.comment_ranges(),
checker.source(),
)
.is_some()
parenthesized_range(expr.into(), bin_op.into(), checker.tokens()).is_some()
};
// If either `left` or `right` is parenthesized, generating
// a fix would be too involved. Just report the diagnostic.

View File

@ -111,7 +111,6 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
}
let arguments = &call.arguments;
let source = checker.source();
let mut diagnostic = checker.report_diagnostic(ExcInfoOutsideExceptHandler, exc_info.range);
@ -120,8 +119,8 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
exc_info,
arguments,
Parentheses::Preserve,
source,
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)?;
Ok(Fix::unsafe_edit(edit))
});

View File

@ -2,7 +2,7 @@ use itertools::Itertools;
use rustc_hash::{FxBuildHasher, FxHashSet};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_text_size::Ranged;
@ -129,8 +129,8 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
keyword,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::safe_edit)
});
@ -158,8 +158,7 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
parenthesized_range(
value.into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens()
)
.unwrap_or(value.range())
)
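
A small sketch of the pattern `unnecessary_dict_kwargs` targets, assuming a hypothetical `connect` function; the keyword rewrite only applies when every key is a string literal that is also a valid identifier:

```python
def connect(host="localhost", port=5432):
    return f"{host}:{port}"


# Flagged: a dict literal unpacked into keyword arguments.
connect(**{"host": "db.internal", "port": 5433})

# Preferred: pass the keywords directly.
connect(host="db.internal", port=5433)
```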

View File

@ -73,11 +73,11 @@ pub(crate) fn unnecessary_range_start(checker: &Checker, call: &ast::ExprCall) {
let mut diagnostic = checker.report_diagnostic(UnnecessaryRangeStart, start.range());
diagnostic.try_set_fix(|| {
remove_argument(
&start,
start,
&call.arguments,
Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::safe_edit)
});
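
The `remove_argument` call above drops a redundant range start of `0`; roughly:

```python
# Flagged by unnecessary_range_start: an explicit start of 0 adds nothing.
for i in range(0, 10):
    print(i)

# Preferred form after the fix removes the first argument.
for i in range(10):
    print(i)
```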

View File

@ -160,20 +160,16 @@ fn generate_fix(
) -> anyhow::Result<Fix> {
let locator = checker.locator();
let source = locator.contents();
let tokens = checker.tokens();
let deletion = remove_argument(
generic_base,
arguments,
Parentheses::Preserve,
source,
checker.comment_ranges(),
tokens,
)?;
let insertion = add_argument(
locator.slice(generic_base),
arguments,
checker.comment_ranges(),
source,
);
let insertion = add_argument(locator.slice(generic_base), arguments, tokens);
Ok(Fix::unsafe_edits(deletion, [insertion]))
}

View File

@ -5,7 +5,7 @@ use ruff_python_ast::{
helpers::{pep_604_union, typing_optional},
name::Name,
operator_precedence::OperatorPrecedence,
parenthesize::parenthesized_range,
token::{Tokens, parenthesized_range},
};
use ruff_python_semantic::analyze::typing::{traverse_literal, traverse_union};
use ruff_text_size::{Ranged, TextRange};
@ -243,16 +243,12 @@ fn create_fix(
let union_expr = pep_604_union(&[new_literal_expr, none_expr]);
// Check if we need parentheses to preserve operator precedence
let content = if needs_parentheses_for_precedence(
semantic,
literal_expr,
checker.comment_ranges(),
checker.source(),
) {
format!("({})", checker.generator().expr(&union_expr))
} else {
checker.generator().expr(&union_expr)
};
let content =
if needs_parentheses_for_precedence(semantic, literal_expr, checker.tokens()) {
format!("({})", checker.generator().expr(&union_expr))
} else {
checker.generator().expr(&union_expr)
};
let union_edit = Edit::range_replacement(content, literal_expr.range());
Fix::applicable_edit(union_edit, applicability)
@ -278,8 +274,7 @@ enum UnionKind {
fn needs_parentheses_for_precedence(
semantic: &ruff_python_semantic::SemanticModel,
literal_expr: &Expr,
comment_ranges: &ruff_python_trivia::CommentRanges,
source: &str,
tokens: &Tokens,
) -> bool {
// Get the parent expression to check if we're in a context that needs parentheses
let Some(parent_expr) = semantic.current_expression_parent() else {
@ -287,14 +282,7 @@ fn needs_parentheses_for_precedence(
};
// Check if the literal expression is already parenthesized
if parenthesized_range(
literal_expr.into(),
parent_expr.into(),
comment_ranges,
source,
)
.is_some()
{
if parenthesized_range(literal_expr.into(), parent_expr.into(), tokens).is_some() {
return false; // Already parenthesized, don't add more
}

View File

@ -10,7 +10,7 @@ use libcst_native::{
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::Truthiness;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{
self as ast, AnyNodeRef, Arguments, BoolOp, ExceptHandler, Expr, Keyword, Stmt, UnaryOp,
@ -303,8 +303,7 @@ pub(crate) fn unittest_assertion(
parenthesized_range(
expr.into(),
checker.semantic().current_statement().into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or(expr.range()),
)));

View File

@ -768,8 +768,8 @@ fn check_fixture_decorator(checker: &Checker, func_name: &str, decorator: &Decor
keyword,
arguments,
edits::Parentheses::Preserve,
checker.locator().contents(),
checker.comment_ranges(),
checker.source(),
checker.tokens(),
)
.map(Fix::unsafe_edit)
});

View File

@ -2,10 +2,9 @@ use rustc_hash::{FxBuildHasher, FxHashMap};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Expr, ExprCall, ExprContext, StringLiteralFlags};
use ruff_python_codegen::Generator;
use ruff_python_trivia::CommentRanges;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{Ranged, TextRange, TextSize};
@ -322,18 +321,8 @@ fn elts_to_csv(elts: &[Expr], generator: Generator, flags: StringLiteralFlags) -
/// ```
///
/// This method assumes that the first argument is a string.
fn get_parametrize_name_range(
call: &ExprCall,
expr: &Expr,
comment_ranges: &CommentRanges,
source: &str,
) -> Option<TextRange> {
parenthesized_range(
expr.into(),
(&call.arguments).into(),
comment_ranges,
source,
)
fn get_parametrize_name_range(call: &ExprCall, expr: &Expr, tokens: &Tokens) -> Option<TextRange> {
parenthesized_range(expr.into(), (&call.arguments).into(), tokens)
}
/// PT006
@ -349,13 +338,8 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
if names.len() > 1 {
match names_type {
types::ParametrizeNameType::Tuple => {
let name_range = get_parametrize_name_range(
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range());
let name_range = get_parametrize_name_range(call, expr, checker.tokens())
.unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType {
single_argument: false,
@ -386,13 +370,8 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
)));
}
types::ParametrizeNameType::List => {
let name_range = get_parametrize_name_range(
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range());
let name_range = get_parametrize_name_range(call, expr, checker.tokens())
.unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType {
single_argument: false,

View File

@ -10,7 +10,7 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::helpers::{Truthiness, contains_effect};
use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_codegen::Generator;
use ruff_python_semantic::SemanticModel;
@ -800,14 +800,9 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit(
value,
TextRange::new(
parenthesized_range(
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range())
.start(),
parenthesized_range(furthest.into(), expr.into(), checker.tokens())
.unwrap_or(furthest.range())
.start(),
expr.end(),
),
short_circuit_truthiness,
@ -828,14 +823,9 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit(
next_value,
TextRange::new(
parenthesized_range(
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range())
.start(),
parenthesized_range(furthest.into(), expr.into(), checker.tokens())
.unwrap_or(furthest.range())
.start(),
expr.end(),
),
short_circuit_truthiness,

View File

@ -4,7 +4,7 @@ use ruff_text_size::{Ranged, TextRange};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::{is_const_false, is_const_true};
use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation};
@ -171,13 +171,8 @@ pub(crate) fn if_expr_with_true_false(
checker
.locator()
.slice(
parenthesized_range(
test.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(test.range()),
parenthesized_range(test.into(), expr.into(), checker.tokens())
.unwrap_or(test.range()),
)
.to_string(),
expr.range(),
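
A minimal sketch of the conditional expressions `if_expr_with_true_false` rewrites (names invented); the `parenthesized_range` lookup above is what lets the replacement reuse any parentheses already wrapping the test:

```python
answers = ["yes", "no", "maybe"]

# Flagged: a ternary that only maps a condition to True/False.
has_yes = True if "yes" in answers else False

# Replacement produced by the fix.
has_yes = bool("yes" in answers)
```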

View File

@ -4,10 +4,10 @@ use anyhow::Result;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableStmt;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::stmt_if::{IfElifBranch, if_elif_branches};
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_trivia::{CommentRanges, SimpleTokenKind, SimpleTokenizer};
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};
@ -99,7 +99,7 @@ pub(crate) fn if_with_same_arms(checker: &Checker, stmt_if: &ast::StmtIf) {
&current_branch,
following_branch,
checker.locator(),
checker.comment_ranges(),
checker.tokens(),
)
});
}
@ -111,7 +111,7 @@ fn merge_branches(
current_branch: &IfElifBranch,
following_branch: &IfElifBranch,
locator: &Locator,
comment_ranges: &CommentRanges,
tokens: &ruff_python_ast::token::Tokens,
) -> Result<Fix> {
// Identify the colon (`:`) at the end of the current branch's test.
let Some(current_branch_colon) =
@ -127,12 +127,9 @@ fn merge_branches(
);
// If the following test isn't parenthesized, consider parenthesizing it.
let following_branch_test = if let Some(range) = parenthesized_range(
following_branch.test.into(),
stmt_if.into(),
comment_ranges,
locator.contents(),
) {
let following_branch_test = if let Some(range) =
parenthesized_range(following_branch.test.into(), stmt_if.into(), tokens)
{
Cow::Borrowed(locator.slice(range))
} else if matches!(
following_branch.test,
@ -153,24 +150,19 @@ fn merge_branches(
//
// For example, if the current test is `x if x else y`, we should parenthesize it to
// `(x if x else y) or ...`.
let parenthesize_edit = if matches!(
current_branch.test,
Expr::Lambda(_) | Expr::Named(_) | Expr::If(_)
) && parenthesized_range(
current_branch.test.into(),
stmt_if.into(),
comment_ranges,
locator.contents(),
)
.is_none()
{
Some(Edit::range_replacement(
format!("({})", locator.slice(current_branch.test)),
current_branch.test.range(),
))
} else {
None
};
let parenthesize_edit =
if matches!(
current_branch.test,
Expr::Lambda(_) | Expr::Named(_) | Expr::If(_)
) && parenthesized_range(current_branch.test.into(), stmt_if.into(), tokens).is_none()
{
Some(Edit::range_replacement(
format!("({})", locator.slice(current_branch.test)),
current_branch.test.range(),
))
} else {
None
};
Ok(Fix::safe_edits(
deletion_edit,
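
A hedged sketch of the branch merge performed by `merge_branches` above, including why a test such as a ternary gets parenthesized before being joined with `or` (the values are invented):

```python
def categorize(size, override=None):
    # Flagged: both branches run the same body, so they can be combined.
    if size > 100:
        label = "large"
    elif override == "force-large":
        label = "large"
    else:
        label = "small"
    return label


def categorize_merged(size, override=None):
    # After the fix the tests are joined with `or`; a test that is a lambda,
    # named expression, or ternary would be wrapped in parentheses first so
    # operator precedence is preserved.
    if size > 100 or override == "force-large":
        label = "large"
    else:
        label = "small"
    return label
```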

View File

@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Arguments, CmpOp, Comprehension, Expr};
use ruff_python_semantic::analyze::typing;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
@ -90,20 +90,10 @@ fn key_in_dict(checker: &Checker, left: &Expr, right: &Expr, operator: CmpOp, pa
}
// Extract the exact range of the left and right expressions.
let left_range = parenthesized_range(
left.into(),
parent,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(left.range());
let right_range = parenthesized_range(
right.into(),
parent,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(right.range());
let left_range =
parenthesized_range(left.into(), parent, checker.tokens()).unwrap_or(left.range());
let right_range =
parenthesized_range(right.into(), parent, checker.tokens()).unwrap_or(right.range());
let mut diagnostic = checker.report_diagnostic(
InDictKeys {
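
A small sketch of the membership tests `key_in_dict` rewrites; the `left_range`/`right_range` lookups above exist so the fix also covers operands wrapped in extra parentheses:

```python
inventory = {"apples": 3, "pears": 0}

# Flagged: calling .keys() is redundant for a membership test.
if "apples" in inventory.keys():
    print("in stock")

# Preferred: dictionaries support `in` directly.
if "apples" in inventory:
    print("in stock")
```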

View File

@ -146,7 +146,7 @@ fn reverse_comparison(expr: &Expr, locator: &Locator, stylist: &Stylist) -> Resu
let left = (*comparison.left).clone();
// Copy the right side to the left side.
comparison.left = Box::new(comparison.comparisons[0].comparator.clone());
*comparison.left = comparison.comparisons[0].comparator.clone();
// Copy the left side to the right side.
comparison.comparisons[0].comparator = left;

View File

@ -11,7 +11,7 @@ use crate::registry::Rule;
use crate::rules::flake8_type_checking::helpers::quote_type_expression;
use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation};
use ruff_python_ast::PythonVersion;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
/// ## What it does
/// Checks if [PEP 613] explicit type aliases contain references to
@ -295,21 +295,20 @@ pub(crate) fn quoted_type_alias(
let range = annotation_expr.range();
let mut diagnostic = checker.report_diagnostic(QuotedTypeAlias, range);
let fix_string = annotation_expr.value.to_string();
let fix_string = if (fix_string.contains('\n') || fix_string.contains('\r'))
&& parenthesized_range(
// Check for parenthesis outside string ("""...""")
// Check for parentheses outside the string ("""...""")
annotation_expr.into(),
checker.semantic().current_statement().into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.is_none()
&& parenthesized_range(
// Check for parenthesis inside string """(...)"""
// Check for parentheses inside the string """(...)"""
expr.into(),
annotation_expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.is_none()
{

View File

@ -1,10 +1,9 @@
use std::ops::Range;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{Expr, ExprBinOp, ExprCall, Operator};
use ruff_python_semantic::SemanticModel;
use ruff_python_trivia::CommentRanges;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
@ -89,11 +88,7 @@ pub(crate) fn path_constructor_current_directory(
let mut diagnostic = checker.report_diagnostic(PathConstructorCurrentDirectory, arg.range());
match parent_and_next_path_fragment_range(
checker.semantic(),
checker.comment_ranges(),
checker.source(),
) {
match parent_and_next_path_fragment_range(checker.semantic(), checker.tokens()) {
Some((parent_range, next_fragment_range)) => {
let next_fragment_expr = checker.locator().slice(next_fragment_range);
let call_expr = checker.locator().slice(call.range());
@ -116,7 +111,7 @@ pub(crate) fn path_constructor_current_directory(
arguments,
Parentheses::Preserve,
checker.source(),
checker.comment_ranges(),
checker.tokens(),
)?;
Ok(Fix::applicable_edit(edit, applicability(call.range())))
}),
@ -125,8 +120,7 @@ pub(crate) fn path_constructor_current_directory(
fn parent_and_next_path_fragment_range(
semantic: &SemanticModel,
comment_ranges: &CommentRanges,
source: &str,
tokens: &ruff_python_ast::token::Tokens,
) -> Option<(TextRange, TextRange)> {
let parent = semantic.current_expression_parent()?;
@ -142,6 +136,6 @@ fn parent_and_next_path_fragment_range(
Some((
parent.range(),
parenthesized_range(right.into(), parent.into(), comment_ranges, source).unwrap_or(range),
parenthesized_range(right.into(), parent.into(), tokens).unwrap_or(range),
))
}
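
A rough sketch of what `path_constructor_current_directory` targets; the "next path fragment" handling above covers the joined form (file names are illustrative):

```python
from pathlib import Path

# Flagged: passing "." to Path is redundant.
project_root = Path(".")

# Preferred.
project_root = Path()

# When the result is immediately joined with another fragment, the fix can
# fold the fragment into the constructor instead, e.g. rewriting the first
# form below into the second.
config = Path(".") / "config.toml"
config = Path("config.toml")
```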

View File

@ -1,8 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_const_true;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Keyword, Stmt};
use ruff_python_trivia::CommentRanges;
use ruff_text_size::Ranged;
use crate::Locator;
@ -91,7 +90,7 @@ pub(crate) fn inplace_argument(checker: &Checker, call: &ast::ExprCall) {
call,
keyword,
statement,
checker.comment_ranges(),
checker.tokens(),
checker.locator(),
) {
diagnostic.set_fix(fix);
@ -111,21 +110,16 @@ fn convert_inplace_argument_to_assignment(
call: &ast::ExprCall,
keyword: &Keyword,
statement: &Stmt,
comment_ranges: &CommentRanges,
tokens: &Tokens,
locator: &Locator,
) -> Option<Fix> {
// Add the assignment.
let attr = call.func.as_attribute_expr()?;
let insert_assignment = Edit::insertion(
format!("{name} = ", name = locator.slice(attr.value.range())),
parenthesized_range(
call.into(),
statement.into(),
comment_ranges,
locator.contents(),
)
.unwrap_or(call.range())
.start(),
parenthesized_range(call.into(), statement.into(), tokens)
.unwrap_or(call.range())
.start(),
);
// Remove the `inplace` argument.
@ -134,7 +128,7 @@ fn convert_inplace_argument_to_assignment(
&call.arguments,
Parentheses::Preserve,
locator.contents(),
comment_ranges,
tokens,
)
.ok()?;
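
A rough before/after of the assignment conversion sketched in `convert_inplace_argument_to_assignment` above, assuming pandas is installed (the DataFrame is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"value": [3, 1, 2]})

# Flagged: `inplace=True` returns None and hides the mutation.
df.sort_values("value", inplace=True)

# Preferred: drop the keyword and assign the result back, which is what the
# fix produces by inserting `df = ` and removing the `inplace` argument.
df = df.sort_values("value")
```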

View File

@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{
self as ast, Expr, ExprEllipsisLiteral, ExprLambda, Identifier, Parameter,
ParameterWithDefault, Parameters, Stmt,
@ -265,29 +265,19 @@ fn replace_trailing_ellipsis_with_original_expr(
stmt: &Stmt,
checker: &Checker,
) -> String {
let original_expr_range = parenthesized_range(
(&lambda.body).into(),
lambda.into(),
checker.comment_ranges(),
checker.source(),
)
.unwrap_or(lambda.body.range());
let original_expr_range =
parenthesized_range((&lambda.body).into(), lambda.into(), checker.tokens())
.unwrap_or(lambda.body.range());
// This prevents the autofix from introducing a syntax error if the lambda's body is an
// expression spanning multiple lines. To avoid the syntax error, we preserve
// the parentheses around the body.
let original_expr_in_source = if parenthesized_range(
lambda.into(),
stmt.into(),
checker.comment_ranges(),
checker.source(),
)
.is_some()
{
format!("({})", checker.locator().slice(original_expr_range))
} else {
checker.locator().slice(original_expr_range).to_string()
};
let original_expr_in_source =
if parenthesized_range(lambda.into(), stmt.into(), checker.tokens()).is_some() {
format!("({})", checker.locator().slice(original_expr_range))
} else {
checker.locator().slice(original_expr_range).to_string()
};
let placeholder_ellipsis_start = generated.rfind("...").unwrap();
let placeholder_ellipsis_end = placeholder_ellipsis_start + "...".len();

View File

@ -1,4 +1,4 @@
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::{Tokens, parenthesized_range};
use rustc_hash::FxHashMap;
use ruff_macros::{ViolationMetadata, derive_message_formats};
@ -179,15 +179,14 @@ fn is_redundant_boolean_comparison(op: CmpOp, comparator: &Expr) -> Option<bool>
fn generate_redundant_comparison(
compare: &ast::ExprCompare,
comment_ranges: &ruff_python_trivia::CommentRanges,
tokens: &Tokens,
source: &str,
comparator: &Expr,
kind: bool,
needs_wrap: bool,
) -> String {
let comparator_range =
parenthesized_range(comparator.into(), compare.into(), comment_ranges, source)
.unwrap_or(comparator.range());
let comparator_range = parenthesized_range(comparator.into(), compare.into(), tokens)
.unwrap_or(comparator.range());
let comparator_str = &source[comparator_range];
@ -379,7 +378,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
.copied()
.collect::<Vec<_>>();
let comment_ranges = checker.comment_ranges();
let tokens = checker.tokens();
let source = checker.source();
let content = match (&*compare.ops, &*compare.comparators) {
@ -387,18 +386,13 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
if let Some(kind) = is_redundant_boolean_comparison(*op, &compare.left) {
let needs_wrap = compare.left.range().start() != compare.range().start();
generate_redundant_comparison(
compare,
comment_ranges,
source,
comparator,
kind,
needs_wrap,
compare, tokens, source, comparator, kind, needs_wrap,
)
} else if let Some(kind) = is_redundant_boolean_comparison(*op, comparator) {
let needs_wrap = comparator.range().end() != compare.range().end();
generate_redundant_comparison(
compare,
comment_ranges,
tokens,
source,
&compare.left,
kind,
@ -410,7 +404,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
&ops,
&compare.comparators,
compare.into(),
comment_ranges,
tokens,
source,
)
}
@ -420,7 +414,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
&ops,
&compare.comparators,
compare.into(),
comment_ranges,
tokens,
source,
),
};

View File

@ -107,7 +107,7 @@ pub(crate) fn not_tests(checker: &Checker, unary_op: &ast::ExprUnaryOp) {
&[CmpOp::NotIn],
comparators,
unary_op.into(),
checker.comment_ranges(),
checker.tokens(),
checker.source(),
),
unary_op.range(),
@ -127,7 +127,7 @@ pub(crate) fn not_tests(checker: &Checker, unary_op: &ast::ExprUnaryOp) {
&[CmpOp::IsNot],
comparators,
unary_op.into(),
checker.comment_ranges(),
checker.tokens(),
checker.source(),
),
unary_op.range(),
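
The two `CmpOp::NotIn` / `CmpOp::IsNot` branches above correspond to the following rewrites (names invented):

```python
name = "ruff"
tools = ["ruff", "mypy"]

# Flagged forms: negating the whole comparison.
if not name in tools:
    print("missing")
if not name is None:
    print("present")

# Replacements produced by the fix.
if name not in tools:
    print("missing")
if name is not None:
    print("present")
```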

View File

@ -4,7 +4,9 @@ use rustc_hash::FxHashSet;
use std::sync::LazyLock;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::Parameter;
use ruff_python_ast::docstrings::{clean_space, leading_space};
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::identifier::Identifier;
use ruff_python_semantic::analyze::visibility::is_staticmethod;
use ruff_python_trivia::textwrap::dedent;
@ -1184,6 +1186,9 @@ impl AlwaysFixableViolation for MissingSectionNameColon {
/// This rule is enabled when using the `google` convention, and disabled when
/// using the `pep257` and `numpy` conventions.
///
/// Parameters annotated with `typing.Unpack` are exempt from this rule.
/// This follows the Python typing specification for unpacking keyword arguments.
///
/// ## Example
/// ```python
/// def calculate_speed(distance: float, time: float) -> float:
@ -1233,6 +1238,7 @@ impl AlwaysFixableViolation for MissingSectionNameColon {
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
/// - [Google Python Style Guide - Docstrings](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings)
/// - [Python - Unpack for keyword arguments](https://typing.python.org/en/latest/spec/callables.html#unpack-kwargs)
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "v0.0.73")]
pub(crate) struct UndocumentedParam {
@ -1808,7 +1814,9 @@ fn missing_args(checker: &Checker, docstring: &Docstring, docstrings_args: &FxHa
missing_arg_names.insert(starred_arg_name);
}
}
if let Some(arg) = function.parameters.kwarg.as_ref() {
if let Some(arg) = function.parameters.kwarg.as_ref()
&& !has_unpack_annotation(checker, arg)
{
let arg_name = arg.name.as_str();
let starred_arg_name = format!("**{arg_name}");
if !arg_name.starts_with('_')
@ -1834,6 +1842,15 @@ fn missing_args(checker: &Checker, docstring: &Docstring, docstrings_args: &FxHa
}
}
/// Returns `true` if the parameter is annotated with `typing.Unpack`
fn has_unpack_annotation(checker: &Checker, parameter: &Parameter) -> bool {
parameter.annotation.as_ref().is_some_and(|annotation| {
checker
.semantic()
.match_typing_expr(map_subscript(annotation), "Unpack")
})
}
// See: `GOOGLE_ARGS_REGEX` in `pydocstyle/checker.py`.
static GOOGLE_ARGS_REGEX: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"^\s*(\*?\*?\w+)\s*(\(.*?\))?\s*:(\r\n|\n)?\s*.+").unwrap());
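
A small sketch of the exemption described in the doc comment above; `User` and `send_message` are illustrative names that mirror the fixture, and `Unpack` is available from `typing` on Python 3.11+:

```python
from typing import TypedDict, Unpack  # Unpack: Python 3.11+


class User(TypedDict):
    name: str
    email: str


def send_message(query: str, **kwargs: Unpack[User]) -> None:
    """Send a message.

    Args:
        query: The message to send.
    """
    # `**kwargs` is annotated with `typing.Unpack`, so D417 no longer requires
    # it to be documented; a plain parameter like `query` still must be.
```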

View File

@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@ -83,3 +83,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@ -28,6 +28,7 @@ mod tests {
use crate::settings::types::PreviewMode;
use crate::settings::{LinterSettings, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::test::{test_contents, test_path, test_snippet};
use crate::{Locator, assert_diagnostics, assert_diagnostics_diff, directives};
@ -955,6 +956,8 @@ mod tests {
&locator,
&indexer,
);
let suppressions =
Suppressions::from_tokens(&settings, locator.contents(), parsed.tokens());
let mut messages = check_path(
Path::new("<filename>"),
None,
@ -968,6 +971,7 @@ mod tests {
source_type,
&parsed,
target_version,
&suppressions,
);
messages.sort_by(Diagnostic::ruff_start_ordering);
let actual = messages

View File

@ -3,7 +3,7 @@ use std::collections::hash_map::Entry;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::{ComparableExpr, HashableExpr};
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr};
use ruff_text_size::Ranged;
@ -193,16 +193,14 @@ pub(crate) fn repeated_keys(checker: &Checker, dict: &ast::ExprDict) {
parenthesized_range(
dict.value(i - 1).into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or_else(|| dict.value(i - 1).range())
.end(),
parenthesized_range(
dict.value(i).into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or_else(|| dict.value(i).range())
.end(),
@ -224,16 +222,14 @@ pub(crate) fn repeated_keys(checker: &Checker, dict: &ast::ExprDict) {
parenthesized_range(
dict.value(i - 1).into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or_else(|| dict.value(i - 1).range())
.end(),
parenthesized_range(
dict.value(i).into(),
dict.into(),
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
)
.unwrap_or_else(|| dict.value(i).range())
.end(),

View File

@ -2,7 +2,7 @@ use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::contains_effect;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Stmt};
use ruff_python_semantic::Binding;
@ -172,14 +172,10 @@ fn remove_unused_variable(binding: &Binding, checker: &Checker) -> Option<Fix> {
{
// If the expression is complex (`x = foo()`), remove the assignment,
// but preserve the right-hand side.
let start = parenthesized_range(
target.into(),
statement.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(target.range())
.start();
let start =
parenthesized_range(target.into(), statement.into(), checker.tokens())
.unwrap_or(target.range())
.start();
let end = match_token_after(checker.tokens(), target.end(), |token| {
token == TokenKind::Equal
})?
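
The comment above ("If the expression is complex (`x = foo()`), remove the assignment, but preserve the right-hand side") corresponds to this kind of rewrite (names invented):

```python
def load():
    print("loading")
    return 42


def handler():
    # Flagged: `result` is never used, but load() has side effects.
    result = load()

    # The fix keeps the call and only removes the binding:
    load()
```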

View File

@ -16,10 +16,10 @@ mod tests {
use crate::registry::Rule;
use crate::rules::{flake8_tidy_imports, pylint};
use crate::assert_diagnostics;
use crate::settings::LinterSettings;
use crate::settings::types::PreviewMode;
use crate::test::test_path;
use crate::{assert_diagnostics, assert_diagnostics_diff};
#[test_case(Rule::SingledispatchMethod, Path::new("singledispatch_method.py"))]
#[test_case(
@ -253,6 +253,32 @@ mod tests {
Ok(())
}
#[test_case(
Rule::UselessExceptionStatement,
Path::new("useless_exception_statement.py")
)]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
rule_code.noqa_code(),
path.to_string_lossy()
);
assert_diagnostics_diff!(
snapshot,
Path::new("pylint").join(path).as_path(),
&LinterSettings {
preview: PreviewMode::Disabled,
..LinterSettings::for_rule(rule_code)
},
&LinterSettings {
preview: PreviewMode::Enabled,
..LinterSettings::for_rule(rule_code)
}
);
Ok(())
}
#[test]
fn continue_in_finally() -> Result<()> {
let diagnostics = test_path(

View File

@ -2,7 +2,7 @@ use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{
BoolOp, CmpOp, Expr, ExprBoolOp, ExprCompare,
parenthesize::{parentheses_iterator, parenthesized_range},
token::{parentheses_iterator, parenthesized_range},
};
use ruff_text_size::{Ranged, TextRange};
@ -62,7 +62,7 @@ pub(crate) fn boolean_chained_comparison(checker: &Checker, expr_bool_op: &ExprB
}
let locator = checker.locator();
let comment_ranges = checker.comment_ranges();
let tokens = checker.tokens();
// retrieve all compare expressions from boolean expression
let compare_expressions = expr_bool_op
@ -89,40 +89,22 @@ pub(crate) fn boolean_chained_comparison(checker: &Checker, expr_bool_op: &ExprB
continue;
}
let left_paren_count = parentheses_iterator(
left_compare.into(),
Some(expr_bool_op.into()),
comment_ranges,
locator.contents(),
)
.count();
let left_paren_count =
parentheses_iterator(left_compare.into(), Some(expr_bool_op.into()), tokens).count();
let right_paren_count = parentheses_iterator(
right_compare.into(),
Some(expr_bool_op.into()),
comment_ranges,
locator.contents(),
)
.count();
let right_paren_count =
parentheses_iterator(right_compare.into(), Some(expr_bool_op.into()), tokens).count();
// Create the edit that removes the comparison operator
// In `a<(b) and ((b))<c`, we need to handle the
// parentheses when specifying the fix range.
let left_compare_right_range = parenthesized_range(
left_compare_right.into(),
left_compare.into(),
comment_ranges,
locator.contents(),
)
.unwrap_or(left_compare_right.range());
let right_compare_left_range = parenthesized_range(
right_compare_left.into(),
right_compare.into(),
comment_ranges,
locator.contents(),
)
.unwrap_or(right_compare_left.range());
let left_compare_right_range =
parenthesized_range(left_compare_right.into(), left_compare.into(), tokens)
.unwrap_or(left_compare_right.range());
let right_compare_left_range =
parenthesized_range(right_compare_left.into(), right_compare.into(), tokens)
.unwrap_or(right_compare_left.range());
let edit = Edit::range_replacement(
locator.slice(left_compare_right_range).to_string(),
TextRange::new(
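
A minimal sketch of the comparisons `boolean_chained_comparison` merges; the parenthesis counting above handles forms like `a<(b) and ((b))<c` mentioned in the comment:

```python
a, b, c = 1, 2, 3

# Flagged: the shared middle operand allows chaining.
if a < b and b < c:
    print("ordered")

# Preferred after the fix, which also accounts for redundant parentheses
# around the shared operand.
if a < b < c:
    print("ordered")
```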

View File

@ -99,7 +99,7 @@ pub(crate) fn duplicate_bases(checker: &Checker, name: &str, arguments: Option<&
arguments,
Parentheses::Remove,
checker.locator().contents(),
checker.comment_ranges(),
checker.tokens(),
)
.map(|edit| {
Fix::applicable_edit(

View File

@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, CmpOp, Stmt};
use ruff_text_size::Ranged;
@ -166,13 +166,8 @@ pub(crate) fn if_stmt_min_max(checker: &Checker, stmt_if: &ast::StmtIf) {
let replacement = format!(
"{} = {min_max}({}, {})",
checker.locator().slice(
parenthesized_range(
body_target.into(),
body.into(),
checker.comment_ranges(),
checker.locator().contents()
)
.unwrap_or(body_target.range())
parenthesized_range(body_target.into(), body.into(), checker.tokens())
.unwrap_or(body_target.range())
),
checker.locator().slice(arg1),
checker.locator().slice(arg2),
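
A rough illustration of the replacement built from the `"{} = {min_max}({}, {})"` template above (the function and names are invented):

```python
def clamp_up(current, incoming):
    # Flagged: an if statement that only keeps the larger value.
    if current < incoming:
        current = incoming

    # Equivalent single statement produced by the fix.
    current = max(current, incoming)
    return current
```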

View File

@ -174,12 +174,8 @@ pub(crate) fn missing_maxsplit_arg(checker: &Checker, value: &Expr, slice: &Expr
SliceBoundary::Last => "rsplit",
};
let maxsplit_argument_edit = fix::edits::add_argument(
"maxsplit=1",
arguments,
checker.comment_ranges(),
checker.locator().contents(),
);
let maxsplit_argument_edit =
fix::edits::add_argument("maxsplit=1", arguments, checker.tokens());
// Only change `actual_split_type` if it doesn't match `suggested_split_type`
let split_type_edit: Option<Edit> = if actual_split_type == suggested_split_type {
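
Based on the edit above (adding `maxsplit=1` and switching to `rsplit` for the last-element case), the rule appears to target splits where only one end of the result is used; a hedged sketch:

```python
path = "a/b/c/d"

# Flagged: the full split is computed, then all but one element is discarded.
first = path.split("/")[0]
last = path.split("/")[-1]

# Preferred: limit the work with maxsplit, using rsplit for the last element
# (the SliceBoundary::Last case above).
first = path.split("/", maxsplit=1)[0]
last = path.rsplit("/", maxsplit=1)[-1]
```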

View File

@ -2,7 +2,7 @@ use ast::Expr;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{ExprBinOp, ExprRef, Operator};
use ruff_text_size::{Ranged, TextRange};
@ -150,12 +150,10 @@ fn augmented_assignment(
let right_operand_ref = ExprRef::from(right_operand);
let parent = original_expr.into();
let comment_ranges = checker.comment_ranges();
let source = checker.source();
let tokens = checker.tokens();
let right_operand_range =
parenthesized_range(right_operand_ref, parent, comment_ranges, source)
.unwrap_or(right_operand.range());
parenthesized_range(right_operand_ref, parent, tokens).unwrap_or(right_operand.range());
let right_operand_expr = locator.slice(right_operand_range);
let target_expr = locator.slice(target);
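
A small sketch of the assignments this `augmented_assignment` fix rewrites; the `parenthesized_range` lookup above keeps any parentheses that wrapped the right operand:

```python
total = 10
step = 3

# Flagged: the target is repeated on the right-hand side.
total = total + step

# Preferred augmented form produced by the fix.
total += step
```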

View File

@ -75,12 +75,7 @@ pub(crate) fn subprocess_run_without_check(checker: &Checker, call: &ast::ExprCa
let mut diagnostic =
checker.report_diagnostic(SubprocessRunWithoutCheck, call.func.range());
diagnostic.set_fix(Fix::applicable_edit(
add_argument(
"check=False",
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
),
add_argument("check=False", &call.arguments, checker.tokens()),
// If the function call contains `**kwargs`, mark the fix as unsafe.
if call
.arguments
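
The `add_argument("check=False", ...)` call above corresponds to this rewrite; as the comment notes, the fix is marked unsafe when the call already spreads `**kwargs`:

```python
import subprocess

# Flagged: without `check`, a non-zero exit status is silently ignored.
subprocess.run(["echo", "hello"])

# The autofix makes the current behavior explicit...
subprocess.run(["echo", "hello"], check=False)

# ...or the author can opt into raising CalledProcessError on failure.
subprocess.run(["echo", "hello"], check=True)
```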

View File

@ -1,8 +1,7 @@
use std::fmt::{Display, Formatter};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_ast::{self as ast, Expr, name::QualifiedName};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::typing;
use ruff_text_size::{Ranged, TextRange};
@ -193,8 +192,7 @@ fn generate_keyword_fix(checker: &Checker, call: &ast::ExprCall) -> Fix {
}))
),
&call.arguments,
checker.comment_ranges(),
checker.locator().contents(),
checker.tokens(),
))
}

View File

@ -1,10 +1,11 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::{SemanticModel, analyze};
use ruff_python_stdlib::builtins;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::preview::is_custom_exception_checking_enabled;
use crate::{Edit, Fix, FixAvailability, Violation};
use ruff_python_ast::PythonVersion;
@ -20,6 +21,9 @@ use ruff_python_ast::PythonVersion;
/// This rule only detects built-in exceptions, like `ValueError`, and does
/// not catch user-defined exceptions.
///
/// In [preview], this rule will also detect user-defined exceptions, but only
/// the ones defined in the file being checked.
///
/// ## Example
/// ```python
/// ValueError("...")
@ -32,7 +36,8 @@ use ruff_python_ast::PythonVersion;
///
/// ## Fix safety
/// This rule's fix is marked as unsafe, as converting a useless exception
/// statement to a `raise` statement will change the program's behavior.
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "0.5.0")]
pub(crate) struct UselessExceptionStatement;
@ -56,7 +61,10 @@ pub(crate) fn useless_exception_statement(checker: &Checker, expr: &ast::StmtExp
return;
};
if is_builtin_exception(func, checker.semantic(), checker.target_version()) {
if is_builtin_exception(func, checker.semantic(), checker.target_version())
|| (is_custom_exception_checking_enabled(checker.settings())
&& is_custom_exception(func, checker.semantic(), checker.target_version()))
{
let mut diagnostic = checker.report_diagnostic(UselessExceptionStatement, expr.range());
diagnostic.set_fix(Fix::unsafe_edit(Edit::insertion(
"raise ".to_string(),
@ -78,3 +86,34 @@ fn is_builtin_exception(
if builtins::is_exception(name, target_version.minor))
})
}
/// Returns `true` if the given expression is a custom exception.
fn is_custom_exception(
expr: &Expr,
semantic: &SemanticModel,
target_version: PythonVersion,
) -> bool {
let Some(qualified_name) = semantic.resolve_qualified_name(expr) else {
return false;
};
let Some(symbol) = qualified_name.segments().last() else {
return false;
};
let Some(binding_id) = semantic.lookup_symbol(symbol) else {
return false;
};
let binding = semantic.binding(binding_id);
let Some(source) = binding.source else {
return false;
};
let statement = semantic.statement(source);
if let ast::Stmt::ClassDef(class_def) = statement {
return analyze::class::any_qualified_base_class(class_def, semantic, &|qualified_name| {
if let ["" | "builtins", name] = qualified_name.segments() {
return builtins::is_exception(name, target_version.minor);
}
false
});
}
false
}
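
A sketch of what `is_custom_exception` accepts under preview, mirroring the `MyError` names used in the updated snapshot below: an exception class defined in the same file, whose bases resolve to a builtin exception:

```python
# Defined in the same module, so the preview check can resolve the name back
# to a class statement and inspect its base classes.
class MyError(Exception):
    pass


def func():
    # Flagged in preview: the exception is created but never raised.
    MyError("This is a custom error")

    # The (unsafe) fix inserts the missing keyword.
    raise MyError("This is a custom error")
```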

View File

@ -2,250 +2,294 @@
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:7:5
|
5 | # Test case 1: Useless exception statement
6 | def func():
7 | AssertionError("This is an assertion error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
4 |
5 | # Test case 1: Useless exception statement
6 | def func():
- AssertionError("This is an assertion error") # PLW0133
7 + raise AssertionError("This is an assertion error") # PLW0133
8 |
9 |
10 | # Test case 2: Useless exception statement in try-except block
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:13:9
--> useless_exception_statement.py:26:5
|
11 | def func():
12 | try:
13 | Exception("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
14 | except Exception as err:
15 | pass
|
help: Add `raise` keyword
10 | # Test case 2: Useless exception statement in try-except block
11 | def func():
12 | try:
- Exception("This is an exception") # PLW0133
13 + raise Exception("This is an exception") # PLW0133
14 | except Exception as err:
15 | pass
16 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:21:9
|
19 | def func():
20 | if True:
21 | RuntimeError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
18 | # Test case 3: Useless exception statement in if statement
19 | def func():
20 | if True:
- RuntimeError("This is an exception") # PLW0133
21 + raise RuntimeError("This is an exception") # PLW0133
22 |
23 |
24 | # Test case 4: Useless exception statement in class
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:28:13
|
26 | class Class:
27 | def __init__(self):
28 | TypeError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
24 | # Test case 1: Useless exception statement
25 | def func():
26 | class Class:
27 | def __init__(self):
26 | AssertionError("This is an assertion error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
27 | MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
|
help: Add `raise` keyword
23 |
24 | # Test case 1: Useless exception statement
25 | def func():
- AssertionError("This is an assertion error") # PLW0133
26 + raise AssertionError("This is an assertion error") # PLW0133
27 | MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
29 | MyValueError("This is a custom value error") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:35:9
|
33 | def func():
34 | try:
35 | Exception("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
36 | MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
32 | # Test case 2: Useless exception statement in try-except block
33 | def func():
34 | try:
- Exception("This is an exception") # PLW0133
35 + raise Exception("This is an exception") # PLW0133
36 | MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
38 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:46:9
|
44 | def func():
45 | if True:
46 | RuntimeError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
47 | MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
43 | # Test case 3: Useless exception statement in if statement
44 | def func():
45 | if True:
- RuntimeError("This is an exception") # PLW0133
46 + raise RuntimeError("This is an exception") # PLW0133
47 | MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
49 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:56:13
|
54 | class Class:
55 | def __init__(self):
56 | TypeError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57 | MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
53 | def func():
54 | class Class:
55 | def __init__(self):
- TypeError("This is an exception") # PLW0133
28 + raise TypeError("This is an exception") # PLW0133
29 |
30 |
31 | # Test case 5: Useless exception statement in function
56 + raise TypeError("This is an exception") # PLW0133
57 | MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
59 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:34:9
--> useless_exception_statement.py:65:9
|
32 | def func():
33 | def inner():
34 | IndexError("This is an exception") # PLW0133
63 | def func():
64 | def inner():
65 | IndexError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
35 |
36 | inner()
66 | MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
31 | # Test case 5: Useless exception statement in function
32 | def func():
33 | def inner():
62 | # Test case 5: Useless exception statement in function
63 | def func():
64 | def inner():
- IndexError("This is an exception") # PLW0133
34 + raise IndexError("This is an exception") # PLW0133
35 |
36 | inner()
37 |
65 + raise IndexError("This is an exception") # PLW0133
66 | MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
68 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:42:9
--> useless_exception_statement.py:76:9
|
40 | def func():
41 | while True:
42 | KeyError("This is an exception") # PLW0133
74 | def func():
75 | while True:
76 | KeyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
77 | MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
39 | # Test case 6: Useless exception statement in while loop
40 | def func():
41 | while True:
73 | # Test case 6: Useless exception statement in while loop
74 | def func():
75 | while True:
- KeyError("This is an exception") # PLW0133
42 + raise KeyError("This is an exception") # PLW0133
43 |
44 |
45 | # Test case 7: Useless exception statement in abstract class
76 + raise KeyError("This is an exception") # PLW0133
77 | MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
79 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:50:13
--> useless_exception_statement.py:87:13
|
48 | @abstractmethod
49 | def method(self):
50 | NotImplementedError("This is an exception") # PLW0133
85 | @abstractmethod
86 | def method(self):
87 | NotImplementedError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
88 | MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
47 | class Class(ABC):
48 | @abstractmethod
49 | def method(self):
84 | class Class(ABC):
85 | @abstractmethod
86 | def method(self):
- NotImplementedError("This is an exception") # PLW0133
50 + raise NotImplementedError("This is an exception") # PLW0133
51 |
52 |
53 | # Test case 8: Useless exception statement inside context manager
87 + raise NotImplementedError("This is an exception") # PLW0133
88 | MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
90 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:56:9
--> useless_exception_statement.py:96:9
|
54 | def func():
55 | with suppress(AttributeError):
56 | AttributeError("This is an exception") # PLW0133
94 | def func():
95 | with suppress(Exception):
96 | AttributeError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
97 | MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
53 | # Test case 8: Useless exception statement inside context manager
54 | def func():
55 | with suppress(AttributeError):
93 | # Test case 8: Useless exception statement inside context manager
94 | def func():
95 | with suppress(Exception):
- AttributeError("This is an exception") # PLW0133
56 + raise AttributeError("This is an exception") # PLW0133
57 |
58 |
59 | # Test case 9: Useless exception statement in parentheses
96 + raise AttributeError("This is an exception") # PLW0133
97 | MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
99 | MyValueError("This is an exception") # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:61:5
|
59 | # Test case 9: Useless exception statement in parentheses
60 | def func():
61 | (RuntimeError("This is an exception")) # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
58 |
59 | # Test case 9: Useless exception statement in parentheses
60 | def func():
- (RuntimeError("This is an exception")) # PLW0133
61 + raise (RuntimeError("This is an exception")) # PLW0133
62 |
63 |
64 | # Test case 10: Useless exception statement in continuation
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:66:12
|
64 | # Test case 10: Useless exception statement in continuation
65 | def func():
66 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
63 |
64 | # Test case 10: Useless exception statement in continuation
65 | def func():
- x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
66 + x = 1; raise (RuntimeError("This is an exception")); y = 2 # PLW0133
67 |
68 |
69 | # Test case 11: Useless warning statement
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:71:5
|
69 | # Test case 11: Useless warning statement
70 | def func():
71 | UserWarning("This is an assertion error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
68 |
69 | # Test case 11: Useless warning statement
70 | def func():
- UserWarning("This is an assertion error") # PLW0133
71 + raise UserWarning("This is an assertion error") # PLW0133
72 |
73 |
74 | # Non-violation test cases: PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:126:1
--> useless_exception_statement.py:104:5
|
124 | import builtins
125 |
126 | builtins.TypeError("still an exception even though it's an Attribute")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
127 |
128 | PythonFinalizationError("Added in Python 3.13")
102 | # Test case 9: Useless exception statement in parentheses
103 | def func():
104 | (RuntimeError("This is an exception")) # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
105 | (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
|
help: Add `raise` keyword
101 |
102 | # Test case 9: Useless exception statement in parentheses
103 | def func():
- (RuntimeError("This is an exception")) # PLW0133
104 + raise (RuntimeError("This is an exception")) # PLW0133
105 | (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
107 | (MyValueError("This is an exception")) # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:112:12
|
110 | # Test case 10: Useless exception statement in continuation
111 | def func():
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
|
help: Add `raise` keyword
109 |
110 | # Test case 10: Useless exception statement in continuation
111 | def func():
- x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
112 + x = 1; raise (RuntimeError("This is an exception")); y = 2 # PLW0133
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:120:5
|
118 | # Test case 11: Useless warning statement
119 | def func():
120 | UserWarning("This is a user warning") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
121 | MyUserWarning("This is a custom user warning") # PLW0133
|
help: Add `raise` keyword
117 |
118 | # Test case 11: Useless warning statement
119 | def func():
- UserWarning("This is a user warning") # PLW0133
120 + raise UserWarning("This is a user warning") # PLW0133
121 | MyUserWarning("This is a custom user warning") # PLW0133
122 |
123 |
124 | import builtins
125 |
- builtins.TypeError("still an exception even though it's an Attribute")
126 + raise builtins.TypeError("still an exception even though it's an Attribute")
127 |
128 | PythonFinalizationError("Added in Python 3.13")
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:128:1
--> useless_exception_statement.py:127:1
|
126 | builtins.TypeError("still an exception even though it's an Attribute")
127 |
128 | PythonFinalizationError("Added in Python 3.13")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
125 | import builtins
126 |
127 | builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
128 |
129 | PythonFinalizationError("Added in Python 3.13") # PLW0133
|
help: Add `raise` keyword
125 |
126 | builtins.TypeError("still an exception even though it's an Attribute")
127 |
- PythonFinalizationError("Added in Python 3.13")
128 + raise PythonFinalizationError("Added in Python 3.13")
124 | # Test case 12: Useless exception statement at module level
125 | import builtins
126 |
- builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
127 + raise builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
128 |
129 | PythonFinalizationError("Added in Python 3.13") # PLW0133
130 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:129:1
|
127 | builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
128 |
129 | PythonFinalizationError("Added in Python 3.13") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
130 |
131 | MyError("This is an exception") # PLW0133
|
help: Add `raise` keyword
126 |
127 | builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
128 |
- PythonFinalizationError("Added in Python 3.13") # PLW0133
129 + raise PythonFinalizationError("Added in Python 3.13") # PLW0133
130 |
131 | MyError("This is an exception") # PLW0133
132 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:137:1
|
135 | MyValueError("This is an exception") # PLW0133
136 |
137 | UserWarning("This is a user warning") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
138 |
139 | MyUserWarning("This is a custom user warning") # PLW0133
|
help: Add `raise` keyword
134 |
135 | MyValueError("This is an exception") # PLW0133
136 |
- UserWarning("This is a user warning") # PLW0133
137 + raise UserWarning("This is a user warning") # PLW0133
138 |
139 | MyUserWarning("This is a custom user warning") # PLW0133
140 |
note: This is an unsafe fix and may change runtime behavior


@@ -0,0 +1,751 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 0
Added: 35
--- Added ---
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:27:5
|
25 | def func():
26 | AssertionError("This is an assertion error") # PLW0133
27 | MyError("This is a custom error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
28 | MySubError("This is a custom error") # PLW0133
29 | MyValueError("This is a custom value error") # PLW0133
|
help: Add `raise` keyword
24 | # Test case 1: Useless exception statement
25 | def func():
26 | AssertionError("This is an assertion error") # PLW0133
- MyError("This is a custom error") # PLW0133
27 + raise MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
29 | MyValueError("This is a custom value error") # PLW0133
30 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:28:5
|
26 | AssertionError("This is an assertion error") # PLW0133
27 | MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
29 | MyValueError("This is a custom value error") # PLW0133
|
help: Add `raise` keyword
25 | def func():
26 | AssertionError("This is an assertion error") # PLW0133
27 | MyError("This is a custom error") # PLW0133
- MySubError("This is a custom error") # PLW0133
28 + raise MySubError("This is a custom error") # PLW0133
29 | MyValueError("This is a custom value error") # PLW0133
30 |
31 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:29:5
|
27 | MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
29 | MyValueError("This is a custom value error") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
26 | AssertionError("This is an assertion error") # PLW0133
27 | MyError("This is a custom error") # PLW0133
28 | MySubError("This is a custom error") # PLW0133
- MyValueError("This is a custom value error") # PLW0133
29 + raise MyValueError("This is a custom value error") # PLW0133
30 |
31 |
32 | # Test case 2: Useless exception statement in try-except block
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:36:9
|
34 | try:
35 | Exception("This is an exception") # PLW0133
36 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
37 | MySubError("This is an exception") # PLW0133
38 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
33 | def func():
34 | try:
35 | Exception("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
36 + raise MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
38 | MyValueError("This is an exception") # PLW0133
39 | except Exception as err:
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:37:9
|
35 | Exception("This is an exception") # PLW0133
36 | MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
38 | MyValueError("This is an exception") # PLW0133
39 | except Exception as err:
|
help: Add `raise` keyword
34 | try:
35 | Exception("This is an exception") # PLW0133
36 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
37 + raise MySubError("This is an exception") # PLW0133
38 | MyValueError("This is an exception") # PLW0133
39 | except Exception as err:
40 | pass
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:38:9
|
36 | MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
38 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
39 | except Exception as err:
40 | pass
|
help: Add `raise` keyword
35 | Exception("This is an exception") # PLW0133
36 | MyError("This is an exception") # PLW0133
37 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
38 + raise MyValueError("This is an exception") # PLW0133
39 | except Exception as err:
40 | pass
41 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:47:9
|
45 | if True:
46 | RuntimeError("This is an exception") # PLW0133
47 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
48 | MySubError("This is an exception") # PLW0133
49 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
44 | def func():
45 | if True:
46 | RuntimeError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
47 + raise MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
49 | MyValueError("This is an exception") # PLW0133
50 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:48:9
|
46 | RuntimeError("This is an exception") # PLW0133
47 | MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
49 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
45 | if True:
46 | RuntimeError("This is an exception") # PLW0133
47 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
48 + raise MySubError("This is an exception") # PLW0133
49 | MyValueError("This is an exception") # PLW0133
50 |
51 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:49:9
|
47 | MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
49 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
46 | RuntimeError("This is an exception") # PLW0133
47 | MyError("This is an exception") # PLW0133
48 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
49 + raise MyValueError("This is an exception") # PLW0133
50 |
51 |
52 | # Test case 4: Useless exception statement in class
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:57:13
|
55 | def __init__(self):
56 | TypeError("This is an exception") # PLW0133
57 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
58 | MySubError("This is an exception") # PLW0133
59 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
54 | class Class:
55 | def __init__(self):
56 | TypeError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
57 + raise MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
59 | MyValueError("This is an exception") # PLW0133
60 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:58:13
|
56 | TypeError("This is an exception") # PLW0133
57 | MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
59 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
55 | def __init__(self):
56 | TypeError("This is an exception") # PLW0133
57 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
58 + raise MySubError("This is an exception") # PLW0133
59 | MyValueError("This is an exception") # PLW0133
60 |
61 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:59:13
|
57 | MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
59 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
56 | TypeError("This is an exception") # PLW0133
57 | MyError("This is an exception") # PLW0133
58 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
59 + raise MyValueError("This is an exception") # PLW0133
60 |
61 |
62 | # Test case 5: Useless exception statement in function
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:66:9
|
64 | def inner():
65 | IndexError("This is an exception") # PLW0133
66 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
67 | MySubError("This is an exception") # PLW0133
68 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
63 | def func():
64 | def inner():
65 | IndexError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
66 + raise MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
68 | MyValueError("This is an exception") # PLW0133
69 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:67:9
|
65 | IndexError("This is an exception") # PLW0133
66 | MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
68 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
64 | def inner():
65 | IndexError("This is an exception") # PLW0133
66 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
67 + raise MySubError("This is an exception") # PLW0133
68 | MyValueError("This is an exception") # PLW0133
69 |
70 | inner()
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:68:9
|
66 | MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
68 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
69 |
70 | inner()
|
help: Add `raise` keyword
65 | IndexError("This is an exception") # PLW0133
66 | MyError("This is an exception") # PLW0133
67 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
68 + raise MyValueError("This is an exception") # PLW0133
69 |
70 | inner()
71 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:77:9
|
75 | while True:
76 | KeyError("This is an exception") # PLW0133
77 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
78 | MySubError("This is an exception") # PLW0133
79 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
74 | def func():
75 | while True:
76 | KeyError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
77 + raise MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
79 | MyValueError("This is an exception") # PLW0133
80 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:78:9
|
76 | KeyError("This is an exception") # PLW0133
77 | MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
79 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
75 | while True:
76 | KeyError("This is an exception") # PLW0133
77 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
78 + raise MySubError("This is an exception") # PLW0133
79 | MyValueError("This is an exception") # PLW0133
80 |
81 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:79:9
|
77 | MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
79 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
76 | KeyError("This is an exception") # PLW0133
77 | MyError("This is an exception") # PLW0133
78 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
79 + raise MyValueError("This is an exception") # PLW0133
80 |
81 |
82 | # Test case 7: Useless exception statement in abstract class
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:88:13
|
86 | def method(self):
87 | NotImplementedError("This is an exception") # PLW0133
88 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
89 | MySubError("This is an exception") # PLW0133
90 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
85 | @abstractmethod
86 | def method(self):
87 | NotImplementedError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
88 + raise MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
90 | MyValueError("This is an exception") # PLW0133
91 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:89:13
|
87 | NotImplementedError("This is an exception") # PLW0133
88 | MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
90 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
86 | def method(self):
87 | NotImplementedError("This is an exception") # PLW0133
88 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
89 + raise MySubError("This is an exception") # PLW0133
90 | MyValueError("This is an exception") # PLW0133
91 |
92 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:90:13
|
88 | MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
90 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
87 | NotImplementedError("This is an exception") # PLW0133
88 | MyError("This is an exception") # PLW0133
89 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
90 + raise MyValueError("This is an exception") # PLW0133
91 |
92 |
93 | # Test case 8: Useless exception statement inside context manager
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:97:9
|
95 | with suppress(Exception):
96 | AttributeError("This is an exception") # PLW0133
97 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
98 | MySubError("This is an exception") # PLW0133
99 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
94 | def func():
95 | with suppress(Exception):
96 | AttributeError("This is an exception") # PLW0133
- MyError("This is an exception") # PLW0133
97 + raise MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
99 | MyValueError("This is an exception") # PLW0133
100 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:98:9
|
96 | AttributeError("This is an exception") # PLW0133
97 | MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
99 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
95 | with suppress(Exception):
96 | AttributeError("This is an exception") # PLW0133
97 | MyError("This is an exception") # PLW0133
- MySubError("This is an exception") # PLW0133
98 + raise MySubError("This is an exception") # PLW0133
99 | MyValueError("This is an exception") # PLW0133
100 |
101 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:99:9
|
97 | MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
99 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
96 | AttributeError("This is an exception") # PLW0133
97 | MyError("This is an exception") # PLW0133
98 | MySubError("This is an exception") # PLW0133
- MyValueError("This is an exception") # PLW0133
99 + raise MyValueError("This is an exception") # PLW0133
100 |
101 |
102 | # Test case 9: Useless exception statement in parentheses
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:105:5
|
103 | def func():
104 | (RuntimeError("This is an exception")) # PLW0133
105 | (MyError("This is an exception")) # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
106 | (MySubError("This is an exception")) # PLW0133
107 | (MyValueError("This is an exception")) # PLW0133
|
help: Add `raise` keyword
102 | # Test case 9: Useless exception statement in parentheses
103 | def func():
104 | (RuntimeError("This is an exception")) # PLW0133
- (MyError("This is an exception")) # PLW0133
105 + raise (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
107 | (MyValueError("This is an exception")) # PLW0133
108 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:106:5
|
104 | (RuntimeError("This is an exception")) # PLW0133
105 | (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
107 | (MyValueError("This is an exception")) # PLW0133
|
help: Add `raise` keyword
103 | def func():
104 | (RuntimeError("This is an exception")) # PLW0133
105 | (MyError("This is an exception")) # PLW0133
- (MySubError("This is an exception")) # PLW0133
106 + raise (MySubError("This is an exception")) # PLW0133
107 | (MyValueError("This is an exception")) # PLW0133
108 |
109 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:107:5
|
105 | (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
107 | (MyValueError("This is an exception")) # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
104 | (RuntimeError("This is an exception")) # PLW0133
105 | (MyError("This is an exception")) # PLW0133
106 | (MySubError("This is an exception")) # PLW0133
- (MyValueError("This is an exception")) # PLW0133
107 + raise (MyValueError("This is an exception")) # PLW0133
108 |
109 |
110 | # Test case 10: Useless exception statement in continuation
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:113:12
|
111 | def func():
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
|
help: Add `raise` keyword
110 | # Test case 10: Useless exception statement in continuation
111 | def func():
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
- x = 1; (MyError("This is an exception")); y = 2 # PLW0133
113 + x = 1; raise (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
116 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:114:12
|
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
|
help: Add `raise` keyword
111 | def func():
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
- x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
114 + x = 1; raise (MySubError("This is an exception")); y = 2 # PLW0133
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
116 |
117 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:115:12
|
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
115 | x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
112 | x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
113 | x = 1; (MyError("This is an exception")); y = 2 # PLW0133
114 | x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
- x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
115 + x = 1; raise (MyValueError("This is an exception")); y = 2 # PLW0133
116 |
117 |
118 | # Test case 11: Useless warning statement
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:121:5
|
119 | def func():
120 | UserWarning("This is a user warning") # PLW0133
121 | MyUserWarning("This is a custom user warning") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
118 | # Test case 11: Useless warning statement
119 | def func():
120 | UserWarning("This is a user warning") # PLW0133
- MyUserWarning("This is a custom user warning") # PLW0133
121 + raise MyUserWarning("This is a custom user warning") # PLW0133
122 |
123 |
124 | # Test case 12: Useless exception statement at module level
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:131:1
|
129 | PythonFinalizationError("Added in Python 3.13") # PLW0133
130 |
131 | MyError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
132 |
133 | MySubError("This is an exception") # PLW0133
|
help: Add `raise` keyword
128 |
129 | PythonFinalizationError("Added in Python 3.13") # PLW0133
130 |
- MyError("This is an exception") # PLW0133
131 + raise MyError("This is an exception") # PLW0133
132 |
133 | MySubError("This is an exception") # PLW0133
134 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:133:1
|
131 | MyError("This is an exception") # PLW0133
132 |
133 | MySubError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
134 |
135 | MyValueError("This is an exception") # PLW0133
|
help: Add `raise` keyword
130 |
131 | MyError("This is an exception") # PLW0133
132 |
- MySubError("This is an exception") # PLW0133
133 + raise MySubError("This is an exception") # PLW0133
134 |
135 | MyValueError("This is an exception") # PLW0133
136 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:135:1
|
133 | MySubError("This is an exception") # PLW0133
134 |
135 | MyValueError("This is an exception") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
136 |
137 | UserWarning("This is a user warning") # PLW0133
|
help: Add `raise` keyword
132 |
133 | MySubError("This is an exception") # PLW0133
134 |
- MyValueError("This is an exception") # PLW0133
135 + raise MyValueError("This is an exception") # PLW0133
136 |
137 | UserWarning("This is a user warning") # PLW0133
138 |
note: This is an unsafe fix and may change runtime behavior
PLW0133 [*] Missing `raise` statement on exception
--> useless_exception_statement.py:139:1
|
137 | UserWarning("This is a user warning") # PLW0133
138 |
139 | MyUserWarning("This is a custom user warning") # PLW0133
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Add `raise` keyword
136 |
137 | UserWarning("This is a user warning") # PLW0133
138 |
- MyUserWarning("This is a custom user warning") # PLW0133
139 + raise MyUserWarning("This is a custom user warning") # PLW0133
140 |
141 |
142 | # Non-violation test cases: PLW0133
note: This is an unsafe fix and may change runtime behavior


@@ -204,7 +204,7 @@ pub(crate) fn non_pep695_generic_class(checker: &Checker, class_def: &StmtClassD
 arguments,
 Parentheses::Remove,
 checker.source(),
-checker.comment_ranges(),
+checker.tokens(),
 )?;
 Ok(Fix::unsafe_edits(
 Edit::insertion(type_params.to_string(), name.end()),


@@ -2,7 +2,7 @@ use itertools::Itertools;
 use ruff_macros::{ViolationMetadata, derive_message_formats};
 use ruff_python_ast::name::Name;
-use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::parenthesized_range;
 use ruff_python_ast::visitor::Visitor;
 use ruff_python_ast::{Expr, ExprCall, ExprName, Keyword, StmtAnnAssign, StmtAssign, StmtRef};
 use ruff_text_size::{Ranged, TextRange};
@@ -261,11 +261,11 @@ fn create_diagnostic(
type_alias_kind: TypeAliasKind,
) {
let source = checker.source();
let tokens = checker.tokens();
let comment_ranges = checker.comment_ranges();
let range_with_parentheses =
parenthesized_range(value.into(), stmt.into(), comment_ranges, source)
.unwrap_or(value.range());
parenthesized_range(value.into(), stmt.into(), tokens).unwrap_or(value.range());
let content = format!(
"type {name}{type_params} = {value}",


@@ -1,9 +1,8 @@
 use anyhow::Result;
 use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_ast::{self as ast, Keyword};
+use ruff_python_ast::{self as ast, Keyword, token::Tokens};
 use ruff_python_semantic::Modules;
-use ruff_python_trivia::CommentRanges;
 use ruff_text_size::Ranged;
 use crate::checkers::ast::Checker;
@@ -104,7 +103,7 @@ pub(crate) fn replace_stdout_stderr(checker: &Checker, call: &ast::ExprCall) {
 stderr,
 call,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 });
 }
@@ -117,7 +116,7 @@ fn generate_fix(
 stderr: &Keyword,
 call: &ast::ExprCall,
 source: &str,
-comment_ranges: &CommentRanges,
+tokens: &Tokens,
 ) -> Result<Fix> {
 let (first, second) = if stdout.start() < stderr.start() {
 (stdout, stderr)
@@ -132,7 +131,7 @@ fn generate_fix(
 &call.arguments,
 Parentheses::Preserve,
 source,
-comment_ranges,
+tokens,
 )?],
 ))
 }


@@ -78,7 +78,7 @@ pub(crate) fn replace_universal_newlines(checker: &Checker, call: &ast::ExprCall
 &call.arguments,
 Parentheses::Preserve,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 .map(Fix::safe_edit)
 });


@@ -188,7 +188,7 @@ pub(crate) fn unnecessary_encode_utf8(checker: &Checker, call: &ast::ExprCall) {
 &call.arguments,
 Parentheses::Preserve,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 .map(Fix::safe_edit)
 });
@@ -206,7 +206,7 @@ pub(crate) fn unnecessary_encode_utf8(checker: &Checker, call: &ast::ExprCall) {
 &call.arguments,
 Parentheses::Preserve,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 .map(Fix::safe_edit)
 });
@@ -231,7 +231,7 @@ pub(crate) fn unnecessary_encode_utf8(checker: &Checker, call: &ast::ExprCall) {
 &call.arguments,
 Parentheses::Preserve,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 .map(Fix::safe_edit)
 });
@@ -249,7 +249,7 @@ pub(crate) fn unnecessary_encode_utf8(checker: &Checker, call: &ast::ExprCall) {
 &call.arguments,
 Parentheses::Preserve,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )
 .map(Fix::safe_edit)
 });


@@ -70,7 +70,7 @@ pub(crate) fn useless_class_metaclass_type(checker: &Checker, class_def: &StmtCl
 arguments,
 Parentheses::Remove,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )?;
 let range = edit.range();


@@ -73,7 +73,7 @@ pub(crate) fn useless_object_inheritance(checker: &Checker, class_def: &ast::Stm
 arguments,
 Parentheses::Remove,
 checker.locator().contents(),
-checker.comment_ranges(),
+checker.tokens(),
 )?;
 let range = edit.range();


@@ -1,5 +1,5 @@
 use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::parenthesized_range;
 use ruff_python_ast::{self as ast, Expr, Stmt};
 use ruff_text_size::Ranged;
@@ -139,13 +139,8 @@ pub(crate) fn yield_in_for_loop(checker: &Checker, stmt_for: &ast::StmtFor) {
 let mut diagnostic = checker.report_diagnostic(YieldInForLoop, stmt_for.range());
 let contents = checker.locator().slice(
-parenthesized_range(
-iter.as_ref().into(),
-stmt_for.into(),
-checker.comment_ranges(),
-checker.locator().contents(),
-)
-.unwrap_or(iter.range()),
+parenthesized_range(iter.as_ref().into(), stmt_for.into(), checker.tokens())
+.unwrap_or(iter.range()),
 );
 let contents = if iter.as_tuple_expr().is_some_and(|it| !it.parenthesized) {
 format!("yield from ({contents})")


@@ -902,56 +902,76 @@ help: Convert to f-string
132 | # Non-errors
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:160:1
--> UP032_0.py:135:1
|
158 | r'"\N{snowman} {}".format(a)'
159 |
160 | / "123456789 {}".format(
161 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | | )
| |_^
163 |
164 | """
133 | ###
134 |
135 | "\N{snowman} {}".format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
136 |
137 | "{".format(a)
|
help: Convert to f-string
157 |
158 | r'"\N{snowman} {}".format(a)'
159 |
132 | # Non-errors
133 | ###
134 |
- "\N{snowman} {}".format(a)
135 + f"\N{snowman} {a}"
136 |
137 | "{".format(a)
138 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:159:1
|
157 | r'"\N{snowman} {}".format(a)'
158 |
159 | / "123456789 {}".format(
160 | | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | | )
| |_^
162 |
163 | """
|
help: Convert to f-string
156 |
157 | r'"\N{snowman} {}".format(a)'
158 |
- "123456789 {}".format(
- 11111111111111111111111111111111111111111111111111111111111111111111111111,
- )
160 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
161 |
162 | """
163 | {}
159 + f"123456789 {11111111111111111111111111111111111111111111111111111111111111111111111111}"
160 |
161 | """
162 | {}
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:164:1
--> UP032_0.py:163:1
|
162 | )
163 |
164 | / """
161 | )
162 |
163 | / """
164 | | {}
165 | | {}
166 | | {}
167 | | {}
168 | | """.format(
169 | | 1,
170 | | 2,
171 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | | )
167 | | """.format(
168 | | 1,
169 | | 2,
170 | | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | | )
| |_^
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
|
help: Convert to f-string
161 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
162 | )
163 |
164 + f"""
165 + {1}
166 + {2}
167 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
168 | """
160 | 11111111111111111111111111111111111111111111111111111111111111111111111111,
161 | )
162 |
163 + f"""
164 + {1}
165 + {2}
166 + {111111111111111111111111111111111111111111111111111111111111111111111111111111111111111}
167 | """
- {}
- {}
- {}
@@ -960,392 +980,408 @@ help: Convert to f-string
- 2,
- 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
- )
169 |
170 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | """.format(
168 |
169 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
170 | """.format(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:174:84
--> UP032_0.py:173:84
|
172 | )
173 |
174 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
171 | )
172 |
173 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
| ____________________________________________________________________________________^
175 | | """.format(
176 | | 111111
177 | | )
174 | | """.format(
175 | | 111111
176 | | )
| |_^
178 |
179 | "{}".format(
177 |
178 | "{}".format(
|
help: Convert to f-string
171 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
172 | )
173 |
170 | 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111,
171 | )
172 |
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = """{}
- """.format(
- 111111
- )
174 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
175 + """
176 |
177 | "{}".format(
178 | [
173 + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = f"""{111111}
174 + """
175 |
176 | "{}".format(
177 | [
UP032 Use f-string instead of `format` call
--> UP032_0.py:202:1
--> UP032_0.py:201:1
|
200 | "{}".format(**c)
201 |
202 | / "{}".format(
203 | | 1 # comment
204 | | )
199 | "{}".format(**c)
200 |
201 | / "{}".format(
202 | | 1 # comment
203 | | )
| |_^
|
help: Convert to f-string
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:209:1
--> UP032_0.py:208:1
|
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
210 |
211 | # When fixing, trim the trailing empty string.
209 |
210 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
206 |
207 | # The fixed string will exceed the line length, but it's still smaller than the
208 | # existing line length, so it's fine.
205 |
206 | # The fixed string will exceed the line length, but it's still smaller than the
207 | # existing line length, so it's fine.
- "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
210 |
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
208 + f"<Customer: {self.internal_ids}, {self.external_ids}, {self.properties}, {self.tags}, {self.others}>"
209 |
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:212:18
--> UP032_0.py:211:18
|
211 | # When fixing, trim the trailing empty string.
212 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
210 | # When fixing, trim the trailing empty string.
211 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
213 | | "".format(new_dict, d))
212 | | "".format(new_dict, d))
| |_______________________________________^
214 |
215 | # When fixing, trim the trailing empty string.
213 |
214 | # When fixing, trim the trailing empty string.
|
help: Convert to f-string
209 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
210 |
211 | # When fixing, trim the trailing empty string.
208 | "<Customer: {}, {}, {}, {}, {}>".format(self.internal_ids, self.external_ids, self.properties, self.tags, self.others)
209 |
210 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d))
212 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
211 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
212 |
213 | # When fixing, trim the trailing empty string.
214 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:215:18
|
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
216 | | .format(new_dict, d))
| |_____________________________________^
217 |
218 | raise ValueError(
|
help: Convert to f-string
212 | "".format(new_dict, d))
213 |
214 | # When fixing, trim the trailing empty string.
215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:216:18
|
215 | # When fixing, trim the trailing empty string.
216 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
| __________________^
217 | | .format(new_dict, d))
| |_____________________________________^
218 |
219 | raise ValueError(
|
help: Convert to f-string
213 | "".format(new_dict, d))
214 |
215 | # When fixing, trim the trailing empty string.
- raise ValueError("Conflicting configuration dicts: {!r} {!r}"
- .format(new_dict, d))
216 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
217 + )
218 |
219 | raise ValueError(
220 | "Conflicting configuration dicts: {!r} {!r}"
215 + raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
216 + )
217 |
218 | raise ValueError(
219 | "Conflicting configuration dicts: {!r} {!r}"
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:220:5
--> UP032_0.py:219:5
|
219 | raise ValueError(
220 | / "Conflicting configuration dicts: {!r} {!r}"
221 | | "".format(new_dict, d)
218 | raise ValueError(
219 | / "Conflicting configuration dicts: {!r} {!r}"
220 | | "".format(new_dict, d)
| |__________________________^
222 | )
221 | )
|
help: Convert to f-string
217 | .format(new_dict, d))
218 |
219 | raise ValueError(
216 | .format(new_dict, d))
217 |
218 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
220 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
219 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
220 | )
221 |
222 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:224:5
|
223 | raise ValueError(
224 | / "Conflicting configuration dicts: {!r} {!r}"
225 | | "".format(new_dict, d)
| |__________________________^
226 |
227 | )
|
help: Convert to f-string
221 | )
222 |
223 | raise ValueError(
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:225:5
|
224 | raise ValueError(
225 | / "Conflicting configuration dicts: {!r} {!r}"
226 | | "".format(new_dict, d)
| |__________________________^
227 |
228 | )
|
help: Convert to f-string
222 | )
223 |
224 | raise ValueError(
- "Conflicting configuration dicts: {!r} {!r}"
- "".format(new_dict, d)
225 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
226 |
227 | )
228 |
224 + f"Conflicting configuration dicts: {new_dict!r} {d!r}"
225 |
226 | )
227 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:231:1
--> UP032_0.py:230:1
|
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | / (
232 | | "{}"
233 | | "{{}}"
234 | | ).format(a)
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | / (
231 | | "{}"
232 | | "{{}}"
233 | | ).format(a)
| |___________^
235 |
236 | ("{}" "{{}}").format(a)
234 |
235 | ("{}" "{{}}").format(a)
|
help: Convert to f-string
229 |
230 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
231 | (
232 + f"{a}"
233 | "{}"
228 |
229 | # The first string will be converted to an f-string and the curly braces in the second should be converted to be unescaped
230 | (
231 + f"{a}"
232 | "{}"
- "{{}}"
- ).format(a)
234 + )
235 |
236 | ("{}" "{{}}").format(a)
237 |
233 + )
234 |
235 | ("{}" "{{}}").format(a)
236 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:236:1
--> UP032_0.py:235:1
|
234 | ).format(a)
235 |
236 | ("{}" "{{}}").format(a)
233 | ).format(a)
234 |
235 | ("{}" "{{}}").format(a)
| ^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
233 | "{{}}"
234 | ).format(a)
235 |
232 | "{{}}"
233 | ).format(a)
234 |
- ("{}" "{{}}").format(a)
236 + (f"{a}" "{}")
235 + (f"{a}" "{}")
236 |
237 |
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:240:1
--> UP032_0.py:239:1
|
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | / (
241 | | "{}"
242 | | "{{{}}}"
243 | | ).format(a, b)
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | / (
240 | | "{}"
241 | | "{{{}}}"
242 | | ).format(a, b)
| |______________^
244 |
245 | ("{}" "{{{}}}").format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
|
help: Convert to f-string
238 |
239 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
240 | (
237 |
238 | # Both strings will be converted to an f-string and the curly braces in the second should left escaped
239 | (
- "{}"
- "{{{}}}"
- ).format(a, b)
241 + f"{a}"
242 + f"{{{b}}}"
243 + )
244 |
245 | ("{}" "{{{}}}").format(a, b)
246 |
240 + f"{a}"
241 + f"{{{b}}}"
242 + )
243 |
244 | ("{}" "{{{}}}").format(a, b)
245 |
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:245:1
--> UP032_0.py:244:1
|
243 | ).format(a, b)
244 |
245 | ("{}" "{{{}}}").format(a, b)
242 | ).format(a, b)
243 |
244 | ("{}" "{{{}}}").format(a, b)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 |
247 | # The dictionary should be parenthesized.
245 |
246 | # The dictionary should be parenthesized.
|
help: Convert to f-string
242 | "{{{}}}"
243 | ).format(a, b)
244 |
241 | "{{{}}}"
242 | ).format(a, b)
243 |
- ("{}" "{{{}}}").format(a, b)
245 + (f"{a}" f"{{{b}}}")
246 |
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
244 + (f"{a}" f"{{{b}}}")
245 |
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:248:1
--> UP032_0.py:247:1
|
247 | # The dictionary should be parenthesized.
248 | "{}".format({0: 1}[0])
246 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
| ^^^^^^^^^^^^^^^^^^^^^^
249 |
250 | # The dictionary should be parenthesized.
248 |
249 | # The dictionary should be parenthesized.
|
help: Convert to f-string
245 | ("{}" "{{{}}}").format(a, b)
246 |
247 | # The dictionary should be parenthesized.
244 | ("{}" "{{{}}}").format(a, b)
245 |
246 | # The dictionary should be parenthesized.
- "{}".format({0: 1}[0])
248 + f"{({0: 1}[0])}"
249 |
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
247 + f"{({0: 1}[0])}"
248 |
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:251:1
--> UP032_0.py:250:1
|
250 | # The dictionary should be parenthesized.
251 | "{}".format({0: 1}.bar)
249 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
| ^^^^^^^^^^^^^^^^^^^^^^^
252 |
253 | # The dictionary should be parenthesized.
251 |
252 | # The dictionary should be parenthesized.
|
help: Convert to f-string
248 | "{}".format({0: 1}[0])
249 |
250 | # The dictionary should be parenthesized.
247 | "{}".format({0: 1}[0])
248 |
249 | # The dictionary should be parenthesized.
- "{}".format({0: 1}.bar)
251 + f"{({0: 1}.bar)}"
252 |
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
250 + f"{({0: 1}.bar)}"
251 |
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:254:1
--> UP032_0.py:253:1
|
253 | # The dictionary should be parenthesized.
254 | "{}".format({0: 1}())
252 | # The dictionary should be parenthesized.
253 | "{}".format({0: 1}())
| ^^^^^^^^^^^^^^^^^^^^^
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
|
help: Convert to f-string
251 | "{}".format({0: 1}.bar)
252 |
253 | # The dictionary should be parenthesized.
250 | "{}".format({0: 1}.bar)
251 |
252 | # The dictionary should be parenthesized.
- "{}".format({0: 1}())
254 + f"{({0: 1}())}"
255 |
256 | # The string shouldn't be converted, since it would require repeating the function call.
257 | "{x} {x}".format(x=foo())
253 + f"{({0: 1}())}"
254 |
255 | # The string shouldn't be converted, since it would require repeating the function call.
256 | "{x} {x}".format(x=foo())
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:261:1
--> UP032_0.py:260:1
|
260 | # The string _should_ be converted, since the function call is repeated in the arguments.
261 | "{0} {1}".format(foo(), foo())
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
260 | "{0} {1}".format(foo(), foo())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
262 |
263 | # The call should be removed, but the string itself should remain.
261 |
262 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
258 | "{0} {0}".format(foo())
259 |
257 | "{0} {0}".format(foo())
258 |
259 | # The string _should_ be converted, since the function call is repeated in the arguments.
- "{0} {1}".format(foo(), foo())
260 + f"{foo()} {foo()}"
261 |
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:263:1
|
262 | # The call should be removed, but the string itself should remain.
263 | ''.format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
264 |
265 | # The call should be removed, but the string itself should remain.
|
help: Convert to f-string
260 | "{0} {1}".format(foo(), foo())
261 |
262 | # The call should be removed, but the string itself should remain.
- ''.format(self.project)
263 + ''
264 |
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:266:1
|
265 | # The call should be removed, but the string itself should remain.
266 | "".format(self.project)
| ^^^^^^^^^^^^^^^^^^^^^^^
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
|
help: Convert to f-string
263 | ''.format(self.project)
264 |
265 | # The call should be removed, but the string itself should remain.
- "".format(self.project)
266 + ""
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:270:5
|
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
270 | x: "'{} + {}'.format(x, y)"
| ^^^^^^^^^^^^^^^^^^^^^^
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
|
help: Convert to f-string
267 |
268 | # Not a valid type annotation but this test shouldn't result in a panic.
269 | # Refer: https://github.com/astral-sh/ruff/issues/11736
- x: "'{} + {}'.format(x, y)"
270 + x: "f'{x} + {y}'"
271 |
272 | # Regression https://github.com/astral-sh/ruff/issues/21000
273 | # Fix should parenthesize walrus
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:276:14
|
274 | if __name__ == "__main__":
275 |     number = 0
276 |     string = "{}".format(number := number + 1)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
277 |     print(string)
|
help: Convert to f-string
273 | # Fix should parenthesize walrus
274 | if __name__ == "__main__":
275 |     number = 0
-     string = "{}".format(number := number + 1)
276 +     string = f"{(number := number + 1)}"
277 |     print(string)
278 |
279 | # Unicode escape
UP032 [*] Use f-string instead of `format` call
--> UP032_0.py:280:1
|
279 | # Unicode escape
280 | "\N{angle}AOB = {angle}°".format(angle=180)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to f-string
277 | print(string)
278 |
279 | # Unicode escape
- "\N{angle}AOB = {angle}°".format(angle=180)
280 + f"\N{angle}AOB = {180}°"

View File

@@ -1,7 +1,7 @@
 use std::borrow::Cow;
 use ruff_python_ast::PythonVersion;
-use ruff_python_ast::{self as ast, Expr, name::Name, parenthesize::parenthesized_range};
+use ruff_python_ast::{self as ast, Expr, name::Name, token::parenthesized_range};
 use ruff_python_codegen::Generator;
 use ruff_python_semantic::{BindingId, ResolvedReference, SemanticModel};
 use ruff_text_size::{Ranged, TextRange};
@@ -330,12 +330,8 @@ pub(super) fn parenthesize_loop_iter_if_necessary<'a>(
     let locator = checker.locator();
     let iter = for_stmt.iter.as_ref();
-    let original_parenthesized_range = parenthesized_range(
-        iter.into(),
-        for_stmt.into(),
-        checker.comment_ranges(),
-        checker.source(),
-    );
+    let original_parenthesized_range =
+        parenthesized_range(iter.into(), for_stmt.into(), checker.tokens());
     if let Some(range) = original_parenthesized_range {
         return Cow::Borrowed(locator.slice(range));
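The same call-site migration repeats across the files below: `parenthesized_range` now comes from `ruff_python_ast::token` and takes the checker's `Tokens` instead of comment ranges plus the raw source text. As a minimal sketch of the new call shape (the helper function and its name are illustrative, not code from this diff; it assumes the token-based function keeps the `Option<TextRange>` return value used above):

```rust
// Illustrative sketch only: `iter_range_with_parens` is a hypothetical helper,
// not a function in this diff. It mirrors the token-based call shape shown above.
use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{Expr, StmtFor};
use ruff_text_size::{Ranged, TextRange};

/// Range of `for_stmt.iter`, widened to include any parentheses around it,
/// falling back to the bare expression range when it is unparenthesized.
fn iter_range_with_parens(for_stmt: &StmtFor, tokens: &Tokens) -> TextRange {
    let iter: &Expr = for_stmt.iter.as_ref();
    parenthesized_range(iter.into(), for_stmt.into(), tokens).unwrap_or(iter.range())
}
```

Call sites that previously had to thread both `checker.comment_ranges()` and `checker.source()` through now pass only `checker.tokens()`.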

View File

@@ -1,5 +1,5 @@
 use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::parenthesized_range;
 use ruff_python_ast::{
     Expr, ExprAttribute, ExprBinOp, ExprCall, ExprStringLiteral, ExprSubscript, ExprUnaryOp,
     Number, Operator, PythonVersion, UnaryOp,
@@ -112,8 +112,7 @@ pub(crate) fn fromisoformat_replace_z(checker: &Checker, call: &ExprCall) {
     let value_full_range = parenthesized_range(
         replace_time_zone.date.into(),
         replace_time_zone.parent.into(),
-        checker.comment_ranges(),
-        checker.source(),
+        checker.tokens(),
     )
     .unwrap_or(replace_time_zone.date.range());

View File

@@ -5,8 +5,7 @@ use ruff_python_ast as ast;
 use ruff_python_ast::Expr;
 use ruff_python_ast::comparable::ComparableExpr;
 use ruff_python_ast::helpers::contains_effect;
-use ruff_python_ast::parenthesize::parenthesized_range;
-use ruff_python_trivia::CommentRanges;
+use ruff_python_ast::token::{Tokens, parenthesized_range};
 use ruff_text_size::Ranged;
 use crate::Locator;
@@ -76,8 +75,8 @@ pub(crate) fn if_exp_instead_of_or_operator(checker: &Checker, if_expr: &ast::Ex
         Edit::range_replacement(
             format!(
                 "{} or {}",
-                parenthesize_test(test, if_expr, checker.comment_ranges(), checker.locator()),
-                parenthesize_test(orelse, if_expr, checker.comment_ranges(), checker.locator()),
+                parenthesize_test(test, if_expr, checker.tokens(), checker.locator()),
+                parenthesize_test(orelse, if_expr, checker.tokens(), checker.locator()),
             ),
             if_expr.range(),
         ),
@@ -99,15 +98,10 @@ pub(crate) fn if_exp_instead_of_or_operator(checker: &Checker, if_expr: &ast::Ex
 fn parenthesize_test<'a>(
     expr: &Expr,
     if_expr: &ast::ExprIf,
-    comment_ranges: &CommentRanges,
+    tokens: &Tokens,
     locator: &Locator<'a>,
 ) -> Cow<'a, str> {
-    if let Some(range) = parenthesized_range(
-        expr.into(),
-        if_expr.into(),
-        comment_ranges,
-        locator.contents(),
-    ) {
+    if let Some(range) = parenthesized_range(expr.into(), if_expr.into(), tokens) {
         Cow::Borrowed(locator.slice(range))
     } else if matches!(expr, Expr::If(_) | Expr::Lambda(_) | Expr::Named(_)) {
         Cow::Owned(format!("({})", locator.slice(expr.range())))

View File

@@ -1,6 +1,6 @@
 use ruff_diagnostics::Applicability;
 use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::parenthesized_range;
 use ruff_python_ast::{Comprehension, Expr, StmtFor};
 use ruff_python_semantic::analyze::typing;
 use ruff_python_semantic::analyze::typing::is_io_base_expr;
@@ -104,8 +104,7 @@ fn readlines_in_iter(checker: &Checker, iter_expr: &Expr) {
     let deletion_range = if let Some(parenthesized_range) = parenthesized_range(
         expr_attr.value.as_ref().into(),
         expr_attr.into(),
-        checker.comment_ranges(),
-        checker.source(),
+        checker.tokens(),
     ) {
         expr_call.range().add_start(parenthesized_range.len())
     } else {

View File

@@ -1,7 +1,7 @@
 use anyhow::Result;
 use ruff_diagnostics::Applicability;
 use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::parenthesized_range;
 use ruff_python_ast::{self as ast, Expr, Number};
 use ruff_text_size::Ranged;
@@ -152,13 +152,8 @@ fn generate_fix(checker: &Checker, call: &ast::ExprCall, base: Base, arg: &Expr)
         checker.semantic(),
     )?;
-    let arg_range = parenthesized_range(
-        arg.into(),
-        call.into(),
-        checker.comment_ranges(),
-        checker.source(),
-    )
-    .unwrap_or(arg.range());
+    let arg_range =
+        parenthesized_range(arg.into(), call.into(), checker.tokens()).unwrap_or(arg.range());
     let arg_str = checker.locator().slice(arg_range);
     Ok(Fix::applicable_edits(

View File

@@ -95,7 +95,7 @@ pub(crate) fn single_item_membership_test(
             &[membership_test.replacement_op()],
             std::slice::from_ref(item),
             expr.into(),
-            checker.comment_ranges(),
+            checker.tokens(),
             checker.source(),
         ),
         expr.range(),

View File

@@ -305,6 +305,25 @@ mod tests {
         Ok(())
     }
+    #[test]
+    fn range_suppressions() -> Result<()> {
+        assert_diagnostics_diff!(
+            Path::new("ruff/suppressions.py"),
+            &settings::LinterSettings::for_rules(vec![
+                Rule::UnusedVariable,
+                Rule::AmbiguousVariableName,
+                Rule::UnusedNOQA,
+            ]),
+            &settings::LinterSettings::for_rules(vec![
+                Rule::UnusedVariable,
+                Rule::AmbiguousVariableName,
+                Rule::UnusedNOQA,
+            ])
+            .with_preview_mode(),
+        );
+        Ok(())
+    }
     #[test]
     fn ruf100_0() -> Result<()> {
         let diagnostics = test_path(
View File

@@ -163,7 +163,7 @@ fn convert_type_vars(
         class_arguments,
         Parentheses::Remove,
         source,
-        checker.comment_ranges(),
+        checker.tokens(),
     )?;
     let replace_type_params =
         Edit::range_replacement(new_type_params.to_string(), type_params.range);

Some files were not shown because too many files have changed in this diff.