Compare commits


250 Commits
0.14.7 ... main

Author SHA1 Message Date
Alex Waygood 0bd7a94c27
[ty] Improve `unsupported-base` and `invalid-super-argument` diagnostics to avoid extremely long lines when encountering verbose types (#22022) 2025-12-17 14:43:11 +00:00
Bhuminjay Soni 421f88bb32
[`refurb`] Extend support for `Path.open` (`FURB101`, `FURB103`) (#21080)
## Summary

This PR fixes https://github.com/astral-sh/ruff/issues/18409
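
As a hedged sketch (not taken from the PR's fixtures; file name and contents are made up), the extended rules should now flag `Path.open` receivers in patterns like this:

```py
from pathlib import Path

contents = "hello"

# FURB103 (write-whole-file): previously only bare `open(...)` calls were
# matched; a `Path(...).open(...)` receiver should now be flagged too,
# suggesting `Path("out.txt").write_text(contents)` instead.
with Path("out.txt").open("w") as f:
    f.write(contents)
```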

## Test Plan

I have added tests in FURB103.

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 09:18:13 -05:00
David Peter b0eb39d112
[ty] ecosystem-analyzer: Flush full stderr output in case of panics (#22023)
Pulls in
2e1816eac0
2025-12-17 15:02:59 +01:00
charliecloudberry 260f463edd
Update setup.md (#22024) 2025-12-17 14:56:25 +01:00
Bhuminjay Soni 52849a5e68
[syntax-errors] Annotated name cannot be global (#20868)
## Summary

This PR implements a new semantic syntax error: an annotated name can't be `global`.
Example:
```
x: int = 1

def f():
    global x
    x: str = "foo"  # SyntaxError: annotated name 'x' can't be global
```

## Test Plan

I have written tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-17 08:39:47 -05:00
David Peter 2a61fe2353
[ty] Handle field specifier functions that accept `**kwargs` and recognize metaclass-based transformers as instances of `DataclassInstance` (#22018)
## Summary

This contains two bug fixes:

- [Handle field specifier functions that accept
`**kwargs`](ad6918d505)
- [Recognize metaclass-based transformers as instances of
`DataclassInstance`](1a8e29b23c)

closes https://github.com/astral-sh/ty/issues/1987

## Test Plan

* New Markdown tests
* Made sure that the example in 1987 checks without errors
2025-12-17 14:22:16 +01:00
Alex Waygood 764ad8b29b
[ty] Improve disambiguation of types in many cases (#22019) 2025-12-17 11:41:07 +00:00
mahiro 85af715880
Fix playground Share button showing "Copied!" before clipboard copy completes (#21942)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-17 12:16:01 +01:00
Phong Do b0bc990cbf
[`pyupgrade`] Fix parsing named Unicode escape sequences (`UP032`) (#21901)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/19771

Fixes incorrect parsing of Unicode named escape sequences like `Hey
\N{snowman}` in `FormatString`, which were being incorrectly split into
separate literal and field parts instead of being treated as a single
literal unit.

## Problem

The `FormatString` parser incorrectly handles Unicode named escape
sequences:
- **Current**: `Hey \N{snowman}` is parsed into 2 parts `Literal("Hey
\N")` & `Field("snowman")`
- **Expected**: `Hey \N{snowman}` should be parsed into 1 part
`Literal("Hey \N{snowman}")`

This affects f-string conversion rules when fixing `UP032` that rely on
proper format string parsing.

## Solution

I modified `parse_literal` to detect and handle Unicode named escape
sequences before parsing single characters:
- Introduced a flag to track when a backslash is "available" to escape
something.
- When the flag is `true`, and the text starts with `N{`, try to parse
the complete Unicode escape sequence as one unit, and set the flag to
`false` after parsing successfully.
- Set the flag to `false` when the backslash is already consumed.

## Manual Verification

`"\N{angle}AOB = {angle}°".format(angle=180)` 

**Result**

```bash
 def foo():
-    "\N{angle}AOB = {angle}°".format(angle=180)
+    f"\N{angle}AOB = {180}°"

Would fix 1 error.
```

`"\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\N{snowman} {snowman}".format(snowman=1)
+    f"\N{snowman} {1}"

Would fix 1 error.
```

`"\\N{snowman} {snowman}".format(snowman=1)`

**Result**
```bash
 def foo():
-    "\\N{snowman} {snowman}".format(snowman=1)
+    f"\\N{1} {1}"

Would fix 1 error.
```

## Test Plan

- Added test cases (happy case, invalid case, edge case) for
`FormatString` when parsing Unicode escape sequence.
- Updated snapshots.
2025-12-16 16:33:39 -05:00
Aria Desires ad3de4e488
[ty] Improve rendering of default values for function args (#22010)
## Summary

We're actually quite good at computing this, but the main issue is just
that we compute it at the type level and so wrap it in `Literal[...]`.
So just special-case the rendering of these to omit `Literal[...]` and
fall back to `...` in cases where the thing we'll show is probably
useless (i.e. `x: str = str`).

Fixes https://github.com/astral-sh/ty/issues/1882
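
A hypothetical illustration (the rendered strings below are approximations, not verbatim ty output):

```py
def greet(name: str = "world") -> str:
    return f"hello {name}"

# Hover/signature help previously rendered the default at the type level,
# roughly `name: str = Literal["world"]`; it is now shown as
# `name: str = "world"`, falling back to `...` when nothing useful can be shown.
```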
2025-12-16 13:39:19 -05:00
Douglas Creager 2214a46139
[ty] Don't use implicit superclass annotation when converting a class constructor into a `Callable` (#22011)
This fixes a bug @zsol found running ty against pyx. His original repro
is:

```py
class Base:
    def __init__(self) -> None: pass

class A(Base):
    pass

def foo[T](callable: Callable[..., T]) -> T:
    return callable()

a: A = foo(A)
```

The call at the bottom would fail, since we would infer `() -> Base` as
the callable type of `A`, when it should be `() -> A`.

The issue was how we add implicit annotations to `self` parameters.
Typically, we turn it into `self: Self`. But in cases where we don't
need to introduce a full typevar, we turn it into `self: [the class
itself]` — in this case, `self: Base`. Then, when turning the class
constructor into a callable, we would see this non-`Self` annotation and
think that it was important and load-bearing.

The fix is that we skip all implicit annotations when determining
whether the `self` annotation should take precedence in the callable's
return type.
2025-12-16 13:37:11 -05:00
Douglas Creager c02bd11b93
[ty] Infer typevar specializations for `Callable` types (#21551)
This is a first stab at solving
https://github.com/astral-sh/ty/issues/500, at least in part, with the
old solver. We add a new `TypeRelation` that lets us opt into using
constraint sets to describe when a typevar is assignable to some
type, and then use that to calculate a constraint set that describes
when two callable types are assignable. If the callable types contain
typevars, that constraint set will describe their valid specializations.
We can then walk through all of the ways the constraint set can be
satisfied, and record a type mapping in the old solver for each one.
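
A hypothetical example of the kind of call this targets (shaped after astral-sh/ty#500, not taken from the PR):

```py
from typing import Callable

def identity[T](x: T) -> T:
    return x

def apply_to_int(f: Callable[[int], int]) -> int:
    return f(1)

# Checking `identity` against `Callable[[int], int]` now yields a constraint
# set whose satisfying assignments include `T = int`, so the call can be
# checked with that specialization recorded.
apply_to_int(identity)
```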

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-16 09:16:49 -08:00
Carl Meyer eeaaa8e9fe
[ty] propagate classmethod-ness through decorators returning Callables (#21958)
Fixes https://github.com/astral-sh/ty/issues/1787

## Summary

Allow method decorators returning Callables to presumptively propagate
"classmethod-ness" in the same way that they already presumptively
propagate "function-like-ness". We can't actually be sure that this is
the case, based on the decorator's annotations, but (along with other
type checkers) we heuristically assume it to be the case for decorators
applied via decorator syntax.

## Test Plan

Added mdtest.
2025-12-16 09:16:40 -08:00
Rob Hand 7f7485d608
[ty] Fixed benchmark markdown auto-linenumbers (#22008) 2025-12-16 16:38:11 +01:00
Aria Desires d755f3b522
[ty] Improve syntax-highlighting of constants (#22006)
## Summary

BLAH1 getting different highlighting/classification from BLAH is very
distracting and undesirable.
2025-12-16 09:23:40 -05:00
Aria Desires 83168a1bb1
[ty] highlight special type syntax in hovers as xml (#22005)
## Summary

These types look better rendered as XML than python

## Test Plan

<img width="532" height="299" alt="Screenshot 2025-12-16 at 8 40 56 AM"
src="https://github.com/user-attachments/assets/42d9abfa-3f4a-44ba-b6b4-6700ab06832d"
/>
2025-12-16 14:20:35 +00:00
Andrew Gallant 0f373603eb [ty] Suppress keyword argument completions unless we're in the "arguments"
Otherwise, given a case like this:

```
(lambda foo: (<CURSOR> + 1))(2)
```

we'll offer _argument_ completions for `foo` at the cursor position.
While we do actually want to offer completions for `foo` in this
context, it is currently difficult to do so. But we definitely don't
want to offer completions for `foo` as an argument to a function here.
Which is what we were doing.

We also add an end-to-end test here to verify that the actual label we
offer in completion suggestions includes the `=` suffix.

Closes https://github.com/astral-sh/ruff/pull/21970
2025-12-16 08:00:04 -05:00
Andrew Gallant cc23af944f [ty] Tweak how we show qualified auto-import completions
Specifically, we make two changes:

1. We only show `import ...` when there is an actual import edit.
2. We now show the text we will insert. This means that when we
insert a qualified symbol, the qualification will show in the
completions suggested.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant 0589700ca1 [ty] Prefer unqualified imports over qualified imports
It seems like this is perhaps a better default:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790

For me personally, I think I'd prefer the qualified
variant. But I can easily see this changing based on
the specific scenario. I think the thing that pushes
me toward prioritizing the unqualified variant is that
the user could have typed the qualified variant themselves,
but they didn't. So we should perhaps prioritize the
form they typed, which is unqualified.
2025-12-16 08:00:04 -05:00
Andrew Gallant 43d983ecae [ty] Add tests for how qualified auto-imports are shown
Specifically, we want to test that something like `import typing`
should only be shown when we are actually going to insert an import.
*And* that when we insert a qualified name, then we should show it
as such in the completion suggestions.
2025-12-16 08:00:04 -05:00
Andrew Gallant 5c69bb564c [ty] Add a test capturing sub-optimal auto-import heuristic
Specifically, here, we'd probably like to add `TypedDict` to
the existing `from typing import ...` statement instead of
using the fully qualified `typing.TypedDict` form.

To test this, we add another snapshot mode for including imports
that a completion will insert when selected.

Ref https://github.com/astral-sh/ty/issues/1274#issuecomment-3352233790
2025-12-16 08:00:04 -05:00
Andrew Gallant 89fed85a8d [ty] Switch completion tests to use "insertion" text in snapshots
I think this gives us better tests overall, since it
tells us what is actually going to be inserted. Plus,
we'll want this in the next commit.
2025-12-16 08:00:04 -05:00
Zanie Blue 051f6896ac
[ty] Remove extra headings and split examples in the `overrides` configuration docs (#21994)
Having these as markdown headings ends up being weird in the reference
documentation, e.g., before:

<img width="1071" height="779" alt="Screenshot 2025-12-15 at 8 45 25 PM"
src="https://github.com/user-attachments/assets/2118d4f1-f557-46f3-a4b6-56c406cf9aca"
/>
2025-12-16 06:57:06 -06:00
David Peter 5b1d3ac9b9
[ty] Document `TY_CONFIG_FILE` (#22001) 2025-12-16 13:15:24 +01:00
Micha Reiser b2b0ad38ea
[ty] Cache `KnownClass::to_class_literal` (#22000) 2025-12-16 13:04:12 +01:00
Micha Reiser 01c0a3e960
[ty] Fix benchmark assertion (#22003) 2025-12-16 12:24:54 +01:00
Zanie Blue 5c942119f8
Add uv and ty to the Ruff README (#21996)
Matching the style of the uv README
2025-12-16 10:37:15 +01:00
David Peter 2acf1cc0fd
[ty] Infer precise types for `isinstance(…)` calls involving typevars (#21999)
## Summary

Infer `Literal[True]` for `isinstance(x, C)` calls when `x: T` and `T`
has a bound `B` that satisfies the `isinstance` check against `C`.
Similar for constrained typevars.
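
A minimal illustration of the bounded-typevar case (adapted from the description above, not from the PR's tests):

```py
class Base: ...

def check[T: Base](x: T) -> None:
    # The upper bound `Base` satisfies the `isinstance` check against `Base`,
    # so this is inferred as `Literal[True]`.
    reveal_type(isinstance(x, Base))
```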

closes https://github.com/astral-sh/ty/issues/1895

## Test Plan

* New Markdown tests
* Verified that the example in the linked ticket checks without errors
2025-12-16 10:34:30 +01:00
Micha Reiser 4fdbe26445
[ty] Use `FxHashMap` in `Signature::has_relation_to` (#21997) 2025-12-16 10:10:45 +01:00
Charlie Marsh 682d29c256
[ty] Avoid enforcing standalone expression for tests in f-strings (#21967)
## Summary

Based on what we do elsewhere and my understanding of "standalone"
here...

Closes https://github.com/astral-sh/ty/issues/1865.
2025-12-15 22:31:04 -05:00
Zanie Blue 8e13765b57
[ty] Use `title` for configuration code fences in ty reference documentation (#21992)
Part of https://github.com/astral-sh/ty/pull/1904
2025-12-15 16:36:08 -05:00
Douglas Creager 7d3b7c5754
[ty] Consistent ordering of constraint set specializations, take 2 (#21983)
In https://github.com/astral-sh/ruff/pull/21957, we tried to use
`union_or_intersection_elements_ordering` to provide a stable ordering
of the union and intersection elements that are created when determining
which type a typevar should specialize to. @AlexWaygood [pointed
out](https://github.com/astral-sh/ruff/pull/21551#discussion_r2616543762)
that this won't work, since that provides a consistent ordering within a
single process run, but does not provide a stable ordering across runs.

This is an attempt to produce a proper stable ordering for constraint
sets, so that we end up with consistent diagnostic and test output.

We do this by maintaining a new `source_order` field on each interior
BDD node, which records when that node's constraint was added to the
set. Several of the BDD operators (`and`, `or`, etc) now have
`_with_offset` variants, which update each `source_order` in the rhs to
be larger than any of the `source_order`s in the lhs. This keeps that
field consistent with (a) the order in which each constraint was added to
the set, and (b) the order of the parameters you provide to `and`, `or`,
etc. Then we sort by that new field before constructing the
union/intersection types when creating a specialization.
2025-12-15 14:24:08 -05:00
RasmusNygren d6a5bbd91c
[ty] Remove invalid statement-keyword completions in for-statements (#21979)
In `for x in <CURSOR>` statements it's only valid to provide expressions
that eventually evaluate to an iterable. While it's extremely difficult
to know whether something can evaluate to an iterable in the general case,
there are some suggestions we know can never lead to an iterable. Most
keywords fall into that category, so we remove them here.

## Summary
This suppresses statement-keywords from auto-complete suggestions in
`for x in <CURSOR>` statements where we know they can never be valid, as
whatever is typed has to (at some point) evaluate to an iterable.

It handles the core issue from
https://github.com/astral-sh/ty/issues/1774, but there are a lot of related
cases that probably have to be handled piecewise.

## Test Plan
New tests and verifying in the playground.
2025-12-15 12:56:34 -05:00
Micha Reiser 1df6544ad8
[ty] Avoid caching trivial is-redundant-with calls (#21989) 2025-12-15 18:45:03 +01:00
Dylan 4e1cf5747a
Fluent formatting of method chains (#21369)
This PR implements a modification (in preview) to fluent formatting for
method chains: We break _at_ the first call instead of _after_.

For example, we have the following diff between `main` and this PR (with
`line-length=8` so I don't have to stretch out the text):

```diff
 x = (
-    df.merge()
+    df
+    .merge()
     .groupby()
     .agg()
     .filter()
 )
```

## Explanation of current implementation

Recall that we traverse the AST to apply formatting. A method chain,
while read left-to-right, is stored in the AST "in reverse". So if we
start with something like

```python
a.b.c.d().e.f()
```

then the first syntax node we meet is essentially `.f()`. So we have to
peek ahead. And we actually _already_ do this in our current fluent
formatting logic: we peek ahead to count how many calls we have in the
chain to see whether we should be using fluent formatting or not.

In this implementation, we actually _record_ this number inside the enum
for `CallChainLayout`. That is, we make the variant `Fluent` hold an
`AttributeState`. This state can either be:

- The number of call-like attributes preceding the current attribute
- The state `FirstCallOrSubscript` which means we are at the first
call-like attribute in the chain (reading from left to right)
- The state `BeforeFirstCallOrSubscript` which means we are in the
"first group" of attributes, preceding that first call.

In our example, here's what it looks like at each attribute:

```
a.b.c.d().e.f @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d().e @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d @ Fluent(FirstCallOrSubscript)
a.b.c @ Fluent(BeforeFirstCallOrSubscript)
a.b @ Fluent(BeforeFirstCallOrSubscript)
```

Now, as we descend down from the parent expression, we pass along this
little piece of state and modify it as we go to track where we are. This
state doesn't do anything except when we are in `FirstCallOrSubscript`,
in which case we add a soft line break.

Closes #8598

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-15 09:29:50 -06:00
Douglas Creager cbfecfaf41
[ty] Avoid stack overflow when calculating inferable typevars (#21971)
When we calculate which typevars are inferable in a generic context, the
result might include more than the typevars bound by the generic
context. The canonical example is a generic method of a generic class:

```py
class C[A]:
    def method[T](self, t: T): ...
```

Here, the inferable typevar set of `method` contains `Self` and `T`, as
you'd expect. (Those are the typevars bound by the method.) But it also
contains `A@C`, since the implicit `Self` typevar is defined as `Self:
C[A]`. That means when we call `method`, we need to mark `A@C` as
inferable, so that we can determine the correct mapping for `A@C` at the
call site.

Fixes https://github.com/astral-sh/ty/issues/1874
2025-12-15 10:25:33 -05:00
Aria Desires 8f530a7ab0
[ty] Add "qualify ..." code fix for undefined references (#21968)
## Summary

If `import warnings` exists in the file, we will suggest an edit of
`deprecated -> warnings.deprecated` as "qualify warnings.deprecated"
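
A hedged sketch of the scenario (the decorator usage is made up; `warnings.deprecated` exists on Python 3.13+):

```py
import warnings

@deprecated("use g() instead")  # error: `deprecated` is not defined
def f() -> None: ...

# Offered quick fix: "qualify warnings.deprecated", which rewrites the
# reference to `warnings.deprecated`.
```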

## Test Plan

Should test more cases...
2025-12-15 10:14:36 -05:00
Micha Reiser 5372bb3440
[ty] Use jemalloc on linux (#21975)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 16:04:34 +01:00
Micha Reiser d08e414179
Update MSRV to 1.90 (#21987) 2025-12-15 14:29:11 +01:00
Alex Waygood 0b918ae4d5
[ty] Improve check enforcing that an overloaded function must have an implementation (#21978)
## Summary

- Treat `if TYPE_CHECKING` blocks the same as stub files (the feature
requested in https://github.com/astral-sh/ty/issues/1216)
- We currently only allow `@abstractmethod`-decorated methods to omit
the implementation if they're methods in classes that have _exactly_
`ABCMeta` as their metaclass. That seems wrong -- `@abstractmethod` has
the same semantics if a class has a subclass of `ABCMeta` as its
metaclass. This PR fixes that too. (I'm actually not _totally_ sure we
should care what the class's metaclass is at all -- see discussion in
https://github.com/astral-sh/ty/issues/1877#issue-3725937441... but the
change this PR is making seems less wrong than what we have currently,
anyway.)

Fixes https://github.com/astral-sh/ty/issues/1216
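
A minimal illustration of the first bullet (assuming the usual `typing.overload` machinery):

```py
from typing import TYPE_CHECKING, overload

if TYPE_CHECKING:
    # Inside an `if TYPE_CHECKING` block, overloads may now omit the
    # implementation, just as they can in a stub file.
    @overload
    def f(x: int) -> int: ...
    @overload
    def f(x: str) -> str: ...
```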

## Test Plan

Mdtests and snapshots
2025-12-15 08:56:35 +00:00
renovate[bot] 9838f81baf
Update actions/checkout digest to 8e8c483 (#21982)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-15 06:52:52 +00:00
Dhruv Manilawala ba47349c2e
[ty] Use `ParamSpec` without the attr for inferable check (#21934)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1820

## Test Plan

Add new mdtests.

Ecosystem changes removes all false positives.
2025-12-15 11:04:28 +05:30
Bhuminjay Soni 04f9949711
[ty] Emit diagnostic when a type variable with a default is followed by one without a default (#21787)
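
A hypothetical example of code that now triggers the diagnostic (PEP 696 defaults; names are illustrative):

```py
# Diagnosed: `U` has no default but follows `T`, which does.
class Container[T = int, U]: ...
```
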
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-14 19:35:37 +00:00
Leandro Braga 8bc753b842
[ty] Fix callout syntax in configuration mkdocs (#1875) (#21961) 2025-12-14 10:21:54 +01:00
Peter Law c7eea1f2e3
Update debug_assert which pointed at missing method (#21969)
## Summary

I assume that the class has been renamed or split since this assertion
was created.

## Test Plan

Compiled locally, nothing more. Relying on CI given the triviality of
this change.
2025-12-13 17:56:59 -05:00
Charlie Marsh be8eb92946
[ty] Add support for `__qualname__` and other implicit class attributes (#21966)
## Summary

Closes https://github.com/astral-sh/ty/issues/1873
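
A minimal illustration of the kind of implicit attribute now understood (illustrative snippet, not from the PR):

```py
class Outer:
    class Inner: ...

reveal_type(Outer.Inner.__qualname__)  # str
reveal_type(Outer.Inner.__module__)  # str
```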
2025-12-13 17:10:25 -05:00
Simon Lamon a544c59186
[ty] Emit a diagnostic when frozen dataclass inherits a non-frozen dataclass and the other way around (#21962)
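
A minimal illustration (class names are made up):

```py
from dataclasses import dataclass

@dataclass
class Mutable:
    x: int

# Diagnosed: a frozen dataclass inheriting from a non-frozen one
# (the reverse direction is diagnosed as well).
@dataclass(frozen=True)
class Frozen(Mutable):
    y: int
```
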
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-13 20:59:26 +00:00
Alex Waygood bb464ed924
[ty] Use unqualified names for displays of `TypeAliasType`s and unbound `ParamSpec`s/`TypeVar`s (#21960) 2025-12-13 20:23:16 +00:00
Alex Waygood f57917becd
fix typo in `fuzz/README.md` (#21963) 2025-12-13 18:21:46 +00:00
David Peter 82a7598aa8
[ty] Remove now-unnecessary Divergent check (#21935)
## Summary

This check is not necessary thanks to
https://github.com/astral-sh/ruff/pull/21906.
2025-12-13 16:32:09 +01:00
Micha Reiser e2ec2bc306
Use datatest for formatter tests (#21933) 2025-12-13 08:02:22 +00:00
Douglas Creager b413a6dec4
[ty] Allow gradual lower/upper bounds in a constraint set (#21957)
We now allow the lower and upper bounds of a constraint to be gradual.
Before, we would take the top/bottom materializations of the bounds.
This required us to pass in whether the constraint was intended for a
subtyping check or an assignability check, since that would control
whether we took the "restrictive" or "permissive" materializations,
respectively.

Unfortunately, doing so means that we lost information about whether the
original query involves a non-fully-static type. This would cause us to
create specializations like `T = object` for the constraint `T ≤ Any`,
when it would be nicer to carry through the gradual type and produce `T
= Any`.

We're not currently using constraint sets for subtyping checks, nor are
we going to in the very near future. So for now, we're going to assume
that constraint sets are always used for assignability checks, and allow
the lower/upper bounds to not be fully static. Once we get to the point
where we need to use constraint sets for subtyping checks, we will
consider how best to record this information in constraints.
2025-12-12 22:18:30 -05:00
Shunsuke Shibayama e19c050386
[ty] disallow explicit specialization of type variables themselves (#21938)
## Summary

This PR makes explicit specialization of a type variable itself an
error, and the result of the specialization is `Unknown`.

The change also fixes https://github.com/astral-sh/ty/issues/1794.

## Test Plan

mdtests updated
new corpus test

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-12 15:49:20 -08:00
Alex Waygood 5a2aba237b
[ty] Improve diagnostics for unsupported binary operations and unsupported augmented assignments (#21947)
## Summary

This PR takes the improvements we made to unsupported-comparison
diagnostics in https://github.com/astral-sh/ruff/pull/21737, and extends
them to other `unsupported-operator` diagnostics.

## Test Plan

Mdtests and snapshots
2025-12-12 21:53:29 +00:00
Aria Desires ca5f099481
[ty] update implicit root docs (#21955)
## Summary

./tests is now no longer an implicit root, per
https://github.com/astral-sh/ruff/pull/21817
2025-12-12 16:30:23 -05:00
Alex Waygood a722df6a73
[ty] Enable even more goto-definition on inlay hints (#21950)
## Summary

Working on py-fuzzer recently (AKA, a Python project!) reminded me how
cool our "inlay hint goto-definition feature" is. So this PR adds a
bunch more of that!

I also made a couple of other minor changes to type display. For
example, in the playground, this snippet:

```py
def f(): ...
reveal_type(f.__get__)
```

currently leads to this diagnostic:

```
Revealed type: `<method-wrapper `__get__` of `f`>` (revealed-type) [Ln 2, Col 13]
```

But the fact that we have backticks both around the type display and
inside the type display isn't _great_ there. This PR changes it to

```
Revealed type: `<method-wrapper '__get__' of function 'f'>` (revealed-type) [Ln 2, Col 13]
```

which avoids the nested-backticks issue in diagnostics, and is more
similar to our display for various other `Type` variants such as
class-literal types (`<class 'Foo'>`, etc., not ``<class `Foo`>``).

## Test Plan

inlay snapshots added; mdtests updated
2025-12-12 12:57:38 -05:00
Brent Westbrook dec4154c8a
Document known lambda formatting deviations from Black (#21954)
Summary
--

Following #8179, we now format long lambda expressions a bit more like
Black, preferring to keep long parameter lists on a single line, but we
go one step further to break the body itself across multiple lines and
parenthesize it if it's still too long. This PR documents both the
stable deviation that breaks parameters across multiple lines, and the
new preview deviation that breaks the body instead.

I also fixed a couple of typos in the section immediately above my
addition.

Test Plan
--

I tested all of the snippets here against `main` for the preview
behavior, our playground for the stable behavior, and Black's playground
for their behavior
2025-12-12 12:57:09 -05:00
Carl Meyer 69d1bfbebc
[ty] fix hover type on named expression target (#21952)
## Summary

What it says on the tin.

## Test Plan

Added hover test.
2025-12-12 09:30:50 -08:00
Micha Reiser 90b29c9e87
Bump benchmark dependencies (#21951) 2025-12-12 17:05:57 +00:00
Brent Westbrook 0ebdebddd8
Keep lambda parameters on one line and parenthesize the body if it expands (#21385)
## Summary

This PR makes two changes to our formatting of `lambda` expressions:
1. We now parenthesize the body expression if it expands
2. We now try to keep the parameters on a single line

The latter of these fixes #8179:

Black formatting and this PR's formatting:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        ),
    )
```

Stable Ruff formatting

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self,
        *args,
        **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
    )
```

We don't parenthesize the body expression here because the call to
`aaaa...` has its own parentheses, but adding a binary operator shows
the new parenthesization:

```diff
@@ -3,7 +3,7 @@
         c,
         d,
         e,
-        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
-            *args, **kwargs
-        ) + 1,
+        f=lambda self, *args, **kwargs: (
+            aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
+        ),
     )
```

This is actually a new divergence from Black, which formats this input
like this:

```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        )
        + 1,
    )
```

But I think this is an improvement, unlike the case from #8179.

One other, smaller benefit is that because we now add parentheses to
lambda bodies, we also remove redundant parentheses:

```diff
 @pytest.mark.parametrize(
     "f",
     [
-        lambda x: (x.expanding(min_periods=5).cov(x, pairwise=True)),
-        lambda x: (x.expanding(min_periods=5).corr(x, pairwise=True)),
+        lambda x: x.expanding(min_periods=5).cov(x, pairwise=True),
+        lambda x: x.expanding(min_periods=5).corr(x, pairwise=True),
     ],
 )
 def test_moment_functions_zero_length_pairwise(f):
```

## Test Plan

New tests taken from #8465 and probably a few more I should grab from
the ecosystem results.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-12 12:02:25 -05:00
Aria Desires d5546508cf
[ty] Improve resolution of absolute imports in tests (#21817)
By teaching desperate resolution to try every possible ancestor that
doesn't have an `__init__.py(i)` when resolving absolute imports.

* Fixes https://github.com/astral-sh/ty/issues/1782
2025-12-12 11:59:06 -05:00
Andrew Gallant 3ac58b47bd [ty] Support `__all__ += submodule.__all__`
... and also `__all__.extend(submodule.__all__)`.

I originally left out support for this since I was unclear on whether
we'd really need it. But it turns out this is used somewhat frequently.
For example, in `numpy`.

See the comments on the new `Imports` type for how we approach this.
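
A hedged sketch of the pattern now supported (module names are hypothetical):

```py
# package/__init__.py
from . import _core  # hypothetical submodule with its own __all__

__all__ = ["top_level_api"]
__all__ += _core.__all__  # now understood...
__all__.extend(_core.__all__)  # ...as is the .extend() form
```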
2025-12-12 10:11:04 -05:00
Andrew Gallant a2b138e789 [ty] Change frequency of invalid `__all__` debug message
This was being emitted for every symbol we checked, which
is clearly too frequent. This switches to emitting it once
per module.
2025-12-12 10:11:04 -05:00
Alex Waygood ff0ed4e752
[ty] Add `KnownUnion::to_type()` (#21948) 2025-12-12 14:06:35 +00:00
Micha Reiser bc8efa2fd8
[ty] Classify `cls` as class parameter (#21944) 2025-12-12 13:54:37 +01:00
Micha Reiser 4249736d74
[ty] Stabilize rename (#21940) 2025-12-12 13:52:47 +01:00
Andrew Gallant 0181568fb5 [ty] Ignore `__all__` for document and workspace symbol requests
We also ignore names introduced by import statements, which seems to
match pylance behavior.

Fixes astral-sh/ty#1856
2025-12-12 07:29:29 -05:00
Micha Reiser 8cc7c993de
[ty] Attach db to background request handler task (#21941) 2025-12-12 11:31:13 +00:00
Micha Reiser 315bf80eed
[ty] Fix outdated version in publish diagnostics after `didChange` (#21943) 2025-12-12 11:30:56 +00:00
Carl Meyer 0138cd238a
[ty] avoid fixpoint unioning of types containing current-cycle Divergent (#21910)
Partially addresses https://github.com/astral-sh/ty/issues/1732

## Summary

Don't union the previous type in fixpoint iteration if the previous type
contains a `Divergent` from the current cycle and the latest type does
not. The theory here, as outlined by @mtshiba at
https://github.com/astral-sh/ty/issues/1732#issuecomment-3609937420, is
that oscillation can't occur by removing and then reintroducing a
`Divergent` type repeatedly, since `Divergent` types are only introduced
at the start of fixpoint iteration.

## Test Plan

Removes a `Divergent` type from the added mdtest, doesn't otherwise
regress any tests.
2025-12-11 19:52:34 -08:00
Shunsuke Shibayama 5e42926eee
[ty] improve bad specialization results & error messages (#21840)
## Summary

This PR includes the following changes:

* When attempting to specialize a non-generic type (or a type that is
already specialized), the result is `Unknown`. Also, the error message
is improved.
* When an implicit type alias is incorrectly specialized, the result is
`Unknown`. Also, the error message is improved.
* When only some of the type alias bounds and constraints are not
satisfied, not all substitutions are `Unknown`.
* Double specialization is prohibited. e.g. `G[int][int]`

Furthermore, after applying this PR, the fuzzing tests for seeds 1052
and 4419, which panic in main, now pass.
This is because the false recursions on type variables have been
removed.

```python
# name_2[0] => Unknown
class name_1[name_2: name_2[0]]:
    def name_4(name_3: name_2, /):
        if name_3:
            pass

#  (name_5 if unique_name_0 else name_1)[0] => Unknown
def name_4[name_5: (name_5 if unique_name_0 else name_1)[0], **name_1](): ...
```

## Test Plan

New corpus test
mdtest files updated
2025-12-11 19:21:34 -08:00
Jack O'Connor ddb7645e9d
[ty] support `NewType`s of `float` and `complex` (#21886)
Fixes https://github.com/astral-sh/ty/issues/1818.
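
A minimal illustration (the names are made up):

```py
from typing import NewType

Celsius = NewType("Celsius", float)  # now supported
Impedance = NewType("Impedance", complex)  # likewise

def is_warm(t: Celsius) -> bool:
    return t > 20.0
```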
2025-12-12 00:43:09 +00:00
Amethyst Reese 3f63ea4b50
Prepare 0.14.9 release (#21927)
- **Changelog and docs**
- **metadata**
2025-12-11 13:17:52 -08:00
Douglas Creager c8851ecf70
[ty] Defer all parameter and return type annotations (#21906)
As described in astral-sh/ty#1729, we previously had a salsa cycle when
inferring the signature of many function definitions.

The most obvious case happened when (a) the function was decorated, (b)
it had no PEP-695 type params, and (c) annotations were not always
deferred (e.g. in a stub file). We currently evaluate and apply function
decorators eagerly, as part of `infer_function_definition`. Applying a
decorator requires knowing the signature of the function being
decorated. There were two places where signature construction called
`infer_definition_types` cyclically.

The simpler case was that we were looking up the generic context and
decorator list of the function to determine whether it has an implicit
`self` parameter. Before, we used `infer_definition_types` to determine
that information. But since we're in the middle of signature
construction for the function, we can just thread the information
through directly.

The harder case is that signature construction requires knowing the
inferred parameter and return type annotations. When (b) and (c) hold,
those type annotations are inferred in `infer_function_definition`! (In
theory, we've already finished that by the time we start applying
decorators, but signature construction doesn't know that.)

If annotations are deferred, the params/return annotations are inferred
in `infer_deferred_types`; if there are PEP-695 type params, they're
inferred in `infer_function_type_params`. Both of those are different
salsa queries, and don't induce this cycle.

So the quick fix here is to always defer inference of the function
params/return, so that they are always inferred under a different salsa
query.

A more principled fix would be to apply decorators lazily, just like we
construct signatures lazily. But that is a more invasive fix.

Fixes astral-sh/ty#1729

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-11 15:00:18 -05:00
Micha Reiser d442433e93
[ty] Fix workspace symbols to return members too (#21926) 2025-12-11 20:22:21 +01:00
Amethyst Reese c055d665ef
Document range suppressions, reorganize suppression docs (#21884)
- **Reorganize suppression documentation, document range suppressions**
- **Note preview mode requirement**

Issue #21874, #3711
2025-12-11 11:16:36 -08:00
Amethyst Reese 7a578ce833
Ignore ruff:isort like ruff:noqa in new suppressions (#21922)
## Summary

Ignores `#ruff:isort` when parsing suppressions similar to `#ruff:noqa`.
Should clear up ecosystem issues in #21908

## Test Plan

cargo tests
2025-12-11 11:04:28 -08:00
Micha Reiser 34f7a04ef7
[ty] Handle `Definition`s in `SemanticModel::scope` (#21919) 2025-12-11 18:04:57 +00:00
Micha Reiser c9fe4e2703
[ty] Attach salsa db when running ide tests for easier debugging (#21917) 2025-12-11 19:03:52 +01:00
Micha Reiser fbeeb050af
[ty] Don't show hover for expressions with no inferred type (#21924) 2025-12-11 18:55:32 +01:00
Carl Meyer 4fdb4e8219
[ty] avoid unions of generic aliases of the same class in fixpoint (#21909)
Partially addresses https://github.com/astral-sh/ty/issues/1732
Fixes https://github.com/astral-sh/ty/issues/1800

## Summary

At each fixpoint iteration, we union the "previous" and "current"
iteration types, to ensure that the type can only widen at each
iteration. This prevents oscillation and ensures convergence.

But some unions triggered by this behavior (in particular, unions of
differently-specialized generic-aliases of the same class) never
simplify, and cause spurious errors. Since we haven't seen examples of
oscillating types involving class-literal or generic-alias types, just
don't union those.

There may be more thorough/principled ways to avoid undesirable unions
in fixpoint iteration, but this narrow change seems like it results in
strict improvement.

## Test Plan

Removes two false positive `unsupported-class-base` in mdtests, and
several in the ecosystem, without causing other regression.
2025-12-11 09:53:43 -08:00
Andrew Gallant c548ef2027 [ty] Squash false positive logs for failing to find `builtins` as a real module
I recently started noticing this showing up in the logs for every scope
based completion request:

```
2025-12-11 11:25:35.704329935 DEBUG request{id=29 method="textDocument/completion"}:map_stub_definition: Module `builtins` not found while looking in parent dirs
```

And in particular, it was repeated several times. This was confusing to
me because, well, of course `builtins` should resolve.

This particular code path comes from looking for the docstrings
of completion items. This involves some spelunking that ultimately
tries to resolve a "real" module if the stub doesn't have available
docstrings. But I guess there is no "real" `builtins` module, so
`resolve_real_module` fails. Which is fine, but the noisy logs were
annoying since this is an expected case.

So here, we carve out a short circuit for `builtins` and also improve
the log message.
2025-12-11 12:50:08 -05:00
Luca Chiodini 5a9d6a91ea
[ty] Uniformly use "not supported" in diagnostics (#21916) 2025-12-11 15:03:55 +00:00
Micha Reiser c9155d5e72
[ty] Reduce size of ty-ide snapshots (#21915) 2025-12-11 13:36:16 +00:00
Andrew Gallant 8647844572 [ty] Adjust scope completions to use all reachable symbols
Fixes astral-sh/ty#1294
2025-12-11 08:26:15 -05:00
Andrew Gallant 1dcb7f89f1 [ty] Rename `all_members_of_scope` to `all_end_of_scope_members`
This reflects more precisely its behavior based on how it uses the
use-def map.
2025-12-11 08:26:15 -05:00
Andrew Gallant c1c45a6a13 [ty] Remove `all_` prefix from some routines on UseDefMap
These routines don't return *all* symbols/members, but rather,
only *for* a particular scope. We do specifically want to add
some routines that return *all* symbols/members, and this naming
scheme made that confusing. It was also inconsistent with other
routines like `all_end_of_scope_symbol_declarations` which *do*
return *all* symbols.
2025-12-11 08:26:15 -05:00
Brent Westbrook c51727708a
Enable `--document-private-items` for `ruff_python_formatter` (#21903) 2025-12-11 08:23:10 -05:00
Denys Zhak 27912d46b1
Remove `BackwardsTokenizer` based `parenthesized_range` references in `ruff_linter` (#21836)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-11 13:04:57 +01:00
David Peter 71540c03b6
[ty] Revert "Do not infer types for invalid binary expressions in annotations" (#21914)
See discussion here:
https://github.com/astral-sh/ruff/pull/21911#discussion_r2610155157
2025-12-11 11:57:45 +00:00
Micha Reiser aa27925e87
Skip over trivia tokens after re-lexing (#21895) 2025-12-11 10:45:18 +00:00
Charlie Marsh 5c320990f7
[ty] Avoid inferring types for invalid binary expressions in string annotations (#21911)
## Summary

Closes https://github.com/astral-sh/ty/issues/1847.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-11 09:40:19 +01:00
Dhruv Manilawala 24ed28e314
[ty] Improve overload call resolution tracing (#21913)
This PR improves the overload call resolution tracing messages as:
- Use `trace` level instead of `debug` level
- Add a `trace_span` which contains the call arguments and signature
- Remove the signature from individual tracing messages
2025-12-11 12:28:45 +05:30
Carl Meyer 2d0681da08
[ty] fix missing heap_size on Salsa query (#21912) 2025-12-10 18:34:00 -08:00
Ibraheem Ahmed 29bf2cd201
[ty] Support implicit type of `cls` in signatures (#21771)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20517 to support the
implicit type of `cls` in `@classmethod` signatures. Part of
https://github.com/astral-sh/ty/issues/159.
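
A minimal illustration of what this enables (illustrative, not from the PR):

```py
class Model:
    @classmethod
    def create(cls):
        # `cls` is implicitly `type[Self]`, so `cls()` is `Self`.
        return cls()

class Sub(Model): ...

reveal_type(Model.create())  # Model
reveal_type(Sub.create())  # Sub
```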
2025-12-10 16:56:20 -05:00
Jack O'Connor 1b44d7e2a7
[ty] add `SyntheticTypedDictType` and implement `normalized` and `is_equivalent_to` (#21784) 2025-12-10 20:36:36 +00:00
Ibraheem Ahmed a2fb2ee06c
[ty] Fix disjointness checks with type-of `@final` classes (#21770)
## Summary

We currently perform a subtyping check, similar to what we were doing
for `@final` instances before
https://github.com/astral-sh/ruff/pull/21167, which is incorrect, e.g.
we currently consider `type[X[Any]]` and `type[X[T]]` disjoint (where
`X` is `@final`).
2025-12-10 15:15:10 -05:00
Douglas Creager 3e00221a6c
[ty] Fix negation upper bounds in constraint sets (#21897)
This fixes the logic error that @sharkdp
[found](https://github.com/astral-sh/ruff/pull/21871#discussion_r2605755588)
in the constraint set upper bound normalization logic I introduced in
#21871.

I had originally claimed that `(T ≤ α & ~β)` should simplify into `(T ≤
α) ∧ ¬(T ≤ β)`. But that also suggests that `T ≤ ~β` should simplify to
`¬(T ≤ β)` on its own, and that's not correct.

The correct simplification is that `~α` is an "atomic" type, not an
"intersection" for the purposes of our upper bound simplification. So `(T
≤ α & ~β)` should simplify to `(T ≤ α) ∧ (T ≤ ~β)`. That is, break apart
the elements of a (proper) intersection, regardless of whether each
element is negated or not.

This PR fixes the logic, adds a test case, and updates the comments to
be hopefully more clear and accurate.
2025-12-10 15:07:50 -05:00
Ibraheem Ahmed 5dc0079e78
[ty] Fix disjointness checks on `@final` class instances (#21769)
## Summary

This was left unfinished in
https://github.com/astral-sh/ruff/pull/21167. This is required to fix
our disjointness checks with type-of a final class, which is currently
broken, and blocking https://github.com/astral-sh/ty/issues/159.
2025-12-10 14:17:22 -05:00
Micha Reiser f7528bd325
[ty] Checking files without extension (#21867) 2025-12-10 16:47:41 +00:00
Avasam 59b92b3522
Document `*.pyw` is included by default in preview (#21885)
Document `*.pyw` is included by default in preview mode.
Originally requested in https://github.com/astral-sh/ruff/issues/13246
and added in https://github.com/astral-sh/ruff/pull/20458

Co-authored-by: Amethyst Reese <amethyst@n7.gg>
2025-12-10 16:43:55 +00:00
Micha Reiser 9ceec359a0
[ty] Add mypy primer check comparing same revisions (#21864) 2025-12-10 16:37:17 +00:00
Micha Reiser 2dd412c89a
Update README to remove production warning (#21899) 2025-12-10 17:25:41 +01:00
Carl Meyer 951766d1fb
[ty] default-specialize class-literal types in assignment to generic-alias types (#21883)
Fixes https://github.com/astral-sh/ty/issues/1832, fixes
https://github.com/astral-sh/ty/issues/1513

## Summary

A class object `C` (for which we infer an unspecialized `ClassLiteral`
type) should always be assignable to the type `type[C]` (which is
default-specialized, if `C` is generic). We already implemented this for
most cases, but we missed the case of a generic final type, where we
simplify `type[C]` to the `GenericAlias` type for the default
specialization of `C`. So we also need to implement this assignability
of generic `ClassLiteral` types as-if default-specialized.
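
A hedged sketch of the previously-missed case (a generic `@final` class; names are made up):

```py
from typing import final

@final
class Box[T]:
    value: T

# `type[Box]` simplifies to the generic alias for the default specialization
# of `Box`; the bare class object `Box` is now accepted as assignable to it,
# as-if default-specialized.
x: type[Box] = Box
```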

## Test Plan

Added mdtests that failed before this PR.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-12-10 17:18:08 +01:00
David Peter 7bf50e70a7
[ty] Generics: Respect typevar bounds when matching against a union (#21893)
## Summary

Respect typevar bounds and constraints when matching against a union.
For example:

```py
def accepts_t_or_int[T_str: str](x: T_str | int) -> T_str:
    raise NotImplementedError

reveal_type(accepts_t_or_int("a"))  # ok, reveals `Literal["a"]`
reveal_type(accepts_t_or_int(1))  # ok, reveals `Unknown`

class Unrelated: ...

# error: [invalid-argument-type] "Argument type `Unrelated` does not
# satisfy upper bound `str` of type variable `T_str`"
accepts_t_or_int(Unrelated())
```

Previously, the last call succeeded without any errors. Worse than that,
we also incorrectly solved `T_str = Unrelated`, which often led to
downstream errors.

closes https://github.com/astral-sh/ty/issues/1837

## Ecosystem impact

Looks good!

* Lots of removed false positives, often because we previously selected
a wrong overload for a generic function (because we didn't respect the
typevar bound in an earlier overload).
* We now understand calls to functions accepting an argument of type
`GenericPath: TypeAlias = AnyStr | PathLike[AnyStr]`. Previously, we
would incorrectly match a `Path` argument against the `AnyStr` typevar
(violating its constraints), but now we match against `PathLike`.

## Performance

Another regression on `colour`. This package uses `numpy` heavily. And
`numpy` is the codebase that originally lead me to this bug. The fix
here allows us to infer more precise `np.array` types in some cases, so
it's reasonable that we just need to perform more work.

The fix here also requires us to look at more union elements when we
would previously short-circuit incorrectly, so some more work needs to
be done in the solver.

## Test Plan

New Markdown tests
2025-12-10 14:58:57 +01:00
Ibraheem Ahmed ff7086d9ad
[ty] Infer type of implicit `cls` parameter in method bodies (#21685)
## Summary

Extends https://github.com/astral-sh/ruff/pull/20922 to infer
unannotated `cls` parameters as `type[Self]` in method bodies.

Part of https://github.com/astral-sh/ty/issues/159.
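
A minimal illustration (illustrative snippet, not from the PR):

```py
class Factory:
    @classmethod
    def build(cls):
        reveal_type(cls)  # type[Self]
        return cls()
```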
2025-12-10 10:31:28 +01:00
Charlie Marsh d2aabeaaa2
[ty] Respect `kw_only` from parent class (#21820)
## Summary

Closes https://github.com/astral-sh/ty/issues/1769.
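
A hedged sketch of the behavior (class names are made up):

```py
from dataclasses import dataclass

@dataclass(kw_only=True)
class Base:
    a: int

@dataclass
class Child(Base):
    b: int = 0

Child(a=1)  # ok: `a` stays keyword-only, as declared on the parent
Child(1)  # error: missing required keyword-only argument `a`
```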

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-10 10:12:18 +01:00
Dhruv Manilawala 8293afe2ae
Remove hack about unknown options warning (#21887)
This hack was introduced to reduce the amount of warnings that users
would get while transitioning to the new settings format
(https://github.com/astral-sh/ruff/pull/19787) but now that we're near
the beta release, it would be good to remove this.
2025-12-10 07:09:31 +00:00
Jack O'Connor aaadf16b1b
[ty] bump dependencies to pull in Salsa support for `ordermap` (#21854) 2025-12-09 19:08:03 -08:00
Douglas Creager c343e94ac5
[ty] Simplify union lower bounds and intersection upper bounds in constraint sets (#21871)
In a constraint set, it's not useful for an upper bound to be an
intersection type, or for a lower bound to be a union type. Both of
those can be rewritten as simpler BDDs:

```
T ≤ α & β  ⇒ (T ≤ α) ∧ (T ≤ β)
T ≤ α & ¬β ⇒ (T ≤ α) ∧ ¬(T ≤ β)
α | β ≤ T  ⇒ (α ≤ T) ∧ (β ≤ T)
```

We were seeing performance issues on #21551 when _not_ performing this
simplification. For instance, `pandas` was producing some constraint
sets involving intersections of 8-9 different types. Our sequent map
calculation was timing out calculating all of the different permutations
of those types:

```
t1 & t2 & t3 → t1
t1 & t2 & t3 → t2
t1 & t2 & t3 → t3
t1 & t2 & t3 → t1 & t2
t1 & t2 & t3 → t1 & t3
t1 & t2 & t3 → t2 & t3
```

(and then imagine what that looks like for 9 types instead of 3...)

With this change, all of those permutations are now encoded in the BDD
structure itself, which is very good at simplifying that kind of thing.

Pulling this out of #21551 for separate review.
2025-12-09 19:49:17 -05:00
Douglas Creager 270b8d1d14
[ty] Collapse `never` paths in constraint set BDDs (#21880)
#21744 fixed some non-determinism in our constraint set implementation
by switching our BDD representation from being "fully reduced" to being
"quasi-reduced". We still deduplicate identical nodes (via salsa
interning), but we removed the logic to prune redundant nodes (one with
identical outgoing true and false edges). This ensures that the BDD
"remembers" all of the individual constraints that it was created with.

However, that comes at the cost of creating larger BDDs, and on #21551
that was causing performance issues. `scikit-learn` was producing a
function signature with dozens of overloads, and we were trying to
create a constraint set that would map a return type typevar to any of
those overload's return types. This created a combinatorial explosion in
the BDD, with by far most of the BDD paths leading to the `never`
terminal.

This change updates the quasi-reduction logic to prune nodes that are
redundant _because both edges lead to the `never` terminal_. In this
case, we don't need to "remember" that constraint, since no assignment
to it can lead to a valid specialization. So we keep the "memory" of our
quasi-reduced structure, while still pruning large unneeded portions of
the BDD structure.

Pulling this out of https://github.com/astral-sh/ruff/pull/21551 for
separate review.
2025-12-09 18:22:54 -05:00
Brent Westbrook f3714fd3c1
Fix leading comment formatting for lambdas with multiple parameters (#21879)
## Summary

This is a follow-up to #21868. As soon as I started merging #21868 into
#21385, I realized that I had missed a test case with `**kwargs` after
the `*args` parameter. Such a case is supposed to be formatted on one
line like:

```py
# input
(
    lambda
    # comment
    *x,
    **y: x
)

# output
(
    lambda
    # comment
    *x, **y: x
)
```

which you can still see on the
[playground](https://play.ruff.rs/bd88d339-1358-40d2-819f-865bfcb23aef?secondary=Format),
but on `main` after #21868, this was formatted as:

```py
(
    lambda
    # comment
    *x,
    **y: x
)
```

because the leading comment on the first parameter caused the whole
group around the parameters to break.

Instead of making these comments leading comments on the first
parameter, this PR makes them leading comments on the parameters list as
a whole.

## Test Plan

New tests, and I will also try merging this into #21385 _before_ opening
it for review this time.

<hr>

(labeling `internal` since #21868 should not be released before some
kind of fix)
2025-12-09 18:15:12 -05:00
David Peter a9be810c38
[ty] Type inference for `@asynccontextmanager` (#21876)
## Summary

This PR adds special handling for `asynccontextmanager` calls as a
temporary solution for https://github.com/astral-sh/ty/issues/1804. We
will be able to remove this soon once we have support for generic
protocols in the solver.

closes https://github.com/astral-sh/ty/issues/1804

## Ecosystem

```diff
+ tests/test_downloadermiddleware.py:305:56: error[invalid-argument-type] Argument to bound method `download` is incorrect: Expected `Spider`, found `Unknown | Spider | None`
+ tests/test_downloadermiddleware.py:305:56: warning[possibly-missing-attribute] Attribute `spider` may be missing on object of type `Crawler | None`
```
These look like true positives

```diff
+ pymongo/asynchronous/database.py:1021:35: error[invalid-assignment] Object of type `(AsyncClientSession & ~AlwaysTruthy & ~AlwaysFalsy) | (_ServerMode & ~AlwaysFalsy) | Unknown | Primary` is not assignable to `_ServerMode | None`
+ pymongo/asynchronous/database.py:1025:17: error[invalid-argument-type] Argument to bound method `_conn_for_reads` is incorrect: Expected `_ServerMode`, found `_ServerMode | None`
```

Known problems or true positives, just caused by the new type for
`session`

```diff
- src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[Unknown, None] | Connection | AsyncConnection`
+ src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[AsyncConnection, None] | Connection | AsyncConnection`
```

Just a more concrete type

```diff
- src/prefect/flow_engine.py:1277:24: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/server/api/server.py:696:49: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/task_engine.py:1426:24: error[missing-argument] No argument provided for required parameter `cls`
```

Good

## Test Plan

* Adapted and newly added Markdown tests
* Tested on internal codebase
2025-12-09 22:49:00 +01:00
Brent Westbrook 0bec5c0362
Fix comment placement in lambda parameters (#21868)
Summary
--

This PR makes two changes to comment placement in lambda parameters.
First, we
now insert a line break if the first parameter has a leading comment:

```py
# input
(
    lambda
    * # comment 2
    x:
    x
)

# main
(
    lambda # comment 2
    *x: x
)

# this PR
(
    lambda
	# comment 2
    *x: x
)
```

Note the missing space in the output from main. This case is currently
unstable
on main. Also note that the new formatting is more consistent with our
stable
formatting in cases where the lambda has its own dangling comment:

```py
# input
(
    lambda # comment 1
    * # comment 2
    x:
    x
)

# output
(
    lambda  # comment 1
    # comment 2
    *x: x
)
```

and when a parameter without a comment precedes the split `*x`:

```py
# input
(
    lambda y,
    * # comment 2
    x:
    x
)

# output
(
    lambda y,
    # comment 2
    *x: x
)
```

This does change the stable formatting, but I think such cases are rare
(expecting zero hits in the ecosystem report), this fixes an existing
instability, and it should not change any code we've previously
formatted.

Second, this PR modifies the comment placement such that `# comment 2`
in these
outputs is still a leading comment on the parameter. This is also not
the case
on main, where it becomes a [dangling lambda
comment](https://play.ruff.rs/3b29bb7e-70e4-4365-88e0-e60fe1857a35?secondary=Comments).
This doesn't cause any
instability that I'm aware of on main, but it does cause problems when
trying to
adjust the placement of dangling lambda comments in #21385. Changing the
placement in this way should not affect any formatting here.

Test Plan
--

New lambda tests, plus existing tests covering the cases above with
multiple
comments around the parameters (see lambda.py 122-143, and 122-205 or so
more
broadly)

I also checked manually that the comments are now leading on the
parameter:

```shell
❯ cargo run --bin ruff_python_formatter -- --emit stdout --target-version 3.10 --print-comments <<EOF
(
    lambda
        # comment 2
    *x: x
)
EOF
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
     Running `target/debug/ruff_python_formatter --emit stdout --target-version 3.10 --print-comments`
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
21..32, None, Some((Parameters, 37..39)), (ExprLambda, 6..42), "# comment 2"
{
    Node {
        kind: Parameter,
        range: 37..39,
        source: `*x`,
    }: {
        "leading": [
            SourceComment {
                text: "# comment 2",
                position: OwnLine,
                formatted: true,
            },
        ],
        "dangling": [],
        "trailing": [],
    },
}
(
    lambda
    # comment 2
    *x: x
)
```

But I didn't see a great place to put a test like this. Is there
somewhere I can assert this comment placement since it doesn't affect
any formatting yet? Or is it okay to wait until we use this in #21385?
2025-12-09 14:07:48 -05:00
Loïc Riegel 9490fbf1e1
[`pylint`] Detect subclasses of builtin exceptions (`PLW0133`) (#21382)

## Summary

Closes #17347

The goal is to detect the useless exception statement not just for builtin
exceptions but also for custom (user-defined) ones.

## Test Plan

I added test cases in the rule fixture and updated the insta snapshot.
Note that I first moved a test case which was at the bottom up to
the correct "violation category".
I wasn't sure if I should create new test cases or just insert inside
those tests. I know that ideally each test case should test only one
thing, but here, duplicating 12 test cases twice seemed very verbose,
and actually less maintainable in the future. The drawback is that the
diff in the snapshot is hard to review, sorry. But you can see that the
snapshot gives 38 diagnostics, which is what we expect.

Alternatively, I also created this file for manual testing.
```py
# tmp/test_error.py

class MyException(Exception):
    ...
class MyBaseException(BaseException):
    ...
class MyValueError(ValueError):
    ...
class MyExceptionCustom(Exception):
    ...
class MyBaseExceptionCustom(BaseException):
    ...
class MyValueErrorCustom(ValueError):
    ...
class MyDeprecationWarning(DeprecationWarning):
    ...
class MyDeprecationWarningCustom(MyDeprecationWarning):
    ...
class MyExceptionGroup(ExceptionGroup):
    ...
class MyExceptionGroupCustom(MyExceptionGroup):
    ...
class MyBaseExceptionGroup(ExceptionGroup):
    ...
class MyBaseExceptionGroupCustom(MyBaseExceptionGroup):
    ...


def foo():
    Exception("...")
    BaseException("...")
    ValueError("...")
    RuntimeError("...")
    DeprecationWarning("...")
    GeneratorExit("...")
    SystemExit("...")
    ExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    BaseExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    MyException("...")
    MyBaseException("...")
    MyValueError("...")
    MyExceptionCustom("...")
    MyBaseExceptionCustom("...")
    MyValueErrorCustom("...")
    MyDeprecationWarning("...")
    MyDeprecationWarningCustom("...")
    MyExceptionGroup("...")
    MyExceptionGroupCustom("...")
    MyBaseExceptionGroup("...")
    MyBaseExceptionGroupCustom("...")

```

and you can run this to check the PR:
```sh
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py310
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py314
```
2025-12-09 13:49:55 -05:00
Carl Meyer 8727a7b179
Fix stack overflow with recursive generic protocols (depth limit) (#21858)
## Summary

This fixes https://github.com/astral-sh/ty/issues/1736 where recursive
generic protocols with growing specializations caused a stack overflow.

The issue occurred with protocols like:
```python
class C[T](Protocol):
    a: 'C[set[T]]'
```

When checking `C[set[int]]` against e.g. `C[Unknown]`, member `a`
requires checking `C[set[set[int]]]`, which requires
`C[set[set[set[int]]]]`, etc. Each level has different type
specializations, so the existing cycle detection (using full types as
cache keys) didn't catch the infinite recursion.

This fix adds a simple recursion depth limit (64) to the CycleDetector.
When the depth exceeds the limit, we return the fallback value (assume
compatible) to safely terminate the recursion.

This is a bit of a blunt hammer, but it should be broadly effective to
prevent stack overflow in any nested-relation case, and it's hard to
imagine that non-recursive nested relation comparisons of depth > 64
exist much in the wild.
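
A minimal Python sketch of the idea, with illustrative names (the real change lives in the Rust `CycleDetector`; the fallback value and the limit of 64 are taken from the description above):

```python
from dataclasses import dataclass

MAX_DEPTH = 64  # recursion depth limit described above

@dataclass
class Generic:
    """Toy stand-in for a generic type such as C[set[T]]: a name plus arguments."""
    name: str
    args: tuple["Generic", ...] = ()

def is_compatible(left: Generic, right: Generic, depth: int = 0) -> bool:
    # Once the nesting depth exceeds the limit, return the fallback value
    # (assume compatible) instead of recursing into ever-deeper
    # specializations such as C[set[set[...set[int]...]]].
    if depth > MAX_DEPTH:
        return True
    if left.name != right.name or len(left.args) != len(right.args):
        return False
    return all(
        is_compatible(l, r, depth + 1) for l, r in zip(left.args, right.args)
    )
```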

## Test Plan

Added mdtest.
2025-12-09 09:05:18 -08:00
Amethyst Reese 4e4d018344
New diagnostics for unused range suppressions (#21783)
Issue #3711
2025-12-09 08:30:27 -08:00
Andrew Gallant a9899af98a [ty] Use default settings in completion tests
This makes it so the test and production environments match.

Ref https://github.com/astral-sh/ruff/pull/21851#discussion_r2601579316
2025-12-09 10:42:46 -05:00
David Peter aea2bc2308
[ty] Infer type variables within generic unions (#21862)
## Summary

This PR allows our generics solver to find a solution for `T` in cases
like the following:
```py
def extract_t[T](x: P[T] | Q[T]) -> T:
    raise NotImplementedError

reveal_type(extract_t(P[int]()))  # revealed: int
reveal_type(extract_t(Q[str]()))  # revealed: str
```

closes https://github.com/astral-sh/ty/issues/1772
closes https://github.com/astral-sh/ty/issues/1314

## Ecosystem

The impact here looks very good!

It took me a long time to figure this out, but the new diagnostics on
bokeh are actually true positives. I should have tested with another
type-checker immediately, I guess. All other type checkers also emit
errors on these `__init__` calls. MRE
[here](https://play.ty.dev/5c19d260-65e2-4f70-a75e-1a25780843a2) (no
error on main, diagnostic on this branch)

A lot of false positives on home-assistant go away for calls to functions like [`async_listen`](180053fe98/homeassistant/core.py (L1581-L1587)), which take an `event_type: EventType[_DataT] | str` parameter. We can now solve for `_DataT` here, which previously fell back to its default value and then caused problems because it was used as an argument to an invariant generic class.
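
A small illustration of that pattern (hypothetical names; `EventType` here is only a stand-in for the home-assistant class):

```python
from typing import Callable

class EventType[_DataT](str):
    """Hypothetical stand-in for home-assistant's EventType."""

def async_listen[_DataT](
    event_type: EventType[_DataT] | str,
    listener: Callable[[_DataT], None],
) -> None: ...

def on_data(data: int) -> None: ...

# `_DataT` can now be solved as `int` from the first argument rather than
# falling back to its default, so `listener` is checked against the right type.
async_listen(EventType[int]("my_event"), on_data)
```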

## Test Plan

New Markdown tests
2025-12-09 16:22:59 +01:00
Dhruv Manilawala c35bf8f441
[ty] Fix overload filtering to prefer more "precise" match (#21859)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1809

I took this chance to add some debug level tracing logs for overload
call evaluation similar to Doug's implementation in `constraints.rs`.

## Test Plan

- Add new mdtests
- Tested it against `sqlalchemy.select` in pyx which results in the
correct overload being matched
2025-12-09 20:29:34 +05:30
Andrew Gallant 426125f5c0 [ty] Stabilize auto-import
While still under development, it's far enough along now that we think
it's worth enabling it by default. This should also help give us
feedback for how it behaves.

This PR adds a "completion settings" grouping similar to inlay hints. We
only have an auto-import setting there now, but I expect we'll add more
options to configure completion behavior in the future.

Closes astral-sh/ty#1765
2025-12-09 09:40:38 -05:00
Micha Reiser a0b18bc153
[ty] Fix reveal-type E2E test (#21865) 2025-12-09 14:08:22 +01:00
Micha Reiser 11901384b4
[ty] Use concise message for LSP clients not supporting related diagnostic information (#21850) 2025-12-09 13:18:30 +01:00
Micha Reiser dc2f0a86fd
Include more details in Tokens 'offset is inside token' panic message (#21860) 2025-12-09 11:12:35 +01:00
Amethyst Reese 4e67a219bb
apply range suppressions to filter diagnostics (#21623)
Builds on range suppressions from
https://github.com/astral-sh/ruff/pull/21441

Filters diagnostics based on parsed valid range suppressions.

Issue: #3711
2025-12-08 16:11:59 -08:00
Aria Desires 8ea18966cf
[ty] followup: add-import action for `reveal_type` too (#21668) 2025-12-08 22:44:17 +00:00
Rasmus Nygren e548ce1ca9 [ty] Enrich function argument auto-complete suggestions with annotated types 2025-12-08 14:19:44 -05:00
Rasmus Nygren eac8a90cc4 [ty] Add autocomplete suggestions for function arguments
This adds autocomplete suggestions for function arguments. For example,
`okay` in:

```python
def foo(okay=None):

foo(o<CURSOR>
```

This also ensures that we don't suggest a keyword argument if it has
already been used.

Closes astral-sh/issues#1550
2025-12-08 14:19:44 -05:00
Loïc Riegel 2d3466eccf
[`flake8-bugbear`] Accept immutable slice default arguments (`B008`) (#21823)
Closes issue #21565

## Summary

As pointed out in the issue, slices are currently flagged by B008 but
this behavior is incorrect because slices are immutable.

## Test Plan

Added a test case in the "B006_B008.py" fixture. Sorry for the diff in
the snapshots, the only thing that changes in those flies is the line
numbers, though.

You can also test this manually with this file:
```py
# test_slice.py
def c(d=slice(0, 3)): ...
```

```sh
> target/debug/ruff check tmp/test_slice.py --no-cache --select B008
All checks passed!
```
2025-12-08 14:00:43 -05:00
Phong Do 45fb3732a4
[`pydocstyle`] Suppress `D417` for parameters with `Unpack` annotations (#21816)

## Summary

Fixes https://github.com/astral-sh/ruff/issues/8774

This PR fixes `pydocstyle` incorrectly flagging a missing argument for parameters with an `Unpack` type annotation, by extracting the `kwargs` `D417` suppression logic into a helper function that future rules can reuse as needed.

## Problem Statement

The example below was incorrectly triggering a `D417` error for missing `**kwargs` documentation.

```python
class User(TypedDict):
    id: int
    name: str

def do_something(some_arg: str, **kwargs: Unpack[User]):
    """Some doc
    
    Args:
        some_arg: Some argument
    """
```

<img width="1135" height="276" alt="image"
src="https://github.com/user-attachments/assets/42fa4bb9-61a5-4a70-a79c-0c8922a3ee66"
/>

`**kwargs: Unpack[User]` indicates the function expects keyword arguments that will be unpacked. Ideally, the individual fields of the `User` `TypedDict` should be documented, not the `**kwargs` parameter itself. The `**kwargs` parameter acts as a semantic grouping rather than a parameter requiring documentation.

## Solution

As discussed in the linked issue, it makes sense to suppress `D417` for parameters with an `Unpack` annotation. I extracted a helper function whose sole job is to check whether `D417` should be suppressed for a `**kwargs: Unpack[T]` parameter; this function can also be unit tested independently and reduces the complexity of the current `missing_args` check function. This also makes it easier to add additional rules in the future.

_✏️ Note:_ This is my first PR in this repo and I've learned a ton from it; please call out anything that could be improved. Thanks for making this excellent tool 👏

## Test Plan

Add 2 test cases in `D417.py` and update snapshots.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-08 19:00:05 +00:00
Micha Reiser 0ab8521171
[ty] Remove legacy `concise_message` fallback behavior (#21847) 2025-12-08 16:19:01 +00:00
Alex Waygood 0ccd84136a
[ty] Make Python-version subdiagnostics less verbose (#21849) 2025-12-08 15:58:23 +00:00
Aria Desires 3981a23ee9
[ty] Suppress inlay hints when assigning a trivial initializer call (#21848)
## Summary

By taking a purely syntactic approach to the problem of trivial initializer calls, we can suppress `x: T = T()`, `x: T = x.y.T()`, and `x: MyNewType = MyNewType(0)`, but still display `x: T[U] = T()`.

The place where we drop the ball is that this does not compose with our analysis for suppressing `x = (0, "hello")`: `x = (0, T())` and `x = (T(), T())` will still get inlay hints (I don't think this is a huge deal).
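
A quick illustration of how the rule plays out (hypothetical names; the comments describe the intended hint behaviour, not guaranteed output):

```python
from typing import NewType

class Plain:
    pass

class Box[U]:
    def __init__(self, item: U) -> None:
        self.item = item

UserId = NewType("UserId", int)

a = Plain()       # hint `: Plain` suppressed -- it would only restate the call
b = UserId(0)     # hint `: UserId` suppressed for the same reason
c = Box(1)        # hint `: Box[int]` still shown -- the specialization adds information
d = (0, Plain())  # hint still shown -- the tuple analysis doesn't compose with this rule
```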

* fixes https://github.com/astral-sh/ty/issues/1516

## Test Plan

Existing snapshots cover this well.
2025-12-08 10:54:30 -05:00
Charlie Marsh 385dd2770b
[ty] Avoid double-inference on non-tuple argument to `Annotated` (#21837)
## Summary

If you pass a non-tuple to `Annotated`, we end up running inference on
it twice. I _think_ the only case here is `Annotated[]`, where we insert
a (fake) empty `Name` node in the slice.

Closes https://github.com/astral-sh/ty/issues/1801.
2025-12-08 10:24:05 -05:00
Alex Waygood 7519f6c27b
Print Python version and Python platform in the fuzzer output when fuzzing fails (#21844) 2025-12-08 14:35:36 +00:00
David Peter 4686111681
[ty] More SQLAlchemy test updates (#21846)
Minor updates to the SQLAlchemy test suite. I verified all expected
results using pyright.
2025-12-08 15:22:55 +01:00
Micha Reiser 4364ffbdd3
[ty] Don't create a related diagnostic for the primary annotation of sub-diagnostics (#21845) 2025-12-08 14:22:11 +00:00
Charlie Marsh b845e81c4a
Use `memchr` for computing line indexes (#21838)
## Summary

Some benchmarks with Claude's help:

| File | Size | Baseline | Optimized | Speedup |
|---------------------|-------|----------------------|----------------------|---------|
| numpy/globals.py | 3 KB | 1.48 µs (1.95 GiB/s) | 740 ns (3.89 GiB/s) | 2.0x |
| unicode/pypinyin.py | 4 KB | 2.04 µs (2.01 GiB/s) | 1.18 µs (3.49 GiB/s) | 1.7x |
| pydantic/types.py | 26 KB | 13.1 µs (1.90 GiB/s) | 5.88 µs (4.23 GiB/s) | 2.2x |
| numpy/ctypeslib.py | 17 KB | 8.45 µs (1.92 GiB/s) | 3.94 µs (4.13 GiB/s) | 2.1x |
| large/dataset.py | 41 KB | 21.6 µs (1.84 GiB/s) | 11.2 µs (3.55 GiB/s) | 1.9x |

I originally thought we _had_ to iterate character-by-character here because we needed to do the ASCII check, but the ASCII check can be vectorized by LLVM (and the "search for newlines" can be done with `memchr`).
2025-12-08 08:50:51 -05:00
David Peter c99e10eedc
[ty] Increase SQLAlchemy test coverage (#21843)
## Summary

Increase our SQLAlchemy test coverage to make sure we understand
`Session.scalar`, `Session.scalars`, `Session.execute` (and their async
equivalents), as well as `Result.tuples`, `Result.one_or_none`,
`Row._tuple`.
2025-12-08 14:36:13 +01:00
Dhruv Manilawala a364195335
[ty] Avoid diagnostic when `typing_extensions.ParamSpec` uses `default` parameter (#21839)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1798

## Test Plan

Add mdtest.
2025-12-08 12:34:30 +00:00
David Peter dfd6ed0524
[ty] mdtests with external dependencies (#20904)
## Summary

This PR adds the possibility to write mdtests that specify external
dependencies in a `project` section of TOML blocks. For example, here is
a test that makes sure that we understand Pydantic's dataclass-transform
setup:

````markdown
```toml
[environment]
python-version = "3.12"
python-platform = "linux"

[project]
dependencies = ["pydantic==2.12.2"]
```

```py
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

user = User(id=1, name="Alice")
reveal_type(user.id)  # revealed: int
reveal_type(user.name)  # revealed: str

# error: [missing-argument] "No argument provided for required parameter
`name`"
invalid_user = User(id=2)
```
````

## How?

Using the `python-version` and the `dependencies` fields from the
Markdown section, we generate a `pyproject.toml` file, write it to a
temporary directory, and use `uv sync` to install the dependencies into
a virtual environment. We then copy the Python source files from that
venv's `site-packages` folder to a corresponding directory structure in
the in-memory filesystem. Finally, we configure the search paths
accordingly, and run the mdtest as usual.

I fully understand that there are valid concerns here:
* Doesn't this require network access? (yes, it does)
* Is this fast enough? (`uv` caching makes this almost unnoticeable,
actually)
* Is this deterministic? ~~(probably not, package resolution can depend
on the platform you're on)~~ (yes, hopefully)

For this reason, this first version is opt-in, locally. ~~We don't even
run these tests in CI (even though they worked fine in a previous
iteration of this PR).~~ You need to set `MDTEST_EXTERNAL=1`, or use the
new `-e/--enable-external` command line option of the `mdtest.py`
runner. For example:
```bash
# Skip mdtests with external dependencies (default):
uv run crates/ty_python_semantic/mdtest.py

# Run all mdtests, including those with external dependencies:
uv run crates/ty_python_semantic/mdtest.py -e

# Only run the `pydantic` tests. Use `-e` to make sure it is not skipped:
uv run crates/ty_python_semantic/mdtest.py -e pydantic
```

## Why?

I believe that this can be a useful addition to our testing strategy,
which lies somewhere between ecosystem tests and normal mdtests.
Ecosystem tests cover much more code, but they have the disadvantage
that we only see second- or third-order effects via diagnostic diffs. If
we unexpectedly gain or lose type coverage somewhere, we might not even
notice (assuming the gradual guarantee holds, and ecosystem code is
mostly correct). Another disadvantage of ecosystem checks is that they
only test checked-in code that is usually correct. However, we also want
to test what happens on wrong code, like the code that is momentarily
written in an editor, before fixing it. On the other end of the spectrum
we have normal mdtests, which have the disadvantage that they do not
reflect the reality of complex real-world code. We experience this
whenever we're surprised by an ecosystem report on a PR.

That said, these tests should not be seen as a replacement for either of
these things. For example, we should still strive to write detailed
self-contained mdtests for user-reported issues. But we might use this
new layer for regression tests, or simply as a debugging tool. It can
also serve as a tool to document our support for popular third-party
libraries.

## Test Plan

* I've been locally using this for a couple of weeks now.
* `uv run crates/ty_python_semantic/mdtest.py -e`
2025-12-08 11:44:20 +01:00
Dhruv Manilawala ac882f7e63
[ty] Handle various invalid explicit specializations for `ParamSpec` (#21821)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1788

## Test Plan

Add new mdtests.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-08 05:20:41 +00:00
Alex Waygood 857fd4f683
[ty] Add test case for fixed panic (#21832) 2025-12-07 15:58:11 +00:00
Charlie Marsh 285d6410d3
[ty] Avoid double-analyzing tuple in `Final` subscript (#21828)
## Summary

As-is, a single-element tuple gets destructured via:

```rust
let arguments = if let ast::Expr::Tuple(tuple) = slice {
    &*tuple.elts
} else {
    std::slice::from_ref(slice)
};
```

But then, because it's a single element, we call
`infer_annotation_expression_impl`, passing in the tuple, rather than
the first element.

Closes https://github.com/astral-sh/ty/issues/1793.
Closes https://github.com/astral-sh/ty/issues/1768.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-07 14:27:14 +00:00
Prakhar Pratyush cbff09b9af
[flake8-bandit] Fix false positive when using non-standard `CSafeLoader` path (S506). (#21830) 2025-12-07 11:40:46 +01:00
Louis Maddox 6e0e49eda8
Add minimal-size build profile (#21826)
This PR adds the same `minimal-size` profile as the `uv` repo workspace has

```toml
# Profile to build a minimally sized binary for uv-build
[profile.minimal-size]
inherits = "release"
opt-level = "z"
# This will still show a panic message, we only skip the unwind
panic = "abort"
codegen-units = 1
```
but removes its `panic = "abort"` setting

- As discussed in #21825

Compared to the ones pre-built via `uv tool install`, this builds 35%
smaller ruff and 24% smaller ty binaries
(as measured
[here](https://github.com/lmmx/just-pre-commit/blob/master/refresh_binaries.sh))
2025-12-06 13:19:04 -05:00
Charlie Marsh ef45c97dab
[ty] Allow `tuple[Any, ...]` to assign to `tuple[int, *tuple[int, ...]]` (#21803)
## Summary

Closes https://github.com/astral-sh/ty/issues/1750.
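
A minimal illustration of the assignment this permits (my own example, based on the title):

```python
from typing import Any

def takes_mixed(t: tuple[int, *tuple[int, ...]]) -> None: ...

def forward(t: tuple[Any, ...]) -> None:
    # `tuple[Any, ...]` is now considered assignable to the mixed tuple form.
    takes_mixed(t)
```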
2025-12-05 19:04:23 +00:00
Micha Reiser 9714c589e1
[ty] Support renaming import aliases (#21792) 2025-12-05 19:12:13 +01:00
Micha Reiser b2fb421ddd
[ty] Add redeclaration LSP tests (#21812) 2025-12-05 18:02:34 +00:00
Shunsuke Shibayama 2f05ffa2c8
[ty] more detailed description of "Size limit on unions of literals" in mdtest (#21804) 2025-12-05 17:34:39 +00:00
Dhruv Manilawala b623189560
[ty] Complete support for `ParamSpec` (#21445)
## Summary

Closes: https://github.com/astral-sh/ty/issues/157

This PR adds support for the following capabilities involving a
`ParamSpec` type variable:
- Representing `P.args` and `P.kwargs` in the type system
- Matching against a callable containing `P` to create a type mapping
- Specializing `P` against the stored parameters

The value of a `ParamSpec` type variable is being represented using
`CallableType` with a `CallableTypeKind::ParamSpecValue` variant. This
`CallableTypeKind` is expanded from the existing `is_function_like`
boolean flag. An `enum` is used as these variants are mutually
exclusive.

For context, an initial iteration made an attempt to expand the
`Specialization` to use `TypeOrParameters` enum that represents that a
type variable can specialize into either a `Type` or `Parameters` but
that increased the complexity of the code as all downstream usages would
need to handle both the variants appropriately. Additionally, we'd have
also need to establish an invariant that a regular type variable always
maps to a `Type` while a paramspec type variable always maps to a
`Parameters`.

I've intentionally left out checking and raising diagnostics when the `ParamSpec` type variable and its components are not being used correctly, to avoid scope increase; it can easily be done as a follow-up. This would also include the scoping rules, which I don't think a regular type variable implements either.

## Test Plan

Add new mdtest cases and update existing test cases.

Ran this branch on pyx, no new diagnostics.

### Ecosystem analysis

There's a case where in an annotated assignment like:
```py
type CustomType[P] = Callable[...]

def value[**P](...): ...

def another[**P](...):
	target: CustomType[P] = value
```
The type of `value` is a callable with a paramspec that's bound to `value`; `CustomType` is a type alias that's a callable, and the `P` used in its specialization is bound to `another`. Now, ty infers the type of `target` to be the same as `value` and does not use the declared type `CustomType[P]`. [This is the assignment](0980b9d9ab/src/async_utils/gen_transform.py (L108)) that I'm referring to, which then leads to errors in downstream usage. Pyright and mypy do seem to use the declared type.

There are multiple diagnostics in `dd-trace-py` that require support for `cls`.

I'm seeing a `Divergent` type for an example like the following, which ~~I'm not sure why, I'll look into it tomorrow~~ is because of a cycle as mentioned in https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974:
```py
from typing import Callable

def decorator[**P](c: Callable[P, int]) -> Callable[P, str]: ...

@decorator
def func(a: int) -> int: ...

# ((a: int) -> str) | ((a: Divergent) -> str)
reveal_type(func)
```

I ~~need to look into why are the parameters not being specialized
through multiple decorators in the following code~~ think this is also
because of the cycle mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974 and
the fact that we don't support `staticmethod` properly:
```py
from contextlib import contextmanager

class Foo:
    @staticmethod
    @contextmanager
    def method(x: int):
        yield

foo = Foo()
# ty: Revealed type: `() -> _GeneratorContextManager[Unknown, None, None]` [revealed-type]
reveal_type(foo.method)
```

There's some issue related to `Protocol`s that are generic over a `ParamSpec` in `starlette`, which might be related to https://github.com/astral-sh/ty/issues/1635, but I'm not sure. Here's a minimal example to reproduce:

<details><summary>Code snippet:</summary>
<p>

```py
from collections.abc import Awaitable, Callable, MutableMapping
from typing import Any, Callable, ParamSpec, Protocol

P = ParamSpec("P")

Scope = MutableMapping[str, Any]
Message = MutableMapping[str, Any]
Receive = Callable[[], Awaitable[Message]]
Send = Callable[[Message], Awaitable[None]]

ASGIApp = Callable[[Scope, Receive, Send], Awaitable[None]]

_Scope = Any
_Receive = Callable[[], Awaitable[Any]]
_Send = Callable[[Any], Awaitable[None]]

# Since `starlette.types.ASGIApp` type differs from `ASGIApplication` from `asgiref`
# we need to define a more permissive version of ASGIApp that doesn't cause type errors.
_ASGIApp = Callable[[_Scope, _Receive, _Send], Awaitable[None]]


class _MiddlewareFactory(Protocol[P]):
    def __call__(
        self, app: _ASGIApp, *args: P.args, **kwargs: P.kwargs
    ) -> _ASGIApp: ...


class Middleware:
    def __init__(
        self, factory: _MiddlewareFactory[P], *args: P.args, **kwargs: P.kwargs
    ) -> None:
        self.factory = factory
        self.args = args
        self.kwargs = kwargs


class ServerErrorMiddleware:
    def __init__(
        self,
        app: ASGIApp,
        value: int | None = None,
        flag: bool = False,
    ) -> None:
        self.app = app
        self.value = value
        self.flag = flag

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ...


# ty: Argument to bound method `__init__` is incorrect: Expected `_MiddlewareFactory[(...)]`, found `<class 'ServerErrorMiddleware'>` [invalid-argument-type]
Middleware(ServerErrorMiddleware, value=500, flag=True)
```

</p>
</details> 

### Conformance analysis

> ```diff
> -constructors_callable.py:36:13: info[revealed-type] Revealed type: `(...) -> Unknown`
> +constructors_callable.py:36:13: info[revealed-type] Revealed type: `(x: int) -> Unknown`
> ```

Requires return type inference i.e.,
https://github.com/astral-sh/ruff/pull/21551

> ```diff
> +constructors_callable.py:194:16: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:194:22: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:195:4: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | int]`
> +constructors_callable.py:195:9: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> ```

I might need to look into why this is happening...

> ```diff
> +generics_defaults.py:79:1: error[type-assertion-failure] Type `type[Class_ParamSpec[(str, int, /)]]` does not match asserted type `<class 'Class_ParamSpec'>`
> ```

which is on the following code
```py
DefaultP = ParamSpec("DefaultP", default=[str, int])

class Class_ParamSpec(Generic[DefaultP]): ...

assert_type(Class_ParamSpec, type[Class_ParamSpec[str, int]])
```

It's occurring because there's no equivalence relationship defined
between `ClassLiteral` and `KnownInstanceType::TypeGenericAlias` which
is what these types are.

Everything else looks good to me!
2025-12-05 22:00:06 +05:30
Micha Reiser f29436ca9e
[ty] Update benchmark dependencies (#21815) 2025-12-05 17:23:18 +01:00
Douglas Creager e42cdf8495
[ty] Carry generic context through when converting class into `Callable` (#21798)
When converting a class (whether specialized or not) into a `Callable`
type, we should carry through any generic context that the constructor
has. This includes both the generic context of the class itself (if it's
generic) and of the constructor methods (if they are separately
generic).

To help test this, this also updates the `generic_context` extension
function to work on `Callable` types and unions; and adds a new
`into_callable` extension function that works just like
`CallableTypeOf`, but on value forms instead of type forms.

Pulled this out of #21551 for separate review.
2025-12-05 08:57:21 -05:00
Alex Waygood 71a7a03ad4
[ty] Add more tests for renamings (#21810) 2025-12-05 12:41:31 +00:00
Alex Waygood 48f7f42784
[ty] Minor improvements to `assert_type` diagnostics (#21811) 2025-12-05 12:33:30 +00:00
Micha Reiser 3deb7e1b90
[ty] Add some attribute/method renaming test cases (#21809) 2025-12-05 11:56:28 +01:00
mahiro 5df8a959f5
Update mkdocs-material to 9.7.0 (Insiders now free) (#21797) 2025-12-05 08:53:08 +01:00
Dhruv Manilawala 6f03afe318
Remove unused whitespaces in test cases (#21806)
These aren't used in the tests themselves. There are more instances of them in other files, but those require code changes, so I've left them as they are.
2025-12-05 12:51:40 +05:30
Shunsuke Shibayama 1951f1bbb8
[ty] fix panic when instantiating a type variable with invalid constraints (#21663) 2025-12-04 18:48:38 -08:00
Shunsuke Shibayama 10de342991
[ty] fix build failure caused by conflicts between #21683 and #21800 (#21802) 2025-12-04 18:20:24 -08:00
Shunsuke Shibayama 3511b7a06b
[ty] do nothing with `store_expression_type` if `inner_expression_inference_state` is `Get` (#21718)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1688

## Test Plan

N/A
2025-12-04 18:05:41 -08:00
Shunsuke Shibayama f3e5713d90
[ty] increase the limit on the number of elements in a non-recursively defined literal union (#21683)
## Summary

Closes https://github.com/astral-sh/ty/issues/957

As explained in https://github.com/astral-sh/ty/issues/957, literal union types for recursively defined values can be widened early to speed up the convergence of fixed-point iterations. This PR achieves this by embedding a marker in `UnionType` that distinguishes whether a value is recursively defined.

This also allows us to identify values that are not recursively defined, so I've increased the limit on the number of elements in a literal union type for such values.
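
A rough illustration of the distinction, using hypothetical code (what exactly counts as "recursively defined" is determined by ty's fixed-point machinery, not by this sketch):

```python
import random

def cond() -> bool:
    return random.random() < 0.5

# Recursively defined: `x` depends on its own previous value inside the loop,
# so its growing literal union is widened early to help the fixed-point
# iteration converge.
x = 0
while cond():
    x = x + 1

# Not recursively defined: this union of literals does not feed back into
# itself, so it can keep more elements before being widened.
y = 1 if cond() else 2 if cond() else 3
```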

Edit: while this PR doesn't provide the significant performance
improvement initially hoped for, it does have the benefit of allowing
the number of elements in a literal union to be raised above the salsa
limit, and indeed mypy_primer results revealed that a literal union of
220 elements was actually being used.

## Test Plan

`call/union.md` has been updated
2025-12-04 18:01:48 -08:00
Carl Meyer a9de6b5c3e
[ty] normalize typevar bounds/constraints in cycles (#21800)
Fixes https://github.com/astral-sh/ty/issues/1587

## Summary

Perform cycle normalization on typevar bounds and constraints (similar
to how it was already done for typevar defaults) in order to ensure
convergence in cyclic cases.

There might be another fix here that could avoid the cycle in many more
cases, where we don't eagerly evaluate typevar bounds/constraints on
explicit specialization, but just accept the given specialization and
later evaluate to see whether we need to emit a diagnostic on it. But
the current fix here is sufficient to solve the problem and matches the
patterns we use to ensure cycle convergence elsewhere, so it seems good
for now; left a TODO for the other idea.

This fix is sufficient to make us not panic, but not sufficient to get
the semantics fully correct; see the TODOs in the tests. I have ideas
for fixing that as well, but it seems worth at least getting this in to
fix the panic.

## Test Plan

Test that previously panicked now does not.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-04 15:17:57 -08:00
Andrew Gallant 06415b1877 [ty] Update completion eval to include modules
Our parsing and confirming of symbol names is highly suspect, but
I think it's fine for now.
2025-12-04 17:37:37 -05:00
Andrew Gallant 518d11b33f [ty] Add modules to auto-import
This makes auto-import include modules in suggestions.

In this initial implementation, we permit this to include submodules as
well. This is in contrast to what we do in `import ...` completions.
It's easy to change this behavior, but I think it'd be interesting to
run with this for now to see how well it works.
2025-12-04 17:37:37 -05:00
Andrew Gallant da94b99248 [ty] Add support for module-only import requests
The existing importer functionality always required
an import request with a module and a member in that
module. But we want to be able to insert import statements
for a module itself and not any members in the module.

This is basically changing `member: &str` to an
`Option<&str>` and fixing the fallout in a way that
makes sense for module-only imports.
2025-12-04 17:37:37 -05:00
Andrew Gallant 3c2cf49f60 [ty] Refactor auto-import symbol info
This just encapsulates the representation so that
we can make changes to it more easily.
2025-12-04 17:37:37 -05:00
Andrew Gallant fdcb5a7e73 [ty] Clarify the use of `SymbolKind` in auto-import 2025-12-04 13:21:26 -05:00
Andrew Gallant 6a025d1925 [ty] Redact ranking of completions from e2e LSP tests
I think changes to this value are generally noise. It's hard to tell
what it means and it isn't especially actionable. We already have an
eval running in CI for completion ranking, so I don't think it's
terribly important to care about ranking here in e2e tests _generally_.
2025-12-04 13:21:26 -05:00
Andrew Gallant f054e7edf8 [ty] Tweaks tests to use clearer language
A completion lacking a module reference doesn't necessarily mean that
the symbol is defined within the current module. I believe the intent
here is that it means that no import is required to use it.
2025-12-04 13:21:26 -05:00
Andrew Gallant e154efa229 [ty] Update evaluation results
These are all improvements here with one slight regression on
`reveal_type` ranking. The previous completions offered were:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: _sqlite3)
SupportsGetItemViewable (module: _typeshed)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 8/10)
_remove_original_values (module: _osx_support)
_remove_universal_flags (module: _osx_support)
-----
found 10 completions
```

And now they are:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3.dbapi2)
removeHandler (module: unittest)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 9/9)
-----
found 9 completions
```

Some completions were removed (because they are now considered unexported) and some were added (likely due to better re-export support).

This particular case probably warrants more special attention anyway.
So I think this is fine. (It's only a one-ranking regression.)
2025-12-04 13:21:26 -05:00
Andrew Gallant 32f400a457 [ty] Make auto-import ignore symbols in modules starting with a `_`
This applies recursively. So if *any* component of a module name starts
with a `_`, then symbols from that module are excluded from auto-import.

The exception is when it's a module within first party code. Then we
want to include it in auto-import.
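
A hypothetical Python sketch of the rule (the actual logic lives in ty's Rust auto-import code; the names here are mine):

```python
def is_excluded_from_auto_import(module_name: str, *, is_first_party: bool) -> bool:
    # First-party modules stay eligible even if private by convention.
    if is_first_party:
        return False
    # Otherwise, exclude the module if *any* dotted component starts with `_`.
    return any(part.startswith("_") for part in module_name.split("."))

assert is_excluded_from_auto_import("pkg._internal.util", is_first_party=False)
assert not is_excluded_from_auto_import("pkg.public", is_first_party=False)
assert not is_excluded_from_auto_import("myapp._private", is_first_party=True)
```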
2025-12-04 13:21:26 -05:00
Andrew Gallant 2a38395bc8 [ty] Add some tests for re-exports and `__all__` to completions
Note that the `Deprecated` symbols from `importlib.metadata` are no
longer offered because 1) `importlib.metadata` defined `__all__` and 2)
the `Deprecated` symbols aren't in it. These seem to not be a part of
its public API according to the docs, so this seems right to me.
2025-12-04 13:21:26 -05:00
Andrew Gallant 8c72b296c9 [ty] Add support for re-exports and `__all__` to auto-import
This commit (mostly) re-implements the support for `__all__` in
ty-proper, but inside the auto-import AST scanner.

When `__all__` isn't present in a module, we fall back to conventions to
determine whether a symbol is exported or not:
https://docs.python.org/3/library/index.html

However, in keeping with current practice for non-auto-import
completions, we continue to provide sunder and dunder names as
re-exports.

When `__all__` is present, we respect it strictly. That is, a symbol is
exported *if and only if* it's in `__all__`. This is somewhat stricter
than pylance seemingly is. I felt like it was a good idea to start here,
and we can relax it based on user demand (perhaps through a setting).
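
A hedged Python sketch of the export rule described above (hypothetical helper names; the real implementation lives in the auto-import AST scanner):

```python
def _is_sunder_or_dunder(name: str) -> bool:
    return name.startswith("_") and name.endswith("_")

def is_exported(name: str, dunder_all: set[str] | None) -> bool:
    if dunder_all is not None:
        # `__all__` is respected strictly: exported if and only if listed.
        return name in dunder_all
    # Convention fallback: underscore-prefixed names are private, but sunder
    # and dunder names are still offered, matching existing completions.
    return not name.startswith("_") or _is_sunder_or_dunder(name)

assert is_exported("spam", None)
assert not is_exported("_helper", None)
assert is_exported("__version__", None)
assert not is_exported("spam", {"eggs"})
```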
2025-12-04 13:21:26 -05:00
Andrew Gallant 086f1e0b89 [ty] Skip over expressions in auto-import AST scanning 2025-12-04 13:21:26 -05:00
Andrew Gallant 5da45f8ec7 [ty] Simplify auto-import AST visitor slightly and add tests
This simplifies the existing visitor by DRYing it up slightly.
We also add tests for the existing functionality. In particular,
we want to add support for re-export conventions, and that
warrants more careful testing.
2025-12-04 13:21:26 -05:00
Andrew Gallant 62f20b1e86 [ty] Re-arrange imports in symbol extraction
I like using a qualified `ast::` prefix for things from
`ruff_python_ast`, so switch over to that convention.
2025-12-04 13:21:26 -05:00
Aria Desires cccb0bbaa4
[ty] Add tests for implicit submodule references (#21793)
## Summary

I realized we don't really test `DefinitionKind::ImportFromSubmodule` in
the IDE at all, so here's a bunch of them, just recording our current
behaviour.

## Test Plan

*stares at the camera*
2025-12-04 15:46:23 +00:00
Brent Westbrook 9d4f1c6ae2
Bump 0.14.8 (#21791) 2025-12-04 09:45:53 -05:00
Micha Reiser 326025d45f
[ty] Always register rename provider if client doesn't support dynamic registration (#21789) 2025-12-04 14:40:16 +01:00
Micha Reiser 3aefe85b32
[ty] Ensure `rename` `CursorTest` calls `can_rename` before renaming (#21790) 2025-12-04 14:19:48 +01:00
Dhruv Manilawala b8ecc83a54
Fix clippy errors on `main` (#21788)
https://github.com/astral-sh/ruff/actions/runs/19922070773/job/57112827024#step:5:62
2025-12-04 16:20:37 +05:30
Aria Desires 6491932757
[ty] Fix crash when hovering an unknown string annotation (#21782)
## Summary

I have no idea what I'm doing with the fix (all the interesting stuff is
in the second commit).

The basic problem is the compiler emits the diagnostic:

```
x: "foobar"
    ^^^^^^
```

Which the suppression code-action hands the end of to `Tokens::after`
which then panics because that function panics if handed an offset that
is in the middle of a token.

Fixes https://github.com/astral-sh/ty/issues/1748

## Test Plan

Many tests added (only the e2e test matters).
2025-12-04 09:11:40 +01:00
Micha Reiser a9f2bb41bd
[ty] Don't send publish diagnostics for clients supporting pull diagnostics (#21772) 2025-12-04 08:12:04 +01:00
Aria Desires e2b72fbf99
[ty] cleanup test path (#21781)
Fixes
https://github.com/astral-sh/ruff/pull/21745#discussion_r2586552295
2025-12-03 21:54:50 +00:00
Alex Waygood 14fce0d440
[ty] Improve the display of various special-form types (#21775) 2025-12-03 21:19:59 +00:00
Alex Waygood 8ebecb2a88
[ty] Add subdiagnostic hint if the user wrote `X = Any` rather than `X: Any` (#21777) 2025-12-03 20:42:21 +00:00
Aria Desires 45ac30a4d7
[ty] Teach `ty` the meaning of desperation (try ancestor `pyproject.toml`s as search-paths if module resolution fails) (#21745)
## Summary

This makes the importing file a required argument to module resolution. If the fast-path cached query fails to resolve the module, we take the slow-path, uncached (it could be cached if we want) `desperately_resolve_module`, which walks up from the importing file until it finds a `pyproject.toml` (an arbitrary decision; we could try every ancestor directory), at which point it takes one last desperate attempt to use that directory as a search-path. We do not continue walking up once we've found a `pyproject.toml` (also an arbitrary decision; we could keep going up).

Running locally, this fixes every broken-for-workspace-reasons import in
pyx's workspace!
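
A rough Python sketch of the fallback walk (illustrative only; the actual resolver is in Rust and does considerably more):

```python
from pathlib import Path

def desperate_search_path(importing_file: Path) -> Path | None:
    # Climb from the importing file; the first ancestor directory containing a
    # `pyproject.toml` becomes the last-resort search path. We stop at the
    # first hit rather than continuing to walk up.
    for ancestor in importing_file.parents:
        if (ancestor / "pyproject.toml").is_file():
            return ancestor
    return None

# e.g. for /repo/packages/app/src/app/main.py this would return
# /repo/packages/app if that directory holds a pyproject.toml.
```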

* Fixes https://github.com/astral-sh/ty/issues/1539
* Improves https://github.com/astral-sh/ty/issues/839

## Test Plan

The workspace tests see a huge improvement on most absolute imports.
2025-12-03 15:04:36 -05:00
Alex Waygood 0280949000
[ty] fix panic when attempting to infer the variance of a PEP-695 class that depends on recursive type aliases and also somehow protocols (#21778)
Fixes https://github.com/astral-sh/ty/issues/1716.

## Test plan

I added a corpus snippet that causes us to panic on `main` (I tested by
running `cargo run -p ty_python_semantic --test=corpus` without the fix
applied).
2025-12-03 19:01:42 +00:00
Bhuminjay Soni c722f498fe
[`flake8-bugbear`] Catch `yield` expressions within other statements (`B901`) (#21200)
## Summary

This PR re-implements [return-in-generator
(B901)](https://docs.astral.sh/ruff/rules/return-in-generator/#return-in-generator-b901)
for async generators as a semantic syntax error. This is not a syntax
error for sync generators, so we'll need to preserve both the lint rule
and the syntax error in this case.

It also updates B901 and the new implementation to catch cases where the
generator's `yield` or `yield from` expression is part of another
statement, as in:

```py
def foo():
    return (yield)
```

These were previously not caught because we only looked for
`Stmt::Expr(Expr::Yield)` in `visit_stmt` instead of visiting `yield`
expressions directly. I think this modification is within the spirit of
the rule and safe to try out since the rule is in preview.

## Test Plan

I have written tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-03 12:05:15 -05:00
David Peter 1f4f8d9950
[ty] Fix flow of associated member states during star imports (#21776)
## Summary

Star-imports don't just affect the state of the symbols that they pull in; they can also affect the state of members that are associated with those symbols. For example, if `obj.attr` was previously narrowed from `int | None` to `int`, and a star-import now overwrites `obj`, then the narrowing on `obj.attr` should be "reset".

This PR keeps track of the state of associated members during star
imports and properly models the flow of their corresponding state
through the control flow structure that we artificially create for
star-imports.

See [this
comment](https://github.com/astral-sh/ty/issues/1355#issuecomment-3607125005)
for an explanation why this caused ty to see certain `asyncio` symbols
as not being accessible on Python 3.14.

closes https://github.com/astral-sh/ty/issues/1355

## Ecosystem impact

```diff
async-utils (https://github.com/mikeshardmind/async-utils)
- src/async_utils/bg_loop.py:115:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
- Found 30 diagnostics
+ Found 29 diagnostics

mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ mitmproxy/utils/asyncio_utils.py:96:60: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
- test/conftest.py:37:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
```

All of these seem to be correct: they give us a different type for `asyncio` symbols that are now imported from different `sys.version_info` branches (where we previously failed to recognize some of these branches as statically true/false).

```diff
dd-trace-py (https://github.com/DataDog/dd-trace-py)
- ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Coroutine[Any, Any, _T@create_task] | Generator[Any, None, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
+ ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Generator[Any, None, _T@create_task] | Coroutine[Any, Any, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
```

Similar, but only results in a diagnostic change.

## Test Plan

Added a regression test
2025-12-03 17:52:31 +01:00
William Woodruff 4488e9d47d
Revert "Enable PEP 740 attestations when publishing to PyPI" (#21768) 2025-12-03 11:07:29 -05:00
github-actions[bot] b08f0b2caa
[ty] Sync vendored typeshed stubs (#21715)
Co-authored-by: typeshedbot <>
Co-authored-by: David Peter <mail@david-peter.de>
2025-12-03 15:49:51 +00:00
David Peter d6e472f297
[ty] Reachability constraints: minor documentation fixes (#21774) 2025-12-03 16:40:11 +01:00
Douglas Creager 45842cc034
[ty] Fix non-determinism in `ConstraintSet.specialize_constrained` (#21744)
This fixes a non-determinism that we were seeing in the constraint set
tests in https://github.com/astral-sh/ruff/pull/21715.

In this test, we create the following constraint set, and then try to
create a specialization from it:

```
(T@constrained_by_gradual_list = list[Base])
  ∨
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
```

That is, `T` is either specifically `list[Base]`, or it's any `list`.
Our current heuristics say that, absent other restrictions, we should
specialize `T` to the more specific type (`list[Base]`).

In the correct test output, we end up creating a BDD that looks like
this:

```
(T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ (Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
    ┡━₁ always
    └─₀ never
```

In the incorrect output, the BDD looks like this:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```

The difference is the ordering of the two individual constraints. Both
constraints appear in the first BDD, but the second BDD only contains `T
is any list`. If we were to force the second BDD to contain both
constraints, it would look like this:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
    ┡━₁ always
    └─₀ never
```

This is the standard shape for an OR of two constraints. However! Those
two constraints are not independent of each other! If `T` is
specifically `list[Base]`, then it's definitely also "any `list`". From
that, we can infer the contrapositive: that if `T` is not any list, then
it cannot be `list[Base]` specifically. When we encounter impossible
situations like that, we prune that path in the BDD, and treat it as
`false`. That rewrites the second BDD to the following:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
    ┡━₁ never   <-- IMPOSSIBLE, rewritten to never
    └─₀ never
```

We then would see that that BDD node is redundant, since both of its
outgoing edges point at the `never` node. Our BDDs are _reduced_, which
means we have to remove that redundant node, resulting in the BDD we saw
above:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never       <-- redundant node removed
```

The end result is that we were "forgetting" about the `T = list[Base]`
constraint, but only for some BDD variable orderings.

To fix this, I'm leaning in to the fact that our BDDs really do need to
"remember" all of the constraints that they were created with. Some
combinations might not be possible, but we now have the sequent map,
which is quite good at detecting and pruning those.

So now our BDDs are _quasi-reduced_, which just means that redundant
nodes are allowed. (At first I was worried that allowing redundant nodes
would be an unsound "fix the glitch". But it turns out they're real!
[This](https://ieeexplore.ieee.org/abstract/document/130209) is the
paper that introduces them, though it's very difficult to read. Knuth
mentions them in §7.1.4 of
[TAOCP](https://course.khoury.northeastern.edu/csu690/ssl/bdd-knuth.pdf),
and [this paper](https://par.nsf.gov/servlets/purl/10128966) has a nice
short summary of them in §2.)

While we're here, I've added a bunch of `debug` and `trace` level log messages to the constraint set implementation. I was getting tired of having to add these by hand over and over. To enable them, just set `TY_LOG` in your environment, e.g.

```sh
env TY_LOG=ty_python_semantic::types::constraints::SequentMap=trace ty check ...
```

[Note, this has an `internal` label because we are still not using `specialize_constrained` in anything user-facing yet.]
2025-12-03 10:19:39 -05:00
Alex Waygood cd079bd92e
[ty] Improve `@override`, `@final` and Liskov checks in cases where there are multiple reachable definitions (#21767) 2025-12-03 12:51:36 +00:00
Alex Waygood 5756b3809c
[ty] Extend `invalid-explicit-override` to also cover properties decorated with `@override` that do not override anything (#21756) 2025-12-03 11:27:47 +00:00
Micha Reiser 92c5f62ec0
[ty] Enable LRU collection for parsed module (#21749) 2025-12-03 12:16:18 +01:00
David Peter 21e5a57296
[ty] Support typevar-specialized dynamic types in generic type aliases (#21730)
## Summary

For a type alias like the one below, where `UnknownClass` is something
with a dynamic type, we previously lost track of the fact that this
dynamic type was explicitly specialized *with a type variable*. If that
alias is then later explicitly specialized itself (`MyAlias[int]`), we
would miscount the number of legacy type variables and emit an `invalid-type-arguments` diagnostic
([playground](https://play.ty.dev/886ae6cc-86c3-4304-a365-510d29211f85)).
```py
T = TypeVar("T")

MyAlias: TypeAlias = UnknownClass[T] | None
```
The solution implemented here is not pretty, but we can hopefully get
rid of it via https://github.com/astral-sh/ty/issues/1711. Also, once we
properly support `ParamSpec` and `Concatenate`, we should be able to
remove some of this code.

This addresses many of the `invalid-type-arguments` false-positives in
https://github.com/astral-sh/ty/issues/1685. With this change, there are
still some diagnostics of this type left. Instead of implementing even
more (rather sophisticated) workarounds for these cases as well, it
might be much easier to wait for full `ParamSpec`/`Concatenate` support
and then try again.

A disadvantage of this implementation is that we lose track of some
`@Todo` types and replace them with `Unknown`. We could spend more
effort and try to preserve them, but I'm unsure if this is the best use
of our time right now.

## Test Plan

New Markdown tests.
2025-12-03 10:00:02 +01:00
Denys Zhak f4e4229683
Add token based `parenthesized_ranges` implementation (#21738)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-03 08:15:17 +00:00
David Peter e6ddeed386
[ty] Default-specialization of generic type aliases (#21765)
## Summary

Implement default-specialization of generic type aliases (implicit or
PEP-613) if they are used in a type expression without an explicit
specialization.

closes https://github.com/astral-sh/ty/issues/1690
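
A small illustrative example of the behaviour (my own; typevar defaults require Python 3.13+ or `typing_extensions`):

```python
from typing_extensions import TypeAlias, TypeVar

T = TypeVar("T", default=int)

# A PEP-613 type alias that is generic over `T`.
Numbers: TypeAlias = list[T]

# Used in a type expression without an explicit specialization, the alias is
# now default-specialized, i.e. treated as `list[int]` here.
def total(xs: Numbers) -> int:
    return sum(xs)
```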

## Typing conformance

```diff
-generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, DefaultStrT]`
```

That's exactly what we want ✔️ 

All other tests in this file pass as well, with the exception of this
assertion, which is just wrong (at least according to our
interpretation, `type[Bar] != <class 'Bar'>`). I checked that we do
correctly default-specialize the type parameter which is not displayed
in the diagnostic that we raise.
```py
class Bar(SubclassMe[int, DefaultStrT]): ...

assert_type(Bar, type[Bar[str]])  # ty: Type `type[Bar[str]]` does not match asserted type `<class 'Bar'>`
```

## Ecosystem impact

Looks like I should have included this last week 😎 

## Test Plan

Updated pre-existing tests and add a few new ones.
2025-12-03 09:10:45 +01:00
Alex Waygood c5b8d551df
[ty] Suppress false positives when `dataclasses.dataclass(...)(cls)` is called imperatively (#21729)
Fixes https://github.com/astral-sh/ty/issues/1705
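
For reference, the imperative pattern in question looks like this (illustrative example, not taken from the issue):

```python
import dataclasses

class Point:
    x: int
    y: int

# The decorator is applied imperatively rather than with `@dataclasses.dataclass(...)`.
Point = dataclasses.dataclass(frozen=True)(Point)

p = Point(x=1, y=2)
```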
2025-12-03 08:05:25 +00:00
Bhuminjay Soni f68080b55e
[syntax-error] Default type parameter followed by non-default type parameter (#21657)
## Summary

This PR implements syntax error where a default type parameter is
followed by a non-default type parameter.
https://github.com/astral-sh/ruff/issues/17412#issuecomment-3584088217


## Test Plan

I have written inline tests as directed in #17412

---------

Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Signed-off-by: 11happy <soni5happy@gmail.com>
2025-12-03 12:01:31 +05:30
Amethyst Reese abaa49f552
new module for parsing ranged suppressions (#21441)
This adds a new `suppression` module to the `ruff_linter` crate, similar
to the suppression
module for ty, to parse comments for ruff suppression directives, such
as `# ruff: disable[CODE]`.
2025-12-02 15:39:59 -08:00
Ibraheem Ahmed 7b0aab1696
[ty] `type[T]` is assignable to an inferable typevar (#21766)
## Summary

Resolves https://github.com/astral-sh/ty/issues/1712.
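
A hypothetical example of the kind of call this should accept (my reading of the title; not taken from the PR):

```python
def identity[U](x: U) -> U:
    return x

def forward[T](cls: type[T]) -> type[T]:
    # `type[T]` is assignable to the inferable typevar `U` of `identity`,
    # which gets solved as `type[T]`.
    return identity(cls)
```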
2025-12-02 18:25:09 -05:00
Brent Westbrook 2250fa6f98
Fix syntax error false positives for `await` outside functions (#21763)
## Summary

Fixes #21750 and a related bug in `PLE1142`. We were not properly
considering generators to be valid `await` contexts, which caused the
`F704` issue. One of the tests I added for this also uncovered an issue
in `PLE1142` for comprehensions nested within async generators because
we were only checking the current scope rather than traversing the
nested context.

## Test Plan

Both of these rules are implemented as semantic syntax errors, so I
added tests (and fixes) in both Ruff and ty.
2025-12-02 21:02:02 +00:00
Alex Waygood 392a8e4e50
[ty] Improve diagnostics for unsupported comparison operations (#21737) 2025-12-02 19:58:45 +00:00
Micha Reiser 515de2d062
Move `Token`, `TokenKind` and `Tokens` to `ruff-python-ast` (#21760) 2025-12-02 20:10:46 +01:00
Douglas Creager 508c0a0861
[ty] Don't confuse multiple occurrences of `typing.Self` when binding bound methods (#21754)
In the following example, there are two occurrences of `typing.Self`,
one for `Foo.foo` and one for `Bar.bar`:

```py
from typing import Self, reveal_type

class Foo[T]:
    def foo(self: Self) -> T:
        raise NotImplementedError

class Bar:
    def bar(self: Self, x: Foo[Self]):
        # SHOULD BE: bound method Foo[Self@bar].foo() -> Self@bar
        # revealed: bound method Foo[Self@bar].foo() -> Foo[Self@bar]
        reveal_type(x.foo)

def f[U: Bar](x: Foo[U]):
    # revealed: bound method Foo[U@f].foo() -> U@f
    reveal_type(x.foo)
```

When accessing a bound method, we replace any occurrences of `Self` with
the bound `self` type.

We were doing this correctly for the second reveal. We would first apply the specialization, getting `(self: Self@foo) -> U@f` as the signature of `x.foo`. We would then bind the `self` parameter, substituting `Self@foo` with `Foo[U@f]` as part of that. The return type was already specialized to `U@f`, so that substitution had no further effect on the type that we revealed.

In the first reveal, we would follow the same process, but we confused
the two occurrences of `Self`. We would first apply the specialization,
getting `(self: Self@foo) -> Self@bar` as the method signature. We would
then try to bind the `self` parameter, substituting `Self@foo` with
`Foo[Self@bar]`. However, because we didn't distinguish the two separate `Self`s, we applied the substitution to the return type as well as to the `self` parameter.

The fix is to track which particular `Self` we're trying to substitute
when applying the type mapping.

Fixes https://github.com/astral-sh/ty/issues/1713
2025-12-02 13:15:09 -05:00
William Woodruff 0d2792517d
Use our org-wide Renovate preset (#21759) 2025-12-02 13:05:26 -05:00
Alex Waygood 05d053376b
Delete `my-script.py` (#21751) 2025-12-02 14:48:01 +00:00
Alex Waygood ac2552b11b
[ty] Move `all_members`, and related types/routines, out of `ide_support.rs` (#21695) 2025-12-02 14:45:24 +00:00
Micha Reiser 644096ea8a
[ty] Fix find-references for import aliases (#21736) 2025-12-02 14:37:50 +01:00
Aria Desires 015ab9e576
[ty] add tests for workspaces (#21741)
Here are a bunch of (variously failing and passing) mdtests that reflect
the kinds of issues people encounter when running ty over an entire
workspace without sufficient hand-holding (especially because in the IDE
it is unclear *how* to provide that hand-holding).
2025-12-02 06:43:41 -05:00
Douglas Creager cf4196466c
[ty] Stop testing the (brittle) constraint set display implementation (#21743)
The `Display` implementation for constraint sets is brittle, and
deserves a rethink. But later! It's perfectly fine for printf debugging;
we just shouldn't be writing mdtests that depend on any particular
rendering details. Most of these tests can be replaced with an
equivalence check that actually validates that the _behavior_ of two
constraint sets are identical.
2025-12-02 09:17:29 +01:00
Micha Reiser 2182c750db
[ty] Use generator over list comprehension to avoid cast (#21748) 2025-12-02 08:47:47 +01:00
Charlie Marsh 72304b01eb
[ty] Add a diagnostic for prohibited `NamedTuple` attribute overrides (#21717)
## Summary

Closes https://github.com/astral-sh/ty/issues/1684.
2025-12-01 21:46:58 -05:00
Ibraheem Ahmed ec854c7199
[ty] Fix subtyping with `type[T]` and unions (#21740)
## Summary

Resolves
https://github.com/astral-sh/ruff/pull/21685#issuecomment-3591695954.
2025-12-01 18:20:13 -05:00
William Woodruff edc6ed5077
Use `npm ci --ignore-scripts` everywhere (#21742) 2025-12-01 17:13:52 -05:00
Dan Parizher f052bd644c
[`flake8-simplify`] Fix truthiness assumption for non-iterable arguments in tuple/list/set calls (`SIM222`, `SIM223`) (#21479)
## Summary

Fixes false positives in SIM222 and SIM223 where truthiness was
incorrectly assumed for `tuple(x)`, `list(x)`, `set(x)` when `x` is not
iterable.

Fixes #21473.

## Problem

`Truthiness::from_expr` recursively called itself on arguments to
iterable initializers (`tuple`, `list`, `set`) without checking if the
argument is iterable, causing false positives for cases like `tuple(0)
or True` and `tuple("") or True`.

## Approach

Added `is_definitely_not_iterable` helper and updated
`Truthiness::from_expr` to return `Unknown` for non-iterable arguments
(numbers, booleans, None) and string literals when called with iterable
initializers, preventing incorrect truthiness assumptions.
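
For example (hedged snippet mirroring the cases named above):

```
# Previously flagged by SIM222 as always-true. But `tuple(0)` raises
# `TypeError: 'int' object is not iterable`, so its truthiness is now
# treated as `Unknown` and no diagnostic is emitted.
value = tuple(0) or True

# String-literal arguments are also treated conservatively now.
flag = tuple("") or True
```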

## Test Plan

Added test cases to `SIM222.py` and `SIM223.py` for `tuple("")`,
`tuple(0)`, `tuple(1)`, `tuple(False)`, and `tuple(None)` with `or True`
and `and False` patterns.

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-01 16:57:51 -05:00
Dan Parizher bc44dc2afb
[`flake8-use-pathlib`] Mark fixes unsafe for return type changes (`PTH104`, `PTH105`, `PTH109`, `PTH115`) (#21440)
## Summary

Marks fixes as unsafe when they change return types (`None` → `Path`,
`str`/`bytes` → `Path`, `str` → `Path`), except when the call is a
top-level expression.

Fixes #21431.

## Problem

Fixes for `os.rename`, `os.replace`, `os.getcwd`/`os.getcwdb`, and
`os.readlink` were marked safe despite changing return types, which can
break code that uses the return value.

## Approach

Added `is_top_level_expression_call` helper to detect when a call is a
top-level expression (return value unused). Updated
`check_os_pathlib_two_arg_calls` and `check_os_pathlib_single_arg_calls`
to mark fixes as unsafe unless the call is a top-level expression.
Updated PTH109 to use the helper for applicability determination.
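
As a hedged illustration (invented snippet, not from the PR):

```
import os
from pathlib import Path

cwd = os.getcwd()        # returns `str`
cwd = Path.cwd()         # PTH109's fix: now returns `Path`
# Code that relied on `str` methods (e.g. `cwd.endswith("src")`) would break,
# so the fix is marked unsafe when the return value is used.

os.replace("a.txt", "b.txt")      # top-level expression: return value unused,
Path("a.txt").replace("b.txt")    # so the rewritten call remains a safe fix.
```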

## Test Plan

Updated snapshots for `preview_full_name.py`, `preview_import_as.py`,
`preview_import_from.py`, and `preview_import_from_as.py` to reflect
unsafe markers.

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-12-01 15:26:55 -05:00
Andrew Gallant 52f59c5c39 [ty] Fix auto-import code action to handle pre-existing import
Previously, the code action to do auto-import on a pre-existing symbol
assumed that the auto-importer would always generate an import
statement. But sometimes an import statement already exists.

A good example of this is the following snippet:

```
import warnings

@deprecated
def myfunc(): pass
```

Specifically, `deprecated` exists in `warnings` but isn't currently
imported. A code action to fix this could feasibly do two
transformations here. One is:

```
import warnings

@warnings.deprecated
def myfunc(): pass
```

Another is:

```
from warnings import deprecated
import warnings

@deprecated
def myfunc(): pass
```

The existing auto-import infrastructure chooses the former, since it
reuses a pre-existing import statement. But this PR chooses the latter
for the case of a code action. I'm not 100% sure this is the correct
choice, but it seems to defer more strongly to what the user has typed.
That is, they want to use the symbol unqualified because that's what they
typed. So we should add the necessary import statement to make that work.

Fixes astral-sh/ty#1668
2025-12-01 14:20:47 -05:00
William Woodruff 53299cbff4
Enable PEP 740 attestations when publishing to PyPI (#21735) 2025-12-01 13:15:20 -05:00
Micha Reiser 3738ab1c46
[ty] Fix find references for type defined in stub (#21732) 2025-12-01 17:53:45 +01:00
Micha Reiser b4f618e180
Use OIDC instead of codspeed token (#21719) 2025-12-01 17:51:34 +01:00
Andrew Gallant a561e6659d [ty] Exclude `typing_extensions` from completions unless it's really available
This works by adding a third module resolution mode that lets the caller
opt into _some_ shadowing of modules that is otherwise not allowed (for
`typing` and `typing_extensions`).

Fixes astral-sh/ty#1658
2025-12-01 11:24:16 -05:00
Alex Waygood 0e651b50b7
[ty] Fix false positives for `class F(Generic[*Ts]): ...` (#21723) 2025-12-01 13:24:07 +00:00
David Peter 116fd7c7af
[ty] Remove `GenericAlias`-related todo type (#21728)
## Summary

If you manage to create a `typing.GenericAlias` instance without us
knowing how it was created, we don't know what to do with it in a type
annotation. So it's better to be explicit and show an error instead of
failing silently with a `@Todo` type.

## Test Plan

* New Markdown tests
* Zero ecosystem impact
2025-12-01 13:02:38 +00:00
David Peter 5358ddae88
[ty] Exhaustiveness checking for generic classes (#21726)
## Summary

We had tests for this already, but they used generic classes that were
bivariant in their type parameter, and so this case wasn't captured.
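
A hypothetical sketch of the kind of case that was previously missed (the original tests only used classes that ignore their type parameter):

```
from typing import assert_never

class Box[T]:
    value: T

class Empty: ...

def handle[T](x: Box[T] | Empty) -> None:
    match x:
        case Box():
            ...
        case Empty():
            ...
        case _:
            assert_never(x)  # should narrow to `Never` even though `Box`
                             # actually uses its type parameter
```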

closes https://github.com/astral-sh/ty/issues/1702

## Test Plan

Updated Markdown tests
2025-12-01 13:52:36 +01:00
Alex Waygood 3a11e714c6
[ty] Show the user where the type variable was defined in `invalid-type-arguments` diagnostics (#21727) 2025-12-01 12:25:49 +00:00
Alex Waygood a2096ee2cb
[ty] Emit `invalid-named-tuple` on namedtuple classes that have field names starting with underscores (#21697) 2025-12-01 11:36:02 +00:00
Micha Reiser 2e229aa8cb
[ty] LSP Benchmarks (#21625) 2025-12-01 11:33:53 +00:00
Carl Meyer c2773b4c6f
[ty] support `type[tuple[...]]` (#21652)
Fixes https://github.com/astral-sh/ty/issues/1649

## Summary

We missed this when adding support for `type[]` of a specialized
generic.

## Test Plan

Added mdtests.
2025-12-01 11:49:26 +01:00
David Peter bc6517a807
[ty] Add missing projects to `good.txt` (#21721)
## Summary

These projects from `mypy_primer` were missing from both `good.txt` and
`bad.txt` for some reason. I thought about writing a script that would
verify that `good.txt` + `bad.txt` = `mypy_primer.projects`, but that's
not completely trivial since there are projects like `cpython` that only
appear once in `good.txt`. Given that we can hopefully soon get rid of
both of these files (and always run on all projects), it's probably not
worth the effort. We are usually notified of all `mypy_primer` changes.

## Test Plan

CI on this PR
2025-12-01 11:18:41 +01:00
Kieran Ryan 4686c36079
docs: Output file option with GitLab integration (#21706)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-01 10:07:25 +00:00
Shunsuke Shibayama a6cbc138d2
[ty] remove the `visitor` parameter in the `recursive_type_normalized_impl` method (#21701) 2025-12-01 08:48:43 +01:00
renovate[bot] 846df40a6e
Update Swatinem/rust-cache action to v2.8.2 (#21710)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 08:03:17 +01:00
renovate[bot] c61e885527
Update salsa digest to 59aa107 (#21708)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 08:02:44 +01:00
renovate[bot] 13af584428
Update taiki-e/install-action action to v2.62.60 (#21711)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 08:02:09 +01:00
renovate[bot] 984480a586
Update tokio-tracing monorepo (#21712)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 08:01:14 +01:00
renovate[bot] aef056954b
Update actions/setup-python action to v6.1.0 (#21713)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 08:00:05 +01:00
renovate[bot] 5265af4eee
Update cargo-bins/cargo-binstall action to v1.16.2 (#21714)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 07:59:44 +01:00
renovate[bot] 5b32908920
Update CodSpeedHQ/action action to v4.4.1 (#21716)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 07:58:56 +01:00
renovate[bot] d8d1464d96
Update dependency ruff to v0.14.7 (#21709)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [ruff](https://docs.astral.sh/ruff) ([source](https://redirect.github.com/astral-sh/ruff), [changelog](https://redirect.github.com/astral-sh/ruff/blob/main/CHANGELOG.md)) | `==0.14.6` -> `==0.14.7` | [![age](https://developer.mend.io/api/mc/badges/age/pypi/ruff/0.14.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/ruff/0.14.6/0.14.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>astral-sh/ruff (ruff)</summary>

###
[`v0.14.7`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0147)

[Compare
Source](https://redirect.github.com/astral-sh/ruff/compare/0.14.6...0.14.7)

Released on 2025-11-28.

##### Preview features

- \[`flake8-bandit`] Handle string literal bindings in
suspicious-url-open-usage (`S310`)
([#&#8203;21469](https://redirect.github.com/astral-sh/ruff/pull/21469))
- \[`pylint`] Fix `PLR1708` false positives on nested functions
([#&#8203;21177](https://redirect.github.com/astral-sh/ruff/pull/21177))
- \[`pylint`] Fix suppression for empty dict without tuple key
annotation (`PLE1141`)
([#&#8203;21290](https://redirect.github.com/astral-sh/ruff/pull/21290))
- \[`ruff`] Add rule `RUF066` to detect unnecessary class properties
([#&#8203;21535](https://redirect.github.com/astral-sh/ruff/pull/21535))
- \[`ruff`] Catch more dummy variable uses (`RUF052`)
([#&#8203;19799](https://redirect.github.com/astral-sh/ruff/pull/19799))

##### Bug fixes

- \[server] Set severity for non-rule diagnostics
([#&#8203;21559](https://redirect.github.com/astral-sh/ruff/pull/21559))
- \[`flake8-implicit-str-concat`] Avoid invalid fix in (`ISC003`)
([#&#8203;21517](https://redirect.github.com/astral-sh/ruff/pull/21517))
- \[`parser`] Fix panic when parsing IPython escape command expressions
([#&#8203;21480](https://redirect.github.com/astral-sh/ruff/pull/21480))

##### CLI

- Show partial fixability indicator in statistics output
([#&#8203;21513](https://redirect.github.com/astral-sh/ruff/pull/21513))

##### Contributors

- [@&#8203;mikeleppane](https://redirect.github.com/mikeleppane)
- [@&#8203;senekor](https://redirect.github.com/senekor)
- [@&#8203;ShaharNaveh](https://redirect.github.com/ShaharNaveh)
- [@&#8203;JumboBear](https://redirect.github.com/JumboBear)
- [@&#8203;prakhar1144](https://redirect.github.com/prakhar1144)
- [@&#8203;tsvikas](https://redirect.github.com/tsvikas)
- [@&#8203;danparizher](https://redirect.github.com/danparizher)
- [@&#8203;chirizxc](https://redirect.github.com/chirizxc)
- [@&#8203;AlexWaygood](https://redirect.github.com/AlexWaygood)
- [@&#8203;MichaReiser](https://redirect.github.com/MichaReiser)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0Mi4xOS45IiwidXBkYXRlZEluVmVyIjoiNDIuMTkuOSIsInRhcmdldEJyYW5jaCI6Im1haW4iLCJsYWJlbHMiOlsiaW50ZXJuYWwiXX0=-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-01 01:02:48 +00:00
Charlie Marsh e7beb7e1f4
[ty] Forbid use of `super()` in `NamedTuple` subclasses (#21700)
## Summary

The exact behavior around what's allowed vs. disallowed was partly
determined through trial and error against the runtime.

I was a little confused by [this
comment](https://github.com/python/cpython/pull/129352) that says
"`NamedTuple` subclasses cannot be inherited from" because in practice
that doesn't appear to error at runtime.
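
A hypothetical example of the newly rejected pattern (the precise set of disallowed forms follows the commit's empirical findings):

```
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

    def describe(self) -> str:
        # ty now flags this use of `super()` inside a `NamedTuple` class.
        return super().__repr__()
```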

Closes [#1683](https://github.com/astral-sh/ty/issues/1683).
2025-11-30 15:49:06 +00:00
Alex Waygood b02e8212c9
[ty] Don't introduce invalid syntax when autofixing override-of-final-method (#21699) 2025-11-30 13:40:33 +00:00
Alex Waygood 69ace00210
[ty] Rename `types::liskov` to `types::overrides` (#21694) 2025-11-29 14:54:00 +00:00
Micha Reiser d40590c8f9
[ty] Add code action to ignore diagnostic on the current line (#21595) 2025-11-29 15:41:54 +01:00
RasmusNygren b2387f4eab
[ty] fix typo in HasDefinition trait docstring (#21689)
## Summary
Fixes a typo in the docstring for the `definition` method in the
`HasDefinition` trait.
2025-11-29 11:13:54 +00:00
Dhruv Manilawala 8795d9f0cb
[ty] Split `ParamSpec` mdtests to separate legacy and PEP 695 tests (#21687)
## Summary

This is another small refactor for
https://github.com/astral-sh/ruff/pull/21445 that splits the single
`paramspec.md` into `generics/legacy/paramspec.md` and
`generics/pep695/paramspec.md`.

## Test Plan

Make sure that all mdtests pass.
2025-11-29 06:49:39 +00:00
725 changed files with 57789 additions and 23595 deletions

View File

@ -7,10 +7,6 @@ serial = { max-threads = 1 }
filter = 'binary(file_watching)' filter = 'binary(file_watching)'
test-group = 'serial' test-group = 'serial'
[[profile.default.overrides]]
filter = 'binary(e2e)'
test-group = 'serial'
[profile.ci] [profile.ci]
# Print out output for failing tests as soon as they fail, and also at the end # Print out output for failing tests as soon as they fail, and also at the end
# of the run (for easy scrollability). # of the run (for easy scrollability).

View File

@ -2,12 +2,11 @@
$schema: "https://docs.renovatebot.com/renovate-schema.json", $schema: "https://docs.renovatebot.com/renovate-schema.json",
dependencyDashboard: true, dependencyDashboard: true,
suppressNotifications: ["prEditedNotification"], suppressNotifications: ["prEditedNotification"],
extends: ["config:recommended"], extends: ["github>astral-sh/renovate-config"],
labels: ["internal"], labels: ["internal"],
schedule: ["before 4am on Monday"], schedule: ["before 4am on Monday"],
semanticCommits: "disabled", semanticCommits: "disabled",
separateMajorMinor: false, separateMajorMinor: false,
prHourlyLimit: 10,
enabledManagers: ["github-actions", "pre-commit", "cargo", "pep621", "pip_requirements", "npm"], enabledManagers: ["github-actions", "pre-commit", "cargo", "pep621", "pip_requirements", "npm"],
cargo: { cargo: {
// See https://docs.renovatebot.com/configuration-options/#rangestrategy // See https://docs.renovatebot.com/configuration-options/#rangestrategy
@ -16,7 +15,7 @@
pep621: { pep621: {
// The default for this package manager is to only search for `pyproject.toml` files // The default for this package manager is to only search for `pyproject.toml` files
// found at the repository root: https://docs.renovatebot.com/modules/manager/pep621/#file-matching // found at the repository root: https://docs.renovatebot.com/modules/manager/pep621/#file-matching
fileMatch: ["^(python|scripts)/.*pyproject\\.toml$"], managerFilePatterns: ["^(python|scripts)/.*pyproject\\.toml$"],
}, },
pip_requirements: { pip_requirements: {
// The default for this package manager is to run on all requirements.txt files: // The default for this package manager is to run on all requirements.txt files:
@ -34,7 +33,7 @@
npm: { npm: {
// The default for this package manager is to only search for `package.json` files // The default for this package manager is to only search for `package.json` files
// found at the repository root: https://docs.renovatebot.com/modules/manager/npm/#file-matching // found at the repository root: https://docs.renovatebot.com/modules/manager/npm/#file-matching
fileMatch: ["^playground/.*package\\.json$"], managerFilePatterns: ["^playground/.*package\\.json$"],
}, },
"pre-commit": { "pre-commit": {
enabled: true, enabled: true,
@ -76,14 +75,6 @@
matchManagers: ["cargo"], matchManagers: ["cargo"],
enabled: false, enabled: false,
}, },
{
// `mkdocs-material` requires a manual update to keep the version in sync
// with `mkdocs-material-insider`.
// See: https://squidfunk.github.io/mkdocs-material/insiders/upgrade/
matchManagers: ["pip_requirements"],
matchPackageNames: ["mkdocs-material"],
enabled: false,
},
{ {
groupName: "pre-commit dependencies", groupName: "pre-commit dependencies",
matchManagers: ["pre-commit"], matchManagers: ["pre-commit"],

View File

@ -43,7 +43,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md" - name: "Prep README.md"
@ -72,7 +72,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: x64 architecture: x64
@ -114,7 +114,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: arm64 architecture: arm64
@ -170,7 +170,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: ${{ matrix.platform.arch }} architecture: ${{ matrix.platform.arch }}
@ -223,7 +223,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: x64 architecture: x64
@ -300,7 +300,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md" - name: "Prep README.md"
@ -365,7 +365,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: x64 architecture: x64
@ -431,7 +431,7 @@ jobs:
with: with:
submodules: recursive submodules: recursive
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md" - name: "Prep README.md"

View File

@ -24,6 +24,8 @@ env:
PACKAGE_NAME: ruff PACKAGE_NAME: ruff
PYTHON_VERSION: "3.14" PYTHON_VERSION: "3.14"
NEXTEST_PROFILE: ci NEXTEST_PROFILE: ci
# Enable mdtests that require external dependencies
MDTEST_EXTERNAL: "1"
jobs: jobs:
determine_changes: determine_changes:
@ -230,7 +232,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -252,7 +254,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
shared-key: ruff-linux-debug shared-key: ruff-linux-debug
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
@ -261,11 +263,11 @@ jobs:
- name: "Install mold" - name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest" - name: "Install cargo nextest"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-nextest tool: cargo-nextest
- name: "Install cargo insta" - name: "Install cargo insta"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-insta tool: cargo-insta
- name: "Install uv" - name: "Install uv"
@ -296,7 +298,7 @@ jobs:
# sync, not just public items. Eventually we should do this for all # sync, not just public items. Eventually we should do this for all
# crates; for now add crates here as they are warning-clean to prevent # crates; for now add crates here as they are warning-clean to prevent
# regression. # regression.
- run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db --document-private-items - run: cargo doc --no-deps -p ty_python_semantic -p ty -p ty_test -p ruff_db -p ruff_python_formatter --document-private-items
env: env:
# Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025). # Setting RUSTDOCFLAGS because `cargo doc --check` isn't yet implemented (https://github.com/rust-lang/cargo/issues/10025).
RUSTDOCFLAGS: "-D warnings" RUSTDOCFLAGS: "-D warnings"
@ -315,7 +317,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -323,7 +325,7 @@ jobs:
- name: "Install mold" - name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest" - name: "Install cargo nextest"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-nextest tool: cargo-nextest
- name: "Install uv" - name: "Install uv"
@ -350,13 +352,13 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
run: rustup show run: rustup show
- name: "Install cargo nextest" - name: "Install cargo nextest"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-nextest tool: cargo-nextest
- name: "Install uv" - name: "Install uv"
@ -378,7 +380,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -415,7 +417,7 @@ jobs:
with: with:
file: "Cargo.toml" file: "Cargo.toml"
field: "workspace.package.rust-version" field: "workspace.package.rust-version"
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -439,7 +441,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
workspaces: "fuzz -> target" workspaces: "fuzz -> target"
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
@ -448,7 +450,7 @@ jobs:
- name: "Install mold" - name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo-binstall" - name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@ae04fb5e853ae6cd3ad7de4a1d554a8b646d12aa # v1.15.11 uses: cargo-bins/cargo-binstall@3fc81674af4165a753833a94cae9f91d8849049f # v1.16.2
- name: "Install cargo-fuzz" - name: "Install cargo-fuzz"
# Download the latest version from quick install and not the github releases because github releases only has MUSL targets. # Download the latest version from quick install and not the github releases because github releases only has MUSL targets.
run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm
@ -467,7 +469,7 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
shared-key: ruff-linux-debug shared-key: ruff-linux-debug
save-if: false save-if: false
@ -498,7 +500,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
@ -547,7 +549,7 @@ jobs:
- name: "Install mold" - name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
shared-key: ruff-linux-debug shared-key: ruff-linux-debug
save-if: false save-if: false
@ -643,7 +645,7 @@ jobs:
fetch-depth: 0 fetch-depth: 0
persist-credentials: false persist-credentials: false
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -688,7 +690,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: cargo-bins/cargo-binstall@ae04fb5e853ae6cd3ad7de4a1d554a8b646d12aa # v1.15.11 - uses: cargo-bins/cargo-binstall@3fc81674af4165a753833a94cae9f91d8849049f # v1.16.2
- run: cargo binstall --no-confirm cargo-shear - run: cargo binstall --no-confirm cargo-shear
- run: cargo shear - run: cargo shear
@ -702,7 +704,7 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -723,11 +725,11 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: ${{ env.PYTHON_VERSION }} python-version: ${{ env.PYTHON_VERSION }}
architecture: x64 architecture: x64
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Prep README.md" - name: "Prep README.md"
@ -753,7 +755,7 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0 - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
@ -779,20 +781,13 @@ jobs:
name: "mkdocs" name: "mkdocs"
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 10 timeout-minutes: 10
env:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps: steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
run: rustup show run: rustup show
- name: Install uv - name: Install uv
@ -800,11 +795,7 @@ jobs:
with: with:
python-version: 3.13 python-version: 3.13
activate-environment: true activate-environment: true
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: uv pip install -r docs/requirements-insiders.txt
- name: "Install dependencies" - name: "Install dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: uv pip install -r docs/requirements.txt run: uv pip install -r docs/requirements.txt
- name: "Update README File" - name: "Update README File"
run: python scripts/transform_readme.py --target mkdocs run: python scripts/transform_readme.py --target mkdocs
@ -812,12 +803,8 @@ jobs:
run: python scripts/generate_mkdocs.py run: python scripts/generate_mkdocs.py
- name: "Check docs formatting" - name: "Check docs formatting"
run: python scripts/check_docs_formatted.py run: python scripts/check_docs_formatted.py
- name: "Build Insiders docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs" - name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }} run: mkdocs build --strict -f mkdocs.yml
run: mkdocs build --strict -f mkdocs.public.yml
check-formatter-instability-and-black-similarity: check-formatter-instability-and-black-similarity:
name: "formatter instabilities and black similarity" name: "formatter instabilities and black similarity"
@ -829,7 +816,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
@ -857,7 +844,7 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
shared-key: ruff-linux-debug shared-key: ruff-linux-debug
save-if: false save-if: false
@ -875,7 +862,7 @@ jobs:
repository: "astral-sh/ruff-lsp" repository: "astral-sh/ruff-lsp"
path: ruff-lsp path: ruff-lsp
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
# installation fails on 3.13 and newer # installation fails on 3.13 and newer
python-version: "3.12" python-version: "3.12"
@ -908,7 +895,7 @@ jobs:
persist-credentials: false persist-credentials: false
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown run: rustup target add wasm32-unknown-unknown
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0 - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
@ -918,7 +905,7 @@ jobs:
cache-dependency-path: playground/package-lock.json cache-dependency-path: playground/package-lock.json
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0 - uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Install Node dependencies" - name: "Install Node dependencies"
run: npm ci run: npm ci --ignore-scripts
working-directory: playground working-directory: playground
- name: "Build playgrounds" - name: "Build playgrounds"
run: npm run dev:wasm run: npm run dev:wasm
@ -942,13 +929,16 @@ jobs:
needs.determine_changes.outputs.linter == 'true' needs.determine_changes.outputs.linter == 'true'
) )
timeout-minutes: 20 timeout-minutes: 20
permissions:
contents: read # required for actions/checkout
id-token: write # required for OIDC authentication with CodSpeed
steps: steps:
- name: "Checkout Branch" - name: "Checkout Branch"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
@ -957,7 +947,7 @@ jobs:
run: rustup show run: rustup show
- name: "Install codspeed" - name: "Install codspeed"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-codspeed tool: cargo-codspeed
@ -965,11 +955,10 @@ jobs:
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
- name: "Run benchmarks" - name: "Run benchmarks"
uses: CodSpeedHQ/action@6a8e2b874c338bf81cc5e8be715ada75908d3871 # v4.3.4 uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
with: with:
mode: instrumentation mode: simulation
run: cargo codspeed run run: cargo codspeed run
token: ${{ secrets.CODSPEED_TOKEN }}
benchmarks-instrumented-ty: benchmarks-instrumented-ty:
name: "benchmarks instrumented (ty)" name: "benchmarks instrumented (ty)"
@ -982,13 +971,16 @@ jobs:
needs.determine_changes.outputs.ty == 'true' needs.determine_changes.outputs.ty == 'true'
) )
timeout-minutes: 20 timeout-minutes: 20
permissions:
contents: read # required for actions/checkout
id-token: write # required for OIDC authentication with CodSpeed
steps: steps:
- name: "Checkout Branch" - name: "Checkout Branch"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
@ -997,7 +989,7 @@ jobs:
run: rustup show run: rustup show
- name: "Install codspeed" - name: "Install codspeed"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-codspeed tool: cargo-codspeed
@ -1005,11 +997,10 @@ jobs:
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench ty run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench ty
- name: "Run benchmarks" - name: "Run benchmarks"
uses: CodSpeedHQ/action@6a8e2b874c338bf81cc5e8be715ada75908d3871 # v4.3.4 uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
with: with:
mode: instrumentation mode: simulation
run: cargo codspeed run run: cargo codspeed run
token: ${{ secrets.CODSPEED_TOKEN }}
benchmarks-walltime: benchmarks-walltime:
name: "benchmarks walltime (${{ matrix.benchmarks }})" name: "benchmarks walltime (${{ matrix.benchmarks }})"
@ -1017,6 +1008,9 @@ jobs:
needs: determine_changes needs: determine_changes
if: ${{ github.repository == 'astral-sh/ruff' && !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.ty == 'true' || github.ref == 'refs/heads/main') }} if: ${{ github.repository == 'astral-sh/ruff' && !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.ty == 'true' || github.ref == 'refs/heads/main') }}
timeout-minutes: 20 timeout-minutes: 20
permissions:
contents: read # required for actions/checkout
id-token: write # required for OIDC authentication with CodSpeed
strategy: strategy:
matrix: matrix:
benchmarks: benchmarks:
@ -1028,7 +1022,7 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
save-if: ${{ github.ref == 'refs/heads/main' }} save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 - uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
@ -1037,7 +1031,7 @@ jobs:
run: rustup show run: rustup show
- name: "Install codspeed" - name: "Install codspeed"
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-codspeed tool: cargo-codspeed
@ -1045,7 +1039,7 @@ jobs:
run: cargo codspeed build --features "codspeed,walltime" --profile profiling --no-default-features -p ruff_benchmark run: cargo codspeed build --features "codspeed,walltime" --profile profiling --no-default-features -p ruff_benchmark
- name: "Run benchmarks" - name: "Run benchmarks"
uses: CodSpeedHQ/action@6a8e2b874c338bf81cc5e8be715ada75908d3871 # v4.3.4 uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
env: env:
# enabling walltime flamegraphs adds ~6 minutes to the CI time, and they don't # enabling walltime flamegraphs adds ~6 minutes to the CI time, and they don't
# appear to provide much useful insight for our walltime benchmarks right now # appear to provide much useful insight for our walltime benchmarks right now
@ -1054,4 +1048,3 @@ jobs:
with: with:
mode: walltime mode: walltime
run: cargo codspeed run --bench ty_walltime "${{ matrix.benchmarks }}" run: cargo codspeed run --bench ty_walltime "${{ matrix.benchmarks }}"
token: ${{ secrets.CODSPEED_TOKEN }}

View File

@ -39,7 +39,7 @@ jobs:
run: rustup show run: rustup show
- name: "Install mold" - name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Build ruff - name: Build ruff
# A debug build means the script runs slower once it gets started, # A debug build means the script runs slower once it gets started,
# but this is outweighed by the fact that a release build takes *much* longer to compile in CI # but this is outweighed by the fact that a release build takes *much* longer to compile in CI

View File

@ -45,8 +45,9 @@ jobs:
- name: Install the latest version of uv - name: Install the latest version of uv
uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
shared-key: "mypy-primer"
workspaces: "ruff" workspaces: "ruff"
- name: Install Rust toolchain - name: Install Rust toolchain
@ -83,9 +84,10 @@ jobs:
- name: Install the latest version of uv - name: Install the latest version of uv
uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4 uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
workspaces: "ruff" workspaces: "ruff"
shared-key: "mypy-primer"
- name: Install Rust toolchain - name: Install Rust toolchain
run: rustup show run: rustup show
@ -105,3 +107,54 @@ jobs:
with: with:
name: mypy_primer_memory_diff name: mypy_primer_memory_diff
path: mypy_primer_memory.diff path: mypy_primer_memory.diff
# Runs mypy_primer twice against the same ty revision to catch any non-deterministic behavior (ideally).
# The job is disabled for now because there are some non-deterministic diagnostics.
mypy_primer_same_revision:
name: Run mypy_primer on same revision
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-32' || 'ubuntu-latest' }}
timeout-minutes: 20
# TODO: Enable once we fixed the non-deterministic diagnostics
if: false
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: ruff
fetch-depth: 0
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
workspaces: "ruff"
shared-key: "mypy-primer"
- name: Install Rust toolchain
run: rustup show
- name: Run determinism check
env:
BASE_REVISION: ${{ github.event.pull_request.head.sha }}
PRIMER_SELECTOR: crates/ty_python_semantic/resources/primer/good.txt
CLICOLOR_FORCE: "1"
DIFF_FILE: mypy_primer_determinism.diff
run: |
cd ruff
scripts/mypy_primer.sh
- name: Check for non-determinism
run: |
# Remove ANSI color codes for checking
sed -e 's/\x1b\[[0-9;]*m//g' mypy_primer_determinism.diff > mypy_primer_determinism_clean.diff
# Check if there are any differences (non-determinism)
if [ -s mypy_primer_determinism_clean.diff ]; then
echo "ERROR: Non-deterministic output detected!"
echo "The following differences were found when running ty twice on the same commit:"
cat mypy_primer_determinism_clean.diff
exit 1
else
echo "✓ Output is deterministic"
fi

View File

@ -20,15 +20,13 @@ on:
jobs: jobs:
mkdocs: mkdocs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
env:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps: steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with: with:
ref: ${{ inputs.ref }} ref: ${{ inputs.ref }}
persist-credentials: true persist-credentials: true
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0 - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with: with:
python-version: 3.12 python-version: 3.12
@ -59,23 +57,12 @@ jobs:
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV" echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV"
echo "timestamp=$timestamp" >> "$GITHUB_ENV" echo "timestamp=$timestamp" >> "$GITHUB_ENV"
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
run: rustup show run: rustup show
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: pip install -r docs/requirements-insiders.txt
- name: "Install dependencies" - name: "Install dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: pip install -r docs/requirements.txt run: pip install -r docs/requirements.txt
- name: "Copy README File" - name: "Copy README File"
@ -83,13 +70,8 @@ jobs:
python scripts/transform_readme.py --target mkdocs python scripts/transform_readme.py --target mkdocs
python scripts/generate_mkdocs.py python scripts/generate_mkdocs.py
- name: "Build Insiders docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs" - name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }} run: mkdocs build --strict -f mkdocs.yml
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Clone docs repo" - name: "Clone docs repo"
run: git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs run: git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs

View File

@ -37,7 +37,7 @@ jobs:
package-manager-cache: false package-manager-cache: false
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0 - uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Install Node dependencies" - name: "Install Node dependencies"
run: npm ci run: npm ci --ignore-scripts
working-directory: playground working-directory: playground
- name: "Run TypeScript checks" - name: "Run TypeScript checks"
run: npm run check run: npm run check

View File

@ -41,7 +41,7 @@ jobs:
package-manager-cache: false package-manager-cache: false
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0 - uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Install Node dependencies" - name: "Install Node dependencies"
run: npm ci run: npm ci --ignore-scripts
working-directory: playground working-directory: playground
- name: "Run TypeScript checks" - name: "Run TypeScript checks"
run: npm run check run: npm run check

View File

@ -60,7 +60,7 @@ jobs:
env: env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps: steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with: with:
persist-credentials: false persist-credentials: false
submodules: recursive submodules: recursive
@ -123,7 +123,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps: steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with: with:
persist-credentials: false persist-credentials: false
submodules: recursive submodules: recursive
@ -174,7 +174,7 @@ jobs:
outputs: outputs:
val: ${{ steps.host.outputs.manifest }} val: ${{ steps.host.outputs.manifest }}
steps: steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with: with:
persist-credentials: false persist-credentials: false
submodules: recursive submodules: recursive
@ -250,7 +250,7 @@ jobs:
env: env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps: steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with: with:
persist-credentials: false persist-credentials: false
submodules: recursive submodules: recursive

View File

@ -198,7 +198,7 @@ jobs:
run: | run: |
rm "${VENDORED_TYPESHED}/pyproject.toml" rm "${VENDORED_TYPESHED}/pyproject.toml"
git commit -am "Remove pyproject.toml file" git commit -am "Remove pyproject.toml file"
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: "Install Rust toolchain" - name: "Install Rust toolchain"
if: ${{ success() }} if: ${{ success() }}
run: rustup show run: rustup show
@ -207,12 +207,12 @@ jobs:
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1 uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest" - name: "Install cargo nextest"
if: ${{ success() }} if: ${{ success() }}
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-nextest tool: cargo-nextest
- name: "Install cargo insta" - name: "Install cargo insta"
if: ${{ success() }} if: ${{ success() }}
uses: taiki-e/install-action@f79fe7514db78f0a7bdba3cb6dd9c1baa7d046d9 # v2.62.56 uses: taiki-e/install-action@3575e532701a5fc614b0c842e4119af4cc5fd16d # v2.62.60
with: with:
tool: cargo-insta tool: cargo-insta
- name: Update snapshots - name: Update snapshots

View File

@ -37,7 +37,7 @@ jobs:
with: with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
workspaces: "ruff" workspaces: "ruff"
lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
@ -67,7 +67,7 @@ jobs:
cd .. cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205" uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \ ecosystem-analyzer \
--repository ruff \ --repository ruff \

View File

@ -33,7 +33,7 @@ jobs:
with: with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
workspaces: "ruff" workspaces: "ruff"
lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
@ -52,7 +52,7 @@ jobs:
cd .. cd ..
uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@55df3c868f3fa9ab34cff0498dd6106722aac205" uv tool install "git+https://github.com/astral-sh/ecosystem-analyzer@2e1816eac09c90140b1ba51d19afc5f59da460f5"
ecosystem-analyzer \ ecosystem-analyzer \
--verbose \ --verbose \

View File

@ -45,7 +45,7 @@ jobs:
path: typing path: typing
persist-credentials: false persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1 - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with: with:
workspaces: "ruff" workspaces: "ruff"

View File

@ -1,5 +1,76 @@
# Changelog # Changelog
## 0.14.9
Released on 2025-12-11.
### Preview features
- \[`ruff`\] New `RUF100` diagnostics for unused range suppressions ([#21783](https://github.com/astral-sh/ruff/pull/21783))
- \[`pylint`\] Detect subclasses of builtin exceptions (`PLW0133`) ([#21382](https://github.com/astral-sh/ruff/pull/21382))
### Bug fixes
- Fix comment placement in lambda parameters ([#21868](https://github.com/astral-sh/ruff/pull/21868))
- Skip over trivia tokens after re-lexing ([#21895](https://github.com/astral-sh/ruff/pull/21895))
- \[`flake8-bandit`\] Fix false positive when using non-standard `CSafeLoader` path (S506). ([#21830](https://github.com/astral-sh/ruff/pull/21830))
- \[`flake8-bugbear`\] Accept immutable slice default arguments (`B008`) ([#21823](https://github.com/astral-sh/ruff/pull/21823))
### Rule changes
- \[`pydocstyle`\] Suppress `D417` for parameters with `Unpack` annotations ([#21816](https://github.com/astral-sh/ruff/pull/21816))
### Performance
- Use `memchr` for computing line indexes ([#21838](https://github.com/astral-sh/ruff/pull/21838))
### Documentation
- Document `*.pyw` is included by default in preview ([#21885](https://github.com/astral-sh/ruff/pull/21885))
- Document range suppressions, reorganize suppression docs ([#21884](https://github.com/astral-sh/ruff/pull/21884))
- Update mkdocs-material to 9.7.0 (Insiders now free) ([#21797](https://github.com/astral-sh/ruff/pull/21797))
### Contributors
- [@Avasam](https://github.com/Avasam)
- [@MichaReiser](https://github.com/MichaReiser)
- [@charliermarsh](https://github.com/charliermarsh)
- [@amyreese](https://github.com/amyreese)
- [@phongddo](https://github.com/phongddo)
- [@prakhar1144](https://github.com/prakhar1144)
- [@mahiro72](https://github.com/mahiro72)
- [@ntBre](https://github.com/ntBre)
- [@LoicRiegel](https://github.com/LoicRiegel)
## 0.14.8
Released on 2025-12-04.
### Preview features
- \[`flake8-bugbear`\] Catch `yield` expressions within other statements (`B901`) ([#21200](https://github.com/astral-sh/ruff/pull/21200))
- \[`flake8-use-pathlib`\] Mark fixes unsafe for return type changes (`PTH104`, `PTH105`, `PTH109`, `PTH115`) ([#21440](https://github.com/astral-sh/ruff/pull/21440))
### Bug fixes
- Fix syntax error false positives for `await` outside functions ([#21763](https://github.com/astral-sh/ruff/pull/21763))
- \[`flake8-simplify`\] Fix truthiness assumption for non-iterable arguments in tuple/list/set calls (`SIM222`, `SIM223`) ([#21479](https://github.com/astral-sh/ruff/pull/21479))
### Documentation
- Suggest using `--output-file` option in GitLab integration ([#21706](https://github.com/astral-sh/ruff/pull/21706))
### Other changes
- [syntax-error] Default type parameter followed by non-default type parameter ([#21657](https://github.com/astral-sh/ruff/pull/21657))
### Contributors
- [@kieran-ryan](https://github.com/kieran-ryan)
- [@11happy](https://github.com/11happy)
- [@danparizher](https://github.com/danparizher)
- [@ntBre](https://github.com/ntBre)
## 0.14.7
Released on 2025-11-28.
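The 0.14.9 and 0.14.8 entries above mention two features that are easiest to see in a short snippet. First, the range suppressions that the new `RUF100` diagnostics track when unused: a minimal sketch of the comment syntax, taken from the `# ruff: disable`/`# ruff: enable` test fixtures later in this diff (the feature is preview-gated, so it only takes effect with `--preview`):

```python
# ruff: disable[F401]
import os   # F401 is suppressed inside the range
import sys  # also suppressed
# ruff: enable[F401]

import json  # outside the range, F401 is reported as usual
```

If nothing inside the range would have triggered `F401`, the new `RUF100` diagnostics report the suppression as unused. Second, the 0.14.8 syntax-error check for default type parameters: per PEP 696, a type parameter with a default may not be followed by one without a default, so the new check flags code like the second class below (an illustrative example assuming PEP 695/696 syntax support, not taken from the test suite):

```python
class Ok[T, U = int]: ...   # fine: the defaulted parameter comes last
class Bad[T = int, U]: ...  # flagged: non-default parameter after a defaulted one
```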


@ -331,13 +331,6 @@ you addressed them.
## MkDocs
> [!NOTE]
>
> The documentation uses Material for MkDocs Insiders, which is closed-source software.
> This means only members of the Astral organization can preview the documentation exactly as it
> will appear in production.
> Outside contributors can still preview the documentation, but there will be some differences. Consult [the Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/insiders/benefits/#features) for which features are exclusively available in the insiders version.
To preview any changes to the documentation locally:
1. Install the [Rust toolchain](https://www.rust-lang.org/tools/install).
@ -351,11 +344,7 @@ To preview any changes to the documentation locally:
1. Run the development server with:

```shell
uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.yml
```

(This single command replaces the previous split between a contributors' command using `mkdocs.public.yml` and a command for members of the Astral org using `docs/requirements-insiders.txt` and `mkdocs.insiders.yml`, which relied on sponsored access to MkDocs Insiders.)

The documentation should then be available locally at

Cargo.lock (generated)

@ -254,6 +254,21 @@ dependencies = [
"syn", "syn",
] ]
[[package]]
name = "bit-set"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7"
[[package]] [[package]]
name = "bitflags" name = "bitflags"
version = "1.3.2" version = "1.3.2"
@ -944,6 +959,18 @@ dependencies = [
"parking_lot_core", "parking_lot_core",
] ]
[[package]]
name = "datatest-stable"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a867d7322eb69cf3a68a5426387a25b45cb3b9c5ee41023ee6cea92e2afadd82"
dependencies = [
"camino",
"fancy-regex",
"libtest-mimic 0.8.1",
"walkdir",
]
[[package]] [[package]]
name = "derive-where" name = "derive-where"
version = "1.6.0" version = "1.6.0"
@ -1016,7 +1043,7 @@ dependencies = [
"libc", "libc",
"option-ext", "option-ext",
"redox_users", "redox_users",
"windows-sys 0.59.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]
@ -1108,7 +1135,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb" checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [ dependencies = [
"libc", "libc",
"windows-sys 0.52.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]
@ -1138,6 +1165,17 @@ dependencies = [
"windows-sys 0.61.0", "windows-sys 0.61.0",
] ]
[[package]]
name = "fancy-regex"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
dependencies = [
"bit-set",
"regex-automata",
"regex-syntax",
]
[[package]] [[package]]
name = "fastrand" name = "fastrand"
version = "2.3.0" version = "2.3.0"
@ -1238,9 +1276,9 @@ dependencies = [
[[package]] [[package]]
name = "get-size-derive2" name = "get-size-derive2"
version = "0.7.2" version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff47daa61505c85af126e9dd64af6a342a33dc0cccfe1be74ceadc7d352e6efd" checksum = "ab21d7bd2c625f2064f04ce54bcb88bc57c45724cde45cba326d784e22d3f71a"
dependencies = [ dependencies = [
"attribute-derive", "attribute-derive",
"quote", "quote",
@ -1249,14 +1287,15 @@ dependencies = [
[[package]] [[package]]
name = "get-size2" name = "get-size2"
version = "0.7.2" version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac7bb8710e1f09672102be7ddf39f764d8440ae74a9f4e30aaa4820dcdffa4af" checksum = "879272b0de109e2b67b39fcfe3d25fdbba96ac07e44a254f5a0b4d7ff55340cb"
dependencies = [ dependencies = [
"compact_str", "compact_str",
"get-size-derive2", "get-size-derive2",
"hashbrown 0.16.1", "hashbrown 0.16.1",
"indexmap", "indexmap",
"ordermap",
"smallvec", "smallvec",
] ]
@ -1624,7 +1663,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46fdb647ebde000f43b5b53f773c30cf9b0cb4300453208713fa38b2c70935a0" checksum = "46fdb647ebde000f43b5b53f773c30cf9b0cb4300453208713fa38b2c70935a0"
dependencies = [ dependencies = [
"console 0.15.11", "console 0.15.11",
"globset",
"once_cell", "once_cell",
"pest", "pest",
"pest_derive", "pest_derive",
@ -1632,7 +1670,6 @@ dependencies = [
"ron", "ron",
"serde", "serde",
"similar", "similar",
"walkdir",
] ]
[[package]] [[package]]
@ -1763,7 +1800,7 @@ dependencies = [
"portable-atomic", "portable-atomic",
"portable-atomic-util", "portable-atomic-util",
"serde_core", "serde_core",
"windows-sys 0.52.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]
@ -1918,6 +1955,18 @@ dependencies = [
"threadpool", "threadpool",
] ]
[[package]]
name = "libtest-mimic"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5297962ef19edda4ce33aaa484386e0a5b3d7f2f4e037cbeee00503ef6b29d33"
dependencies = [
"anstream",
"anstyle",
"clap",
"escape8259",
]
[[package]] [[package]]
name = "linux-raw-sys" name = "linux-raw-sys"
version = "0.11.0" version = "0.11.0"
@ -2233,9 +2282,9 @@ checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
[[package]] [[package]]
name = "ordermap" name = "ordermap"
version = "0.5.12" version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b100f7dd605611822d30e182214d3c02fdefce2d801d23993f6b6ba6ca1392af" checksum = "ed637741ced8fb240855d22a2b4f208dab7a06bcce73380162e5253000c16758"
dependencies = [ dependencies = [
"indexmap", "indexmap",
"serde", "serde",
@ -2859,7 +2908,7 @@ dependencies = [
[[package]] [[package]]
name = "ruff" name = "ruff"
version = "0.14.7" version = "0.14.9"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"argfile", "argfile",
@ -3117,13 +3166,14 @@ dependencies = [
[[package]] [[package]]
name = "ruff_linter" name = "ruff_linter"
version = "0.14.7" version = "0.14.9"
dependencies = [ dependencies = [
"aho-corasick", "aho-corasick",
"anyhow", "anyhow",
"bitflags 2.10.0", "bitflags 2.10.0",
"clap", "clap",
"colored 3.0.0", "colored 3.0.0",
"compact_str",
"fern", "fern",
"glob", "glob",
"globset", "globset",
@ -3276,6 +3326,7 @@ dependencies = [
"anyhow", "anyhow",
"clap", "clap",
"countme", "countme",
"datatest-stable",
"insta", "insta",
"itertools 0.14.0", "itertools 0.14.0",
"memchr", "memchr",
@ -3345,8 +3396,10 @@ dependencies = [
"bitflags 2.10.0", "bitflags 2.10.0",
"bstr", "bstr",
"compact_str", "compact_str",
"datatest-stable",
"get-size2", "get-size2",
"insta", "insta",
"itertools 0.14.0",
"memchr", "memchr",
"ruff_annotate_snippets", "ruff_annotate_snippets",
"ruff_python_ast", "ruff_python_ast",
@ -3472,7 +3525,7 @@ dependencies = [
[[package]] [[package]]
name = "ruff_wasm" name = "ruff_wasm"
version = "0.14.7" version = "0.14.9"
dependencies = [ dependencies = [
"console_error_panic_hook", "console_error_panic_hook",
"console_log", "console_log",
@ -3570,7 +3623,7 @@ dependencies = [
"errno", "errno",
"libc", "libc",
"linux-raw-sys", "linux-raw-sys",
"windows-sys 0.52.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]
@ -3588,7 +3641,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]] [[package]]
name = "salsa" name = "salsa"
version = "0.24.0" version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=17bc55d699565e5a1cb1bd42363b905af2f9f3e7#17bc55d699565e5a1cb1bd42363b905af2f9f3e7" source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [ dependencies = [
"boxcar", "boxcar",
"compact_str", "compact_str",
@ -3599,6 +3652,7 @@ dependencies = [
"indexmap", "indexmap",
"intrusive-collections", "intrusive-collections",
"inventory", "inventory",
"ordermap",
"parking_lot", "parking_lot",
"portable-atomic", "portable-atomic",
"rustc-hash", "rustc-hash",
@ -3612,12 +3666,12 @@ dependencies = [
[[package]] [[package]]
name = "salsa-macro-rules" name = "salsa-macro-rules"
version = "0.24.0" version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=17bc55d699565e5a1cb1bd42363b905af2f9f3e7#17bc55d699565e5a1cb1bd42363b905af2f9f3e7" source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
[[package]] [[package]]
name = "salsa-macros" name = "salsa-macros"
version = "0.24.0" version = "0.24.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=17bc55d699565e5a1cb1bd42363b905af2f9f3e7#17bc55d699565e5a1cb1bd42363b905af2f9f3e7" source = "git+https://github.com/salsa-rs/salsa.git?rev=55e5e7d32fa3fc189276f35bb04c9438f9aedbd1#55e5e7d32fa3fc189276f35bb04c9438f9aedbd1"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@ -3971,7 +4025,7 @@ dependencies = [
"getrandom 0.3.4", "getrandom 0.3.4",
"once_cell", "once_cell",
"rustix", "rustix",
"windows-sys 0.52.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]
@ -4216,9 +4270,9 @@ checksum = "df8b2b54733674ad286d16267dcfc7a71ed5c776e4ac7aa3c3e2561f7c637bf2"
[[package]] [[package]]
name = "tracing" name = "tracing"
version = "0.1.41" version = "0.1.43"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0" checksum = "2d15d90a0b5c19378952d479dc858407149d7bb45a14de0142f6c534b16fc647"
dependencies = [ dependencies = [
"log", "log",
"pin-project-lite", "pin-project-lite",
@ -4228,9 +4282,9 @@ dependencies = [
[[package]] [[package]]
name = "tracing-attributes" name = "tracing-attributes"
version = "0.1.30" version = "0.1.31"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81383ab64e72a7a8b8e13130c49e3dab29def6d0c7d76a03087b3cf71c5c6903" checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@ -4239,9 +4293,9 @@ dependencies = [
[[package]] [[package]]
name = "tracing-core" name = "tracing-core"
version = "0.1.34" version = "0.1.35"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9d12581f227e93f094d3af2ae690a574abb8a2b9b7a96e7cfe9647b2b617678" checksum = "7a04e24fab5c89c6a36eb8558c9656f30d81de51dfa4d3b45f26b21d61fa0a6c"
dependencies = [ dependencies = [
"once_cell", "once_cell",
"valuable", "valuable",
@ -4283,9 +4337,9 @@ dependencies = [
[[package]] [[package]]
name = "tracing-subscriber" name = "tracing-subscriber"
version = "0.3.20" version = "0.3.22"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5" checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
dependencies = [ dependencies = [
"chrono", "chrono",
"matchers", "matchers",
@ -4307,7 +4361,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fe242ee9e646acec9ab73a5c540e8543ed1b107f0ce42be831e0775d423c396" checksum = "5fe242ee9e646acec9ab73a5c540e8543ed1b107f0ce42be831e0775d423c396"
dependencies = [ dependencies = [
"ignore", "ignore",
"libtest-mimic", "libtest-mimic 0.7.3",
"snapbox", "snapbox",
] ]
@ -4336,6 +4390,7 @@ dependencies = [
"ruff_python_trivia", "ruff_python_trivia",
"salsa", "salsa",
"tempfile", "tempfile",
"tikv-jemallocator",
"toml", "toml",
"tracing", "tracing",
"tracing-flame", "tracing-flame",
@ -4556,6 +4611,7 @@ dependencies = [
"anyhow", "anyhow",
"camino", "camino",
"colored 3.0.0", "colored 3.0.0",
"dunce",
"insta", "insta",
"memchr", "memchr",
"path-slash", "path-slash",
@ -5024,7 +5080,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22" checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [ dependencies = [
"windows-sys 0.52.0", "windows-sys 0.61.0",
] ]
[[package]] [[package]]


@ -5,7 +5,7 @@ resolver = "2"
[workspace.package] [workspace.package]
# Please update rustfmt.toml when bumping the Rust edition # Please update rustfmt.toml when bumping the Rust edition
edition = "2024" edition = "2024"
rust-version = "1.89" rust-version = "1.90"
homepage = "https://docs.astral.sh/ruff" homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff" documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff" repository = "https://github.com/astral-sh/ruff"
@ -81,6 +81,7 @@ compact_str = "0.9.0"
criterion = { version = "0.7.0", default-features = false } criterion = { version = "0.7.0", default-features = false }
crossbeam = { version = "0.8.4" } crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" } dashmap = { version = "6.0.1" }
datatest-stable = { version = "0.3.3" }
dir-test = { version = "0.4.0" } dir-test = { version = "0.4.0" }
dunce = { version = "1.0.5" } dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" } drop_bomb = { version = "0.1.5" }
@ -88,7 +89,7 @@ etcetera = { version = "0.11.0" }
fern = { version = "0.7.0" } fern = { version = "0.7.0" }
filetime = { version = "0.2.23" } filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" } getrandom = { version = "0.3.1" }
get-size2 = { version = "0.7.0", features = [ get-size2 = { version = "0.7.3", features = [
"derive", "derive",
"smallvec", "smallvec",
"hashbrown", "hashbrown",
@ -129,7 +130,7 @@ memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" } mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" } natord = { version = "1.0.9" }
notify = { version = "8.0.0" } notify = { version = "8.0.0" }
ordermap = { version = "0.5.0" } ordermap = { version = "1.0.0" }
path-absolutize = { version = "3.1.1" } path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" } path-slash = { version = "0.2.1" }
pathdiff = { version = "0.2.1" } pathdiff = { version = "0.2.1" }
@ -146,7 +147,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" } rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" } rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml` # When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "17bc55d699565e5a1cb1bd42363b905af2f9f3e7", default-features = false, features = [ salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "55e5e7d32fa3fc189276f35bb04c9438f9aedbd1", default-features = false, features = [
"compact_str", "compact_str",
"macros", "macros",
"salsa_unstable", "salsa_unstable",
@ -272,6 +273,12 @@ large_stack_arrays = "allow"
lto = "fat" lto = "fat"
codegen-units = 16 codegen-units = 16
# Profile to build a minimally sized binary for ruff/ty
[profile.minimal-size]
inherits = "release"
opt-level = "z"
codegen-units = 1
# Some crates don't change as much but benefit more from # Some crates don't change as much but benefit more from
# more expensive optimization passes, so we selectively # more expensive optimization passes, so we selectively
# decrease codegen-units in some cases. # decrease codegen-units in some cases.


@ -57,8 +57,11 @@ Ruff is extremely actively developed and used in major open-source projects like
...and [many more](#whos-using-ruff).

Ruff is backed by [Astral](https://astral.sh), the creators of
[uv](https://github.com/astral-sh/uv) and [ty](https://github.com/astral-sh/ty).
Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff), or the
original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster).

## Testimonials
@ -147,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"

# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.9/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.9/install.ps1 | iex"
```

You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@ -181,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
  rev: v0.14.9
  hooks:
    # Run the linter.
    - id: ruff-check


@ -1,6 +1,6 @@
[package] [package]
name = "ruff" name = "ruff"
version = "0.14.7" version = "0.14.9"
publish = true publish = true
authors = { workspace = true } authors = { workspace = true }
edition = { workspace = true } edition = { workspace = true }


@ -10,7 +10,7 @@ use anyhow::bail;
use clap::builder::Styles; use clap::builder::Styles;
use clap::builder::styling::{AnsiColor, Effects}; use clap::builder::styling::{AnsiColor, Effects};
use clap::builder::{TypedValueParser, ValueParserFactory}; use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{Parser, Subcommand, command}; use clap::{Parser, Subcommand};
use colored::Colorize; use colored::Colorize;
use itertools::Itertools; use itertools::Itertools;
use path_absolutize::path_dedot; use path_absolutize::path_dedot;


@ -9,7 +9,7 @@ use std::sync::mpsc::channel;
use anyhow::Result; use anyhow::Result;
use clap::CommandFactory; use clap::CommandFactory;
use colored::Colorize; use colored::Colorize;
use log::{error, warn}; use log::error;
use notify::{RecursiveMode, Watcher, recommended_watcher}; use notify::{RecursiveMode, Watcher, recommended_watcher};
use args::{GlobalConfigArgs, ServerCommand}; use args::{GlobalConfigArgs, ServerCommand};


@ -1440,6 +1440,78 @@ def function():
Ok(()) Ok(())
} }
#[test]
fn ignore_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
import os # noqa: F401
# ruff: disable[F401]
import sys
"#,
)?;
// without --ignore-noqa
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py"),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
");
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--preview"]),
@r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
// with --ignore-noqa --preview
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.args(["--ignore-noqa", "--preview"]),
@r"
success: false
exit_code: 1
----- stdout -----
noqa.py:2:8: F401 [*] `os` imported but unused
noqa.py:5:8: F401 [*] `sys` imported but unused
Found 2 errors.
[*] 2 fixable with the `--fix` option.
----- stderr -----
");
Ok(())
}
#[test] #[test]
fn add_noqa() -> Result<()> { fn add_noqa() -> Result<()> {
let fixture = CliTest::new()?; let fixture = CliTest::new()?;
@ -1632,6 +1704,100 @@ def unused(x): # noqa: ANN001, ARG001, D103
Ok(()) Ok(())
} }
#[test]
fn add_noqa_existing_file_level_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: noqa F401
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: noqa F401
import os
");
Ok(())
}
#[test]
fn add_noqa_existing_range_suppression() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: disable[F401]
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: disable[F401]
import os
");
Ok(())
}
#[test] #[test]
fn add_noqa_multiline_comment() -> Result<()> { fn add_noqa_multiline_comment() -> Result<()> {
let fixture = CliTest::new()?; let fixture = CliTest::new()?;


@ -6,7 +6,8 @@ use criterion::{
use ruff_benchmark::{ use ruff_benchmark::{
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN, LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
}; };
use ruff_python_parser::{Mode, TokenKind, lexer}; use ruff_python_ast::token::TokenKind;
use ruff_python_parser::{Mode, lexer};
#[cfg(target_os = "windows")] #[cfg(target_os = "windows")]
#[global_allocator] #[global_allocator]


@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17", max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312, python_version: PythonVersion::PY312,
}, },
13000, 13100,
); );
static TANJUN: Benchmark = Benchmark::new( static TANJUN: Benchmark = Benchmark::new(
@ -223,7 +223,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09", max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311, python_version: PythonVersion::PY311,
}, },
950, 1100,
); );
#[track_caller] #[track_caller]


@ -166,28 +166,8 @@ impl Diagnostic {
/// Returns the primary message for this diagnostic. /// Returns the primary message for this diagnostic.
/// ///
/// A diagnostic always has a message, but it may be empty. /// A diagnostic always has a message, but it may be empty.
///
/// NOTE: At present, this routine will return the first primary
/// annotation's message as the primary message when the main diagnostic
/// message is empty. This is meant to facilitate an incremental migration
/// in ty over to the new diagnostic data model. (The old data model
/// didn't distinguish between messages on the entire diagnostic and
/// messages attached to a particular span.)
pub fn primary_message(&self) -> &str { pub fn primary_message(&self) -> &str {
if !self.inner.message.as_str().is_empty() { self.inner.message.as_str()
return self.inner.message.as_str();
}
// FIXME: As a special case, while we're migrating ty
// to the new diagnostic data model, we'll look for a primary
// message from the primary annotation. This is because most
// ty diagnostics are created with an empty diagnostic
// message and instead attach the message to the annotation.
// Fixing this will require touching basically every diagnostic
// in ty, so we do it this way for now to match the old
// semantics. ---AG
self.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default()
} }
/// Introspects this diagnostic and returns what kind of "primary" message /// Introspects this diagnostic and returns what kind of "primary" message
@ -199,18 +179,6 @@ impl Diagnostic {
/// contains *essential* information or context for understanding the /// contains *essential* information or context for understanding the
/// diagnostic. /// diagnostic.
/// ///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most /// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what /// cases, just converting it to a string (or printing it) will do what
/// you want. /// you want.
@ -224,11 +192,10 @@ impl Diagnostic {
.primary_annotation() .primary_annotation()
.and_then(|ann| ann.get_message()) .and_then(|ann| ann.get_message())
.unwrap_or_default(); .unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) { if annotation.is_empty() {
(false, true) => ConciseMessage::MainDiagnostic(main), ConciseMessage::MainDiagnostic(main)
(true, false) => ConciseMessage::PrimaryAnnotation(annotation), } else {
(false, false) => ConciseMessage::Both { main, annotation }, ConciseMessage::Both { main, annotation }
(true, true) => ConciseMessage::Empty,
} }
} }
@ -354,6 +321,13 @@ impl Diagnostic {
Arc::make_mut(&mut self.inner).fix = Some(fix); Arc::make_mut(&mut self.inner).fix = Some(fix);
} }
/// If `fix` is `Some`, set the fix for this diagnostic.
pub fn set_optional_fix(&mut self, fix: Option<Fix>) {
if let Some(fix) = fix {
self.set_fix(fix);
}
}
/// Remove the fix for this diagnostic. /// Remove the fix for this diagnostic.
pub fn remove_fix(&mut self) { pub fn remove_fix(&mut self) {
Arc::make_mut(&mut self.inner).fix = None; Arc::make_mut(&mut self.inner).fix = None;
@ -686,18 +660,6 @@ impl SubDiagnostic {
/// contains *essential* information or context for understanding the /// contains *essential* information or context for understanding the
/// diagnostic. /// diagnostic.
/// ///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most /// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what /// cases, just converting it to a string (or printing it) will do what
/// you want. /// you want.
@ -707,11 +669,10 @@ impl SubDiagnostic {
.primary_annotation() .primary_annotation()
.and_then(|ann| ann.get_message()) .and_then(|ann| ann.get_message())
.unwrap_or_default(); .unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) { if annotation.is_empty() {
(false, true) => ConciseMessage::MainDiagnostic(main), ConciseMessage::MainDiagnostic(main)
(true, false) => ConciseMessage::PrimaryAnnotation(annotation), } else {
(false, false) => ConciseMessage::Both { main, annotation }, ConciseMessage::Both { main, annotation }
(true, true) => ConciseMessage::Empty,
} }
} }
} }
@ -881,6 +842,10 @@ impl Annotation {
pub fn hide_snippet(&mut self, yes: bool) { pub fn hide_snippet(&mut self, yes: bool) {
self.hide_snippet = yes; self.hide_snippet = yes;
} }
pub fn is_primary(&self) -> bool {
self.is_primary
}
} }
/// Tags that can be associated with an annotation. /// Tags that can be associated with an annotation.
@ -1501,28 +1466,10 @@ pub enum DiagnosticFormat {
pub enum ConciseMessage<'a> { pub enum ConciseMessage<'a> {
/// A diagnostic contains a non-empty main message and an empty /// A diagnostic contains a non-empty main message and an empty
/// primary annotation message. /// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
MainDiagnostic(&'a str), MainDiagnostic(&'a str),
/// A diagnostic contains an empty main message and a non-empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "old" data model.
PrimaryAnnotation(&'a str),
/// A diagnostic contains a non-empty main message and a non-empty /// A diagnostic contains a non-empty main message and a non-empty
/// primary annotation message. /// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
Both { main: &'a str, annotation: &'a str }, Both { main: &'a str, annotation: &'a str },
/// A diagnostic contains an empty main message and an empty
/// primary annotation message.
///
/// This indicates that the diagnostic is probably using the old
/// model.
Empty,
/// A custom concise message has been provided. /// A custom concise message has been provided.
Custom(&'a str), Custom(&'a str),
} }
@ -1533,13 +1480,9 @@ impl std::fmt::Display for ConciseMessage<'_> {
ConciseMessage::MainDiagnostic(main) => { ConciseMessage::MainDiagnostic(main) => {
write!(f, "{main}") write!(f, "{main}")
} }
ConciseMessage::PrimaryAnnotation(annotation) => {
write!(f, "{annotation}")
}
ConciseMessage::Both { main, annotation } => { ConciseMessage::Both { main, annotation } => {
write!(f, "{main}: {annotation}") write!(f, "{main}: {annotation}")
} }
ConciseMessage::Empty => Ok(()),
ConciseMessage::Custom(message) => { ConciseMessage::Custom(message) => {
write!(f, "{message}") write!(f, "{message}")
} }


@ -21,7 +21,11 @@ use crate::source::source_text;
/// reflected in the changed AST offsets. /// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires /// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires
/// for determining if a query result is unchanged. /// for determining if a query result is unchanged.
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size)] ///
/// The LRU capacity of 200 was picked without any empirical evidence that it's optimal;
/// it's a rough guess based on the assumption that incremental changes are unlikely to
/// involve more than 200 modules. Parsed ASTs within the same revision are never evicted by Salsa.
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size, lru=200)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule { pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
let _span = tracing::trace_span!("parsed_module", ?file).entered(); let _span = tracing::trace_span!("parsed_module", ?file).entered();
@ -92,14 +96,9 @@ impl ParsedModule {
self.inner.store(None); self.inner.store(None);
} }
/// Returns the pointer address of this [`ParsedModule`]. /// Returns the file to which this module belongs.
/// pub fn file(&self) -> File {
/// The pointer uniquely identifies the module within the current Salsa revision, self.file
/// regardless of whether particular [`ParsedModuleRef`] instances are garbage collected.
pub fn addr(&self) -> usize {
// Note that the outer `Arc` in `inner` is stable across garbage collection, while the inner
// `Arc` within the `ArcSwap` may change.
Arc::as_ptr(&self.inner).addr()
} }
} }


@ -667,6 +667,13 @@ impl Deref for SystemPathBuf {
} }
} }
impl AsRef<Path> for SystemPathBuf {
#[inline]
fn as_ref(&self) -> &Path {
self.0.as_std_path()
}
}
impl<P: AsRef<SystemPath>> FromIterator<P> for SystemPathBuf { impl<P: AsRef<SystemPath>> FromIterator<P> for SystemPathBuf {
fn from_iter<I: IntoIterator<Item = P>>(iter: I) -> Self { fn from_iter<I: IntoIterator<Item = P>>(iter: I) -> Self {
let mut buf = SystemPathBuf::new(); let mut buf = SystemPathBuf::new();


@ -144,8 +144,8 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n'); output.push('\n');
if let Some(deprecated) = &field.deprecated { if let Some(deprecated) = &field.deprecated {
output.push_str("> [!WARN] \"Deprecated\"\n"); output.push_str("!!! warning \"Deprecated\"\n");
output.push_str("> This option has been deprecated"); output.push_str(" This option has been deprecated");
if let Some(since) = deprecated.since { if let Some(since) = deprecated.since {
write!(output, " in {since}").unwrap(); write!(output, " in {since}").unwrap();
@ -166,8 +166,9 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n'); output.push('\n');
let _ = writeln!(output, "**Type**: `{}`", field.value_type); let _ = writeln!(output, "**Type**: `{}`", field.value_type);
output.push('\n'); output.push('\n');
output.push_str("**Example usage** (`pyproject.toml`):\n\n"); output.push_str("**Example usage**:\n\n");
output.push_str(&format_example( output.push_str(&format_example(
"pyproject.toml",
&format_header( &format_header(
field.scope, field.scope,
field.example, field.example,
@ -179,11 +180,11 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
output.push('\n'); output.push('\n');
} }
fn format_example(header: &str, content: &str) -> String { fn format_example(title: &str, header: &str, content: &str) -> String {
if header.is_empty() { if header.is_empty() {
format!("```toml\n{content}\n```\n",) format!("```toml title=\"{title}\"\n{content}\n```\n",)
} else { } else {
format!("```toml\n{header}\n{content}\n```\n",) format!("```toml title=\"{title}\"\n{header}\n{content}\n```\n",)
} }
} }


@ -39,7 +39,7 @@ impl Edit {
/// Creates an edit that replaces the content in `range` with `content`. /// Creates an edit that replaces the content in `range` with `content`.
pub fn range_replacement(content: String, range: TextRange) -> Self { pub fn range_replacement(content: String, range: TextRange) -> Self {
debug_assert!(!content.is_empty(), "Prefer `Fix::deletion`"); debug_assert!(!content.is_empty(), "Prefer `Edit::deletion`");
Self { Self {
content: Some(Box::from(content)), content: Some(Box::from(content)),


@ -149,6 +149,10 @@ impl Fix {
&self.edits &self.edits
} }
pub fn into_edits(self) -> Vec<Edit> {
self.edits
}
/// Return the [`Applicability`] of the [`Fix`]. /// Return the [`Applicability`] of the [`Fix`].
pub fn applicability(&self) -> Applicability { pub fn applicability(&self) -> Applicability {
self.applicability self.applicability


@ -337,7 +337,7 @@ macro_rules! best_fitting {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::prelude::*; use crate::prelude::*;
use crate::{FormatState, SimpleFormatOptions, VecBuffer, write}; use crate::{FormatState, SimpleFormatOptions, VecBuffer};
struct TestFormat; struct TestFormat;
@ -385,8 +385,8 @@ mod tests {
#[test] #[test]
fn best_fitting_variants_print_as_lists() { fn best_fitting_variants_print_as_lists() {
use crate::Formatted;
use crate::prelude::*; use crate::prelude::*;
use crate::{Formatted, format, format_args};
// The second variant below should be selected when printing at a width of 30 // The second variant below should be selected when printing at a width of 30
let formatted_best_fitting = format!( let formatted_best_fitting = format!(


@ -49,7 +49,7 @@ impl ModuleImports {
// Resolve the imports. // Resolve the imports.
let mut resolved_imports = ModuleImports::default(); let mut resolved_imports = ModuleImports::default();
for import in imports { for import in imports {
for resolved in Resolver::new(db).resolve(import) { for resolved in Resolver::new(db, path).resolve(import) {
if let Some(path) = resolved.as_system_path() { if let Some(path) = resolved.as_system_path() {
resolved_imports.insert(path.to_path_buf()); resolved_imports.insert(path.to_path_buf());
} }


@ -1,5 +1,9 @@
use ruff_db::files::FilePath; use ruff_db::files::{File, FilePath, system_path_to_file};
use ty_python_semantic::{ModuleName, resolve_module, resolve_real_module}; use ruff_db::system::SystemPath;
use ty_python_semantic::{
ModuleName, resolve_module, resolve_module_confident, resolve_real_module,
resolve_real_module_confident,
};
use crate::ModuleDb; use crate::ModuleDb;
use crate::collector::CollectedImport; use crate::collector::CollectedImport;
@ -7,12 +11,15 @@ use crate::collector::CollectedImport;
/// Collect all imports for a given Python file. /// Collect all imports for a given Python file.
pub(crate) struct Resolver<'a> { pub(crate) struct Resolver<'a> {
db: &'a ModuleDb, db: &'a ModuleDb,
file: Option<File>,
} }
impl<'a> Resolver<'a> { impl<'a> Resolver<'a> {
/// Initialize a [`Resolver`] with a given [`ModuleDb`]. /// Initialize a [`Resolver`] with a given [`ModuleDb`].
pub(crate) fn new(db: &'a ModuleDb) -> Self { pub(crate) fn new(db: &'a ModuleDb, path: &SystemPath) -> Self {
Self { db } // If we know the importing file we can potentially resolve more imports
let file = system_path_to_file(db, path).ok();
Self { db, file }
} }
/// Resolve the [`CollectedImport`] into a [`FilePath`]. /// Resolve the [`CollectedImport`] into a [`FilePath`].
@ -70,13 +77,21 @@ impl<'a> Resolver<'a> {
/// Resolves a module name to a module. /// Resolves a module name to a module.
pub(crate) fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> { pub(crate) fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_module(self.db, module_name)?; let module = if let Some(file) = self.file {
resolve_module(self.db, file, module_name)?
} else {
resolve_module_confident(self.db, module_name)?
};
Some(module.file(self.db)?.path(self.db)) Some(module.file(self.db)?.path(self.db))
} }
/// Resolves a module name to a module (stubs not allowed). /// Resolves a module name to a module (stubs not allowed).
fn resolve_real_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> { fn resolve_real_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_real_module(self.db, module_name)?; let module = if let Some(file) = self.file {
resolve_real_module(self.db, file, module_name)?
} else {
resolve_real_module_confident(self.db, module_name)?
};
Some(module.file(self.db)?.path(self.db)) Some(module.file(self.db)?.path(self.db))
} }
} }


@ -1,6 +1,6 @@
[package] [package]
name = "ruff_linter" name = "ruff_linter"
version = "0.14.7" version = "0.14.9"
publish = false publish = false
authors = { workspace = true } authors = { workspace = true }
edition = { workspace = true } edition = { workspace = true }
@ -35,6 +35,7 @@ anyhow = { workspace = true }
bitflags = { workspace = true } bitflags = { workspace = true }
clap = { workspace = true, features = ["derive", "string"], optional = true } clap = { workspace = true, features = ["derive", "string"], optional = true }
colored = { workspace = true } colored = { workspace = true }
compact_str = { workspace = true }
fern = { workspace = true } fern = { workspace = true }
glob = { workspace = true } glob = { workspace = true }
globset = { workspace = true } globset = { workspace = true }


@ -28,9 +28,11 @@ yaml.load("{}", SafeLoader)
yaml.load("{}", yaml.SafeLoader) yaml.load("{}", yaml.SafeLoader)
yaml.load("{}", CSafeLoader) yaml.load("{}", CSafeLoader)
yaml.load("{}", yaml.CSafeLoader) yaml.load("{}", yaml.CSafeLoader)
yaml.load("{}", yaml.cyaml.CSafeLoader)
yaml.load("{}", NewSafeLoader) yaml.load("{}", NewSafeLoader)
yaml.load("{}", Loader=SafeLoader) yaml.load("{}", Loader=SafeLoader)
yaml.load("{}", Loader=yaml.SafeLoader) yaml.load("{}", Loader=yaml.SafeLoader)
yaml.load("{}", Loader=CSafeLoader) yaml.load("{}", Loader=CSafeLoader)
yaml.load("{}", Loader=yaml.CSafeLoader) yaml.load("{}", Loader=yaml.CSafeLoader)
yaml.load("{}", Loader=yaml.cyaml.CSafeLoader)
yaml.load("{}", Loader=NewSafeLoader) yaml.load("{}", Loader=NewSafeLoader)


@ -199,6 +199,9 @@ def bytes_okay(value=bytes(1)):
def int_okay(value=int("12")): def int_okay(value=int("12")):
pass pass
# Allow immutable slice()
def slice_okay(value=slice(1,2)):
pass
# Allow immutable complex() value # Allow immutable complex() value
def complex_okay(value=complex(1,2)): def complex_okay(value=complex(1,2)):


@ -52,16 +52,16 @@ def not_broken5():
yield inner() yield inner()
def not_broken6(): def broken3():
return (yield from []) return (yield from [])
def not_broken7(): def broken4():
x = yield from [] x = yield from []
return x return x
def not_broken8(): def broken5():
x = None x = None
def inner(ex): def inner(ex):
@ -76,3 +76,13 @@ class NotBroken9(object):
def __await__(self): def __await__(self):
yield from function() yield from function()
return 42 return 42
async def broken6():
yield 1
return foo()
async def broken7():
yield 1
return [1, 2, 3]


@ -216,3 +216,15 @@ def get_items_list():
def get_items_set(): def get_items_set():
return tuple({item for item in items}) or None # OK return tuple({item for item in items}) or None # OK
# https://github.com/astral-sh/ruff/issues/21473
tuple("") or True # SIM222
tuple(t"") or True # OK
tuple(0) or True # OK
tuple(1) or True # OK
tuple(False) or True # OK
tuple(None) or True # OK
tuple(...) or True # OK
tuple(lambda x: x) or True # OK
tuple(x for x in range(0)) or True # OK


@ -157,3 +157,15 @@ print(f"{1}{''}" and "bar")
# https://github.com/astral-sh/ruff/issues/7127 # https://github.com/astral-sh/ruff/issues/7127
def f(a: "'' and 'b'"): ... def f(a: "'' and 'b'"): ...
# https://github.com/astral-sh/ruff/issues/21473
tuple("") and False # SIM223
tuple(t"") and False # OK
tuple(0) and False # OK
tuple(1) and False # OK
tuple(False) and False # OK
tuple(None) and False # OK
tuple(...) and False # OK
tuple(lambda x: x) and False # OK
tuple(x for x in range(0)) and False # OK


@ -218,3 +218,26 @@ def should_not_fail(payload, Args):
Args: Args:
The other arguments. The other arguments.
""" """
# Test cases for Unpack[TypedDict] kwargs
from typing import TypedDict
from typing_extensions import Unpack
class User(TypedDict):
id: int
name: str
def function_with_unpack_args_should_not_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs.
Args:
query: some arg
"""
def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs but missing query arg documentation.
Args:
**kwargs: keyword arguments
"""


@ -17,3 +17,24 @@ def _():
# Valid yield scope # Valid yield scope
yield 3 yield 3
# await is valid in any generator, sync or async
(await cor async for cor in f()) # ok
(await cor for cor in f()) # ok
# but not in comprehensions
[await cor async for cor in f()] # F704
{await cor async for cor in f()} # F704
{await cor: 1 async for cor in f()} # F704
[await cor for cor in f()] # F704
{await cor for cor in f()} # F704
{await cor: 1 for cor in f()} # F704
# or in the iterator of an async generator, which is evaluated in the parent
# scope
(cor async for cor in await f()) # F704
(await cor async for cor in [await c for c in f()]) # F704
# this is also okay because the comprehension is within the generator scope
([await c for c in cor] async for cor in f()) # ok


@ -2,15 +2,40 @@ from abc import ABC, abstractmethod
from contextlib import suppress from contextlib import suppress
class MyError(Exception):
...
class MySubError(MyError):
...
class MyValueError(ValueError):
...
class MyUserWarning(UserWarning):
...
# Violation test cases with builtin errors: PLW0133
# Test case 1: Useless exception statement # Test case 1: Useless exception statement
def func(): def func():
AssertionError("This is an assertion error") # PLW0133 AssertionError("This is an assertion error") # PLW0133
MyError("This is a custom error") # PLW0133
MySubError("This is a custom error") # PLW0133
MyValueError("This is a custom value error") # PLW0133
# Test case 2: Useless exception statement in try-except block # Test case 2: Useless exception statement in try-except block
def func(): def func():
try: try:
Exception("This is an exception") # PLW0133 Exception("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
except Exception as err: except Exception as err:
pass pass
@ -19,6 +44,9 @@ def func():
def func(): def func():
if True: if True:
RuntimeError("This is an exception") # PLW0133 RuntimeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 4: Useless exception statement in class # Test case 4: Useless exception statement in class
@ -26,12 +54,18 @@ def func():
class Class: class Class:
def __init__(self): def __init__(self):
TypeError("This is an exception") # PLW0133 TypeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 5: Useless exception statement in function # Test case 5: Useless exception statement in function
def func(): def func():
def inner(): def inner():
IndexError("This is an exception") # PLW0133 IndexError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
inner() inner()
@ -40,6 +74,9 @@ def func():
def func(): def func():
while True: while True:
KeyError("This is an exception") # PLW0133 KeyError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 7: Useless exception statement in abstract class # Test case 7: Useless exception statement in abstract class
@ -48,27 +85,58 @@ def func():
@abstractmethod @abstractmethod
def method(self): def method(self):
NotImplementedError("This is an exception") # PLW0133 NotImplementedError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 8: Useless exception statement inside context manager # Test case 8: Useless exception statement inside context manager
def func(): def func():
with suppress(AttributeError): with suppress(Exception):
AttributeError("This is an exception") # PLW0133 AttributeError("This is an exception") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
# Test case 9: Useless exception statement in parentheses # Test case 9: Useless exception statement in parentheses
def func(): def func():
(RuntimeError("This is an exception")) # PLW0133 (RuntimeError("This is an exception")) # PLW0133
(MyError("This is an exception")) # PLW0133
(MySubError("This is an exception")) # PLW0133
(MyValueError("This is an exception")) # PLW0133
# Test case 10: Useless exception statement in continuation # Test case 10: Useless exception statement in continuation
def func(): def func():
x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133 x = 1; (RuntimeError("This is an exception")); y = 2 # PLW0133
x = 1; (MyError("This is an exception")); y = 2 # PLW0133
x = 1; (MySubError("This is an exception")); y = 2 # PLW0133
x = 1; (MyValueError("This is an exception")); y = 2 # PLW0133
# Test case 11: Useless warning statement # Test case 11: Useless warning statement
def func(): def func():
UserWarning("This is an assertion error") # PLW0133 UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Test case 12: Useless exception statement at module level
import builtins
builtins.TypeError("still an exception even though it's an Attribute") # PLW0133
PythonFinalizationError("Added in Python 3.13") # PLW0133
MyError("This is an exception") # PLW0133
MySubError("This is an exception") # PLW0133
MyValueError("This is an exception") # PLW0133
UserWarning("This is a user warning") # PLW0133
MyUserWarning("This is a custom user warning") # PLW0133
# Non-violation test cases: PLW0133 # Non-violation test cases: PLW0133
@ -119,10 +187,3 @@ def func():
def func(): def func():
with suppress(AttributeError): with suppress(AttributeError):
raise AttributeError("This is an exception") # OK raise AttributeError("This is an exception") # OK
import builtins
builtins.TypeError("still an exception even though it's an Attribute")
PythonFinalizationError("Added in Python 3.13")


@ -132,7 +132,6 @@ async def c():
# Non-errors # Non-errors
### ###
# False-negative: RustPython doesn't parse the `\N{snowman}`.
"\N{snowman} {}".format(a) "\N{snowman} {}".format(a)
"{".format(a) "{".format(a)
@ -276,3 +275,6 @@ if __name__ == "__main__":
number = 0 number = 0
string = "{}".format(number := number + 1) string = "{}".format(number := number + 1)
print(string) print(string)
# Unicode escape
"\N{angle}AOB = {angle}°".format(angle=180)


@ -138,5 +138,6 @@ with open("file.txt", encoding="utf-8") as f:
with open("file.txt", encoding="utf-8") as f: with open("file.txt", encoding="utf-8") as f:
contents = process_contents(f.read()) contents = process_contents(f.read())
with open("file.txt", encoding="utf-8") as f: with open("file1.txt", encoding="utf-8") as f:
contents: str = process_contents(f.read()) contents: str = process_contents(f.read())


@ -0,0 +1,8 @@
from pathlib import Path
with Path("file.txt").open() as f:
contents = f.read()
with Path("file.txt").open("r") as f:
contents = f.read()


@ -0,0 +1,26 @@
from pathlib import Path
with Path("file.txt").open("w") as f:
f.write("test")
with Path("file.txt").open("wb") as f:
f.write(b"test")
with Path("file.txt").open(mode="w") as f:
f.write("test")
with Path("file.txt").open("w", encoding="utf8") as f:
f.write("test")
with Path("file.txt").open("w", errors="ignore") as f:
f.write("test")
with Path(foo()).open("w") as f:
f.write("test")
p = Path("file.txt")
with p.open("w") as f:
f.write("test")
with Path("foo", "bar", "baz").open("w") as f:
f.write("test")
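The fixture above adds `Path(...).open("w")` write patterns, which appear to exercise refurb's write-whole-file rule (`FURB103`). As context, a hedged sketch of the simplification this rule family points toward; the exact autofix output is not shown in this diff:

```python
from pathlib import Path

# Pattern of the kind flagged above: opening a Path only to write the whole contents.
with Path("file.txt").open("w") as f:
    f.write("test")

# The direction the rule suggests: a single write call on the Path itself.
Path("file.txt").write_text("test")
Path("file.txt").write_bytes(b"test")  # the binary ("wb") case
```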


@ -0,0 +1,88 @@
def f():
# These should both be ignored by the range suppression.
# ruff: disable[E741, F841]
I = 1
# ruff: enable[E741, F841]
def f():
# These should both be ignored by the implicit range suppression.
# Should also generate an "unmatched suppression" warning.
# ruff:disable[E741,F841]
I = 1
def f():
# Neither warning is ignored, and an "unmatched suppression"
# should be generated.
I = 1
# ruff: enable[E741, F841]
def f():
# One should be ignored by the range suppression, and
# the other logged to the user.
# ruff: disable[E741]
I = 1
# ruff: enable[E741]
def f():
# Test interleaved range suppressions. The first and last
# lines should each log a different warning, while the
# middle line should be completely silenced.
# ruff: disable[E741]
l = 0
# ruff: disable[F841]
O = 1
# ruff: enable[E741]
I = 2
# ruff: enable[F841]
def f():
# Neither of these are ignored and warnings are
# logged to user
# ruff: disable[E501]
I = 1
# ruff: enable[E501]
def f():
# These should both be ignored by the range suppression,
# and an unused noqa diagnostic should be logged.
# ruff:disable[E741,F841]
I = 1 # noqa: E741,F841
# ruff:enable[E741,F841]
def f():
# TODO: Duplicate codes should be counted as duplicate, not unused
# ruff: disable[F841, F841]
foo = 0
def f():
# Overlapping range suppressions, one should be marked as used,
# and the other should trigger an unused suppression diagnostic
# ruff: disable[F841]
# ruff: disable[F841]
foo = 0
def f():
# Multiple codes but only one is used
# ruff: disable[E741, F401, F841]
foo = 0
def f():
# Multiple codes but only two are used
# ruff: disable[E741, F401, F841]
I = 0
def f():
# Multiple codes but none are used
# ruff: disable[E741, F401, F841]
print("hello")

View File

@ -0,0 +1,38 @@
a: int = 1

def f1():
    global a
    a: str = "foo"  # error

b: int = 1

def outer():
    def inner():
        global b
        b: str = "nested"  # error

c: int = 1

def f2():
    global c
    c: list[str] = []  # error

d: int = 1

def f3():
    global d
    d: str  # error

e: int = 1

def f4():
    e: str = "happy"  # okay

global f
f: int = 1  # okay

g: int = 1
global g  # error

class C:
    x: str
    global x  # error

class D:
    global x  # error
    x: str
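The `# error` cases above mirror CPython's own compile-time check; a quick way to see the message on recent CPython versions:

```python
src = """
x: int = 1

def f():
    global x
    x: str = "foo"
"""

try:
    compile(src, "<fixture>", "exec")
except SyntaxError as err:
    print(err.msg)  # annotated name 'x' can't be global
```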

View File

@ -3,3 +3,5 @@ def func():
# Top-level await
await 1
+ ([await c for c in cor] async for cor in func())  # ok

View File

@ -0,0 +1,24 @@
async def gen():
    yield 1
    return 42

def gen():  # B901 but not a syntax error - not an async generator
    yield 1
    return 42

async def gen():  # ok - no value in return
    yield 1
    return

async def gen():
    yield 1
    return foo()

async def gen():
    yield 1
    return [1, 2, 3]

async def gen():
    if True:
        yield 1
    return 10
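The first case is a hard error in CPython itself, which is why the fixture distinguishes async from non-async generators (the latter is only B901):

```python
src = """
async def gen():
    yield 1
    return 42
"""

try:
    compile(src, "<fixture>", "exec")
except SyntaxError as err:
    print(err.msg)  # 'return' with value in async generator
```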

View File

@ -35,6 +35,7 @@ use ruff_python_ast::helpers::{collect_import_from_member, is_docstring_stmt, to
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::str::Quote;
+ use ruff_python_ast::token::Tokens;
use ruff_python_ast::visitor::{Visitor, walk_except_handler, walk_pattern};
use ruff_python_ast::{
    self as ast, AnyParameterRef, ArgOrKeyword, Comprehension, ElifElseClause, ExceptHandler, Expr,
@ -48,7 +49,7 @@ use ruff_python_parser::semantic_errors::{
    SemanticSyntaxChecker, SemanticSyntaxContext, SemanticSyntaxError, SemanticSyntaxErrorKind,
};
use ruff_python_parser::typing::{AnnotationKind, ParsedAnnotation, parse_type_annotation};
- use ruff_python_parser::{ParseError, Parsed, Tokens};
+ use ruff_python_parser::{ParseError, Parsed};
use ruff_python_semantic::all::{DunderAllDefinition, DunderAllFlags};
use ruff_python_semantic::analyze::{imports, typing};
use ruff_python_semantic::{
@ -68,6 +69,7 @@ use crate::noqa::NoqaMapping;
use crate::package::PackageRoot;
use crate::preview::is_undefined_export_in_dunder_init_enabled;
use crate::registry::Rule;
+ use crate::rules::flake8_bugbear::rules::ReturnInGenerator;
use crate::rules::pyflakes::rules::{
    LateFutureImport, MultipleStarredExpressions, ReturnOutsideFunction,
    UndefinedLocalWithNestedImportStarUsage, YieldOutsideFunction,
@ -435,6 +437,15 @@ impl<'a> Checker<'a> {
        }
    }

+     /// Returns the [`Tokens`] for the parsed source file.
+     ///
+     /// Unlike [`Self::tokens`], this method always returns
+     /// the tokens for the current file, even when within a parsed type annotation.
+     pub(crate) fn source_tokens(&self) -> &'a Tokens {
+         self.parsed.tokens()
+     }
+
    /// The [`Locator`] for the current file, which enables extraction of source code from byte
    /// offsets.
    pub(crate) const fn locator(&self) -> &'a Locator<'a> {
@ -728,6 +739,12 @@ impl SemanticSyntaxContext for Checker<'_> {
                    self.report_diagnostic(NonlocalWithoutBinding { name }, error.range);
                }
            }
+             SemanticSyntaxErrorKind::ReturnInGenerator => {
+                 // B901
+                 if self.is_rule_enabled(Rule::ReturnInGenerator) {
+                     self.report_diagnostic(ReturnInGenerator, error.range);
+                 }
+             }
            SemanticSyntaxErrorKind::ReboundComprehensionVariable
            | SemanticSyntaxErrorKind::DuplicateTypeParameter
            | SemanticSyntaxErrorKind::MultipleCaseAssignment(_)
@ -746,6 +763,7 @@ impl SemanticSyntaxContext for Checker<'_> {
            | SemanticSyntaxErrorKind::LoadBeforeNonlocalDeclaration { .. }
            | SemanticSyntaxErrorKind::NonlocalAndGlobal(_)
            | SemanticSyntaxErrorKind::AnnotatedGlobal(_)
+             | SemanticSyntaxErrorKind::TypeParameterDefaultOrder(_)
            | SemanticSyntaxErrorKind::AnnotatedNonlocal(_) => {
                self.semantic_errors.borrow_mut().push(error);
            }
@ -779,6 +797,10 @@ impl SemanticSyntaxContext for Checker<'_> {
            match scope.kind {
                ScopeKind::Class(_) => return false,
                ScopeKind::Function(_) | ScopeKind::Lambda(_) => return true,
+                 ScopeKind::Generator {
+                     kind: GeneratorKind::Generator,
+                     ..
+                 } => return true,
                ScopeKind::Generator { .. }
                | ScopeKind::Module
                | ScopeKind::Type
@ -828,14 +850,19 @@ impl SemanticSyntaxContext for Checker<'_> {
        self.source_type.is_ipynb()
    }

-     fn in_generator_scope(&self) -> bool {
-         matches!(
-             &self.semantic.current_scope().kind,
-             ScopeKind::Generator {
-                 kind: GeneratorKind::Generator,
-                 ..
-             }
-         )
-     }
+     fn in_generator_context(&self) -> bool {
+         for scope in self.semantic.current_scopes() {
+             if matches!(
+                 scope.kind,
+                 ScopeKind::Generator {
+                     kind: GeneratorKind::Generator,
+                     ..
+                 }
+             ) {
+                 return true;
+             }
+         }
+         false
+     }

    fn in_loop_context(&self) -> bool {

View File

@ -1,6 +1,6 @@
+ use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
- use ruff_python_parser::{TokenKind, Tokens};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};

View File

@ -12,17 +12,20 @@ use crate::fix::edits::delete_comment;
use crate::noqa::{
    Code, Directive, FileExemption, FileNoqaDirectives, NoqaDirectives, NoqaMapping,
};
+ use crate::preview::is_range_suppressions_enabled;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::rules::pygrep_hooks;
use crate::rules::ruff;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA};
use crate::settings::LinterSettings;
+ use crate::suppression::Suppressions;
use crate::{Edit, Fix, Locator};

use super::ast::LintContext;

/// RUF100
+ #[expect(clippy::too_many_arguments)]
pub(crate) fn check_noqa(
    context: &mut LintContext,
    path: &Path,
@ -31,6 +34,7 @@ pub(crate) fn check_noqa(
    noqa_line_for: &NoqaMapping,
    analyze_directives: bool,
    settings: &LinterSettings,
+     suppressions: &Suppressions,
) -> Vec<usize> {
    // Identify any codes that are globally exempted (within the current file).
    let file_noqa_directives =
@ -40,7 +44,7 @@ pub(crate) fn check_noqa(
    let mut noqa_directives =
        NoqaDirectives::from_commented_ranges(comment_ranges, &settings.external, path, locator);

-     if file_noqa_directives.is_empty() && noqa_directives.is_empty() {
+     if file_noqa_directives.is_empty() && noqa_directives.is_empty() && suppressions.is_empty() {
        return Vec::new();
    }
@ -60,11 +64,19 @@ pub(crate) fn check_noqa(
            continue;
        }

+         // Apply file-level suppressions first
        if exemption.contains_secondary_code(code) {
            ignored_diagnostics.push(index);
            continue;
        }

+         // Apply ranged suppressions next
+         if is_range_suppressions_enabled(settings) && suppressions.check_diagnostic(diagnostic) {
+             ignored_diagnostics.push(index);
+             continue;
+         }

+         // Apply end-of-line noqa suppressions last
        let noqa_offsets = diagnostic
            .parent()
            .into_iter()
@ -107,6 +119,9 @@ pub(crate) fn check_noqa(
        }
    }

+     // Diagnostics for unused/invalid range suppressions
+     suppressions.check_suppressions(context, locator);

    // Enforce that the noqa directive was actually used (RUF100), unless RUF100 was itself
    // suppressed.
    if context.is_rule_enabled(Rule::UnusedNOQA)
@ -128,8 +143,13 @@ pub(crate) fn check_noqa(
                Directive::All(directive) => {
                    if matches.is_empty() {
                        let edit = delete_comment(directive.range(), locator);
-                         let mut diagnostic = context
-                             .report_diagnostic(UnusedNOQA { codes: None }, directive.range());
+                         let mut diagnostic = context.report_diagnostic(
+                             UnusedNOQA {
+                                 codes: None,
+                                 kind: ruff::rules::UnusedNOQAKind::Noqa,
+                             },
+                             directive.range(),
+                         );
                        diagnostic.add_primary_tag(ruff_db::diagnostic::DiagnosticTag::Unnecessary);
                        diagnostic.set_fix(Fix::safe_edit(edit));
                    }
@ -224,6 +244,7 @@ pub(crate) fn check_noqa(
                            .map(|code| (*code).to_string())
                            .collect(),
                    }),
+                     kind: ruff::rules::UnusedNOQAKind::Noqa,
                },
                directive.range(),
            );
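A small Python sketch of the three suppression layers in the order applied above; the range form assumes preview mode, and the rule codes are only illustrative.

```python
# 1. File-level exemption: silences F401 everywhere in this file.
# ruff: noqa: F401
import os

# 2. Range suppression (preview): silences F841 between the markers.
# ruff: disable[F841]
unused = 1
# ruff: enable[F841]


# 3. End-of-line noqa: silences F841 on this line only.
def f():
    x = 1  # noqa: F841
```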

View File

@ -4,9 +4,9 @@ use std::path::Path;
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
+ use ruff_python_ast::token::Tokens;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
- use ruff_python_parser::Tokens;

use crate::Locator;
use crate::directives::TodoComment;

View File

@ -5,8 +5,8 @@ use std::str::FromStr;
use bitflags::bitflags;
+ use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_index::Indexer;
- use ruff_python_parser::{TokenKind, Tokens};
use ruff_python_trivia::CommentRanges;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

View File

@ -5,8 +5,8 @@ use std::iter::FusedIterator;
use std::slice::Iter;

use ruff_python_ast::statement_visitor::{StatementVisitor, walk_stmt};
+ use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_python_ast::{self as ast, Stmt, Suite};
- use ruff_python_parser::{Token, TokenKind, Tokens};
use ruff_source_file::UniversalNewlineIterator;
use ruff_text_size::{Ranged, TextSize};

View File

@ -3,14 +3,13 @@
use anyhow::{Context, Result};

use ruff_python_ast::AnyNodeRef;
- use ruff_python_ast::parenthesize::parenthesized_range;
+ use ruff_python_ast::token::{self, Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Arguments, ExceptHandler, Expr, ExprList, Parameters, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent_to;
use ruff_python_trivia::{
-     CommentRanges, PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content,
-     is_python_whitespace,
+     PythonWhitespace, SimpleTokenKind, SimpleTokenizer, has_leading_content, is_python_whitespace,
};
use ruff_source_file::{LineRanges, NewlineWithTrailingNewline, UniversalNewlines};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@ -209,7 +208,7 @@ pub(crate) fn remove_argument<T: Ranged>(
    arguments: &Arguments,
    parentheses: Parentheses,
    source: &str,
-     comment_ranges: &CommentRanges,
+     tokens: &Tokens,
) -> Result<Edit> {
    // Partition into arguments before and after the argument to remove.
    let (before, after): (Vec<_>, Vec<_>) = arguments
@ -224,7 +223,7 @@ pub(crate) fn remove_argument<T: Ranged>(
        .context("Unable to find argument")?;

    let parenthesized_range =
-         parenthesized_range(arg.value().into(), arguments.into(), comment_ranges, source)
+         token::parenthesized_range(arg.value().into(), arguments.into(), tokens)
            .unwrap_or(arg.range());

    if !after.is_empty() {
@ -270,24 +269,13 @@ pub(crate) fn remove_argument<T: Ranged>(
///
/// The new argument will be inserted before the first existing keyword argument in `arguments`, if
/// there are any present. Otherwise, the new argument is added to the end of the argument list.
- pub(crate) fn add_argument(
-     argument: &str,
-     arguments: &Arguments,
-     comment_ranges: &CommentRanges,
-     source: &str,
- ) -> Edit {
+ pub(crate) fn add_argument(argument: &str, arguments: &Arguments, tokens: &Tokens) -> Edit {
    if let Some(ast::Keyword { range, value, .. }) = arguments.keywords.first() {
-         let keyword = parenthesized_range(value.into(), arguments.into(), comment_ranges, source)
-             .unwrap_or(*range);
+         let keyword = parenthesized_range(value.into(), arguments.into(), tokens).unwrap_or(*range);
        Edit::insertion(format!("{argument}, "), keyword.start())
    } else if let Some(last) = arguments.arguments_source_order().last() {
        // Case 1: existing arguments, so append after the last argument.
-         let last = parenthesized_range(
-             last.value().into(),
-             arguments.into(),
-             comment_ranges,
-             source,
-         )
+         let last = parenthesized_range(last.value().into(), arguments.into(), tokens)
            .unwrap_or(last.range());
        Edit::insertion(format!(", {argument}"), last.end())
    } else {
@ -298,12 +286,7 @@ pub(crate) fn add_argument(
/// Generic function to add a (regular) parameter to a function definition.
pub(crate) fn add_parameter(parameter: &str, parameters: &Parameters, source: &str) -> Edit {
-     if let Some(last) = parameters
-         .args
-         .iter()
-         .filter(|arg| arg.default.is_none())
-         .next_back()
-     {
+     if let Some(last) = parameters.args.iter().rfind(|arg| arg.default.is_none()) {
        // Case 1: at least one regular parameter, so append after the last one.
        Edit::insertion(format!(", {parameter}"), last.end())
    } else if !parameters.args.is_empty() {

View File

@ -9,10 +9,11 @@ use anyhow::Result;
use libcst_native as cst;

use ruff_diagnostics::Edit;
+ use ruff_python_ast::token::Tokens;
use ruff_python_ast::{self as ast, Expr, ModModule, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_importer::Insertion;
- use ruff_python_parser::{Parsed, Tokens};
+ use ruff_python_parser::Parsed;
use ruff_python_semantic::{
    ImportedName, MemberNameImport, ModuleNameImport, NameImport, SemanticModel,
};

View File

@ -46,6 +46,7 @@ pub mod rule_selector;
pub mod rules;
pub mod settings;
pub mod source_kind;
+ pub mod suppression;
mod text_helpers;
pub mod upstream_categories;
mod violation;

View File

@ -32,6 +32,7 @@ use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
+ use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};

pub(crate) mod float;
@ -128,6 +129,7 @@ pub fn check_path(
    source_type: PySourceType,
    parsed: &Parsed<ModModule>,
    target_version: TargetVersion,
+     suppressions: &Suppressions,
) -> Vec<Diagnostic> {
    // Aggregate all diagnostics.
    let mut context = LintContext::new(path, locator.contents(), settings);
@ -339,6 +341,7 @@ pub fn check_path(
            &directives.noqa_line_for,
            parsed.has_valid_syntax(),
            settings,
+             suppressions,
        );
        if noqa.is_enabled() {
            for index in ignored.iter().rev() {
@ -400,6 +403,9 @@ pub fn add_noqa_to_path(
        &indexer,
    );

+     // Parse range suppression comments
+     let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());

    // Generate diagnostics, ignoring any existing `noqa` directives.
    let diagnostics = check_path(
        path,
@ -414,6 +420,7 @@ pub fn add_noqa_to_path(
        source_type,
        &parsed,
        target_version,
+         &suppressions,
    );

    // Add any missing `# noqa` pragmas.
@ -427,6 +434,7 @@ pub fn add_noqa_to_path(
        &directives.noqa_line_for,
        stylist.line_ending(),
        reason,
+         &suppressions,
    )
}
@ -461,6 +469,9 @@ pub fn lint_only(
        &indexer,
    );

+     // Parse range suppression comments
+     let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());

    // Generate diagnostics.
    let diagnostics = check_path(
        path,
@ -475,6 +486,7 @@ pub fn lint_only(
        source_type,
        &parsed,
        target_version,
+         &suppressions,
    );

    LinterResult {
@ -566,6 +578,9 @@ pub fn lint_fix<'a>(
            &indexer,
        );

+         // Parse range suppression comments
+         let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());

        // Generate diagnostics.
        let diagnostics = check_path(
            path,
@ -580,6 +595,7 @@ pub fn lint_fix<'a>(
            source_type,
            &parsed,
            target_version,
+             &suppressions,
        );

        if iterations == 0 {
@ -769,6 +785,7 @@ mod tests {
    use crate::registry::Rule;
    use crate::settings::LinterSettings;
    use crate::source_kind::SourceKind;
+     use crate::suppression::Suppressions;
    use crate::test::{TestedNotebook, assert_notebook_path, test_contents, test_snippet};
    use crate::{Locator, assert_diagnostics, directives, settings};
@ -944,6 +961,7 @@ mod tests {
            &locator,
            &indexer,
        );
+         let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
        let mut diagnostics = check_path(
            path,
            None,
@ -957,6 +975,7 @@ mod tests {
            source_type,
            &parsed,
            target_version,
+             &suppressions,
        );
        diagnostics.sort_by(Diagnostic::ruff_start_ordering);
        diagnostics
@ -982,6 +1001,7 @@ mod tests {
    #[test_case(Path::new("write_to_debug.py"), PythonVersion::PY310)]
    #[test_case(Path::new("invalid_expression.py"), PythonVersion::PY312)]
    #[test_case(Path::new("global_parameter.py"), PythonVersion::PY310)]
+     #[test_case(Path::new("annotated_global.py"), PythonVersion::PY314)]
    fn test_semantic_errors(path: &Path, python_version: PythonVersion) -> Result<()> {
        let snapshot = format!(
            "semantic_syntax_error_{}_{}",
@ -1043,6 +1063,7 @@ mod tests {
        Rule::YieldFromInAsyncFunction,
        Path::new("yield_from_in_async_function.py")
    )]
+     #[test_case(Rule::ReturnInGenerator, Path::new("return_in_generator.py"))]
    fn test_syntax_errors(rule: Rule, path: &Path) -> Result<()> {
        let snapshot = path.to_string_lossy().to_string();
        let path = Path::new("resources/test/fixtures/syntax_errors").join(path);

View File

@ -20,12 +20,14 @@ use crate::Locator;
use crate::fs::relativize_path;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
+ use crate::suppression::Suppressions;

/// Generates an array of edits that matches the length of `messages`.
/// Each potential edit in the array is paired, in order, with the associated diagnostic.
/// Each edit will add a `noqa` comment to the appropriate line in the source to hide
/// the diagnostic. These edits may conflict with each other and should not be applied
/// simultaneously.
+ #[expect(clippy::too_many_arguments)]
pub fn generate_noqa_edits(
    path: &Path,
    diagnostics: &[Diagnostic],
@ -34,11 +36,19 @@ pub fn generate_noqa_edits(
    external: &[String],
    noqa_line_for: &NoqaMapping,
    line_ending: LineEnding,
+     suppressions: &Suppressions,
) -> Vec<Option<Edit>> {
    let file_directives = FileNoqaDirectives::extract(locator, comment_ranges, external, path);
    let exemption = FileExemption::from(&file_directives);
    let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
-     let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
+     let comments = find_noqa_comments(
+         diagnostics,
+         locator,
+         &exemption,
+         &directives,
+         noqa_line_for,
+         suppressions,
+     );
    build_noqa_edits_by_diagnostic(comments, locator, line_ending, None)
}
@ -725,6 +735,7 @@ pub(crate) fn add_noqa(
    noqa_line_for: &NoqaMapping,
    line_ending: LineEnding,
    reason: Option<&str>,
+     suppressions: &Suppressions,
) -> Result<usize> {
    let (count, output) = add_noqa_inner(
        path,
@ -735,6 +746,7 @@ pub(crate) fn add_noqa(
        noqa_line_for,
        line_ending,
        reason,
+         suppressions,
    );

    fs::write(path, output)?;
@ -751,6 +763,7 @@ fn add_noqa_inner(
    noqa_line_for: &NoqaMapping,
    line_ending: LineEnding,
    reason: Option<&str>,
+     suppressions: &Suppressions,
) -> (usize, String) {
    let mut count = 0;
@ -760,7 +773,14 @@ fn add_noqa_inner(
    let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);

-     let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
+     let comments = find_noqa_comments(
+         diagnostics,
+         locator,
+         &exemption,
+         &directives,
+         noqa_line_for,
+         suppressions,
+     );

    let edits = build_noqa_edits_by_line(comments, locator, line_ending, reason);
@ -859,6 +879,7 @@ fn find_noqa_comments<'a>(
    exemption: &'a FileExemption,
    directives: &'a NoqaDirectives,
    noqa_line_for: &NoqaMapping,
+     suppressions: &'a Suppressions,
) -> Vec<Option<NoqaComment<'a>>> {
    // List of noqa comments, ordered to match up with `messages`
    let mut comments_by_line: Vec<Option<NoqaComment<'a>>> = vec![];
@ -875,6 +896,12 @@ fn find_noqa_comments<'a>(
            continue;
        }

+         // Apply ranged suppressions next
+         if suppressions.check_diagnostic(message) {
+             comments_by_line.push(None);
+             continue;
+         }

        // Is the violation ignored by a `noqa` directive on the parent line?
        if let Some(parent) = message.parent() {
            if let Some(directive_line) =
@ -1253,6 +1280,7 @@ mod tests {
    use crate::rules::pycodestyle::rules::{AmbiguousVariableName, UselessSemicolon};
    use crate::rules::pyflakes::rules::UnusedVariable;
    use crate::rules::pyupgrade::rules::PrintfStringFormatting;
+     use crate::suppression::Suppressions;
    use crate::{Edit, Violation};
    use crate::{Locator, generate_noqa_edits};
@ -2848,6 +2876,7 @@ mod tests {
            &noqa_line_for,
            LineEnding::Lf,
            None,
+             &Suppressions::default(),
        );
        assert_eq!(count, 0);
        assert_eq!(output, format!("{contents}"));
@ -2872,6 +2901,7 @@ mod tests {
            &noqa_line_for,
            LineEnding::Lf,
            None,
+             &Suppressions::default(),
        );
        assert_eq!(count, 1);
        assert_eq!(output, "x = 1 # noqa: F841\n");
@ -2903,6 +2933,7 @@ mod tests {
            &noqa_line_for,
            LineEnding::Lf,
            None,
+             &Suppressions::default(),
        );
        assert_eq!(count, 1);
        assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@ -2934,6 +2965,7 @@ mod tests {
            &noqa_line_for,
            LineEnding::Lf,
            None,
+             &Suppressions::default(),
        );
        assert_eq!(count, 0);
        assert_eq!(output, "x = 1 # noqa");
@ -2956,6 +2988,7 @@ print(
        let messages = [PrintfStringFormatting
            .into_diagnostic(TextRange::new(12.into(), 79.into()), &source_file)];
        let comment_ranges = CommentRanges::default();
+         let suppressions = Suppressions::default();
        let edits = generate_noqa_edits(
            path,
            &messages,
@ -2964,6 +2997,7 @@ print(
            &[],
            &noqa_line_for,
            LineEnding::Lf,
+             &suppressions,
        );
        assert_eq!(
            edits,
@ -2987,6 +3021,7 @@ bar =
        [UselessSemicolon.into_diagnostic(TextRange::new(4.into(), 5.into()), &source_file)];
        let noqa_line_for = NoqaMapping::default();
        let comment_ranges = CommentRanges::default();
+         let suppressions = Suppressions::default();
        let edits = generate_noqa_edits(
            path,
            &messages,
@ -2995,6 +3030,7 @@ bar =
            &[],
            &noqa_line_for,
            LineEnding::Lf,
+             &suppressions,
        );
        assert_eq!(
            edits,

View File

@ -9,6 +9,11 @@ use crate::settings::LinterSettings;
// Rule-specific behavior

+ // https://github.com/astral-sh/ruff/pull/21382
+ pub(crate) const fn is_custom_exception_checking_enabled(settings: &LinterSettings) -> bool {
+     settings.preview.is_enabled()
+ }

// https://github.com/astral-sh/ruff/pull/15541
pub(crate) const fn is_suspicious_function_reference_enabled(settings: &LinterSettings) -> bool {
    settings.preview.is_enabled()
@ -286,3 +291,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
) -> bool {
    settings.preview.is_enabled()
}

+ // https://github.com/astral-sh/ruff/pull/21623
+ pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
+     settings.preview.is_enabled()
+ }

View File

@ -91,8 +91,8 @@ pub(crate) fn fastapi_redundant_response_model(checker: &Checker, function_def:
        response_model_arg,
        &call.arguments,
        Parentheses::Preserve,
-         checker.locator().contents(),
-         checker.comment_ranges(),
+         checker.source(),
+         checker.tokens(),
    )
    .map(Fix::unsafe_edit)
});

View File

@ -70,7 +70,7 @@ fn is_open_call(func: &Expr, semantic: &SemanticModel) -> bool {
}

/// Returns `true` if an expression resolves to a call to `pathlib.Path.open`.
- fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
+ pub(crate) fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
    let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = func else {
        return false;
    };

View File

@ -18,7 +18,7 @@ mod async_zero_sleep;
mod blocking_http_call;
mod blocking_http_call_httpx;
mod blocking_input;
- mod blocking_open_call;
+ pub(crate) mod blocking_open_call;
mod blocking_path_methods;
mod blocking_process_invocation;
mod blocking_sleep;

View File

@ -75,6 +75,7 @@ pub(crate) fn unsafe_yaml_load(checker: &Checker, call: &ast::ExprCall) {
                qualified_name.segments(),
                ["yaml", "SafeLoader" | "CSafeLoader"]
                    | ["yaml", "loader", "SafeLoader" | "CSafeLoader"]
+                     | ["yaml", "cyaml", "CSafeLoader"]
            )
        })
    {
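In rule terms (S506, unsafe-yaml-load), the new arm accepts the C-accelerated safe loader imported from `yaml.cyaml`. A sketch of what is and is not flagged (assumes PyYAML built with libyaml so that `yaml.cyaml` is importable):

```python
import yaml

data = yaml.load("a: 1", Loader=yaml.cyaml.CSafeLoader)  # safe loader: not flagged
data = yaml.load("a: 1", Loader=yaml.Loader)             # unsafe loader: still flagged
```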

View File

@ -74,12 +74,7 @@ pub(crate) fn map_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
    checker
        .report_diagnostic(MapWithoutExplicitStrict, call.range())
        .set_fix(Fix::applicable_edit(
-             add_argument(
-                 "strict=False",
-                 &call.arguments,
-                 checker.comment_ranges(),
-                 checker.locator().contents(),
-             ),
+             add_argument("strict=False", &call.arguments, checker.tokens()),
            Applicability::Unsafe,
        ));
}

View File

@ -3,7 +3,7 @@ use std::fmt::Write;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::is_docstring_stmt;
use ruff_python_ast::name::QualifiedName;
- use ruff_python_ast::parenthesize::parenthesized_range;
+ use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, ParameterWithDefault};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::function_type::is_stub;
@ -166,12 +166,7 @@ fn move_initialization(
        return None;
    }

-     let range = match parenthesized_range(
-         default.into(),
-         parameter.into(),
-         checker.comment_ranges(),
-         checker.source(),
-     ) {
+     let range = match parenthesized_range(default.into(), parameter.into(), checker.tokens()) {
        Some(range) => range,
        None => default.range(),
    };
@ -194,12 +189,7 @@ fn move_initialization(
        "{} = {}",
        parameter.parameter.name(),
        locator.slice(
-             parenthesized_range(
-                 default.into(),
-                 parameter.into(),
-                 checker.comment_ranges(),
-                 checker.source()
-             )
+             parenthesized_range(default.into(), parameter.into(), checker.tokens())
                .unwrap_or(default.range())
        )
    );

View File

@ -92,12 +92,7 @@ pub(crate) fn no_explicit_stacklevel(checker: &Checker, call: &ast::ExprCall) {
    }

    let mut diagnostic = checker.report_diagnostic(NoExplicitStacklevel, call.func.range());

-     let edit = add_argument(
-         "stacklevel=2",
-         &call.arguments,
-         checker.comment_ranges(),
-         checker.locator().contents(),
-     );
+     let edit = add_argument("stacklevel=2", &call.arguments, checker.tokens());

    diagnostic.set_fix(Fix::unsafe_edit(edit));
}

View File

@ -1,6 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
- use ruff_python_ast::statement_visitor;
- use ruff_python_ast::statement_visitor::StatementVisitor;
+ use ruff_python_ast::visitor::{Visitor, walk_expr, walk_stmt};
use ruff_python_ast::{self as ast, Expr, Stmt, StmtFunctionDef};
use ruff_text_size::TextRange;
@ -96,6 +95,11 @@ pub(crate) fn return_in_generator(checker: &Checker, function_def: &StmtFunction
        return;
    }

+     // Async functions are flagged by the `ReturnInGenerator` semantic syntax error.
+     if function_def.is_async {
+         return;
+     }

    let mut visitor = ReturnInGeneratorVisitor::default();
    visitor.visit_body(&function_def.body);
@ -112,15 +116,9 @@ struct ReturnInGeneratorVisitor {
    has_yield: bool,
}

- impl StatementVisitor<'_> for ReturnInGeneratorVisitor {
+ impl Visitor<'_> for ReturnInGeneratorVisitor {
    fn visit_stmt(&mut self, stmt: &Stmt) {
        match stmt {
-             Stmt::Expr(ast::StmtExpr { value, .. }) => match **value {
-                 Expr::Yield(_) | Expr::YieldFrom(_) => {
-                     self.has_yield = true;
-                 }
-                 _ => {}
-             },
            Stmt::FunctionDef(_) => {
                // Do not recurse into nested functions; they're evaluated separately.
            }
@ -130,8 +128,19 @@ impl StatementVisitor<'_> for ReturnInGeneratorVisitor {
                node_index: _,
            }) => {
                self.return_ = Some(*range);
+                 walk_stmt(self, stmt);
            }
-             _ => statement_visitor::walk_stmt(self, stmt),
+             _ => walk_stmt(self, stmt),
+         }
+     }
+
+     fn visit_expr(&mut self, expr: &Expr) {
+         match expr {
+             Expr::Lambda(_) => {}
+             Expr::Yield(_) | Expr::YieldFrom(_) => {
+                 self.has_yield = true;
+             }
+             _ => walk_expr(self, expr),
        }
    }
}
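Switching to an expression-level `Visitor` means yields that only appear in expression position now mark the function as a generator; the `broken4` case from the new B901 snapshot below is the shape this catches:

```python
def broken4():
    x = yield from []  # yield in expression position still makes this a generator
    return x           # flagged by B901
```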

View File

@ -70,12 +70,7 @@ pub(crate) fn zip_without_explicit_strict(checker: &Checker, call: &ast::ExprCal
    checker
        .report_diagnostic(ZipWithoutExplicitStrict, call.range())
        .set_fix(Fix::applicable_edit(
-             add_argument(
-                 "strict=False",
-                 &call.arguments,
-                 checker.comment_ranges(),
-                 checker.locator().contents(),
-             ),
+             add_argument("strict=False", &call.arguments, checker.tokens()),
            Applicability::Unsafe,
        ));
}
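For reference, the user-facing effect of this fix path (B905, zip-without-explicit-strict) is unchanged; only how the insertion point is computed (tokens instead of comment ranges) differs:

```python
names = ["a", "b"]
values = [1, 2]

pairs = zip(names, values)                # flagged: no `strict=` argument
pairs = zip(names, values, strict=False)  # after the (unsafe) fix
```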

View File

@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20 --> B006_B008.py:242:20
| |
237 | # B006 and B008 240 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008. 241 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]): 242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass 243 | pass
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
236 | 239 |
237 | # B006 and B008 240 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008. 241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]): - def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None): 242 + def nested_combo(a=None):
240 | pass 243 | pass
241 | 244 |
242 | 245 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27 --> B006_B008.py:279:27
| |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
| ^^ | ^^
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
273 | 276 |
274 | 277 |
275 | def mutable_annotations( 278 | def mutable_annotations(
- a: list[int] | None = [], - a: list[int] | None = [],
276 + a: list[int] | None = None, 279 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35 --> B006_B008.py:280:35
| |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
| ^^ | ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
274 | 277 |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {}, - b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None, 280 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62 --> B006_B008.py:281:62
| |
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^ | ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), - c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None, 281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
281 | pass 284 | pass
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80 --> B006_B008.py:282:80
| |
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^ | ^^^^^
280 | ): 283 | ):
281 | pass 284 | pass
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), - d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None, 282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ): 283 | ):
281 | pass 284 | pass
282 | 285 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52 --> B006_B008.py:287:52
| |
284 | def single_line_func_wrong(value: dict[str, str] = {}): 287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
285 | """Docstring""" 288 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
281 | pass 284 | pass
282 | 285 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
286 | 286 |
287 | - def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52 --> B006_B008.py:291:52
| |
288 | def single_line_func_wrong(value: dict[str, str] = {}): 291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
289 | """Docstring""" 292 | """Docstring"""
290 | ... 293 | ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
285 | """Docstring""" 288 | """Docstring"""
286 | 289 |
287 | 290 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None): 291 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring""" 292 | """Docstring"""
290 | ... 293 | ...
291 | 294 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52 --> B006_B008.py:296:52
| |
293 | def single_line_func_wrong(value: dict[str, str] = {}): 296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
294 | """Docstring"""; ... 297 | """Docstring"""; ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
290 | ... 293 | ...
291 | 294 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
295 | 295 |
296 | - def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52 --> B006_B008.py:300:52
| |
297 | def single_line_func_wrong(value: dict[str, str] = {}): 300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
298 | """Docstring"""; \ 301 | """Docstring"""; \
299 | ... 302 | ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
294 | """Docstring"""; ... 297 | """Docstring"""; ...
295 | 298 |
296 | 299 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None): 300 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \ 301 | """Docstring"""; \
299 | ... 302 | ...
300 | 303 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52 --> B006_B008.py:305:52
| |
302 | def single_line_func_wrong(value: dict[str, str] = { 305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^ | ____________________________________________________^
303 | | # This is a comment 306 | | # This is a comment
304 | | }): 307 | | }):
| |_^ | |_^
305 | """Docstring""" 308 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
299 | ... 302 | ...
300 | 303 |
301 | 304 |
- def single_line_func_wrong(value: dict[str, str] = { - def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment - # This is a comment
- }): - }):
302 + def single_line_func_wrong(value: dict[str, str] = None): 305 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring""" 306 | """Docstring"""
304 | 307 |
305 | 308 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52 --> B006_B008.py:311:52
| |
308 | def single_line_func_wrong(value: dict[str, str] = {}) \ 311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^ | ^^
309 | : \ 312 | : \
310 | """Docstring""" 313 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52 --> B006_B008.py:316:52
| |
313 | def single_line_func_wrong(value: dict[str, str] = {}): 316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
314 | """Docstring without newline""" 317 | """Docstring without newline"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
310 | """Docstring""" 313 | """Docstring"""
311 | 314 |
312 | 315 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None): 316 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline""" 317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior

View File

@ -53,39 +53,39 @@ B008 Do not perform function call in argument defaults; instead, perform the cal
| |
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:239:31 --> B006_B008.py:242:31
| |
237 | # B006 and B008 240 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008. 241 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]): 242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^
240 | pass 243 | pass
| |
B008 Do not perform function call `map` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable B008 Do not perform function call `map` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:245:22 --> B006_B008.py:248:22
| |
243 | # Don't flag nested B006 since we can't guarantee that 246 | # Don't flag nested B006 since we can't guarantee that
244 | # it isn't made mutable by the outer operation. 247 | # it isn't made mutable by the outer operation.
245 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])): 248 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 | pass 249 | pass
| |
B008 Do not perform function call `random.randint` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable B008 Do not perform function call `random.randint` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:19 --> B006_B008.py:253:19
| |
249 | # B008-ception. 252 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)): 253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
251 | pass 254 | pass
| |
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:37 --> B006_B008.py:253:37
| |
249 | # B008-ception. 252 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)): 253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^
251 | pass 254 | pass
| |
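The B008 message offers two alternatives ("perform the call within the function, or read the default from a module-level singleton variable"). A small hypothetical illustration, mirroring the `dt.datetime.now()` case in the fixture:

```python
import datetime as dt


# Flagged by B008: dt.datetime.now() runs once, when the def statement is
# evaluated, so every call sees the same timestamp.
def stamp_wrong(when=dt.datetime.now()):
    return when


# Option 1: perform the call within the function.
def stamp_fixed(when: dt.datetime | None = None):
    return when if when is not None else dt.datetime.now()


# Option 2: read the default from a module-level singleton variable.
EPOCH = dt.datetime(1970, 1, 1)


def stamp_from_singleton(when: dt.datetime = EPOCH):
    return when
```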

View File

@ -21,3 +21,46 @@ B901 Using `yield` and `return {value}` in a generator function can lead to conf
37 | 37 |
38 | yield from not_broken() 38 | yield from not_broken()
| |
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:56:5
|
55 | def broken3():
56 | return (yield from [])
| ^^^^^^^^^^^^^^^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:61:5
|
59 | def broken4():
60 | x = yield from []
61 | return x
| ^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:72:5
|
71 | inner((yield from []))
72 | return x
| ^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:83:5
|
81 | async def broken6():
82 | yield 1
83 | return foo()
| ^^^^^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:88:5
|
86 | async def broken7():
87 | yield 1
88 | return [1, 2, 3]
| ^^^^^^^^^^^^^^^^
|
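The B901 cases added above all mix `yield` (or `yield from`) with `return <value>`. A short sketch (hypothetical code, not from the fixture) of why that is confusing: the returned value only surfaces as `StopIteration.value`, so plain iteration silently drops it.

```python
def broken():
    yield 1
    return "done"  # B901: easy to lose


# A normal for-loop / list() never sees the returned value:
assert list(broken()) == [1]


# It only surfaces through `yield from` (or StopIteration.value):
def wrapper():
    result = yield from broken()
    yield result


assert list(wrapper()) == [1, "done"]
```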

View File

@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20 --> B006_B008.py:242:20
| |
237 | # B006 and B008 240 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008. 241 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]): 242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass 243 | pass
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
236 | 239 |
237 | # B006 and B008 240 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008. 241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]): - def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None): 242 + def nested_combo(a=None):
240 | pass 243 | pass
241 | 244 |
242 | 245 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27 --> B006_B008.py:279:27
| |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
| ^^ | ^^
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
273 | 276 |
274 | 277 |
275 | def mutable_annotations( 278 | def mutable_annotations(
- a: list[int] | None = [], - a: list[int] | None = [],
276 + a: list[int] | None = None, 279 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35 --> B006_B008.py:280:35
| |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
| ^^ | ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
274 | 277 |
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {}, - b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None, 280 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62 --> B006_B008.py:281:62
| |
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^ | ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
275 | def mutable_annotations( 278 | def mutable_annotations(
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), - c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None, 281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ): 283 | ):
281 | pass 284 | pass
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80 --> B006_B008.py:282:80
| |
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^ | ^^^^^
280 | ): 283 | ):
281 | pass 284 | pass
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
276 | a: list[int] | None = [], 279 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {}, 280 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), 281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(), - d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None, 282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ): 283 | ):
281 | pass 284 | pass
282 | 285 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52 --> B006_B008.py:287:52
| |
284 | def single_line_func_wrong(value: dict[str, str] = {}): 287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
285 | """Docstring""" 288 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
281 | pass 284 | pass
282 | 285 |
283 | 286 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None): 287 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring""" 288 | """Docstring"""
286 | 289 |
287 | 290 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52 --> B006_B008.py:291:52
| |
288 | def single_line_func_wrong(value: dict[str, str] = {}): 291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
289 | """Docstring""" 292 | """Docstring"""
290 | ... 293 | ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
285 | """Docstring""" 288 | """Docstring"""
286 | 289 |
287 | 290 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None): 291 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring""" 292 | """Docstring"""
290 | ... 293 | ...
291 | 294 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52 --> B006_B008.py:296:52
| |
293 | def single_line_func_wrong(value: dict[str, str] = {}): 296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
294 | """Docstring"""; ... 297 | """Docstring"""; ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
290 | ... 293 | ...
291 | 294 |
292 | 295 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None): 296 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ... 297 | """Docstring"""; ...
295 | 298 |
296 | 299 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52 --> B006_B008.py:300:52
| |
297 | def single_line_func_wrong(value: dict[str, str] = {}): 300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
298 | """Docstring"""; \ 301 | """Docstring"""; \
299 | ... 302 | ...
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
294 | """Docstring"""; ... 297 | """Docstring"""; ...
295 | 298 |
296 | 299 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None): 300 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \ 301 | """Docstring"""; \
299 | ... 302 | ...
300 | 303 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52 --> B006_B008.py:305:52
| |
302 | def single_line_func_wrong(value: dict[str, str] = { 305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^ | ____________________________________________________^
303 | | # This is a comment 306 | | # This is a comment
304 | | }): 307 | | }):
| |_^ | |_^
305 | """Docstring""" 308 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
299 | ... 302 | ...
300 | 303 |
301 | 304 |
- def single_line_func_wrong(value: dict[str, str] = { - def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment - # This is a comment
- }): - }):
302 + def single_line_func_wrong(value: dict[str, str] = None): 305 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring""" 306 | """Docstring"""
304 | 307 |
305 | 308 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52 --> B006_B008.py:311:52
| |
308 | def single_line_func_wrong(value: dict[str, str] = {}) \ 311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^ | ^^
309 | : \ 312 | : \
310 | """Docstring""" 313 | """Docstring"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52 --> B006_B008.py:316:52
| |
313 | def single_line_func_wrong(value: dict[str, str] = {}): 316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^ | ^^
314 | """Docstring without newline""" 317 | """Docstring without newline"""
| |
help: Replace with `None`; initialize within function help: Replace with `None`; initialize within function
310 | """Docstring""" 313 | """Docstring"""
311 | 314 |
312 | 315 |
- def single_line_func_wrong(value: dict[str, str] = {}): - def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None): 316 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline""" 317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior

View File

@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_index::Indexer; use ruff_python_index::Indexer;
use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange}; use ruff_text_size::{Ranged, TextRange};
use crate::Locator; use crate::Locator;

View File

@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast; use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator; use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr; use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::TokenKind;
use ruff_python_parser::TokenKind; use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize}; use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker; use crate::checkers::ast::Checker;
@ -142,12 +142,8 @@ pub(crate) fn unnecessary_generator_list(checker: &Checker, call: &ast::ExprCall
if *parenthesized { if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be // The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses. // surrounded by additional parentheses.
let range = parenthesized_range( let range =
argument.into(), parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range()); .unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses. // The generator always parenthesizes the expression; trim the parentheses.

View File

@ -2,8 +2,8 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast; use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator; use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr; use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::TokenKind;
use ruff_python_parser::TokenKind; use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize}; use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker; use crate::checkers::ast::Checker;
@ -147,12 +147,8 @@ pub(crate) fn unnecessary_generator_set(checker: &Checker, call: &ast::ExprCall)
if *parenthesized { if *parenthesized {
// The generator's range will include the innermost parentheses, but it could be // The generator's range will include the innermost parentheses, but it could be
// surrounded by additional parentheses. // surrounded by additional parentheses.
let range = parenthesized_range( let range =
argument.into(), parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(argument.range()); .unwrap_or(argument.range());
// The generator always parenthesizes the expression; trim the parentheses. // The generator always parenthesizes the expression; trim the parentheses.

View File

@ -1,7 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast; use ruff_python_ast as ast;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::TokenKind;
use ruff_python_parser::TokenKind; use ruff_python_ast::token::parenthesized_range;
use ruff_text_size::{Ranged, TextRange, TextSize}; use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker; use crate::checkers::ast::Checker;
@ -89,12 +89,8 @@ pub(crate) fn unnecessary_list_comprehension_set(checker: &Checker, call: &ast::
// If the list comprehension is parenthesized, remove the parentheses in addition to // If the list comprehension is parenthesized, remove the parentheses in addition to
// removing the brackets. // removing the brackets.
let replacement_range = parenthesized_range( let replacement_range =
argument.into(), parenthesized_range(argument.into(), (&call.arguments).into(), checker.tokens())
(&call.arguments).into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or_else(|| argument.range()); .unwrap_or_else(|| argument.range());
let span = argument.range().add_start(one).sub_end(one); let span = argument.range().add_start(one).sub_end(one);

View File

@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr, Operator}; use ruff_python_ast::{self as ast, Expr, Operator};
use ruff_python_trivia::is_python_whitespace; use ruff_python_trivia::is_python_whitespace;
use ruff_source_file::LineRanges; use ruff_source_file::LineRanges;
@ -88,13 +88,7 @@ pub(crate) fn explicit(checker: &Checker, expr: &Expr) {
checker.report_diagnostic(ExplicitStringConcatenation, expr.range()); checker.report_diagnostic(ExplicitStringConcatenation, expr.range());
let is_parenthesized = |expr: &Expr| { let is_parenthesized = |expr: &Expr| {
parenthesized_range( parenthesized_range(expr.into(), bin_op.into(), checker.tokens()).is_some()
expr.into(),
bin_op.into(),
checker.comment_ranges(),
checker.source(),
)
.is_some()
}; };
// If either `left` or `right` is parenthesized, generating // If either `left` or `right` is parenthesized, generating
// a fix would be too involved. Just report the diagnostic. // a fix would be too involved. Just report the diagnostic.

View File

@ -3,8 +3,8 @@ use std::borrow::Cow;
use itertools::Itertools; use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::StringFlags; use ruff_python_ast::StringFlags;
use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_python_index::Indexer; use ruff_python_index::Indexer;
use ruff_python_parser::{Token, TokenKind, Tokens};
use ruff_source_file::LineRanges; use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange}; use ruff_text_size::{Ranged, TextLen, TextRange};

View File

@ -111,7 +111,6 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
} }
let arguments = &call.arguments; let arguments = &call.arguments;
let source = checker.source();
let mut diagnostic = checker.report_diagnostic(ExcInfoOutsideExceptHandler, exc_info.range); let mut diagnostic = checker.report_diagnostic(ExcInfoOutsideExceptHandler, exc_info.range);
@ -120,8 +119,8 @@ pub(crate) fn exc_info_outside_except_handler(checker: &Checker, call: &ExprCall
exc_info, exc_info,
arguments, arguments,
Parentheses::Preserve, Parentheses::Preserve,
source, checker.source(),
checker.comment_ranges(), checker.tokens(),
)?; )?;
Ok(Fix::unsafe_edit(edit)) Ok(Fix::unsafe_edit(edit))
}); });

View File

@ -2,7 +2,7 @@ use itertools::Itertools;
use rustc_hash::{FxBuildHasher, FxHashSet}; use rustc_hash::{FxBuildHasher, FxHashSet};
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr}; use ruff_python_ast::{self as ast, Expr};
use ruff_python_stdlib::identifiers::is_identifier; use ruff_python_stdlib::identifiers::is_identifier;
use ruff_text_size::Ranged; use ruff_text_size::Ranged;
@ -129,8 +129,8 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
keyword, keyword,
&call.arguments, &call.arguments,
Parentheses::Preserve, Parentheses::Preserve,
checker.locator().contents(), checker.source(),
checker.comment_ranges(), checker.tokens(),
) )
.map(Fix::safe_edit) .map(Fix::safe_edit)
}); });
@ -158,8 +158,7 @@ pub(crate) fn unnecessary_dict_kwargs(checker: &Checker, call: &ast::ExprCall) {
parenthesized_range( parenthesized_range(
value.into(), value.into(),
dict.into(), dict.into(),
checker.comment_ranges(), checker.tokens()
checker.locator().contents(),
) )
.unwrap_or(value.range()) .unwrap_or(value.range())
) )

View File

@ -73,11 +73,11 @@ pub(crate) fn unnecessary_range_start(checker: &Checker, call: &ast::ExprCall) {
let mut diagnostic = checker.report_diagnostic(UnnecessaryRangeStart, start.range()); let mut diagnostic = checker.report_diagnostic(UnnecessaryRangeStart, start.range());
diagnostic.try_set_fix(|| { diagnostic.try_set_fix(|| {
remove_argument( remove_argument(
&start, start,
&call.arguments, &call.arguments,
Parentheses::Preserve, Parentheses::Preserve,
checker.locator().contents(), checker.source(),
checker.comment_ranges(), checker.tokens(),
) )
.map(Fix::safe_edit) .map(Fix::safe_edit)
}); });

View File

@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Expr}; use ruff_python_ast::{self as ast, Expr};
use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextLen, TextSize}; use ruff_text_size::{Ranged, TextLen, TextSize};
use crate::checkers::ast::Checker; use crate::checkers::ast::Checker;

View File

@ -160,20 +160,16 @@ fn generate_fix(
) -> anyhow::Result<Fix> { ) -> anyhow::Result<Fix> {
let locator = checker.locator(); let locator = checker.locator();
let source = locator.contents(); let source = locator.contents();
let tokens = checker.tokens();
let deletion = remove_argument( let deletion = remove_argument(
generic_base, generic_base,
arguments, arguments,
Parentheses::Preserve, Parentheses::Preserve,
source, source,
checker.comment_ranges(), tokens,
)?; )?;
let insertion = add_argument( let insertion = add_argument(locator.slice(generic_base), arguments, tokens);
locator.slice(generic_base),
arguments,
checker.comment_ranges(),
source,
);
Ok(Fix::unsafe_edits(deletion, [insertion])) Ok(Fix::unsafe_edits(deletion, [insertion]))
} }

View File

@ -5,7 +5,7 @@ use ruff_python_ast::{
helpers::{pep_604_union, typing_optional}, helpers::{pep_604_union, typing_optional},
name::Name, name::Name,
operator_precedence::OperatorPrecedence, operator_precedence::OperatorPrecedence,
parenthesize::parenthesized_range, token::{Tokens, parenthesized_range},
}; };
use ruff_python_semantic::analyze::typing::{traverse_literal, traverse_union}; use ruff_python_semantic::analyze::typing::{traverse_literal, traverse_union};
use ruff_text_size::{Ranged, TextRange}; use ruff_text_size::{Ranged, TextRange};
@ -243,12 +243,8 @@ fn create_fix(
let union_expr = pep_604_union(&[new_literal_expr, none_expr]); let union_expr = pep_604_union(&[new_literal_expr, none_expr]);
// Check if we need parentheses to preserve operator precedence // Check if we need parentheses to preserve operator precedence
let content = if needs_parentheses_for_precedence( let content =
semantic, if needs_parentheses_for_precedence(semantic, literal_expr, checker.tokens()) {
literal_expr,
checker.comment_ranges(),
checker.source(),
) {
format!("({})", checker.generator().expr(&union_expr)) format!("({})", checker.generator().expr(&union_expr))
} else { } else {
checker.generator().expr(&union_expr) checker.generator().expr(&union_expr)
@ -278,8 +274,7 @@ enum UnionKind {
fn needs_parentheses_for_precedence( fn needs_parentheses_for_precedence(
semantic: &ruff_python_semantic::SemanticModel, semantic: &ruff_python_semantic::SemanticModel,
literal_expr: &Expr, literal_expr: &Expr,
comment_ranges: &ruff_python_trivia::CommentRanges, tokens: &Tokens,
source: &str,
) -> bool { ) -> bool {
// Get the parent expression to check if we're in a context that needs parentheses // Get the parent expression to check if we're in a context that needs parentheses
let Some(parent_expr) = semantic.current_expression_parent() else { let Some(parent_expr) = semantic.current_expression_parent() else {
@ -287,14 +282,7 @@ fn needs_parentheses_for_precedence(
}; };
// Check if the literal expression is already parenthesized // Check if the literal expression is already parenthesized
if parenthesized_range( if parenthesized_range(literal_expr.into(), parent_expr.into(), tokens).is_some() {
literal_expr.into(),
parent_expr.into(),
comment_ranges,
source,
)
.is_some()
{
return false; // Already parenthesized, don't add more return false; // Already parenthesized, don't add more
} }

View File

@ -10,7 +10,7 @@ use libcst_native::{
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::Truthiness; use ruff_python_ast::helpers::Truthiness;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::visitor::Visitor; use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{ use ruff_python_ast::{
self as ast, AnyNodeRef, Arguments, BoolOp, ExceptHandler, Expr, Keyword, Stmt, UnaryOp, self as ast, AnyNodeRef, Arguments, BoolOp, ExceptHandler, Expr, Keyword, Stmt, UnaryOp,
@ -303,8 +303,7 @@ pub(crate) fn unittest_assertion(
parenthesized_range( parenthesized_range(
expr.into(), expr.into(),
checker.semantic().current_statement().into(), checker.semantic().current_statement().into(),
checker.comment_ranges(), checker.tokens(),
checker.locator().contents(),
) )
.unwrap_or(expr.range()), .unwrap_or(expr.range()),
))); )));

View File

@ -768,8 +768,8 @@ fn check_fixture_decorator(checker: &Checker, func_name: &str, decorator: &Decor
keyword, keyword,
arguments, arguments,
edits::Parentheses::Preserve, edits::Parentheses::Preserve,
checker.locator().contents(), checker.source(),
checker.comment_ranges(), checker.tokens(),
) )
.map(Fix::unsafe_edit) .map(Fix::unsafe_edit)
}); });

View File

@ -2,10 +2,9 @@ use rustc_hash::{FxBuildHasher, FxHashMap};
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr; use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::{Tokens, parenthesized_range};
use ruff_python_ast::{self as ast, Expr, ExprCall, ExprContext, StringLiteralFlags}; use ruff_python_ast::{self as ast, Expr, ExprCall, ExprContext, StringLiteralFlags};
use ruff_python_codegen::Generator; use ruff_python_codegen::Generator;
use ruff_python_trivia::CommentRanges;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer}; use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{Ranged, TextRange, TextSize}; use ruff_text_size::{Ranged, TextRange, TextSize};
@ -322,18 +321,8 @@ fn elts_to_csv(elts: &[Expr], generator: Generator, flags: StringLiteralFlags) -
/// ``` /// ```
/// ///
/// This method assumes that the first argument is a string. /// This method assumes that the first argument is a string.
fn get_parametrize_name_range( fn get_parametrize_name_range(call: &ExprCall, expr: &Expr, tokens: &Tokens) -> Option<TextRange> {
call: &ExprCall, parenthesized_range(expr.into(), (&call.arguments).into(), tokens)
expr: &Expr,
comment_ranges: &CommentRanges,
source: &str,
) -> Option<TextRange> {
parenthesized_range(
expr.into(),
(&call.arguments).into(),
comment_ranges,
source,
)
} }
/// PT006 /// PT006
@ -349,12 +338,7 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
if names.len() > 1 { if names.len() > 1 {
match names_type { match names_type {
types::ParametrizeNameType::Tuple => { types::ParametrizeNameType::Tuple => {
let name_range = get_parametrize_name_range( let name_range = get_parametrize_name_range(call, expr, checker.tokens())
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range()); .unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic( let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType { PytestParametrizeNamesWrongType {
@ -386,12 +370,7 @@ fn check_names(checker: &Checker, call: &ExprCall, expr: &Expr, argvalues: &Expr
))); )));
} }
types::ParametrizeNameType::List => { types::ParametrizeNameType::List => {
let name_range = get_parametrize_name_range( let name_range = get_parametrize_name_range(call, expr, checker.tokens())
call,
expr,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(expr.range()); .unwrap_or(expr.range());
let mut diagnostic = checker.report_diagnostic( let mut diagnostic = checker.report_diagnostic(
PytestParametrizeNamesWrongType { PytestParametrizeNamesWrongType {

View File

@ -4,10 +4,10 @@ use ruff_diagnostics::Applicability;
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::{is_const_false, is_const_true}; use ruff_python_ast::helpers::{is_const_false, is_const_true};
use ruff_python_ast::stmt_if::elif_else_range; use ruff_python_ast::stmt_if::elif_else_range;
use ruff_python_ast::token::TokenKind;
use ruff_python_ast::visitor::Visitor; use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::whitespace::indentation; use ruff_python_ast::whitespace::indentation;
use ruff_python_ast::{self as ast, Decorator, ElifElseClause, Expr, Stmt}; use ruff_python_ast::{self as ast, Decorator, ElifElseClause, Expr, Stmt};
use ruff_python_parser::TokenKind;
use ruff_python_semantic::SemanticModel; use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::visibility::is_property; use ruff_python_semantic::analyze::visibility::is_property;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer, is_python_whitespace}; use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer, is_python_whitespace};

View File

@ -10,7 +10,7 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr; use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::helpers::{Truthiness, contains_effect}; use ruff_python_ast::helpers::{Truthiness, contains_effect};
use ruff_python_ast::name::Name; use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use ruff_python_codegen::Generator; use ruff_python_codegen::Generator;
use ruff_python_semantic::SemanticModel; use ruff_python_semantic::SemanticModel;
@ -800,12 +800,7 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit( edit = Some(get_short_circuit_edit(
value, value,
TextRange::new( TextRange::new(
parenthesized_range( parenthesized_range(furthest.into(), expr.into(), checker.tokens())
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range()) .unwrap_or(furthest.range())
.start(), .start(),
expr.end(), expr.end(),
@ -828,12 +823,7 @@ fn is_short_circuit(
edit = Some(get_short_circuit_edit( edit = Some(get_short_circuit_edit(
next_value, next_value,
TextRange::new( TextRange::new(
parenthesized_range( parenthesized_range(furthest.into(), expr.into(), checker.tokens())
furthest.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(furthest.range()) .unwrap_or(furthest.range())
.start(), .start(),
expr.end(), expr.end(),

View File

@ -4,7 +4,7 @@ use ruff_text_size::{Ranged, TextRange};
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::{is_const_false, is_const_true}; use ruff_python_ast::helpers::{is_const_false, is_const_true};
use ruff_python_ast::name::Name; use ruff_python_ast::name::Name;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use crate::checkers::ast::Checker; use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation}; use crate::{AlwaysFixableViolation, Edit, Fix, FixAvailability, Violation};
@ -171,12 +171,7 @@ pub(crate) fn if_expr_with_true_false(
checker checker
.locator() .locator()
.slice( .slice(
parenthesized_range( parenthesized_range(test.into(), expr.into(), checker.tokens())
test.into(),
expr.into(),
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(test.range()), .unwrap_or(test.range()),
) )
.to_string(), .to_string(),

View File

@ -4,10 +4,10 @@ use anyhow::Result;
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableStmt; use ruff_python_ast::comparable::ComparableStmt;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::stmt_if::{IfElifBranch, if_elif_branches}; use ruff_python_ast::stmt_if::{IfElifBranch, if_elif_branches};
use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Expr}; use ruff_python_ast::{self as ast, Expr};
use ruff_python_trivia::{CommentRanges, SimpleTokenKind, SimpleTokenizer}; use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
use ruff_source_file::LineRanges; use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange}; use ruff_text_size::{Ranged, TextRange};
@ -99,7 +99,7 @@ pub(crate) fn if_with_same_arms(checker: &Checker, stmt_if: &ast::StmtIf) {
&current_branch, &current_branch,
following_branch, following_branch,
checker.locator(), checker.locator(),
checker.comment_ranges(), checker.tokens(),
) )
}); });
} }
@ -111,7 +111,7 @@ fn merge_branches(
current_branch: &IfElifBranch, current_branch: &IfElifBranch,
following_branch: &IfElifBranch, following_branch: &IfElifBranch,
locator: &Locator, locator: &Locator,
comment_ranges: &CommentRanges, tokens: &ruff_python_ast::token::Tokens,
) -> Result<Fix> { ) -> Result<Fix> {
// Identify the colon (`:`) at the end of the current branch's test. // Identify the colon (`:`) at the end of the current branch's test.
let Some(current_branch_colon) = let Some(current_branch_colon) =
@ -127,12 +127,9 @@ fn merge_branches(
); );
// If the following test isn't parenthesized, consider parenthesizing it. // If the following test isn't parenthesized, consider parenthesizing it.
let following_branch_test = if let Some(range) = parenthesized_range( let following_branch_test = if let Some(range) =
following_branch.test.into(), parenthesized_range(following_branch.test.into(), stmt_if.into(), tokens)
stmt_if.into(), {
comment_ranges,
locator.contents(),
) {
Cow::Borrowed(locator.slice(range)) Cow::Borrowed(locator.slice(range))
} else if matches!( } else if matches!(
following_branch.test, following_branch.test,
@ -153,16 +150,11 @@ fn merge_branches(
// //
// For example, if the current test is `x if x else y`, we should parenthesize it to // For example, if the current test is `x if x else y`, we should parenthesize it to
// `(x if x else y) or ...`. // `(x if x else y) or ...`.
let parenthesize_edit = if matches!( let parenthesize_edit =
if matches!(
current_branch.test, current_branch.test,
Expr::Lambda(_) | Expr::Named(_) | Expr::If(_) Expr::Lambda(_) | Expr::Named(_) | Expr::If(_)
) && parenthesized_range( ) && parenthesized_range(current_branch.test.into(), stmt_if.into(), tokens).is_none()
current_branch.test.into(),
stmt_if.into(),
comment_ranges,
locator.contents(),
)
.is_none()
{ {
Some(Edit::range_replacement( Some(Edit::range_replacement(
format!("({})", locator.slice(current_branch.test)), format!("({})", locator.slice(current_branch.test)),

View File

@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats}; use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::AnyNodeRef; use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::parenthesize::parenthesized_range; use ruff_python_ast::token::parenthesized_range;
use ruff_python_ast::{self as ast, Arguments, CmpOp, Comprehension, Expr}; use ruff_python_ast::{self as ast, Arguments, CmpOp, Comprehension, Expr};
use ruff_python_semantic::analyze::typing; use ruff_python_semantic::analyze::typing;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer}; use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer};
@ -90,20 +90,10 @@ fn key_in_dict(checker: &Checker, left: &Expr, right: &Expr, operator: CmpOp, pa
} }
// Extract the exact range of the left and right expressions. // Extract the exact range of the left and right expressions.
let left_range = parenthesized_range( let left_range =
left.into(), parenthesized_range(left.into(), parent, checker.tokens()).unwrap_or(left.range());
parent, let right_range =
checker.comment_ranges(), parenthesized_range(right.into(), parent, checker.tokens()).unwrap_or(right.range());
checker.locator().contents(),
)
.unwrap_or(left.range());
let right_range = parenthesized_range(
right.into(),
parent,
checker.comment_ranges(),
checker.locator().contents(),
)
.unwrap_or(right.range());
let mut diagnostic = checker.report_diagnostic( let mut diagnostic = checker.report_diagnostic(
InDictKeys { InDictKeys {

View File

@ -146,7 +146,7 @@ fn reverse_comparison(expr: &Expr, locator: &Locator, stylist: &Stylist) -> Resu
let left = (*comparison.left).clone(); let left = (*comparison.left).clone();
// Copy the right side to the left side. // Copy the right side to the left side.
comparison.left = Box::new(comparison.comparisons[0].comparator.clone()); *comparison.left = comparison.comparisons[0].comparator.clone();
// Copy the left side to the right side. // Copy the left side to the right side.
comparison.comparisons[0].comparator = left; comparison.comparisons[0].comparator = left;

View File

@ -1144,3 +1144,23 @@ help: Replace with `(i for i in range(1))`
208 | # https://github.com/astral-sh/ruff/issues/21136 208 | # https://github.com/astral-sh/ruff/issues/21136
209 | def get_items(): 209 | def get_items():
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
SIM222 [*] Use `True` instead of `... or True`
--> SIM222.py:222:1
|
221 | # https://github.com/astral-sh/ruff/issues/21473
222 | tuple("") or True # SIM222
| ^^^^^^^^^^^^^^^^^
223 | tuple(t"") or True # OK
224 | tuple(0) or True # OK
|
help: Replace with `True`
219 |
220 |
221 | # https://github.com/astral-sh/ruff/issues/21473
- tuple("") or True # SIM222
222 + True # SIM222
223 | tuple(t"") or True # OK
224 | tuple(0) or True # OK
225 | tuple(1) or True # OK
note: This is an unsafe fix and may change runtime behavior
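The new SIM222 case comes from the linked issue: `tuple("")` is always the empty tuple, so the `or` always falls through to `True`. A quick sketch of the equivalence (not taken from the fixture):

```python
assert tuple("") == ()              # iterating an empty string yields nothing
assert (tuple("") or True) is True  # the falsy left operand falls through to True
```

The replacement removes the `tuple("")` call entirely, so the fix stays unsafe: an expression with side effects (for example, if `tuple` were shadowed) would no longer run.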

View File

@ -1025,3 +1025,23 @@ help: Replace with `f"{''}{''}"`
156 | 156 |
157 | 157 |
note: This is an unsafe fix and may change runtime behavior note: This is an unsafe fix and may change runtime behavior
SIM223 [*] Use `tuple("")` instead of `tuple("") and ...`
--> SIM223.py:163:1
|
162 | # https://github.com/astral-sh/ruff/issues/21473
163 | tuple("") and False # SIM223
| ^^^^^^^^^^^^^^^^^^^
164 | tuple(t"") and False # OK
165 | tuple(0) and False # OK
|
help: Replace with `tuple("")`
160 |
161 |
162 | # https://github.com/astral-sh/ruff/issues/21473
- tuple("") and False # SIM223
163 + tuple("") # SIM223
164 | tuple(t"") and False # OK
165 | tuple(0) and False # OK
166 | tuple(1) and False # OK
note: This is an unsafe fix and may change runtime behavior
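The matching SIM223 case keeps the left operand instead: because `tuple("")` evaluates to the falsy `()`, the `and` short-circuits and the `False` on the right is dead code. A minimal sketch (not from the fixture):

```python
value = tuple("") and False
assert value == ()  # `and` returns the falsy left operand, so the whole
                    # expression is equivalent to tuple("") on its own
```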
