#21414 added the ability to create a specialization from a constraint
set. It handled mutually constrained typevars just fine, e.g. given `T ≤
int ∧ U = T` we can infer `T = int, U = int`.
But it didn't handle _nested_ constraints correctly, e.g. `T ≤ int ∧ U =
list[T]`. Now we do! This requires doing a fixed-point "apply the
specialization to itself" step to propagate the assignments of any
nested typevars, and then a cycle detection check to make sure we don't
have an infinite expansion in the specialization.
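To illustrate the idea, here is a toy string-substitution sketch of the fixed-point step (not ty's actual representation of types or specializations):
```python
# Toy model: a specialization maps typevar names to type expressions that may
# mention other typevars. We repeatedly substitute the specialization into
# itself until nothing changes, and give up if the expansion keeps growing
# (which means some typevar transitively mentions itself).
import re


def propagate(assignments: dict[str, str]) -> dict[str, str] | None:
    current = dict(assignments)
    for _ in range(len(assignments) + 1):
        expanded = {
            var: re.sub(
                r"\b\w+\b",
                lambda m: current.get(m.group(), m.group()),
                ty,
            )
            for var, ty in current.items()
        }
        if expanded == current:
            return current
        current = expanded
    # Still changing after enough rounds: the expansion is infinite.
    return None


assert propagate({"T": "int", "U": "list[T]"}) == {"T": "int", "U": "list[int]"}
assert propagate({"T": "list[T]"}) is None
```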
This gets at an interesting nuance in our constraint set structure that
@sharkdp has asked about before. Constraint sets are BDDs, and each
internal node represents an _individual constraint_, of the form `lower
≤ T ≤ upper`. `lower` and `upper` are allowed to be other typevars, but
only if they appear "later" in the arbitrary ordering that we establish
over typevars. The main purpose of this is to avoid infinite expansion
for mutually constrained typevars.
However, that restriction doesn't help us here, because it only applies
when `lower` and `upper` _are_ typevars, not when they _contain_
typevars. That distinction is important, since it means the restriction
does not affect our expressiveness: we can always rewrite `Never ≤ T ≤
U` (a constraint on `T`) into `T ≤ U ≤ object` (a constraint on `U`).
The same is not true of `Never ≤ T ≤ list[U]` — there is no "inverse" of
`list` that we could apply to both sides to transform this into a
constraint on a bare `U`.
## Summary
Updated `S508` (snmp-insecure-version) and `S509`
(snmp-weak-cryptography) rules to support both old and new PySNMP API
module paths. Previously, these rules only detected the old API path
`pysnmp.hlapi.*`, but now they correctly detect all PySNMP API variants
including `pysnmp.hlapi.asyncio.*`, `pysnmp.hlapi.v1arch.*`,
`pysnmp.hlapi.v3arch.*`, and `pysnmp.hlapi.auth.*`.
Fixes #21364
## Problem Analysis
The `S508` and `S509` rules used exact pattern matching on qualified
names:
- `S509` only matched `["pysnmp", "hlapi", "UsmUserData"]`
- `S508` only matched `["pysnmp", "hlapi", "CommunityData"]`
This meant that newer PySNMP API paths were not detected, such as:
- `pysnmp.hlapi.asyncio.UsmUserData`
- `pysnmp.hlapi.v3arch.asyncio.UsmUserData`
- `pysnmp.hlapi.v3arch.asyncio.auth.UsmUserData`
- `pysnmp.hlapi.auth.UsmUserData`
- Similar variants for `CommunityData` in `S508`
Additionally, the old API path `pysnmp.hlapi.auth.*` was also missing
from both rules.
## Approach
Instead of exact pattern matching, both rules now check if:
1. The qualified name starts with `["pysnmp", "hlapi"]`
2. The qualified name ends with the target class name (`"UsmUserData"`
for `S509`, `"CommunityData"` for `S508`)
This flexible approach matches all PySNMP API paths without hardcoding
each variant, making the rules more maintainable and future-proof.
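A rough sketch of the new matching logic, in Python pseudocode (the actual rule is implemented in Rust over segmented qualified names):
```python
def matches_pysnmp_class(qualified_name: list[str], target: str) -> bool:
    # Accept any `pysnmp.hlapi.*` path that ends with the target class,
    # instead of one hardcoded path per API variant.
    return (
        len(qualified_name) >= 3
        and qualified_name[:2] == ["pysnmp", "hlapi"]
        and qualified_name[-1] == target
    )


assert matches_pysnmp_class(["pysnmp", "hlapi", "UsmUserData"], "UsmUserData")
assert matches_pysnmp_class(["pysnmp", "hlapi", "v3arch", "asyncio", "UsmUserData"], "UsmUserData")
assert not matches_pysnmp_class(["othersnmp", "hlapi", "UsmUserData"], "UsmUserData")
```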
## Test Plan
Added comprehensive test cases to both `S508.py` and `S509.py` test
files covering:
- New API paths: `pysnmp.hlapi.asyncio.*`, `pysnmp.hlapi.v1arch.*`,
`pysnmp.hlapi.v3arch.*`
- Old API path: `pysnmp.hlapi.auth.*`
- Both insecure and secure usage patterns
All existing tests pass, and new snapshot tests were added and accepted.
Manual verification confirms both rules correctly detect all PySNMP API
variants.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Fixes https://github.com/astral-sh/ty/issues/1571.
I realised I was overcomplicating things when I described what we should
do in that issue description. The simplest thing to do here is just to
special-case call expressions and short-circuit the call-binding
machinery entirely if we see it's `NotImplemented` being called. It
doesn't really matter if the subdiagnostic doesn't fire when a union is
called and one element of the union is `NotImplemented` -- the
subdiagnostic doesn't need to be exhaustive; it's just to help people in
some common cases.
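For reference, a hypothetical example of the kind of code the subdiagnostic targets:
```py
class A:
    def __eq__(self, other):
        if not isinstance(other, A):
            # Oops: `NotImplemented` is not callable; this should be
            # `return NotImplemented`.
            return NotImplemented()
        return True
```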
## Test Plan
Added snapshots
## Summary
The `.expect()` call here:
5dd56264fb/crates/ty_python_semantic/src/types/instance.rs (L816-L827)
is the direct cause of the panic in
https://github.com/astral-sh/ty/issues/1587. This patch gets rid of the
panic by refactoring our `Protocol` enum so that the
`Protocol::FromClass` variant holds a `ProtocolClass` instance rather
than a `ClassType` instance (all the `.expect()` call was doing was
attempting to convert from a `ClassType` to a `ProtocolClass`).
I hoped that this would provide a fix for
https://github.com/astral-sh/ty/issues/1587, but we still panic on the
provided reproducible examples in that issue even with this PR.
Nonetheless, I think this PR is a worthwhile change to make because:
- It's probably slightly more efficient this way (we no longer have to
re-verify that the wrapped class in a `Protocol::FromClass()` variant is
a protocol class every time we want to access its interface)
- It's nice to get rid of `.expect()` calls where possible, and this one
seems definitely unnecessary
- The _new_ panic message on this PR branch makes it much clearer what
the underlying cause of the bug in
https://github.com/astral-sh/ty/issues/1587 is:
<details>
<summary>New panic message</summary>
```
error[panic]: Panicked at
/Users/alexw/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/a885bb4/src/function/execute.rs:321:21
when checking `/Users/alexw/dev/ruff/foo.py`: `ClassLiteral < 'db
>::explicit_bases_(Id(4c09)): execute: too many cycle iterations`
info: This indicates a bug in ty.
info: If you could open an issue at
https://github.com/astral-sh/ty/issues/new?title=%5Bpanic%5D, we'd be
very appreciative!
info: Platform: macos aarch64
info: Version: ruff/0.14.5+60 (18a14bfaf 2025-11-19)
info: Args: ["target/debug/ty", "check", "foo.py",
"--python-version=3.14"]
info: run with `RUST_BACKTRACE=1` environment variable to show the full
backtrace information
info: query stacktrace:
0: cached_protocol_interface(Id(6805))
at crates/ty_python_semantic/src/types/protocol_class.rs:790
1: is_equivalent_to_object_inner(Id(8003))
at crates/ty_python_semantic/src/types/instance.rs:667
2: infer_deferred_types(Id(1409))
at crates/ty_python_semantic/src/types/infer.rs:141
cycle heads: infer_definition_types(Id(140b)) -> iteration = 200,
TypeVarInstance < 'db >::lazy_bound_(Id(5803)) -> iteration = 200
3: TypeVarInstance < 'db >::lazy_bound_(Id(5802))
at crates/ty_python_semantic/src/types.rs:8734
4: infer_definition_types(Id(140c))
at crates/ty_python_semantic/src/types/infer.rs:94
5: infer_deferred_types(Id(140a))
at crates/ty_python_semantic/src/types/infer.rs:141
6: TypeVarInstance < 'db >::lazy_bound_(Id(5803))
at crates/ty_python_semantic/src/types.rs:8734
7: infer_definition_types(Id(140b))
at crates/ty_python_semantic/src/types/infer.rs:94
8: infer_scope_types(Id(1000))
at crates/ty_python_semantic/src/types/infer.rs:70
9: check_file_impl(Id(c00))
at crates/ty_project/src/lib.rs:535
Found 1 diagnostic
WARN A fatal error occurred while checking some files. Not all project
files were analyzed. See the diagnostics list above for details.
```
</details>
## Test Plan
All existing tests pass.
This patch lets us create specializations from a constraint set. The
constraint encodes the restrictions on which types each typevar can
specialize to. Given a generic context and a constraint set, we iterate
through all of the generic context's typevars. For each typevar, we
abstract the constraint set so that it only mentions the typevar in
question (propagating derived facts if needed). We then find the "best
representative type" for the typevar given the abstracted constraint
set.
When considering the BDD structure of the abstracted constraint set,
each path from the BDD root to the `true` terminal represents one way
that the constraint set can be satisfied. (This is also one of the
clauses in the DNF representation of the constraint set's boolean
formula.) Each of those paths is the conjunction of the individual
constraints of each internal node that we traverse as we walk that path,
giving a single lower/upper bound for the path. We use the upper bound
as the "best" (i.e. "closest to `object`") type for that path.
If there are multiple paths in the BDD, they technically represent
independent possible specializations. If there's a single specialization
that satisfies all of them, we will return that as the specialization.
If not, then the constraint set is ambiguous. (This happens most often
with constrained typevars.) We could in the future turn _each_ of the
paths into separate specializations, but it's not clear what we would do
with that, so instead we just report the ambiguity as a specialization
failure.
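As a toy illustration of that last step (a deliberate simplification; the real check compares candidate specializations, not strings):
```python
def pick_specialization(path_candidates: list[str]) -> str | None:
    # One candidate type per BDD path; if all paths agree, that's the
    # specialization, otherwise the constraint set is ambiguous.
    unique = set(path_candidates)
    return unique.pop() if len(unique) == 1 else None


assert pick_specialization(["int"]) == "int"        # e.g. `T ≤ int`
assert pick_specialization(["int", "str"]) is None  # e.g. a constrained typevar `T: (int, str)`
```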
We were previously normalizing the upper and lower bounds of each
constraint when constructing constraint sets. Like in #21463, this was
for conflated reasons: It made constraint set displays nicer, since we
wouldn't render multiple constraints with obviously equivalent bounds.
(Think `T ≤ A & B` and `T ≤ B & A`.) But it was also useful for
correctness, since prior to #21463 we were (trying to) add the full
transitive closure to a constraint set's BDD, and normalization gave a
useful reduction in the number of nodes in a typical BDD.
Now that we don't store the transitive closure explicitly, that second
reason is no longer relevant. Our sequent map can store that full
transitive closure much more efficiently than the expanded BDD would
have. This helps fix some false positives on #20933, where we're seeing
some (incorrect, need to be fixed, but ideally not blocking this effort)
assignability failures between a type and its normalization.
Normalization is still useful for display purposes, and so we do
normalize the upper/lower bounds before building up our display
representation of a constraint set BDD.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
We're seeing flaky test failures on macos, which seems to be caused by
different Salsa ID orderings on the different platforms. Constraint set
BDDs order their internal nodes based on the Salsa IDs of the interned
typevar structs, and we had some code that depended on variable ordering
in an unexpected way.
This patch definitely fixes the macos test failure on #21414, and
hopefully fixes it on #21436, too.
## Summary
Add a set of comprehensive tests for generic implicit type aliases to
illustrate the current behavior with many flavors of `@Todo` types and
false positive diagnostics.
The tests are partially based on the typing conformance suite, and the
expected behavior has been checked against other type checkers.
## Summary
Get rid of the catch-all todo type from subscripting a base type we
haven't implemented handling for yet in a type expression, and turn it
into a diagnostic instead.
Handle a few more cases explicitly, to avoid false positives from the
above change:
1. Subscripting any dynamic type (not just a todo type) in a type
expression should just result in that same dynamic type. This is
important for gradual guarantee, and matches other type checkers.
2. Subscripting a generic alias may be an error or not, depending on
whether the specialization itself contains typevars. Don't try to handle
this yet (it should be handled in a later PR for specializing generic
non-PEP695 type aliases), just use a dedicated todo type for it.
3. Add a temporary todo branch to avoid false positives from string PEP
613 type aliases. This can be removed in the next PR, with PEP 613 type
alias support.
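For example, case 1 covers code like this hypothetical snippet, which should stay diagnostic-free:
```py
from typing import Any

DynamicAlias: Any = ...

def f(x: DynamicAlias[int]) -> None:
    reveal_type(x)  # stays dynamic: subscripting a dynamic type yields that same dynamic type
```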
## Test Plan
Adjusted mdtests, ecosystem.
All new diagnostics in conformance suite are supposed to be diagnostics,
so this PR is a strict improvement there.
New diagnostics in the ecosystem are surfacing cases where we already
don't understand an annotation, but now we emit a diagnostic about it.
They are mostly intentional choices. Analysis of particular cases:
* `attrs`, `bokeh`, `django-stubs`, `dulwich`, `ibis`, `kornia`,
`mitmproxy`, `mongo-python-driver`, `mypy`, `pandas`, `poetry`,
`prefect`, `pydantic`, `pytest`, `scrapy`, `trio`, `werkzeug`, and
`xarray` are all cases where under `from __future__ import annotations`
or Python 3.14 deferred-annotations semantics, we follow normal
name-scoping rules, whereas some other type checkers prefer global names
over local names. This means we don't like it if e.g. you have a class
with a method or attribute named `type` or `tuple`, and you also try to
use `type` or `tuple` in method/attribute annotations of that class.
This PR isn't changing those semantics, just revealing them in more
cases where previously we just silently fell back to `Unknown`. I think
failing with a diagnostic (so authors can alias names as needed to avoid
relying on scoping rules that differ between type checkers) is better
than failing silently here.
* `beartype` assumes we support `TypeForm` (because it only supports
mypy and pyright, it uses `if MYPY:` to hide the `TypeForm` from mypy,
and pyright supports `TypeForm`), and we don't yet.
* `graphql-core` likes to use a `try: ... except ImportError: ...`
pattern for importing special forms from `typing` with fallback to
`typing_extensions`, instead of using `sys.version_info` checks. We
don't handle this well when type checking under an older Python version
(where the import from `typing` is not found); we see the imported name
as of type e.g. `Unknown | SpecialFormType(...)`, and because of the
union with `Unknown` we fail to handle it as the special form type. Mypy
and pyright also don't seem to support this pattern. They don't complain
about subscripting such special forms, but they do silently fail to
treat them as the desired special form. Again, if we are going to fail
here, I'd rather fail with a diagnostic than silently.
* `ibis` is [trying to
use](https://github.com/ibis-project/ibis/blob/main/ibis/common/collections.py#L372)
`frozendict: type[FrozenDict]` as a way to create a "type alias" to
`FrozenDict`, but this is wrong: that means `frozendict:
type[FrozenDict[Any, Any]]`.
* `mypy` has some errors due to the fact that type-checking `typing.pyi`
itself (without knowing that it's the real `typing.pyi`) doesn't work
very well.
* `mypy-protobuf` imports some types from the protobufs library that end
up unioned with `Unknown` for some reason, and so we don't allow
explicit-specialization of them. Depending on the reason they end up
unioned with `Unknown`, we might want to better support this? But it's
orthogonal to this PR -- we aren't failing any worse here, just alerting
the author that we didn't understand their annotation.
* `pwndbg` has unresolved references due to star-importing from a
dependency that isn't installed, and uses un-imported names like `Dict`
in annotation expressions. Some of the unresolved references were hidden
by
https://github.com/astral-sh/ruff/blob/main/crates/ty_python_semantic/src/types/infer/builder.rs#L7223-L7228
when some annotations previously resolved to a Todo type that no longer
do.
Summary
--
This PR wires up the `Diagnostic::set_documentation_url` method from
#21502 to Ruff's lint diagnostics. This enables the links for the full
and concise output formats without any other changes.
I considered also including the URLs for the grouped and pylint output
formats, but the grouped format is still in `ruff_linter` instead of
`ruff_db`, so we'd have to export some additional functionality to wire
it up with `fmt_with_hyperlink`; and the pylint format doesn't currently
render with color, so I think it might actually be machine readable
rather than human readable?
The other output formats (json, json-lines, junit, github, gitlab,
rdjson, azure, sarif) seem more clearly not to need the links.
Test Plan
--
I guess you can't see my cursor or the browser opening, but it works for
lint rules, which have links, and doesn't include a link for syntax
errors, which don't have valid links.

Closes #11216
Essentially the approach is to implement `Format` for a new struct
`FormatClause` which is just a clause header _and_ its body. We then
have the information we need to see whether there is a skip suppression
comment on the last child in the body and it all fits on one line.
This saga began with a regression in how we handle constraint sets where
a typevar is constrained by another typevar, which #21068 first added
support for:
```py
def mutually_constrained[T, U]():
    # If [T = U ∧ U ≤ int], then [T ≤ int] must be true as well.
    given_int = ConstraintSet.range(U, T, U) & ConstraintSet.range(Never, U, int)
    static_assert(given_int.implies_subtype_of(T, int))
```
While working on #21414, I saw a regression in this test, which was
strange, since that PR has nothing to do with this logic! The issue is
that something in that PR made us instantiate the typevars `T` and `U`
in a different order, giving them differently ordered salsa IDs. And
importantly, we use these salsa IDs to define the variable ordering that
is used in our constraint set BDDs. This showed that our "mutually
constrained" logic only worked for one of the two possible orderings.
(We can — and now do — test this in a brute-force way by copy/pasting
the test with both typevar orderings.)
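For reference, the mirrored version looks roughly like this (the exact test in the repo may be named and shaped differently):
```py
def mutually_constrained_reversed[U, T]():
    # Same assertion, with the typevars declared in the opposite order, so the
    # BDD variable ordering is flipped.
    given_int = ConstraintSet.range(U, T, U) & ConstraintSet.range(Never, U, int)
    static_assert(given_int.implies_subtype_of(T, int))
```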
The underlying bug was in our `ConstraintSet::simplify_and_domain`
method. It would correctly detect `(U ≤ T ≤ U) ∧ (U ≤ int)`, because
those two constraints affect different typevars, and from that, infer `T
≤ int`. But it wouldn't detect the equivalent pattern in `(T ≤ U ≤ T) ∧
(U ≤ int)`, since those constraints affect the same typevar. At first I
tried adding that as yet more pattern-match logic in the ever-growing
`simplify_and_domain` method. But doing so caused other tests to start
failing.
At that point, I realized that `simplify_and_domain` had gotten to the
point where it was trying to do too much, and for conflicting consumers.
It was first written as part of our display logic, where the goal is to
remove redundant information from a BDD to make its string rendering
simpler. But we also started using it to add "derived facts" to a BDD. A
derived fact is a constraint that doesn't appear in the BDD directly,
but which we can still infer to be true. Our failing test relies on
derived facts — being able to infer that `T ≤ int` even though that
particular constraint doesn't appear in the original BDD. Before,
`simplify_and_domain` would trace through all of the constraints in a
BDD, figure out the full set of derived facts, and _add those derived
facts_ to the BDD structure. This is brittle, because those derived
facts are not universally true! In our example, `T ≤ int` only holds
along the BDD paths where both `T = U` and `U ≤ int`. Other paths will
test the negations of those constraints, and on those, we _shouldn't_
infer `T ≤ int`. In theory it's possible (and we were trying) to use BDD
operators to express that dependency...but that runs afoul of how we
were simultaneously trying to _remove_ information to make our displays
simpler.
So, I ripped off the band-aid. `simplify_and_domain` is now _only_ used
for display purposes. I have not touched it at all, except to remove
some logic that is definitely not used by our `Display` impl. Otherwise,
I did not want to touch that house of cards for now, since the display
logic is not load-bearing for any type inference logic.
For all non-display callers, we have a new **_sequent map_** data type,
which tracks exactly the same derived information. But it does so (a)
without trying to remove anything from the BDD, and (b) lazily, without
updating the BDD structure.
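Conceptually, the derived-fact lookup is just reachability over the constraints that hold along a path. A toy sketch (not the actual sequent map, which works on interned types and ranges):
```python
def derives_upper_bound(facts: set[tuple[str, str]], var: str, upper: str) -> bool:
    """Is `var ≤ upper` derivable from `facts`, a set of `x ≤ y` pairs?"""
    seen: set[str] = set()
    frontier = {var}
    while frontier:
        x = frontier.pop()
        if x == upper:
            return True
        seen.add(x)
        frontier |= {b for (a, b) in facts if a == x} - seen
    return False


# Along a path where `T = U` and `U ≤ int` hold (`T = U` contributes both
# `T ≤ U` and `U ≤ T`), we can derive `T ≤ int` on demand:
assert derives_upper_bound({("T", "U"), ("U", "T"), ("U", "int")}, "T", "int")
```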
So the end result is that all of the tests (including the new
regressions) pass, via a more efficient (and hopefully better
structured/documented) implementation, at the cost of hanging onto a
pile of display-related tech debt that we'll want to clean up at some
point.
## Summary
This is another attempt at https://github.com/astral-sh/ruff/pull/21410
that fixes https://github.com/astral-sh/ruff/issues/19226.
@MichaReiser helped me get something working in a very helpful pairing
session. I pushed one additional commit moving the comments back from
leading comments to trailing comments, which I think retains more of the
input formatting.
I was inspired by Dylan's PR (#21185) to make one of these tables:
<table>
<thead>
<tr>
<th scope="col">Input</th>
<th scope="col">Main</th>
<th scope="col">PR</th>
</tr>
</thead>
<tbody>
<tr>
<td><pre lang="python">
if (
not
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
# comment
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
not
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
# unary comment
not
# operand comment
(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
<td><pre lang="python">
if (
# unary comment
# operand comment
not (
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
<td><pre lang="python">
if (
# unary comment
not
# operand comment
(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
not # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if ( # comment
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa # comment
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
</tr>
</tbody>
</table>
Hopefully it helps even though the snippets are much wider here.
The two main differences are (1) that we now retain own-line comments
between the unary operator and its operand instead of moving these to
leading comments on the operator itself, and (2) that we move
end-of-line comments between the operator and operand to dangling
end-of-line comments on the operand (the last example in the table).
## Test Plan
Existing tests, plus new ones based on the issue. As I noted below, I
also ran the output from main on the unary.py file back through this
branch to check that we don't reformat code from main. This made me feel
a bit better about not preview-gating the changes in this PR.
```shell
> git show main:crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/unary.py | ruff format - | ./target/debug/ruff format --diff -
> echo $?
0
```
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Takayuki Maeda <takoyaki0316@gmail.com>
## Summary
This PR proposes that we add a new `set_concise_message` functionality
to our `Diagnostic` construction API. When used, the concise message
that is otherwise auto-generated from the main diagnostic message and
the primary annotation will be overwritten with the custom message.
To understand why this is desirable, let's look at the `invalid-key`
diagnostic. This is how I *want* the full diagnostic to look:
<img width="620" height="282" alt="image"
src="https://github.com/user-attachments/assets/3bf70f52-9d9f-4817-bc16-fb0ebf7c2113"
/>
However, without the change in this PR, the concise message would have
the following form:
```
error[invalid-key]: Unknown key "Age" for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This duplication is why the full `invalid-key` diagnostic used a main
diagnostic message that is only "Invalid key for TypedDict `Person`", to
make that bearable:
```
error[invalid-key] Invalid key for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This is still less than ideal, *and* we had to make the "full"
diagnostic worse. With the new API here, we have to make no such
compromises. We need to do slightly more work (provide one additional
custom-designed message), but we get to keep the "full" diagnostic that
we actually want, and we can make the concise message more terse and
readable:
```
error[invalid-key] Unknown key "Age" for TypedDict `Person` - did you mean "age"?
```
Similar problems exist for other diagnostics as well (I really want this
for https://github.com/astral-sh/ruff/pull/21476). In this PR, I only
changed `invalid-key` and `type-assertion-failure`.
The PR here is somewhat related to the discussion in
https://github.com/astral-sh/ty/issues/1418, but note that we are
solving a problem that is unrelated to sub-diagnostics.
## Test Plan
Updated tests
## Summary
Add support for `Callable` special forms in implicit type aliases.
## Typing conformance
Four new tests are passing
## Ecosystem impact
* All of the `invalid-type-form` errors are from libraries that use
`mypy_extensions` and do something like `Callable[[NamedArg("x", str)],
int]`.
* A handful of new false positives because we do not support generic
specializations of implicit type aliases yet.
* Everything else looks like true positives or known limitations
## Test Plan
New Markdown tests.
## Summary
Fixes #21389
Avoid RUF012 false positives when reassigning a ClassVar
## Test Plan
Added the new reassignment scenario to
`crates/ruff_linter/resources/test/fixtures/ruff/RUF012.py`.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Constraint sets can now track subtyping/assignability/etc of generic
callables correctly. For instance:
```py
def identity[T](t: T) -> T:
    return t
constraints = ConstraintSet.always()
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[str], str]))
```
A generic callable can be considered an intersection of all of its
possible specializations, and an assignability check with an
intersection on the lhs succeeds if _any_ of the intersected types
satisfies the check. Put another way, if someone expects to receive any
function with a signature of `(int) -> int`, we can give them
`identity`.
Note that the corresponding check using `is_subtype_of` directly does
not yet work, since #20093 has not yet hooked up the core typing
relationship logic to use constraint sets:
```py
# These currently fail
static_assert(is_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(is_subtype_of(TypeOf[identity], Callable[[str], str]))
```
To do this, we add a new _existential quantification_ operation on
constraint sets. This takes in a list of typevars and _removes_ those
typevars from the constraint set. Conceptually, we return a new
constraint set that evaluates to `true` when there was _any_ assignment
of the removed typevars that caused the old constraint set to evaluate
to `true`.
When comparing a generic constraint set, we add its typevars to the
`inferable` set, and figure out whatever constraints would allow any
specialization to satisfy the check. We then use the new existential
quantification operator to remove those new typevars, since the caller
doesn't (and shouldn't) know anything about them.
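For intuition, existential quantification on a plain boolean function looks like this (a toy sketch; ty performs the analogous operation directly on constraint-set BDDs):
```python
from typing import Callable

Assignment = dict[str, bool]


def exists(f: Callable[[Assignment], bool], var: str) -> Callable[[Assignment], bool]:
    # ∃var. f is true for an assignment iff f is true for *some* value of var.
    def quantified(assignment: Assignment) -> bool:
        return any(f({**assignment, var: value}) for value in (False, True))
    return quantified


f = lambda a: a["x"] and a["y"]  # "x ∧ y"
g = exists(f, "x")               # reduces to "y"
assert g({"y": True}) and not g({"y": False})
```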
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Closes #19350
This fixes a syntax error caused by formatting. However, the new tests reveal that there are some cases where formatting attributes with certain comments behaves strangely, both before and after this PR, so some more polish may be in order.
For example, without parentheses around the value, and both before and after this PR, we have:
```python
# unformatted
variable = (
something # a comment
.first_method("some string")
)
# formatted
variable = something.first_method("some string") # a comment
```
which is probably not where the comment ought to go.
## Summary
Partially addresses https://github.com/astral-sh/ty/issues/1562
Only suggest the keyword "as" in import statements when the user has
written `import foo a<CURSOR>` or `from foo import bar a<CURSOR>` as no
other suggestion makes sense here.
Re-uses the existing pattern for incomplete `import from` statements to
determine incomplete import alias statements and make the suggestions
more sane in those cases.
There was a potential suggestion from @BurntSushi in
https://github.com/astral-sh/ty/issues/1562#issue-3626853513 to move the
handling of import statements into one unified state machine, but I erred
on the side of caution and fixed this with already established patterns,
pending a potential bigger re-write down the line.
## Test Plan
Added new tests and checked that it behaved reasonably in the
playground.
This PR attempts to improve the placement of own-line comments between
branches in the setting where the comment is more indented than the
preceding node.
There are two main changes.
### First change: Preceding node has leading content
If the preceding node has leading content, we now regard the comment as
automatically _less_ indented than the preceding node, and format
accordingly.
For example,
```python
if True: preceding_node
# leading on `else`, not trailing on `preceding_node`
else: ...
```
This is more compatible with `black`, although there is a (presumably
very uncommon) edge case:
```python
if True:
this;that
# leading on `else`, but trailing in `black`
else: ...
```
I'm sort of okay with this - presumably if one wanted a comment for
those semi-colon separated statements, one should have put it _above_
them, and if one wanted a comment only for `that` then it ought to have
been on the same line?
### Second change: searching for last child in body
While searching for the (recursively) last child in the body of the
preceding _branch_, we implicitly assumed that the preceding node had to
have a body to begin the recursion. But actually, in the base case, the
preceding node _is_ the last child in the body of the preceding branch.
So, for example:
```python
if True:
something
last_child_but_no_body
# leading on else for `main` but trailing in this PR
else: ...
```
### More examples
The table below is an attempt to summarize the changes in behavior. The
rows alternate between an example snippet with `while` and the same
example with `if` - in the former case we do _not_ have an `else` node
and in the latter we do.
Notice that:
1. On `main` our handling of `if` vs. `while` is not consistent, whereas
it is consistent in the present PR
2. We disagree with `black` in all cases except that last example on
`main`, but agree in all cases for the present PR (though see above for
a wonky edge case where we disagree).
<table>
<tr>
<th>Original
</th>
<th><code>main</code> </th>
<th>This
PR </th>
<th><code>black</code> </th>
</tr>
<tr>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
else:
# comment
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
while True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
while True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
else:
# comment
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
</table>
## Summary
Follow up from https://github.com/astral-sh/ruff/pull/21411. Again,
there are more things that could be improved here (like the diagnostics
for `lists`, or extending what we have for `dict` to `OrderedDict` etc),
but that will have to be postponed.
## Summary
We previously only allowed models to overwrite the
`{eq,order,kw_only,frozen}_defaults` of the dataclass-transformer, but
all other standard-dataclass parameters should be equally supported with
the same behavior.
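A hypothetical sketch of the kind of code that should now work (class and parameter names made up):
```py
from typing import dataclass_transform

@dataclass_transform()
class ModelBase: ...

# Parameters beyond eq/order/kw_only/frozen, e.g. `init`, are now honored too:
class Person(ModelBase, init=False):
    name: str
```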
## Test Plan
Added regression tests.
## Summary
Not a high-priority task... but it _is_ a weekend :P
This PR improves our diagnostics for invalid exceptions. Specifically:
- We now give a special-cased ``help: Did you mean
`NotImplementedError`` subdiagnostic for `except NotImplemented`, `raise
NotImplemented` and `raise <EXCEPTION> from NotImplemented`
- If the user catches a tuple of exceptions (`except (foo, bar, baz):`)
and multiple elements in the tuple are invalid, we now collect these
into a single diagnostic rather than emitting a separate diagnostic for
each tuple element
- The explanation of why the `except`/`raise` was invalid ("must be a
`BaseException` instance or `BaseException` subclass", etc.) is
relegated to a subdiagnostic. This makes the top-level diagnostic
summary much more concise.
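Hypothetical snippets of the code these diagnostics target:
```py
def f():
    try:
        ...
    except NotImplemented:  # help: Did you mean `NotImplementedError`?
        ...

def g():
    raise NotImplemented  # same `NotImplementedError` hint

def h():
    try:
        ...
    except (ValueError, int, "foo"):  # both invalid elements reported in one diagnostic
        ...
```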
## Test Plan
Lots of snapshots. And here's some screenshots:
<details>
<summary>Screenshots</summary>
<img width="1770" height="1520" alt="image"
src="https://github.com/user-attachments/assets/7f27fd61-c74d-4ddf-ad97-ea4fd24d06fd"
/>
<img width="1916" height="1392" alt="image"
src="https://github.com/user-attachments/assets/83e5027c-8798-48a6-a0ec-1babfc134000"
/>
<img width="1696" height="588" alt="image"
src="https://github.com/user-attachments/assets/1bc16048-6eb4-4dfa-9ace-dd271074530f"
/>
</details>
## Summary
Allow metaclass-based and baseclass-based dataclass-transformers to
overwrite the default behavior using class arguments:
```py
class Person(Model, order=True):
    # ...
```
## Conformance tests
Four new tests passing!
## Test Plan
New Markdown tests
This PR updates the constraint implication type relationship to work on
compound types as well. (A compound type is a non-atomic type, like
`list[T]`.)
The goal of constraint implication is to check whether the requirements
of a constraint imply that a particular subtyping relationship holds.
Before, we were only checking atomic typevars. That would let us verify
that the constraint set `T ≤ bool` implies that `T` is always a subtype
of `int`. (In this case, the lhs of the subtyping check, `T`, is an
atomic typevar.)
But we weren't recursing into compound types, to look for nested
occurrences of typevars. That means that we weren't able to see that `T
≤ bool` implies that `Covariant[T]` is always a subtype of
`Covariant[int]`.
Doing this recursion means that we have to carry the constraint set
along with us as we recurse into types as part of `has_relation_to`, by
adding constraint implication as a new `TypeRelation` variant. (Before
it was just a method on `ConstraintSet`.)
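A hedged sketch of the new behavior in mdtest style (`Covariant` here stands for any generic class that is covariant in its type parameter; exact helper names may differ):
```py
class Covariant[T]:
    def get(self) -> T: ...

def _[T]():
    bounded = ConstraintSet.range(Never, T, bool)
    # Atomic typevar: already worked before this PR.
    static_assert(bounded.implies_subtype_of(T, int))
    # Compound types: now works too, by recursing into `Covariant[...]`.
    static_assert(bounded.implies_subtype_of(Covariant[T], Covariant[int]))
```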
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
## Summary
Currently our diagnostic only covers the range of the thing being
subscripted:
<img width="1702" height="312" alt="image"
src="https://github.com/user-attachments/assets/7e630431-e846-46ca-93c1-139f11aaba11"
/>
But it should probably cover the _whole_ subscript expression (arguably
the more "incorrect" bit is the `["foo"]` part of this expression, not
the `x` part of this expression!)
## Test Plan
Added a snapshot
Co-authored-by: Brent Westbrook
<36778786+ntBre@users.noreply.github.com>
## Summary
Extends literal promotion to apply to any generic method, as opposed to
only generic class constructors. This PR also improves our literal
promotion heuristics to only promote literals in non-covariant position
in the return type, and avoid promotion if the literal is present in
non-covariant position in any argument type.
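A hypothetical sketch of the cases the heuristic distinguishes (exact inferred types may differ):
```py
def identity[T](x: T) -> T: ...
def singleton[T](x: T) -> list[T]: ...

identity("a")   # `T` only appears covariantly in the return type: keep `Literal["a"]`
singleton("a")  # `T` sits in an invariant position in `list[T]`: promote to `str`
```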
Resolves https://github.com/astral-sh/ty/issues/1357.
## Summary
- Always restore the previous `deferred_state` after parsing a type
expression: we don't want that state leaking out into other contexts
where we shouldn't be deferring expression inference
- Always defer the right-hand-side of a PEP-613 type alias in a stub
file, allowing for forward references on the right-hand side of `T:
TypeAlias = X | Y` in a stub file
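For example, this should now be accepted in a stub file (hypothetical names):
```pyi
from typing import TypeAlias

MaybeThing: TypeAlias = Thing | None  # r.h.s. is deferred, so this forward reference is fine

class Thing: ...
```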
Addresses @carljm's review in
https://github.com/astral-sh/ruff/pull/21401#discussion_r2524260153
## Test Plan
I added a regression test for a regression that the first version of
this PR introduced (we need to make sure the r.h.s. of a PEP-613
`TypeAlias`es is always deferred in a stub file)
## Summary
We currently fail to account for the type context when inferring generic
classes constructed with `__new__`, or with a synthesized `__init__` for
dataclasses.
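A hypothetical sketch of the kind of code this affects (a dataclass with a synthesized `__init__`):
```py
from dataclasses import dataclass

@dataclass
class Box[T]:
    value: T

def f() -> Box[int | None]:
    # Without using the type context, `T` would be inferred as `None`, and
    # `Box[None]` is not assignable to `Box[int | None]` (invariance).
    return Box(None)
```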
There are a few places in Python where it is known that new names are
being introduced and thus we probably shouldn't offer completions. We
already handle this today for things like `class <CURSOR>` and `def
<CURSOR>`. But we didn't handle `as <CURSOR>`, which can appear in
`import`, `with`, `except` and `match` statements. Indeed, these are
exactly the 4 cases where the `as` keyword can occur. So we look for the
presence of `as` and suppress completions based on that.
While we're here, we also make the implementation a bit more robust with
respect to suppressing completions when the user hasn't typed anything.
Namely, previously, we'd still offer completions in a `class <CURSOR>`
context. But it looks like LSP clients (at least VS Code) don't ask
for completions here, so we were "saved" incidentally. This PR detects
this case and suppresses completions there so we don't rely on LSP
client behavior to handle that case correctly.
Fixes astral-sh/ty#1287
## Summary
Infer the first argument `type` inside `Annotated[type, …]` as a type
expression. This allows us to support stringified annotations inside
`Annotated`.
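For example (hypothetical snippet):
```py
from typing import Annotated

def f(x: Annotated["int | None", "some metadata"]) -> None:
    reveal_type(x)  # int | None — the stringified first argument is understood as a type expression
```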
## Ecosystem
* The removed diagnostic on `prefect` shows that we now understand the
`State.data` type annotation in
`src/prefect/client/schemas/objects.py:230`, which uses a stringified
annotation in `Annotated`. The other diagnostics are downstream changes
that result from this; it seems to be a commonly used data type.
* `artigraph` does something like `Annotated[cast(Any,
field_info.annotation), *field_info.metadata]` which I'm not sure we
need to allow? It's unfortunate since this is probably supported at
runtime, but it seems reasonable that they need to add a `# type:
ignore` for that.
* `pydantic` uses something like `Annotated[(self.annotation,
*self.metadata)]` but adds a `# type: ignore`
## Test Plan
New Markdown test
## Summary
Typeshed has a (fake) `__getattr__` method on `types.ModuleType` with a
return type of `Any`. We ignore this method when accessing attributes on
module *literals*, but with this PR, we respect this method when dealing
with `ModuleType` itself. That is, we allow arbitrary attribute accesses
on instances of `types.ModuleType`. This is useful because dynamic
import mechanisms such as `importlib.import_module` use `ModuleType` as
a return type.
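A hypothetical example of what this enables (module and attribute names made up):
```py
import importlib
import types

def load_plugin(name: str) -> types.ModuleType:
    return importlib.import_module(name)

plugin = load_plugin("my_plugin")
plugin.register()  # no error: falls back to `ModuleType.__getattr__`, which returns `Any`
```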
closes https://github.com/astral-sh/ty/issues/1346
## Ecosystem
Massive reduction in diagnostics. The few new diagnostics are true
positives.
## Test Plan
Added regression test.
## Summary
Add synthetic members to completions on dataclasses and dataclass
instances.
Also, while we're at it, add support for `__weakref__` and
`__match_args__`.
closes https://github.com/astral-sh/ty/issues/1542
## Test Plan
New Markdown tests
## Summary
Support various legacy `typing` special forms (`List`, `Dict`, …) in
implicit type aliases.
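For example (hypothetical snippet):
```py
from typing import Dict, List, Optional

IntList = List[int]          # implicit type alias using a legacy special form
NameToAge = Dict[str, int]

def f(xs: IntList, ages: NameToAge) -> Optional[int]: ...
```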
## Ecosystem impact
A lot of true positives (e.g. on `alerta`)!
## Test Plan
New Markdown tests
## Summary
Support `type[…]` in implicit type aliases, for example:
```py
SubclassOfInt = type[int]
reveal_type(SubclassOfInt) # GenericAlias
def _(subclass_of_int: SubclassOfInt):
    reveal_type(subclass_of_int) # type[int]
```
part of https://github.com/astral-sh/ty/issues/221
## Typing conformance
```diff
-specialtypes_type.py:138:5: error[type-assertion-failure] Argument does not have asserted type `type[Any]`
-specialtypes_type.py:140:5: error[type-assertion-failure] Argument does not have asserted type `type[Any]`
```
Two new tests passing ✔️
```diff
-specialtypes_type.py:146:1: error[unresolved-attribute] Object of type `GenericAlias` has no attribute `unknown`
```
A `TA4.unknown` attribute on a PEP 613 alias (`TA4: TypeAlias =
type[Any]`) is being accessed, and the conformance suite expects this to
be an error. Since we currently use the inferred type for these type
aliases (and possibly in the future as well), we treat this as a direct
access of the attribute on `type[Any]`, which falls back to an access on
`Any` itself, which succeeds. 🔴
```
+specialtypes_type.py:152:16: error[invalid-type-form] `typing.TypeVar` is not a generic class
+specialtypes_type.py:156:16: error[invalid-type-form] `typing.TypeVar` is not a generic class
```
New errors because we don't handle `T = TypeVar("T"); MyType = type[T];
MyType[T]` yet. Support for this is being tracked in
https://github.com/astral-sh/ty/issues/221 🔴
## Ecosystem impact
Looks mostly good, a few known problems.
## Test Plan
New Markdown tests
## Summary
Allow users of `mdtest.py` to press enter to rerun all mdtests without
recompiling (thanks @AlexWaygood).
I swear I tried three other approaches (including a fully async version)
before I settled on this solution. It is indeed silly, but works just
fine.
## Test Plan
Interactive playing around
## Summary
Further improve subscript assignment diagnostics, especially for
`dict`s:
```py
config: dict[str, int] = {}
config["retries"] = "three"
```
<img width="1276" height="274" alt="image"
src="https://github.com/user-attachments/assets/9762c733-8d1c-4a57-8c8a-99825071dc7d"
/>
I have many more ideas, but this looks like a reasonable first step.
Thank you @AlexWaygood for some of the suggestions here.
## Test Plan
Update tests
## Summary
This change to the mdtest runner makes it easy to run on a subset of
tests/files. For example:
```
▶ uv run crates/ty_python_semantic/mdtest.py implicit
running 1 test
test mdtest__implicit_type_aliases ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 281 filtered out; finished in 0.83s
Ready to watch for changes...
```
Subsequent changes to either that test file or the Rust source code will
also only rerun the `implicit_type_aliases` test.
Multiple arguments can be provided, and filters can either be partial
file paths (`loops/for.md`, `loops/for`, `for`) or mangled test names
(`loops_for`):
```
▶ uv run crates/ty_python_semantic/mdtest.py implicit binary/union
running 2 tests
test mdtest__binary_unions ... ok
test mdtest__implicit_type_aliases ... ok
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 280 filtered out; finished in 0.85s
Ready to watch for changes...
```
## Test Plan
Tested it interactively for a while
## Summary
This PR renames the `CallableBinding::matching_overload_index` field to
`CallableBinding::matching_overload_after_parameter_matching` to clarify
the main use case of this field which is to surface type checking errors
on the matching overloads directly instead of using the
`no-matching-overload` diagnostic. This can only happen after parameter
matching as following steps could filter out this overload which should
then result in `no-matching-overload` diagnostic.
Callers should use the `matching_overload_index` _method_ to get the
matching overloads.
## Summary
We synthesize a (potentially large) set of `__setitem__` overloads for
every item in a `TypedDict`. Previously, validation of subscript
assignments on `TypedDict`s relied on actually calling `__setitem__`
with the provided key and value types, which implied that we needed to
do the full overload call evaluation for this large set of overloads.
This PR improves the performance of subscript assignment checks on
`TypedDict`s by validating the assignment directly instead of calling
`__setitem__`.
This PR also adds better handling for assignments to subscripts on union
and intersection types (but does not attempt to make it perfect). It
achieves this by distributing the check over unions and intersections,
instead of calling `__setitem__` on the union/intersection directly. We
already do something similar when validating *attribute* assignments.
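A hypothetical sketch of the union case (the check is now performed per union member):
```py
from typing import TypedDict

class Movie(TypedDict):
    title: str

def f(x: Movie | dict[str, int]) -> None:
    x["title"] = "Dune"  # validated separately against `Movie` and against `dict[str, int]`
```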
## Ecosystem impact
* A lot of diagnostics change their rule type, and/or split into
multiple diagnostics. The new version is more verbose, but easier to
understand, in my opinion
* Almost all of the invalid-key diagnostics come from pydantic, and they
should all go away (including many more) when we implement
https://github.com/astral-sh/ty/issues/1479
* Everything else looks correct to me. There may be some new diagnostics
due to the fact that we now check intersections.
## Test Plan
New Markdown tests.
## Summary
cf. https://github.com/astral-sh/ruff/pull/20962
In the following code, `foo` in the comprehension was not reported as
unresolved:
```python
# error: [unresolved-reference] "Name `foo` used when not defined"
foo
foo = [
    # no error!
    # revealed: Divergent
    reveal_type(x) for _ in () for x in [foo]
]
baz = [
    # error: [unresolved-reference] "Name `baz` used when not defined"
    # revealed: Unknown
    reveal_type(x) for _ in () for x in [baz]
]
```
In fact, this is a more serious bug than it looks: for `foo`,
[`explicit_global_symbol` is
called](6cc3393ccd/crates/ty_python_semantic/src/types/infer/builder.rs (L8052)),
causing a symbol that should actually be `Undefined` to be reported as
being of type `Divergent`.
This PR fixes this bug. As a result, the code in
`mdtest/regression/pr_20962_comprehension_panics.md` no longer panics.
## Test Plan
`corpus\cyclic_symbol_in_comprehension.py` is added.
New tests are added in `mdtest/comprehensions/basic.md`.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Fixes #21393
Now the rule checks if the index variable is initialized as an `int`
type rather than only flagging if the index variable is initialized to
`0`. I used `ResolvedPythonType` to check if the index variable is an
`int` type.
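A hypothetical example of what the updated rule now also flags:
```python
def count(items: list[str]) -> None:
    idx = 10  # previously only `idx = 0` was recognized
    for item in items:
        print(idx, item)
        idx += 1  # SIM113: consider using `enumerate()`
```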
## Test Plan
Updated snapshot test for `SIM113`.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Add (snapshot) tests for subscript assignment diagnostics. This is
mainly intended to establish a baseline before I hope to improve some of
these messages.
## Summary
Add support for `typing.Union` in implicit type aliases / in value
position.
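For example (hypothetical snippet):
```py
from typing import Union

IntOrStr = Union[int, str]  # implicit type alias in value position

def f(x: IntOrStr) -> None:
    reveal_type(x)  # int | str
```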
## Typing conformance tests
Two new tests are passing
## Ecosystem impact
* The 2k new `invalid-key` diagnostics on pydantic are caused by
https://github.com/astral-sh/ty/issues/1479#issuecomment-3513854645.
* Everything else I've checked is either a known limitation (often
related to type narrowing, because union types are often narrowed down
to a subset of options), or a true positive.
## Test Plan
New Markdown tests
I don't know why, but it always takes me an eternity to find the failing
project name a few lines below in the output. So I'm suggesting we just
add the project name to the assertion message.
## Summary
Fix https://github.com/astral-sh/ty/issues/664
This PR adds support for storing attributes in comprehension scopes (any
eager scope.)
For example in the following code we infer type of `z` correctly:
```py
class C:
    def __init__(self):
        [None for self.z in range(1)]
reveal_type(C().z) # previously [unresolved-attribute] but now shows Unknown | int
```
The fix works by adjusting the following logic:
To identify whether an attribute is an assignment to self or cls, we need
to check that the scope is a method. To allow comprehension scopes here, we
skip any eager scope in the check.
Also, at this stage the code checks whether self (or the first method
argument) is shadowed by another binding in that eager scope, to prevent this:
```py
class D:
    g: int
class C:
    def __init__(self):
        [[None for self.g in range(1)] for self in [D()]]
reveal_type(C().g) # [unresolved-attribute]
```
When determining the scopes in which attributes might be defined, after
collecting all the methods of the class, the code also returns any
descendant scope that is eager and only has eager parents up to the
method scope.
When checking reachability of an attribute definition, if the attribute is
defined in an eager scope we use the reachability of the first non-eager
scope, which must be a method. This allows such attributes to be marked as
reachable and be seen.
There are also cases which I didn't add support for:
```py
class C:
    def __init__(self):
        def f():
            [None for self.z in range(1)]
        f()
reveal_type(C().z) # [unresolved-attribute]
```
In the above example, we will not even return the comprehension scope as
an attribute scope, because there is a non-eager scope (the `f` function)
between the comprehension and the `__init__` method.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
It looks like VS Code does this forcefully. As in, I don't think we can
override it. It also seems like a plausibly good idea. But by us doing
it too, it makes our completion evaluation framework match real world
conditions. (To the extent that "VS Code" and "real world conditions"
are the same. Which... they aren't. But it's close, since VS Code is so
popular.)
This should round out the rest of the set. I think I had hesitated doing
this before because some of these don't make sense in every context. But
I think identifying the correct context for every keyword could be quite
difficult. And at the very least, I think offering these at least as a
choice---even if they aren't always correct---is better than not doing
it at all.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1409
This PR allows `Final` instance attributes to be initialized in
`__init__` methods, as mandated by the Python typing specification (PEP
591). Previously, ty incorrectly prevented this initialization, causing
false positive errors.
The fix checks if we're inside an `__init__` method before rejecting
Final attribute assignments, allowing assignments during
instance initialization while still preventing reassignment elsewhere.
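For example, this pattern (from PEP 591) no longer produces a false positive:
```py
from typing import Final

class Config:
    def __init__(self) -> None:
        self.timeout: Final = 30  # allowed: initial assignment in `__init__`

    def update(self) -> None:
        self.timeout = 60  # still an error: reassignment of a `Final` attribute
```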
## Test Plan
- Added new test coverage in `final.md` for the reported issue with
`Self` annotations
- Updated existing tests that were incorrectly expecting errors
- All 278 mdtest tests pass
- Manually tested with real-world code examples
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Fixes https://github.com/astral-sh/ty/issues/1487
This one is a true extension of non-standard semantics, and is therefore
a certified Hot Take we might conclude is simply a Bad Take (let's see
what ecosystem tests say...).
By resolving `.` and the LHS of the from import during semantic
indexing, we can check if the LHS is a submodule of `.`, and handle
`from whatever.thispackage.x.y import z` exactly like we do `from .x.y
import z`.
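A hypothetical sketch of the equivalence (package layout made up):
```py
# Inside mypkg/__init__.py (where `mypkg` is the current package), this import:
from mypkg.x.y import z
# is now handled exactly like its relative form:
# from .x.y import z
```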
Fixes https://github.com/astral-sh/ty/issues/1484
This manifested as an error when inferring the type of a PEP-695 generic
class via its constructor parameters:
```py
class D[T, U]:
    @overload
    def __init__(self: "D[str, U]", u: U) -> None: ...
    @overload
    def __init__(self, t: T, u: U) -> None: ...
    def __init__(self, *args) -> None: ...
# revealed: D[Unknown, str]
# SHOULD BE: D[str, str]
reveal_type(D("string"))
```
This manifested because `D` is inferred to be bivariant in both `T` and
`U`. We weren't seeing this in the equivalent example for legacy
typevars, since those default to invariant. (This issue also showed up
for _covariant_ typevars, so this issue was not limited to bivariance.)
The underlying cause was because of a heuristic that we have in our
current constraint solver, which attempts to handle situations like
this:
```py
def f[T](t: T | None): ...
f(None)
```
Here, the `None` argument matches the non-typevar union element, so this
argument should not add any constraints on what `T` can specialize to.
Our previous heuristic would check for this by seeing if the argument
type is a subtype of the parameter annotation as a whole — even if it
isn't a union! That would cause us to erroneously ignore the `self`
parameter in our constructor call, since bivariant classes are
equivalent to each other, regardless of their specializations.
The quick fix is to move this heuristic "down a level", so that we only
apply it when the parameter annotation is a union. This heuristic should
go away completely 🤞 with the new constraint solver.
This loses any ability to have "per-function" implicit submodule
imports, to avoid the "ok but now we need per-scope imports" and "ok but
this should actually introduce a global that only exists during this
function" problems. A simple and clean implementation with no weird
corners.
Fixes https://github.com/astral-sh/ty/issues/1482
This rips out the previous implementation in favour of a new
implementation with 3 rules:
- **froms are locals**: a `from..import` can only define locals, it does
not have global
side-effects. Specifically any submodule attribute `a` that's implicitly
introduced by either
`from .a import b` or `from . import a as b` (in an `__init__.py(i)`) is
a local and not a
global. If you do such an import at the top of a file you won't notice
this. However if you do
such an import in a function, that means it will only be function-scoped
(so you'll need to do
it in every function that wants to access it, making your code less
sensitive to execution
order).
- **first from first serve**: only the *first* `from..import` in an
`__init__.py(i)` that imports a
particular direct submodule of the current package introduces that
submodule as a local.
Subsequent imports of the submodule will not introduce that local. This
reflects the fact that
in actual python only the first import of a submodule (in the entire
execution of the program)
introduces it as an attribute of the package. By "first" we mean "the
first time in this scope
(or any parent scope)". This pairs well with the fact that we are
specifically introducing a
local (as long as you don't accidentally shadow or overwrite the local).
- **dot re-exports**: `from . import a` in an `__init__.pyi` is
considered a re-export of `a`
(equivalent to `from . import a as a`). This is required to properly
handle many stubs in the
wild. Currently it must be *exactly* `from . import ...`.
This implementation is intentionally limited/conservative (notably,
often requiring a from import to be relative). I'm going to file a ton
of followups for improvements so that their impact can be evaluated
separately.
Fixes https://github.com/astral-sh/ty/issues/133
## Summary
Fixed RUF065 (`logging-eager-conversion`) to only flag `str()` calls
when they perform a simple conversion that can be safely removed. The
rule now ignores `str()` calls with no arguments, multiple arguments,
starred arguments, or keyword unpacking, preventing false positives.
Fixes #21315
## Problem Analysis
The RUF065 rule was incorrectly flagging all `str()` calls in logging
statements, even when `str()` was performing actual conversion work
beyond simple type coercion. Specifically, the rule flagged:
- `str()` with no arguments - which returns an empty string
- `str(b"data", "utf-8")` with multiple arguments - which performs
encoding conversion
- `str(*args)` with starred arguments - which unpacks arguments
- `str(**kwargs)` with keyword unpacking - which passes keyword
arguments
These cases cannot be safely removed because `str()` is doing meaningful
work (encoding conversion, argument unpacking, etc.), not just redundant
type conversion.
The root cause was that the rule only checked if the function was
`str()` without validating the call signature. It didn't distinguish
between simple `str(value)` conversions (which can be removed) and more
complex `str()` calls that perform actual work.
## Approach
The fix adds validation to the `str()` detection logic in
`logging_eager_conversion.rs`:
1. **Check argument count**: Only flag `str()` calls with exactly one
positional argument (`str_call_args.args.len() == 1`)
2. **Check for starred arguments**: Ensure the single argument is not
starred (`!str_call_args.args[0].is_starred_expr()`)
3. **Check for keyword arguments**: Ensure there are no keyword
arguments (`str_call_args.keywords.is_empty()`)
This ensures the rule only flags cases like `str(value)` where `str()`
is truly redundant and can be removed, while ignoring cases where
`str()` performs actual conversion work.
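As a rough illustration (assuming, per the summary, that the rule targets `str(...)` arguments passed to logging calls):
```py
import logging

value = 42

logging.info("value: %s", str(value))  # still flagged: a plain, removable conversion

logging.info("empty: %s", str())                    # no longer flagged: no arguments
logging.info("decoded: %s", str(b"data", "utf-8"))  # no longer flagged: multiple arguments
args = (b"data", "utf-8")
logging.info("unpacked: %s", str(*args))            # no longer flagged: starred argument
kwargs = {"object": b"data", "encoding": "utf-8"}
logging.info("kwargs: %s", str(**kwargs))           # no longer flagged: keyword unpacking
```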
The fix maintains backward compatibility - all existing valid test cases
continue to be flagged correctly, while the new edge cases are properly
ignored.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
It's everyone's favourite language corner case!
Also having kicked the tires on it, I'm pretty happy to call this (in
conjunction with #21367):
Fixes https://github.com/astral-sh/ty/issues/494
There are cases where you can make noisy `Literal` hints appear, so we can always iterate on it, but this handles roughly 98% of the cases in the wild, which is great.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
I'm not 100% sold on this implementation, but it's a strict improvement
and it adds a ton of snapshot tests for future iteration.
Part of https://github.com/astral-sh/ty/issues/494
## Summary
Fixes FURB105 (`print-empty-string`) to detect empty f-strings in
addition to regular empty strings. Previously, the rule only flagged
`print("")` but missed `print(f"")`. This fix ensures both cases are
detected and can be automatically fixed.
Fixes#21346
## Problem Analysis
The FURB105 rule checks for unnecessary empty strings passed to
`print()` calls. The `is_empty_string` helper function was only checking
for `Expr::StringLiteral` with empty values, but did not handle
`Expr::FString` (f-strings). As a result, `print(f"")` was not being
flagged as a violation, even though it's semantically equivalent to
`print("")` and should be simplified to `print()`.
The issue occurred because the function used a `matches!` macro that
only checked for string literals:
```rust
fn is_empty_string(expr: &Expr) -> bool {
    matches!(
        expr,
        Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) if value.is_empty()
    )
}
```
## Approach
1. **Import the helper function**: Added `is_empty_f_string` to the
imports from `ruff_python_ast::helpers`, which already provides logic to
detect empty f-strings.
2. **Update `is_empty_string` function**: Changed the implementation
from a `matches!` macro to a `match` expression that handles both string
literals and f-strings:
```rust
fn is_empty_string(expr: &Expr) -> bool {
    match expr {
        Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) => value.is_empty(),
        Expr::FString(f_string) => is_empty_f_string(f_string),
        _ => false,
    }
}
```
The fix leverages the existing `is_empty_f_string` helper function which
properly handles the complexity of f-strings, including nested f-strings
and interpolated expressions. This ensures the detection is accurate and
consistent with how empty strings are detected elsewhere in the
codebase.
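Concretely, both of these are now detected and can be auto-fixed to a bare `print()`:
```py
print("")    # already flagged by FURB105
print(f"")   # now also flagged, since it is an empty f-string
```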
## Summary
Resolves https://github.com/astral-sh/ty/issues/1494
## Test Plan
Add a test showing that if we are in `from <name> <name>` we provide the keyword completion "import".
This elides the following inlay hints:
```py
foo([x=]x)
foo([x=]y.x)
foo([x=]x[0])
foo([x=]x(...))
# composes to complex situations
foo([x=]y.x(...)[0])
```
Fixes https://github.com/astral-sh/ty/issues/1514
Summary
--
Fixes#21360 by using the union of names instead of overwriting them, as
Micha suggested originally on #21104.
This avoids overwriting the `n` name in the `Subscript` by the empty set
of names visited in the nested OR pattern before visiting the other arm
of the outer OR pattern.
Test Plan
--
A new inline test case taken from the issue
## Summary
Detect usages of implicit `self` in property getters, which allows us to
treat their signature as being generic.
closes https://github.com/astral-sh/ty/issues/1502
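A rough sketch of the kind of pattern this enables (hypothetical classes; the inference is simplified here):
```py
class Widget:
    @property
    def me(self):       # implicit `self` is now detected, so the getter behaves
        return self     # like a generic method over the actual instance type

class Button(Widget): ...

reveal_type(Button().me)  # ideally `Button`, not just `Widget`
```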
## Typing conformance
Two new type assertions that are succeeding.
## Ecosystem results
Mostly look good. There are a few new false positives related to a bug
with constrained typevars that is unrelated to the work here. I reported
this as https://github.com/astral-sh/ty/issues/1503.
## Test Plan
Added regression tests.
## Summary
Add support for `Optional` and `Annotated` in implicit type aliases
part of https://github.com/astral-sh/ty/issues/221
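For example, a small sketch of the newly supported patterns:
```py
from typing import Annotated, Optional

MaybeInt = Optional[int]             # implicit type alias using Optional
Meters = Annotated[float, "meters"]  # implicit type alias using Annotated

def f(x: MaybeInt, y: Meters):
    reveal_type(x)  # int | None
    reveal_type(y)  # float
```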
## Typing conformance changes
New expected diagnostics.
## Ecosystem
A lot of true positives, some known limitations unrelated to this PR.
## Test Plan
New Markdown tests
## Summary
This PR adds extra validation for `isinstance()` and `issubclass()`
calls that use `UnionType` instances for their second argument.
According to typeshed's annotations, any `UnionType` is accepted for the
second argument, but this isn't true at runtime: at runtime, all
elements in the `UnionType` must either be class objects or be `None` in
order for the `isinstance()` or `issubclass()` call to reliably succeed:
```pycon
% uvx python3.14
Python 3.14.0 (main, Oct 10 2025, 12:54:13) [Clang 20.1.4 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from typing import LiteralString
>>> import types
>>> type(LiteralString | int) is types.UnionType
True
>>> isinstance(42, LiteralString | int)
Traceback (most recent call last):
File "<python-input-5>", line 1, in <module>
isinstance(42, LiteralString | int)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alexw/Library/Application Support/uv/python/cpython-3.14.0-macos-aarch64-none/lib/python3.14/typing.py", line 559, in __instancecheck__
raise TypeError(f"{self} cannot be used with isinstance()")
TypeError: typing.LiteralString cannot be used with isinstance()
```
## Test Plan
Added mdtests/snapshots
## Summary
Fixed FURB101 (`read-whole-file`) to handle annotated assignments.
Previously, the rule would detect violations in code like `contents: str
= f.read()` but fail to generate a fix. Now it correctly generates fixes
that preserve type annotations (e.g., `contents: str =
Path("file.txt").read_text(encoding="utf-8")`).
Fixes#21274
## Problem Analysis
The FURB101 rule was only checking for `Stmt::Assign` statements when
determining whether a fix could be applied. When encountering annotated
assignments (`Stmt::AnnAssign`) like `contents: str = f.read()`, the
rule would:
1. Correctly detect the violation (the diagnostic was reported)
2. Fail to generate a fix because:
- The `visit_expr` method only matched `Stmt::Assign`, not
`Stmt::AnnAssign`
- The `generate_fix` function only accepted `Stmt::Assign` in its body
validation
- The replacement code generation didn't account for type annotations
This occurred because Python's AST represents annotated assignments as a
different node type (`StmtAnnAssign`) with separate fields for the
target, annotation, and value, unlike regular assignments which use a
list of targets.
## Approach
The fix extends the rule to handle both assignment types:
1. **Updated `visit_expr` method**: Now matches both `Stmt::Assign` and
`Stmt::AnnAssign`, extracting:
- Variable name from the target expression
- Type annotation code (when present) using the code generator
2. **Updated `generate_fix` function**:
- Added `annotation: Option<String>` parameter to accept annotation code
- Updated body validation to accept both `Stmt::Assign` and
`Stmt::AnnAssign`
- Modified replacement code generation to preserve annotations: `{var}:
{annotation} = {binding}({filename_code}).{suggestion}`
3. **Added test case**: Added an annotated assignment test case to
verify the fix works correctly.
The implementation maintains backward compatibility with regular
assignments while adding support for annotated assignments, ensuring
type annotations are preserved in the generated fixes.
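A before/after sketch, loosely based on the example in the summary:
```py
from pathlib import Path

# Before: flagged, but previously no fix was offered for the annotated assignment.
with open("file.txt", encoding="utf-8") as f:
    contents: str = f.read()

# After applying the suggested fix, the annotation is preserved:
contents: str = Path("file.txt").read_text(encoding="utf-8")
```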
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
We have lots of `TypeVisitor`s that end up having very similar
`visit_type` implementations. This PR consolidates some of the code for
these so that there's less repetition and duplication.
When checking whether a constraint set is satisfied, if a typevar has a
non-fully-static upper bound or constraint, we are free to choose any
materialization that makes the check succeed.
In non-inferable positions, we have to show that the constraint set is
satisfied for all valid specializations, so it's best to choose the most
restrictive materialization, since that minimizes the set of valid
specializations that have to pass.
In inferable positions, we only have to show that the constraint set is
satisfied for _some_ valid specializations, so it's best to choose the
most permissive materialization, since that maximizes our chances of
finding a specialization that passes.
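A hedged sketch of the distinction (the function and bound here are hypothetical):
```py
from typing import Any

def f[T: list[Any]](x: T) -> T:
    # Non-inferable position: inside the body, checks must hold for *all* valid
    # specializations of `T`, so the most restrictive materialization of the
    # non-fully-static bound `list[Any]` is the safe choice.
    return x

# Inferable position: when solving `T` for this call, we only need *some* valid
# specialization, so the most permissive materialization of `list[Any]` is chosen,
# and `list[int]` is accepted.
f([1, 2, 3])
```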
Summary
--
These rules are themselves in preview, so we don't need the additional
preview checks on the fixes or the separate preview tests. This has
confused me in a couple of reviews of changes to the fixes.
Test Plan
--
Existing tests, with the fixes previously only shown in the preview
tests now in the "non-preview" tests.
## Summary
Add support for `Literal` types in implicit type aliases.
part of https://github.com/astral-sh/ty/issues/221
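For example:
```py
from typing import Literal

Direction = Literal["up", "down"]  # implicit type alias: a plain assignment, no `type` statement

def move(d: Direction):
    reveal_type(d)  # Literal["up", "down"]
```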
## Ecosystem analysis
This looks good to me, true positives and known problems.
## Test Plan
New Markdown tests.
## Summary
This PR adds support for understanding the legacy definition and PEP 695
definition for `ParamSpec`.
This is still very initial and doesn't really implement any of the
semantics.
Part of https://github.com/astral-sh/ty/issues/157
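The two spelling variants this covers, as a minimal sketch:
```py
from typing import Callable, ParamSpec

P = ParamSpec("P")  # legacy definition

def legacy(f: Callable[P, int]) -> Callable[P, int]:
    return f

def pep695[**Q](f: Callable[Q, int]) -> Callable[Q, int]:  # PEP 695 definition
    return f
```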
## Test Plan
Add mdtest cases.
## Ecosystem analysis
Most of the diagnostics in `starlette` are due to the fact that ty now
understands `ParamSpec` is not a `Todo` type, so the assignability check
fails. The code looks something like:
```py
class _MiddlewareFactory(Protocol[P]):
    def __call__(self, app: ASGIApp, /, *args: P.args, **kwargs: P.kwargs) -> ASGIApp: ...  # pragma: no cover

class Middleware:
    def __init__(
        self,
        cls: _MiddlewareFactory[P],
        *args: P.args,
        **kwargs: P.kwargs,
    ) -> None:
        self.cls = cls
        self.args = args
        self.kwargs = kwargs

# ty complains that `ServerErrorMiddleware` is not assignable to `_MiddlewareFactory[P]`
Middleware(ServerErrorMiddleware, handler=error_handler, debug=debug)
```
There are multiple diagnostics where there's an attribute access on the
`Wrapped` object of `functools` which Pyright also raises:
```py
from functools import wraps

def my_decorator(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        return f(*args, **kwds)

    # Pyright: Cannot access attribute "__signature__" for class "_Wrapped[..., Unknown, ..., Unknown]"
    #   Attribute "__signature__" is unknown [reportAttributeAccessIssue]
    # ty: Object of type `_Wrapped[Unknown, Unknown, Unknown, Unknown]` has no attribute `__signature__` [unresolved-attribute]
    wrapper.__signature__
    return wrapper
```
There are additional diagnostics that are due to the assignability checks
failing because ty now infers the `ParamSpec` instead of using the
`Todo` type which would always succeed. This results in a few
`no-matching-overload` diagnostics because the assignability checks
fail.
There are a few diagnostics related to
https://github.com/astral-sh/ty/issues/491 where there's a variable
which is either a bound method or a variable that's annotated with
`Callable` that doesn't contain the instance as the first parameter.
Another set of (valid) diagnostics are where the code hasn't provided
all the type variables. ty is now raising diagnostics for these because
we include `ParamSpec` type variable in the signature. For example,
`staticmethod[Any]` which contains two type variables.
## Summary
Closes https://github.com/astral-sh/ty/issues/989
There are various situations where users expect the Python packages
installed in the same environment as ty itself to be considered during
type checking. A minimal example would look like:
```
uv venv my-env
uv pip install my-env ty httpx
echo "import httpx" > foo.py
./my-env/bin/ty check foo.py
```
or
```
uv tool install ty --with httpx
echo "import httpx" > foo.py
ty check foo.py
```
While these are a bit contrived, there are real-world situations where a
user would expect a similar behavior to work. Notably, all of the other
type checkers consider their own environment when determining search
paths (though I'll admit that I have not verified when they choose not
to do this).
One common situation where users are encountering this today is with
`uvx --with-requirements script.py ty check script.py` — which is
currently our "best" recommendation for type checking a PEP 723 script,
but it doesn't work.
Of the options discussed in
https://github.com/astral-sh/ty/issues/989#issuecomment-3307417985, I've
chosen (2) as our criteria for including ty's environment in the search
paths.
- If no virtual environment is discovered, we will always include ty's
environment.
- If a `.venv` is discovered in the working directory, we will _prepend_
ty's environment to the search paths. The dependencies in ty's
environment (e.g., from `uvx --with`) will take precedence.
- If a virtual environment is active (i.e., `VIRTUAL_ENV` is set, which includes conda prefixes), we will not include ty's environment.
The reason we need to special case the `.venv` case is that we both
1. Recommend `uvx ty` today as a way to check your project
2. Want to enable `uvx --with <...> ty`
And I don't want (2) to break when you _happen_ to be in a project
(i.e., if we only included ty's environment when _no_ environment is
found) and don't want to remove support for (1).
I think long-term, I want to make `uvx <cmd>` layer the environment on
_top_ of the project environment (in uv), which would obviate the need
for this change when you're using uv. However, that change is breaking
and I think users will expect this behavior in contexts where they're
not using uv, so I think we should handle it in ty regardless.
I've opted not to include the environment if it's non-virtual (i.e., a
system environment) for now. It seems better to start by being more
restrictive. I left a comment in the code.
## Test Plan
I did some manual testing with the initial commit, then subsequently
added some unit tests.
```
❯ echo "import httpx" > example.py
❯ uvx --with httpx ty check example.py
Installed 8 packages in 19ms
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo/example.py:1:8
|
1 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
❯ uvx --from . --with httpx ty check example.py
All checks passed!
```
```
❯ uv init --script foo.py
Initialized script at `foo.py`
❯ uv add --script foo.py httpx
warning: The Python request from `.python-version` resolved to Python 3.13.8, which is incompatible with the script's Python requirement: `>=3.14`
Updated `foo.py`
❯ echo "import httpx" >> foo.py
❯ uvx --with-requirements foo.py ty check foo.py
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo.py:15:8
|
13 | if __name__ == "__main__":
14 | main()
15 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
❯ uvx --from . --with-requirements foo.py ty check foo.py
All checks passed!
```
Notice we do not include ty's environment if `VIRTUAL_ENV` is set
```
❯ VIRTUAL_ENV=.venv uvx --with httpx ty check foo/example.py
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo/example.py:1:8
|
1 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: 4. /Users/zb/workspace/ty/.venv/lib/python3.13/site-packages (site-packages)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
```
## Summary
Raised by @AlexWaygood.
We previously did not favour imported symbols, when we probably should have.
## Test Plan
Add a test showing that we favour an imported symbol even if it comes alphabetically after other symbols that are builtins.
## Summary
This PR ports PLE0117 as a semantic syntax error.
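For reference, the pattern PLE0117 (`nonlocal-without-binding`) catches; CPython itself rejects it at compile time, which is why it fits as a semantic syntax error:
```py
def outer():
    def inner():
        nonlocal x  # PLE0117: no binding for `x` exists in an enclosing scope
        x = 1
```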
## Test Plan
Tests previously written
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
This PR carries over some of the `has_relation_to` logic for comparing a
typevar with itself. A typevar will specialize to the same type if it's
mentioned multiple times, so it is always assignable to and a subtype of
itself. (Note that typevars can only specialize to fully static types.)
This is also true when the typevar appears in a union on the right-hand
side, or in an intersection on the left-hand side. Similarly, a typevar
is always disjoint from its negation, so when a negated typevar appears
on the left-hand side, the constraint set is never satisfiable.
(Eventually this will allow us to remove the corresponding clauses from
`has_relation_to`, but that can't happen until more of #20093 lands.)
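Roughly, the relationships being encoded, expressed here with `ty_extensions` predicates purely as a sketch (the PR itself implements them at the constraint-set level):
```py
from ty_extensions import is_assignable_to, is_subtype_of, static_assert

def f[T]():
    static_assert(is_subtype_of(T, T))         # a typevar is a subtype of itself
    static_assert(is_subtype_of(T, T | None))  # and of a union containing it
    static_assert(is_assignable_to(T, T))      # likewise for assignability
```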
## Summary
Splitting this one out from https://github.com/astral-sh/ruff/pull/21210. This is also something that should be made obsolete by the new constraint solver, but is easy enough to fix now.
## Summary
Fixes FURB157 false negative where `Decimal("_-1")` was not flagged as
verbose when underscores precede the sign character. This fixes#21186.
## Problem Analysis
The `verbose-decimal-constructor` (FURB157) rule failed to detect
verbose `Decimal` constructors when the sign character (`+` or `-`) was
preceded by underscores. For example, `Decimal("_-1")` was not flagged,
even though it can be simplified to `Decimal(-1)`.
The bug occurred because the rule checked for the sign character at the
start of the string before stripping leading underscores. According to
Python's `Decimal` parser behavior (as documented in CPython's
`_pydecimal.py`), underscores are removed before parsing the sign. The
rule's logic didn't match this behavior, causing a false negative for
cases like `"_-1"` where the underscore came before the sign.
This was a regression introduced in version 0.14.3, as these cases were
correctly flagged in version 0.14.2.
## Approach
The fix updates the sign extraction logic to:
1. Strip leading underscores first (matching Python's Decimal parser
behavior)
2. Extract the sign from the underscore-stripped string
3. Preserve the string after the sign for normalization purposes
This ensures that cases like `Decimal("_-1")`, `Decimal("_+1")`, and
`Decimal("_-1_000")` are correctly detected and flagged. The
normalization logic was also updated to use the string after the sign
(without underscores) to avoid double signs in the replacement output.
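The previously missed cases, shown as code (assuming, per the summary, that these string forms are accepted by `Decimal`, whose pure-Python parser strips underscores before parsing the sign):
```py
from decimal import Decimal

# Now flagged again; e.g. `Decimal("_-1")` can be simplified to `Decimal(-1)`.
Decimal("_-1")
Decimal("_+1")
Decimal("_-1_000")
```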
## Summary
Allow values of type `None` in type expressions. The [typing
spec](https://typing.python.org/en/latest/spec/annotations.html#type-and-annotation-expressions)
could be more explicit on whether this is actually allowed or not, but
it seems relatively harmless and does help in some use cases like:
```py
try:
from module import MyClass
except ImportError:
MyClass = None # ty: ignore
def f(m: MyClass):
pass
```
## Test Plan
Updated tests, ecosystem check.
Summary
--
This PR fixes#17796 by taking the approach mentioned in
https://github.com/astral-sh/ruff/issues/17796#issuecomment-2847943862
of simply recursing into the `MatchAs` patterns when checking if we need
parentheses. This allows us to reuse the parentheses in the inner
pattern before also breaking the `MatchAs` pattern itself:
```diff
match class_pattern:
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
pass
- case (
- Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
- ):
+ case Class(
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ ) as capture:
pass
- case (
- Class(
- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- ) as capture
- ):
+ case Class(
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ ) as capture:
pass
case (
Class(
@@ -685,13 +683,11 @@
match sequence_pattern_brackets:
case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
pass
- case (
- [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture
- ):
+ case [
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ ] as capture:
pass
- case (
- [
- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- ] as capture
- ):
+ case [
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ ] as capture:
pass
```
I haven't really resolved the question of whether or not it's okay
always to recurse, but I'm hoping the ecosystem check on this PR might
shed some light on that.
Test Plan
--
New tests based on the issue and then reviewing the ecosystem check here
## Summary
A lot of the bidirectional inference work relies on `dict` not being
assignable to `TypedDict`, so I think it makes sense to add this before
fully implementing https://github.com/astral-sh/ty/issues/1387.
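A hypothetical illustration of the check this adds:
```py
from typing import TypedDict

class Movie(TypedDict):
    title: str

def takes_movie(m: Movie): ...

d: dict[str, str] = {"title": "Blade Runner"}
takes_movie(d)  # error: `dict[str, str]` is not assignable to `Movie`
```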
## Summary
Add support for implicit type aliases that use PEP 604 unions:
```py
IntOrStr = int | str
reveal_type(IntOrStr) # UnionType
def _(int_or_str: IntOrStr):
reveal_type(int_or_str) # int | str
```
## Typing conformance
The changes are either removed false positives, or new diagnostics due
to known limitations unrelated to this PR.
## Ecosystem impact
Spot checked, a mix of true positives and known limitations.
## Test Plan
New Markdown tests.
Fixes https://github.com/astral-sh/ty/issues/1053
## Summary
Other type checkers prioritize a submodule over a package `__getattr__`
in `from mod import sub`, even though the runtime precedence is the
other direction. In effect, this is making an implicit assumption that a
module `__getattr__` will not handle (that is, will raise
`AttributeError`) for names that are also actual submodules, rather than
shadowing them. In practice this seems like a realistic assumption in
the ecosystem? Or at least the ecosystem has adapted to it, and we need
to adapt this precedence also, for ecosystem compatibility.
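A sketch of the behavior change (hypothetical package layout):
```py
# mod/__init__.py
def __getattr__(name: str) -> int: ...

# mod/sub.py exists as a real submodule

# main.py
from mod import sub
reveal_type(sub)  # now the submodule `mod.sub`, not the `__getattr__` return type
```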
The implementation is a bit ugly, precisely because it departs from the
runtime semantics, and our implementation is oriented toward modeling
runtime semantics accurately. That is, `__getattr__` is modeled within
the member-lookup code, so it's hard to split "member lookup result from
module `__getattr__`" apart from other member lookup results. I did this
via a synthetic `TypeQualifier::FROM_MODULE_GETATTR` that we attach to a
type resulting from a member lookup, which isn't beautiful but it works
well and doesn't introduce inefficiency (e.g. redundant member lookups).
## Test Plan
Updated mdtests.
Also added a related mdtest formalizing our support for a module
`__getattr__` that is explicitly annotated to accept a limited set of
names. In principle this could be an alternative (more explicit) way to
handle the precedence problem without departing from runtime semantics,
if the ecosystem would adopt it.
### Ecosystem analysis
Lots of removed diagnostics which are an improvement because we now
infer the expected submodule.
Added diagnostics are mostly unrelated issues surfaced now because we
previously had an earlier attribute error resulting in `Unknown`; now we
correctly resolve the module so that earlier attribute error goes away,
we get an actual type instead of `Unknown`, and that triggers a new
error.
In scipy and sklearn, the module `__getattr__` which we were respecting
previously is un-annotated so returned a forgiving `Unknown`; now we
correctly see the actual module, which reveals some cases of
https://github.com/astral-sh/ty/issues/133 that were previously hidden
(`scipy/optimize/__init__.py` [imports `from
._tnc`](eff82ca575/scipy/optimize/__init__.py (L429)).)
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
* extend AIR301 to include deprecated argument `concurrency` in
`airflow....DAG`
## Test Plan
update the existing test fixture in the first commit and then reorganize
in the second one
## Summary
Resolves https://github.com/astral-sh/ty/issues/1464
We sort the completions before we add the unimported ones, meaning that
imported completions show up before unimported ones.
This is also spoken about in
https://github.com/astral-sh/ty/issues/1274, and this is probably a
duplicate of that.
@AlexWaygood mentions this
[here](https://github.com/astral-sh/ty/issues/1274#issuecomment-3345942698)
too.
## Test Plan
Add a test showing that even if an unimported completion "should" come first alphabetically, we favor the imported one.
Summary
--
This code has been unused since #14233 but not detected by clippy I
guess. This should help to remove the temptation to use the set
comparison again like I suggested in #21144. And we shouldn't do the set
comparison because of #13802, which #14233 fixed.
Test Plan
--
Existing tests
Fixes https://github.com/astral-sh/ty/issues/1368
## Summary
Add support for patterns like this, where a type alias to a literal type
(or union of literal types) is used to subscript `typing.Literal`:
```py
type MyAlias = Literal[1]
def _(x: Literal[MyAlias]): ...
```
This shows up in the ecosystem report for PEP 613 type alias support.
One interesting case is an alias to `bool` or an enum type. `bool` is an
equivalent type to `Literal[True, False]`, which is a union of literal
types. Similarly an enum type `E` is also equivalent to a union of its
member literal types. Since (for explicit type aliases) we infer the RHS
directly as a type expression, this makes it difficult for us to
distinguish between `bool` and `Literal[True, False]`, so we allow
either one (or an alias to either one) to appear inside `Literal`,
where other type checkers allow only the latter.
I think for implicit type aliases it may be simpler to support only
types derived from actually subscripting `typing.Literal`, though, so I
didn't make a TODO-comment commitment here.
## Test Plan
Added mdtests, including TODO-filled tests for PEP 613 and implicit type
aliases.
### Conformance suite
All changes here are positive -- we now emit errors on lines that should
be errors. This is a side effect of the new implementation, not the
primary purpose of this PR, but it's still a positive change.
### Ecosystem
Eliminates one ecosystem false positive, where a PEP 695 type alias for
a union of literal types is used to subscript `typing.Literal`.
## Summary
Adds type inference for list/dict/set comprehensions, including
bidirectional inference:
```py
reveal_type({k: v for k, v in [("a", 1), ("b", 2)]}) # dict[Unknown | str, Unknown | int]
squares: list[int | None] = [x for x in range(10)]
reveal_type(squares) # list[int | None]
```
## Ecosystem impact
I did spot check the changes and most of them seem like known
limitations or true positives. Without proper bidirectional inference,
we saw a lot of false positives.
## Test Plan
New Markdown tests
## Summary
@BurntSushi provided some feedback in #21146 so i address it here.
Summary
--
This is a first step toward fixing #9745. After reviewing our open
issues and several Black issues and PRs, I personally found the function
case the most compelling, especially with very long argument lists:
```py
def func(
self,
arg1: int,
arg2: bool,
arg3: bool,
arg4: float,
arg5: bool,
) -> tuple[...]:
if arg2 and arg3:
raise ValueError
```
or many annotations:
```py
def function(
self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:
do_something(data)
return something
```
I think docstrings help the situation substantially both because syntax
highlighting will usually give a very clear separation between the
annotations and the docstring and because we already allow a blank line
_after_ the docstring:
```py
def function(
self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:
"""
A function doing something.
And a longer description of the things it does.
"""
do_something(data)
return something
```
There are still other comments on #9745, such as [this one] with 9
upvotes, where users specifically request blank lines in all block
types, or at least including conditionals and loops. I'm sympathetic to
that case as well, even if personally I don't find an [example] like
this:
```py
if blah:
# Do some stuff that is logically related
data = get_data()
# Do some different stuff that is logically related
results = calculate_results()
return results
```
to be much more readable than:
```py
if blah:
# Do some stuff that is logically related
data = get_data()
# Do some different stuff that is logically related
results = calculate_results()
return results
```
I'm probably just used to the latter from the formatters I've used, but
I do prefer it. I also think that functions are the least susceptible to
the accidental introduction of a newline after refactoring described in
Micha's [comment] on #8893.
I actually considered further restricting this change to functions with
multiline headers. I don't think very short functions like:
```py
def foo():
return 1
```
benefit nearly as much from the allowed newline, but I just went with
any function without a docstring for now. I guess a marginal case like:
```py
def foo(a_long_parameter: ALongType, b_long_parameter: BLongType) -> CLongType:
return 1
```
might be a good argument for not restricting it.
I caused a couple of syntax errors before adding special handling for
the ellipsis-only case, so I suspect that there are some other
interesting edge cases that may need to be handled better.
Test Plan
--
Existing tests, plus a few simple new ones. As noted above, I suspect
that we may need a few more for edge cases I haven't considered.
[this one]:
https://github.com/astral-sh/ruff/issues/9745#issuecomment-2876771400
[example]:
https://github.com/psf/black/issues/902#issuecomment-1562154809
[comment]:
https://github.com/astral-sh/ruff/issues/8893#issuecomment-1867259744
## Summary
Discussion with @ibraheemdev clarified that
https://github.com/astral-sh/ruff/pull/21168 was incorrect. In a case of
failed inference of a dict literal as a `TypedDict`, we should store the
context-less inferred type of the dict literal as the type of the dict
literal expression itself; the fallback to declared type should happen
at the level of the overall assignment definition.
The reason the latter isn't working yet is because currently we
(wrongly) consider a homogeneous dict type as assignable to a
`TypedDict`, so we don't actually consider the assignment itself as
failed. So the "bug" I observed (and tried to fix) will naturally be
fixed by implementing TypedDict assignability rules.
Rollback https://github.com/astral-sh/ruff/pull/21168 except for the
tests, and modify the tests to include TODOs as needed.
## Test Plan
Updated mdtests.
The parser currently uses single quotes to wrap tokens. This is
inconsistent with the rest of ruff/ty, which use backticks.
For example, see the inconsistent diagnostics produced in this simple
example: https://play.ty.dev/0a9d6eab-6599-4a1d-8e40-032091f7f50f
Consistently wrapping tokens in backticks produces uniform diagnostics.
Following the style decision of #723, in #2889 some quotes were already
switched into backticks.
This is also in line with Rust's guide on diagnostics
(https://rustc-dev-guide.rust-lang.org/diagnostics.html#diagnostic-structure):
> When code or an identifier must appear in a message or label, it
should be surrounded with backticks
## Summary
In general, when we have an invalid assignment (inferred assigned type
is not assignable to declared type), we fall back to inferring the
declared type, since the declared type is a more explicit declaration of
the programmer's intent. This also maintains the invariant that our
inferred type for a name is always assignable to the declared type for
that same name. For example:
```py
x: str = 1
reveal_type(x) # revealed: str
```
We weren't following this pattern for dictionary literals inferred (via
type context) as a typed dictionary; if the literal was not valid for
the annotated TypedDict type, we would just fall back to the normal
inferred type of the dict literal, effectively ignoring the annotation,
and resulting in inferred type not assignable to declared type.
## Test Plan
Added mdtest assertions.
## Summary
The solver is currently order-dependent, and will choose a supertype
over the exact type if it appears earlier in the list of constraints. We
could be smarter and try to choose the most precise subtype, but I
imagine this is something the new constraint solver will fix anyways,
and this fixes the issue showing up on
https://github.com/astral-sh/ruff/pull/21070.
This PR adds a new `satisfied_by_all_typevar` method, which implements
one of the final steps of actually using these dang constraint sets.
Constraint sets exist to help us check assignability and subtyping of
types in the presence of typevars. We construct a constraint set
describing the conditions under which assignability holds between the
two types. Then we check whether that constraint set is satisfied for
the valid specializations of the relevant typevars (which is this new
method).
We also add a new `ty_extensions.ConstraintSet` method so that we can
test this method's behavior in mdtests, before hooking it up to the rest
of the specialization inference machinery.
## Summary
We currently perform a subtyping check instead of the intended subclass
check (and the subtyping check is confusingly named `is_subclass_of`).
This showed up in https://github.com/astral-sh/ruff/pull/21070.
## Summary
Before this PR, we would emit diagnostics like "Invalid key access" for
a TypedDict literal with invalid key, which doesn't make sense since
there's no "access" in that case. This PR just adjusts the wording to be
more general, and adjusts the documentation of the lint rule too.
I noticed this in the playground and thought it would be a quick fix. As
usual, it turned out to be a bit more subtle than I expected, but for
now I chose to punt on the complexity. We may ultimately want to have
different rules for invalid subscript vs invalid TypedDict literal,
because an invalid key in a TypedDict literal is low severity: it's a
typo detector, but not actually a type error. But then there's another
wrinkle there: if the TypedDict is `closed=True`, then it _is_ a type
error. So would we want to separate the open and closed cases into
separate rules, too? I decided to leave this as a question for future.
If we wanted to use separate rules, or use specific wording for each
case instead of the generalized wording I chose here, that would also
involve a bit of extra work to distinguish the cases, since we use a
generic set of functions for reporting these errors.
## Test Plan
Added and updated mdtests.
This is a second take at the implicit imports approach, allowing `from .
import submodule` in an `__init__.pyi` to create the
`mypackage.submodule` attribute everywhere.
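In other words (hypothetical package names):
```py
# mypackage/__init__.pyi
from . import submodule

# anywhere else
import mypackage

mypackage.submodule  # the attribute is now understood everywhere
```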
This implementation operates inside of the
available_submodule_attributes subsystem instead of as a re-export rule.
The upside of this is we are no longer purely syntactic, and absolute
from imports that happen to target submodules work (an intentional
discussed deviation from pyright which demands a relative from import).
Also we don't re-export functions or classes.
The downside(?) of this is star imports no longer see these attributes
(this may be either good or bad. I believe it's not a huge lift to make
it work with star imports but it's some non-trivial reworking).
I've also intentionally made `import mypackage.submodule` not trigger
this rule although it's trivial to change that.
I've tried to cover as many relevant cases as possible for discussion in
the new test file I've added (there are some random overlaps with
existing tests but trying to add them piecemeal felt confusing and
weird, so I just made a dedicated file for this extension to the rules).
Fixes https://github.com/astral-sh/ty/issues/133
## Summary
Fixes https://github.com/astral-sh/ty/issues/1427
This PR fixes a regression introduced in alpha.24 where non-dataclass
children of generic dataclasses lost generic type parameter information
during `__init__` synthesis.
The issue occurred because when looking up inherited members in the MRO,
the child class's `inherited_generic_context` was correctly passed down,
but `own_synthesized_member()` (which synthesizes dataclass `__init__`
methods) didn't accept this parameter. It only used
`self.inherited_generic_context(db)`, which returned the parent's
context instead of the child's.
The fix threads the child's generic context through to the synthesis
logic, allowing proper generic type inference for inherited dataclass
constructors.
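A sketch loosely based on the description (the actual repro lives in the linked issue):
```py
from dataclasses import dataclass

@dataclass
class Pair[T]:
    first: T
    second: T

class IntPair(Pair[int]):  # a non-dataclass child of a generic dataclass
    pass

# The synthesized `__init__` inherited from `Pair` should see `T = int` here;
# the regression lost that generic context during synthesis.
IntPair(first=1, second=2)
```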
## Test Plan
- Added regression test for non-dataclass inheriting from generic
dataclass
- Verified the exact repro case from the issue now works
- All 277 mdtest tests passing
- Clippy clean
- Manually verified with Python runtime, mypy, and pyright - all accept
this code pattern
## Verification
Tested against multiple type checkers:
- ✅ Python runtime: Code works correctly
- ✅ mypy: No issues found
- ✅ pyright: 0 errors, 0 warnings
- ✅ ty alpha.23: Worked (before regression)
- ❌ ty alpha.24: Regression
- ✅ ty with this fix: Works correctly
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: David Peter <mail@david-peter.de>
It's possible for a constraint to mention two typevars. For instance, in
the body of
```py
def f[S: int, T: S](): ...
```
the baseline constraint set would be `(T ≤ S) ∧ (S ≤ int)`. That is, `S`
must specialize to some subtype of `int`, and `T` must specialize to a
subtype of the type that `S` specializes to.
This PR updates the new "constraint implication" relationship from
#21010 to work on these kinds of constraint sets. For instance, in the
example above, we should be able to see that `T ≤ int` must always hold:
```py
def f[S, T]():
constraints = ConstraintSet.range(Never, S, int) & ConstraintSet.range(Never, T, S)
static_assert(constraints.implies_subtype_of(T, int)) # now succeeds!
```
This did not require major changes to the implementation of
`implies_subtype_of`. That method already relies on how our `simplify`
and `domain` methods expand a constraint set to include the transitive
closure of the constraints that it mentions, and to mark certain
combinations of constraints as impossible. Previously, that transitive
closure logic only looked at pairs of constraints that constrain the
same typevar. (For instance, to notice that `(T ≤ bool) ∧ ¬(T ≤ int)` is
impossible.)
Now we also look at pairs of constraints that constrain different
typevars, if one of the constraints is bound by the other — that is,
pairs of the form `T ≤ S` and `S ≤ something`, or `S ≤ T` and `something
≤ S`. In those cases, transitivity lets us add a new derived constraint
that `T ≤ something` or `something ≤ T`, respectively. Having done that,
our existing `implies_subtype_of` logic finds and takes into account
that derived constraint.
Summary
--
Fixes#21121 by upgrading `RuntimeEvaluated` annotations like `dataclasses.KW_ONLY` to `RuntimeRequired`. We already had special handling for `TypingOnly` annotations in this context but not `RuntimeEvaluated`. Combining that with the `future-annotations` setting, which allowed ignoring the `RuntimeEvaluated` flag, led to the reported bug where we would try to move `KW_ONLY` into a `TYPE_CHECKING` block.
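The shape of the reported case, roughly:
```py
from __future__ import annotations

from dataclasses import KW_ONLY, dataclass

@dataclass
class Config:
    x: int
    _: KW_ONLY   # `@dataclass` resolves this annotation at runtime, so the import
    y: int = 0   # must not be moved into a `TYPE_CHECKING` block
```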
Test Plan
--
A new test based on the issue
## Summary
We weren't correctly modeling it as a `staticmethod` in all cases,
leading us to incorrectly infer that the `cls` argument would be bound
if it was accessed on an instance (rather than the class object).
## Test Plan
Added mdtests that fail on `main`. The primer output also looks good!
## Summary
Fixes#21101 by storing the child visitor's names in the parent visitor.
This makes sure that `visitor.names` on line 1818 isn't empty after we
visit a nested OR pattern.
## Test Plan
New inline test cases derived from the issue,
[playground](https://play.ruff.rs/7b6439ac-ee8f-4593-9a3e-c2aa34a595d0)
## Summary
Adds proper type narrowing and reachability analysis for matching on
non-inferable type variables bound to enums. For example:
```py
from enum import Enum
class Answer(Enum):
NO = 0
YES = 1
def is_yes(self) -> bool: # no error here!
match self:
case Answer.YES:
return True
case Answer.NO:
return False
```
closes https://github.com/astral-sh/ty/issues/1404
## Test Plan
Added regression tests
## Summary
We previously didn't understand `range` and wrote these custom
`IntIterable`/`IntIterator` classes for tests. We can now remove them
and make the tests shorter in some places.
## Summary
Infer a type of unannotated `self` parameters in decorated methods /
properties.
closes https://github.com/astral-sh/ty/issues/1448
## Test Plan
Existing tests, some new tests.
## Summary
Fixed the incorrect import example in the "correct example".
## Test Plan
🤷
Note that this doesn't change the evaluation results unfortunately.
In particular, prior to this fix, the correct result was ranked above
the redundant result. Our MRR-based evaluation doesn't care about
anything below the rank of the correct answer, and so this change isn't
reflected in our evaluation.
Fixes astral-sh/ty#1445
The status quo grew organically and didn't do well when one wanted to
mix and match different settings to generate a snapshot.
This does a small refactor to use more of a builder to generate
snapshots.
This fixes a bug where the `import module` part of a completion for
unimported candidates would be missing. This makes it especially
confusing because the user can't tell where the symbol is coming from,
and there is no hint that an `import` statement will be inserted.
Previously, we were using [`CompletionItemLabelDetails`] to render the
`import module` part of the suggestion. But this is only supported in
clients that support version 3.17 (or newer) of the LSP specification.
It turns out that this support isn't widespread yet. In particular,
Helix doesn't seem to support "label details."
To fix this, we take a [cue from rust-analyzer][rust-analyzer-details].
We detect if the client supports "label details," and if so, use it.
Otherwise, we push the `import module` text into the completion label
itself.
Fixes https://github.com/astral-sh/ruff/pull/20439#issuecomment-3313689568
[`CompletionItemLabelDetails`]: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#completionItemLabelDetails
[rust-analyzer-details]: 5d905576d4/crates/rust-analyzer/src/lsp/to_proto.rs (L391-L404)
This PR updates the mdtests that test how our generics solver interacts
with our new constraint set implementation. Because the rendering of a
constraint set can get long, this standardizes on putting the `revealed`
assertion on a separate line. We also add a `static_assert` test for
each constraint set to verify that they are all coerced into simple
`bool`s correctly.
This is a pure reformatting (not even a refactoring!) that changes no
behavior. I've pulled it out of #20093 to reduce the amount of effort
that will be required to review that PR.
We have several functions in `ty_extensions` for testing our constraint
set implementation. This PR refactors those functions so that they are
all methods of the `ConstraintSet` class, rather than being standalone
top-level functions. 🎩 to @sharkdp for pointing out that
`KnownBoundMethod` gives us what we need to implement that!
This PR adds the new **_constraint implication_** relationship between
types, aka `is_subtype_of_given`, which tests whether one type is a
subtype of another _assuming that the constraints in a particular
constraint set hold_.
For concrete types, constraint implication is exactly the same as
subtyping. (A concrete type is any fully static type that is not a
typevar. It can _contain_ a typevar, though — `list[T]` is considered
concrete.)
The interesting case is typevars. The other typing relationships (TODO:
will) all "punt" on the question when considering a typevar, by
translating the desired relationship into a constraint set. At some
point, though, we need to resolve a constraint set; at that point, we
can no longer punt on the question. Unlike with concrete types, the
answer will depend on the constraint set that we are considering.
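A rough single-typevar sketch of how this is exercised in mdtests (assuming the `ConstraintSet.implies_subtype_of` spelling used for this relationship):
```py
from typing import Never

from ty_extensions import ConstraintSet, static_assert

def f[T]():
    constraints = ConstraintSet.range(Never, T, int)
    # Given the constraint `T ≤ int`, constraint implication concludes `T ≤ int`.
    static_assert(constraints.implies_subtype_of(T, int))
```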
## Summary
This PR refactors semantic error tests into separate files
## Test Plan
## CC
- @ntBre
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Summary
--
Fixes#19550
This PR copies our non-watch diagnostic rendering code into
`Printer::write_continuously` in preview mode, allowing it to use
whatever output format is passed in.
I initially marked this as also fixing #19552, but I guess that's not
true currently but will be true once this is stabilized and we can
remove the warning.
Test Plan
--
Existing tests, but I don't think we have any `watch` tests, so some
manual testing as well. The default with just `ruff check --watch` is
still `concise`, adding just `--preview` still gives the `full` output,
and then specifying any other output format works, with JSON as one
example:
<img width="695" height="719" alt="Screenshot 2025-10-27 at 9 21 41 AM"
src="https://github.com/user-attachments/assets/98957911-d216-4fc4-8b6c-22c56c963b3f"
/>
When formatting clause headers for clauses that are not their own node,
like an `else` clause or `finally` clause, we begin searching for the
keyword at the end of the previous statement. However, if the previous
statement ended in a semicolon this caused a panic because we only
expected trivia between the end of the last statement and the keyword.
This PR adjusts the starting point of our search for the keyword to
begin after the optional semicolon in these cases.
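A hypothetical minimal input in the spirit of the fix (the exact repro is in the linked issue):
```py
# The statement before the `finally` keyword ends in a semicolon, which previously
# tripped the keyword search when formatting the clause header.
try:
    x = 1;
finally:
    pass
```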
Closes#21065
## Summary
Add docstring sections which were missing from the numpy list as pointed
out here #20923. For now these are only the official sections as
documented
[here](https://numpydoc.readthedocs.io/en/latest/format.html#sections).
## Test Plan
Added a test case for DOC102
## Summary
Fixes#20973 (`docstring-extraneous-exception`) false positive when
exceptions mentioned in docstrings are caught and explicitly re-raised
using `raise e` or `raise e from None`.
## Problem Analysis
The DOC502 rule was incorrectly flagging exceptions mentioned in
docstrings as "not explicitly raised" when they were actually being
explicitly re-raised through exception variables bound in `except`
clauses.
**Root Cause**: The `BodyVisitor` in `check_docstring.rs` only checked
for direct exception references (like `raise OSError()`) but didn't
recognize when a variable bound to an exception in an `except` clause
was being re-raised.
**Example of the bug**:
```python
def f():
"""Do nothing.
Raises
------
OSError
If the OS errors.
"""
try:
pass
except OSError as e:
raise e # This was incorrectly flagged as not explicitly raising OSError
```
The issue occurred because `resolve_qualified_name(e)` couldn't resolve
the variable `e` to a qualified exception name, since `e` is just a
variable binding, not a direct reference to an exception class.
## Approach
Modified the `BodyVisitor` in
`crates/ruff_linter/src/rules/pydoclint/rules/check_docstring.rs` to:
1. **Track exception variable bindings**: Added `exception_variables`
field to map exception variable names to their exception types within
`except` clauses
2. **Enhanced raise statement detection**: Updated `visit_stmt` to check
if a `raise` statement uses a variable name that's bound to an exception
in the current `except` clause
3. **Proper scope management**: Clear exception variable mappings when
leaving `except` handlers to prevent cross-contamination
**Key changes**:
- Added `exception_variables: FxHashMap<&'a str, QualifiedName<'a>>` to
track variable-to-exception mappings
- Enhanced `visit_except_handler` to store exception variable bindings
when entering `except` clauses
- Modified `visit_stmt` to check for variable-based re-raising: `raise
e` → lookup `e` in `exception_variables`
- Clear mappings when exiting `except` handlers to maintain proper scope
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
That PR title might be a bit inscrutable.
Consider the two constraints `T ≤ bool` and `T ≤ int`. Since `bool ≤
int`, by transitivity `T ≤ bool` implies `T ≤ int`. (Every type that is
a subtype of `bool` is necessarily also a subtype of `int`.) That means
that `T ≤ bool ∧ T ≰ int` is an impossible combination of constraints,
and is therefore not a valid input to any BDD. We say that that
assignment is not in the _domain_ of the BDD.
The implication `T ≤ bool → T ≤ int` can be rewritten as `T ≰ bool ∨ T ≤
int`. (That's the definition of implication.) If we construct that
constraint set in an mdtest, we should get a constraint set that is
always satisfiable. Previously, that constraint set would correctly
_display_ as `always`, but a `static_assert` on it would fail.
The underlying cause is that our `is_always_satisfied` method would only
test if the BDD was the `AlwaysTrue` terminal node. `T ≰ bool ∨ T ≤ int`
does not simplify that far, because we purposefully keep around those
constraints in the BDD structure so that it's easier to compare against
other BDDs that reference those constraints.
To fix this, we need a more nuanced definition of "always satisfied".
Instead of evaluating to `true` for _every_ input, we only need it to
evaluate to `true` for every _valid_ input — that is, every input in its
domain.
## Summary
implement pylint rule stop-iteration-return / R1708
## Test Plan
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
* Extend the `airflow.models.Param` check to cover the `airflow.models.param.Param` case, and include both `airflow.models.param.ParamDict` and `airflow.models.param.DagParam` as well as their `airflow.models.` counterparts
## Test Plan
update the test fixtures accordingly and reorganize them in the third commit
Summary
--
Inspired by #20859, this PR adds the version a rule was added, and the
file and line where it was defined, to `ViolationMetadata`. The file and
line just use the standard `file!` and `line!` macros, while the more
interesting version field uses a new `violation_metadata` attribute
parsed by our `ViolationMetadata` derive macro.
I moved the commit modifying all of the rule files to the end, so it
should be a lot easier to review by omitting that one.
As a curiosity and a bit of a sanity check, I also plotted the rule
numbers over time:
<img width="640" height="480" alt="image"
src="https://github.com/user-attachments/assets/75b0b5cc-3521-4d40-a395-8807e6f4925f"
/>
I think this looks pretty reasonable and avoids some of the artifacts
the earlier versions of the script ran into, such as the `rule`
sub-command not being available or `--explain` requiring a file
argument.
<details><summary>Script and summary data</summary>
```shell
gawk --csv '
NR > 1 {
split($2, a, ".")
major = a[1]; minor = a[2]; micro = a[3]
# sum the number of rules added per minor version
versions[minor] += 1
}
END {
tot = 0
for (i = 0; i <= 14; i++) {
tot += versions[i]
print i, tot
}
}
' ruff_rules_metadata.csv > summary.dat
```
```
0 696
1 768
2 778
3 803
4 822
5 848
6 855
7 865
8 893
9 915
10 916
11 924
12 929
13 932
14 933
```
</details>
Test Plan
--
I built and viewed the documentation locally, and it looks pretty good!
<img width="1466" height="676" alt="image"
src="https://github.com/user-attachments/assets/5e227df4-7294-4d12-bdaa-31cac4e9ad5c"
/>
The spacing seems a bit awkward following the `h1` at the top, so I'm
wondering if this might look nicer as a footer in Ruff. The links work
well too:
- [v0.0.271](https://github.com/astral-sh/ruff/releases/tag/v0.0.271)
- [Related
issues](https://github.com/astral-sh/ruff/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20airflow-variable-name-task-id-mismatch)
- [View
source](https://github.com/astral-sh/ruff/blob/main/crates%2Fruff_linter%2Fsrc%2Frules%2Fairflow%2Frules%2Ftask_variable_name.rs#L34)
The last one even works on `main` now since it points to the
`derive(ViolationMetadata)` line.
In terms of binary size, this branch is a bit bigger than main with
38,654,520 bytes compared to 38,635,728 (+20 KB). I guess that's not
_too_ much of an increase, but I wanted to check since we're generating
a lot more code with macros.
---------
Co-authored-by: GiGaGon <107241144+MeGaGiGaGon@users.noreply.github.com>
## Summary
Infer a type of `Self` for unannotated `self` parameters in methods of
classes.
part of https://github.com/astral-sh/ty/issues/159
closes https://github.com/astral-sh/ty/issues/1081
## Conformance tests changes
```diff
+enums_member_values.py:85:9: error[invalid-assignment] Object of type `int` is not assignable to attribute `_value_` of type `str`
```
A true positive ✔️
```diff
-generics_self_advanced.py:35:9: error[type-assertion-failure] Argument does not have asserted type `Self@method2`
-generics_self_basic.py:14:9: error[type-assertion-failure] Argument does not have asserted type `Self@set_scale`
```
Two false positives going away ✔️
```diff
+generics_syntax_infer_variance.py:82:9: error[invalid-assignment] Cannot assign to final attribute `x` on type `Self@__init__`
```
This looks like a true positive to me, even if it's not marked with `#
E` ✔️
```diff
+protocols_explicit.py:56:9: error[invalid-assignment] Object of type `tuple[int, int, str]` is not assignable to attribute `rgb` of type `tuple[int, int, int]`
```
True positive ✔️
```diff
+protocols_explicit.py:85:9: error[invalid-attribute-access] Cannot assign to ClassVar `cm1` from an instance of type `Self@__init__`
```
This looks like a true positive to me, even if it's not marked with `#
E`. But this is consistent with our understanding of `ClassVar`, I
think. ✔️
```diff
+qualifiers_final_annotation.py:52:9: error[invalid-assignment] Cannot assign to final attribute `ID4` on type `Self@__init__`
+qualifiers_final_annotation.py:65:9: error[invalid-assignment] Cannot assign to final attribute `ID7` on type `Self@method1`
```
New true positives ✔️
```diff
+qualifiers_final_annotation.py:52:9: error[invalid-assignment] Cannot assign to final attribute `ID4` on type `Self@__init__`
+qualifiers_final_annotation.py:57:13: error[invalid-assignment] Cannot assign to final attribute `ID6` on type `Self@__init__`
+qualifiers_final_annotation.py:59:13: error[invalid-assignment] Cannot assign to final attribute `ID6` on type `Self@__init__`
```
This is a new false positive, but that's a pre-existing issue on main
(if you annotate with `Self`):
https://play.ty.dev/3ee1c56d-7e13-43bb-811a-7a81e236e6ab ❌ (reported
as https://github.com/astral-sh/ty/issues/1409)
## Ecosystem
* There are 5931 new `unresolved-attribute` and 3292 new
`possibly-missing-attribute` attribute errors, way too many to look at
all of them. I randomly sampled 15 of these errors and found:
* 13 instances where there was simply no such attribute that we could
plausibly see. Sometimes [I didn't find it
anywhere](8644d886c6/openlibrary/plugins/openlibrary/tests/test_listapi.py (L33)).
Sometimes it was set externally on the object. Sometimes there was some
[`setattr` dynamicness going
on](a49f6b927d/setuptools/wheel.py (L88-L94)).
I would consider all of them to be true positives.
* 1 instance where [the attribute was set on `obj` in
`__new__`](9e87b44fd4/sympy/tensor/array/array_comprehension.py (L45C1-L45C36)),
which we don't support yet
* 1 instance [where the attribute was defined via `__slots__`
](e250ec0fc8/lib/spack/spack/vendor/pyrsistent/_pdeque.py (L48C5-L48C14))
* I see 44 instances [of the false positive
above](https://github.com/astral-sh/ty/issues/1409) with `Final`
instance attributes being set in `__init__`. I don't think this should
block this PR.
## Test Plan
New Markdown tests.
---------
Co-authored-by: Shaygan Hooshyari <sh.hooshyari@gmail.com>
## Summary
Fixes #21017
Taught UP032’s parenthesize check to ignore underscores when inspecting
decimal integer literals so the converter emits `f"{(1_2).real}"`
instead of invalid syntax.
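For example (a hypothetical input along the lines of the new fixture cases):
```py
# Before the fix, the autofix produced `f"{1_2.real}"`, which is a syntax error
# because the integer literal needs parentheses before the attribute access.
print("{.real}".format(1_2))
# Expected fixed output:
print(f"{(1_2).real}")
```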
## Test Plan
Added test cases to UP032_2.py.
This PR adds another useful simplification when rendering constraint
sets: `T = int` instead of `T = int ∧ T ≠ str`. (The "smaller"
constraint `T = int` implies the "larger" constraint `T ≠ str`.
Constraint set clauses are intersections, and if one constraint in a
clause implies another, we can throw away the "larger" constraint.)
While we're here, we also normalize the bounds of a constraint, so that
we equate e.g. `T ≤ int | str` with `T ≤ str | int`, and change the
ordering of BDD variables so that all constraints with the same typevar
are ordered adjacent to each other.
Lastly, we also add a new `display_graph` helper method that prints out
the full graph structure of a BDD.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Fall back to `C[Divergent]` if we are trying to specialize `C[T]` with a
type that itself already contains deeply nested specialized generic
classes. This is a way to prevent infinite recursion for cases like
`self.x = [self.x]` where type inference for the implicit instance
attribute would not converge.
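A minimal sketch of the kind of code this guards against (illustrative only):
```py
class C:
    def __init__(self) -> None:
        self.x = None

    def grow(self) -> None:
        # Each assignment nests the inferred type one level deeper
        # (None, list[None], list[list[None]], ...), so inference of the
        # implicit `x` attribute would not converge without the Divergent fallback.
        self.x = [self.x]
```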
closes https://github.com/astral-sh/ty/issues/1383
closes https://github.com/astral-sh/ty/issues/837
## Test Plan
Regression tests.
## Summary
Fixes #18778
Prevent SIM911 from triggering when `zip()` is called on `.keys()`/`.values()` calls that take any positional or keyword arguments, so Ruff never suggests the lossy rewrite.
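A hedged sketch of the distinction (the `MultiDict`-style class and its keyword argument are hypothetical):
```py
class MultiDict(dict):
    # Hypothetical mapping whose view methods accept an argument
    def keys(self, multi=False):
        return super().keys()

    def values(self, multi=False):
        return super().values()

d = {"a": 1}
zip(d.keys(), d.values())  # SIM911 still fires: rewriting to `d.items()` is safe

m = MultiDict(a=1)
# No longer flagged: rewriting to `m.items()` would silently drop the arguments.
zip(m.keys(multi=True), m.values(multi=True))
```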
## Test Plan
Added a test case to SIM911.py.
## Summary
I spun this out from #21005 because I thought it might be helpful
separately. It just renders a nice `Diagnostic` for syntax errors
pointing to the source of the error. This seemed a bit more helpful to
me than just the byte offset when working on #21005, and we had most of
the code around after #20443 anyway.
## Test Plan
This doesn't actually affect any passing tests, but here's an example of
the additional output I got when I broke the spacing after the `in`
token:
```
error[internal-error]: Expected 'in', found name
--> /home/brent/astral/ruff/crates/ruff_python_formatter/resources/test/fixtures/black/cases/cantfit.py:50:79
|
48 | need_more_to_make_the_line_long_enough,
49 | )
50 | del ([], name_1, name_2), [(), [], name_4, name_3], name_1[[name_2 for name_1 inname_0]]
| ^^^^^^^^
51 | del ()
|
```
I just appended this to the other existing output for now.
This is an alternative to #21012 that more narrowly handles this logic
in the stub-mapping machinery rather than pervasively allowing us to
identify cached files as typeshed stubs. Much of the logic is the same
(pulling the logic out of ty_server so it can be reused).
I don't have a good sense for if one approach is "better" or "worse" in
terms of like, semantics and Weird Bugs that this can cause. This one is
just "less spooky in its broad consequences" and "less muddying of
separation of concerns" and puts the extra logic on a much colder path.
I won't be surprised if one day the previous implementation needs to be
revisited for its more sweeping effects but for now this is good.
Fixes https://github.com/astral-sh/ty/issues/1054
## Summary
We currently panic in the seemingly rare case where the type of a
default value of a parameter depends on the callable itself:
```py
class C:
def f(self: C):
self.x = lambda a=self.x: a
```
Types of default values are only used for display reasons, and it's
unclear if we even want to track them (or if we should rather track the
actual value). So it didn't seem to me that we should spend a lot of
effort (and runtime) trying to achieve a theoretically correct type here
(which would be infinite).
Instead, we simply replace *nested* default types with `Unknown`, i.e.
only if the type of the default value is a callable itself.
closes https://github.com/astral-sh/ty/issues/1402
## Test Plan
Regression tests
## Summary
This PR implements a new semantic syntax error for when a name is both a parameter and declared `global`.
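The error being detected, as CPython reports it:
```py
def f(x):
    global x  # SyntaxError: name 'x' is parameter and global
```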
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Only run the "pull types" test after performing the "actual" mdtest. We
observed that the order matters. There is currently one mdtest which
panics when checked in the CLI or the playground. With this change, it
also panics in the mdtest suite.
reopens https://github.com/astral-sh/ty/issues/837?
## Summary
Implement handling of ellipsis (`...`) defaults in the `FAST002` autofix
to correctly differentiate between required and optional parameters in
FastAPI route definitions.
Previously, the autofix did not properly handle cases where parameters
used `...` as a default value (to indicate required parameters). This
could lead to incorrect transformations when applying the autofix.
This change updates the `FAST002` autofix logic to:
- Correctly recognize `...` as a valid FastAPI required default.
- Preserve the semantics of required parameters while still applying
other autofix improvements.
- Avoid incorrectly substituting or removing ellipsis defaults.
Fixes https://github.com/astral-sh/ruff/issues/20800
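As a rough sketch of the behavior described above (route and parameter names are made up):
```py
from fastapi import FastAPI, Query

app = FastAPI()

# `...` as the default marks `q` as required; the FAST002 autofix must not
# turn this into an optional parameter when rewriting to `Annotated`.
@app.get("/items/")
async def read_items(q: str = Query(...)) -> dict[str, str]:
    return {"q": q}
```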
## Test Plan
Added a new test fixture at:
`crates/ruff_linter/resources/test/fixtures/fastapi/FAST002_2.py`
## Summary
Fixes #20941
Skip the autofix for path parameters that are Python keywords or `__debug__`.
## Test Plan
I added two test cases to
`crates/ruff_linter/resources/test/fixtures/fastapi/FAST003.py`.
Closes #20997
This will _decrease_ the number of diagnostics emitted for
[zip-without-explicit-strict
(B905)](https://docs.astral.sh/ruff/rules/zip-without-explicit-strict/#zip-without-explicit-strict-b905),
since previously it triggered on any `zip` call no matter the number of
arguments. It may _increase_ the number of diagnostics for
[map-without-explicit-strict
(B912)](https://docs.astral.sh/ruff/rules/map-without-explicit-strict/#map-without-explicit-strict-b912)
since it will now trigger on a single starred argument where before it
would not. However, the latter rule is in `preview` so this is
acceptable.
Note - we do not need to make any changes to
[batched-without-explicit-strict
(B911)](https://docs.astral.sh/ruff/rules/batched-without-explicit-strict/#batched-without-explicit-strict-b911)
since that just takes a single iterable.
I am doing this in one PR rather than two because we should keep the
behavior of these rules consistent with one another.
For review: apologies for the unreadability of the snapshot for `B905`.
Unfortunately I saw no way of keeping a small diff and a correct fixture
(the fixture labeled a whole block as `# Error` whereas now several in
the block became `# Ok`). Probably simplest to just view the actual
snapshot - it's relatively small.
## Summary
Make rules `INT001`, `INT002`, and `INT003` also
* trigger on qualified names when we're sure the calls are calls to the
`gettext` module. For example
```python
from gettext import gettext as foo
foo(f"{'bar'}")  # very certain that this is a call to a real `gettext` function => worth linting
```
* trigger on `builtins` bindings
```python
import builtins, gettext
gettext.install("...") # binds `gettext.gettext` to `builtins._`
builtins.__dict__["_"] = ... # also a common pattern
_(f"{'bar'}") # should therefore also be linted
```
Fixes: https://github.com/astral-sh/ruff/issues/19028
## Test Plan
Tests have been added to all three rules.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Summary
--
This PR fixes the issue I added in #20867 and noticed in #20930. Cases
like this
cause an error on any Python version:
```py
f"{1:""}"
```
which gave me a false sense of security before. Cases like this are
still
invalid only before 3.12 and weren't flagged after the changes in
#20867:
```py
f'{1: abcd "{'aa'}" }'
# ^ reused quote
f'{1: abcd "{"\n"}" }'
# ^ backslash
```
I didn't recognize these as nested interpolations that also need to be
checked
for invalid expressions, so filtering out the whole format spec wasn't
quite
right. And `elements.interpolations()` only iterates over the outermost
interpolations, not the nested ones.
There's basically no code change in this PR; I just moved the existing
check
from `parse_interpolated_string`, which parses the entire string, to
`parse_interpolated_element`. This kind of seems more natural anyway and
avoids
having to try to recursively visit nested elements after the fact in
`parse_interpolated_string`. So viewing the diff with something like
```
git diff --color-moved --ignore-space-change --color-moved-ws=allow-indentation-change main
```
should make this more clear.
Test Plan
--
New tests
## Summary
- Type checkers (and type-checker authors) think in terms of types, but
I think most Python users think in terms of values. Rather than saying
that a _type_ `X` "has no attribute `foo`" (which I think sounds strange
to many users), say that "an object of type `X` has no attribute `foo`"
- Special-case certain types so that the diagnostic messages read more
like normal English: rather than saying "Type `<class 'Foo'>` has no
attribute `bar`" or "Object of type `<class 'Foo'>` has no attribute
`bar`", just say "Class `Foo` has no attribute `bar`"
## Test Plan
Mdtests and snapshots updated
## Summary
This PR implements a semantic syntax error for `match` statements where alternative patterns bind different names.
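The error in question, as CPython reports it:
```py
command = [1, 2]

match command:
    case [x] | [x, y]:  # SyntaxError: alternative patterns bind different names
        print(x)
```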
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Derived from #20900
Implement `VarianceInferable` for `KnownInstanceType` (especially for
`KnownInstanceType::TypeAliasType`).
The variance of a type alias matches its value type. In normal usage,
type aliases are expanded to value types, so the variance of a type
alias can be obtained without implementing this. However, for example,
if we want to display the variance when hovering over a type alias, we
need to be able to obtain the variance of the type alias itself (cf.
#20900).
## Test Plan
I couldn't come up with a way to test this in mdtest, so I'm testing it
in a test submodule at the end of `types.rs`.
I also added a test to `mdtest/generics/pep695/variance.md`, but it
passes without the changes in this PR.
## Summary
Fixes #20774 by tracking whether an `InterpolatedStringState` element is
nested inside of another interpolated element. This feels like kind of a
naive fix, so I'm welcome to other ideas. But it resolves the problem in
the issue and clears up the syntax error in the black compatibility
test, without affecting many other cases.
The other affected case is actually interesting too because the
[input](96b156303b/crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/fstring.py (L707))
is invalid, but the previous quote selection fixed the invalid syntax:
```pycon
Python 3.11.13 (main, Sep 2 2025, 14:20:25) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f'{1: abcd "{'aa'}" }' # input
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
>>> f'{1: abcd "{"aa"}" }' # old output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier ' abcd "aa" ' for object of type 'int'
>>> f'{1: abcd "{'aa'}" }' # new output
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
```
We now preserve the invalid syntax in the input.
Unfortunately, this also seems to be another edge case I didn't consider
in https://github.com/astral-sh/ruff/pull/20867 because we don't flag
this as a syntax error after 0.14.1:
<details><summary>Shell output</summary>
<p>
```
> uvx ruff@0.14.0 check --ignore ALL --target-version py311 - <<EOF
f'{1: abcd "{'aa'}" }'
EOF
invalid-syntax: Cannot reuse outer quote character in f-strings on Python 3.11 (syntax was added in Python 3.12)
--> -:1:14
|
1 | f'{1: abcd "{'aa'}" }'
| ^
|
Found 1 error.
> uvx ruff@0.14.1 check --ignore ALL --target-version py311 - <<EOF
f'{1: abcd "{'aa'}" }'
EOF
All checks passed!
> uvx python@3.11 -m ast <<EOF
f'{1: abcd "{'aa'}" }'
EOF
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 1752, in <module>
main()
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 1748, in main
tree = parse(source, args.infile.name, args.mode, type_comments=args.no_type_comments)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brent/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/ast.py", line 50, in parse
return compile(source, filename, mode, flags,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<stdin>", line 1
f'{1: abcd "{'aa'}" }'
^^
SyntaxError: f-string: expecting '}'
```
</p>
</details>
I assumed that was the same `ParseError` as the one caused by
`f"{1:""}"`, but this is a nested interpolation inside of the format
spec.
## Test Plan
New test copied from the black compatibility test. I guess this is a
duplicate now; I started working on this branch before the new black
tests were imported, so I could delete the separate test in our fixtures
if that's preferable.
## Summary
Support `dataclass_transform` when used on a (base) class.
## Typing conformance
* The changes in `dataclasses_transform_class.py` look good, just a few
mistakes due to missing `alias` support.
* I didn't look closely at the changes in
`dataclasses_transform_converter.py` since we don't support `converter`
yet.
## Ecosystem impact
The impact looks huge, but it's concentrated on a single project (ibis).
Their setup looks more or less like this:
* the real `Annotatable`:
d7083c2c96/ibis/common/grounds.py (L100-L101)
* the real `DataType`:
d7083c2c96/ibis/expr/datatypes/core.py (L161-L179)
* the real `Array`:
d7083c2c96/ibis/expr/datatypes/core.py (L1003-L1006)
```py
from typing import dataclass_transform
@dataclass_transform()
class Annotatable:
pass
class DataType(Annotatable):
nullable: bool = True
class Array[T](DataType):
value_type: T
```
They expect something like `Array([1, 2])` to work, but ty, pyright,
mypy, and pyrefly would all expect there to be a first argument for the
`nullable` field on `DataType`. I don't really understand on what
grounds they expect the `nullable` field to be excluded from the
signature, but this seems to be the main reason for the new diagnostics
here. Not sure if related, but it looks like their typing setup is not
really complete
(https://github.com/ibis-project/ibis/issues/6844#issuecomment-1868274770,
this thread also mentions `dataclass_transform`).
## Test Plan
Update pre-existing tests.
Detect legacy namespace packages and treat them like namespace packages
when looking them up as the *parent* of the module we're interested in.
In all other cases treat them like a regular package.
(This PR is coauthored by @MichaReiser in a shared coding session)
Fixes https://github.com/astral-sh/ty/issues/838
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Prefer the declared type for collection literals, e.g.,
```py
x: list[Any] = [1, "2", (3,)]
reveal_type(x) # list[Any]
```
This solves a large part of https://github.com/astral-sh/ty/issues/136
for invariant generics, where respecting the declared type is a lot more
important. It also means that annotating dict literals with `dict[_, Any]` is a way out of https://github.com/astral-sh/ty/issues/1248.
We have to track whether a typevar appears in a position where it's
inferable or not. In a non-inferable position (in the body of the
generic class or function that binds it), assignability must hold for
every possible specialization of the typevar. In an inferable position,
it only needs to hold for _some_ specialization.
https://github.com/astral-sh/ruff/pull/20093 is working on using
constraint sets to model assignability of typevars, and the constraint
sets that we produce will be the same for inferable vs non-inferable
typevars; what changes is what we _compare_ that constraint set to. (For
a non-inferable typevar, the constraint set must equal the set of valid
specializations; for an inferable typevar, it must not be `never`.)
When I first added support for tracking inferable vs non-inferable
typevars, it seemed like it would be easiest to have separate `Type`
variants for each. The alternative (which lines up with the Δ set in
[POPL15](https://doi.org/10.1145/2676726.2676991)) would be to
explicitly plumb through a list of inferable typevars through our type
property methods. That seemed cumbersome.
In retrospect, that was the wrong decision. We've had to jump through
hoops to translate types between the inferable and non-inferable
variants, which has been quite brittle. Combined with the original point
above, that much of the assignability logic will become more identical
between inferable and non-inferable, there is less justification for the
two `Type` variants. And plumbing an extra `inferable` parameter through
all of these methods turns out to not be as bad as I anticipated.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Use the declared type of variables as type context for the RHS of assignment expressions, e.g.,
```py
x: list[int | str]
x = [1]
reveal_type(x) # revealed: list[int | str]
```
## Summary
Ignore the type context when specializing a generic call if it leads to
an unnecessarily wide return type. For example, [the example mentioned
here](https://github.com/astral-sh/ruff/pull/20796#issuecomment-3403319536)
works as expected after this change:
```py
def id[T](x: T) -> T:
return x
def _(i: int):
x: int | None = id(i)
y: int | None = i
reveal_type(x) # revealed: int
reveal_type(y) # revealed: int
```
I also extended our usage of `filter_disjoint_elements` to tuple
and typed-dict inference, which resolves
https://github.com/astral-sh/ty/issues/1266.
## Summary
Add support for the `field_specifiers` parameter on
`dataclass_transform` decorator calls.
closes https://github.com/astral-sh/ty/issues/1068
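For reference, a minimal sketch of the parameter being supported (names are illustrative; at runtime the specifier just delegates to `dataclasses.field`):
```py
import dataclasses
from typing import Any, dataclass_transform

def field(*, default: Any = None, init: bool = True) -> Any:
    # Illustrative field specifier
    return dataclasses.field(default=default, init=init)

@dataclass_transform(field_specifiers=(field,))
def my_dataclass(cls):
    return dataclasses.dataclass(cls)

@my_dataclass
class Point:
    x: int
    y: int = field(default=0, init=False)  # excluded from the synthesized __init__

print(Point(x=1))  # Point(x=1, y=0); `y` is not an __init__ parameter
```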
## Conformance test results
All true positives ✔️
## Ecosystem analysis
* `trio`: this is the kind of change that I would expect from this PR.
The code makes use of a dataclass `Outcome` with a `_unwrapped: bool =
attr.ib(default=False, eq=False, init=False)` field that is excluded
from the `__init__` signature, so we now see a bunch of
constructor-call-related errors going away.
* `home-assistant/core`: They have a `domain: str = attr.ib(init=False,
repr=False)` field and then use
```py
@domain.default
def _domain_default(self) -> str:
# …
```
This accesses the `default` attribute on `dataclasses.Field[…]` with a
type of `default: _T | Literal[_MISSING_TYPE.MISSING]`, so we get those
"Object of type `_MISSING_TYPE` is not callable" errors. I don't really
understand how that is supposed to work. Even if `_MISSING_TYPE` would
be absent from that union, what does this try to call? pyright also
issues an error and it doesn't seem to work at runtime? So this looks
like a true positive?
* `attrs`: Similar here. There are some new diagnostics on code that
tries to access `.validator` on a field. This *does* work at runtime,
but I'm not sure how that is supposed to type-check (without a [custom
plugin](2c6c395935/mypy/plugins/attrs.py (L575-L602))).
pyright errors on this as well.
* A handful of new false positives because we don't support `alias` yet
## Test Plan
Updated tests.
Summary
--
This PR unifies the two different ways Ruff and ty construct syntax
errors. Ruff has been storing the primary message in the diagnostic
itself, while ty attached the message to the primary annotation:
```
> ruff check try.py
invalid-syntax: name capture `x` makes remaining patterns unreachable
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^
3 | case y: ...
|
Found 1 error.
> uvx ty check try.py
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
Checking ------------------------------------------------------------ 1/1 files
error[invalid-syntax]
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^ name capture `x` makes remaining patterns unreachable
3 | case y: ...
|
Found 1 diagnostic
```
I think there are benefits to both approaches, and I do like ty's
version, but I feel like we should pick one (and it might help with
#20901 eventually). I slightly prefer Ruff's version, so I went with
that. Hopefully this isn't too controversial, but I'm happy to close
this if it is.
Note that this shouldn't change any other diagnostic formats in ty
because
[`Diagnostic::primary_message`](98d27c4128/crates/ruff_db/src/diagnostic/mod.rs (L177))
was already falling back to the primary annotation message if the
diagnostic message was empty. As a result, I think this change will
partially resolve the FIXME therein.
Test Plan
--
Existing tests with updated snapshots
## Summary
Implement `docstring-extraneous-parameter` (`DOC102`). This rule checks
that all parameters present in a function's docstring are also present in
its signature.
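A small example of what the rule flags (docstring format assumed to be Google style):
```py
def add(a: int, b: int) -> int:
    """Add two numbers.

    Args:
        a: The first addend.
        b: The second addend.
        c: Documented here but missing from the signature (DOC102 flags this).
    """
    return a + b
```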
Split from #13280, per this
[comment](https://github.com/astral-sh/ruff/pull/13280#issuecomment-3280575506).
Part of #12434.
## Test Plan
Test cases added.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
This is the ultra-minimal implementation of
* https://github.com/astral-sh/ty/issues/296
that was previously discussed as a good starting point. In particular we
don't actually bother trying to figure out the exact python versions,
but we still mention "hey btw for No Reason At All... you're on python
3.10" when you try to access something that has a definition rooted in
the stdlib that we believe exists sometimes.
This is a drive-by improvement that I stumbled backwards into while
looking into
* https://github.com/astral-sh/ty/issues/296
I was writing some simple tests for "thing not in old version of stdlib"
diagnostics and checked what was added in 3.14, and saw
`compression.zstd` and to my surprise discovered that `import
compression.zstd` and `from compression import zstd` had completely
different quality diagnostics.
This is because `compression` and `compression.zstd` were *both*
introduced in 3.14, and so per VERSIONS policy only an entry for
`compression` was added, and so we don't actually have any definite info
on `compression.zstd` and give up on producing a diagnostic. However the
`from compression import zstd` form fails on looking up `compression`
and we *do* have an exact match for that, so it gets a better
diagnostic!
(aside: I have now learned about the VERSIONS format and I *really* wish
they would just enumerate all the submodules but, oh well!)
The fix is, when handling an import failure, if we fail to find an exact
match *we requery with the parent module*. In cases like
`compression.zstd` this lets us at least identify that, hey, not even
`compression` exists, and luckily that fixes the whole issue. In cases
where the parent module and submodule were introduced at different times
then we may discover that the parent module is in-range and that's fine,
we don't produce the richer stdlib diagnostic.
## Summary
`dataclasses.field` and field-specifier functions of commonly used
libraries like `pydantic`, `attrs`, and `SQLAlchemy` all return the
default type for the field (or `Any`) instead of an actual `Field`
instance, even if this is not what happens at runtime. Let's make use of
this fact and assume that *all* field specifiers return the type of the
default value of the field.
For standard dataclasses, this leads to more or less the same outcome
(see test diff for details), but this change is important for 3rd party
dataclass-transformers.
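A hedged sketch of the assumption, using the standard library's `dataclasses.field` as the example:
```py
from dataclasses import dataclass, field

@dataclass
class C:
    # The stub declares `field(...)` as returning the type of the default,
    # not a `Field` instance, so under this assumption `tags` is seen as
    # `list[str]` in the class body rather than as a `Field` object.
    tags: list[str] = field(default_factory=list)
```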
## Test Plan
Tested the consequences of this change on the field-specifiers branch as
well.
## Summary
Resolves https://github.com/astral-sh/ty/issues/1349.
Fix match statement value patterns to use equality comparison semantics
instead of incorrectly narrowing to literal types directly. Value
patterns use equality for matching, and equality can be overridden, so
we can't always narrow to the matched literal.
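A small illustration of why narrowing to the literal is unsound (the classes are made up):
```py
import enum

class Color(enum.Enum):
    RED = 1

class AlwaysEqual:
    def __eq__(self, other: object) -> bool:
        return True

def handle(x: AlwaysEqual | Color) -> None:
    match x:
        case Color.RED:
            # The value pattern matches via `==`, and `AlwaysEqual() == Color.RED`
            # is True, so `x` here cannot always be narrowed to `Literal[Color.RED]`.
            print(x)
```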
## Test Plan
Updated match.md with corrected expected types and an additional example
with explanation
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This PR implements `F702`
https://docs.astral.sh/ruff/rules/continue-outside-loop/ as a semantic
syntax error.
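The error being promoted to a semantic syntax error:
```py
def f():
    continue  # F702 / SyntaxError: 'continue' not properly in loop
```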
## Test Plan
Tests were already written previously for F702.
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
## Summary
Part of astral-sh/ty#1341
The following changes will be made to `Place`.
* Introduce `TypeOrigin`
* `Place::Type` -> `Place::Defined`
* `Place::Unbound` -> `Place::Undefined`
* `Boundness` -> `Definedness`
`TypeOrigin::Declared`+`Definedness::PossiblyUndefined` are patterns
that weren't considered before, but this PR doesn't address them yet,
only refactors.
## Test Plan
Refactoring
## Summary
`airflow.datasets.DatasetEvent` has been removed in Airflow 3, but `AssetEvent` might be added in the future.
## Test Plan
Updated the test fixture and reorganized it in the second commit.
Summary
--
Fixes #20844 by refining the unsupported syntax error check for [PEP
701]
f-strings before Python 3.12 to allow backslash escapes and escaped
outer quotes
in the format spec part of f-strings. These are only disallowed within
the
f-string expression part on earlier versions. Using the examples from
the PR:
```pycon
>>> f"{1:\x64}"
'1'
>>> f"{1:\"d\"}"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier '"d"' for object of type 'int'
```
Note that the second case is a runtime error, but this is actually
avoidable if
you override `__format__`, so despite being pretty weird, this could
actually be
a valid use case.
```pycon
>>> class C:
... def __format__(*args, **kwargs): return "<C>"
...
>>> f"{C():\"d\"}"
'<C>'
```
At first I thought narrowing the range we check to exclude the format
spec would
only work for escapes, but it turns out that cases like `f"{1:""}"` are
already
covered by an existing `ParseError`, so we can just narrow the range of
both our
escape and quote checks.
Our comment check also seems to be working correctly because it's based
on the
actual tokens. A case like
[this](https://play.ruff.rs/9f1c2ff2-cd8e-4ad7-9f40-56c0a524209f):
```python
f"""{1:# }"""
```
doesn't include a comment token, instead the `#` is part of an
`InterpolatedStringLiteralElement`.
Test Plan
--
New inline parser tests
[PEP 701]: https://peps.python.org/pep-0701/
A large part of the diff on #20677 just involves threading a new
`inferable` parameter through all of the type property methods. In the
interests of making that PR easier to review, I've pulled that bit out
into here, so that it can be reviewed in isolation. This should be a
pure refactoring, with no logic changes or behavioral changes.
## Summary
Fixed a typo. It should be "or", not "of". Both `.pop()` and `next()` on
an empty collection will raise `IndexError`, not "`[0]` of the `pop()`
function"
## Test Plan
n/a
Summary
--
This PR implements the black preview style from
https://github.com/psf/black/pull/4720. As of Python 3.14, you're
allowed to omit the parentheses around groups of exceptions, as long as
there's no `as` binding:
**3.13**
```pycon
Python 3.13.4 (main, Jun 4 2025, 17:37:06) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException: ...
...
File "<python-input-1>", line 2
except Exception, BaseException: ...
^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized
```
**3.14**
```pycon
Python 3.14.0rc2 (main, Sep 2 2025, 14:20:56) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except Exception, BaseException: ...
...
Ellipsis
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException as e: ...
...
File "<python-input-2>", line 2
except Exception, BaseException as e: ...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized when using 'as'
```
I think this ended up being pretty straightforward, at least once Micha
showed me where to start :)
Test Plan
--
New tests
At first I thought we were deviating from black in how we handle
comments within the exception type tuple, but I think this applies to
how we format all tuples, not specifically with the new preview style.
Summary
--
```shell
git clone git@github.com:psf/black.git ../other/black
crates/ruff_python_formatter/resources/test/fixtures/import_black_tests.py ../other/black
```
Then ran our tests and accepted the snapshots
I had to make a small fix to our tuple normalization logic for `del`
statements
in the second commit, otherwise the tests were panicking at a changed
AST. I
think the new implementation is closer to the intention described in the
nearby
comment anyway, though.
The first commit adds the new Python, settings, and `.expect` files, the
next three commits make some small
fixes to help get the tests running, and then the fifth commit accepts
all but one of the new snapshots. The last commit includes the new
unsupported syntax error for one f-string example, tracked in #20774.
Test Plan
--
Newly imported tests. I went through all of the new snapshots and added
review comments below. I think they're all expected, except a few cases
I wasn't 100% sure about.
## Summary
If a function is decorated with a decorator that returns a union of
`Callable`s, also treat it as a union of function-like `Callable`s.
Labeling as `internal`, since the previous change has not been released
yet.
## Test Plan
New regression test.
## Summary
Rename "unwrapping" methods on `Type` from e.g.
`Type::into_class_literal` to `Type::as_class_literal`. I personally
find that name more intuitive, since no transformation of any kind is
happening. We are just unwrapping from certain enum variants. An
alternative would be `try_as_class_literal`, which would follow the
[`strum` naming
scheme](https://docs.rs/strum/latest/strum/derive.EnumTryAs.html), but
is slightly longer.
Also rename `Type::into_callable` to `Type::try_upcast_to_callable`.
Note that I intentionally kept names like
`FunctionType::into_callable_type`, because those return `CallableType`,
not `Option<Type<…>>`.
## Test Plan
Pure refactoring
As part of #20598, we added `is_identical_to` methods to
`TypeVarInstance` and `BoundTypeVarInstance`, which compare when two
typevar instances refer to "the same" underlying typevar, even if we
have forced their lazy bounds/constraints as part of marking typevars as
inferable. (Doing so results in a different salsa interned struct ID,
since we've changed the contents of the `bounds_or_constraints` field.)
It turns out that marking typevars as inferable is not the only way that
we might force lazy bounds/constraints; it also happens when we
materialize a type containing a typevar. This surfaced as ecosystem
report failures on #20677.
That means that we need a more long-term fix to this problem.
(`is_identical_to`, and its underlying `original` field, were meant to
be a temporary fix until we removed the `MarkTypeVarsInferable` type
mapping.)
This PR extracts out a separate type (`TypeVarIdentity`) that only
includes the fields that actually inform whether two typevars are "the
same". All other properties of the typevar (default, bounds/constraints,
etc) still live in `TypeVarInstance`. Call sites that care about typevar
identity can now either store just `TypeVarIdentity` (if they never need
access to those other properties), or continue to store
`TypeVarInstance` but pull out its `identity` when performing those "are
they the same typevar" comparisons. (All of this also applies
respectively to `BoundTypeVar{Identity,Instance}`.) In particular,
constraint sets now work on `BoundTypeVarIdentity`, and generic contexts
still _store_ a `BoundTypeVarInstance` (since we might need access to
defaults when specializing), but are keyed on `BoundTypeVarIdentity`.
Generic classes are not allowed to bind or reference a typevar from an
enclosing scope:
```py
from typing import Iterable

def f[T](x: T, y: T) -> None:
class Ok[S]: ...
# error: [invalid-generic-class]
class Bad1[T]: ...
# error: [invalid-generic-class]
class Bad2(Iterable[T]): ...
class C[T]:
class Ok1[S]: ...
# error: [invalid-generic-class]
class Bad1[T]: ...
# error: [invalid-generic-class]
class Bad2(Iterable[T]): ...
```
It does not matter if the class uses PEP 695 or legacy syntax. It does
not matter if the enclosing scope is a generic class or function. The
generic class cannot even _reference_ an enclosing typevar in its base
class list.
This PR adds diagnostics for these cases.
In addition, the PR adds better fallback behavior for generic classes
that violate this rule: any enclosing typevars are not included in the
class's generic context. (That ensures that we don't inadvertently try
to infer specializations for those typevars in places where we
shouldn't.) The `dulwich` ecosystem project has [examples of
this](d912eaaffd/dulwich/config.py (L251))
that were causing new false positives on #20677.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR implements https://docs.astral.sh/ruff/rules/break-outside-loop/
(F701) as a semantic syntax error.
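The corresponding error:
```py
def f():
    break  # F701 / SyntaxError: 'break' outside loop
```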
## Test Plan
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Treat `Callable`s as bound-method descriptors if `Callable` is the
return type of a decorator that is applied to a function definition. See
the [rendered version of the new test
file](https://github.com/astral-sh/ruff/blob/david/callables-as-descriptors/crates/ty_python_semantic/resources/mdtest/call/callables_as_descriptors.md)
for the full description of this new heuristic.
I could imagine that we want to treat `Callable`s as bound-method
descriptors in other cases as well, but this seems like a step in the
right direction. I am planning to add other "use cases" from
https://github.com/astral-sh/ty/issues/491 to this test suite.
partially addresses https://github.com/astral-sh/ty/issues/491
closes https://github.com/astral-sh/ty/issues/1333
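A rough sketch of the kind of pattern the heuristic covers (not taken from the test file; the decorator is hypothetical):
```py
from typing import Callable

def logged(f: Callable[["C", int], int]) -> Callable[["C", int], int]:
    return f

class C:
    @logged
    def double(self, x: int) -> int:
        return 2 * x

# With the heuristic, the `Callable` returned by the decorator is treated as a
# bound-method descriptor, so `self` is bound and this call type-checks.
C().double(3)
```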
## Ecosystem impact
All positive
* 2961 removed `unsupported-operator` diagnostics on `sympy`, which was
one of the main motivations for implementing this change
* 37 removed `missing-argument` diagnostics, and no added call-error
diagnostics, which is an indicator that this heuristic shouldn't cause
many false positives
* A few removed `possibly-missing-attribute` diagnostics when accessing
attributes like `__name__` on decorated functions. The two added
`unused-ignore-comment` diagnostics are also cases of this.
* One new `invalid-assignment` diagnostic on `dd-trace-py`, which looks
suspicious, but only because our `invalid-assignment` diagnostics are
not great. This is actually a "Implicit shadowing of function"
diagnostic that hides behind the `invalid-assignment` diagnostic,
because a module-global function is being patched through a
`module.func` attribute assignment.
## Test Plan
New Markdown tests.
This PR resolves the issue noticed in
https://github.com/astral-sh/ruff/pull/20777#discussion_r2417233227.
Namely, cases like this were being flagged as syntax errors despite
being perfectly valid on Python 3.8:
```pycon
Python 3.8.20 (default, Oct 2 2024, 16:34:12)
[Clang 18.1.8 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> with (open("foo.txt", "w")): ...
...
Ellipsis
>>> with (open("foo.txt", "w")) as f: print(f)
...
<_io.TextIOWrapper name='foo.txt' mode='w' encoding='UTF-8'>
```
The second of these was already allowed but not the first:
```shell
> ruff check --target-version py38 --ignore ALL - <<EOF
with (open("foo.txt", "w")): ...
with (open("foo.txt", "w")) as f: print(f)
EOF
invalid-syntax: Cannot use parentheses within a `with` statement on Python 3.8 (syntax was added in Python 3.9)
--> -:1:6
|
1 | with (open("foo.txt", "w")): ...
| ^
2 | with (open("foo.txt", "w")) as f: print(f)
|
Found 1 error.
```
There was some discussion of related cases in
https://github.com/astral-sh/ruff/pull/16523#discussion_r1984657793, but
it seems I overlooked the single-element case when flagging tuples. As
suggested in the other thread, we can just check if there's more than
one element or a trailing comma, which is what causes tuple parsing on <=3.8, and thereby avoid the false positives.
## Summary
Based on the suggestion in
https://github.com/astral-sh/ruff/issues/20774#issuecomment-3383153511,
I added rendering of unsupported syntax errors in our `format` test.
In support of this, I added a `DummyFileResolver` type to `ruff_db` to
pass to `DisplayDiagnostics::new` (first commit). Another option would
obviously be implementing this directly in the fixtures, but we'd have
to import a `NotebookIndex` somehow; either by depending directly on
`ruff_notebook` or re-exporting it from `ruff_db`. I thought it might be
convenient elsewhere to have a dummy resolver, for example in the
parser, where we currently have a separate rendering pipeline
[copied](https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_parser/tests/fixtures.rs#L321)
from our old rendering code in `ruff_linter`. I also briefly tried
implementing a `TestDb` in the formatter since I noticed the
`ruff_python_formatter::db` module, but that was turning into a lot more
code than the dummy resolver.
We could also push this a bit further if we wanted. I didn't add the new
snapshots to the black compatibility tests or to the preview snapshots,
for example. I thought it was kind of noisy enough (and helpful enough)
already, though. We could also use a shorter diagnostic format, but the
full output seems most useful once we accept this initial large batch of
changes.
## Test Plan
I went through the baseline snapshots pretty quickly, but they all
looked reasonable to me, with one exception I noted below. I also tested
that the case from #20774 produces a new unsupported syntax error.