## Summary
Previously, if an explicit specialization failed (e.g. the wrong number of
type arguments, or a type argument that violates an upper bound), we just
inferred `Unknown` for the entire type. This actually caused us to panic on a
case of a recursive upper bound with an invalid specialization; the upper
bound would oscillate indefinitely in fixpoint iteration between `Unknown`
and the given specialization. This could be fixed with a cycle recovery
function, but in this case there's a simpler fix: if we infer `C[Unknown]`
instead of `Unknown` for an invalid attempt to specialize `C`, fixpoint
iteration converges quickly, and we also get more precise type inference.
Other type checkers actually just go with the attempted specialization
even if it's invalid. So if `C` has a type parameter with upper bound
`int`, and you say `C[str]`, they'll emit a diagnostic but just go with
`C[str]`. Even weirder, if `C` has a single type parameter and you say
`C[str, bytes]`, they'll just go with `C[str]` as the type. I'm not
convinced by this approach; it seems odd to have specializations
floating around that explicitly violate the declared upper bound, or in
the latter case aren't even the specialization the annotation requested.
I prefer `C[Unknown]` for this case.
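For example (an mdtest-style sketch; the exact diagnostic and revealed form
may differ from what ty actually prints):
```py
class C[T: int]: ...

# ty reports the violated upper bound here, but infers `C[Unknown]` for the
# annotation rather than collapsing the whole type to `Unknown`.
x: C[str]
reveal_type(x)  # revealed: C[Unknown]
```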
Fixing this revealed an issue with `collections.namedtuple`, which
returns `type[tuple[Any, ...]]`. Due to
https://github.com/astral-sh/ty/issues/1649 we consider that to be an
invalid specialization. So previously we returned `Unknown`; after this
PR it would be `type[tuple[Unknown]]`, leading to more false positives
from our lack of functional namedtuple support. To avoid that I added an
explicit Todo type for functional namedtuples for now.
## Test Plan
Added and updated mdtests.
The conformance suite changes have to do with `ParamSpec`, so no
meaningful signal there.
The ecosystem changes appear to be the expected effects of having more
precise type information (including occurrences of known issues such as
https://github.com/astral-sh/ty/issues/1495 ). Most effects are just
changes to types in diagnostics.
## Summary
This PR updates the explicit specialization logic to avoid using the
call machinery.
Previously, the logic used the call machinery: the list of type variables
was converted into a `Binding` with a single `Signature` in which each type
variable became a positional-only parameter, with its bounds or constraints
as the annotated type and its default type as the default parameter value.
The advantage is that no specialization-specific logic is needed, but one
disadvantage is subpar diagnostic messages, since they are worded for a
function call. A more important disadvantage is that the kind of type
variable is lost in this translation, which matters in #21445, where a
`ParamSpec` can be specialized with a list of types provided via a list
literal.
For example,
```py
class Foo[T, **P]: ...
Foo[int, [int, str]]
```
This PR converts the logic to a simple loop using `zip_longest`, since each
type variable maps to its corresponding type argument on a 1:1 basis. Type
arguments cannot be specified by keyword either; e.g., `dict[_VT=str,
_KT=int]` is invalid.
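As a rough illustration of the pairing (a standalone Python sketch; the real
implementation lives in Rust and operates on ty's internal types, not
strings):
```py
from itertools import zip_longest

type_params = ["T", "P"]           # e.g. class Foo[T, **P]: ...
type_args = ["int", "[int, str]"]  # e.g. Foo[int, [int, str]]

for param, arg in zip_longest(type_params, type_args):
    if param is None:
        print(f"too many type arguments: unexpected {arg!r}")
    elif arg is None:
        print(f"missing type argument for {param!r}; fall back to its default, if any")
    else:
        print(f"{param!r} -> {arg!r}")
```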
This PR also makes an initial attempt to improve the diagnostic message
to specifically target the specialization part by using words like "type
argument" instead of just "argument" and including information like the
type variable, bounds, and constraints. Further improvements can be made
by highlighting the type variable definition or the bounds / constraints
as a sub-diagnostic, but I'm going to leave that as a follow-up.
## Test Plan
Update messages in existing test cases.
## Summary
Derived from #17371
Fixes astral-sh/ty#256
Fixes https://github.com/astral-sh/ty/issues/1415
Fixes https://github.com/astral-sh/ty/issues/1433
Fixes https://github.com/astral-sh/ty/issues/1524
Properly handles any kind of recursive inference and prevents panics.
---
Let me explain techniques for converging fixed-point iterations during
recursive type inference.
There are two types of type inference that naively don't converge
(causing salsa to panic): divergent type inference and oscillating type
inference.
### Divergent type inference
Divergent type inference occurs when eagerly expanding a recursive type.
A typical example is this:
```python
class C:
    def f(self, other: "C"):
        self.x = (other.x, 1)

reveal_type(C().x)  # revealed: Unknown | tuple[Unknown | tuple[Unknown | tuple[..., Literal[1]], Literal[1]], Literal[1]]
```
To solve this problem, we have already introduced `Divergent` types
(https://github.com/astral-sh/ruff/pull/20312). `Divergent` types are
treated as a kind of dynamic type [^1].
```python
Unknown | tuple[Unknown | tuple[Unknown | tuple[..., Literal[1]], Literal[1]], Literal[1]]
=> Unknown | tuple[Divergent, Literal[1]]
```
When a query function that returns a type enters a cycle, it sets
`Divergent` as the cycle initial value (instead of `Never`). Then, in
the cycle recovery function, it reduces the nesting of types containing
`Divergent` to converge.
```python
0th: Divergent
1st: Unknown | tuple[Divergent, Literal[1]]
2nd: Unknown | tuple[Unknown | tuple[Divergent, Literal[1]], Literal[1]]
=> Unknown | tuple[Divergent, Literal[1]]
```
Each cycle recovery function for each query should operate only on the
`Divergent` type originating from that query.
For this reason, while `Divergent` appears the same as `Any` to the
user, it internally carries some information: the location where the
cycle occurred. Previously, we roughly identified this by storing the
scope where the cycle occurred, but with the update to salsa, functions
that create cycle initial values can now receive a `salsa::Id`
(https://github.com/salsa-rs/salsa/pull/1012). This is an opaque ID that
uniquely identifies the cycle head (the query that is the starting point
for the fixed-point iteration). `Divergent` now has this `salsa::Id`.
### Oscillating type inference
Now, another thing to consider is oscillating type inference.
Oscillating type inference arises from the fact that monotonicity is
broken. Monotonicity here means that for a query function, if it enters
a cycle, the calculation must start from a "bottom value" and progress
towards the final result with each cycle. Monotonicity breaks down in
type systems that have features like overloading and overriding.
```python
class Base:
    def flip(self) -> "Sub":
        return Sub()

class Sub(Base):
    def flip(self) -> "Base":
        return Base()

class C:
    def __init__(self, x: Sub):
        self.x = x

    def replace_with(self, other: "C"):
        self.x = other.x.flip()

reveal_type(C(Sub()).x)
```
Naive fixed-point iteration results in `Divergent -> Sub -> Base -> Sub
-> ...`, which oscillates forever without diverging or converging. To
address this, the salsa API has been modified so that the cycle recovery
function receives the value of the previous cycle
(https://github.com/salsa-rs/salsa/pull/1012).
The cycle recovery function returns the union type of the current cycle
and the previous cycle. In the above example, the result type for each
cycle is `Divergent -> Sub -> Base (= Sub | Base) -> Base`, which
converges.
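A toy Python model of this "union with the previous value" recovery (purely
illustrative; ty's real implementation is the salsa cycle machinery in Rust):
```python
def fixpoint_with_union(step, initial):
    # Each iteration unions the previous cycle's value with the freshly
    # computed one, so an oscillating sequence settles on the union.
    previous = initial
    while True:
        current = previous | step(previous)
        if current == previous:
            return current
        previous = current

# On its own, `flip` oscillates between {"Sub"} and {"Base"} forever;
# with the union step it converges to {"Sub", "Base"} in two iterations.
flip = lambda types: {"Base" if t == "Sub" else "Sub" for t in types}
print(fixpoint_with_union(flip, {"Sub"}))  # {'Base', 'Sub'}
```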
The final result of oscillating type inference does not contain `Divergent`,
because a `Divergent` that appears in a union can be removed, as the
expansion below makes clear. This simplification is performed at the same
time as nesting reduction.
```
T | Divergent = T | (T | (T | ...)) = T
```
[^1]: In theory, it may be possible to strictly treat types containing
`Divergent` types as recursive types, but we probably shouldn't go that
deep yet. (AFAIK, there are no PEPs that specify how to handle
implicitly recursive types that aren't named by type aliases)
## Performance analysis
A happy side effect of this PR is that we've observed widespread
performance improvements!
This is likely due to the removal of the `ITERATIONS_BEFORE_FALLBACK`
and max-specialization depth trick
(https://github.com/astral-sh/ty/issues/1433,
https://github.com/astral-sh/ty/issues/1415), which means we reach a
fixed point much sooner.
## Ecosystem analysis
The changes look good overall.
You may notice changes in the converged values for recursive types; this is
because the way recursive types are normalized has changed.
Previously, types containing `Divergent` types were normalized by
replacing them with the `Divergent` type itself, but in this PR, types
with a nesting level of 2 or more that contain `Divergent` types are
normalized by replacing them with a type with a nesting level of 1. This
means that information about the non-divergent parts of recursive types
is no longer lost.
```python
# previous
tuple[tuple[Divergent, int], int] => Divergent
# now
tuple[tuple[Divergent, int], int] => tuple[Divergent, int]
```
The false positive error introduced in this PR occurs in class
definitions with self-referential base classes, such as the one below.
```python
from typing_extensions import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class Base2(Generic[T, U]): ...
# TODO: no error
# error: [unsupported-base] "Unsupported class base with type `<class 'Base2[Sub2, U@Sub2]'> | <class 'Base2[Sub2[Unknown], U@Sub2]'>`"
class Sub2(Base2["Sub2", U]): ...
```
This is due to the lack of support for unions of MROs, or because cyclic
legacy generic types are not inferred as generic types early in the
query cycle.
## Test Plan
All samples listed in astral-sh/ty#256 are tested and pass without any
panics!
## Acknowledgments
Thanks to @MichaReiser for working on bug fixes and improvements to
salsa for this PR. @carljm also contributed early on to the discussion
of the query convergence mechanism proposed in this PR.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
We now use the type context for a lot of things, so re-inferring without
type context actually makes diagnostics more confusing (in most cases).
## Summary
This PR adds a code action to remove unused ignore comments.
This PR also includes some infrastructure boilerplate to set up code
actions in the editor:
* Extend `snapshot-diagnostics` to render fixes
* Render fixes when using `--output-format=full`
* Hook up edits and the code action request in the LSP
* Add the `Unnecessary` tag to `unused-ignore-comment` diagnostics
* Group multiple unused codes into a single diagnostic
The same fix can be used on the CLI once we add `ty fix`.
Note: `unused-ignore-comment` is currently disabled by default.
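For example (a sketch assuming ty's `ty: ignore[...]` suppression-comment
syntax and the `unresolved-reference` rule name):
```py
def f() -> int:
    return 1  # ty: ignore[unresolved-reference]
```
Nothing on that line triggers `unresolved-reference`, so the suppression is
flagged as unused and the new code action simply removes the comment.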
https://github.com/user-attachments/assets/f9e21087-3513-4156-85d7-a90b1a7a3489
## Summary
Building on https://github.com/astral-sh/ruff/pull/21436.
There's nothing conceptually more complicated about this; it just requires
its own set of tests and its own subdiagnostic hint.
I also uncovered another inconsistency between mypy/pyright/pyrefly,
which is fun. In this case, I suggest we go with pyright's behaviour.
## Test Plan
mdtests/snapshots
## Summary
For something like this:
```py
from typing import Callable

def my_lossy_decorator(fn: Callable[..., int]) -> Callable[..., int]:
    return fn

class MyClass:
    @my_lossy_decorator
    def method(self) -> int:
        return 42
```
we will currently infer the type of `MyClass.method` as a function-like
`Callable`, but we will infer the type of `MyClass().method` as a
`Callable` that is _not_ function-like. That's because a `CallableType`
currently "forgets" whether it was function-like or not during the
`bound_self` transformation:
a57e291311/crates/ty_python_semantic/src/types.rs (L10985-L10987)
This seems incorrect, and it's quite different to what we do when
binding the `self` parameter of `FunctionLiteral` types: `BoundMethod`
types are all seen as subtypes of function-like `Callable` supertypes --
here's `BoundMethodType::into_callable_type`:
a57e291311/crates/ty_python_semantic/src/types.rs (L10844-L10860)
The bug here is also causing lots of false positives in the ecosystem
report on https://github.com/astral-sh/ruff/pull/21611: a decorated
method on a subclass is currently not seen as validly overriding an
undecorated method with the same signature on a superclass, because the
undecorated superclass method is seen as function-like after binding
`self` whereas the decorated subclass method is not.
Fixing the bug required adding a new API in `protocol_class.rs`, because
it turns out that for our purposes in protocol subtyping/assignability,
we really do want a callable type to forget its function-like-ness when
binding `self`.
I initially tried out this change without changing anything in
`protocol_class.rs`. However, it resulted in many ecosystem false
positives and new false positives on the typing conformance test suite.
This is because it would mean that no protocol with a `__call__` method
would ever be seen as a subtype of a `Callable` type, since the
`__call__` method on the protocol would be seen as being function-like
whereas the `Callable` type would not be seen as function-like.
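Concretely, the pattern that would have regressed looks something like this
(names invented for illustration):
```py
from typing import Callable, Protocol

class IntCallback(Protocol):
    def __call__(self, x: int, /) -> int: ...

def impl(x: int) -> int:
    return x

def takes_callable(f: Callable[[int], int]) -> None: ...

cb: IntCallback = impl
# A protocol with a compatible `__call__` method should still be assignable
# to the corresponding `Callable` type, which is why protocol subtyping needs
# to "forget" function-like-ness when binding `self`.
takes_callable(cb)
```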
## Test Plan
Added an mdtest that fails on `main`
Before, we would collapse any constraint of the form `Never ≤ T ≤
object` down to the "always true" constraint set. This is correct in
terms of BDD semantics, but loses information, since "not constraining a
typevar at all" is different than "constraining a typevar to take on any
type". Once we get to specialization inference, we should fall back on
the typevar's default for the former, but not for the latter.
This is much easier to support now that we have a sequent map, since we
need to treat `¬(Never ≤ T ≤ object)` as being impossible, and prune it
when we walk through BDD paths, just like we do for other impossible
combinations.
This patch updates our protocol assignability checks to substitute for
any occurrences of `typing.Self` in method signatures, replacing it with
the class being checked for assignability against the protocol.
This requires a new helper method on signatures, `apply_self`, which
substitutes occurrences of `typing.Self` _without_ binding the `self`
parameter.
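For example (illustrative; not taken from the PR's test suite):
```py
from typing import Protocol, Self

class Fluent(Protocol):
    def configure(self) -> Self: ...

class Builder:
    def configure(self) -> "Builder":
        return self

# When checking `Builder` against `Fluent`, occurrences of `Self` in the
# protocol's method signatures are substituted with `Builder` (without
# binding the `self` parameter), so this assignment is accepted.
b: Fluent = Builder()
```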
We also update the `try_upcast_to_callable` method. Before, it would
return a `Type`, since certain types upcast to a _union_ of callables,
not to a single callable. However, even in that case, we know that every
element of the union is a callable. We now return a vector of
`CallableType`. (Actually a smallvec to handle the most common case of a
single callable; and wrapped in a new type so that we can provide helper
methods.) If there is more than one element in the result, it represents
a union of callables. This lets callers get at the `CallableType`
instances in a more type-safe way. (This makes it easier for our
protocol checking code to call the new `apply_self` helper.) We also
provide an `into_type` method so that callers that really do want a
`Type` can get the original result easily.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Fixes https://github.com/astral-sh/ty/issues/1620. #20909 added hints if
you do something like this and your Python version is set to 3.10 or
lower:
```py
import typing
typing.LiteralString
```
And we also have hints if you try to do something like this and your
Python version is set too low:
```py
from stdlib_module import new_submodule
```
But we don't currently have any subdiagnostic hint if you do something
like _this_ and your Python version is set too low:
```py
from typing import LiteralString
```
This PR adds that hint!
## Test Plan
snapshots
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
## Summary
This PR adds a failing mdtest for the panic in
https://github.com/astral-sh/ty/issues/1587. The added snippet currently
panics with this query stacktrace:
```
error[panic]: Panicked at /Users/alexw/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/17bc55d/src/function/execute.rs:321:21 when checking `/Users/alexw/dev/ruff/foo.py`: `ClassLiteral < 'db >::explicit_bases_(Id(4c09)): execute: too many cycle iterations`
info: This indicates a bug in ty.
info: If you could open an issue at https://github.com/astral-sh/ty/issues/new?title=%5Bpanic%5D, we'd be very appreciative!
info: Platform: macos aarch64
info: Version: ruff/0.14.5+105 (d24c891a4 2025-11-22)
info: Args: ["target/debug/ty", "check", "foo.py", "--python-version=3.14"]
info: run with `RUST_BACKTRACE=1` environment variable to show the full backtrace information
info: query stacktrace:
0: cached_protocol_interface(Id(6805))
at crates/ty_python_semantic/src/types/protocol_class.rs:795
1: is_equivalent_to_object_inner(Id(8003))
at crates/ty_python_semantic/src/types/instance.rs:667
2: infer_deferred_types(Id(1406))
at crates/ty_python_semantic/src/types/infer.rs:140
cycle heads: infer_definition_types(Id(140b)) -> iteration = 200, TypeVarInstance < 'db >::lazy_bound_(Id(5802)) -> iteration = 200
3: TypeVarInstance < 'db >::lazy_bound_(Id(5803))
at crates/ty_python_semantic/src/types.rs:8827
4: infer_definition_types(Id(140c))
at crates/ty_python_semantic/src/types/infer.rs:94
5: infer_deferred_types(Id(1405))
at crates/ty_python_semantic/src/types/infer.rs:140
6: TypeVarInstance < 'db >::lazy_bound_(Id(5802))
at crates/ty_python_semantic/src/types.rs:8827
7: infer_definition_types(Id(140b))
at crates/ty_python_semantic/src/types/infer.rs:94
8: infer_scope_types(Id(1000))
at crates/ty_python_semantic/src/types/infer.rs:70
9: check_file_impl(Id(c00))
at crates/ty_project/src/lib.rs:535
```
It's not totally clear to me how to fix this or to what extent it might
be a bug in our `Protocol` internals rather than a bug in our `TypeVar`
internals. (It's sort of interesting that we're trying to evaluate the
upper bound of any `TypeVar`s here!) @carljm suggested that it would be
a good idea to add a failing mdtest in the meantime to document the
panic, which I agree with.
## Test Plan
I verified that we panic on this snippet, and that the test fails if I
remove the `expect-panic` assertion or if I change the asserted error
message.
I experimented with ways of minimizing the snippet further, but I think
any further minimization takes the snippet further away from something a
user would actually be likely to write -- so I think it's probably
counterproductive. The failing test added in this PR isn't unreasonable
code at the end of the day; I've seen Python like it in the wild.
These were added to try to make it clearer that assignability checks
will eventually return more detailed answers than true or false.
However, the constraint set display rendering is still more brittle than
I'd like it to be, and it's more trouble than it's worth to keep them
updated with semantically identical but textually different edits. The
`static_assert`s are sufficient to check correctness, and we can always
add `reveal_type` when needed for further debugging.
Refs https://github.com/astral-sh/ty/issues/544
## Summary
Takes a more incremental approach to PEP 613 type alias support (vs
https://github.com/astral-sh/ruff/pull/20107). Instead of eagerly
inferring the RHS of a PEP 613 type alias as a type expression, infer it
as a value expression, just like we do for implicit type aliases, taking
advantage of the same support for e.g. unions and other type special
forms.
The main reason I'm following this path instead of the one in
https://github.com/astral-sh/ruff/pull/20107 is that we've realized that
people do sometimes use PEP 613 type aliases as values, not just as
types (because they are just a normal runtime assignment, unlike PEP 695
type aliases which create an opaque `TypeAliasType`).
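For example, the same alias can appear both in annotations and as an
ordinary runtime value (illustrative):
```py
from typing import TypeAlias

IntOrStr: TypeAlias = int | str

def f(x: IntOrStr) -> None: ...    # used as a type ...

registry = {"IntOrStr": IntOrStr}  # ... and as a plain runtime value
```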
This PR doesn't yet provide full support for recursive type aliases
(they don't panic, but they just fall back to `Unknown` at the recursion
point). This is future work.
## Test Plan
Added mdtests.
Many new ecosystem diagnostics, mostly because we
understand new types in lots of places.
Conformance suite changes are correct.
Performance regression is due to understanding lots of new
types; nothing we do in this PR is inherently expensive.
## Summary
Eagerly evaluate the elements of a PEP 604 union in value position (e.g.
`IntOrStr = int | str`) as type expressions and store the result (the
corresponding `Type::Union` if all elements are valid type expressions,
or the first encountered `InvalidTypeExpressionError`) on the
`UnionTypeInstance`, such that the `Type::Union(…)` does not need to be
recomputed every time the implicit type alias is used in a type
annotation.
This might lead to performance improvements for large unions, but is
also necessary for correctness, because the elements of the union might
refer to type variables that need to be looked up in the scope of the
type alias, not at the usage site.
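For example (illustrative; the revealed form is approximate):
```py
from typing import TypeVar

T = TypeVar("T")

IntOrStr = int | str          # the elements are evaluated eagerly as type expressions
ListOrSet = list[T] | set[T]  # `T` is resolved in this scope, not at the usage site

def f(x: IntOrStr) -> None:
    reveal_type(x)  # revealed: int | str
```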
## Test Plan
New Markdown tests
#21414 added the ability to create a specialization from a constraint
set. It handled mutually constrained typevars just fine, e.g. given `T ≤
int ∧ U = T` we can infer `T = int, U = int`.
But it didn't handle _nested_ constraints correctly, e.g. `T ≤ int ∧ U =
list[T]`. Now we do! This requires doing a fixed-point "apply the
specialization to itself" step to propagate the assignments of any
nested typevars, and then a cycle detection check to make sure we don't
have an infinite expansion in the specialization.
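A toy model of the propagation step (plain Python, with typevars as strings
and compound types as tuples; ty's real implementation works on its internal
type representation and additionally rejects genuinely cyclic assignments
like `T = list[T]`):
```py
def substitute(ty, assignment):
    if isinstance(ty, str) and ty in assignment:  # a bare typevar
        return assignment[ty]
    if isinstance(ty, tuple):                     # a compound type, e.g. ("list", "T")
        return tuple(substitute(elem, assignment) for elem in ty)
    return ty

def propagate(assignment):
    # Apply the specialization to itself until it stops changing.
    while True:
        updated = {var: substitute(ty, assignment) for var, ty in assignment.items()}
        if updated == assignment:
            return updated
        assignment = updated

# T ≤ int ∧ U = list[T]  =>  T = int, U = list[int]
print(propagate({"T": "int", "U": ("list", "T")}))
# {'T': 'int', 'U': ('list', 'int')}
```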
This gets at an interesting nuance in our constraint set structure that
@sharkdp has asked about before. Constraint sets are BDDs, and each
internal node represents an _individual constraint_, of the form `lower
≤ T ≤ upper`. `lower` and `upper` are allowed to be other typevars, but
only if they appear "later" in the arbitrary ordering that we establish
over typevars. The main purpose of this is to avoid infinite expansion
for mutually constrained typevars.
However, that restriction doesn't help us here, because it only applies
when `lower` and `upper` _are_ typevars, not when they _contain_
typevars. That distinction is important, since it means the restriction
does not affect our expressiveness: we can always rewrite `Never ≤ T ≤
U` (a constraint on `T`) into `T ≤ U ≤ object` (a constraint on `U`).
The same is not true of `Never ≤ T ≤ list[U]` — there is no "inverse" of
`list` that we could apply to both sides to transform this into a
constraint on a bare `U`.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1571.
I realised I was overcomplicating things when I described what we should
do in that issue description. The simplest thing to do here is just to
special-case call expressions and short-circuit the call-binding
machinery entirely if we see it's `NotImplemented` being called. It
doesn't really matter if the subdiagnostic doesn't fire when a union is
called and one element of the union is `NotImplemented` -- the
subdiagnostic doesn't need to be exhaustive; it's just to help people in
some common cases.
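The pattern being special-cased looks like this (illustrative; the exact
hint text lives in the snapshots):
```py
def todo() -> int:
    raise NotImplemented("not written yet")  # `NotImplemented` is being *called* here,
                                             # so ty skips the call-binding machinery and
                                             # attaches its subdiagnostic to this call
```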
## Test Plan
Added snapshots
This patch lets us create specializations from a constraint set. The
constraint set encodes the restrictions on which types each typevar can
specialize to. Given a generic context and a constraint set, we iterate
through all of the generic context's typevars. For each typevar, we
abstract the constraint set so that it only mentions the typevar in
question (propagating derived facts if needed). We then find the "best
representative type" for the typevar given the abstracted constraint
set.
When considering the BDD structure of the abstracted constraint set,
each path from the BDD root to the `true` terminal represents one way
that the constraint set can be satisfied. (This is also one of the
clauses in the DNF representation of the constraint set's boolean
formula.) Each of those paths is the conjunction of the individual
constraints of each internal node that we traverse as we walk that path,
giving a single lower/upper bound for the path. We use the upper bound
as the "best" (i.e. "closest to `object`") type for that path.
If there are multiple paths in the BDD, they technically represent
independent possible specializations. If there's a single specialization
that satisfies all of them, we will return that as the specialization.
If not, then the constraint set is ambiguous. (This happens most often
with constrained typevars.) We could in the future turn _each_ of the
paths into separate specializations, but it's not clear what we would do
with that, so instead we just report the ambiguity as a specialization
failure.
## Summary
Add a set of comprehensive tests for generic implicit type aliases to
illustrate the current behavior with many flavors of `@Todo` types and
false positive diagnostics.
The tests are partially based on the typing conformance suite, and the
expected behavior has been checked against other type checkers.
## Summary
Get rid of the catch-all todo type from subscripting a base type we
haven't implemented handling for yet in a type expression, and turn it
into a diagnostic instead.
Handle a few more cases explicitly, to avoid false positives from the
above change:
1. Subscripting any dynamic type (not just a todo type) in a type
expression should just result in that same dynamic type. This is
important for the gradual guarantee, and matches other type checkers.
2. Subscripting a generic alias may be an error or not, depending on
whether the specialization itself contains typevars. Don't try to handle
this yet (it should be handled in a later PR for specializing generic
non-PEP695 type aliases), just use a dedicated todo type for it.
3. Add a temporary todo branch to avoid false positives from string PEP
613 type aliases. This can be removed in the next PR, with PEP 613 type
alias support.
## Test Plan
Adjusted mdtests, ecosystem.
All new diagnostics in the conformance suite occur in cases that are supposed
to produce diagnostics, so this PR is a strict improvement there.
New diagnostics in the ecosystem are surfacing cases where we already
don't understand an annotation, but now we emit a diagnostic about it.
They are mostly intentional choices. Analysis of particular cases:
* `attrs`, `bokeh`, `django-stubs`, `dulwich`, `ibis`, `kornia`,
`mitmproxy`, `mongo-python-driver`, `mypy`, `pandas`, `poetry`,
`prefect`, `pydantic`, `pytest`, `scrapy`, `trio`, `werkzeug`, and
`xarray` are all cases where under `from __future__ import annotations`
or Python 3.14 deferred-annotations semantics, we follow normal
name-scoping rules, whereas some other type checkers prefer global names
over local names. This means we don't like it if e.g. you have a class
with a method or attribute named `type` or `tuple`, and you also try to
use `type` or `tuple` in method/attribute annotations of that class.
This PR isn't changing those semantics, just revealing them in more
cases where previously we just silently fell back to `Unknown`. I think
failing with a diagnostic (so authors can alias names as needed to avoid
relying on scoping rules that differ between type checkers) is better
than failing silently here.
* `beartype` assumes we support `TypeForm` (because it only supports
mypy and pyright, it uses `if MYPY:` to hide the `TypeForm` from mypy,
and pyright supports `TypeForm`), and we don't yet.
* `graphql-core` likes to use a `try: ... except ImportError: ...`
pattern for importing special forms from `typing` with fallback to
`typing_extensions`, instead of using `sys.version_info` checks. We
don't handle this well when type checking under an older Python version
(where the import from `typing` is not found); we see the imported name
as of type e.g. `Unknown | SpecialFormType(...)`, and because of the
union with `Unknown` we fail to handle it as the special form type. Mypy
and pyright also don't seem to support this pattern. They don't complain
about subscripting such special forms, but they do silently fail to
treat them as the desired special form. Again here, if we are going to
fail, I'd rather fail with a diagnostic than silently.
* `ibis` is [trying to
use](https://github.com/ibis-project/ibis/blob/main/ibis/common/collections.py#L372)
`frozendict: type[FrozenDict]` as a way to create a "type alias" to
`FrozenDict`, but this is wrong: that means `frozendict:
type[FrozenDict[Any, Any]]`.
* `mypy` has some errors due to the fact that type-checking `typing.pyi`
itself (without knowing that it's the real `typing.pyi`) doesn't work
very well.
* `mypy-protobuf` imports some types from the protobufs library that end
up unioned with `Unknown` for some reason, and so we don't allow
explicit-specialization of them. Depending on the reason they end up
unioned with `Unknown`, we might want to better support this? But it's
orthogonal to this PR -- we aren't failing any worse here, just alerting
the author that we didn't understand their annotation.
* `pwndbg` has unresolved references due to star-importing from a
dependency that isn't installed, and uses un-imported names like `Dict`
in annotation expressions. Some of the unresolved references were hidden
by
https://github.com/astral-sh/ruff/blob/main/crates/ty_python_semantic/src/types/infer/builder.rs#L7223-L7228
when some annotations previously resolved to a Todo type that no longer
do.
This saga began with a regression in how we handle constraint sets where
a typevar is constrained by another typevar, which #21068 first added
support for:
```py
def mutually_constrained[T, U]():
    # If [T = U ∧ U ≤ int], then [T ≤ int] must be true as well.
    given_int = ConstraintSet.range(U, T, U) & ConstraintSet.range(Never, U, int)
    static_assert(given_int.implies_subtype_of(T, int))
```
While working on #21414, I saw a regression in this test, which was
strange, since that PR has nothing to do with this logic! The issue is
that something in that PR made us instantiate the typevars `T` and `U`
in a different order, giving them differently ordered salsa IDs. And
importantly, we use these salsa IDs to define the variable ordering that
is used in our constraint set BDDs. This showed that our "mutually
constrained" logic only worked for one of the two possible orderings.
(We can — and now do — test this in a brute-force way by copy/pasting
the test with both typevar orderings.)
The underlying bug was in our `ConstraintSet::simplify_and_domain`
method. It would correctly detect `(U ≤ T ≤ U) ∧ (U ≤ int)`, because
those two constraints affect different typevars, and from that, infer `T
≤ int`. But it wouldn't detect the equivalent pattern in `(T ≤ U ≤ T) ∧
(U ≤ int)`, since those constraints affect the same typevar. At first I
tried adding that as yet more pattern-match logic in the ever-growing
`simplify_and_domain` method. But doing so caused other tests to start
failing.
At that point, I realized that `simplify_and_domain` had gotten to the
point where it was trying to do too much, and for conflicting consumers.
It was first written as part of our display logic, where the goal is to
remove redundant information from a BDD to make its string rendering
simpler. But we also started using it to add "derived facts" to a BDD. A
derived fact is a constraint that doesn't appear in the BDD directly,
but which we can still infer to be true. Our failing test relies on
derived facts — being able to infer that `T ≤ int` even though that
particular constraint doesn't appear in the original BDD. Before,
`simplify_and_domain` would trace through all of the constraints in a
BDD, figure out the full set of derived facts, and _add those derived
facts_ to the BDD structure. This is brittle, because those derived
facts are not universally true! In our example, `T ≤ int` only holds
along the BDD paths where both `T = U` and `U ≤ int`. Other paths will
test the negations of those constraints, and on those, we _shouldn't_
infer `T ≤ int`. In theory it's possible (and we were trying) to use BDD
operators to express that dependency...but that runs afoul of how we
were simultaneously trying to _remove_ information to make our displays
simpler.
So, I ripped off the band-aid. `simplify_and_domain` is now _only_ used
for display purposes. I have not touched it at all, except to remove
some logic that is definitely not used by our `Display` impl. Otherwise,
I did not want to touch that house of cards for now, since the display
logic is not load-bearing for any type inference logic.
For all non-display callers, we have a new **_sequent map_** data type,
which tracks exactly the same derived information. But it does so (a)
without trying to remove anything from the BDD, and (b) lazily, without
updating the BDD structure.
So the end result is that all of the tests (including the new
regressions) pass, via a more efficient (and hopefully better
structured/documented) implementation, at the cost of hanging onto a
pile of display-related tech debt that we'll want to clean up at some
point.
## Summary
This PR proposes that we add a new `set_concise_message` functionality
to our `Diagnostic` construction API. When used, the concise message
that is otherwise auto-generated from the main diagnostic message and
the primary annotation will be overwritten with the custom message.
To understand why this is desirable, let's look at the `invalid-key`
diagnostic. This is how I *want* the full diagnostic to look:
<img width="620" height="282" alt="image"
src="https://github.com/user-attachments/assets/3bf70f52-9d9f-4817-bc16-fb0ebf7c2113"
/>
However, without the change in this PR, the concise message would have
the following form:
```
error[invalid-key]: Unknown key "Age" for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This duplication is why the full `invalid-key` diagnostic used a main
diagnostic message that is only "Invalid key for TypedDict `Person`", to
make that bearable:
```
error[invalid-key] Invalid key for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This is still less than ideal, *and* we had to make the "full"
diagnostic worse. With the new API here, we have to make no such
compromises. We need to do slightly more work (provide one additional
custom-designed message), but we get to keep the "full" diagnostic that
we actually want, and we can make the concise message more terse and
readable:
```
error[invalid-key] Unknown key "Age" for TypedDict `Person` - did you mean "age"?
```
Similar problems exist for other diagnostics as well (I really want this
for https://github.com/astral-sh/ruff/pull/21476). In this PR, I only
changed `invalid-key` and `type-assertion-failure`.
The PR here is somewhat related to the discussion in
https://github.com/astral-sh/ty/issues/1418, but note that we are
solving a problem that is unrelated to sub-diagnostics.
## Test Plan
Updated tests
## Summary
Add support for `Callable` special forms in implicit type aliases.
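For example (illustrative; the exact revealed rendering may differ):
```py
from typing import Callable

IntCallback = Callable[[int], int]  # implicit type alias using the `Callable` special form

def register(cb: IntCallback) -> None:
    reveal_type(cb)  # understood as a proper callable type rather than an unsupported alias
```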
## Typing conformance
Four new tests are passing
## Ecosystem impact
* All of the `invalid-type-form` errors are from libraries that use
`mypy_extensions` and do something like `Callable[[NamedArg("x", str)],
int]`.
* A handful of new false positives because we do not support generic
specializations of implicit type aliases yet.
* Everything else looks like true positives or known limitations.
## Test Plan
New Markdown tests.
Constraint sets can now track subtyping/assignability/etc of generic
callables correctly. For instance:
```py
def identity[T](t: T) -> T:
    return t

constraints = ConstraintSet.always()
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[str], str]))
```
A generic callable can be considered an intersection of all of its
possible specializations, and an assignability check with an
intersection on the lhs succeeds if _any_ of the intersected types
satisfies the check. Put another way, if someone expects to receive any
function with a signature of `(int) -> int`, we can give them
`identity`.
Note that the corresponding check using `is_subtype_of` directly does
not yet work, since #20093 has not yet hooked up the core typing
relationship logic to use constraint sets:
```py
# These currently fail
static_assert(is_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(is_subtype_of(TypeOf[identity], Callable[[str], str]))
```
To do this, we add a new _existential quantification_ operation on
constraint sets. This takes in a list of typevars and _removes_ those
typevars from the constraint set. Conceptually, we return a new
constraint set that evaluates to `true` when there was _any_ assignment
of the removed typevars that caused the old constraint set to evaluate
to `true`.
When comparing a generic constraint set, we add its typevars to the
`inferable` set, and figure out whatever constraints would allow any
specialization to satisfy the check. We then use the new existential
quantification operator to remove those new typevars, since the caller
doesn't (and shouldn't) know anything about them.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
## Summary
Follow up from https://github.com/astral-sh/ruff/pull/21411. Again,
there are more things that could be improved here (like the diagnostics
for `lists`, or extending what we have for `dict` to `OrderedDict` etc),
but that will have to be postponed.
## Summary
We previously only allowed models to overwrite the
`{eq,order,kw_only,frozen}_defaults` of the dataclass-transformer, but
all other standard-dataclass parameters should be equally supported with
the same behavior.
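For example (a sketch using PEP 681's standard parameter names; the
metaclass swallows the class keywords only so that the snippet runs
standalone):
```py
from typing import dataclass_transform

@dataclass_transform()
class ModelMeta(type):
    def __new__(mcls, name, bases, namespace, **kwargs):
        # Accept and ignore dataclass-style class keywords.
        return super().__new__(mcls, name, bases, namespace)

class Model(metaclass=ModelMeta): ...

# Beyond eq/order/kw_only/frozen, other standard dataclass parameters passed
# as class arguments are now honored as well, e.g. `init`:
class Person(Model, init=False):
    name: str
```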
## Test Plan
Added regression tests.
## Summary
Not a high-priority task... but it _is_ a weekend :P
This PR improves our diagnostics for invalid exceptions. Specifically:
- We now give a special-cased ``help: Did you mean
`NotImplementedError`` subdiagnostic for `except NotImplemented`, `raise
NotImplemented` and `raise <EXCEPTION> from NotImplemented`
- If the user catches a tuple of exceptions (`except (foo, bar, baz):`)
and multiple elements in the tuple are invalid, we now collect these
into a single diagnostic rather than emitting a separate diagnostic for
each tuple element
- The explanation of why the `except`/`raise` was invalid ("must be a
`BaseException` instance or `BaseException` subclass", etc.) is
relegated to a subdiagnostic. This makes the top-level diagnostic
summary much more concise.
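For example (illustrative snippets; the exact wording is in the snapshots):
```py
try:
    ...
except NotImplemented:  # gets the special-cased `NotImplementedError` hint
    ...

try:
    ...
except (ValueError, 42, "oops"):  # both invalid elements land in a single diagnostic
    ...
```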
## Test Plan
Lots of snapshots. And here's some screenshots:
<details>
<summary>Screenshots</summary>
<img width="1770" height="1520" alt="image"
src="https://github.com/user-attachments/assets/7f27fd61-c74d-4ddf-ad97-ea4fd24d06fd"
/>
<img width="1916" height="1392" alt="image"
src="https://github.com/user-attachments/assets/83e5027c-8798-48a6-a0ec-1babfc134000"
/>
<img width="1696" height="588" alt="image"
src="https://github.com/user-attachments/assets/1bc16048-6eb4-4dfa-9ace-dd271074530f"
/>
</details>
## Summary
Allow metaclass-based and baseclass-based dataclass-transformers to
overwrite the default behavior using class arguments:
```py
class Person(Model, order=True):
    # ...
```
## Conformance tests
Four new tests passing!
## Test Plan
New Markdown tests
This PR updates the constraint implication type relationship to work on
compound types as well. (A compound type is a non-atomic type, like
`list[T]`.)
The goal of constraint implication is to check whether the requirements
of a constraint imply that a particular subtyping relationship holds.
Before, we were only checking atomic typevars. That would let us verify
that the constraint set `T ≤ bool` implies that `T` is always a subtype
of `int`. (In this case, the lhs of the subtyping check, `T`, is an
atomic typevar.)
But we weren't recursing into compound types, to look for nested
occurrences of typevars. That means that we weren't able to see that `T
≤ bool` implies that `Covariant[T]` is always a subtype of
`Covariant[int]`.
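For example, in the style of the constraint-set snippets above (assuming
`ConstraintSet` and `static_assert` come from `ty_extensions`, and using a
made-up covariant wrapper class):
```py
from typing import Never
from ty_extensions import ConstraintSet, static_assert

class Covariant[T]:
    def get(self) -> T: ...  # `T` only appears in return position, so it is inferred covariant

def example[T]():
    t_le_bool = ConstraintSet.range(Never, T, bool)
    # `T ≤ bool` now also implies relationships between compound types that
    # merely *contain* `T`, not just between bare typevars:
    static_assert(t_le_bool.implies_subtype_of(Covariant[T], Covariant[int]))
```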
Doing this recursion means that we have to carry the constraint set
along with us as we recurse into types as part of `has_relation_to`, by
adding constraint implication as a new `TypeRelation` variant. (Before
it was just a method on `ConstraintSet`.)
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>