## Summary
Closes https://github.com/astral-sh/ty/issues/957
As explained in https://github.com/astral-sh/ty/issues/957, literal
union types for recursively defined values can be widened early to
speed up the convergence of fixed-point iterations.
This PR achieves this by embedding a marker in `UnionType` that
distinguishes whether a value is recursively defined.
This also allows us to identify values that are not recursively
defined, so I've increased the limit on the number of elements in a
literal union type for such values.
Edit: while this PR doesn't provide the significant performance
improvement initially hoped for, it does have the benefit of allowing
the number of elements in a literal union to be raised above the salsa
limit, and indeed mypy_primer results revealed that a literal union of
220 elements was actually being used.
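For context, this is the flavor of recursively defined value involved (an illustrative sketch, not necessarily the exact case from the linked issue):
```py
def some_condition() -> bool: ...

x = 0
while some_condition():
    # `x` is recursively defined: each fixed-point iteration adds another
    # literal (Literal[0] | Literal[1] | Literal[2] | ...), so the union is
    # widened early instead of growing until it hits the element limit.
    x = x + 1
```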
## Test Plan
`call/union.md` has been updated
Fixes https://github.com/astral-sh/ty/issues/1587
## Summary
Perform cycle normalization on typevar bounds and constraints (similar
to how it was already done for typevar defaults) in order to ensure
convergence in cyclic cases.
There might be another fix here that could avoid the cycle in many more
cases, where we don't eagerly evaluate typevar bounds/constraints on
explicit specialization, but just accept the given specialization and
later evaluate to see whether we need to emit a diagnostic on it. But
the current fix here is sufficient to solve the problem and matches the
patterns we use to ensure cycle convergence elsewhere, so it seems good
for now; left a TODO for the other idea.
This fix is sufficient to make us not panic, but not sufficient to get
the semantics fully correct; see the TODOs in the tests. I have ideas
for fixing that as well, but it seems worth at least getting this in to
fix the panic.
## Test Plan
Test that previously panicked now does not.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
This makes auto-import include modules in suggestions.
In this initial implementation, we permit this to include submodules as
well. This is in contrast to what we do in `import ...` completions.
It's easy to change this behavior, but I think it'd be interesting to
run with this for now to see how well it works.
The existing importer functionality always required
an import request with a module and a member in that
module. But we want to be able to insert import statements
for a module itself and not any members in the module.
This is basically changing `member: &str` to an
`Option<&str>` and fixing the fallout in a way that
makes sense for module-only imports.
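Concretely, these are the two kinds of insertions the importer now supports (illustrative examples):
```py
# An import request with a module and a member (previously the only option):
from collections import OrderedDict

# A module-only import request, i.e. the new `member: None` case:
import collections.abc
```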
I think changes to this value are generally noise. It's hard to tell
what it means and it isn't especially actionable. We already have an
eval running in CI for completion ranking, so I don't think it's
terribly important to care about ranking here in e2e tests _generally_.
A completion lacking a module reference doesn't necessarily mean that
the symbol is defined within the current module. I believe the intent
here is that it means that no import is required to use it.
These are all improvements, with one slight regression on
`reveal_type` ranking. The previous completions offered were:
```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: _sqlite3)
SupportsGetItemViewable (module: _typeshed)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 8/10)
_remove_original_values (module: _osx_support)
_remove_universal_flags (module: _osx_support)
-----
found 10 completions
```
And now they are:
```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3.dbapi2)
removeHandler (module: unittest)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 9/9)
-----
found 9 completions
```
Some completions were removed (because they are now considered
unexported) and some were added (likely due to better re-export support).
This particular case probably warrants more special attention anyway.
So I think this is fine. (It's only a one-ranking regression.)
This applies recursively. So if *any* component of a module name starts
with a `_`, then symbols from that module are excluded from auto-import.
The exception is when it's a module within first party code. Then we
want to include it in auto-import.
Note that the `Deprecated` symbols from `importlib.metadata` are no
longer offered because 1) `importlib.metadata` defined `__all__` and 2)
the `Deprecated` symbols aren't in it. These seem to not be a part of
its public API according to the docs, so this seems right to me.
This commit (mostly) re-implements the support for `__all__` in
ty-proper, but inside the auto-import AST scanner.
When `__all__` isn't present in a module, we fall back to conventions to
determine whether a symbol is exported or not:
https://docs.python.org/3/library/index.html
However, in keeping with current practice for non-auto-import
completions, we continue to provide sunder and dunder names as
re-exports.
When `__all__` is present, we respect it strictly. That is, a symbol is
exported *if and only if* it's in `__all__`. This is somewhat stricter
than pylance seemingly is. I felt like it was a good idea to start here,
and we can relax it based on user demand (perhaps through a setting).
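For example, given a hypothetical module like the one below, only `fetch` is offered by auto-import:
```py
# somepkg/api.py (hypothetical)
__all__ = ["fetch"]

def fetch(): ...

def fetch_impl(): ...  # not listed in __all__, so treated as unexported

def _helper(): ...  # excluded by convention even without __all__
```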
This simplifies the existing visitor by DRYing it up slightly.
We also add tests for the existing functionality. In particular,
we want to add support for re-export conventions, and that
warrants more careful testing.
## Summary
I realized we don't really test `DefinitionKind::ImportFromSubmodule` in
the IDE at all, so here's a bunch of them, just recording our current
behaviour.
## Test Plan
*stares at the camera*
## Summary
I have no idea what I'm doing with the fix (all the interesting stuff is
in the second commit).
The basic problem is the compiler emits the diagnostic:
```
x: "foobar"
^^^^^^
```
The suppression code action hands the end of that range to
`Tokens::after`, which panics if handed an offset that is in the middle
of a token.
Fixes https://github.com/astral-sh/ty/issues/1748
## Test Plan
Many tests added (only the e2e test matters).
## Summary
This makes the importing file a required argument to module resolution.
If the fast-path cached query fails to resolve the module, we take the
slow-path, uncached (it could be cached if we want)
`desperately_resolve_module`, which walks up from the importing file
until it finds a `pyproject.toml` (an arbitrary decision; we could try
every ancestor directory), at which point it makes one last desperate
attempt to use that directory as a search path. We do not continue
walking up once we've found a `pyproject.toml` (also an arbitrary
decision; we could keep going up).
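Roughly, the fallback behaves like this sketch (Python pseudocode with a made-up helper name):
```py
from pathlib import Path

def desperate_search_path(importing_file: Path) -> Path | None:
    """Walk up from the importing file until a `pyproject.toml` is found, and
    use that directory as a last-ditch extra search path."""
    for ancestor in importing_file.parents:
        if (ancestor / "pyproject.toml").is_file():
            # Stop at the first pyproject.toml; we deliberately don't keep
            # walking further up.
            return ancestor
    return None
```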
Running locally, this fixes every broken-for-workspace-reasons import in
pyx's workspace!
* Fixes https://github.com/astral-sh/ty/issues/1539
* Improves https://github.com/astral-sh/ty/issues/839
## Test Plan
The workspace tests see a huge improvement on most absolute imports.
Fixes https://github.com/astral-sh/ty/issues/1716.
## Test plan
I added a corpus snippet that causes us to panic on `main` (I tested by
running `cargo run -p ty_python_semantic --test=corpus` without the fix
applied).
## Summary
This PR re-implements [return-in-generator
(B901)](https://docs.astral.sh/ruff/rules/return-in-generator/#return-in-generator-b901)
for async generators as a semantic syntax error. This is not a syntax
error for sync generators, so we'll need to preserve both the lint rule
and the syntax error in this case.
It also updates B901 and the new implementation to catch cases where the
generator's `yield` or `yield from` expression is part of another
statement, as in:
```py
def foo():
    return (yield)
```
These were previously not caught because we only looked for
`Stmt::Expr(Expr::Yield)` in `visit_stmt` instead of visiting `yield`
expressions directly. I think this modification is within the spirit of
the rule and safe to try out since the rule is in preview.
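To illustrate the split described above (a hedged sketch, not taken from the test fixtures):
```py
# Sync generator: returning a value alongside `yield` is legal syntax (the
# value ends up on StopIteration), so this stays a lint rule (B901, preview).
def sync_gen():
    x = yield 1
    return x

# Async generator: `return` with a value is rejected by the interpreter, so it
# is now reported as a semantic syntax error.
async def async_gen():
    x = yield 1
    return x  # error: `return` with value in async generator
```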
## Test Plan
I have written tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Star-imports don't just affect the state of the symbols that they pull in;
they can also affect the state of members that are associated with those
symbols. For example, if `obj.attr` was previously narrowed from `int |
None` to `int`, and a star-import now overwrites `obj`, then the
narrowing on `obj.attr` should be "reset".
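A hypothetical illustration (the modules `a` and `b` are made up; `a.obj` has an attribute `attr: int | None`, and `b` may re-export `obj`):
```py
# main.py
from a import obj

assert obj.attr is not None
reveal_type(obj.attr)  # revealed: int (narrowed)

from b import *  # may rebind `obj`...

reveal_type(obj.attr)  # ...so the narrowing on `obj.attr` must be reset here
```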
This PR keeps track of the state of associated members during star
imports and properly models the flow of their corresponding state
through the control flow structure that we artificially create for
star-imports.
See [this
comment](https://github.com/astral-sh/ty/issues/1355#issuecomment-3607125005)
for an explanation why this caused ty to see certain `asyncio` symbols
as not being accessible on Python 3.14.
closes https://github.com/astral-sh/ty/issues/1355
## Ecosystem impact
```diff
async-utils (https://github.com/mikeshardmind/async-utils)
- src/async_utils/bg_loop.py:115:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
- Found 30 diagnostics
+ Found 29 diagnostics
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ mitmproxy/utils/asyncio_utils.py:96:60: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
- test/conftest.py:37:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
```
All of these seem to be correct: they give us a different type for
`asyncio` symbols that are now imported from different
`sys.version_info` branches (where we previously failed to recognize
some of these as statically true/false).
```diff
dd-trace-py (https://github.com/DataDog/dd-trace-py)
- ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Coroutine[Any, Any, _T@create_task] | Generator[Any, None, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
+ ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Generator[Any, None, _T@create_task] | Coroutine[Any, Any, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
```
Similar, but only results in a diagnostic change.
## Test Plan
Added a regression test
* origin/main:
[ty] Reachability constraints: minor documentation fixes (#21774)
[ty] Fix non-determinism in `ConstraintSet.specialize_constrained` (#21744)
[ty] Improve `@override`, `@final` and Liskov checks in cases where there are multiple reachable definitions (#21767)
[ty] Extend `invalid-explicit-override` to also cover properties decorated with `@override` that do not override anything (#21756)
[ty] Enable LRU collection for parsed module (#21749)
[ty] Support typevar-specialized dynamic types in generic type aliases (#21730)
Add token based `parenthesized_ranges` implementation (#21738)
[ty] Default-specialization of generic type aliases (#21765)
[ty] Suppress false positives when `dataclasses.dataclass(...)(cls)` is called imperatively (#21729)
[syntax-error] Default type parameter followed by non-default type parameter (#21657)
This fixes a non-determinism that we were seeing in the constraint set
tests in https://github.com/astral-sh/ruff/pull/21715.
In this test, we create the following constraint set, and then try to
create a specialization from it:
```
(T@constrained_by_gradual_list = list[Base])
∨
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
```
That is, `T` is either specifically `list[Base]`, or it's any `list`.
Our current heuristics say that, absent other restrictions, we should
specialize `T` to the more specific type (`list[Base]`).
In the correct test output, we end up creating a BDD that looks like
this:
```
(T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ (Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```
In the incorrect output, the BDD looks like this:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```
The difference is the ordering of the two individual constraints. Both
constraints appear in the first BDD, but the second BDD only contains `T
is any list`. If we were to force the second BDD to contain both
constraints, it would look like this:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ never
```
This is the standard shape for an OR of two constraints. However! Those
two constraints are not independent of each other! If `T` is
specifically `list[Base]`, then it's definitely also "any `list`". From
that, we can infer the contrapositive: that if `T` is not any list, then
it cannot be `list[Base]` specifically. When we encounter impossible
situations like that, we prune that path in the BDD, and treat it as
`false`. That rewrites the second BDD to the following:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
┡━₁ never <-- IMPOSSIBLE, rewritten to never
└─₀ never
```
We then would see that that BDD node is redundant, since both of its
outgoing edges point at the `never` node. Our BDDs are _reduced_, which
means we have to remove that redundant node, resulting in the BDD we saw
above:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never <-- redundant node removed
```
The end result is that we were "forgetting" about the `T = list[Base]`
constraint, but only for some BDD variable orderings.
To fix this, I'm leaning in to the fact that our BDDs really do need to
"remember" all of the constraints that they were created with. Some
combinations might not be possible, but we now have the sequent map,
which is quite good at detecting and pruning those.
So now our BDDs are _quasi-reduced_, which just means that redundant
nodes are allowed. (At first I was worried that allowing redundant nodes
would be an unsound "fix the glitch". But it turns out they're real!
[This](https://ieeexplore.ieee.org/abstract/document/130209) is the
paper that introduces them, though it's very difficult to read. Knuth
mentions them in §7.1.4 of
[TAOCP](https://course.khoury.northeastern.edu/csu690/ssl/bdd-knuth.pdf),
and [this paper](https://par.nsf.gov/servlets/purl/10128966) has a nice
short summary of them in §2.)
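A minimal sketch of the difference (hypothetical data structures, not ty's real ones): quasi-reduction just means node construction no longer collapses a node whose two outgoing edges agree.
```py
from dataclasses import dataclass
from typing import Union

Bdd = Union["Node", bool]  # True = "always", False = "never"

@dataclass(frozen=True)
class Node:
    constraint: str  # the individual constraint this node tests
    if_true: Bdd
    if_false: Bdd

def mk_node(constraint: str, if_true: Bdd, if_false: Bdd, *, quasi_reduced: bool) -> Bdd:
    if not quasi_reduced and if_true == if_false:
        # A fully-reduced BDD drops "redundant" nodes like this one. After the
        # impossible path was pruned to `never`, this collapse is exactly how
        # the `T = list[Base]` constraint was forgotten.
        return if_true
    return Node(constraint, if_true, if_false)
```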
While we're here, I've added a bunch of `debug` and `trace` level log
messages to the constraint set implementation. I was getting tired of
having to add these by hand over and over. To enable them, just set
`TY_LOG` in your environment, e.g.
```sh
env TY_LOG=ty_python_semantic::types::constraints::SequentMap=trace ty check ...
```
[Note, this has an `internal` label because we are still not using
`specialize_constrained` in anything user-facing yet.]
## Summary
For a type alias like the one below, where `UnknownClass` is something
with a dynamic type, we previously lost track of the fact that this
dynamic type was explicitly specialized *with a type variable*. If that
alias is then later explicitly specialized itself (`MyAlias[int]`), we
would miscount the number of legacy type variables and emit an
`invalid-type-arguments` diagnostic
([playground](https://play.ty.dev/886ae6cc-86c3-4304-a365-510d29211f85)).
```py
T = TypeVar("T")
MyAlias: TypeAlias = UnknownClass[T] | None
```
The solution implemented here is not pretty, but we can hopefully get
rid of it via https://github.com/astral-sh/ty/issues/1711. Also, once we
properly support `ParamSpec` and `Concatenate`, we should be able to
remove some of this code.
This addresses many of the `invalid-type-arguments` false-positives in
https://github.com/astral-sh/ty/issues/1685. With this change, there are
still some diagnostics of this type left. Instead of implementing even
more (rather sophisticated) workarounds for these cases as well, it
might be much easier to wait for full `ParamSpec`/`Concatenate` support
and then try again.
A disadvantage of this implementation is that we lose track of some
`@Todo` types and replace them with `Unknown`. We could spend more
effort and try to preserve them, but I'm unsure if this is the best use
of our time right now.
## Test Plan
New Markdown tests.
## Summary
Implement default-specialization of generic type aliases (implicit or
PEP-613) if they are used in a type expression without an explicit
specialization.
closes https://github.com/astral-sh/ty/issues/1690
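A hypothetical example of the behaviour (a PEP 696 default on a legacy typevar, used through a PEP 613 alias):
```py
from typing_extensions import TypeAlias, TypeVar, reveal_type

T = TypeVar("T", default=int)
MyList: TypeAlias = list[T]

def f(x: MyList) -> None:  # used in a type expression without explicit specialization
    reveal_type(x)  # expected: list[int] -- the alias gets default-specialized
```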
## Typing conformance
```diff
-generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, DefaultStrT]`
```
That's exactly what we want ✔️
All other tests in this file pass as well, with the exception of this
assertion, which is just wrong (at least according to our
interpretation, `type[Bar] != <class 'Bar'>`). I checked that we do
correctly default-specialize the type parameter which is not displayed
in the diagnostic that we raise.
```py
class Bar(SubclassMe[int, DefaultStrT]): ...
assert_type(Bar, type[Bar[str]]) # ty: Type `type[Bar[str]]` does not match asserted type `<class 'Bar'>`
```
## Ecosystem impact
Looks like I should have included this last week 😎
## Test Plan
Updated pre-existing tests and added a few new ones.
## Summary
This PR implements syntax error where a default type parameter is
followed by a non-default type parameter.
https://github.com/astral-sh/ruff/issues/17412#issuecomment-3584088217
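For illustration, using PEP 695 syntax with PEP 696 defaults (made-up snippets), both of these are now flagged:
```py
# error: a type parameter with a default (`T`) is followed by one without (`U`)
class C[T = int, U]: ...

# same for function type-parameter lists
def f[T = int, U](x: T, y: U) -> None: ...
```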
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Signed-off-by: 11happy <soni5happy@gmail.com>
This adds a new `suppression` module to the `ruff_linter` crate, similar
to the suppression
module for ty, to parse comments for ruff suppression directives, such
as `# ruff: disable[CODE]`.
## Summary
Fixes #21750 and a related bug in `PLE1142`. We were not properly
considering generators to be valid `await` contexts, which caused the
`F704` issue. One of the tests I added for this also uncovered an issue
in `PLE1142` for comprehensions nested within async generators because
we were only checking the current scope rather than traversing the
nested context.
## Test Plan
Both of these rules are implemented as semantic syntax errors, so I
added tests (and fixes) in both Ruff and ty.
* origin/main: (67 commits)
Move `Token`, `TokenKind` and `Tokens` to `ruff-python-ast` (#21760)
[ty] Don't confuse multiple occurrences of `typing.Self` when binding bound methods (#21754)
Use our org-wide Renovate preset (#21759)
Delete `my-script.py` (#21751)
[ty] Move `all_members`, and related types/routines, out of `ide_support.rs` (#21695)
[ty] Fix find-references for import aliases (#21736)
[ty] add tests for workspaces (#21741)
[ty] Stop testing the (brittle) constraint set display implementation (#21743)
[ty] Use generator over list comprehension to avoid cast (#21748)
[ty] Add a diagnostic for prohibited `NamedTuple` attribute overrides (#21717)
[ty] Fix subtyping with `type[T]` and unions (#21740)
Use `npm ci --ignore-scripts` everywhere (#21742)
[`flake8-simplify`] Fix truthiness assumption for non-iterable arguments in tuple/list/set calls (`SIM222`, `SIM223`) (#21479)
[`flake8-use-pathlib`] Mark fixes unsafe for return type changes (`PTH104`, `PTH105`, `PTH109`, `PTH115`) (#21440)
[ty] Fix auto-import code action to handle pre-existing import
Enable PEP 740 attestations when publishing to PyPI (#21735)
[ty] Fix find references for type defined in stub (#21732)
Use OIDC instead of codspeed token (#21719)
[ty] Exclude `typing_extensions` from completions unless it's really available
[ty] Fix false positives for `class F(Generic[*Ts]): ...` (#21723)
...
In the following example, there are two occurrences of `typing.Self`,
one for `Foo.foo` and one for `Bar.bar`:
```py
from typing import Self, reveal_type

class Foo[T]:
    def foo(self: Self) -> T:
        raise NotImplementedError

class Bar:
    def bar(self: Self, x: Foo[Self]):
        # SHOULD BE: bound method Foo[Self@bar].foo() -> Self@bar
        # revealed: bound method Foo[Self@bar].foo() -> Foo[Self@bar]
        reveal_type(x.foo)

def f[U: Bar](x: Foo[U]):
    # revealed: bound method Foo[U@f].foo() -> U@f
    reveal_type(x.foo)
```
When accessing a bound method, we replace any occurrences of `Self` with
the bound `self` type.
We were doing this correctly for the second reveal. We would first apply
the specialization, getting `(self: Self@foo) -> U@f` as the signature
of `x.foo`. We would then bind the `self` parameter, substituting
`Self@foo` with `Foo[U@f]` as part of that. The return type was already
specialized to `U@f`, so that substitution had no further effect on the
type that we revealed.
In the first reveal, we would follow the same process, but we confused
the two occurrences of `Self`. We would first apply the specialization,
getting `(self: Self@foo) -> Self@bar` as the method signature. We would
then try to bind the `self` parameter, substituting `Self@foo` with
`Foo[Self@bar]`. However, because we didn't distinguish the two separate
`Self`s, we applied the substitution to the return type as well as to
the `self` parameter.
The fix is to track which particular `Self` we're trying to substitute
when applying the type mapping.
Fixes https://github.com/astral-sh/ty/issues/1713
Here are a bunch of (variously failing and passing) mdtests that reflect
the kinds of issues people encounter when running ty over an entire
workspace without sufficient hand-holding (especially because in the IDE
it is unclear *how* to provide that hand-holding).
The `Display` implementation for constraint sets is brittle, and
deserves a rethink. But later! It's perfectly fine for printf debugging;
we just shouldn't be writing mdtests that depend on any particular
rendering details. Most of these tests can be replaced with an
equivalence check that actually validates that the _behavior_ of two
constraint sets is identical.
## Summary
Fixes false positives in SIM222 and SIM223 where truthiness was
incorrectly assumed for `tuple(x)`, `list(x)`, `set(x)` when `x` is not
iterable.
Fixes #21473.
## Problem
`Truthiness::from_expr` recursively called itself on arguments to
iterable initializers (`tuple`, `list`, `set`) without checking if the
argument is iterable, causing false positives for cases like `tuple(0)
or True` and `tuple("") or True`.
## Approach
Added `is_definitely_not_iterable` helper and updated
`Truthiness::from_expr` to return `Unknown` for non-iterable arguments
(numbers, booleans, None) and string literals when called with iterable
initializers, preventing incorrect truthiness assumptions.
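Illustrative cases (restating the examples above in fixture form):
```py
def examples() -> None:
    # Previously the rule incorrectly assumed the truthiness of these calls:
    print(tuple("") or True)  # `tuple("")` is `()`, which is falsy
    print(tuple(0) or True)   # `tuple(0)` raises TypeError -- not even iterable
    # With the fix, `Truthiness::from_expr` returns `Unknown` here, so no
    # diagnostic is emitted.
```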
## Test Plan
Added test cases to `SIM222.py` and `SIM223.py` for `tuple("")`,
`tuple(0)`, `tuple(1)`, `tuple(False)`, and `tuple(None)` with `or True`
and `and False` patterns.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Marks fixes as unsafe when they change return types (`None` → `Path`,
`str`/`bytes` → `Path`, `str` → `Path`), except when the call is a
top-level expression.
Fixes #21431.
## Problem
Fixes for `os.rename`, `os.replace`, `os.getcwd`/`os.getcwdb`, and
`os.readlink` were marked safe despite changing return types, which can
break code that uses the return value.
## Approach
Added `is_top_level_expression_call` helper to detect when a call is a
top-level expression (return value unused). Updated
`check_os_pathlib_two_arg_calls` and `check_os_pathlib_single_arg_calls`
to mark fixes as unsafe unless the call is a top-level expression.
Updated PTH109 to use the helper for applicability determination.
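As a rough illustration (hypothetical code, not taken from the fixtures):
```py
import os

def example() -> None:
    # The call is a standalone expression statement, so its return value is
    # unused and the rewrite to `Path(...).rename(...)` stays a safe fix.
    os.rename("a.txt", "b.txt")

    # Here the return value is used: `os.rename` returns `None`, while
    # `Path.rename` returns a `Path`, so the fix is now marked unsafe.
    result = os.rename("a.txt", "b.txt")
```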
## Test Plan
Updated snapshots for `preview_full_name.py`, `preview_import_as.py`,
`preview_import_from.py`, and `preview_import_from_as.py` to reflect
unsafe markers.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Previously, the code action to do auto-import on a pre-existing symbol
assumed that the auto-importer would always generate an import
statement. But sometimes an import statement already exists.
A good example of this is the following snippet:
```
import warnings
@deprecated
def myfunc(): pass
```
Specifically, `deprecated` exists in `warnings` but isn't currently
imported. A code action to fix this could feasibly do two
transformations here. One is:
```
import warnings
@warnings.deprecated
def myfunc(): pass
```
Another is:
```
from warnings import deprecated
import warnings
@deprecated
def myfunc(): pass
```
The existing auto-import infrastructure chooses the former, since it
reuses a pre-existing import statement. But this PR chooses the latter
for the case of a code action. I'm not 100% sure this is the correct
choice, but it seems to defer more strongly to what the user has typed.
That is, that they want to use it unqualified because it's what has been
typed. So we should add the necessary import statement to make that
work.
Fixes astral-sh/ty#1668
This works by adding a third module resolution mode that lets the caller
opt into _some_ shadowing of modules that is otherwise not allowed (for
`typing` and `typing_extensions`).
Fixes astral-sh/ty#1658
## Summary
If you manage to create a `typing.GenericAlias` instance without us
knowing how that was created, then we don't know what to do with this in
a type annotation. So it's better to be explicit and show an error
instead of failing silently with a `@Todo` type.
## Test Plan
* New Markdown tests
* Zero ecosystem impact
## Summary
We had tests for this already, but they used generic classes that were
bivariant in their type parameter, and so this case wasn't captured.
closes https://github.com/astral-sh/ty/issues/1702
## Test Plan
Updated Markdown tests
## Summary
These projects from `mypy_primer` were missing from both `good.txt` and
`bad.txt` for some reason. I thought about writing a script that would
verify that `good.txt` + `bad.txt` = `mypy_primer.projects`, but that's
not completely trivial since there are projects like `cpython` only
appear once in `good.txt`. Given that we can hopefully soon get rid of
both of these files (and always run on all projects), it's probably not
worth the effort. We are usually notified of all `mypy_primer` changes.
## Test Plan
CI on this PR
## Summary
The exact behavior around what's allowed vs. disallowed was partly
determined through trial and error at runtime.
I was a little confused by [this
comment](https://github.com/python/cpython/pull/129352) that says
"`NamedTuple` subclasses cannot be inherited from" because in practice
that doesn't appear to error at runtime.
Closes [#1683](https://github.com/astral-sh/ty/issues/1683).
## Summary
This is another small refactor for
https://github.com/astral-sh/ruff/pull/21445 that splits the single
`paramspec.md` into `generics/legacy/paramspec.md` and
`generics/pep695/paramspec.md`.
## Test Plan
Make sure that all mdtests pass.
## Summary
Add support for generic PEP 613 type aliases and generic implicit type
aliases:
```py
from typing import TypeVar
T = TypeVar("T")
ListOrSet = list[T] | set[T]
def _(xs: ListOrSet[int]):
    reveal_type(xs)  # list[int] | set[int]
```
closes https://github.com/astral-sh/ty/issues/1643
closes https://github.com/astral-sh/ty/issues/1629
closes https://github.com/astral-sh/ty/issues/1596
closes https://github.com/astral-sh/ty/issues/573
closes https://github.com/astral-sh/ty/issues/221
## Typing conformance
```diff
-aliases_explicit.py:52:5: error[type-assertion-failure] Type `list[int]` does not match asserted type `@Todo(specialized generic alias in type expression)`
-aliases_explicit.py:53:5: error[type-assertion-failure] Type `tuple[str, ...] | list[str]` does not match asserted type `@Todo(Generic specialization of types.UnionType)`
-aliases_explicit.py:54:5: error[type-assertion-failure] Type `tuple[int, int, int, str]` does not match asserted type `@Todo(specialized generic alias in type expression)`
-aliases_explicit.py:56:5: error[type-assertion-failure] Type `(int, str, /) -> str` does not match asserted type `@Todo(Generic specialization of typing.Callable)`
-aliases_explicit.py:59:5: error[type-assertion-failure] Type `int | str | None | list[list[int]]` does not match asserted type `int | str | None | list[@Todo(specialized generic alias in type expression)]`
```
New true negatives ✔️
```diff
+aliases_explicit.py:41:36: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
-aliases_explicit.py:57:5: error[type-assertion-failure] Type `(int, str, str, /) -> None` does not match asserted type `@Todo(Generic specialization of typing.Callable)`
+aliases_explicit.py:57:5: error[type-assertion-failure] Type `(int, str, str, /) -> None` does not match asserted type `(...) -> Unknown`
```
These require `ParamSpec`
```diff
+aliases_explicit.py:67:24: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+aliases_explicit.py:68:24: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+aliases_explicit.py:69:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_explicit.py:70:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_explicit.py:71:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_explicit.py:102:20: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
```
New true positives ✔️
```diff
-aliases_implicit.py:63:5: error[type-assertion-failure] Type `list[int]` does not match asserted type `@Todo(specialized generic alias in type expression)`
-aliases_implicit.py:64:5: error[type-assertion-failure] Type `tuple[str, ...] | list[str]` does not match asserted type `@Todo(Generic specialization of types.UnionType)`
-aliases_implicit.py:65:5: error[type-assertion-failure] Type `tuple[int, int, int, str]` does not match asserted type `@Todo(specialized generic alias in type expression)`
-aliases_implicit.py:67:5: error[type-assertion-failure] Type `(int, str, /) -> str` does not match asserted type `@Todo(Generic specialization of typing.Callable)`
-aliases_implicit.py:70:5: error[type-assertion-failure] Type `int | str | None | list[list[int]]` does not match asserted type `int | str | None | list[@Todo(specialized generic alias in type expression)]`
-aliases_implicit.py:71:5: error[type-assertion-failure] Type `list[bool]` does not match asserted type `@Todo(specialized generic alias in type expression)`
```
New true negatives ✔️
```diff
+aliases_implicit.py:54:36: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
-aliases_implicit.py:68:5: error[type-assertion-failure] Type `(int, str, str, /) -> None` does not match asserted type `@Todo(Generic specialization of typing.Callable)`
+aliases_implicit.py:68:5: error[type-assertion-failure] Type `(int, str, str, /) -> None` does not match asserted type `(...) -> Unknown`
```
These require `ParamSpec`
```diff
+aliases_implicit.py:76:24: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+aliases_implicit.py:77:24: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+aliases_implicit.py:78:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_implicit.py:79:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_implicit.py:80:29: error[invalid-type-arguments] Too many type arguments: expected 1, got 2
+aliases_implicit.py:81:25: error[invalid-type-arguments] Type `str` is not assignable to upper bound `int | float` of type variable `TFloat@GoodTypeAlias12`
+aliases_implicit.py:135:20: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
```
New true positives ✔️
```diff
+callables_annotation.py:172:19: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+callables_annotation.py:175:19: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+callables_annotation.py:188:25: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
+callables_annotation.py:189:25: error[invalid-type-arguments] Too many type arguments: expected 0, got 1
```
These require `ParamSpec` and `Concatenate`.
```diff
-generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, typing.TypeVar]`
+generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, DefaultStrT]`
```
Favorable diagnostic change ✔️
```diff
-generics_defaults_specialization.py:27:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, bool]` does not match asserted type `@Todo(specialized generic alias in type expression)`
```
New true negative ✔️
```diff
-generics_defaults_specialization.py:30:1: error[non-subscriptable] Cannot subscript object of type `<class 'SomethingWithNoDefaults[int, typing.TypeVar]'>` with no `__class_getitem__` method
+generics_defaults_specialization.py:30:15: error[invalid-type-arguments] Too many type arguments: expected between 0 and 1, got 2
```
Correct new diagnostic ✔️
```diff
-generics_variance.py:175:25: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:175:35: error[non-subscriptable] Cannot subscript object of type `<class 'Co[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:179:29: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:179:39: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:183:21: error[non-subscriptable] Cannot subscript object of type `<class 'Co[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:183:27: error[non-subscriptable] Cannot subscript object of type `<class 'Co[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:187:25: error[non-subscriptable] Cannot subscript object of type `<class 'Co[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:187:31: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:191:33: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:191:43: error[non-subscriptable] Cannot subscript object of type `<class 'Co[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:191:49: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:196:5: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:196:15: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
-generics_variance.py:196:25: error[non-subscriptable] Cannot subscript object of type `<class 'Contra[typing.TypeVar]'>` with no `__class_getitem__` method
```
One of these should apparently be an error, but not of this kind, so
this is good ✔️
```diff
-specialtypes_type.py:152:16: error[invalid-type-form] `typing.TypeVar` is not a generic class
-specialtypes_type.py:156:16: error[invalid-type-form] `typing.TypeVar` is not a generic class
```
Good, those were false positives. ✔️
I skipped the analysis for everything involving `TypeVarTuple`.
## Ecosystem impact
**[Full report with detailed
diff](https://david-generic-implicit-alias.ecosystem-663.pages.dev/diff)**
Previous iterations of this PR showed all kinds of problems. In its
current state, I do not see any large systematic problems, but it is
hard to tell with 5k diagnostic changes.
## Performance
* There is a huge 4x regression in `colour-science/colour`, related to
[this large
file](https://github.com/colour-science/colour/blob/develop/colour/io/luts/tests/test_lut.py)
with [many assignments of hard-coded arrays (lists of lists) to
`np.NDArray`
types](83e754c8b6/colour/io/luts/tests/test_lut.py (L701-L781))
that we now understand. We now take ~2 seconds to check this file, so
definitely not great, but maybe acceptable for now.
## Test Plan
Updated and new Markdown tests
## Summary
This is a bugfix for subtyping of `type[Any]` / `type[T]` and protocols.
## Test Plan
Regression test that will only be really meaningful once
https://github.com/astral-sh/ruff/pull/21553 lands.
## Summary
**This is the final goto-targets with missing
goto-definition/declaration implementations!
You can now theoretically click on all the user-defined names in all the
syntax. 🎉**
This adds:
* goto definition/declaration on patterns/typevars
* find-references/rename on patterns/typevars
* fixes syntax highlighting of `*rest` patterns
This notably *does not* add:
* goto-type for patterns/typevars
* hover for patterns/typevars (because that's just goto-type for names)
Also I realized we were at the precipice of one of the great GotoTarget
sins being resolved, and so I made import aliases also resolve to a
ResolvedDefinition. This removes a ton of cruft and prevents further
backsliding.
Note however that import aliases are, in general, completely jacked up
when it comes to find-references/renames (both before and after this
PR). Previously you could try to rename an import alias and it just
wouldn't do anything. With this change we instead refuse to even let you
try to rename it.
Sorting out why import aliases are jacked up is an ongoing thing I hope
to handle in a followup.
## Test Plan
You'll surely not regret checking in 86 snapshot tests
## Summary
* Fixes https://github.com/astral-sh/ty/issues/1650
* Part of https://github.com/astral-sh/ty/issues/1610
We now handle:
* `.. warning::` (and friends) by bolding the line and rendering the
block as normal (non-code) text
* `.. code::` (and friends) by treating it the same as `::` (fully
deleted if seen, introduce a code block)
* `.. code:: lang` (and friends) by letting it set the language on the
codefence
* `.. versionchanged:: 1.2.3` (and friends) by rendering it like
`warning` but with the version included and italicized
* `.. dsfsdf-unknown:: (lang)` by assuming it's the same as `.. code::
(lang)` (see the example docstring below)
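An illustrative, made-up docstring exercising several of the directives handled above:
```py
def blend(a: float, b: float) -> float:
    """Blend two values.

    .. warning::
       The result is clamped to ``[0, 1]``.

    .. versionchanged:: 1.2.3
       Clamping was added.

    .. code:: python

       blend(0.25, 0.75)
    """
    return min(max((a + b) / 2, 0.0), 1.0)
```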
## Test Plan
Snapshots added/updated. I also deleted a bunch of useless checks on
plaintext rendering. It's important for some edge-case tests but not for
the vast majority of tests.
## Summary
This PR adds a new `db` parameter to `Parameters::new` for
https://github.com/astral-sh/ruff/pull/21445. This change creates a
large diff so thought to split it out as it's just a mechanical change.
The `Parameters::new` method not only creates the `Parameters` but also
analyses the parameters to check what kind it is. For `ParamSpec`
support, it's going to require the `db` to check whether the annotated
type is `ParamSpec` or not. For the current set of parameters that isn't
required because it's only checking whether it's dynamic or not which
doesn't require `db`.
## Summary
Originally I planned to feed this in as a `fix` but I realized that we
probably don't want to be trying to resolve import suggestions while
we're doing type inference. Thus I implemented this as a fallback when
there's no fixes on a diagnostic, which can use the full lsp machinery.
Fixes https://github.com/astral-sh/ty/issues/1552
## Test Plan
Works in the IDE, added some e2e tests.
## Summary
Previously if an explicit specialization failed (e.g. wrong number of
type arguments or violates an upper bound) we just inferred `Unknown`
for the entire type. This actually caused us to panic on a case of a
recursive upper bound with invalid specialization; the upper bound would
oscillate indefinitely in fixpoint iteration between `Unknown` and the
given specialization. This could be fixed with a cycle recovery
function, but in this case there's a simpler fix: if we infer
`C[Unknown]` instead of `Unknown` for an invalid attempt to specialize
`C`, that allows fixpoint iteration to quickly converge, as well as
giving a more precise type inference.
Other type checkers actually just go with the attempted specialization
even if it's invalid. So if `C` has a type parameter with upper bound
`int`, and you say `C[str]`, they'll emit a diagnostic but just go with
`C[str]`. Even weirder, if `C` has a single type parameter and you say
`C[str, bytes]`, they'll just go with `C[str]` as the type. I'm not
convinced by this approach; it seems odd to have specializations
floating around that explicitly violate the declared upper bound, or in
the latter case aren't even the specialization the annotation requested.
I prefer `C[Unknown]` for this case.
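A sketch of the behaviour described above (hypothetical class, PEP 695 syntax; the future import keeps the invalid subscriptions from being evaluated at runtime):
```py
from __future__ import annotations

class C[T: int]: ...

# Both annotations are invalid specializations; a diagnostic is emitted, and
# the inferred type is now `C[Unknown]` rather than `Unknown`.
x: C[str]       # violates the upper bound `int`
y: C[int, str]  # too many type arguments
```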
Fixing this revealed an issue with `collections.namedtuple`, which
returns `type[tuple[Any, ...]]`. Due to
https://github.com/astral-sh/ty/issues/1649 we consider that to be an
invalid specialization. So previously we returned `Unknown`; after this
PR it would be `type[tuple[Unknown]]`, leading to more false positives
from our lack of functional namedtuple support. To avoid that I added an
explicit Todo type for functional namedtuples for now.
## Test Plan
Added and updated mdtests.
The conformance suite changes have to do with `ParamSpec`, so no
meaningful signal there.
The ecosystem changes appear to be the expected effects of having more
precise type information (including occurrences of known issues such as
https://github.com/astral-sh/ty/issues/1495 ). Most effects are just
changes to types in diagnostics.
## Summary
Lots of Ruff rules encourage you to make changes that might then cause
ty to start complaining about Liskov violations. Most of these Ruff
rules already refrain from complaining about a method if they see that
the method is decorated with `@override`, but this usually isn't
documented. This PR updates the docs of many Ruff rules to note that
they refrain from complaining about `@override`-decorated methods, and
also adds a similar note to the ty `invalid-method-override`
documentation.
Helps with
https://github.com/astral-sh/ty/issues/1644#issuecomment-3581663859
## Test Plan
- `uvx prek run -a` locally
- CI on this PR
## Summary
This PR updates the explicit specialization logic to avoid using the
call machinery.
Previously, the logic would use the call machinery by converting the
list of type variables into a `Binding` with a single `Signature` where
all the type variables are positional-only parameters with bounds and
constraints as the annotated type and the default type as the default
parameter value. This has the advantage that it doesn't need to
implement any specific logic but the disadvantages are subpar diagnostic
messages as it would use the ones specific to a function call. But, an
important disadvantage is that the kind of type variable is lost in this
translation which becomes important in #21445 where a `ParamSpec` can
specialize into a list of types which is provided using list literal.
For example,
```py
class Foo[T, **P]: ...
Foo[int, [int, str]]
```
This PR converts the logic to use a simple loop with `zip_longest`, since
type variables and their corresponding type arguments map on a 1-1
basis. They cannot be specified using keyword arguments either; e.g.,
`dict[_VT=str, _KT=int]` is invalid.
This PR also makes an initial attempt to improve the diagnostic message
to specifically target the specialization part by using words like "type
argument" instead of just "argument" and including information like the
type variable, bounds, and constraints. Further improvements can be made
by highlighting the type variable definition or the bounds / constraints
as a sub-diagnostic but I'm going to leave that as a follow-up.
## Test Plan
Update messages in existing test cases.
## Summary
This caused "deterministic but chaotic" ordering of some intersection
types in diagnostics. When calling a union, we infer the argument type
once per matching parameter type, intersecting the inferred types for
the argument expression, and we did that in an unpredictable order.
We do need a hashset here for de-duplication. Sometimes we call large
unions where the type for a given parameter is the same across the
union; in that case we should infer the argument once per parameter type,
not once per union element. So use an `FxIndexSet` instead of an `FxHashSet`.
## Test Plan
With this change, switching between `main` and
https://github.com/astral-sh/ruff/pull/21646 no longer changes the
ordering of the intersection type in the test in
cca3a8045d
## Summary
Derived from #17371
Fixes astral-sh/ty#256
Fixes https://github.com/astral-sh/ty/issues/1415
Fixes https://github.com/astral-sh/ty/issues/1433
Fixes https://github.com/astral-sh/ty/issues/1524
Properly handles any kind of recursive inference and prevents panics.
---
Let me explain techniques for converging fixed-point iterations during
recursive type inference.
There are two types of type inference that naively don't converge
(causing salsa to panic): divergent type inference and oscillating type
inference.
### Divergent type inference
Divergent type inference occurs when eagerly expanding a recursive type.
A typical example is this:
```python
class C:
    def f(self, other: "C"):
        self.x = (other.x, 1)
reveal_type(C().x) # revealed: Unknown | tuple[Unknown | tuple[Unknown | tuple[..., Literal[1]], Literal[1]], Literal[1]]
```
To solve this problem, we have already introduced `Divergent` types
(https://github.com/astral-sh/ruff/pull/20312). `Divergent` types are
treated as a kind of dynamic type [^1].
```python
Unknown | tuple[Unknown | tuple[Unknown | tuple[..., Literal[1]], Literal[1]], Literal[1]]
=> Unknown | tuple[Divergent, Literal[1]]
```
When a query function that returns a type enters a cycle, it sets
`Divergent` as the cycle initial value (instead of `Never`). Then, in
the cycle recovery function, it reduces the nesting of types containing
`Divergent` to converge.
```python
0th: Divergent
1st: Unknown | tuple[Divergent, Literal[1]]
2nd: Unknown | tuple[Unknown | tuple[Divergent, Literal[1]], Literal[1]]
=> Unknown | tuple[Divergent, Literal[1]]
```
Each cycle recovery function for each query should operate only on the
`Divergent` type originating from that query.
For this reason, while `Divergent` appears the same as `Any` to the
user, it internally carries some information: the location where the
cycle occurred. Previously, we roughly identified this by having the
scope where the cycle occurred, but with the update to salsa, functions
that create cycle initial values can now receive a `salsa::Id`
(https://github.com/salsa-rs/salsa/pull/1012). This is an opaque ID that
uniquely identifies the cycle head (the query that is the starting point
for the fixed-point iteration). `Divergent` now has this `salsa::Id`.
### Oscillating type inference
Now, another thing to consider is oscillating type inference.
Oscillating type inference arises from the fact that monotonicity is
broken. Monotonicity here means that for a query function, if it enters
a cycle, the calculation must start from a "bottom value" and progress
towards the final result with each cycle. Monotonicity breaks down in
type systems that have features like overloading and overriding.
```python
class Base:
    def flip(self) -> "Sub":
        return Sub()

class Sub(Base):
    def flip(self) -> "Base":
        return Base()

class C:
    def __init__(self, x: Sub):
        self.x = x

    def replace_with(self, other: "C"):
        self.x = other.x.flip()

reveal_type(C(Sub()).x)
```
Naive fixed-point iteration results in `Divergent -> Sub -> Base -> Sub
-> ...`, which oscillates forever without diverging or converging. To
address this, the salsa API has been modified so that the cycle recovery
function receives the value of the previous cycle
(https://github.com/salsa-rs/salsa/pull/1012).
The cycle recovery function returns the union type of the current cycle
and the previous cycle. In the above example, the result type for each
cycle is `Divergent -> Sub -> Base (= Sub | Base) -> Base`, which
converges.
The final result of oscillating type inference does not contain
`Divergent` because `Divergent` that appears in a union type can be
removed, as is clear from the expansion. This simplification is
performed at the same time as nesting reduction.
```
T | Divergent = T | (T | (T | ...)) = T
```
[^1]: In theory, it may be possible to strictly treat types containing
`Divergent` types as recursive types, but we probably shouldn't go that
deep yet. (AFAIK, there are no PEPs that specify how to handle
implicitly recursive types that aren't named by type aliases)
## Performance analysis
A happy side effect of this PR is that we've observed widespread
performance improvements!
This is likely due to the removal of the `ITERATIONS_BEFORE_FALLBACK`
and max-specialization depth trick
(https://github.com/astral-sh/ty/issues/1433,
https://github.com/astral-sh/ty/issues/1415), which means we reach a
fixed point much sooner.
## Ecosystem analysis
The changes look good overall.
You may notice changes in the converged values for recursive types;
this is because the way recursive types are normalized has been changed.
Previously, types containing `Divergent` types were normalized by
replacing them with the `Divergent` type itself, but in this PR, types
with a nesting level of 2 or more that contain `Divergent` types are
normalized by replacing them with a type with a nesting level of 1. This
means that information about the non-divergent parts of recursive types
is no longer lost.
```python
# previous
tuple[tuple[Divergent, int], int] => Divergent
# now
tuple[tuple[Divergent, int], int] => tuple[Divergent, int]
```
The false positive error introduced in this PR occurs in class
definitions with self-referential base classes, such as the one below.
```python
from typing_extensions import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class Base2(Generic[T, U]): ...
# TODO: no error
# error: [unsupported-base] "Unsupported class base with type `<class 'Base2[Sub2, U@Sub2]'> | <class 'Base2[Sub2[Unknown], U@Sub2]'>`"
class Sub2(Base2["Sub2", U]): ...
```
This is due to the lack of support for unions of MROs, or because cyclic
legacy generic types are not inferred as generic types early in the
query cycle.
## Test Plan
All samples listed in astral-sh/ty#256 are tested and passed without any
panic!
## Acknowledgments
Thanks to @MichaReiser for working on bug fixes and improvements to
salsa for this PR. @carljm also contributed early on to the discussion
of the query convergence mechanism proposed in this PR.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
We now use the type context for a lot of things, so re-inferring without
type context actually makes diagnostics more confusing (in most cases).
Autocomplete suggestions were not suppressed correctly for some
variable bindings if the parameter name being typed matched a
keyword. E.g. `def f(foo<CURSOR>` was handled
correctly but not `def f(in<CURSOR>`.
Previously we extracted the entire token as the query
independently of the cursor position. By not doing that
you avoid having to do special range handling
to figure out the start position of the current token.
It's likely also more intuitive from a user perspective
to only consider characters left of the cursor when
suggesting autocompletions.
## Summary
The implementation here is to just record the idents of these statements
in `scopes_by_expression` (which already supported idents but only ones
that happened to appear in expressions), so that `definitions_for_name`
Just Works.
goto-type (and therefore hover) notably does not work on these
statements because the typechecker does not record info for them. I am
tempted to just introduce `type_for_name` which runs
`definitions_for_name` to find other expressions and queries the
inferred type... but that's a bit whack because it won't be the computed
type at the right point in the code. It probably wouldn't be
particularly expensive to just compute/record the type at those nodes,
as if they were a load, because global/nonlocal is so scarce?
## Test Plan
Snapshot tests added/re-enabled.
## Summary
Don't allow edits of some more invalid syntax types.
## Test Plan
Add a test for `x = Literal['a']` (and similar) to show we don't allow
edits.
Fixes https://github.com/astral-sh/ty/issues/1009
## Summary
This adds support for:
* semantic-tokens (syntax highlighting)
* goto-type **(partially implemented, but want to land as-is)**
* goto-declaration
* goto-definition (falls out of goto-declaration)
* hover **(limited by goto-type)**
* find-references
* rename-references (falls out of find-references)
There are 3 major things being introduced here:
* `TypeInferenceBuilder::string_annotations` is a `FxHashSet` of exprs
which were determined to be string annotations during inference. It's
bubbled up in `extras` to hopefully minimize the overhead as in most
contexts it's empty.
* Very happy to hear if this is too hacky and if I should do something
better, but it's IMO important that we get an authoritative answer on
whether something is a string annotation or not.
* `SemanticModel::enter_string_annotation` checks if the expr was marked
by `TypeInferenceBuilder::string_annotations` and then parses the subast
and produces a sub-SemanticModel that sets
`SemanticModel::in_string_annotation_expr`. This expr will be used by
the model whenever we need to query e.g. the scope of the current
expression (otherwise the code will constantly panic as the subast nodes
are not in the current File's AST)
* This hazard consequently encouraged me to refactor a bunch of code to
replace uses of file/db with SemanticModel to minimize hazards (it is no
longer as safe to randomly materialize a SemanticModel in the middle of
analysis; you need to thread through the one you have in case it has
`in_string_annotation_expr` set).
* `GotoTarget::StringAnnotationSubexpr` (and a semantic-tokens impl)
which involves invoking `SemanticModel::enter_string_annotation` before
invoking the same kind of subroutine a normal expression would.
* goto-type (and consequently displaying the type in hover) is the main
hole here, because we can only get the type iff the string annotation is
the entire subexpression (i.e. we can get the type of `"int"` but not
the parts of `"int | str"`). This is shippable IMO.
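For a feel of what this enables (hypothetical snippet):
```py
from typing import Optional

class MyClass: ...

def f(x: "Optional[MyClass]") -> None: ...
# Goto-definition, find-references, and semantic highlighting now work on
# `Optional` and `MyClass` inside the string annotation; hover/goto-type is
# still limited to the case where the whole string is a single expression
# (e.g. `"MyClass"`).
```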
## Test Plan
Messed around in IDE, wrote a ton of tests.
## Summary
This PR adds a code action to remove unused ignore comments.
This PR also includes some infrastructure boilerplate to set up code
actions in the editor:
* Extend `snapshot-diagnostics` to render fixes
* Render fixes when using `--output-format=full`
* Hook up edits and the code action request in the LSP
* Add the `Unnecessary` tag to `unused-ignore-comment` diagnostics
* Group multiple unused codes into a single diagnostic
The same fix can be used on the CLI once we add `ty fix`
Note: `unused-ignore-comment` is currently disabled by default.
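A minimal example of the diagnostic the code action targets:
```py
# Nothing on this line needs suppressing, so ty reports
# `unused-ignore-comment` ("Unused blanket `type: ignore` directive");
# the new code action removes the comment.
x: int = 1  # type: ignore
```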
https://github.com/user-attachments/assets/f9e21087-3513-4156-85d7-a90b1a7a3489
## Summary
Building on https://github.com/astral-sh/ruff/pull/21436.
There's nothing conceptually more complicated about this, it just
requires its own set of tests and its own subdiagnostic hint.
I also uncovered another inconsistency between mypy/pyright/pyrefly,
which is fun. In this case, I suggest we go with pyright's behaviour.
## Test Plan
mdtests/snapshots
## Summary
For something like this:
```py
from typing import Callable
def my_lossy_decorator(fn: Callable[..., int]) -> Callable[..., int]:
    return fn

class MyClass:
    @my_lossy_decorator
    def method(self) -> int:
        return 42
```
we will currently infer the type of `MyClass.method` as a function-like
`Callable`, but we will infer the type of `MyClass().method` as a
`Callable` that is _not_ function-like. That's because a `CallableType`
currently "forgets" whether it was function-like or not during the
`bound_self` transformation:
a57e291311/crates/ty_python_semantic/src/types.rs (L10985-L10987)
This seems incorrect, and it's quite different from what we do when
binding the `self` parameter of `FunctionLiteral` types: `BoundMethod`
types are all seen as subtypes of function-like `Callable` supertypes --
here's `BoundMethodType::into_callable_type`:
a57e291311/crates/ty_python_semantic/src/types.rs (L10844-L10860)
The bug here is also causing lots of false positives in the ecosystem
report on https://github.com/astral-sh/ruff/pull/21611: a decorated
method on a subclass is currently not seen as validly overriding an
undecorated method with the same signature on a superclass, because the
undecorated superclass method is seen as function-like after binding
`self` whereas the decorated subclass method is not.
Fixing the bug required adding a new API in `protocol_class.rs`, because
it turns out that for our purposes in protocol subtyping/assignability,
we really do want a callable type to forget its function-like-ness when
binding `self`.
I initially tried out this change without changing anything in
`protocol_class.rs`. However, it resulted in many ecosystem false
positives and new false positives on the typing conformance test suite.
This is because it would mean that no protocol with a `__call__` method
would ever be seen as a subtype of a `Callable` type, since the
`__call__` method on the protocol would be seen as being function-like
whereas the `Callable` type would not be seen as function-like.
## Test Plan
Added an mdtest that fails on `main`
Before, we would collapse any constraint of the form `Never ≤ T ≤
object` down to the "always true" constraint set. This is correct in
terms of BDD semantics, but loses information, since "not constraining a
typevar at all" is different than "constraining a typevar to take on any
type". Once we get to specialization inference, we should fall back on
the typevar's default for the former, but not for the latter.
This is much easier to support now that we have a sequent map, since we
need to treat `¬(Never ≤ T ≤ object)` as being impossible, and prune it
when we walk through BDD paths, just like we do for other impossible
combinations.
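As a rough sketch of the distinction (using PEP 696 typevar defaults; the revealed types are my expectation from the description above, not taken from this change):
```py
from typing import reveal_type

def f[T = str](x: T | None = None) -> T:
    raise NotImplementedError

reveal_type(f())          # T is not constrained at all -> fall back to the default: `str`
reveal_type(f(object()))  # T is constrained to `object` -> the default is not used
```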
## Summary
Resolves
https://github.com/astral-sh/ty/issues/317#issuecomment-3567398107.
I can't get the auto import working great.
I haven't added many places where we specify that the type display is
invalid syntax.
## Test Plan
Nothing yet
This patch updates our protocol assignability checks to substitute for
any occurrences of `typing.Self` in method signatures, replacing it with
the class being checked for assignability against the protocol.
This requires a new helper method on signatures, `apply_self`, which
substitutes occurrences of `typing.Self` _without_ binding the `self`
parameter.
We also update the `try_upcast_to_callable` method. Before, it would
return a `Type`, since certain types upcast to a _union_ of callables,
not to a single callable. However, even in that case, we know that every
element of the union is a callable. We now return a vector of
`CallableType`. (Actually a smallvec to handle the most common case of a
single callable; and wrapped in a new type so that we can provide helper
methods.) If there is more than one element in the result, it represents
a union of callables. This lets callers get at the `CallableType`
instances in a more type-safe way. (This makes it easier for our
protocol checking code to call the new `apply_self` helper.) We also
provide an `into_type` method so that callers that really do want a
`Type` can get the original result easily.
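A small sketch of the kind of check this enables (names are illustrative):
```py
from typing import Protocol, Self

class Copyable(Protocol):
    def copy(self) -> Self: ...

class Point:
    def copy(self) -> "Point":
        return Point()

# Checking this assignment requires substituting `typing.Self` in
# `Copyable.copy` with `Point` (the new `apply_self` step), without
# binding the `self` parameter.
p: Copyable = Point()
```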
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
This only applies to items that have a type associated with them. That
is, things that are already in scope. For items that don't have a type
associated with them (i.e., suggestions from auto-import), we still
suggest them since we can't know if they're appropriate or not. It's not
quite clear on how best to improve here for the auto-import case. (Short
of, say, asking for the type of each such symbol. But the performance
implications of that aren't known yet.)
Note that because of auto-import, we were still suggesting
`NotImplemented` even though astral-sh/ty#1262 specifically cites it as
the motivating example that we *shouldn't* suggest. This was occurring
because auto-import was including symbols from the `builtins` module,
even though those are actually already in scope. So this PR also gets
rid of those suggestions from auto-import.
Overall, this means that, at least, `raise NotImpl` won't suggest
`NotImplemented`.
Fixes astral-sh/ty#1262
## Summary
Fixes https://github.com/astral-sh/ty/issues/1620. #20909 added hints if
you do something like this and your Python version is set to 3.10 or
lower:
```py
import typing
typing.LiteralString
```
And we also have hints if you try to do something like this and your
Python version is set too low:
```py
from stdlib_module import new_submodule
```
But we don't currently have any subdiagnostic hint if you do something
like _this_ and your Python version is set too low:
```py
from typing import LiteralString
```
This PR adds that hint!
## Test Plan
snapshots
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
## Summary
This PR adds a failing mdtest for the panic in
https://github.com/astral-sh/ty/issues/1587. The added snippet currently
panics with this query stacktrace:
```
error[panic]: Panicked at /Users/alexw/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/17bc55d/src/function/execute.rs:321:21 when checking `/Users/alexw/dev/ruff/foo.py`: `ClassLiteral < 'db >::explicit_bases_(Id(4c09)): execute: too many cycle iterations`
info: This indicates a bug in ty.
info: If you could open an issue at https://github.com/astral-sh/ty/issues/new?title=%5Bpanic%5D, we'd be very appreciative!
info: Platform: macos aarch64
info: Version: ruff/0.14.5+105 (d24c891a4 2025-11-22)
info: Args: ["target/debug/ty", "check", "foo.py", "--python-version=3.14"]
info: run with `RUST_BACKTRACE=1` environment variable to show the full backtrace information
info: query stacktrace:
0: cached_protocol_interface(Id(6805))
at crates/ty_python_semantic/src/types/protocol_class.rs:795
1: is_equivalent_to_object_inner(Id(8003))
at crates/ty_python_semantic/src/types/instance.rs:667
2: infer_deferred_types(Id(1406))
at crates/ty_python_semantic/src/types/infer.rs:140
cycle heads: infer_definition_types(Id(140b)) -> iteration = 200, TypeVarInstance < 'db >::lazy_bound_(Id(5802)) -> iteration = 200
3: TypeVarInstance < 'db >::lazy_bound_(Id(5803))
at crates/ty_python_semantic/src/types.rs:8827
4: infer_definition_types(Id(140c))
at crates/ty_python_semantic/src/types/infer.rs:94
5: infer_deferred_types(Id(1405))
at crates/ty_python_semantic/src/types/infer.rs:140
6: TypeVarInstance < 'db >::lazy_bound_(Id(5802))
at crates/ty_python_semantic/src/types.rs:8827
7: infer_definition_types(Id(140b))
at crates/ty_python_semantic/src/types/infer.rs:94
8: infer_scope_types(Id(1000))
at crates/ty_python_semantic/src/types/infer.rs:70
9: check_file_impl(Id(c00))
at crates/ty_project/src/lib.rs:535
```
It's not totally clear to me how to fix this or to what extent it might
be a bug in our `Protocol` internals rather than a bug in our `TypeVar`
internals. (It's sort of interesting that we're trying to evaluate the
upper bound of any `TypeVar`s here!) @carljm suggested that it would be
a good idea to add a failing mdtest in the meantime to document the
panic, which I agree with.
## Test Plan
I verified that we panic on this snippet, and that the test fails if I
remove the `expect-panic` assertion or if I change the asserted error
message.
I experimented with ways of minimizing the snippet further, but I think
any further minimization takes the snippet further away from something a
user would actually be likely to write -- so I think it is probably
counterproductive. The failing test added in this PR isn't unreasonable
code at the end of the day; I've seen Python like it in the wild.
## Summary
Fixes a panic when parsing IPython escape commands with `Help` kind
(`?`) in expression contexts. The parser now reports an error instead of
panicking.
Fixes #21465.
## Problem
The parser panicked with `unreachable!()` in
`parse_ipython_escape_command_expression` when encountering escape
commands with `Help` kind (`?`) in expression contexts, where only
`Magic` (`%`) and `Shell` (`!`) are allowed.
## Approach
Replaced the `unreachable!()` panic with error handling that adds a
`ParseErrorType::OtherError` and continues parsing, returning a valid
AST node with the error attached.
## Test Plan
Added `test_ipython_escape_command_in_with_statement` and
`test_ipython_help_escape_command_as_expression` to verify the fix.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
* Fixes https://github.com/astral-sh/ty/issues/1011
* Also fixes the fact that we didn't handle `.x` properly *at all* in
hover/goto
It turns out all of our import handling completely ignored the `level`
(number of relative `.`'s) in a `from ..x.y import z` statement. It was
nice seeing how much my understanding of `ty` has improved -- previously
this would have all been opaque to me but now it was just, completely
glaring and blatant.
Fixing this required refactoring all the import code to take the
importing file into consideration. I ended up refactoring a bunch of
code to pass around/require `SemanticModel` more, as it's the natural
API for resolving this kind of import (it actually had an API for this
that was just... dead code, whoops!).
## Summary
As reported in #19757:
While attempting the ISC003 autofix for an expression with explicit string
concatenation, where either operand is a string literal that wraps
across multiple lines (in parentheses), the generated fix caused a
runtime error.
Example:
```
_ = "abc" + (
"def"
"ghi"
)
```
was being auto-fixed to:
```
_ = "abc" (
"def"
"ghi"
)
```
which raised `TypeError: 'str' object is not callable`
This commit makes changes to just report diagnostic - no autofix in such
cases.
Fixes #19757.
## Test Plan
Added example scenarios in
`crates/ruff_linter/resources/test/fixtures/flake8_implicit_str_concat/ISC.py`.
Signed-off-by: Prakhar Pratyush <prakhar1144@gmail.com>
## Summary
Fixes the PLE1141 (`dict-iter-missing-items`) rule to allow fixes for
empty dictionaries unless they have type annotations indicating 2-tuple
keys. Previously, the fix was incorrectly suppressed for all empty dicts
due to vacuous truth in the `all()` function.
Fixes#21289
## Problem Analysis
The `is_dict_key_tuple_with_two_elements` function was designed to
suppress the fix when a dictionary's keys are all 2-tuples, as unpacking
tuple keys directly would change runtime behavior.
However, for empty dictionaries, `iter_keys()` returns an empty
iterator, and `all()` on an empty iterator returns `true` (vacuous
truth). This caused the function to incorrectly suppress fixes for empty
dicts, even when there was no indication that future keys would be
2-tuples.
## Approach
1. **Detect empty dictionaries**: Added a check to identify when a dict
literal has no keys.
2. **Handle annotated empty dicts**: For empty dicts with type
annotations:
- Parse the annotation to check if it's `dict[tuple[T1, T2], ...]` where
the tuple has exactly 2 elements
- Support both PEP 484 (`typing.Dict`, `typing.Tuple`) and PEP 585
(`dict`, `tuple`) syntax
- If tuple keys are detected, suppress the fix (correct behavior)
- Otherwise, allow the fix
3. **Handle unannotated empty dicts**: For empty dicts without
annotations, allow the fix since there's no indication that keys will be
2-tuples.
4. **Preserve existing behavior**: For non-empty dicts, the original
logic is unchanged - check if all existing keys are 2-tuples.
The implementation includes helper functions:
- `is_annotation_dict_with_tuple_keys()`: Checks if a type annotation
specifies dict with tuple keys
- `is_tuple_type_with_two_elements()`: Checks if a type expression
represents a 2-tuple
Test cases were added to verify:
- Empty dict without annotation triggers the error
- Empty dict with `dict[tuple[int, str], bool]` suppresses the error
- Empty dict with `dict[str, int]` triggers the error
- Existing tests remain unchanged
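A sketch of the scenarios covered by the test cases above:
```python
# Empty dict without an annotation: the diagnostic fires and the `.items()`
# fix is now offered.
plain = {}
for k, v in plain:
    print(k, v)

# Empty dict annotated with 2-tuple keys: suppressed, since iterating the
# dict directly and unpacking its tuple keys may be intentional.
keyed: dict[tuple[int, str], bool] = {}
for left, right in keyed:
    print(left, right)
```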
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
PR #21549 introduced a subtle overflow bug that seemed impossible, but
can empirically happen. This PR fixes it by saturating to zero.
I did try to write a regression test for this, but couldn't manage it.
Instead, I'll attach before-and-after screen recordings.
These were added to try to make it clearer that assignability checks
will eventually return more detailed answers than true or false.
However, the constraint set display rendering is still more brittle than
I'd like it to be, and it's more trouble than it's worth to keep them
updated with semantically identical but textually different edits. The
`static_assert`s are sufficient to check correctness, and we can always
add `reveal_type` when needed for further debugging.
## Summary
Extends the `used-dummy-variable` rule
([RUF052](https://docs.astral.sh/ruff/rules/used-dummy-variable/)) to
detect dummy variables that are used within list comprehensions, dict
comprehensions, set comprehensions, and generator expressions, not just
regular for loops and function assignments.
### Problem
Previously, RUF052 only flagged dummy variables (variables with leading
underscores) that were used in function scopes via assignments or
regular for loops. It missed cases where dummy variables were used
within comprehensions:
```python
def example():
    my_list = [{"foo": 1}, {"foo": 2}]
    # These were not detected before:
    [_item["foo"] for _item in my_list]  # Should warn: _item is used
    {_item["key"]: _item["val"] for _item in my_list}  # Should warn: _item is used
    (_item["foo"] for _item in my_list)  # Should warn: _item is used
```
### Solution
- Extended scope checking to include all generator scopes
(`ScopeKind::Generator` with any `GeneratorKind`), i.e. list/dict/set
comprehensions and generator expressions
- Added support for `BindingKind::LoopVar` bindings, which cover loop
variables in both regular for loops and comprehensions
- Refactored the scope validation logic for better readability with a
descriptive `is_allowed_scope` variable
[ISSUE](https://github.com/astral-sh/ruff/issues/19732)
## Test Plan
```bash
cargo test
```
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Statements such as `def foo(p<CURSOR>`,
`def foo[T<CURSOR>` and `for foo<CURSOR>`
should not generate any suggestions as these
cases are introducing new names.
If it's not possible to determine that suggestions should be omitted
using token matching in an easy way, we turn
to traversing the AST to determine the context.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1563
It keeps using the existing token-matching pattern for the easy cases
(nothing typed and the most recent token is a definition token) and
falls back to AST traversal for the slightly more difficult cases where
token matching becomes difficult and error-prone.
## Test Plan
New test cases and sanity-checking in the ty playground
## Summary
This introduces a very bad and naive
python-docstring-flavoured-reStructuredText to github-flavor-markdown
translator. The main goal is to try to preserve a lot of the formatting
and plaintext, progressively enhance the content when we find things we
know about, and escape the text when we find things that might get
corrupt.
Previously I'd broken this out into rendering each different format, but
with this approach you don't really need to?
## Test Plan
Lots of snapshot tests, also messed around in some random stdlib
modules.
This commit essentially does away with all our old heuristic and piecemeal
code for detecting different kinds of import statements. Instead, we
offer one single state machine that does everything. This on its own
fixes a few bugs. For example, `import collections.abc, unico<CURSOR>`
would previously offer global scope completions instead of module
completions.
For the most part though, this commit is a refactoring that preserves
parity. In the next commit, we'll add support for completions on
relative imports.
This is a small refactor that helps centralize the
logic for how we gather, convert and possibly filter
completions.
Some of this logic was spread out before, which
motivated this refactor. Moreover, as part of other
refactoring, I found myself chafing against the
lack of this abstraction.
Refs https://github.com/astral-sh/ty/issues/544
## Summary
Takes a more incremental approach to PEP 613 type alias support (vs
https://github.com/astral-sh/ruff/pull/20107). Instead of eagerly
inferring the RHS of a PEP 613 type alias as a type expression, infer it
as a value expression, just like we do for implicit type aliases, taking
advantage of the same support for e.g. unions and other type special
forms.
The main reason I'm following this path instead of the one in
https://github.com/astral-sh/ruff/pull/20107 is that we've realized that
people do sometimes use PEP 613 type aliases as values, not just as
types (because they are just a normal runtime assignment, unlike PEP 695
type aliases which create an opaque `TypeAliasType`).
This PR doesn't yet provide full support for recursive type aliases
(they don't panic, but they just fall back to `Unknown` at the recursion
point). This is future work.
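A minimal sketch of the two usage patterns this approach accommodates:
```py
from typing import TypeAlias

IntOrStr: TypeAlias = int | str

def f(x: IntOrStr) -> None: ...  # used as a type

aliased = IntOrStr  # used as a value: it is just a normal runtime assignment
```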
## Test Plan
Added mdtests.
Many new ecosystem diagnostics, mostly because we
understand new types in lots of places.
Conformance suite changes are correct.
Performance regression is due to understanding lots of new
types; nothing we do in this PR is inherently expensive.
This is a very conservative minimal implementation of applying overloads
to resolve a callable-type-being-called down to a single function
signature on hover. If we ever encounter a situation where the answer
doesn't simplify down to a single function call, we bail out to preserve
prettier printing of non-raw-Signatures.
The resulting Signatures are still a bit bare, I'm going to try to
improve that in a followup to improve our Signature printing in general.
Fixes https://github.com/astral-sh/ty/issues/73
As far as I know this change is largely non-functional, mostly because
of https://github.com/astral-sh/ty/issues/1601
It's possible some of these like `Type::KnownInstance` produce something
useful sometimes. `LiteralString` is a new introduction, although its
goto-type jumps to `str` which is a bit sad (considering that part of
the SpecialForm discourse for now).
Also wrt the generics testing followup: turns out the snapshot tests
were full of those already.
## Summary
Eagerly evaluate the elements of a PEP 604 union in value position (e.g.
`IntOrStr = int | str`) as type expressions and store the result (the
corresponding `Type::Union` if all elements are valid type expressions,
or the first encountered `InvalidTypeExpressionError`) on the
`UnionTypeInstance`, such that the `Type::Union(…)` does not need to be
recomputed every time the implicit type alias is used in a type
annotation.
This might lead to performance improvements for large unions, but is
also necessary for correctness, because the elements of the union might
refer to type variables that need to be looked up in the scope of the
type alias, not at the usage site.
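A sketch of why the definition scope matters (the legacy `TypeVar` here is illustrative):
```py
from typing import TypeVar

T = TypeVar("T")

# The elements are evaluated eagerly, in the scope where the alias is
# defined, so `T` resolves to the module-level TypeVar above rather than
# to anything in scope at a later usage site.
MaybeList = list[T] | None

def f(x: MaybeList) -> None: ...
```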
## Test Plan
New Markdown tests
This PR generalizes the signature_help system's SignatureWriter which
could get the subspans of function parameters.
We now have TypeDetailsWriter which is threaded between type's display
implementations via a new `fmt_detailed` method that many of the Display
types now have.
With this information we can properly add goto-type targets to our inlay
hints. This also lays groundwork for any future "I want to render a type
but get spans" work.
Also a ton of lifetimes are introduced to avoid things getting conflated
with `'db`.
This PR is broken up into a series of commits:
* Generalizing `SignatureWriter` to `TypeDetailsWriter`, but not using
it anywhere else. This commit was confirmed to be a non-functional
change (no test results changed)
* Introducing `fmt_detailed` everywhere to thread through
`TypeDetailsWriter` and annotate various spans as "being" a given Type
-- this is also where I had to reckon with a ton of erroneous `&'db
self`. This commit was also confirmed to be a non-functional change.
* Finally, actually using the results for goto-type on inlay hints!
* Regenerating snapshots, fixups, etc.
#21414 added the ability to create a specialization from a constraint
set. It handled mutually constrained typevars just fine, e.g. given `T ≤
int ∧ U = T` we can infer `T = int, U = int`.
But it didn't handle _nested_ constraints correctly, e.g. `T ≤ int ∧ U =
list[T]`. Now we do! This requires doing a fixed-point "apply the
specialization to itself" step to propagate the assignments of any
nested typevars, and then a cycle detection check to make sure we don't
have an infinite expansion in the specialization.
This gets at an interesting nuance in our constraint set structure that
@sharkdp has asked about before. Constraint sets are BDDs, and each
internal node represents an _individual constraint_, of the form `lower
≤ T ≤ upper`. `lower` and `upper` are allowed to be other typevars, but
only if they appear "later" in the arbitary ordering that we establish
over typevars. The main purpose of this is to avoid infinite expansion
for mutually constrained typevars.
However, that restriction doesn't help us here, because it only applies
when `lower` and `upper` _are_ typevars, not when they _contain_
typevars. That distinction is important, since it means the restriction
does not affect our expressiveness: we can always rewrite `Never ≤ T ≤
U` (a constraint on `T`) into `T ≤ U ≤ object` (a constraint on `U`).
The same is not true of `Never ≤ T ≤ list[U]` — there is no "inverse" of
`list` that we could apply to both sides to transform this into a
constraint on a bare `U`.
## Summary
Updated `S508` (snmp-insecure-version) and `S509`
(snmp-weak-cryptography) rules to support both old and new PySNMP API
module paths. Previously, these rules only detected the old API path
`pysnmp.hlapi.*`, but now they correctly detect all PySNMP API variants
including `pysnmp.hlapi.asyncio.*`, `pysnmp.hlapi.v1arch.*`,
`pysnmp.hlapi.v3arch.*`, and `pysnmp.hlapi.auth.*`.
Fixes#21364
## Problem Analysis
The `S508` and `S509` rules used exact pattern matching on qualified
names:
- `S509` only matched `["pysnmp", "hlapi", "UsmUserData"]`
- `S508` only matched `["pysnmp", "hlapi", "CommunityData"]`
This meant that newer PySNMP API paths were not detected, such as:
- `pysnmp.hlapi.asyncio.UsmUserData`
- `pysnmp.hlapi.v3arch.asyncio.UsmUserData`
- `pysnmp.hlapi.v3arch.asyncio.auth.UsmUserData`
- `pysnmp.hlapi.auth.UsmUserData`
- Similar variants for `CommunityData` in `S508`
Additionally, the old API path `pysnmp.hlapi.auth.*` was also missing
from both rules.
## Approach
Instead of exact pattern matching, both rules now check if:
1. The qualified name starts with `["pysnmp", "hlapi"]`
2. The qualified name ends with the target class name (`"UsmUserData"`
for `S509`, `"CommunityData"` for `S508`)
This flexible approach matches all PySNMP API paths without hardcoding
each variant, making the rules more maintainable and future-proof.
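For illustration, the kinds of call sites the updated matching now covers (the call arguments shown are illustrative, not taken from this PR):
```python
# Old API path -- already detected before this change:
from pysnmp.hlapi import CommunityData

CommunityData("public", mpModel=0)  # S508: insecure SNMP version

# Newer API path -- now also detected, because the qualified name starts
# with `pysnmp.hlapi` and ends with `CommunityData`:
from pysnmp.hlapi.v3arch.asyncio import CommunityData as NewCommunityData

NewCommunityData("public", mpModel=0)  # S508: insecure SNMP version
```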
## Test Plan
Added comprehensive test cases to both `S508.py` and `S509.py` test
files covering:
- New API paths: `pysnmp.hlapi.asyncio.*`, `pysnmp.hlapi.v1arch.*`,
`pysnmp.hlapi.v3arch.*`
- Old API path: `pysnmp.hlapi.auth.*`
- Both insecure and secure usage patterns
All existing tests pass, and new snapshot tests were added and accepted.
Manual verification confirms both rules correctly detect all PySNMP API
variants.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Fixes https://github.com/astral-sh/ty/issues/1571.
I realised I was overcomplicating things when I described what we should
do in that issue description. The simplest thing to do here is just to
special-case call expressions and short-circuit the call-binding
machinery entirely if we see it's `NotImplemented` being called. It
doesn't really matter if the subdiagnostic doesn't fire when a union is
called and one element of the union is `NotImplemented` -- the
subdiagnostic doesn't need to be exhaustive; it's just to help people in
some common cases.
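Roughly the kind of code that now gets the short-circuited diagnostic (a sketch; the class is made up):
```py
class Meters:
    def __add__(self, other: object):
        if not isinstance(other, Meters):
            # Calling `NotImplemented` fails at runtime; the special-cased
            # subdiagnostic now suggests returning it instead.
            return NotImplemented()
        return Meters()
```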
## Test Plan
Added snapshots
## Summary
The `.expect()` call here:
5dd56264fb/crates/ty_python_semantic/src/types/instance.rs (L816-L827)
is the direct cause of the panic in
https://github.com/astral-sh/ty/issues/1587. This patch gets rid of the
panic by refactoring our `Protocol` enum so that the
`Protocol::FromClass` variant holds a `ProtocolClass` instance rather
than a `ClassType` instance (all the `.expect()` call was doing was
attempting to convert from a `ClassType` to a `ProtocolClass`).
I hoped that this would provide a fix for
https://github.com/astral-sh/ty/issues/1587, but we still panic on the
provided reproducible examples in that issue even with this PR.
Nonetheless, I think this PR is a worthwhile change to make because:
- It's probably slightly more efficient this way (we no longer have to
re-verify that the wrapped class in a `Protocol::FromClass()` variant is
a protocol class every time we want to access its interface)
- It's nice to get rid of `.expect()` calls where possible, and this one
seems definitely unnecessary
- The _new_ panic message on this PR branch makes it much clearer what
the underlying cause of the bug in
https://github.com/astral-sh/ty/issues/1587 is:
<details>
<summary>New panic message</summary>
```
error[panic]: Panicked at
/Users/alexw/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/a885bb4/src/function/execute.rs:321:21
when checking `/Users/alexw/dev/ruff/foo.py`: `ClassLiteral < 'db
>::explicit_bases_(Id(4c09)): execute: too many cycle iterations`
info: This indicates a bug in ty.
info: If you could open an issue at
https://github.com/astral-sh/ty/issues/new?title=%5Bpanic%5D, we'd be
very appreciative!
info: Platform: macos aarch64
info: Version: ruff/0.14.5+60 (18a14bfaf 2025-11-19)
info: Args: ["target/debug/ty", "check", "foo.py",
"--python-version=3.14"]
info: run with `RUST_BACKTRACE=1` environment variable to show the full
backtrace information
info: query stacktrace:
0: cached_protocol_interface(Id(6805))
at crates/ty_python_semantic/src/types/protocol_class.rs:790
1: is_equivalent_to_object_inner(Id(8003))
at crates/ty_python_semantic/src/types/instance.rs:667
2: infer_deferred_types(Id(1409))
at crates/ty_python_semantic/src/types/infer.rs:141
cycle heads: infer_definition_types(Id(140b)) -> iteration = 200,
TypeVarInstance < 'db >::lazy_bound_(Id(5803)) -> iteration = 200
3: TypeVarInstance < 'db >::lazy_bound_(Id(5802))
at crates/ty_python_semantic/src/types.rs:8734
4: infer_definition_types(Id(140c))
at crates/ty_python_semantic/src/types/infer.rs:94
5: infer_deferred_types(Id(140a))
at crates/ty_python_semantic/src/types/infer.rs:141
6: TypeVarInstance < 'db >::lazy_bound_(Id(5803))
at crates/ty_python_semantic/src/types.rs:8734
7: infer_definition_types(Id(140b))
at crates/ty_python_semantic/src/types/infer.rs:94
8: infer_scope_types(Id(1000))
at crates/ty_python_semantic/src/types/infer.rs:70
9: check_file_impl(Id(c00))
at crates/ty_project/src/lib.rs:535
Found 1 diagnostic
WARN A fatal error occurred while checking some files. Not all project
files were analyzed. See the diagnostics list above for details.
```
</details>
## Test Plan
All existing tests pass.
This patch lets us create specializations from a constraint set. The
constraint encodes the restrictions on which types each typevar can
specialize to. Given a generic context and a constraint set, we iterate
through all of the generic context's typevars. For each typevar, we
abstract the constraint set so that it only mentions the typevar in
question (propagating derived facts if needed). We then find the "best
representative type" for the typevar given the abstracted constraint
set.
When considering the BDD structure of the abstracted constraint set,
each path from the BDD root to the `true` terminal represents one way
that the constraint set can be satisfied. (This is also one of the
clauses in the DNF representation of the constraint set's boolean
formula.) Each of those paths is the conjunction of the individual
constraints of each internal node that we traverse as we walk that path,
giving a single lower/upper bound for the path. We use the upper bound
as the "best" (i.e. "closest to `object`") type for that path.
If there are multiple paths in the BDD, they technically represent
independent possible specializations. If there's a single specialization
that satisfies all of them, we will return that as the specialization.
If not, then the constraint set is ambiguous. (This happens most often
with constrained typevars.) We could in the future turn _each_ of the
paths into separate specializations, but it's not clear what we would do
with that, so instead we just report the ambiguity as a specialization
failure.
We were previously normalizing the upper and lower bounds of each
constraint when constructing constraint sets. Like in #21463, this was
for conflated reasons: It made constraint set displays nicer, since we
wouldn't render multiple constraints with obviously equivalent bounds.
(Think `T ≤ A & B` and `T ≤ B & A`) But it was also useful for
correctness, since prior to #21463 we were (trying to) add the full
transitive closure to a constraint set's BDD, and normalization gave a
useful reduction in the number of nodes in a typical BDD.
Now that we don't store the transitive closure explicitly, that second
reason is no longer relevant. Our sequent map can store that full
transitive closure much more efficiently than the expanded BDD would
have. This helps fix some false positives on #20933, where we're seeing
some (incorrect, need to be fixed, but ideally not blocking this effort)
assignability failures between a type and its normalization.
Normalization is still useful for display purposes, and so we do
normalize the upper/lower bounds before building up our display
representation of a constraint set BDD.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
We're seeing flaky test failures on macos, which seems to be caused by
different Salsa ID orderings on the different platforms. Constraint set
BDDs order their internal nodes based on the Salsa IDs of the interned
typevar structs, and we had some code that depended on variable ordering
in an unexpected way.
This patch definitely fixes the macos test failure on #21414, and
hopefully fixes it on #21436, too.
## Summary
Add a set of comprehensive tests for generic implicit type aliases to
illustrate the current behavior with many flavors of `@Todo` types and
false positive diagnostics.
The tests are partially based on the typing conformance suite, and the
expected behavior has been checked against other type checkers.
## Summary
Get rid of the catch-all todo type from subscripting a base type we
haven't implemented handling for yet in a type expression, and turn it
into a diagnostic instead.
Handle a few more cases explicitly, to avoid false positives from the
above change:
1. Subscripting any dynamic type (not just a todo type) in a type
expression should just result in that same dynamic type. This is
important for gradual guarantee, and matches other type checkers.
2. Subscripting a generic alias may be an error or not, depending
whether the specialization itself contains typevars. Don't try to handle
this yet (it should be handled in a later PR for specializing generic
non-PEP695 type aliases), just use a dedicated todo type for it.
3. Add a temporary todo branch to avoid false positives from string PEP
613 type aliases. This can be removed in the next PR, with PEP 613 type
alias support.
## Test Plan
Adjusted mdtests, ecosystem.
All new diagnostics in conformance suite are supposed to be diagnostics,
so this PR is a strict improvement there.
New diagnostics in the ecosystem are surfacing cases where we already
don't understand an annotation, but now we emit a diagnostic about it.
They are mostly intentional choices. Analysis of particular cases:
* `attrs`, `bokeh`, `django-stubs`, `dulwich`, `ibis`, `kornia`,
`mitmproxy`, `mongo-python-driver`, `mypy`, `pandas`, `poetry`,
`prefect`, `pydantic`, `pytest`, `scrapy`, `trio`, `werkzeug`, and
`xarray` are all cases where under `from __future__ import annotations`
or Python 3.14 deferred-annotations semantics, we follow normal
name-scoping rules, whereas some other type checkers prefer global names
over local names. This means we don't like it if e.g. you have a class
with a method or attribute named `type` or `tuple`, and you also try to
use `type` or `tuple` in method/attribute annotations of that class.
This PR isn't changing those semantics, just revealing them in more
cases where previously we just silently fell back to `Unknown`. I think
failing with a diagnostic (so authors can alias names as needed to avoid
relying on scoping rules that differ between type checkers) is better
than failing silently here.
* `beartype` assumes we support `TypeForm` (because it only supports
mypy and pyright, it uses `if MYPY:` to hide the `TypeForm` from mypy,
and pyright supports `TypeForm`), and we don't yet.
* `graphql-core` likes to use a `try: ... except ImportError: ...`
pattern for importing special forms from `typing` with fallback to
`typing_extensions`, instead of using `sys.version_info` checks. We
don't handle this well when type checking under an older Python version
(where the import from `typing` is not found); we see the imported name
as of type e.g. `Unknown | SpecialFormType(...)`, and because of the
union with `Unknown` we fail to handle it as the special form type. Mypy
and pyright also don't seem to support this pattern. They don't complain
about subscripting such special forms, but they do silently fail to
treat them as the desired special form. Again here, if we are going to
fail I'd rather fail with a diagnostic rather than silently.
* `ibis` is [trying to
use](https://github.com/ibis-project/ibis/blob/main/ibis/common/collections.py#L372)
`frozendict: type[FrozenDict]` as a way to create a "type alias" to
`FrozenDict`, but this is wrong: that means `frozendict:
type[FrozenDict[Any, Any]]`.
* `mypy` has some errors due to the fact that type-checking `typing.pyi`
itself (without knowing that it's the real `typing.pyi`) doesn't work
very well.
* `mypy-protobuf` imports some types from the protobufs library that end
up unioned with `Unknown` for some reason, and so we don't allow
explicit-specialization of them. Depending on the reason they end up
unioned with `Unknown`, we might want to better support this? But it's
orthogonal to this PR -- we aren't failing any worse here, just alerting
the author that we didn't understand their annotation.
* `pwndbg` has unresolved references due to star-importing from a
dependency that isn't installed, and uses un-imported names like `Dict`
in annotation expressions. Some of the unresolved references were hidden
by
https://github.com/astral-sh/ruff/blob/main/crates/ty_python_semantic/src/types/infer/builder.rs#L7223-L7228
when some annotations previously resolved to a Todo type that no longer
do.
Summary
--
This PR wires up the `Diagnostic::set_documentation_url` method from
#21502 to Ruff's lint diagnostics. This enables the links for the full
and concise output formats without any other changes.
I considered also including the URLs for the grouped and pylint output
formats, but the grouped format is still in `ruff_linter` instead of
`ruff_db`, so we'd have to export some additional functionality to wire
it up with `fmt_with_hyperlink`; and the pylint format doesn't currently
render with color, so I think it might actually be machine readable
rather than human readable?
The other output formats (json, json-lines, junit, github, gitlab,
rdjson, azure, sarif) seem more clearly not to need the links.
Test Plan
--
I guess you can't see my cursor or the browser opening, but it works for
lint rules, which have links, and doesn't include a link for syntax
errors, which don't have valid links.

Closes #11216
Essentially the approach is to implement `Format` for a new struct
`FormatClause` which is just a clause header _and_ its body. We then
have the information we need to see whether there is a skip suppression
comment on the last child in the body and it all fits on one line.
This saga began with a regression in how we handle constraint sets where
a typevar is constrained by another typevar, which #21068 first added
support for:
```py
def mutually_constrained[T, U]():
    # If [T = U ∧ U ≤ int], then [T ≤ int] must be true as well.
    given_int = ConstraintSet.range(U, T, U) & ConstraintSet.range(Never, U, int)
    static_assert(given_int.implies_subtype_of(T, int))
```
While working on #21414, I saw a regression in this test, which was
strange, since that PR has nothing to do with this logic! The issue is
that something in that PR made us instantiate the typevars `T` and `U`
in a different order, giving them differently ordered salsa IDs. And
importantly, we use these salsa IDs to define the variable ordering that
is used in our constraint set BDDs. This showed that our "mutually
constrained" logic only worked for one of the two possible orderings.
(We can — and now do — test this in a brute-force way by copy/pasting
the test with both typevar orderings.)
The underlying bug was in our `ConstraintSet::simplify_and_domain`
method. It would correctly detect `(U ≤ T ≤ U) ∧ (U ≤ int)`, because
those two constraints affect different typevars, and from that, infer `T
≤ int`. But it wouldn't detect the equivalent pattern in `(T ≤ U ≤ T) ∧
(U ≤ int)`, since those constraints affect the same typevar. At first I
tried adding that as yet more pattern-match logic in the ever-growing
`simplify_and_domain` method. But doing so caused other tests to start
failing.
At that point, I realized that `simplify_and_domain` had gotten to the
point where it was trying to do too much, and for conflicting consumers.
It was first written as part of our display logic, where the goal is to
remove redundant information from a BDD to make its string rendering
simpler. But we also started using it to add "derived facts" to a BDD. A
derived fact is a constraint that doesn't appear in the BDD directly,
but which we can still infer to be true. Our failing test relies on
derived facts — being able to infer that `T ≤ int` even though that
particular constraint doesn't appear in the original BDD. Before,
`simplify_and_domain` would trace through all of the constraints in a
BDD, figure out the full set of derived facts, and _add those derived
facts_ to the BDD structure. This is brittle, because those derived
facts are not universally true! In our example, `T ≤ int` only holds
along the BDD paths where both `T = U` and `U ≤ int`. Other paths will
test the negations of those constraints, and on those, we _shouldn't_
infer `T ≤ int`. In theory it's possible (and we were trying) to use BDD
operators to express that dependency...but that runs afoul of how we
were simultaneously trying to _remove_ information to make our displays
simpler.
So, I ripped off the band-aid. `simplify_and_domain` is now _only_ used
for display purposes. I have not touched it at all, except to remove
some logic that is definitely not used by our `Display` impl. Otherwise,
I did not want to touch that house of cards for now, since the display
logic is not load-bearing for any type inference logic.
For all non-display callers, we have a new **_sequent map_** data type,
which tracks exactly the same derived information. But it does so (a)
without trying to remove anything from the BDD, and (b) lazily, without
updating the BDD structure.
So the end result is that all of the tests (including the new
regressions) pass, via a more efficient (and hopefully better
structured/documented) implementation, at the cost of hanging onto a
pile of display-related tech debt that we'll want to clean up at some
point.
## Summary
This is another attempt at https://github.com/astral-sh/ruff/pull/21410
that fixes https://github.com/astral-sh/ruff/issues/19226.
@MichaReiser helped me get something working in a very helpful pairing
session. I pushed one additional commit moving the comments back from
leading comments to trailing comments, which I think retains more of the
input formatting.
I was inspired by Dylan's PR (#21185) to make one of these tables:
<table>
<thead>
<tr>
<th scope="col">Input</th>
<th scope="col">Main</th>
<th scope="col">PR</th>
</tr>
</thead>
<tbody>
<tr>
<td><pre lang="python">
if (
not
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
# comment
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
not
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
# unary comment
not
# operand comment
(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
<td><pre lang="python">
if (
# unary comment
# operand comment
not (
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
<td><pre lang="python">
if (
# unary comment
not
# operand comment
(
# comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
)
):
pass
</pre></td>
</tr>
<tr>
<td><pre lang="python">
if (
not # comment
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if ( # comment
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
<td><pre lang="python">
if (
not aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa # comment
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
):
pass
</pre></td>
</tr>
</tbody>
</table>
hopefully it helps even though the snippets are much wider here.
The two main differences are (1) that we now retain own-line comments
between the unary operator and its operand instead of moving these to
leading comments on the operator itself, and (2) that we move
end-of-line comments between the operator and operand to dangling
end-of-line comments on the operand (the last example in the table).
## Test Plan
Existing tests, plus new ones based on the issue. As I noted below, I
also ran the output from main on the unary.py file back through this
branch to check that we don't reformat code from main. This made me feel
a bit better about not preview-gating the changes in this PR.
```shell
> git show main:crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/unary.py | ruff format - | ./target/debug/ruff format --diff -
> echo $?
0
```
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Takayuki Maeda <takoyaki0316@gmail.com>
## Summary
This PR proposes that we add a new `set_concise_message` functionality
to our `Diagnostic` construction API. When used, the concise message
that is otherwise auto-generated from the main diagnostic message and
the primary annotation will be overwritten with the custom message.
To understand why this is desirable, let's look at the `invalid-key`
diagnostic. This is how I *want* the full diagnostic to look:
<img width="620" height="282" alt="image"
src="https://github.com/user-attachments/assets/3bf70f52-9d9f-4817-bc16-fb0ebf7c2113"
/>
However, without the change in this PR, the concise message would have
the following form:
```
error[invalid-key]: Unknown key "Age" for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This duplication is why the full `invalid-key` diagnostic used a main
diagnostic message that is only "Invalid key for TypedDict `Person`", to
make that bearable:
```
error[invalid-key] Invalid key for TypedDict `Person`: Unknown key "Age" - did you mean "age"?
```
This is still less than ideal, *and* we had to make the "full"
diagnostic worse. With the new API here, we have to make no such
compromises. We need to do slightly more work (provide one additional
custom-designed message), but we get to keep the "full" diagnostic that
we actually want, and we can make the concise message more terse and
readable:
```
error[invalid-key] Unknown key "Age" for TypedDict `Person` - did you mean "age"?
```
Similar problems exist for other diagnostics as well (I really want this
for https://github.com/astral-sh/ruff/pull/21476). In this PR, I only
changed `invalid-key` and `type-assertion-failure`.
The PR here is somewhat related to the discussion in
https://github.com/astral-sh/ty/issues/1418, but note that we are
solving a problem that is unrelated to sub-diagnostics.
## Test Plan
Updated tests
## Summary
Add support for `Callable` special forms in implicit type aliases.
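For example (a sketch):
```py
from typing import Callable

# An implicit (unannotated) type alias built from the `Callable` special form:
IntTransform = Callable[[int], int]

def apply(f: IntTransform, x: int) -> int:
    return f(x)
```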
## Typing conformance
Four new tests are passing
## Ecosystem impact
* All of the `invalid-type-form` errors are from libraries that use
`mypy_extensions` and do something like `Callable[[NamedArg("x", str)],
int]`.
* A handful of new false positives because we do not support generic
specializations of implicit type aliases yet.
* Everything else looks like true positives or known limitations
## Test Plan
New Markdown tests.
## Summary
Fixes #21389
Avoid RUF012 false positives when reassigning a ClassVar
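My best guess at the scenario, as a sketch (the exact shape of the new test case is in `RUF012.py`):
```python
from typing import ClassVar

class Config:
    defaults: ClassVar[list[str]] = []

    # Reassigning a name already declared as `ClassVar` used to be reported
    # as a mutable class default; it should be allowed.
    defaults = ["a", "b"]
```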
## Test Plan
Added the new reassignment scenario to
`crates/ruff_linter/resources/test/fixtures/ruff/RUF012.py`.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Constraint sets can now track subtyping/assignability/etc of generic
callables correctly. For instance:
```py
def identity[T](t: T) -> T:
    return t
constraints = ConstraintSet.always()
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(constraints.implies_subtype_of(TypeOf[identity], Callable[[str], str]))
```
A generic callable can be considered an intersection of all of its
possible specializations, and an assignability check with an
intersection as the lhs succeeds if _any_ of the intersected types
satisfies the check. Put another way, if someone expects to receive any
function with a signature of `(int) -> int`, we can give them
`identity`.
Note that the corresponding check using `is_subtype_of` directly does
not yet work, since #20093 has not yet hooked up the core typing
relationship logic to use constraint sets:
```py
# These currently fail
static_assert(is_subtype_of(TypeOf[identity], Callable[[int], int]))
static_assert(is_subtype_of(TypeOf[identity], Callable[[str], str]))
```
To do this, we add a new _existential quantification_ operation on
constraint sets. This takes in a list of typevars and _removes_ those
typevars from the constraint set. Conceptually, we return a new
constraint set that evaluates to `true` when there was _any_ assignment
of the removed typevars that caused the old constraint set to evaluate
to `true`.
When comparing a generic constraint set, we add its typevars to the
`inferable` set, and figure out whatever constraints would allow any
specialization to satisfy the check. We then use the new existential
quantification operator to remove those new typevars, since the caller
doesn't (and shouldn't) know anything about them.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Closes #19350
This fixes a syntax error caused by formatting. However, the new tests reveal that there are some cases where formatting attributes with certain comments behaves strangely, both before and after this PR, so some more polish may be in order.
For example, without parentheses around the value, and both before and after this PR, we have:
```python
# unformatted
variable = (
    something # a comment
    .first_method("some string")
)
# formatted
variable = something.first_method("some string") # a comment
```
which is probably not where the comment ought to go.
## Summary
Partially addresses https://github.com/astral-sh/ty/issues/1562
Only suggest the keyword "as" in import statements when the user have
written `import foo a<CURSOR>` or `from foo import bar a<CURSOR>` as no
other suggestion makes sense here.
Re-uses the existing pattern for incomplete `import from` statements to
determine incomplete import alias statements and make the suggestions
more sane in those cases.
There was a potential suggestion from @BurntSushi in
https://github.com/astral-sh/ty/issues/1562#issue-3626853513 to move the
handling of import statements into one unified state machine, but I erred
on the side of caution and fixed this with already established patterns,
pending a potential bigger re-write down the line.
## Test Plan
Added new tests and checked that it behaved reasonably in the
playground.
This PR attempts to improve the placement of own-line comments between
branches in the setting where the comment is more indented than the
preceding node.
There are two main changes.
### First change: Preceding node has leading content
If the preceding node has leading content, we now regard the comment as
automatically _less_ indented than the preceding node, and format
accordingly.
For example,
```python
if True: preceding_node
# leading on `else`, not trailing on `preceding_node`
else: ...
```
This is more compatible with `black`, although there is a (presumably
very uncommon) edge case:
```python
if True:
this;that
# leading on `else`, but trailing in `black`
else: ...
```
I'm sort of okay with this - presumably if one wanted a comment for
those semi-colon separated statements, one should have put it _above_
them, and if one wanted a comment only for `that` then it ought to have
been on the same line?
### Second change: searching for last child in body
While searching for the (recursively) last child in the body of the
preceding _branch_, we implicitly assumed that the preceding node had to
have a body to begin the recursion. But actually, in the base case, the
preceding node _is_ the last child in the body of the preceding branch.
So, for example:
```python
if True:
something
last_child_but_no_body
# leading on else for `main` but trailing in this PR
else: ...
```
### More examples
The table below is an attempt to summarize the changes in behavior. The
rows alternate between an example snippet with `while` and the same
example with `if` - in the former case we do _not_ have an `else` node
and in the latter we do.
Notice that:
1. On `main` our handling of `if` vs. `while` is not consistent, whereas
it is consistent in the present PR
2. We disagree with `black` in all cases except that last example on
`main`, but agree in all cases for the present PR (though see above for
a wonky edge case where we disagree).
<table>
<tr>
<th>Original
</th>
<th><code>main</code> </th>
<th>This
PR </th>
<th><code>black</code> </th>
</tr>
<tr>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
else:
# comment
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
while True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
while True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
else:
# comment
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
while True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
<tr>
<td>
<pre lang="python">
if True: pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
<td>
<pre lang="python">
if True:
pass
# comment
else:
pass
</pre>
</td>
</tr>
</table>
## Summary
Follow up from https://github.com/astral-sh/ruff/pull/21411. Again,
there are more things that could be improved here (like the diagnostics
for `lists`, or extending what we have for `dict` to `OrderedDict` etc),
but that will have to be postponed.
## Summary
We previously only allowed models to overwrite the
`{eq,order,kw_only,frozen}_defaults` of the dataclass-transformer, but
all other standard-dataclass parameters should be equally supported with
the same behavior.
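A sketch of the kind of override this now supports (the `init=False` argument is an illustrative choice, not taken from this PR):
```py
from typing import dataclass_transform

@dataclass_transform()
class ModelMeta(type):
    def __new__(mcls, name, bases, namespace, **kwargs):
        # A real transformer would synthesize __init__ etc. based on kwargs;
        # here we just swallow them so the example runs.
        return super().__new__(mcls, name, bases, namespace)

class Model(metaclass=ModelMeta): ...

# Parameters beyond eq/order/kw_only/frozen, such as `init`, are now honored
# with standard-dataclass semantics when passed as class arguments:
class Person(Model, init=False):
    name: str
```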
## Test Plan
Added regression tests.
## Summary
Not a high-priority task... but it _is_ a weekend :P
This PR improves our diagnostics for invalid exceptions. Specifically:
- We now give a special-cased ``help: Did you mean
`NotImplementedError`` subdiagnostic for `except NotImplemented`, `raise
NotImplemented` and `raise <EXCEPTION> from NotImplemented`
- If the user catches a tuple of exceptions (`except (foo, bar, baz):`)
and multiple elements in the tuple are invalid, we now collect these
into a single diagnostic rather than emitting a separate diagnostic for
each tuple element
- The explanation of why the `except`/`raise` was invalid ("must be a
`BaseException` instance or `BaseException` subclass", etc.) is
relegated to a subdiagnostic. This makes the top-level diagnostic
summary much more concise.
## Test Plan
Lots of snapshots. And here's some screenshots:
<details>
<summary>Screenshots</summary>
<img width="1770" height="1520" alt="image"
src="https://github.com/user-attachments/assets/7f27fd61-c74d-4ddf-ad97-ea4fd24d06fd"
/>
<img width="1916" height="1392" alt="image"
src="https://github.com/user-attachments/assets/83e5027c-8798-48a6-a0ec-1babfc134000"
/>
<img width="1696" height="588" alt="image"
src="https://github.com/user-attachments/assets/1bc16048-6eb4-4dfa-9ace-dd271074530f"
/>
</details>
## Summary
Allow metaclass-based and baseclass-based dataclass-transformers to
overwrite the default behavior using class arguments:
```py
class Person(Model, order=True):
    # ...
```
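For context, here is a minimal runnable sketch of the pattern, assuming `Model` is a base class decorated with `typing.dataclass_transform` (the `__init_subclass__` stub exists only so the example runs; a real model base would synthesize the dataclass machinery):
```py
from typing import dataclass_transform  # Python 3.11+ (else typing_extensions)

@dataclass_transform()  # `order` defaults to False for subclasses
class Model:
    def __init_subclass__(cls, **kwargs: object) -> None:
        # Swallow the class arguments so the example runs; a real model base
        # would use them to configure the synthesized methods.
        super().__init_subclass__()

class Person(Model, order=True):  # overrides the transformer's `order` default
    name: str
    age: int
```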
## Conformance tests
Four new tests passing!
## Test Plan
New Markdown tests
This PR updates the constraint implication type relationship to work on
compound types as well. (A compound type is a non-atomic type, like
`list[T]`.)
The goal of constraint implication is to check whether the requirements
of a constraint imply that a particular subtyping relationship holds.
Before, we were only checking atomic typevars. That would let us verify
that the constraint set `T ≤ bool` implies that `T` is always a subtype
of `int`. (In this case, the lhs of the subtyping check, `T`, is an
atomic typevar.)
But we weren't recursing into compound types, to look for nested
occurrences of typevars. That means that we weren't able to see that `T
≤ bool` implies that `Covariant[T]` is always a subtype of
`Covariant[int]`.
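As an illustration of the kind of judgement this enables, here is a sketch using `Sequence` (which is covariant in its element type) in place of the `Covariant` example above:
```py
from typing import Sequence, TypeVar

T = TypeVar("T", bound=bool)  # every valid T satisfies T ≤ bool

def widen(items: Sequence[T]) -> Sequence[int]:
    # `Sequence` is covariant, so `T ≤ bool` (and `bool ≤ int`) implies
    # `Sequence[T] ≤ Sequence[int]`; this return should be accepted.
    return items
```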
Doing this recursion means that we have to carry the constraint set
along with us as we recurse into types as part of `has_relation_to`, by
adding constraint implication as a new `TypeRelation` variant. (Before
it was just a method on `ConstraintSet`.)
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
## Summary
Currently our diagnostic only covers the range of the thing being
subscripted:
<img width="1702" height="312" alt="image"
src="https://github.com/user-attachments/assets/7e630431-e846-46ca-93c1-139f11aaba11"
/>
But it should probably cover the _whole_ subscript expression (arguably
the more "incorrect" bit is the `["foo"]` part of the expression, not
the `x` part!)
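For example (a hypothetical snippet of the shape shown in the screenshot):
```py
x = 1
x["foo"]  # error: the diagnostic now spans all of `x["foo"]`, not just `x`
```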
## Test Plan
Added a snapshot
Co-authored-by: Brent Westbrook
<36778786+ntBre@users.noreply.github.com>
## Summary
Extends literal promotion to apply to any generic method, as opposed to
only generic class constructors. This PR also improves our literal
promotion heuristics to only promote literals in non-covariant position
in the return type, and avoid promotion if the literal is present in
non-covariant position in any argument type.
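A rough, hedged illustration of the heuristic (the class, methods, and expected inference here are invented for the example; exact results may differ):
```py
from typing import TypeVar, reveal_type  # reveal_type: Python 3.11+

T = TypeVar("T")

class Maker:
    def wrap(self, value: T) -> list[T]: ...
    def echo(self, value: T) -> T: ...

m = Maker()
# `T` sits in an invariant (non-covariant) position in `list[T]`, so the
# literal argument is promoted:
reveal_type(m.wrap(1))  # expected: list[int]
# `T` only appears covariantly in the return type, so the literal is kept:
reveal_type(m.echo(1))  # expected: Literal[1]
```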
Resolves https://github.com/astral-sh/ty/issues/1357.
## Summary
- Always restore the previous `deferred_state` after parsing a type
expression: we don't want that state leaking out into other contexts
where we shouldn't be deferring expression inference
- Always defer the right-hand side of a PEP-613 type alias in a stub
file, allowing for forward references on the right-hand side of `T:
TypeAlias = X | Y` (see the sketch below)
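A minimal sketch of the second point, assuming a hypothetical `example.pyi` stub:
```pyi
# example.pyi -- the right-hand side of the alias is deferred, so the
# forward references to `X` and `Y` resolve even though they're defined later.
from typing import TypeAlias

T: TypeAlias = X | Y

class X: ...
class Y: ...
```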
Addresses @carljm's review in
https://github.com/astral-sh/ruff/pull/21401#discussion_r2524260153
## Test Plan
I added a regression test for a regression that the first version of
this PR introduced (we need to make sure the r.h.s. of a PEP-613
`TypeAlias`es is always deferred in a stub file)
## Summary
We currently fail to account for the type context when inferring generic
classes constructed via `__new__`, or via a synthesized `__init__` for
dataclasses.
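A hypothetical example of the kind of call affected (the `Box` class is invented for illustration):
```py
from dataclasses import dataclass, field
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Box(Generic[T]):
    items: list[T] = field(default_factory=list)

# The declared type should act as type context for the synthesized
# `__init__`, so `T` solves to `int` here instead of remaining unknown:
b: Box[int] = Box()
```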
There are a few places in Python where it is known that new names are
being introduced and thus we probably shouldn't offer completions. We
already handle this today for things like `class <CURSOR>` and `def
<CURSOR>`. But we didn't handle `as <CURSOR>`, which can appear in
`import`, `with`, `except` and `match` statements. Indeed, these are
exactly the 4 cases where the `as` keyword can occur. So we look for the
presence of `as` and suppress completions based on that.
While we're here, we also make the implementation a bit more robust with
respect to suppressing completions when the user hasn't typed anything.
Namely, we'd previously still offer completions in a `class <CURSOR>`
context. But it looks like LSP clients (at least VS Code) don't ask
for completions there, so we were "saved" incidentally. This PR detects
this case and suppresses completions so we don't rely on LSP client
behavior to handle it correctly.
Fixes astral-sh/ty#1287
## Summary
Infer the first argument `type` inside `Annotated[type, …]` as a type
expression. This allows us to support stringified annotations inside
`Annotated`.
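For example (illustrative):
```py
from typing import Annotated

# The first argument is now inferred as a type expression, so a stringified
# annotation like this is understood as `int | None`:
def set_data(data: Annotated["int | None", "metadata"]) -> None:
    ...
```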
## Ecosystem
* The removed diagnostic on `prefect` shows that we now understand the
`State.data` type annotation in
`src/prefect/client/schemas/objects.py:230`, which uses a stringified
annotation in `Annotated`. The other diagnostics are downstream changes
that result from this; it seems to be a commonly used data type.
* `artigraph` does something like `Annotated[cast(Any,
field_info.annotation), *field_info.metadata]` which I'm not sure we
need to allow? It's unfortunate since this is probably supported at
runtime, but it seems reasonable that they need to add a `# type:
ignore` for that.
* `pydantic` uses something like `Annotated[(self.annotation,
*self.metadata)]` but adds a `# type: ignore`
## Test Plan
New Markdown test
## Summary
Typeshed has a (fake) `__getattr__` method on `types.ModuleType` with a
return type of `Any`. We ignore this method when accessing attributes on
module *literals*, but with this PR, we respect this method when dealing
with `ModuleType` itself. That is, we allow arbitrary attribute accesses
on instances of `types.ModuleType`. This is useful because dynamic
import mechanisms such as `importlib.import_module` use `ModuleType` as
a return type.
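A small illustration (the module name is chosen arbitrarily):
```py
import importlib
import types

# `importlib.import_module` is annotated as returning `types.ModuleType`, so
# arbitrary attribute access on the result now resolves via the (fake)
# `ModuleType.__getattr__`, which returns `Any`:
mod: types.ModuleType = importlib.import_module("json")
print(mod.dumps({"ok": True}))  # no attribute-access error
```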
closes https://github.com/astral-sh/ty/issues/1346
## Ecosystem
Massive reduction in diagnostics. The few new diagnostics are true
positives.
## Test Plan
Added regression test.
## Summary
Add synthetic members to completions on dataclasses and dataclass
instances.
Also, while we're at it, add support for `__weakref__` and
`__match_args__`.
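For example (a sketch; the exact set of offered members is up to the completion logic):
```py
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

# Completions on `Point` and on `Point(1, 2)` should now include synthesized
# members such as `__match_args__` (and `__weakref__` on instances):
print(Point.__match_args__)  # ('x', 'y')
```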
closes https://github.com/astral-sh/ty/issues/1542
## Test Plan
New Markdown tests
## Summary
Support various legacy `typing` special forms (`List`, `Dict`, …) in
implicit type aliases.
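For example (illustrative aliases):
```py
from typing import Dict, List

# Implicit (unannotated) aliases built from legacy special forms are now
# understood when used in annotations:
IntList = List[int]
NameMap = Dict[str, IntList]

def lookup(names: NameMap, key: str) -> IntList:
    return names[key]
```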
## Ecosystem impact
A lot of true positives (e.g. on `alerta`)!
## Test Plan
New Markdown tests