Fixes https://github.com/astral-sh/ty/issues/1832, fixes
https://github.com/astral-sh/ty/issues/1513
## Summary
A class object `C` (for which we infer an unspecialized `ClassLiteral`
type) should always be assignable to the type `type[C]` (which is
default-specialized, if `C` is generic). We already implemented this for
most cases, but we missed the case of a generic final type, where we
simplify `type[C]` to the `GenericAlias` type for the default
specialization of `C`. So we also need to treat generic `ClassLiteral`
types as if they were default-specialized when checking this
assignability.
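For illustration, a minimal sketch of the kind of case this fixes (class and
function names are made up):
```py
from typing import final

@final
class Registry[T]: ...

def takes_registry_class(cls: type[Registry]) -> None: ...

# `Registry` is an unspecialized generic `ClassLiteral`; because the class is
# final, `type[Registry]` is simplified to the `GenericAlias` for its default
# specialization, and this call previously produced a false positive.
takes_registry_class(Registry)
```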
## Test Plan
Added mdtests that failed before this PR.
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
Respect typevar bounds and constraints when matching against a union.
For example:
```py
def accepts_t_or_int[T_str: str](x: T_str | int) -> T_str:
    raise NotImplementedError
reveal_type(accepts_t_or_int("a")) # ok, reveals `Literal["a"]`
reveal_type(accepts_t_or_int(1)) # ok, reveals `Unknown`
class Unrelated: ...
# error: [invalid-argument-type] "Argument type `Unrelated` does not
# satisfy upper bound `str` of type variable `T_str`"
accepts_t_or_int(Unrelated())
```
Previously, the last call succeeded without any errors. Worse than that,
we also incorrectly solved `T_str = Unrelated`, which often led to
downstream errors.
closes https://github.com/astral-sh/ty/issues/1837
## Ecosystem impact
Looks good!
* Lots of removed false positives, often because we previously selected
a wrong overload for a generic function (because we didn't respect the
typevar bound in an earlier overload).
* We now understand calls to functions accepting an argument of type
`GenericPath: TypeAlias = AnyStr | PathLike[AnyStr]`. Previously, we
would incorrectly match a `Path` argument against the `AnyStr` typevar
(violating its constraints), but now we match against `PathLike`.
## Performance
Another regression on `colour`. This package uses `numpy` heavily, and
`numpy` is the codebase that originally led me to this bug. The fix
here allows us to infer more precise `np.array` types in some cases, so
it's reasonable that we simply need to perform more work.
The fix here also requires us to look at more union elements where we
would previously short-circuit incorrectly, so some more work needs to
be done in the solver.
## Test Plan
New Markdown tests
This hack was introduced to reduce the number of warnings that users
would get while transitioning to the new settings format
(https://github.com/astral-sh/ruff/pull/19787) but now that we're near
the beta release, it would be good to remove this.
In a constraint set, it's not useful for an upper bound to be an
intersection type, or for a lower bound to be a union type. Both of
those can be rewritten as simpler BDDs:
```
T ≤ α & β ⇒ (T ≤ α) ∧ (T ≤ β)
T ≤ α & ¬β ⇒ (T ≤ α) ∧ ¬(T ≤ β)
α | β ≤ T ⇒ (α ≤ T) ∧ (β ≤ T)
```
We were seeing performance issues on #21551 when _not_ performing this
simplification. For instance, `pandas` was producing some constraint
sets involving intersections of 8-9 different types. Our sequent map
calculation was timing out calculating all of the different permutations
of those types:
```
t1 & t2 & t3 → t1
t1 & t2 & t3 → t2
t1 & t2 & t3 → t3
t1 & t2 & t3 → t1 & t2
t1 & t2 & t3 → t1 & t3
t1 & t2 & t3 → t2 & t3
```
(and then imagine what that looks like for 9 types instead of 3...)
With this change, all of those permutations are now encoded in the BDD
structure itself, which is very good at simplifying that kind of thing.
Pulling this out of #21551 for separate review.
#21744 fixed some non-determinism in our constraint set implementation
by switching our BDD representation from being "fully reduced" to being
"quasi-reduced". We still deduplicate identical nodes (via salsa
interning), but we removed the logic to prune redundant nodes (those with
identical outgoing true and false edges). This ensures that the BDD
"remembers" all of the individual constraints that it was created with.
However, that comes at the cost of creating larger BDDs, and on #21551
that was causing performance issues. `scikit-learn` was producing a
function signature with dozens of overloads, and we were trying to
create a constraint set that would map a return type typevar to any of
those overloads' return types. This created a combinatorial explosion in
the BDD, with the vast majority of BDD paths leading to the `never`
terminal.
This change updates the quasi-reduction logic to prune nodes that are
redundant _because both edges lead to the `never` terminal_. In this
case, we don't need to "remember" that constraint, since no assignment
to it can lead to a valid specialization. So we keep the "memory" of our
quasi-reduced structure, while still pruning large unneeded portions of
the BDD structure.
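As a rough sketch of the new pruning rule (the real implementation lives in
Rust; all names below are made up):
```py
NEVER = "never"  # stand-in for the `never` terminal

def make_node(variable, if_true, if_false, intern):
    # Quasi-reduced: a node whose true and false edges are identical is
    # normally kept, so the BDD "remembers" the constraints it was built from.
    if if_true is NEVER and if_false is NEVER:
        # Exception: if both edges lead to `never`, no assignment to this
        # constraint can produce a valid specialization, so prune the node.
        return NEVER
    return intern((variable, if_true, if_false))
```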
Pulling this out of https://github.com/astral-sh/ruff/pull/21551 for
separate review.
## Summary
This is a follow-up to #21868. As soon as I started merging #21868 into
#21385, I realized that I had missed a test case with `**kwargs` after
the `*args` parameter. Such a case is supposed to be formatted on one
line like:
```py
# input
(
    lambda
    # comment
    *x,
    **y: x
)
# output
(
    lambda
    # comment
    *x, **y: x
)
```
which you can still see on the
[playground](https://play.ruff.rs/bd88d339-1358-40d2-819f-865bfcb23aef?secondary=Format),
but on `main` after #21868, this was formatted as:
```py
(
    lambda
    # comment
    *x,
    **y: x
)
```
because the leading comment on the first parameter caused the whole
group around the parameters to break.
Instead of making these comments leading comments on the first
parameter, this PR makes them leading comments on the parameters list as
a whole.
## Test Plan
New tests, and I will also try merging this into #21385 _before_ opening
it for review this time.
<hr>
(labeling `internal` since #21868 should not be released before some
kind of fix)
## Summary
This PR adds special handling for `asynccontextmanager` calls as a
temporary solution for https://github.com/astral-sh/ty/issues/1804. We
will be able to remove this soon once we have support for generic
protocols in the solver.
closes https://github.com/astral-sh/ty/issues/1804
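For example, for code like the following, we previously inferred `Unknown` for
the value yielded by the context manager (the revealed types below are
illustrative):
```py
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager

@asynccontextmanager
async def open_session() -> AsyncIterator[int]:
    yield 1

async def main() -> None:
    async with open_session() as session:
        reveal_type(session)  # previously `Unknown`, now `int`
```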
## Ecosystem
```diff
+ tests/test_downloadermiddleware.py:305:56: error[invalid-argument-type] Argument to bound method `download` is incorrect: Expected `Spider`, found `Unknown | Spider | None`
+ tests/test_downloadermiddleware.py:305:56: warning[possibly-missing-attribute] Attribute `spider` may be missing on object of type `Crawler | None`
```
These look like true positives
```diff
+ pymongo/asynchronous/database.py:1021:35: error[invalid-assignment] Object of type `(AsyncClientSession & ~AlwaysTruthy & ~AlwaysFalsy) | (_ServerMode & ~AlwaysFalsy) | Unknown | Primary` is not assignable to `_ServerMode | None`
+ pymongo/asynchronous/database.py:1025:17: error[invalid-argument-type] Argument to bound method `_conn_for_reads` is incorrect: Expected `_ServerMode`, found `_ServerMode | None`
```
Known problems or true positives, just caused by the new type for
`session`
```diff
- src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[Unknown, None] | Connection | AsyncConnection`
+ src/integrations/prefect-sqlalchemy/prefect_sqlalchemy/database.py:269:16: error[invalid-return-type] Return type does not match returned value: expected `Connection | AsyncConnection`, found `_GeneratorContextManager[Unknown, None, None] | _AsyncGeneratorContextManager[AsyncConnection, None] | Connection | AsyncConnection`
```
Just a more concrete type
```diff
- src/prefect/flow_engine.py:1277:24: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/server/api/server.py:696:49: error[missing-argument] No argument provided for required parameter `cls`
- src/prefect/task_engine.py:1426:24: error[missing-argument] No argument provided for required parameter `cls`
```
Good
## Test Plan
* Adapted and newly added Markdown tests
* Tested on internal codebase
Summary
--
This PR makes two changes to comment placement in lambda parameters.
First, we
now insert a line break if the first parameter has a leading comment:
```py
# input
(
    lambda
    * # comment 2
    x:
    x
)
# main
(
    lambda # comment 2
    *x: x
)
# this PR
(
    lambda
    # comment 2
    *x: x
)
```
Note the missing space in the output from main. This case is currently
unstable
on main. Also note that the new formatting is more consistent with our
stable
formatting in cases where the lambda has its own dangling comment:
```py
# input
(
    lambda # comment 1
    * # comment 2
    x:
    x
)
# output
(
    lambda # comment 1
    # comment 2
    *x: x
)
```
and when a parameter without a comment precedes the split `*x`:
```py
# input
(
    lambda y,
    * # comment 2
    x:
    x
)
# output
(
    lambda y,
    # comment 2
    *x: x
)
```
This does change the stable formatting, but I think such cases are rare
(expecting zero hits in the ecosystem report), this fixes an existing
instability, and it should not change any code we've previously
formatted.
Second, this PR modifies the comment placement such that `# comment 2`
in these
outputs is still a leading comment on the parameter. This is also not
the case
on main, where it becomes a [dangling lambda
comment](https://play.ruff.rs/3b29bb7e-70e4-4365-88e0-e60fe1857a35?secondary=Comments).
This doesn't cause any
instability that I'm aware of on main, but it does cause problems when
trying to
adjust the placement of dangling lambda comments in #21385. Changing the
placement in this way should not affect any formatting here.
Test Plan
--
New lambda tests, plus existing tests covering the cases above with
multiple
comments around the parameters (see lambda.py 122-143, and 122-205 or so
more
broadly)
I also checked manually that the comments are now leading on the
parameter:
```shell
❯ cargo run --bin ruff_python_formatter -- --emit stdout --target-version 3.10 --print-comments <<EOF
(
    lambda
        # comment 2
    *x: x
)
EOF
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/ruff_python_formatter --emit stdout --target-version 3.10 --print-comments`
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
21..32, None, Some((Parameters, 37..39)), (ExprLambda, 6..42), "# comment 2"
{
    Node {
        kind: Parameter,
        range: 37..39,
        source: `*x`,
    }: {
        "leading": [
            SourceComment {
                text: "# comment 2",
                position: OwnLine,
                formatted: true,
            },
        ],
        "dangling": [],
        "trailing": [],
    },
}
(
    lambda
    # comment 2
    *x: x
)
```
But I didn't see a great place to put a test like this. Is there
somewhere I can assert this comment placement since it doesn't affect
any formatting yet? Or is it okay to wait until we use this in #21385?
## Summary
Closes #17347
The goal is to detect useless exception statements not just for builtin
exceptions but also for custom (user-defined) ones.
## Test Plan
I added test cases in the rule fixture and updated the insta snapshot.
Note that I first moved a test case, which was at the bottom, up to the
correct "violation category".
I wasn't sure if I should create new test cases or just insert inside
those tests. I know that ideally each test case should test only one
thing, but here, duplicating twice 12 test cases seemed very verbose,
and actually less maintainable in the future. The drawback is that the
diff in the snapshot is hard to review, sorry. But you can see that the
snapshot gives 38 diagnostics, which is what we expect.
Alternatively, I also created this file for manual testing.
```py
# tmp/test_error.py
class MyException(Exception):
    ...
class MyBaseException(BaseException):
    ...
class MyValueError(ValueError):
    ...
class MyExceptionCustom(Exception):
    ...
class MyBaseExceptionCustom(BaseException):
    ...
class MyValueErrorCustom(ValueError):
    ...
class MyDeprecationWarning(DeprecationWarning):
    ...
class MyDeprecationWarningCustom(MyDeprecationWarning):
    ...
class MyExceptionGroup(ExceptionGroup):
    ...
class MyExceptionGroupCustom(MyExceptionGroup):
    ...
class MyBaseExceptionGroup(ExceptionGroup):
    ...
class MyBaseExceptionGroupCustom(MyBaseExceptionGroup):
    ...
def foo():
    Exception("...")
    BaseException("...")
    ValueError("...")
    RuntimeError("...")
    DeprecationWarning("...")
    GeneratorExit("...")
    SystemExit("...")
    ExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    BaseExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)])
    MyException("...")
    MyBaseException("...")
    MyValueError("...")
    MyExceptionCustom("...")
    MyBaseExceptionCustom("...")
    MyValueErrorCustom("...")
    MyDeprecationWarning("...")
    MyDeprecationWarningCustom("...")
    MyExceptionGroup("...")
    MyExceptionGroupCustom("...")
    MyBaseExceptionGroup("...")
    MyBaseExceptionGroupCustom("...")
```
and you can run this to check the PR:
```sh
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py310
target/debug/ruff check tmp/test_error.py --select PLW0133 --unsafe-fixes --diff --no-cache --isolated --target-version py314
```
## Summary
This fixes https://github.com/astral-sh/ty/issues/1736 where recursive
generic protocols with growing specializations caused a stack overflow.
The issue occurred with protocols like:
```python
class C[T](Protocol):
    a: 'C[set[T]]'
```
When checking `C[set[int]]` against e.g. `C[Unknown]`, member `a`
requires checking `C[set[set[int]]]`, which requires
`C[set[set[set[int]]]]`, etc. Each level has different type
specializations, so the existing cycle detection (using full types as
cache keys) didn't catch the infinite recursion.
This fix adds a simple recursion depth limit (64) to the CycleDetector.
When the depth exceeds the limit, we return the fallback value (assume
compatible) to safely terminate the recursion.
This is a bit of a blunt hammer, but it should be broadly effective at
preventing stack overflows in any nested-relation case, and it's hard to
imagine that non-recursive nested relation comparisons of depth > 64
exist much in the wild.
## Test Plan
Added mdtest.
## Summary
This PR allows our generics solver to find a solution for `T` in cases
like the following:
```py
def extract_t[T](x: P[T] | Q[T]) -> T:
    raise NotImplementedError
reveal_type(extract_t(P[int]())) # revealed: int
reveal_type(extract_t(Q[str]())) # revealed: str
```
closes https://github.com/astral-sh/ty/issues/1772
closes https://github.com/astral-sh/ty/issues/1314
## Ecosystem
The impact here looks very good!
It took me a long time to figure this out, but the new diagnostics on
bokeh are actually true positives. I should have tested with another
type-checker immediately, I guess. All other type checkers also emit
errors on these `__init__` calls. MRE
[here](https://play.ty.dev/5c19d260-65e2-4f70-a75e-1a25780843a2) (no
error on main, diagnostic on this branch)
A lot of false positives on home-assistant go away for calls to
functions like
[`async_listen`](180053fe98/homeassistant/core.py (L1581-L1587))
which take an `event_type: EventType[_DataT] | str` parameter. We can now
solve for `_DataT` here, which previously fell back to its default value
and then caused problems because it was used as an argument to an
invariant generic class.
## Test Plan
New Markdown tests
## Summary
fixes: https://github.com/astral-sh/ty/issues/1809
I took this chance to add some debug level tracing logs for overload
call evaluation similar to Doug's implementation in `constraints.rs`.
## Test Plan
- Add new mdtests
- Tested it against `sqlalchemy.select` in pyx which results in the
correct overload being matched
While it's still under development, it's far enough along now that we
think it's worth enabling by default. This should also help give us
feedback on how it behaves.
This PR adds a "completion settings" grouping similar to inlay hints. We
only have an auto-import setting there now, but I expect we'll add more
options to configure completion behavior in the future.
Closes astral-sh/ty#1765
This adds autocomplete suggestions for function arguments. For example,
`okay` in:
```python
def foo(okay=None):
    foo(o<CURSOR>
```
This also ensures that we don't suggest a keyword argument if it has
already been used.
Closes astral-sh/ty#1550
Closes issue #21565
## Summary
As pointed out in the issue, slices are currently flagged by B008 but
this behavior is incorrect because slices are immutable.
## Test Plan
Added a test case in the "B006_B008.py" fixture. Sorry for the diff in
the snapshots; the only thing that changes in those files is the line
numbers, though.
You can also test this manually with this file:
```py
# test_slice.py
def c(d=slice(0, 3)): ...
```
```sh
> target/debug/ruff check tmp/test_slice.py --no-cache --select B008
All checks passed!
```
## Summary
Fixes https://github.com/astral-sh/ruff/issues/8774
This PR fixes `pydocstyle` (`D417`) incorrectly flagging arguments with an
`Unpack` type annotation as missing from the docstring, by extracting the
`kwargs` `D417` suppression logic into a helper function that future rules
can reuse as needed.
## Problem Statement
The example below incorrectly triggered a `D417` error for a missing
`**kwargs` doc.
```python
class User(TypedDict):
    id: int
    name: str
def do_something(some_arg: str, **kwargs: Unpack[User]):
"""Some doc
Args:
some_arg: Some argument
"""
```
<img width="1135" height="276" alt="image"
src="https://github.com/user-attachments/assets/42fa4bb9-61a5-4a70-a79c-0c8922a3ee66"
/>
`**kwargs: Unpack[User]` indicates the function expects keyword
arguments that will be unpacked. Ideally, the individual fields of the
`User` TypedDict should be documented, not the `**kwargs` parameter
itself; it acts as a semantic grouping rather than a parameter requiring
documentation.
## Solution
As discussed in the linked issue, it makes sense to suppress `D417` for
parameters with an `Unpack` annotation. I extracted a helper function
that solely checks whether `D417` should be suppressed for a
`**kwargs: Unpack[T]` parameter; this function can also be unit tested
independently and reduces the complexity of the current `missing_args`
check function. This also makes it easier to add additional rules in the
future.
_✏️ Note:_ This is my first PR in this repo and I've learned a ton from
it; please call out anything that could be improved. Thanks for making
this excellent tool 👏
## Test Plan
Add 2 test cases in `D417.py` and update snapshots.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
By taking a purely syntactic approach to the problem of trivial
initializer calls, we can suppress `x: T = T()`, `x: T = x.y.T()`, and
`x: MyNewType = MyNewType(0)` but still display `x: T[U] = T()`.
The place where we drop the ball is that this does not compose with our
analysis for suppressing `x = (0, "hello")`: `x = (0, T())` and
`x = (T(), T())` will still get inlay hints (I don't think this is a huge
deal).
* fixes https://github.com/astral-sh/ty/issues/1516
## Test Plan
Existing snapshots cover this well.
## Summary
If you pass a non-tuple to `Annotated`, we end up running inference on
it twice. I _think_ the only case here is `Annotated[]`, where we insert
a (fake) empty `Name` node in the slice.
Closes https://github.com/astral-sh/ty/issues/1801.
## Summary
Increase our SQLAlchemy test coverage to make sure we understand
`Session.scalar`, `Session.scalars`, `Session.execute` (and their async
equivalents), as well as `Result.tuples`, `Result.one_or_none`, and
`Row._tuple`.
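For example, one of the patterns these tests exercise looks roughly like this
(the model and the revealed type are illustrative):
```py
from sqlalchemy import select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase): ...

class User(Base):
    __tablename__ = "user"
    id: Mapped[int] = mapped_column(primary_key=True)

def first_user(session: Session) -> None:
    user = session.scalar(select(User))
    reveal_type(user)  # expected: User | None
```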
## Summary
This PR adds the possibility to write mdtests that specify external
dependencies in a `project` section of TOML blocks. For example, here is
a test that makes sure that we understand Pydantic's dataclass-transform
setup:
````markdown
```toml
[environment]
python-version = "3.12"
python-platform = "linux"
[project]
dependencies = ["pydantic==2.12.2"]
```
```py
from pydantic import BaseModel
class User(BaseModel):
    id: int
    name: str
user = User(id=1, name="Alice")
reveal_type(user.id) # revealed: int
reveal_type(user.name) # revealed: str
# error: [missing-argument] "No argument provided for required parameter `name`"
invalid_user = User(id=2)
```
````
## How?
Using the `python-version` and the `dependencies` fields from the
Markdown section, we generate a `pyproject.toml` file, write it to a
temporary directory, and use `uv sync` to install the dependencies into
a virtual environment. We then copy the Python source files from that
venv's `site-packages` folder to a corresponding directory structure in
the in-memory filesystem. Finally, we configure the search paths
accordingly, and run the mdtest as usual.
I fully understand that there are valid concerns here:
* Doesn't this require network access? (yes, it does)
* Is this fast enough? (`uv` caching makes this almost unnoticeable,
actually)
* Is this deterministic? ~~(probably not, package resolution can depend
on the platform you're on)~~ (yes, hopefully)
For this reason, this first version is opt-in, locally. ~~We don't even
run these tests in CI (even though they worked fine in a previous
iteration of this PR).~~ You need to set `MDTEST_EXTERNAL=1`, or use the
new `-e/--enable-external` command line option of the `mdtest.py`
runner. For example:
```bash
# Skip mdtests with external dependencies (default):
uv run crates/ty_python_semantic/mdtest.py
# Run all mdtests, including those with external dependencies:
uv run crates/ty_python_semantic/mdtest.py -e
# Only run the `pydantic` tests. Use `-e` to make sure it is not skipped:
uv run crates/ty_python_semantic/mdtest.py -e pydantic
```
## Why?
I believe that this can be a useful addition to our testing strategy,
which lies somewhere between ecosystem tests and normal mdtests.
Ecosystem tests cover much more code, but they have the disadvantage
that we only see second- or third-order effects via diagnostic diffs. If
we unexpectedly gain or lose type coverage somewhere, we might not even
notice (assuming the gradual guarantee holds, and ecosystem code is
mostly correct). Another disadvantage of ecosystem checks is that they
only test checked-in code that is usually correct. However, we also want
to test what happens on wrong code, like the code that is momentarily
written in an editor, before fixing it. On the other end of the spectrum
we have normal mdtests, which have the disadvantage that they do not
reflect the reality of complex real-world code. We experience this
whenever we're surprised by an ecosystem report on a PR.
That said, these tests should not be seen as a replacement for either of
these things. For example, we should still strive to write detailed
self-contained mdtests for user-reported issues. But we might use this
new layer for regression tests, or simply as a debugging tool. It can
also serve as a tool to document our support for popular third-party
libraries.
## Test Plan
* I've been locally using this for a couple of weeks now.
* `uv run crates/ty_python_semantic/mdtest.py -e`
## Summary
As-is, a single-element tuple gets destructured via:
```rust
let arguments = if let ast::Expr::Tuple(tuple) = slice {
    &*tuple.elts
} else {
    std::slice::from_ref(slice)
};
```
But then, because it's a single element, we call
`infer_annotation_expression_impl`, passing in the tuple, rather than
the first element.
Closes https://github.com/astral-sh/ty/issues/1793.
Closes https://github.com/astral-sh/ty/issues/1768.
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Closes: https://github.com/astral-sh/ty/issues/157
This PR adds support for the following capabilities involving a
`ParamSpec` type variable:
- Representing `P.args` and `P.kwargs` in the type system
- Matching against a callable containing `P` to create a type mapping
- Specializing `P` against the stored parameters
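A minimal sketch exercising these three capabilities (the revealed type and
the exact diagnostic are illustrative):
```py
from typing import Callable

def call_with[**P, R](f: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R:
    # Matching `f` against `Callable[P, R]` creates a type mapping for `P`,
    # which is then specialized against the stored parameters of `f`.
    return f(*args, **kwargs)

def add(x: int, y: int) -> int:
    return x + y

reveal_type(call_with(add, 1, 2))  # int
call_with(add, 1)  # error: missing argument for parameter `y`
```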
The value of a `ParamSpec` type variable is being represented using
`CallableType` with a `CallableTypeKind::ParamSpecValue` variant. This
`CallableTypeKind` is expanded from the existing `is_function_like`
boolean flag. An `enum` is used as these variants are mutually
exclusive.
For context, an initial iteration attempted to expand `Specialization` to
use a `TypeOrParameters` enum, representing that a type variable can
specialize into either a `Type` or `Parameters`, but that increased the
complexity of the code, as all downstream usages would need to handle
both variants appropriately. Additionally, we'd also have needed to
establish an invariant that a regular type variable always maps to a
`Type` while a ParamSpec type variable always maps to `Parameters`.
I've intentionally left out checking and raising diagnostics when the
`ParamSpec` type variable and its components are not used correctly, to
avoid scope creep; it can easily be done as a follow-up. This would also
include the scoping rules, which I don't think a regular type variable
implements either.
## Test Plan
Add new mdtest cases and update existing test cases.
Ran this branch on pyx, no new diagnostics.
### Ecosystem analysis
There's a case involving an annotated assignment like the following:
```py
type CustomType[P] = Callable[...]
def value[**P](...): ...
def another[**P](...):
    target: CustomType[P] = value
```
The type of `value` is a callable with a ParamSpec that's bound to
`value`; `CustomType` is a type alias for a callable, and the `P` used in
its specialization is bound to `another`. Now, ty infers the type of
`target` to be the same as `value` and does not use the declared type
`CustomType[P]`. [This is the
assignment](0980b9d9ab/src/async_utils/gen_transform.py (L108))
that I'm referring to, which then leads to errors in downstream usage.
Pyright and mypy do seem to use the declared type.
There are multiple diagnostics in `dd-trace-py` that require support
for `cls`.
I'm seeing a `Divergent` type for an example like the following, which
~~I'm not sure why, I'll look into it tomorrow~~ is because of a cycle as
mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974:
```py
from typing import Callable
def decorator[**P](c: Callable[P, int]) -> Callable[P, str]: ...
@decorator
def func(a: int) -> int: ...
# ((a: int) -> str) | ((a: Divergent) -> str)
reveal_type(func)
```
I ~~need to look into why the parameters are not being specialized
through multiple decorators in the following code~~ think this is also
because of the cycle mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974 and
the fact that we don't support `staticmethod` properly:
```py
from contextlib import contextmanager
class Foo:
    @staticmethod
    @contextmanager
    def method(x: int):
        yield
foo = Foo()
# ty: Revealed type: `() -> _GeneratorContextManager[Unknown, None, None]` [revealed-type]
reveal_type(foo.method)
```
There's an issue in `starlette` related to `Protocol`s that are generic
over a `ParamSpec`, which might be related to
https://github.com/astral-sh/ty/issues/1635 but I'm not sure. Here's a
minimal example to reproduce:
<details><summary>Code snippet:</summary>
<p>
```py
from collections.abc import Awaitable, Callable, MutableMapping
from typing import Any, Callable, ParamSpec, Protocol
P = ParamSpec("P")
Scope = MutableMapping[str, Any]
Message = MutableMapping[str, Any]
Receive = Callable[[], Awaitable[Message]]
Send = Callable[[Message], Awaitable[None]]
ASGIApp = Callable[[Scope, Receive, Send], Awaitable[None]]
_Scope = Any
_Receive = Callable[[], Awaitable[Any]]
_Send = Callable[[Any], Awaitable[None]]
# Since `starlette.types.ASGIApp` type differs from `ASGIApplication` from `asgiref`
# we need to define a more permissive version of ASGIApp that doesn't cause type errors.
_ASGIApp = Callable[[_Scope, _Receive, _Send], Awaitable[None]]
class _MiddlewareFactory(Protocol[P]):
    def __call__(
        self, app: _ASGIApp, *args: P.args, **kwargs: P.kwargs
    ) -> _ASGIApp: ...

class Middleware:
    def __init__(
        self, factory: _MiddlewareFactory[P], *args: P.args, **kwargs: P.kwargs
    ) -> None:
        self.factory = factory
        self.args = args
        self.kwargs = kwargs

class ServerErrorMiddleware:
    def __init__(
        self,
        app: ASGIApp,
        value: int | None = None,
        flag: bool = False,
    ) -> None:
        self.app = app
        self.value = value
        self.flag = flag

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ...
# ty: Argument to bound method `__init__` is incorrect: Expected `_MiddlewareFactory[(...)]`, found `<class 'ServerErrorMiddleware'>` [invalid-argument-type]
Middleware(ServerErrorMiddleware, value=500, flag=True)
```
</p>
</details>
### Conformance analysis
> ```diff
> -constructors_callable.py:36:13: info[revealed-type] Revealed type: `(...) -> Unknown`
> +constructors_callable.py:36:13: info[revealed-type] Revealed type: `(x: int) -> Unknown`
> ```
Requires return type inference i.e.,
https://github.com/astral-sh/ruff/pull/21551
> ```diff
> +constructors_callable.py:194:16: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:194:22: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:195:4: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | int]`
> +constructors_callable.py:195:9: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> ```
I might need to look into why this is happening...
> ```diff
> +generics_defaults.py:79:1: error[type-assertion-failure] Type `type[Class_ParamSpec[(str, int, /)]]` does not match asserted type `<class 'Class_ParamSpec'>`
> ```
which occurs on the following code:
```py
DefaultP = ParamSpec("DefaultP", default=[str, int])
class Class_ParamSpec(Generic[DefaultP]): ...
assert_type(Class_ParamSpec, type[Class_ParamSpec[str, int]])
```
It's occurring because there's no equivalence relationship defined
between `ClassLiteral` and `KnownInstanceType::TypeGenericAlias`, which
is what these two types are.
Everything else looks good to me!
When converting a class (whether specialized or not) into a `Callable`
type, we should carry through any generic context that the constructor
has. This includes both the generic context of the class itself (if it's
generic) and of the constructor methods (if they are separately
generic).
To help test this, this also updates the `generic_context` extension
function to work on `Callable` types and unions; and adds a new
`into_callable` extension function that works just like
`CallableTypeOf`, but on value forms instead of type forms.
Pulled this out of #21551 for separate review.
## Summary
Closes https://github.com/astral-sh/ty/issues/957
As explained in https://github.com/astral-sh/ty/issues/957, literal
union types for recursively defined values can be widened early to
speed up the convergence of fixed-point iterations.
This PR achieves this by embedding a marker in `UnionType` that
distinguishes whether a value is recursively defined.
This also allows us to identify values that are not recursively
defined, so I've increased the limit on the number of elements in a
literal union type for such values.
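A sketch of the kind of recursively defined value this affects (the widened
type is my expectation):
```py
def cond() -> bool:
    raise NotImplementedError

x = 0
while cond():
    # `x` is defined in terms of itself, so without early widening the
    # fixed-point iteration keeps growing a literal union
    # (`Literal[0] | Literal[1] | Literal[2] | ...`) before converging.
    x = x + 1

reveal_type(x)  # expected: int
```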
Edit: while this PR doesn't provide the significant performance
improvement initially hoped for, it does have the benefit of allowing
the number of elements in a literal union to be raised above the salsa
limit, and indeed mypy_primer results revealed that a literal union of
220 elements was actually being used.
## Test Plan
`call/union.md` has been updated
Fixes https://github.com/astral-sh/ty/issues/1587
## Summary
Perform cycle normalization on typevar bounds and constraints (similar
to how it was already done for typevar defaults) in order to ensure
convergence in cyclic cases.
There might be another fix here that could avoid the cycle in many more
cases, where we don't eagerly evaluate typevar bounds/constraints on
explicit specialization, but just accept the given specialization and
later evaluate to see whether we need to emit a diagnostic on it. But
the current fix here is sufficient to solve the problem and matches the
patterns we use to ensure cycle convergence elsewhere, so it seems good
for now; left a TODO for the other idea.
This fix is sufficient to make us not panic, but not sufficient to get
the semantics fully correct; see the TODOs in the tests. I have ideas
for fixing that as well, but it seems worth at least getting this in to
fix the panic.
## Test Plan
Test that previously panicked now does not.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
This makes auto-import include modules in suggestions.
In this initial implementation, we permit this to include submodules as
well. This is in contrast to what we do in `import ...` completions.
It's easy to change this behavior, but I think it'd be interesting to
run with this for now to see how well it works.
The existing importer functionality always required
an import request with a module and a member in that
module. But we want to be able to insert import statements
for a module itself and not any members in the module.
This is basically changing `member: &str` to an
`Option<&str>` and fixing the fallout in a way that
makes sense for module-only imports.
I think changes to this value are generally noise. It's hard to tell
what it means and it isn't especially actionable. We already have an
eval running in CI for completion ranking, so I don't think it's
terribly important to care about ranking here in e2e tests _generally_.
A completion lacking a module reference doesn't necessarily mean that
the symbol is defined within the current module. I believe the intent
here is that it means that no import is required to use it.
These are all improvements, with one slight regression on `reveal_type`
ranking. The previous completions offered were:
```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: _sqlite3)
SupportsGetItemViewable (module: _typeshed)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 8/10)
_remove_original_values (module: _osx_support)
_remove_universal_flags (module: _osx_support)
-----
found 10 completions
```
And now they are:
```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3.dbapi2)
removeHandler (module: unittest)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 9/9)
-----
found 9 completions
```
Some completions were removed (because they are now considered
unexported) and some were added (likely due to better re-export support).
This particular case probably warrants more special attention anyway.
So I think this is fine. (It's only a one-ranking regression.)
This applies recursively. So if *any* component of a module name starts
with a `_`, then symbols from that module are excluded from auto-import.
The exception is when it's a module within first-party code; then we
want to include it in auto-import.
Note that the `Deprecated` symbols from `importlib.metadata` are no
longer offered because 1) `importlib.metadata` defines `__all__` and 2)
the `Deprecated` symbols aren't in it. These seem to not be a part of
its public API according to the docs, so this seems right to me.
This commit (mostly) re-implements the support for `__all__` in
ty-proper, but inside the auto-import AST scanner.
When `__all__` isn't present in a module, we fall back to conventions to
determine whether a symbol is exported or not:
https://docs.python.org/3/library/index.html
However, in keeping with current practice for non-auto-import
completions, we continue to provide sunder and dunder names as
re-exports.
When `__all__` is present, we respect it strictly. That is, a symbol is
exported *if and only if* it's in `__all__`. This is somewhat stricter
than pylance seemingly is. I felt like it was a good idea to start here,
and we can relax it based on user demand (perhaps through a setting).
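For example (module and symbol names are made up):
```py
# lib.py
__all__ = ["public_helper"]

def public_helper() -> None: ...
def other_helper() -> None: ...

# In another file, typing `public_hel<CURSOR>` can offer an auto-import of
# `lib.public_helper`, but `other_helper` is never suggested because it is
# not listed in `__all__`.
```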
This simplifies the existing visitor by DRYing it up slightly.
We also add tests for the existing functionality. In particular,
we want to add support for re-export conventions, and that
warrants more careful testing.
## Summary
I realized we don't really test `DefinitionKind::ImportFromSubmodule` in
the IDE at all, so here's a bunch of tests, just recording our current
behaviour.
## Test Plan
*stares at the camera*
## Summary
I have no idea what I'm doing with the fix (all the interesting stuff is
in the second commit).
The basic problem is that the compiler emits the diagnostic:
```
x: "foobar"
    ^^^^^^
```
The suppression code-action then hands the end of that range to
`Tokens::after`, which panics if it is handed an offset that falls in
the middle of a token.
Fixes https://github.com/astral-sh/ty/issues/1748
## Test Plan
Many tests added (only the e2e test matters).
## Summary
This makes the importing file a required argument to module resolution.
If the fast-path cached query fails to resolve the module, we take the
slow-path, uncached (it could be cached if we want)
`desperately_resolve_module`, which walks up from the importing file
until it finds a `pyproject.toml` (an arbitrary decision; we could try
every ancestor directory), at which point it makes one last desperate
attempt to use that directory as a search path. We do not continue
walking up once we've found a `pyproject.toml` (also an arbitrary
decision; we could keep going up).
Running locally, this fixes every broken-for-workspace-reasons import in
pyx's workspace!
* Fixes https://github.com/astral-sh/ty/issues/1539
* Improves https://github.com/astral-sh/ty/issues/839
## Test Plan
The workspace tests see a huge improvement on most absolute imports.
Fixes https://github.com/astral-sh/ty/issues/1716.
## Test plan
I added a corpus snippet that causes us to panic on `main` (I tested by
running `cargo run -p ty_python_semantic --test=corpus` without the fix
applied).
## Summary
This PR re-implements [return-in-generator
(B901)](https://docs.astral.sh/ruff/rules/return-in-generator/#return-in-generator-b901)
for async generators as a semantic syntax error. This is not a syntax
error for sync generators, so we'll need to preserve both the lint rule
and the syntax error in this case.
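For reference, the async case that is now a semantic syntax error looks like
this (CPython itself rejects it at compile time):
```py
async def gen():
    yield 1
    return 42  # SyntaxError: 'return' with value in async generator
```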
It also updates B901 and the new implementation to catch cases where the
generator's `yield` or `yield from` expression is part of another
statement, as in:
```py
def foo():
    return (yield)
```
These were previously not caught because we only looked for
`Stmt::Expr(Expr::Yield)` in `visit_stmt` instead of visiting `yield`
expressions directly. I think this modification is within the spirit of
the rule and safe to try out since the rule is in preview.
## Test Plan
I have written tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Star-imports do not just affect the state of symbols that they pull in;
they can also affect the state of members that are associated with those
symbols. For example, if `obj.attr` was previously narrowed from `int |
None` to `int`, and a star-import now overwrites `obj`, then the
narrowing on `obj.attr` should be "reset".
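A sketch of the behavior (the module name and revealed types are
illustrative):
```py
class Obj:
    attr: int | None = None

obj = Obj()

if obj.attr is not None:
    reveal_type(obj.attr)  # int
    from other_module import *  # may rebind `obj`
    reveal_type(obj.attr)  # int | None -- the narrowing has been reset
```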
This PR keeps track of the state of associated members during star
imports and properly models the flow of their corresponding state
through the control flow structure that we artificially create for
star-imports.
See [this
comment](https://github.com/astral-sh/ty/issues/1355#issuecomment-3607125005)
for an explanation of why this caused ty to see certain `asyncio` symbols
as not being accessible on Python 3.14.
closes https://github.com/astral-sh/ty/issues/1355
## Ecosystem impact
```diff
async-utils (https://github.com/mikeshardmind/async-utils)
- src/async_utils/bg_loop.py:115:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
- Found 30 diagnostics
+ Found 29 diagnostics
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ mitmproxy/utils/asyncio_utils.py:96:60: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
- test/conftest.py:37:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
```
All of these seem to be correct; they give us a different type for
`asyncio` symbols that are now imported from different
`sys.version_info` branches (where we previously failed to recognize
some of these as statically true/false).
```diff
dd-trace-py (https://github.com/DataDog/dd-trace-py)
- ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Coroutine[Any, Any, _T@create_task] | Generator[Any, None, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
+ ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Generator[Any, None, _T@create_task] | Coroutine[Any, Any, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
```
Similar, but only results in a diagnostic change.
## Test Plan
Added a regression test
This fixes a non-determinism that we were seeing in the constraint set
tests in https://github.com/astral-sh/ruff/pull/21715.
In this test, we create the following constraint set, and then try to
create a specialization from it:
```
(T@constrained_by_gradual_list = list[Base])
∨
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
```
That is, `T` is either specifically `list[Base]`, or it's any `list`.
Our current heuristics say that, absent other restrictions, we should
specialize `T` to the more specific type (`list[Base]`).
In the correct test output, we end up creating a BDD that looks like
this:
```
(T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ (Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```
In the incorrect output, the BDD looks like this:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```
The difference is the ordering of the two individual constraints. Both
constraints appear in the first BDD, but the second BDD only contains `T
is any list`. If we were to force the second BDD to contain both
constraints, it would look like this:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ never
```
This is the standard shape for an OR of two constraints. However! Those
two constraints are not independent of each other! If `T` is
specifically `list[Base]`, then it's definitely also "any `list`". From
that, we can infer the contrapositive: that if `T` is not any list, then
it cannot be `list[Base]` specifically. When we encounter impossible
situations like that, we prune that path in the BDD, and treat it as
`false`. That rewrites the second BDD to the following:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
┡━₁ never <-- IMPOSSIBLE, rewritten to never
└─₀ never
```
We then would see that that BDD node is redundant, since both of its
outgoing edges point at the `never` node. Our BDDs are _reduced_, which
means we have to remove that redundant node, resulting in the BDD we saw
above:
```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never <-- redundant node removed
```
The end result is that we were "forgetting" about the `T = list[Base]`
constraint, but only for some BDD variable orderings.
To fix this, I'm leaning into the fact that our BDDs really do need to
"remember" all of the constraints that they were created with. Some
combinations might not be possible, but we now have the sequent map,
which is quite good at detecting and pruning those.
So now our BDDs are _quasi-reduced_, which just means that redundant
nodes are allowed. (At first I was worried that allowing redundant nodes
would be an unsound "fix the glitch". But it turns out they're real!
[This](https://ieeexplore.ieee.org/abstract/document/130209) is the
paper that introduces them, though it's very difficult to read. Knuth
mentions them in §7.1.4 of
[TAOCP](https://course.khoury.northeastern.edu/csu690/ssl/bdd-knuth.pdf),
and [this paper](https://par.nsf.gov/servlets/purl/10128966) has a nice
short summary of them in §2.)
While we're here, I've added a bunch of `debug` and `trace` level log
messages to the constraint set implementation. I was getting tired of
having to add these by hand over and over. To enable them, just set
`TY_LOG` in your environment, e.g.
```sh
env TY_LOG=ty_python_semantic::types::constraints::SequentMap=trace ty check ...
```
[Note, this has an `internal` label because we are still not using
`specialize_constrained` in anything user-facing yet.]
## Summary
For a type alias like the one below, where `UnknownClass` is something
with a dynamic type, we previously lost track of the fact that this
dynamic type was explicitly specialized *with a type variable*. If that
alias is then later explicitly specialized itself (`MyAlias[int]`), we
would miscount the number of legacy type variables and emit an
`invalid-type-arguments` diagnostic
([playground](https://play.ty.dev/886ae6cc-86c3-4304-a365-510d29211f85)).
```py
T = TypeVar("T")
MyAlias: TypeAlias = UnknownClass[T] | None
```
The solution implemented here is not pretty, but we can hopefully get
rid of it via https://github.com/astral-sh/ty/issues/1711. Also, once we
properly support `ParamSpec` and `Concatenate`, we should be able to
remove some of this code.
This addresses many of the `invalid-type-arguments` false-positives in
https://github.com/astral-sh/ty/issues/1685. With this change, there are
still some diagnostics of this type left. Instead of implementing even
more (rather sophisticated) workarounds for these cases as well, it
might be much easier to wait for full `ParamSpec`/`Concatenate` support
and then try again.
A disadvantage of this implementation is that we lose track of some
`@Todo` types and replace them with `Unknown`. We could spend more
effort and try to preserve them, but I'm unsure if this is the best use
of our time right now.
## Test Plan
New Markdown tests.
## Summary
Implement default-specialization of generic type aliases (implicit or
PEP-613) if they are used in a type expression without an explicit
specialization.
closes https://github.com/astral-sh/ty/issues/1690
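A sketch of the new behavior (the revealed type is my expectation):
```py
from typing_extensions import TypeAlias, TypeVar

T = TypeVar("T", default=str)
MyList: TypeAlias = list[T]

def f(x: MyList) -> None:  # no explicit specialization given
    reveal_type(x)  # expected: list[str] -- the typevar default is applied
```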
## Typing conformance
```diff
-generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, DefaultStrT]`
```
That's exactly what we want ✔️
All other tests in this file pass as well, with the exception of this
assertion, which is just wrong (at least according to our
interpretation, `type[Bar] != <class 'Bar'>`). I checked that we do
correctly default-specialize the type parameter which is not displayed
in the diagnostic that we raise.
```py
class Bar(SubclassMe[int, DefaultStrT]): ...
assert_type(Bar, type[Bar[str]]) # ty: Type `type[Bar[str]]` does not match asserted type `<class 'Bar'>`
```
## Ecosystem impact
Looks like I should have included this last week 😎
## Test Plan
Updated pre-existing tests and add a few new ones.
## Summary
This PR implements a syntax error for the case where a default type
parameter is followed by a non-default type parameter.
https://github.com/astral-sh/ruff/issues/17412#issuecomment-3584088217
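For illustration, the new error covers cases like the following (the error
wording is approximate):
```py
class Foo[T = int, U]: ...  # error: non-default type parameter follows a default type parameter

def f[T = int, U](x: U) -> None: ...  # same error in a function's type parameter list
```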
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Signed-off-by: 11happy <soni5happy@gmail.com>
This adds a new `suppression` module to the `ruff_linter` crate, similar
to the suppression
module for ty, to parse comments for ruff suppression directives, such
as `# ruff: disable[CODE]`.
## Summary
Fixes #21750 and a related bug in `PLE1142`. We were not properly
considering generators to be valid `await` contexts, which caused the
`F704` issue. One of the tests I added for this also uncovered an issue
in `PLE1142` for comprehensions nested within async generators because
we were only checking the current scope rather than traversing the
nested context.
## Test Plan
Both of these rules are implemented as semantic syntax errors, so I
added tests (and fixes) in both Ruff and ty.
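For the `F704` half, my understanding of the shape of the fixed case is
roughly the following (this example is an assumption based on the description
above):
```py
async def f(coros):
    # `await` appears inside comprehension/generator scopes here; because the
    # enclosing function is async, these are valid await contexts and should
    # not be flagged by F704.
    results = [await c for c in coros]
    gen = (await c for c in coros)  # an asynchronous generator expression
    return results
```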