In the following example, there are two occurrences of `typing.Self`,
one for `Foo.foo` and one for `Bar.bar`:
```py
from typing import Self, reveal_type
class Foo[T]:
def foo(self: Self) -> T:
raise NotImplementedError
class Bar:
def bar(self: Self, x: Foo[Self]):
# SHOULD BE: bound method Foo[Self@bar].foo() -> Self@bar
# revealed: bound method Foo[Self@bar].foo() -> Foo[Self@bar]
reveal_type(x.foo)
def f[U: Bar](x: Foo[U]):
# revealed: bound method Foo[U@f].foo() -> U@f
reveal_type(x.foo)
```
When accessing a bound method, we replace any occurrences of `Self` with
the bound `self` type.
We were doing this correctly for the second reveal. We would first apply
the specialization, getting `(self: Self@foo) -> U@f` as the signature
of `x.foo`. We would then bind the `self` parameter, substituting
`Self@foo` with `Foo[U@f]` as part of that. The return type was already
specialized to `U@f`, so that substitution had no further effect on the
type that we revealed.
In the first reveal, we would follow the same process, but we confused
the two occurrences of `Self`. We would first apply the specialization,
getting `(self: Self@foo) -> Self@bar` as the method signature. We would
then try to bind the `self` parameter, substituting `Self@foo` with
`Foo[Self@bar]`. However, because we didn't distinguish the two separate
`Self`s, we applied that substitution to the return type as well as to
the `self` parameter, turning the return type into `Foo[Self@bar]`.
The fix is to track which particular `Self` we're trying to substitute
when applying the type mapping.
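A minimal Python sketch of the idea, using hypothetical `SelfType` and `substitute` names rather than ty's actual data structures: each `Self` records which method binds it, and binding the `self` parameter only replaces occurrences whose binder matches.
```py
from dataclasses import dataclass


@dataclass(frozen=True)
class SelfType:
    # The method that binds this particular `Self`, e.g. "Foo.foo" or "Bar.bar".
    binder: str


def substitute(ty, target: SelfType, replacement):
    """Replace only the `Self` bound by `target`, leaving any other `Self` alone."""
    if isinstance(ty, SelfType):
        return replacement if ty.binder == target.binder else ty
    if isinstance(ty, tuple):  # toy encoding of a structured type like `Foo[...]`
        return tuple(substitute(t, target, replacement) for t in ty)
    return ty


# `(self: Self@foo) -> Self@bar` from the first reveal, as a toy (params, return) pair:
self_foo, self_bar = SelfType("Foo.foo"), SelfType("Bar.bar")
signature = ((self_foo,), self_bar)

# Binding `self` substitutes `Self@foo` with `Foo[Self@bar]`...
bound = substitute(signature, self_foo, ("Foo", self_bar))
# ...but leaves `Self@bar` in the return type untouched.
assert bound[1] is self_bar
```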
Fixes https://github.com/astral-sh/ty/issues/1713
I tried to get Claude to come up with tests, but most of them weren't very
interesting. I think these two additional types of assignments might be worth
having, though.
Parenthesizing these seems redundant. I would prefer our old formatting, more
like this:
```py
def ddb():
sql = (
lambda var, table, n=N: f"""
CREATE TABLE {table} AS
SELECT ROW_NUMBER() OVER () AS id, {var}
FROM (
SELECT {var}
FROM RANGE({n}) _ ({var})
ORDER BY RANDOM()
)
"""
)
```
where the `f"""` serves as the parentheses, instead of the current:
```py
def ddb():
sql = lambda var, table, n=N: (
f"""
CREATE TABLE {table} AS
SELECT ROW_NUMBER() OVER () AS id, {var}
FROM (
SELECT {var}
FROM RANGE({n}) _ ({var})
ORDER BY RANDOM()
)
"""
)
```
This case ends up too long, at 108 columns:
```py
class C:
def foo():
if True:
transaction_count = self._query_txs_for_range(
get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
),
)
```
Instead, it should be formatted like this, fitting within 88 columns:
```py
class C:
def foo():
if True:
transaction_count = self._query_txs_for_range(
get_count_fn=lambda from_ts, to_ts, _chain_id=chain_id: (
db_evmtx.count_transactions_in_range(
chain_id=_chain_id,
from_ts=from_ts,
to_ts=to_ts,
)
),
)
```
We can fix this by removing the `has_own_parentheses` check in the new lambda
formatting, but that breaks other cases, so we might want to preserve it. In this
specific ecosystem case, the project has a `noqa: E501` comment, so this seems
to be what they want anyway, although we don't know that when formatting.
I would expect this to format as:
```py
class C:
_is_recognized_dtype: Callable[[DtypeObj], bool] = lambda x: (
lib.is_np_dtype(x, "M") or isinstance(x, DatetimeTZDtype)
)
```
instead of the current:
```py
class C:
_is_recognized_dtype: Callable[[DtypeObj], bool] = (
lambda x: lib.is_np_dtype(x, "M") or isinstance(x, DatetimeTZDtype)
)
```
```diff
-name = re.sub(r"[^\x21\x23-\x5b\x5d-\x7e]...............", lambda m: (
- f"\\{m.group(0)}"
- ), p["name"])
+name = re.sub(
+ r"[^\x21\x23-\x5b\x5d-\x7e]...............",
+ lambda m: (f"\\{m.group(0)}"),
+ p["name"],
+)
```
The second format is actually fine, but the first one obviously looks horrible,
even if it were stable.
This formats as:
```py
class C:
function_dict: Dict[Text, Callable[[CRFToken], Any]] = {
CRFEntityExtractorOptions.POS2: lambda crf_token: crf_token.pos_tag[
:2
] if crf_token.pos_tag is not None else None,
}
```
whereas I think it should look like:
```py
class C:
function_dict: Dict[Text, Callable[[CRFToken], Any]] = {
CRFEntityExtractorOptions.POS2: lambda crf_token: (
            crf_token.pos_tag[:2] if crf_token.pos_tag is not None else None
        ),
}
```
Here are a bunch of (variously failing and passing) mdtests that reflect
the kinds of issues people encounter when running ty over an entire
workspace without sufficient hand-holding (especially because in the IDE
it is unclear *how* to provide that hand-holding).
The `Display` implementation for constraint sets is brittle, and
deserves a rethink. But later! It's perfectly fine for printf debugging;
we just shouldn't be writing mdtests that depend on any particular
rendering details. Most of these tests can be replaced with an
equivalence check that actually validates that the _behavior_ of two
constraint sets is identical.
## Summary
Fixes false positives in SIM222 and SIM223 where truthiness was
incorrectly assumed for `tuple(x)`, `list(x)`, `set(x)` when `x` is not
iterable.
Fixes #21473.
## Problem
`Truthiness::from_expr` recursively called itself on arguments to
iterable initializers (`tuple`, `list`, `set`) without checking if the
argument is iterable, causing false positives for cases like `tuple(0)
or True` and `tuple("") or True`.
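For illustration, the patterns in question look like this (mirroring the examples above; the comments describe the intended behavior after this change):
```py
tuple(0) or True   # `tuple(0)` raises TypeError, so no truthiness should be assumed
tuple("") or True  # string-literal argument: truthiness is no longer assumed either
```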
## Approach
Added `is_definitely_not_iterable` helper and updated
`Truthiness::from_expr` to return `Unknown` for non-iterable arguments
(numbers, booleans, None) and string literals when called with iterable
initializers, preventing incorrect truthiness assumptions.
## Test Plan
Added test cases to `SIM222.py` and `SIM223.py` for `tuple("")`,
`tuple(0)`, `tuple(1)`, `tuple(False)`, and `tuple(None)` with `or True`
and `and False` patterns.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Marks fixes as unsafe when they change return types (`None` → `Path`,
`str`/`bytes` → `Path`, `str` → `Path`), except when the call is a
top-level expression.
Fixes #21431.
## Problem
Fixes for `os.rename`, `os.replace`, `os.getcwd`/`os.getcwdb`, and
`os.readlink` were marked safe despite changing return types, which can
break code that uses the return value.
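A small sketch of the breakage in question, using PTH109's `os.getcwd()` → `Path.cwd()` rewrite as the example (the function name below is illustrative):
```py
import os


def before_fix() -> None:
    cwd = os.getcwd()   # fix rewrites this to `Path.cwd()`: `cwd` changes from `str` to `Path`
    print(cwd.upper())  # a `str`-only method, so the changed return type breaks this caller
    os.replace("old", "new")  # top-level expression: the return value is unused, so this fix stays safe
```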
## Approach
Added `is_top_level_expression_call` helper to detect when a call is a
top-level expression (return value unused). Updated
`check_os_pathlib_two_arg_calls` and `check_os_pathlib_single_arg_calls`
to mark fixes as unsafe unless the call is a top-level expression.
Updated PTH109 to use the helper for applicability determination.
## Test Plan
Updated snapshots for `preview_full_name.py`, `preview_import_as.py`,
`preview_import_from.py`, and `preview_import_from_as.py` to reflect
unsafe markers.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Previously, the code action to do auto-import on a pre-existing symbol
assumed that the auto-importer would always generate an import
statement. But sometimes an import statement already exists.
A good example of this is the following snippet:
```
import warnings
@deprecated
def myfunc(): pass
```
Specifically, `deprecated` exists in `warnings` but isn't currently
imported. A code action to fix this could feasibly do two
transformations here. One is:
```
import warnings
@warnings.deprecated
def myfunc(): pass
```
Another is:
```
from warnings import deprecated
import warnings
@deprecated
def myfunc(): pass
```
The existing auto-import infrastructure chooses the former, since it
reuses a pre-existing import statement. But this PR chooses the latter
for the case of a code action. I'm not 100% sure this is the correct
choice, but it seems to defer more strongly to what the user has typed.
That is, they want to use the symbol unqualified because that's what they
typed, so we should add the necessary import statement to make that work.
Fixes astral-sh/ty#1668
This works by adding a third module resolution mode that lets the caller
opt into _some_ shadowing of modules that is otherwise not allowed (for
`typing` and `typing_extensions`).
Fixes astral-sh/ty#1658
## Summary
If you manage to create a `typing.GenericAlias` instance without us
knowing how it was created, then we don't know what to do with it in
a type annotation. So it's better to be explicit and show an error
instead of failing silently with a `@Todo` type.
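A hypothetical example of the situation (whether this exact snippet hits the new diagnostic depends on ty's inference; it is only meant to show an alias whose construction we can't trace):
```py
import types

# Constructing the alias dynamically, rather than writing `list[int]` directly.
Alias = types.GenericAlias(list, (int,))

def f(x: Alias) -> None:  # ty now reports an explicit error here instead of a silent `@Todo` type
    ...
```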
## Test Plan
* New Markdown tests
* Zero ecosystem impact
## Summary
We had tests for this already, but they used generic classes that were
bivariant in their type parameter, and so this case wasn't captured.
closes https://github.com/astral-sh/ty/issues/1702
## Test Plan
Updated Markdown tests
## Summary
These projects from `mypy_primer` were missing from both `good.txt` and
`bad.txt` for some reason. I thought about writing a script that would
verify that `good.txt` + `bad.txt` = `mypy_primer.projects`, but that's
not completely trivial since there are projects like `cpython` only
appear once in `good.txt`. Given that we can hopefully soon get rid of
both of these files (and always run on all projects), it's probably not
worth the effort. We are usually notified of all `mypy_primer` changes.
## Test Plan
CI on this PR