This is a port of the logic in https://github.com/astral-sh/uv/pull/7691
The basic idea is that we use CONDA_DEFAULT_ENV as a signal for whether
CONDA_PREFIX is just the ambient system conda install or an environment
the user has explicitly activated. If the former, the conda environment is
treated like a system install (lowest priority). If the latter, it is
treated like an activated venv (taking priority over everything except an
actual activated venv).
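A rough sketch of the described check (hedged: the exact heuristic lives in the linked uv PR; treating an unset or `base` value of `CONDA_DEFAULT_ENV` as the ambient install is an assumption here, and the function name is illustrative):

```py
import os

def conda_prefix_kind() -> str | None:
    """Illustrative only: classify CONDA_PREFIX using CONDA_DEFAULT_ENV."""
    prefix = os.environ.get("CONDA_PREFIX")
    if not prefix:
        return None
    default_env = os.environ.get("CONDA_DEFAULT_ENV")
    # No explicit activation (or the default "base" env): treat the conda
    # install like a system environment, with the lowest priority.
    if default_env in (None, "", "base"):
        return "system"
    # An explicitly activated, named environment: treat it like an
    # activated venv, beaten only by an actual activated venv.
    return "activated"
```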
Fixes https://github.com/astral-sh/ty/issues/611
## Summary
Closes: https://github.com/astral-sh/ty/issues/669
(This turned out to be simpler than I thought :))
## Test Plan
Update existing test cases.
### Ecosystem report
Most of the new diagnostics arise because ty now infers more precise
return types for overloaded calls, and a lot of the relevant types are
defined using type aliases. Here are some examples:
<details><summary>Details</summary>
<p>
> attrs (https://github.com/python-attrs/attrs)
> + tests/test_make.py:146:14: error[unresolved-attribute] Type
`Literal[42]` has no attribute `default`
> - Found 555 diagnostics
> + Found 556 diagnostics
This is accurate now that we infer the type as `Literal[42]` instead of
`Unknown` (Pyright infers it as `int`)
> optuna (https://github.com/optuna/optuna)
> + optuna/_gp/search_space.py:181:53: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `tuple[int | float, int | float]`, found `tuple[Unknown |
ndarray[Unknown, <class 'float'>], Unknown | ndarray[Unknown, <class
'float'>]]`
> + optuna/_gp/search_space.py:181:83: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `int | float`, found `Unknown | ndarray[Unknown, <class
'float'>]`
> + tests/gp_tests/test_search_space.py:109:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `tuple[int | float, int
| float]`, found `Unknown | ndarray[Unknown, <class 'float'>]`
> + tests/gp_tests/test_search_space.py:110:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `int | float`, found
`Unknown | ndarray[Unknown, <class 'float'>]`
> - Found 559 diagnostics
> + Found 563 diagnostics
Same as above: ty now infers a more precise type like
`Unknown | ndarray[tuple[int, int], <class 'float'>]` instead of just
`Unknown`.
> jinja (https://github.com/pallets/jinja)
> + src/jinja2/bccache.py:298:39: error[invalid-argument-type] Argument
to bound method `write_bytecode` is incorrect: Expected `IO[bytes]`,
found `_TemporaryFileWrapper[str]`
> - Found 186 diagnostics
> + Found 187 diagnostics
This requires support for type aliases to match the correct overload.
> hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
> + src/hydra_zen/wrapper/_implementations.py:945:16:
error[invalid-return-type] Return type does not match returned value:
expected `DataClass_ | type[@Todo(type[T] for protocols)] | ListConfig |
DictConfig`, found `@Todo(unsupported type[X] special form) | (((...) ->
Any) & dict[Unknown, Unknown]) | (DataClass_ & dict[Unknown, Unknown]) |
dict[Any, Any] | (ListConfig & dict[Unknown, Unknown]) | (DictConfig &
dict[Unknown, Unknown]) | (((...) -> Any) & list[Unknown]) | (DataClass_
& list[Unknown]) | list[Any] | (ListConfig & list[Unknown]) |
(DictConfig & list[Unknown])`
> + tests/annotations/behaviors.py:60:28: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/behaviors.py:64:21: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:167:17: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:524:17:
error[unresolved-attribute] Type `<class 'int'>` has no attribute
`_target_`
> - Found 561 diagnostics
> + Found 566 diagnostics
Same as above; this requires support for type aliases to match the
correct overload.
> paasta (https://github.com/yelp/paasta)
> + paasta_tools/utils.py:4188:19: warning[redundant-cast] Value is
already of type `list[str]`
> - Found 888 diagnostics
> + Found 889 diagnostics
This is correct.
> colour (https://github.com/colour-science/colour)
> + colour/plotting/diagrams.py:448:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/diagrams.py:462:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/models.py:419:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:230:9: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:474:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:495:17: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:513:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> + colour/plotting/temperature.py:514:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> - Found 480 diagnostics
> + Found 488 diagnostics
Most of these are correct, except for the last two diagnostics, where I'm
not sure what's happening: the code indexes into an `np.ndarray` (which is
inferred correctly), but I think it might be picking up an incorrect
overload for the `__getitem__` method.
Scipy's diagnostics also require support for type aliases to pick the
correct overload.
</p>
</details>
While implementing partial stubs, I noticed that this `continue` in the
namespace-package code seemed erroneous, since the same `continue` didn't
work for partial stubs. Unfortunately I wasn't confident enough to push on
that hunch at the time; fortunately I remembered it, which made this an
easy fix.
The issue with the `continue` is that it bails out of the current search
path without testing any `.py` files. This breaks when, for example,
`google` and `google-stubs`/`types-google` are both in the same
site-packages directory: failing to find a module in `types-google` has us
completely skip over `google`!
Fixes https://github.com/astral-sh/ty/issues/520
Fixes https://github.com/astral-sh/ty/issues/1047
## Summary
This PR fixes how `KW_ONLY` is applied in dataclasses. Previously, the
sentinel leaked into subclasses and incorrectly marked their fields as
keyword-only; now it only affects fields declared in the same class.
```py
from dataclasses import dataclass, KW_ONLY

@dataclass
class D:
    x: int
    _: KW_ONLY
    y: str

@dataclass
class E(D):
    z: bytes

# This should work: x=1 (positional), z=b"foo" (positional), y="foo" (keyword-only)
E(1, b"foo", y="foo")

reveal_type(E.__init__)  # revealed: (self: E, x: int, z: bytes, *, y: str) -> None
```
## Test Plan
mdtests
Requires some iteration, but this includes the most tedious part --
threading a new concept of DisplaySettings through every type display
impl. Currently it only holds a boolean for multiline, but in the future
it could also take other things like "render to markdown" or "here's
your base indent if you make a newline".
For types which have exposed display functions I've left the old
signature as a compatibility polyfill to avoid having to audit everywhere
that prints types right off the bat. (Notably, I originally tried
rendering functions multiline unconditionally, and a ton of things that
clearly weren't ready for multi-line output churned, such as diagnostics.)
The only real use of this API in this PR is to multiline render function
types in hovers, which is the highest impact (see snapshot changes).
Fixes https://github.com/astral-sh/ty/issues/1000
This change rejiggers how we register globs for file watching with the
LSP client. Previously, we registered a few globs like `**/*.py`,
`**/pyproject.toml` and more. There were two problems with this
approach.
Firstly, it only watches files within the project root, but search paths
may be outside the project root, such as the virtualenv directory.
Secondly, there is variation on how tools interact with virtual
environments. In the case of uv, depending on its link mode, we might
not get any file change notifications after running `uv add foo` or
`uv remove foo`.
To remedy this, we instead just listen for file change notifications on
all files for all search paths. This simplifies the globs we use, but does
potentially increase the number of notifications we'll get. However, given
the somewhat simplistic interface supported by the LSP protocol, I think
this is unavoidable (unless we used our own file watcher, which has its
own considerable downsides). Moreover, this is seemingly consistent with
how `ty check --watch` works.
This also required moving file watcher registration to *after*
workspaces are initialized, or else we don't know what the right search
paths are.
This change is in service of #19883: for cache invalidation to work
correctly there, the LSP client needs to send notifications whenever a
dependency is added or removed. This change should make that possible.
I tried this patch with #19883 in addition to my work to activate Salsa
caching, and everything seems to work as I'd expect. That is,
completions no longer show stale results after a dependency is added or
removed.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1046
We special-case iteration of certain types because they may have a more
detailed tuple-spec. Now that type aliases are a distinct type variant,
we need to handle them as well.
I don't love that `Type::TypeAlias` means we have to remember to add a
case for it basically anywhere we are special-casing a certain kind of
type, but at the moment I don't have a better plan. It's another
argument for avoiding fallback cases in `Type` matches, which we usually
prefer; I've updated this match statement to be comprehensive.
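A minimal sketch of the kind of special-cased iteration this covers (the alias name is illustrative):

```py
type Pair = tuple[int, str]

def f(pair: Pair) -> None:
    # Unpacking should use the detailed tuple-spec of the aliased value,
    # so `x` and `y` get the individual element types.
    x, y = pair
```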
## Test Plan
Added mdtest.
`Type::TypeVar` now distinguishes whether the typevar in question is
inferable or not.
A typevar is _not inferable_ inside the body of the generic class or
function that binds it:
```py
def f[T](t: T) -> T:
    return t
```
The inferred type of `t` in the function body is `TypeVar(T,
NotInferable)`. This represents how e.g. assignability checks need to be
valid for all possible specializations of the typevar. Most of the
existing assignability/etc. logic only applies to non-inferable typevars.
Outside of the function body, the typevar is _inferable_:
```py
f(4)
```
Here, the parameter type of `f` is `TypeVar(T, Inferable)`. This
represents how e.g. assignability doesn't need to hold for _all_
specializations; instead, we need to find the constraints under which
this specific assignability check holds.
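A small sketch tying the two cases together (the behavior described in the comments is the intent of the change, not verified output):

```py
def f[T](t: T) -> T:
    # Inside the body, `T` is not inferable: whatever we do with `t` must
    # be valid for every possible specialization of `T`.
    return t

# At the call site, `T` is inferable: the assignability check looks for a
# specialization (here `T = int`) under which the argument is acceptable.
x = f(4)
```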
This is in support of starting to perform specialization inference _as
part of_ performing the assignability check at the call site.
In the [[POPL2015][]] paper, this concept is called _monomorphic_ /
_polymorphic_, but I thought _non-inferable_ / _inferable_ would be
clearer for us.
Depends on #19784
[POPL2015]: https://doi.org/10.1145/2676726.2676991
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
**Stacked on top of #19849; diff will include that PR until it is
merged.**
---
## Summary
As part of #19849, I noticed this fix could be implemented.
## Test Plan
Tests added based on CPython behaviour.
## Summary
This PR adds a new lint, `invalid-await`, for all sorts of reasons why
an object may not be `await`able, as discussed in astral-sh/ty#919.
Precisely, `__await__` is guarded against being missing, possibly
unbound, or improperly defined (expects additional arguments or doesn't
return an iterator).
Of course, diagnostics need to be fine-tuned. If `__await__` cannot be
called with no extra arguments, it indicates an error (or a quirk?) in
the method signature, not at the call site. Without any doubt, such an
object is not `Awaitable`, but I feel like talking about arguments for
an *implicit* call is a bit leaky.
I didn't reference any actual diagnostic messages in the lint
definition, because I want to hear feedback first.
Also, there's no mention of the actual required method signature for
`__await__` anywhere in the docs. The only reference I had is the
`typing` stub. I basically ended up linking `[Awaitable]` to ["must
implement
`__await__`"](https://docs.python.org/3/library/collections.abc.html#collections.abc.Awaitable),
which is insufficient on its own.
## Test Plan
The following code was tested:
```python
import asyncio
import typing

class Awaitable:
    def __await__(self) -> typing.Generator[typing.Any, None, int]:
        yield None
        return 5

class NoDunderMethod:
    pass

class InvalidAwaitArgs:
    def __await__(self, value: int) -> int:
        return value

class InvalidAwaitReturn:
    def __await__(self) -> int:
        return 5

class InvalidAwaitReturnImplicit:
    def __await__(self):
        pass

async def main() -> None:
    result = await Awaitable()  # valid
    result = await NoDunderMethod()  # `__await__` is missing
    result = await InvalidAwaitReturn()  # `__await__` returns `int`, which is not a valid iterator
    result = await InvalidAwaitArgs()  # `__await__` expects additional arguments and cannot be called implicitly
    result = await InvalidAwaitReturnImplicit()  # `__await__` returns `Unknown`, which is not a valid iterator

asyncio.run(main())
```
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This PR renames `ty.inlayHints.functionArgumentNames` to
`ty.inlayHints.callArgumentNames`, which covers both function calls and
class instantiations, i.e., it represents a generic call expression.
## Summary
This PR changes the default of `ty.inlayHints.*` settings to `true`.
I somehow missed this in my initial PR.
This is marked as `internal` because it's not yet released.
## Summary
For PEP 695 generic functions and classes, there is an extra "type
params scope" (a child of the outer scope, and wrapping the body scope)
in which the type parameters are defined; class bases and function
parameter/return annotations are resolved in that type-params scope.
This PR fixes some longstanding bugs in how we resolve name loads from
inside these PEP 695 type parameter scopes, and also defers type
inference of PEP 695 typevar bounds/constraints/default, so we can
handle cycles without panicking.
We were previously treating these type-param scopes as lazy nested
scopes, which is wrong. In fact they are eager nested scopes; the class
`C` here inherits `int`, not `str`, and previously we got that wrong:
```py
Base = int
class C[T](Base): ...
Base = str
```
But certain syntactic positions within type param scopes (typevar
bounds/constraints/defaults) are lazy at runtime, and we should use
deferred name resolution for them. This also means they can have cycles;
in order to handle that without panicking in type inference, we need to
actually defer their type inference until after we have constructed the
`TypeVarInstance`.
PEP 695 does specify that typevar bounds and constraints cannot be
generic, and that typevar defaults can only reference prior typevars,
not later ones. This reduces the scope of (valid from the type-system
perspective) cycles somewhat, although cycles are still possible (e.g.
`class C[T: list[C]]`). And this is a type-system-only restriction; from
the runtime perspective an "invalid" case like `class C[T: T]` actually
works fine.
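For illustration, both cases mentioned above (the second class name is changed here only to avoid a clash; the first is valid per PEP 695, the second is not):

```py
# Bounds are evaluated lazily at runtime, so this cycle is accepted by the
# interpreter and must not make type inference panic:
class C[T: list[C]]: ...

# Invalid from the type-system perspective, but it also runs fine, since
# the bound is never evaluated eagerly; flagging it is left to a future,
# separate diagnostic.
class D[T: T]: ...
```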
I debated whether to implement the PEP 695 restrictions as a way to
avoid some cycles up-front, but I ended up deciding against that; I'd
rather model the runtime name-resolution semantics accurately, and
implement the PEP 695 restrictions as a separate diagnostic on top.
(This PR doesn't yet implement those diagnostics, thus some `# TODO:
error` in the added tests.)
Introducing the possibility of cyclic typevars meant that typevar display
could stack overflow. For now I've handled this by simply removing
typevar details (bounds/constraints/default) from typevar display. This
impacts display of two kinds of types. If you `reveal_type(T)` on an
unbound `T` you now get just `typing.TypeVar` instead of
`typing.TypeVar("T", ...)` where `...` is the bound/constraints/default.
This matches pyright and mypy; pyrefly uses `type[TypeVar[T]]` which
seems a bit confusing, but does include the name. (We could easily
include the name without cycle issues, if there's a syntax we like for
that.)
It also means that displaying a generic function type like `def f[T:
int](x: T) -> T: ...` now displays as `f[T](x: T) -> T` instead of `f[T:
int](x: T) -> T`. This matches pyright and pyrefly; mypy does include
bound/constraints/defaults of typevars in function/callable type
display. If we wanted to add this, we would either need to thread a
visitor through all the type display code, or add a `decycle` type
transformation that replaced recursive reoccurrence of a type with a
marker.
## Test Plan
Added mdtests and modified existing tests to improve their correctness.
After this PR, there's only a single remaining py-fuzzer seed in the
0-500 range that panics! (Before this PR, there were 10; the fuzzer
likes to generate cyclic PEP 695 syntax.)
## Ecosystem report
It's all just the changes to `TypeVar` display.
This PR adds a type tag to the `CycleDetector` visitor (and its
aliases).
There are some places where we implement e.g. an equivalence check by
making a disjointness check. Both `is_equivalent_to` and
`is_disjoint_from` use a `PairVisitor` to handle cycles, but they should
not use the same visitor. I was finding it tedious to remember when it
was appropriate to pass on a visitor and when not to. This adds a
`PhantomData` type tag to ensure that we can't pass on one method's
visitor to a different method.
For `has_relation` and `apply_type_mapping`, we have an existing type
that we can use as the tag. For the other methods, I've added empty
structs (`Normalized`, `IsDisjointFrom`, `IsEquivalentTo`) to use as
tags.
## Summary
- Refactored `BLE001` logic for clarity and a minor speed-up.
- Improved documentation and comments (previously, the `BLE001` docs
claimed the rule catches bare `except:` clauses, but it doesn't).
- Fixed a false-positive bug with `from None` cause:
```python
# somefile.py
try:
    pass
except BaseException as e:
    raise e from None
```
### main branch
```
somefile.py:3:8: BLE001 Do not catch blind exception: `BaseException`
  |
1 | try:
2 |     pass
3 | except BaseException as e:
  |        ^^^^^^^^^^^^^ BLE001
4 |     raise e from None
  |
Found 1 error.
```
### this change
```
> cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
All checks passed!
```
## Test Plan
- Added a test case to cover `raise X from Y` clause
- Added a test case to cover `raise X from None` clause
This also reintroduces the `ResolvedDefinition::Module` variant because
reverse-engineering it in several places is a bit confusing. In an ideal
world we wouldn't have `ResolvedDefinition::FileWithRange` as it kinda
kills the ability to do richer analysis, so I want to chip away at its
scope wherever I can (currently it's used to point at asname parts of
import statements when doing `ImportAliasResolution::PreserveAliases`,
and also keyword arguments).
This also makes a kind of odd change to allow a hover to *only* produce
a docstring. This works around an oddity where hovering over a module
name in an import fails to resolve to a `ty` even though hovering over
uses of that imported name *does*.
The two fixed tests reflect the two interesting cases here.
## Summary
Fixes #19881. While I was here, I also made a couple of related tweaks
to the output format. First, we don't need to strip the `SyntaxError: `
prefix anymore since that's not added directly to the diagnostic message
after #19644. Second, we can use `secondary_code_or_id` to fall back on
the lint ID for syntax errors, which changes the `check_name` from
`syntax-error` to `invalid-syntax`. And then the main change requested
in the issue, prepending the `check_name` to the description.
## Test Plan
Existing tests and a new screenshot from GitLab:
<img width="362" height="113" alt="image"
src="https://github.com/user-attachments/assets/97654ad4-a639-4489-8c90-8661c7355097"
/>
This PR has several components:
* Introduce a `Docstring` string wrapper type that has `render_plaintext`
and `render_markdown` methods, to force docstring handlers to pick a
rendering format
* Implement [PEP-257](https://peps.python.org/pep-0257/) docstring
trimming for it
* The markdown rendering just renders the content in a plaintext
codeblock for now (followup work)
* Introduce a `DefinitionsOrTargets` type representing the partial
evaluation of `GotoTarget::get_definition_targets` to ideally stop at
getting `ResolvedDefinitions`
* Add `declaration_targets`, `definition_targets`, and `docstring`
methods to `DefinitionsOrTargets` for the 3 usecases we have for this
operation
* `docstring` is of course the key addition here; it uses the same basic
logic that `signature_help` was using: first check the goto-declaration
for docstrings, then check the goto-definition for docstrings.
* Refactor `signature_help` to use the new APIs instead of implementing
it itself
* Not fixed in this PR: an issue I found where `signature_help` will
erroneously cache docs between functions that have the same type (hover
docs don't have this bug)
* A handful of new tests and additions to tests to add docstrings in
various places and see which get caught
Examples of it working with stdlib, third party, and local definitions:
<img width="597" height="120" alt="Screenshot 2025-08-12 at 2 13 55 PM"
src="https://github.com/user-attachments/assets/eae54efd-882e-4b50-b5b4-721595224232"
/>
<img width="598" height="281" alt="Screenshot 2025-08-12 at 2 14 06 PM"
src="https://github.com/user-attachments/assets/5c9740d5-a06b-4c22-9349-da6eb9a9ba5a"
/>
<img width="327" height="180" alt="Screenshot 2025-08-12 at 2 14 18 PM"
src="https://github.com/user-attachments/assets/3b5647b9-2cdd-4c5b-bb7d-da23bff1bcb5"
/>
Notably modules don't work yet (followup work):
<img width="224" height="83" alt="Screenshot 2025-08-12 at 2 14 37 PM"
src="https://github.com/user-attachments/assets/7e9dcb70-a10e-46d9-a85c-9fe52c3b7e7b"
/>
Notably we don't show docs for an item if you hover its actual
definition (followup work, but also, not the most important):
<img width="324" height="69" alt="Screenshot 2025-08-12 at 2 16 54 PM"
src="https://github.com/user-attachments/assets/d4ddcdd8-c3fc-4120-ac93-cefdf57933b4"
/>
## Summary
A [passing
comment](https://github.com/astral-sh/ruff/pull/19711#issuecomment-3169312014)
led me to explore why we didn't report a class attribute as possibly
unbound if it was a method and defined in two different conditional
branches.
I found that the reason was our handling of "conflicting declarations" in
`place_from_declarations`, which returned a `Result` that would be `Err`
in the case of conflicting declarations.
But we only care about conflicting declarations when we are actually doing
type inference on that scope and might emit a diagnostic about it. In all
cases (including that one), we otherwise want to proceed with the union of
the declared types, as if there were no conflict.
In several cases we were failing to handle the union of declared types
in the same way as a normal declared type if there was a declared-types
conflict. The `Result` return type made this mistake really easy to
make, as we'd match on e.g. `Ok(Place::Type(...))` and do one thing,
then match on `Err(...)` and do another, even though really both of
those cases should be handled the same.
This PR refactors `place_from_declarations` to instead return a struct
which always represents the declared type we should use in the same way,
as well as carrying the conflicting declared types, if any. This struct
has a method to allow us to explicitly ignore the declared-types
conflict (which is what we want in most cases), as well as a method to
get the declared type and the conflict information, in the case where we
want to emit a diagnostic on the conflict.
## Test Plan
Existing CI; added a test showing that we now understand a
multiply-conditionally-defined method as possibly-unbound.
This does trigger issues on a couple new fuzzer seeds, but the issues
are just new instances of an already-known (and rarely occurring)
problem which I already plan to address in a future PR, so I think it's
OK to land as-is.
I happened to build this initially on top of
https://github.com/astral-sh/ruff/pull/19711, which adds invalid-await
diagnostics, so I also updated some invalid-syntax tests to not await on
an invalid type, since the purpose of those tests is to check the
syntactic location of the `await`, not the validity of the awaited type.
Summary
--
To take advantage of the new diagnostics, we need to update our caching
model to include all of the information supported by `ruff_db`'s
diagnostic type. Instead of trying to serialize all of this information,
Micha suggested simply not caching files with diagnostics, like we
already do for files with syntax errors. This PR is an attempt at that
approach.
This has the added benefit of trimming down our `Rule` derives since
this was the last place the `FromStr`/`strum_macros::EnumString`
implementation was used, as well as the (de)serialization macros and
`CacheKey`.
Test Plan
--
Existing tests, with their input updated not to include a diagnostic,
plus a new test showing that files with lint diagnostics are not cached.
Benchmarks
--
In addition to tests, we wanted to check that this doesn't degrade
performance too much. I posted part of this new analysis in
https://github.com/astral-sh/ruff/issues/18198#issuecomment-3175048672,
but I'll duplicate it here. In short, there's not much difference
between `main` and this branch for projects with few diagnostics
(`home-assistant`, `airflow`), as expected. The difference for projects
with many diagnostics (`cpython`) is quite a bit bigger (~300 ms vs ~220
ms), but most projects that run ruff regularly are likely to have very
few diagnostics, so this may not be a problem practically.
I guess GitHub isn't really rendering this as I intended, but the extra
separator line is meant to separate the benchmarks on `main` (above the
line) from this branch (below the line).
| Command | Mean [ms] | Min [ms] | Max [ms] |
|:--------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 322.0 | 317.5 | 326.2 |
| `ruff check cpython --isolated --exit-zero` | 217.3 | 209.8 | 237.9 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 279.5 | 277.0 | 283.6 |
| `ruff check home-assistant --isolated --exit-zero` | 37.2 | 35.7 | 40.6 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 133.1 | 130.4 | 146.4 |
| `ruff check airflow --isolated --exit-zero` | 34.7 | 32.9 | 41.6 |
|:--------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 330.1 | 324.5 | 333.6 |
| `ruff check cpython --isolated --exit-zero` | 309.2 | 306.1 | 314.7 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 288.6 | 279.4 | 302.3 |
| `ruff check home-assistant --isolated --exit-zero` | 39.8 | 36.9 | 42.4 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 134.5 | 131.3 | 140.6 |
| `ruff check airflow --isolated --exit-zero` | 39.1 | 37.2 | 44.3 |
I had Claude adapt one of the
[scripts](https://github.com/sharkdp/hyperfine/blob/master/scripts/plot_whisker.py)
from the hyperfine repo to make this plot, so it's not quite perfect,
but maybe it's still useful. The table is probably more reliable for
close comparisons. I'll put more details about the benchmarks below for
the sake of future reproducibility.
<img width="4472" height="2368" alt="image"
src="https://github.com/user-attachments/assets/1c42d13e-818a-44e7-b34c-247340a936d7"
/>
<details><summary>Benchmark details</summary>
<p>
The versions of each project:
- CPython: 6322edd260e8cad4b09636e05ddfb794a96a0451, the 3.10 branch
from the contributing docs
- `home-assistant`: 5585376b406f099fb29a970b160877b57e5efcb0
- `airflow`: 29a1cb0cfde9d99b1774571688ed86cb60123896
The last two are just the main branches at the time I cloned the repos.
Our Ruff config shouldn't be applied since I used `--isolated`. The
projects are cloned into my copy of Ruff at
`crates/ruff_linter/resources/test`, and I trimmed the `./target/release/`
prefix from each of the commands, but these are release-mode builds of
Ruff.
And here's the script with the `hyperfine` invocation:
```shell
#!/bin/bash
cargo build --release --bin ruff
# git clone --depth 1 https://github.com/home-assistant/core crates/ruff_linter/resources/test/home-assistant
# git clone --depth 1 https://github.com/apache/airflow crates/ruff_linter/resources/test/airflow
bin=./target/release/ruff
resources=./crates/ruff_linter/resources/test
cpython=$resources/cpython
home_assistant=$resources/home-assistant
airflow=$resources/airflow
base=${1:-bench}
hyperfine --warmup 10 --export-json $base.json --export-markdown $base.md \
"$bin check $cpython --no-cache --isolated --exit-zero" \
"$bin check $cpython --isolated --exit-zero" \
"$bin check $home_assistant --no-cache --isolated --exit-zero" \
"$bin check $home_assistant --isolated --exit-zero" \
"$bin check $airflow --no-cache --isolated --exit-zero" \
"$bin check $airflow --isolated --exit-zero"
```
I ran this once on `main` (`baseline` in the graph, top half of the
table) and once on this branch (`nocache` and bottom of the table).
</p>
</details>
## Summary
Support recursive type aliases by adding a `Type::TypeAlias` type
variant, which allows referring to a type alias directly as a type
without eagerly unpacking it to its value.
We still unpack type aliases when they are added to intersections and
unions, so that we can simplify the intersection/union appropriately
based on the unpacked value of the type alias.
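A minimal sketch of the kind of recursive alias this enables (the alias name and value are illustrative):

```py
# A self-referential PEP 695 type alias; `Json` can now be referred to as a
# type without eagerly unpacking it into its (infinitely nested) value.
type Json = int | float | str | None | list[Json] | dict[str, Json]

def parse(data: Json) -> None: ...

parse([1, "two", {"three": [3.0, None]}])
```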
This introduces new possible recursive types, and so also requires
expanding our usage of recursion-detecting visitors in Type methods. The
use of these visitors is still not fully comprehensive in this PR, and
will require further expansion to support recursion in more kinds of
types (I already have further work on this locally), but I think it may
be better to do this incrementally in multiple PRs.
## Test Plan
Added some recursive type-alias tests and made them pass.
## Summary
After https://github.com/astral-sh/ruff/pull/19871, I realized that now
that we are passing around shared references to `CycleDetector`
visitors, we can now also simplify the `visit` callback signature; we
don't need to smuggle a single visitor reference through it anymore.
This is a pretty minor simplification, and it doesn't really make
anything shorter since I typically used a very short name (`v`) for the
smuggled reference, but I think it reduces cognitive overhead in reading
these `visit` usages; the extra variable would likely be confusing
otherwise for a reader.
## Test Plan
Existing CI.
## Summary
Type visitors are conceptually immutable; they just internally track the
types they've seen (and some maintain a cache of results). Passing
around mutable visitors everywhere can get us into borrow-checker
trouble in some cases, where we need to recursively pass along the
visitor inside more than one closure with non-disjoint lifetime.
Use interior mutability (via `RefCell` and `Cell`) inside the visitors
instead, to allow us to pass around shared references.
## Test Plan
Existing tests.
## Summary
Add "airflow.secrets.cache.SecretCache" →
"airflow.sdk.cache.SecretCache" rule
## Test Plan
---------
Co-authored-by: Wei Lee <weilee.rx@gmail.com>
Summary
--
This fixes a regression caused by the BOM handling in #19806. Most
diagnostics already account for the BOM in their ranges, but those that
use `TextRange::default` to mean the beginning of the file do not,
causing an underflow in `RenderableAnnotation::new` when subtracting the
BOM-shifted `snippet_start` from the annotation range.
I ran into this when trying to run benchmarks on CPython in preparation
for caching work. The file `cpython/Lib/test/bad_coding2.py` was causing
a crash because it had a default-range `I002` diagnostic, with a BOM.
7cc3f1ebe9/crates/ruff_linter/src/rules/isort/rules/add_required_imports.rs (L122-L126)
The fix here is just to saturate to zero instead of panicking. I
considered adding a `TextRange::saturating_sub` method, but I wasn't
sure it was worth it for this one use. I'm happy to do that if
preferred, though.
Saturating seemed easier than shifting the affected annotations over,
but that could be another solution.
Test Plan
--
A new `ruff_db` test that reproduced the issue and manual testing
against the CPython file mentioned above
## Summary
This is a follow-up to
https://github.com/astral-sh/ruff/pull/19415#discussion_r2263456740 to
remove some unused code. As Micha noticed,
`GroupedEmitter::with_show_source` was only used in local unit tests[^1]
and was safe to remove. This allowed deleting `MessageCodeFrame` and a
lot more helper code previously shared with the `full` output format.
I also moved some other code from `text.rs` and `message/mod.rs` into
`grouped.rs` that is now only used for the `grouped` format. With a
little refactoring of the `concise` rendering logic in `ruff_db`, we
could probably remove `RuleCodeAndBody` too. The only difference I see
from the `concise` output is whether we print the filename next to the
row and column or not:
```shell
> ruff check --output-format concise
try.py:1:8: F401 [*] `math` imported but unused
> ruff check --output-format grouped
try.py:
1:8 F401 [*] `math` imported but unused
```
But I didn't try to do that here.
## Test Plan
Existing tests, with the source code no longer displayed. I also deleted
one test, as it was now a duplicate of the `default` test.
[^1]: "Local unit tests" as opposed to all of our linter snapshot tests,
as is the case for `TextEmitter::with_show_fix_diff`. We also want to
expose that to users eventually
(https://github.com/astral-sh/ruff/issues/7352), which I don't believe
is the case for the `grouped` format.
The [minimal
reproduction](https://gist.github.com/dcreager/fc53c59b30d7ce71d478dcb2c1c56444)
of https://github.com/astral-sh/ty/issues/948 is an example of a class
with implicit attributes whose types end up depending on themselves. Our
existing cycle detection for `infer_expression_types` is usually enough
to handle this situation correctly, but when there are very many of
these implicit attributes, we get a combinatorial explosion of running
time and memory usage.
Adding a separate cycle handler for `ClassLiteral::implicit_attribute`
lets us catch and recover from this situation earlier.
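A rough sketch of the shape of the problem, loosely based on the linked reproduction (names are illustrative, and the real case has many more attributes):

```py
# Implicit (`self.*`) attributes whose inferred types depend on each other;
# with enough of these, resolving the cycle previously blew up in both
# running time and memory usage.
class Config:
    def merge(self, other: "Config") -> None:
        self.primary = other.fallback
        self.fallback = other.primary
```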
Closes https://github.com/astral-sh/ty/issues/948
The stub mapper wasn't being passed into this codepath; it is now. A
previously incorrect test result that I had intentionally checked in is
fixed as a consequence.
This works by using essentially the same logic for system site-packages,
on the assumption that system site-packages are always a subdirectory of
the stdlib we were looking for.
Fixes https://github.com/astral-sh/ty/issues/943
## Summary
Add module-level `__getattr__` support for ty's type checker, fixing
issue https://github.com/astral-sh/ty/issues/943.
Module-level `__getattr__` functions ([PEP
562](https://peps.python.org/pep-0562/)) are now respected when
resolving dynamic attributes, matching the behavior of mypy and pyright.
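A minimal two-file sketch of the behavior (module and attribute names are illustrative):

```py
# mod.py
def __getattr__(name: str) -> int:
    return 42

# main.py
import mod

# Attribute resolution now falls back to the module-level `__getattr__`,
# so this is typed as `int` instead of being reported as unresolved.
value = mod.anything
```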
## Implementation
Thanks @sharkdp for the guidance in
https://github.com/astral-sh/ty/issues/943#issuecomment-3157566579
- Adds module-specific `__getattr__` resolution in
`ModuleLiteral.static_member()`
- Maintains proper attribute precedence: explicit attributes >
submodules > `__getattr__`
## Test Plan
- New mdtest covering basic functionality, type annotations, attribute
precedence, and edge cases
(run `cargo nextest run -p ty_python_semantic
mdtest__import_module_getattr`)
- All new tests pass, verifying `__getattr__` is called correctly and
returns proper types
- Existing test suite passes, ensuring no regressions introduced
## Summary
This PR switches the `full` output format in Ruff over to use the
rendering code
in `ruff_db`. As proposed in the design doc, this involves a lot of
changes to the snapshot output.
I also had to comment out this assertion with a TODO to replace it after
https://github.com/astral-sh/ruff/issues/19688 because many of Ruff's
"file-level" annotations aren't actually file-level. They just happen to
occur at the start of the file, especially in tests with very short
snippets.
529d81daca/crates/ruff_annotate_snippets/src/renderer/display_list.rs (L1204-L1208)
I broke up the snapshot commits at the end into several blocks, but I
don't think it's enough to help with review. The first few (notebooks,
syntax errors, and test rules) are small enough to look at, but I
couldn't really think of other categories beyond that. I'm happy to
break those up or pick out specific examples beyond what I have below,
if that would help.
The minimal code changes are in this
[range](abd28f1e77),
with the snapshot commits following. Moving the `FullRenderer` and
updating the `EmitterFlags` aren't strictly necessary either. I even
dropped the renderer commit this morning but figured it made sense to
keep it since we have the `full` module for tests. I don't feel strongly
either way.
## Test Plan
I did actually click through all 1700 snapshots individually instead of
accepting them all at once, although I moved through them quickly. There
are a
few main categories:
### Lint diagnostics
```diff
-unused.py:8:19: F401 [*] `pathlib` imported but unused
+F401 [*] `pathlib` imported but unused
+ --> unused.py:8:19
|
7 | # Unused, _not_ marked as required (due to the alias).
8 | import pathlib as non_alias
- | ^^^^^^^^^ F401
+ | ^^^^^^^^^
9 |
10 | # Unused, marked as required.
|
- = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
```
- The filename and line numbers have moved to the second line
- The duplicate rule code next to the underline is removed
### Syntax errors
These are much like the above.
```diff
- -:1:16: invalid-syntax: Expected one or more symbol names after import
+ invalid-syntax: Expected one or more symbol names after import
+ --> -:1:16
|
1 | from foo import
| ^
```
One thing I noticed while reviewing some of these (though I don't think
it's strictly syntax-error-related) is that some of the new diagnostics
show a little less context after the error. I don't think this is a
problem, but it's one small discrepancy I hadn't noticed before. Here's a
minor example:
```diff
-syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
+invalid-syntax: Expected one or more symbol names after import
+ --> syntax_errors.py:1:15
|
1 | from os import
| ^
2 |
3 | if call(foo
-4 | def bar():
|
```
And one of the biggest examples:
```diff
-E30_syntax_error.py:18:11: invalid-syntax: Expected ')', found newline
+invalid-syntax: Expected ')', found newline
+ --> E30_syntax_error.py:18:11
|
16 | pass
17 |
18 | foo = Foo(
| ^
-19 |
-20 |
-21 | def top(
|
```
Similarly, a few of the lint diagnostics showed that the cut indicator
calculation for overly long lines is also slightly different, but I
think that's okay too.
### Full-file diagnostics
```diff
-comment.py:1:1: I002 [*] Missing required import: `from __future__ import annotations`
+I002 [*] Missing required import: `from __future__ import annotations`
+--> comment.py:1:1
+help: Insert required import: `from __future__ import annotations`
+
```
As noted above, these will be much more rare after #19688 too. This case
isn't a true full-file diagnostic and will render a snippet in the
future, but you can see that we're now rendering the help message that
would have been discarded before. In contrast, this is a true full-file
diagnostic and should still look like this after #19688:
```diff
-__init__.py:1:1: A005 Module `logging` shadows a Python standard-library module
+A005 Module `logging` shadows a Python standard-library module
+--> __init__.py:1:1
```
### Jupyter notebooks
There's nothing particularly different about these, just showing off the
cell index again.
```diff
- Jupyter.ipynb:cell 3:1:7: F821 Undefined name `x`
+ F821 Undefined name `x`
+ --> Jupyter.ipynb:cell 3:1:7
|
1 | print(x)
- | ^ F821
+ | ^
|
```
## Summary
Fixes the remaining range reporting differences between the `ruff_db`
diagnostic rendering and Ruff's existing rendering, as noted in
https://github.com/astral-sh/ruff/pull/19415#issuecomment-3160525595.
This PR is structured as a series of three pairs. The first commit in
each pair adds a test showing the previous behavior, followed by a fix
and the updated snapshot. It's quite a small PR, but that might be
helpful just for the contrast.
You can also look at [this
range](052e656c6c..c3ea51030d)
of commits from #19415 to see the impact on real Ruff diagnostics. I
spun these commits out of that PR.
## Test Plan
New `ruff_db` tests
`PLE2513`'s `--fix` changes ESC and SUB characters to uppercase
hexadecimal escapes such as `\x1B`, while the formatter changes them to
lowercase `\x1b`.
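A small illustration of the disagreement (a hedged sketch; the rule's autofix replaces the literal control character with a hex escape, and the variable names are illustrative):

```py
# What `ruff check --fix` used to write vs. what `ruff format` writes; the
# case mismatch made the two tools rewrite each other's output.
fix_output = "\x1B[0m"     # uppercase hex escape (old autofix)
format_output = "\x1b[0m"  # lowercase hex escape (formatter)
```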
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
This is a small refactor to update the server to send a single request
to perform registrations and unregistrations of dynamic capabilities.
## Test Plan
Existing E2E test cases pass, add a new test case to verify multiple
registrations.
## Summary
Reported in:
https://github.com/astral-sh/ruff/pull/19795#issuecomment-3161981945
If a root expression is reassigned, narrowing on the member should be
invalidated, but there was an oversight in the current implementation.
This PR fixes that, and also removes some unnecessary handling.
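A minimal sketch of the kind of case this covers (class and attribute names are illustrative):

```py
class A:
    x: int | None = None

def f(a: A, b: A):
    if a.x is not None:
        # `a.x` is narrowed to `int` here...
        print(a.x)
        # ...but reassigning the root expression `a` must invalidate that
        # narrowing, so `a.x` is `int | None` again afterwards.
        a = b
        print(a.x)
```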
## Test Plan
New tests cases in `narrow/conditionals/nested.md`.
## Summary
This PR adds a new `ty.inlayHints.variableTypes` server setting to
configure ty to include / exclude inlay hints at variable position.
Currently, we only support inlay hints at this position so this option
basically translates to enabling / disabling inlay hints for now :)
The VS Code extension PR is
https://github.com/astral-sh/ty-vscode/pull/112.
closes: astral-sh/ty#472
## Test Plan
Add E2E tests.
## Summary
This PR is a follow-up from https://github.com/astral-sh/ruff/pull/19551
and adds a new `ty.experimental.rename` setting to conditionally
register for the rename capability. The complementary PR in ty VS Code
extension is https://github.com/astral-sh/ty-vscode/pull/111.
This is done using dynamic registration after the settings have been
resolved. The experimental group is part of the global settings because
they're applied for all workspaces that are managed by the client.
## Test Plan
Add E2E tests.
In VS Code, with the following setting:
```json
{
  "ty.experimental.rename": "true",
  "python.languageServer": "None"
}
```
I get the relevant log entry:
```
2025-08-07 16:05:40.598709000 DEBUG client_response{id=3 method="client/registerCapability"}: Registered rename capability
```
And, I'm able to rename a symbol. Once I set it to `false`, then I can
see this log entry:
```
2025-08-07 16:08:39.027876000 DEBUG Rename capability is disabled in the client settings
```
And, I don't see the "Rename Symbol" open in the VS Code dropdown.
https://github.com/user-attachments/assets/501659df-ba96-4252-bf51-6f22acb4920b
This PR adds support for the "rename" language server feature. It builds
upon existing functionality used for "go to references".
The "rename" feature involves two language server requests. The first is
a "prepare rename" request that determines whether renaming should be
possible for the identifier at the current offset. The second is a
"rename" request that returns a list of file ranges where the rename
should be applied.
Care must be taken when attempting to rename symbols that span files,
especially if the symbols are defined in files that are not part of the
project. We don't want to modify code in the user's Python environment
or in the vendored stub files.
I found a few bugs in the "go to references" feature when implementing
"rename", and those bug fixes are included in this PR.
---------
Co-authored-by: UnboundVariable <unbound@gmail.com>
## Summary
As per our naming scheme (at least for callable types) this should
return a `BoundMethodType`, or be renamed, but it makes more sense to
change the return type.
I also ensure `ClassType.into_callable` returns a `Type::Callable` in
the changed branch.
Ideally we could return a `CallableType` from these `into_callable`
functions (and rename them to `into_callable_type`), but because of unions
we cannot do this.
## Summary
Validates writes to `TypedDict` keys, for example:
```py
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int | None

def f(person: Person):
    person["naem"] = "Alice"  # error: [invalid-key]
    person["age"] = "42"  # error: [invalid-assignment]
```
The new specialized `invalid-assignment` diagnostic looks like this:
<img width="1160" height="279" alt="image"
src="https://github.com/user-attachments/assets/51259455-3501-4829-a84e-df26ff90bd89"
/>
## Ecosystem analysis
As far as I can tell, all true positives!
There are some extremely long diagnostic messages. We should truncate
our display of overload sets somehow.
## Test Plan
New Markdown tests
## Summary
When seeing a failed test like
```bash
is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)
crates/ty_python_semantic/resources/mdtest/type_properties/is_subtype_of.md:1810 unexpected error: [unresolved-reference] "Name `Aa` used when not defined"
To rerun this specific test, set the environment variable: MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)'
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```
running the following now works
```bash
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```
## Test Plan
Do we have tests for the test runner? :)
This fixes our logic for binding a legacy typevar with its binding
context. (To recap, a legacy typevar starts out "unbound" when it is
first created, and each time it's used in a generic class or function,
we "bind" it with the corresponding `Definition`.)
We treat `typing.Self` the same as a legacy typevar, and so we apply
this binding logic to it too. Before, we were using the enclosing class
as its binding context. But that's not correct — it's the method where
`typing.Self` is used that binds the typevar. (Each invocation of the
method will find a new specialization of `Self` based on the specific
instance type containing the invoked method.)
This required plumbing through some additional state to the
`in_type_expression` method.
This also revealed that we weren't handling `Self`-typed instance
attributes correctly (but were coincidentally not getting the expected
false positive diagnostics).
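A small sketch covering both aspects (expected behavior per the typing spec; names are illustrative):

```py
from typing import Self

class Node:
    # A `Self`-typed instance attribute: on a `Leaf` instance this should be
    # understood as `Leaf | None`, not `Node | None`.
    parent: Self | None = None

    def clone(self) -> Self:
        # `Self` is bound by the method, so each invocation specializes it
        # to the concrete instance type of the receiver.
        return self

class Leaf(Node): ...

leaf_clone = Leaf().clone()  # specialized to `Leaf`
```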
## Summary
Disallow `typing.TypedDict` in type expressions.
Related reference: https://github.com/python/mypy/issues/11030
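A minimal sketch of what is now rejected (the exact diagnostic wording is illustrative):

```py
from typing import TypedDict

# `typing.TypedDict` itself is only usable as a base class (or via the
# functional syntax); using it as a type annotation is now flagged.
def f(x: TypedDict) -> None: ...
```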
## Test Plan
New Markdown tests, checked ecosystem and conformance test impact.
## Summary
This PR updates the client settings handling to recognize unknown
options provided by the user and show a warning popup along with a
warning log message.
## Test Plan
Add E2E tests.
## Summary
This PR implements support for providing LSP client settings.
The complementary PR in the ty VS Code extension:
astral-sh/ty-vscode#106.
Notes for the previous iteration of this PR is in
https://github.com/astral-sh/ruff/pull/19614#issuecomment-3136477864
(click on "Details").
Specifically, this PR splits the client settings into 3 distinct groups.
Keep in mind that these groups are not visible to the user; they're
merely an implementation detail. The groups are:
1. `GlobalOptions` - these are the options that are global to the
language server and will be the same for all the workspaces that are
handled by the server
2. `WorkspaceOptions` - these are the options that are specific to a
workspace and will be applied only when running any logic for that
workspace
3. `InitializationOptions` - these are the options that can be specified
during initialization
The initialization options are a superset that contains both the global
and workspace options flattened into a 1-dimensional structure. This
means that the user can specify any and all fields present in
`GlobalOptions` and `WorkspaceOptions` in the initialization options in
addition to the fields that are _specific_ to initialization options.
From the current set of available settings, the following are only
available during initialization because they are required at that time,
are static during the runtime of the server, and changing their values
requires a restart to take effect:
- `logLevel`
- `logFile`
And the following are available under `GlobalOptions`:
- `diagnosticMode`
And the following under `WorkspaceOptions`:
- `disableLanguageServices`
- `pythonExtension` (Python environment information that is populated by
the ty VS Code extension)
### `workspace/configuration`
This request allows the server to ask the client for the configuration of
a specific workspace. But this is only supported by clients that have the
`workspace.configuration` client capability set to `true`. What should we
do for clients that don't support pulling configurations?
In that case, the settings need to be provided in the initialization
options, and updating the values of those settings can only be done by
restarting the server. With the way this is implemented, this means that
if the client does not support pulling workspace configuration then
there's no way to specify settings specific to a workspace. Earlier,
this would've been possible by providing an array of client options with
an additional field which specifies which workspace the options belong
to but that adds complexity and clients that actually do not support
`workspace/configuration` would usually not support multiple workspaces
either.
Now, for the clients that do support this, the server will initiate the
request to get the configuration for all the workspaces at server startup.
Once the server receives these options, it will resolve them for each
workspace as follows:
1. Combine the client options sent during initialization with the options
specific to the workspace, creating the final client options for that
workspace
2. Create the global options by combining the global options from (1)
across all workspaces, which in turn also incorporates the global options
sent during initialization
The global options are resolved into the global settings and are
available on the `Session` which is initialized with the default global
settings. The workspace options are resolved into the workspace settings
and are available on the respective `Workspace`.
The `SessionSnapshot` contains the global settings while the document
snapshot contains the workspace settings. We could add the global
settings to the document snapshot but that's currently not needed.
### Document diagnostic dynamic registration
Currently, the document diagnostic server capability is created based on
the `diagnosticMode` sent during initialization. But that wouldn't give us
the complete picture. This means the server needs to defer registering the
document diagnostic capability until after the settings have been
resolved.
This is done using dynamic registration for clients that support it. For
clients that do not support dynamic registration for document diagnostic
capability, the server advertises itself as always supporting workspace
diagnostics and work done progress token.
This dynamic registration now allows us to change the server capability
for workspace diagnostics based on the resolved `diagnosticMode` value.
In the future, once `workspace/didChangeConfiguration` is supported, we
can avoid the server restart when users have changed any client
settings.
## Test Plan
Add integration tests and recorded videos on the user experience in
various editors:
### VS Code
For VS Code users, the settings experience is unchanged because the
extension defines its own interface for how the user can specify the
server settings. This means everything is under the `ty.*` namespace as
usual.
https://github.com/user-attachments/assets/c2e5ba5c-7617-406e-a09d-e397ce9c3b93
### Zed
For Zed, the settings experience has changed. Users can specify settings
during initialization:
```json
{
  "lsp": {
    "ty": {
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log",
        "diagnosticMode": "workspace",
        "disableLanguageServices": true
      }
    },
  }
}
```
Or, can specify the options under the `settings` key:
```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly",
          "disableLanguageServices": true
        }
      },
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log"
      }
    },
  }
}
```
The `logLevel` and `logFile` settings still need to go under the
initialization options because they're required by the server during
initialization.
We can remove the nesting of the settings under the "ty" namespace by
updating the return type of
db9ea0cdfd/src/tychecker.rs (L45-L49)
to be wrapped inside `ty` directly so that users can avoid doing the
double nesting.
There's one issue here: if `diagnosticMode` is specified in both the
initialization options and the settings key, the resolution is a bit
different. If either of them is set to `workspace`, that value wins, which
means that in the following configuration the diagnostic mode is
`workspace`:
```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly"
        }
      },
      "initialization_options": {
        "diagnosticMode": "workspace"
      }
    },
  }
}
```
This behavior is mainly a result of combining the global options from the
various workspace configuration results. Users should not provide global
options in multiple workspaces, but that restriction cannot be enforced on
the server side. The ty VS Code extension restricts these global settings
to the user settings (not workspace settings), but we do not control
extensions in other editors.
https://github.com/user-attachments/assets/8e2d6c09-18e6-49e5-ab78-6cf942fe1255
### Neovim
Same as in Zed.
### Other
For other editors that do not support `workspace/configuration`, users
need to provide the server settings during initialization.
## Summary
This PR improves the `is_safe_mutable_class` function in `infer.rs` in
several ways:
- It uses `KnownClass::to_instance()` for all "safe mutable classes".
Previously, we were using `SpecialFormType::instance_fallback()` for
some variants -- I'm not totally sure why. Switching to
`KnownClass::to_instance()` for all "safe mutable classes" fixes a
number of TODOs in the `assignment.md` mdtest suite
- Rather than eagerly calling `.to_instance(db)` on all "safe mutable
classes" every time `is_safe_mutable_class` is called, we now only call
it lazily on each element, allowing us to short-circuit more
effectively.
- I removed the `TypedDict` entry entirely from the list of "safe
mutable classes", as it's not correct.
`SpecialFormType::TypedDict.instance_fallback(db)` just returns an
instance type representing "any instance of `typing._SpecialForm`",
which I don't think was the intent of this code. No tests fail as a
result of removing this entry, as we already check separately whether an
object is an inhabitant of a `TypedDict` type (and consider that object
safe-mutable if so!).
## Test Plan
mdtests updated
## Summary
This PR adds type inference for key-based access on `TypedDict`s and a
new diagnostic for invalid subscript accesses:
```py
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int | None

alice = Person(name="Alice", age=25)

reveal_type(alice["name"])  # revealed: str
reveal_type(alice["age"])   # revealed: int | None

alice["naem"]  # Unknown key "naem" - did you mean "name"?
```
## Test Plan
Updated Markdown tests
## Summary
This PR remaps ranges in Jupyter notebooks from simple `row:column`
indices in the concatenated source code to `cell:row:col` to match
Ruff's output. This is probably not a likely change to land upstream in
`annotate-snippets`, but I didn't see a good way around it.
The remapping logic is taken nearly verbatim from here:
cd6bf1457d/crates/ruff_linter/src/message/text.rs (L212-L222)
## Test Plan
New `full` rendering test for a notebook
I was mainly focused on Ruff, but in local tests this also works for ty:
```
error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 1:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default
error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 2:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default
```
This isn't a duplicate diagnostic, just an unimaginative example:
```py
# cell 1
import math
x: str = 1
# cell 2
import math
x: str = 1
```
Summary
--
This is the other commit I wanted to spin off from #19415, currently
stacked on #19644.
This PR suppresses blank snippets for empty ranges at the very beginning
of a file, and for empty ranges in non-existent files. Ruff includes
empty ranges for IO errors, for example.
f4e93b6335/crates/ruff_linter/src/message/text.rs (L100-L110)
The diagnostics now look like this (new snapshot test):
```
error[test-diagnostic]: main diagnostic message
 --> example.py:1:1
```
Instead of [^*]
```
error[test-diagnostic]: main diagnostic message
 --> example.py:1:1
  |
  |
```
Test Plan
--
A new `ruff_db` test showing the expected output format
[^*]: This doesn't correspond precisely to the example in the PR because
of some details of the diagnostic builder helper methods in `ruff_db`,
but you can see another example in the current version of the summary in
#19415.
Summary
--
Fixes a snapshot test failure I saw in #19653 locally and in Windows CI
by padding the hex ID to 16 digits to match the regex in
`filter_result_id`.
78e5fe0a51/crates/ty_server/tests/e2e/pull_diagnostics.rs (L380-L384)
Test Plan
--
I applied this to the branch from #19653 locally and saw that the tests
now pass. I couldn't reproduce this failure directly on `main` or this
branch, though.
## Summary
This PR is a spin-off from https://github.com/astral-sh/ruff/pull/19415.
It enables replacing the severity and lint name in a ty-style
diagnostic:
```
error[unused-import]: `os` imported but unused
```
with the noqa code and optional fix availability icon for a Ruff
diagnostic:
```
F401 [*] `os` imported but unused
F821 Undefined name `a`
```
or nothing at all for a Ruff syntax error:
```
SyntaxError: Expected one or more symbol names after import
```
Ruff adds the `SyntaxError` prefix to these messages manually.
Initially (d912458), I just passed a `hide_severity` flag through a
bunch of calls to get it into `annotate-snippets`, but after looking at
it again today, I think reusing the `None` severity/level gave a nicer
result. As I note in a lengthy code comment, I think all of this code
should be temporary and reverted when Ruff gets real severities, so
hopefully it's okay if it feels a little hacky.
I think the main visible downside of this approach is that we can't
style the asterisk in the fix availability icon in cyan, as in Ruff's
current output. It's part of the message in this PR and any styling gets
overwritten in `annotate-snippets`.
<img width="400" height="342" alt="image"
src="https://github.com/user-attachments/assets/57542ec9-a81c-4a01-91c7-bd6d7ec99f99"
/>
Hmm, I guess reusing `Level::None` also means the `F401` isn't red
anymore. Maybe my initial approach was better after all. In any case,
the rest of the PR should be basically the same, it just depends how we
want to toggle the severity.
## Test Plan
New `ruff_db` tests. These snapshots should be compared to the two tests
just above them (`hide_severity_output` vs `output` and
`hide_severity_syntax_errors` against `syntax_errors`).
## Summary
This PR fixes a few inaccuracies in attribute access on `TypedDict`s. It
also changes the return type of `type(person)` to `type[dict[str,
object]]` if `person: Person` is an inhabitant of a `TypedDict`
`Person`. We still use `type[Person]` as the *meta type* of Person,
however (see reasoning
[here](https://github.com/astral-sh/ruff/pull/19733#discussion_r2253297926)).
## Test Plan
Updated Markdown tests.
## Summary
This PR adds a new `Type::TypedDict` variant. Before this PR, we treated
`TypedDict`-based types as dynamic Todo-types, and I originally planned
to make this change a no-op. And we do in fact still treat that new
variant similar to a dynamic type when it comes to type properties such
as assignability and subtyping. But then I somehow tricked myself into
implementing some of the things correctly, so here we are. The two main
behavioral changes are: (1) we now also detect generic `TypedDict`s,
which removes a few false positives in the ecosystem, and (2) we now
support *attribute* access (not key-based indexing!) on these types,
i.e. we infer proper types for something like
`MyTypedDict.__required_keys__`. Nothing exciting yet, but gets the
infrastructure into place.
Note that with this PR, the type of (the type) `MyTypedDict` itself is
still represented as a `Type::ClassLiteral` or `Type::GenericAlias` (in
case `MyTypedDict` is generic). Only inhabitants of `MyTypedDict`
(instances of `dict` at runtime) are represented by `Type::TypedDict`.
We may want to revisit this decision in the future, if this turns out to
be too error-prone. Right now, we need to use `.is_typed_dict(db)` in
all the right places to distinguish between actual (generic) classes and
`TypedDict`s. But so far, it seemed unnecessary to add additional `Type`
variants for these as well.
part of https://github.com/astral-sh/ty/issues/154
## Ecosystem impact
The new diagnostics on `cloud-init` look like true positives to me.
## Test Plan
Updated and new Markdown tests
## Summary
This is a follow-up to #19321.
Narrowing constraints introduced in a class scope were not applied even
when they could be applied in lazy nested scopes. This PR fixes that, so
they are now applied.
Conversely, there were cases where narrowing constraints were applied in
places where they should not be; that is also fixed.
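To illustrate the first fix, a hypothetical sketch of my own (not taken from the updated mdtests) of a narrowing constraint introduced in a class scope and used in a lazily-evaluated nested scope:
```py
def outer(x: str | None):
    class C:
        if x is not None:
            # The `x is not None` narrowing from the class scope can now
            # also be applied in this lazy nested scope, since `x` is not
            # reassigned in between.
            def method(self) -> str:
                return x
```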
## Test Plan
Some TODOs in `narrow/conditionals/nested.md` now work correctly.
## Summary
This is a follow-up to #19321.
If we try to access a class variable before it is defined, the variable
is looked up in the global scope, rather than in any enclosing scopes.
Closes https://github.com/astral-sh/ty/issues/875.
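This mirrors Python's runtime name resolution in class bodies. A small sketch of my own (not one of the new tests):
```py
x = "global"

def f():
    x = "enclosing"

    class C:
        # `x` is also assigned later in this class body, so this read
        # skips the enclosing function scope and resolves to the global `x`.
        y = x  # "global" at runtime
        x = "class"

    return C

assert f().y == "global"
```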
## Test Plan
New tests in `narrow/conditionals/nested.md`.
## Summary
This PR enhances the `BLE001` rule to correctly detect blind exception
handling in tuple exceptions. Previously, the rule only checked single
exception types, but Python allows catching multiple exceptions using
tuples like `except (Exception, ValueError):`.
## Test Plan
It flags the following (whereas the `main` branch does not):
```bash
cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
```
```python
# somefile.py
try:
    1/0
except (ValueError, Exception) as e:
    print(e)
```
```
somefile.py:3:21: BLE001 Do not catch blind exception: `Exception`
  |
1 | try:
2 |     1/0
3 | except (ValueError, Exception) as e:
  |                     ^^^^^^^^^ BLE001
4 |     print(e)
  |
Found 1 error.
```
## Summary
Support `as` patterns in reachability analysis:
```py
from typing import assert_never

def f(subject: str | int):
    match subject:
        case int() as x:
            pass
        case str():
            pass
        case _:
            assert_never(subject)  # would previously emit an error
```
Note that we still don't support inferring correct types for the bound
name (`x`).
Closes https://github.com/astral-sh/ty/issues/928
## Test Plan
New Markdown tests
## Summary
This PR reduces the virality of some of the `Todo` types in
`infer_tuple_type_expression`. Rather than inferring `Todo`, we instead
infer `tuple[Todo, ...]`. This reflects the fact that whatever the
contents of the slice in a `tuple[]` type expression, we would always
infer some kind of tuple type as the result of the type expression. Any
tuple type should be assignable to `tuple[Todo, ...]`, so this shouldn't
introduce any new false positives; this can be seen in the ecosystem
report.
As a result of the change, we are now able to enforce in the signature
of `Type::infer_tuple_type_expression` that it returns an
`Option<TupleType<'db>>`, which is more strongly typed and expresses
clearly the invariant that a tuple type expression should always be
inferred as a `tuple` type. To enable this, it was necessary to refactor
several `TupleType` constructors in `tuple.rs` so that they return
`Option<TupleType>` rather than `Type`; this means that callers of these
constructor functions are now free to either propagate the
`Option<TupleType<'db>>` or convert it to a `Type<'db>`.
## Test Plan
Mdtests updated.
## Summary
When splitting triple-quoted raw strings, one has to take care before attempting to make each item single-quoted.
Fixes #19577
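As a rough illustration of the hazard (my own example; the exact rule and fix are not restated here), a raw triple-quoted string can contain quote characters that plain single quotes cannot represent without changing the value:
```py
# A triple-quoted raw string may freely contain single quotes:
s = r'''isn't can't won't'''

# Rewriting each split-out item with plain single quotes would be wrong:
#   r'isn't'    # SyntaxError
#   r'isn\'t'   # parses, but the backslash stays in a raw string,
#               # so the value changes
# so a fix has to choose different quotes (or drop the `r` prefix when
# the item has no backslashes), e.g.:
items = ["isn't", "can't", "won't"]
```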
---------
Co-authored-by: dylwil3 <dylwil3@gmail.com>
This is subtle, and the root cause became more apparent with #19604,
since we now have many more cases of superclasses and subclasses using
different typevars. The issue is easiest to see in the following:
```py
class C[T]:
    def __init__(self, t: T) -> None: ...

class D[U](C[U]):
    pass

reveal_type(C(1))  # revealed: C[int]
reveal_type(D(1))  # should be: D[int]
```
When instantiating a generic class, the `__init__` method inherits the
generic context of that class. This lets our call binding machinery
infer a specialization for that context.
Prior to this PR, the instantiation of `C` worked just fine. Its
`__init__` method would inherit the `[T]` generic context, and we would
infer `{T = int}` as the specialization based on the argument
parameters.
It didn't work for `D`. The issue is that the `__init__` method was
inheriting the generic context of the class where `__init__` was defined
(here, `C` and `[T]`). At the call site, we would then infer `{T = int}`
as the specialization — but that wouldn't help us specialize `D[U]`,
since `D` does not have `T` in its generic context!
Instead, the `__init__` method should inherit the generic context of the
class that we are performing the lookup on (here, `D` and `[U]`). That
lets us correctly infer `{U = int}` as the specialization, which we can
successfully apply to `D[U]`.
(Note that `__init__` refers to `C`'s typevars in its signature, but
that's okay; our member lookup logic already applies the `T = U`
specialization when returning a member of `C` while performing a lookup
on `D`, transforming its signature from `(Self, T) -> None` to `(Self,
U) -> None`.)
Closes https://github.com/astral-sh/ty/issues/588
This PR introduces a few related changes:
- We now keep track of each time a legacy typevar is bound in a
different generic context (e.g. class, function), and internally create
a new `TypeVarInstance` for each usage. This means the rest of the code
can now assume that salsa-equivalent `TypeVarInstance`s refer to the
same typevar, even taking into account that legacy typevars can be used
more than once.
- We also go ahead and track the binding context of PEP 695 typevars.
That's _much_ easier to track since we have the binding context right
there during type inference.
- With that in place, we can now include the name of the binding context
when rendering typevars (e.g. `T@f` instead of `T`), as in the sketch below.
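A minimal sketch (my own example; the `T@f`/`T@g` renderings are how I read the description above, not output copied from ty):
```py
from typing import TypeVar

T = TypeVar("T")

def f(x: T) -> T:
    # `T` is bound by `f` here, so it can be rendered as `T@f`.
    return x

def g(x: T) -> T:
    # The same legacy TypeVar object, but bound by `g`: internally this is
    # now tracked as a distinct binding, rendered as `T@g`.
    return x
```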
## Summary
Adds an initial set of tests based on the highest-priority items in
https://github.com/astral-sh/ty/issues/154. This is certainly not yet
exhaustive (required/non-required, `total`, and other things are
missing), but will be useful to measure progress on this feature.
## Test Plan
Checked intended behavior against runtime and other type checkers.
## Summary
Part of #18972
This PR makes [unnecessary-from-float
(FURB164)](https://docs.astral.sh/ruff/rules/unnecessary-from-float/#unnecessary-from-float-furb164)'s
example error out-of-the-box.
[Old example](https://play.ruff.rs/807ef72f-9671-408d-87ab-8b8bad65b33f)
```py
Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```
[New example](https://play.ruff.rs/303680d1-8a68-4b6c-a5fd-d79c56eb0f88)
```py
from decimal import Decimal
from fractions import Fraction
Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```
The "Use instead" section also had imports added, and one of the fixed
examples was slightly wrong and needed modification.
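For reference, the equivalent direct-constructor calls that the rule points users toward look something like this (my reconstruction, not the docs snippet itself):
```py
from decimal import Decimal
from fractions import Fraction

Decimal(4.2)
Decimal(float("inf"))
Fraction(4.2)
Fraction(Decimal("4.2"))
```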
## Test Plan
N/A, no functionality/tests affected
## Summary
Adds validation to subscript assignment expressions.
```py
class Foo: ...

class Bar:
    __setattr__ = None

class Baz:
    def __setitem__(self, index: str, value: int) -> None:
        pass

# We now emit a diagnostic on these statements
Foo()[1] = 2
Bar()[1] = 2
Baz()[1] = 2
```
Also improves error messages on invalid `__getitem__` expressions
## Test Plan
Update mdtests and add more to `subscript/instance.md`
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
Part of #18972
This PR makes [meta-class-abc-meta
(FURB180)](https://docs.astral.sh/ruff/rules/meta-class-abc-meta/#meta-class-abc-meta-furb180)'s
example error out-of-the-box.
[Old example](https://play.ruff.rs/6beca1be-45cd-4e5a-aafa-6a0584c10d64)
```py
class C(metaclass=ABCMeta):
    pass
```
[New example](https://play.ruff.rs/bbad34da-bf07-44e6-9f34-53337e8f57d4)
```py
import abc
class C(metaclass=abc.ABCMeta):
    pass
```
The "Use instead" section as also modified similarly.
## Test Plan
N/A, no functionality/tests affected
## Summary
Fixes #18729 and fixes #16802
## Test Plan
Manually verified via CLI that Ruff no longer enters an infinite loop by
running:
```sh
echo 1 | ruff --isolated check - --select I002,UP010 --fix
```
with `required-imports = ["from __future__ import generator_stop"]` set
in the config, confirming “All checks passed!” and no snapshots were
generated.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Summary
--
Fixes #19640. I'm not sure these are the exact fixes we really want, but
I reproduced the issue in a 32-bit Docker container and tracked down the
causes, so I figured I'd open a PR.
As I commented on the issue, the `goto_references` test depends on the
iteration order of the files in an `FxHashSet` in `Indexed`. In this
case, we can just sort the output in test code.
Similarly, the tuple case depended on the order of overloads inserted in
an `FxHashMap`. `FxIndexMap` seemed like a convenient drop-in
replacement, but I don't know if that will have other detrimental
effects. I did have to change the assertion for the tuple test, but I
think it should now be stable across architectures.
Test Plan
--
Running the tests in the aforementioned Docker container
Issue: https://github.com/astral-sh/ruff/issues/19498
## Summary
[missing-required-import](https://docs.astral.sh/ruff/rules/missing-required-import/)
inserts the missing import on the line immediately following the last
line of the docstring. However, if the docstring is immediately followed
by a continuation token (i.e. backslash) then this leads to a syntax
error because Python interprets the docstring and the inserted import to
be on the same line.
The proposed solution in this PR is to check if the first token after a
file docstring is a continuation character, and if so, to advance an
additional line before inserting the missing import.
## Test Plan
Added a unit test, and the following example was verified manually:
Given this simple test Python file:
```python
"Hello, World!"\
print(__doc__)
```
and this ruff linting configuration in the `pyproject.toml` file:
```toml
[tool.ruff.lint]
select = ["I"]
[tool.ruff.lint.isort]
required-imports = ["import sys"]
```
Without the changes in this PR, the ruff linter would try to insert the
missing import in line 2, resulting in a syntax error, and report the
following:
`error: Fix introduced a syntax error. Reverting all changes.`
With the changes in this PR, ruff correctly advances one more line
before adding the missing import, resulting in the following output:
```python
"Hello, World!"\
import sys
print(__doc__)
```
---------
Co-authored-by: Jim Hoekstra <jim.hoekstra@pacmed.nl>
## Summary
This PR improves our generics solver such that we are able to solve the
`TypeVar` in this snippet to `int | str` (the union of the elements in
the heterogeneous tuple) by upcasting the heterogeneous tuple to its
pure-homogeneous-tuple supertype:
```py
def f[T](x: tuple[T, ...]) -> T:
    return x[0]

def g(x: tuple[int, str]):
    reveal_type(f(x))
```
## Test Plan
Mdtests. Some TODOs remain in the mdtest regarding solving `TypeVar`s
for mixed tuples, but I think this PR on its own is a significant step
forward for our generics solver when it comes to tuple types.
---------
Co-authored-by: Douglas Creager <dcreager@dcreager.net>
## Summary
Add support for `async for` loops and async iterables.
part of https://github.com/astral-sh/ty/issues/151
## Ecosystem impact
```diff
- boostedblob/listing.py:445:54: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
```
This is correct. We now find a true positive in the `# type: ignore`'d
code.
All of the other ecosystem hits are of the type
```diff
trio (https://github.com/python-trio/trio)
+ src/trio/_core/_tests/test_guest_mode.py:532:24: error[not-iterable] Object of type `MemorySendChannel[int] | MemoryReceiveChannel[int]` may not be iterable
```
The message is correct, because only `MemoryReceiveChannel` has an
`__aiter__` method, but `MemorySendChannel` does not. What's not correct
is our inferred type here. It should be `MemoryReceiveChannel[int]`, not
the union of the two. This is due to missing unpacking support for tuple
subclasses, which @AlexWaygood is working on. I don't think this should
block merging this PR, because those wrong types are already there,
without this PR.
## Test Plan
New Markdown tests and snapshot tests for diagnostics.
## Summary
I was a bit stuck on some snapshot differences I was seeing in #19415,
but @BurntSushi pointed out that `annotate-snippets` already normalizes
tabs on its own, which was very helpful! Instead of applying this change
directly to the other branch, I wanted to try applying it in
`ruff_linter` first. This should very slightly reduce the number of
changes in #19415 proper.
It looks like `annotate-snippets` always expands a tab to four spaces,
whereas I think we were aligning to tab stops:
```diff
6 | spam(ham[1], { eggs: 2})
7 | #: E201:1:6
- 8 | spam( ham[1], {eggs: 2})
- | ^^^ E201
+ 8 | spam( ham[1], {eggs: 2})
+ | ^^^^ E201
```
```diff
61 | #: E203:2:15 E702:2:16
62 | if x == 4:
-63 | print(x, y) ; x, y = y, x
- | ^ E203
+63 | print(x, y) ; x, y = y, x
+ | ^^^^ E203
```
```diff
E27.py:15:6: E271 [*] Multiple spaces after keyword
|
-13 | True and False
+13 | True and False
14 | #: E271
15 | a and b
| ^^ E271
```
I don't think this is too bad and has the major benefit of allowing us
to pass the non-tab-expanded range to `annotate-snippets` in #19415,
where it's also displayed in the header. Ruff doesn't have this problem
currently because it uses its own concise diagnostic output as the
header for full diagnostics, where the pre-expansion range is used
directly.
## Test Plan
Existing tests with a few snapshot updates
## Summary
- Add support for the return types of `async` functions
- Add type inference for `await` expressions
- Add support for `async with` / async context managers
- Add support for `yield from` expressions
This PR is generally lacking proper error handling in some cases (e.g.
illegal `__await__` attributes). I'm planning to work on this in a
follow-up.
part of https://github.com/astral-sh/ty/issues/151
closes https://github.com/astral-sh/ty/issues/736
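To make the list above concrete, here is a rough sketch of the kind of code these features cover (the revealed types in the comments are approximate, not copied from ty's output):
```py
import asyncio

async def get_number() -> int:
    return 42

async def main() -> None:
    coro = get_number()
    reveal_type(coro)        # a coroutine type whose return type is `int`
    reveal_type(await coro)  # int

    # `async with` / async context managers are supported as well.
    async with asyncio.Lock():
        pass
```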
## Ecosystem
There are a lot of true positives on `prefect` which look similar to:
```diff
prefect (https://github.com/PrefectHQ/prefect)
+ src/integrations/prefect-aws/tests/workers/test_ecs_worker.py:406:12: error[unresolved-attribute] Type `str` has no attribute `status_code`
```
This is due to a wrong return type annotation
[here](e926b8c4c1/src/integrations/prefect-aws/tests/workers/test_ecs_worker.py (L355-L391)).
```diff
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ test/mitmproxy/addons/test_clientplayback.py:18:1: error[invalid-argument-type] Argument to function `asynccontextmanager` is incorrect: Expected `(...) -> AsyncIterator[Unknown]`, found `def tcp_server(handle_conn, **server_args) -> Unknown | tuple[str, int]`
```
[This](a4d794c59a/test/mitmproxy/addons/test_clientplayback.py (L18-L19))
is a true positive. That function should return
`AsyncIterator[Address]`, not `Address`.
I looked through almost all of the other new diagnostics and they all
look like known problems or true positives.
## Typing conformance
The typing conformance diff looks good.
## Test Plan
New Markdown tests
Summary
--
This PR adds a `Checker::context` method that returns the underlying
`LintContext` to unify `Candidate::into_diagnostic` and
`Candidate::report_diagnostic` in our ambiguous Unicode character
checks. This avoids some duplication and also avoids collecting a `Vec`
of `Candidate`s only to iterate over it later.
Test Plan
--
Existing tests
## Summary
Fixes#19385.
Based on [unnecessary-placeholder
(PIE790)](https://docs.astral.sh/ruff/rules/unnecessary-placeholder/)
behavior, [ellipsis-in-non-empty-class-body
(PYI013)](https://docs.astral.sh/ruff/rules/ellipsis-in-non-empty-class-body/)
now safely preserves inline comments on ellipsis removal.
## Test Plan
A new test class was added:
```python
class NonEmptyChildWithInlineComment:
    value: int
    ...  # preserve me
```
with the following expected fix:
```python
class NonEmptyChildWithInlineComment:
    value: int
    # preserve me
```
The diagram is written in the Dot language, which can
be converted to SVG (or any other image) by GraphViz.
I thought it was a good idea to write this down in
preparation for adding routines that list modules.
Code reuse is likely to be difficult and I wanted to
be sure I understood how it worked.
I mostly just did this because the long string literals were annoying
me. And these can make rustfmt give up on formatting.
I also re-flowed some long comment lines while I was here.
I'm not sure if this used to be used elsewhere, but it no longer is.
And it looks like an internal-only helper function, so just un-export
it.
And note that `ModuleNameIngredient` is also un-exported, so this
function isn't really usable outside of its defining module anyway.
Summary
--
I noticed while reviewing #19390 that in `check_tokens` we were still
passing
around an extra `LinterSettings`, despite all of the same functions also
receiving a `LintContext` with its own settings.
This PR adds the `LintContext::settings` method and calls that instead
of using
the separate `LinterSettings`.
Test Plan
--
Existing tests
## Summary
Resolves #19531
I've implemented a check to determine whether the `for` statement target
is declared as `global` or `nonlocal`. I believe we should skip the rule
in all such cases, since variables declared this way are intended for use
outside the loop scope, making value changes expected behavior.
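A hypothetical sketch (the specific rule isn't named above) of the kind of pattern that is now skipped:
```py
counter = 0

def consume(items: list[int]) -> None:
    global counter
    # `counter` is declared `global`, so reusing its final value outside
    # the loop is intentional; the rule no longer flags this loop target.
    for counter in items:
        ...
```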
## Test Plan
Added two test cases, one for a `global` and one for a `nonlocal`
variable, to the snapshot.
This PR improves the "signature help" language server feature in two
ways:
1. It adds support for the recently-introduced "stub mapper" which maps
symbol declarations within stubs to their implementation counterparts.
This allows the signature help to display docstrings from the original
implementation.
2. It incorporates a more robust fix to a bug that was addressed in a
[previous PR](https://github.com/astral-sh/ruff/pull/19542). It also
adds more comprehensive tests to cover this case.
Co-authored-by: UnboundVariable <unbound@gmail.com>
This eliminates the panic reported in
https://github.com/astral-sh/ty/issues/909, though it doesn't address
the underlying cause, which is that we aren't yet checking the types of
the fields of a protocol when checking whether a class implements the
protocol. And in particular, if a class explicitly opts out of iteration
via
```py
class NotIterable:
    __iter__ = None
```
we currently treat that as having an `__iter__` member, and therefore
implementing `Iterable`.
Note that the assumption that was in the comment before is still
correct: call binding will have already checked that the argument
satisfies `Iterable`, and so it shouldn't be an error to iterate over
said argument. But arguably, the new logic in this PR is a better way to
discharge that assumption — instead of panicking if we happen to be
wrong, fall back on an unknown iteration result.
## Summary
Fixes#18844
I'm not too sure if the solution is as simple as the way I implemented
it, but I'm curious to see if we are covering all cases correctly here.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
As a follow-up to #18949 (suggested
[here](https://github.com/astral-sh/ruff/pull/18949#pullrequestreview-2998417889)),
this PR implements auto-fix logic for `PLC0207`.
## Test Plan
Existing tests pass, with updates to the snapshot so that it expects the
new output that comes along with the auto-fix.
## Summary
Split the "Generator functions" tests into two parts. The first part
(synchronous) refers to a function called `i` from a function `i2`. But
`i` is later redeclared in the asynchronous part, which was probably not
intended.
As of [this cpython PR](https://github.com/python/cpython/pull/135996),
it is not allowed to concatenate t-strings with non-t-strings,
implicitly or explicitly. Expressions such as `"foo" t"{bar}"` are now
syntax errors.
This PR updates some AST nodes and parsing to reflect this change.
The structural change is that `TStringPart` is no longer needed, since,
as in the case of `BytesStringLiteral`, the only possibilities are that
we have a single `TString` or a vector of such (representing an implicit
concatenation of t-strings). This removes a level of nesting from many
AST expressions (which is what all the snapshot changes reflect), and
simplifies some logic in the implementation of visitors, for example.
The other change of note is in the parser. When we meet an implicit
concatenation of string-like literals, we now count the number of
t-string literals. If these do not exhaust the total number of
implicitly concatenated pieces, then we emit a syntax error. To recover
from this syntax error, we encode any t-string pieces as _invalid_
string literals (which means we flag them as invalid, record their
range, and record the value as `""`). Note that if at least one of the
pieces is an f-string we prefer to parse the entire string as an
f-string; otherwise we parse it as a string.
This logic is exactly the same as how we currently treat
`BytesStringLiteral` parsing and error recovery - and carries with it
the same pros and cons.
Finally, note that I have not implemented any changes in the
implementation of the formatter. As far as I can tell, none are needed.
I did change a few of the fixtures so that we are always concatenating
t-strings with t-strings.
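A short sketch of the new rules (this requires Python 3.14 t-string syntax):
```py
name = "world"

# Still allowed: implicit concatenation of t-strings with other t-strings.
greeting = t"hello " t"{name}"

# Now a syntax error: implicitly concatenating a t-string with a plain
# string literal (the same applies to f-string and bytes pieces).
bad = "hello " t"{name}"
```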
This PR adds support for the "selection range" language server feature.
This feature was recently requested by a ty user in [this feature
request](https://github.com/astral-sh/ty/issues/882).
This feature allows a client to implement "smart selection expansion"
based on the structure of the parse tree. For example, if you type
"shift-ctrl-right-arrow" in VS Code, the current selection will be
expanded to include the parent AST node. Conversely,
"shift-ctrl-left-arrow" shrinks the selection.
We will probably need to tune the granularity of selection expansion
based on user feedback. The initial implementation includes most AST
nodes, but users may find this to be too fine-grained. We have the
option of skipping some AST nodes that are not as meaningful when
editing code.
Co-authored-by: UnboundVariable <unbound@gmail.com>
We now correctly exclude legacy typevars from enclosing scopes when
constructing the generic context for a generic function.
In more detail:
A function is generic if it refers to legacy typevars in its signature:
```py
from typing import TypeVar
T = TypeVar("T")
def f(t: T) -> T:
    return t
```
Generic functions are allowed to appear inside of other generic
contexts. When they do, they can refer to the typevars of those
enclosing generic contexts, and that should not rebind the typevar:
```py
from typing import TypeVar, Generic
T = TypeVar("T")
U = TypeVar("U")
class C(Generic[T]):
    @staticmethod
    def method(t: T, u: U) -> None: ...
# revealed: def method(t: int, u: U) -> None
reveal_type(C[int].method)
```
This substitution was already being performed correctly, but we were
also still including the enclosing legacy typevars in the method's own
generic context, which can be seen via `ty_extensions.generic_context`
(which has been updated to work on generic functions and methods):
```py
from ty_extensions import generic_context
# before: tuple[T, U]
# after: tuple[U]
reveal_type(generic_context(C[int].method))
```
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Changing `BLE001` (blind-except) so that it does not flag `except`
clauses which include `logging.critical(..., exc_info=True)`.
## Test Plan
It passes the following (whereas the `main` branch does not):
```sh
$ cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
```
```python
# somefile.py
import logging
try:
    print("Hello world!")
except Exception:
    logging.critical("Did not run.", exc_info=True)
```
Related: https://github.com/astral-sh/ruff/issues/19519
Small rewording to indicate that core development is done but that we
may add breaking changes.
Feel free to bikeshed!
Test:
```console
❯ echo "t''" | cargo run -p ruff -- check --no-cache --isolated --target-version py314 -
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.13s
Running `target/debug/ruff check --no-cache --isolated --target-version py314 -`
warning: Support for Python 3.14 is in preview and may undergo breaking changes. Enable `preview` to remove this warning.
All checks passed!
```
This PR adds support for "document symbols" and "workspace symbols"
language server features. Most of the logic to implement these features
is shared.
The "document symbols" feature returns a list of all symbols within a
specified source file. Clients can specify whether they want a flat or
hierarchical list. Document symbols are typically presented by a client
in an "outline" form. Here's what this looks like in VS Code, for
example.
<img width="240" height="249" alt="image"
src="https://github.com/user-attachments/assets/82b11f4f-32ec-4165-ba01-d6496ad13bdf"
/>
The "workspace symbols" feature returns a list of all symbols across the
entire workspace that match some user-supplied query string. This allows
the user to quickly find and navigate to any symbol within their code.
<img width="450" height="134" alt="image"
src="https://github.com/user-attachments/assets/aac131e0-9464-4adf-8a6c-829da028c759"
/>
---------
Co-authored-by: UnboundVariable <unbound@gmail.com>
Summary
--
I looked at other uses of `TextEmitter`, and I think this should be the
only one affected by this. The other integration tests must work
properly since they're run with `assert_cmd_snapshot!`, which I assume
triggers the `SHOULD_COLORIZE` case, and the `cfg!(test)` check will
work for uses in `ruff_linter`.
4a4dc38b5b/crates/ruff_linter/src/message/text.rs (L36-L44)
Alternatively, we could probably move this to a CLI test instead.
Test Plan
--
`cargo test -p ruff`, which was failing on `main` with color codes in
the output before this
## Summary
We currently infer a `@Todo` type whenever we access an attribute on an
intersection type with negative components. This can happen very
naturally. Consequently, this `@Todo` type is rather pervasive and hides
a lot of true positives that ty could otherwise detect:
```py
class Foo:
    attr: int = 1

def _(f: Foo | None):
    if f:
        reveal_type(f)       # Foo & ~AlwaysFalsy
        reveal_type(f.attr)  # now: int, previously: @Todo
```
The changeset here proposes to handle member access on these
intersection types by simply ignoring all negative contributions. This
is not always ideal: a negative contribution like `~<Protocol with
members 'attr'>` could be a hint that `.attr` should not be accessible
on the full intersection type. The behavior can certainly be improved in
the future, but this seems like a reasonable initial step to get rid of
this unnecessary `@Todo` type.
## Ecosystem analysis
There are quite a few changes here. I spot-checked them and found one
bug where attribute access on pure negation types (`~P == object & ~P`)
would not allow attributes on `object` to be accessed. After that was
fixed, I only see true positives and known problems. The fact that a lot
of `unused-ignore-comment` diagnostics go away is also evidence that
this touches a sensitive area, where static analysis clashes with
dynamically adding attributes to objects:
```py
… # type: ignore # Runtime attribute access
```
## Test Plan
Updated tests.
## Summary
Add basic support for `dataclasses.field`:
* remove fields with `init=False` from the signature of the synthesized
`__init__` method
* infer correct default value types from `default` or `default_factory`
arguments
```py
from dataclasses import dataclass, field

def default_roles() -> list[str]:
    return ["user"]

@dataclass
class Member:
    name: str
    roles: list[str] = field(default_factory=default_roles)
    tag: str | None = field(default=None, init=False)

# revealed: (self: Member, name: str, roles: list[str] = list[str]) -> None
reveal_type(Member.__init__)
```
Support for `kw_only` has **not** been added.
part of https://github.com/astral-sh/ty/issues/111
## Test Plan
New Markdown tests
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Co-authored-by: Carl Meyer <carl@oddbird.net>
Co-authored-by: Micha Reiser <micha@reiser.io>
This PR fixes bug [#879](https://github.com/astral-sh/ty/issues/879)
where the signature help popup remains visible after typing the closing
paren in a call expression.
Co-authored-by: UnboundVariable <unbound@gmail.com>
This PR adds support for the "document highlights" language server
feature.
This feature allows a client to highlight all instances of a selected
name within a document. Without this feature, editors perform
highlighting based on a simple text match. This adds semantic knowledge.
The implementation of this feature largely overlaps that of the
recently-added "references" feature. This PR refactors the existing
"references.rs" module, separating out the functionality and tests that
are specific to the other language feature into a "goto_references.rs"
module. The "references.rs" module now contains the functionality that
is common to "goto references", "document highlights" and "rename"
(which is not yet implemented).
As part of this PR, I also created a new `ReferenceTarget` type which is
similar to the existing `NavigationTarget` type but better suited for
references. This idea was suggested by @MichaReiser in [this code review
feedback](https://github.com/astral-sh/ruff/pull/19475#discussion_r2224061006)
from a previous PR. Notably, this new type contains a field that
specifies the "kind" of the reference (read, write or other). This
"kind" is needed for the document highlights feature.
Before: all textual instances of `foo` are highlighted
<img width="156" height="126" alt="Screenshot 2025-07-23 at 12 51 09 PM"
src="https://github.com/user-attachments/assets/37ccdb2f-d48a-473d-89d5-8e89cb6c394e"
/>
After: only semantic matches are highlighted
<img width="164" height="157" alt="Screenshot 2025-07-23 at 12 52 05 PM"
src="https://github.com/user-attachments/assets/2efadadd-4691-4815-af04-b031e74c81b7"
/>
---------
Co-authored-by: UnboundVariable <unbound@gmail.com>
## Summary
Expand cases in which ruff can offer a fix for `RUF039` (some of which
are unsafe).
While turning `"\n"` (== `\n`) into `r"\n"` (== `\\n`) is not equivalent
at run-time, it's still functionally equivalent to do so in the context
of [regex
patterns](https://docs.python.org/3/library/re.html#regular-expression-syntax)
as they themselves interpret the escape sequence. Therefore, an unsafe
fix can be offered.
Further, this PR also makes ruff offer fixes for byte-string literals,
not only string literals as before.
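A small sketch of the expanded coverage (my own example using `re.compile`; the set of calls the rule recognizes isn't restated here):
```py
import re

# The fix adds an `r` prefix. For an escape like `"\n"` this changes the
# literal's runtime value (newline vs. backslash + "n"), but the regex
# engine interprets `\n` identically, so the fix is offered as unsafe.
re.compile("\n+")

# Byte-string literals now also receive a fix.
re.compile(b"\n+")
```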
## Test Plan
Tests for all escape sequences have been added.
## Related
Closes: https://github.com/astral-sh/ruff/issues/16713
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
I saw that this creates a lot of false positives in the ecosystem, and
it seemed to be relatively easy to add basic support for this.
Some preliminary work on this was done by @InSyncWithFoo — thank you.
part of https://github.com/astral-sh/ty/issues/111
## Ecosystem analysis
The results look good.
## Test Plan
New Markdown tests
---------
Co-authored-by: InSync <insyncwithfoo@gmail.com>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
The generated fix for `RUF033` would cause a syntax error for named
expressions as parameter defaults.
```python
from dataclasses import InitVar, dataclass

@dataclass
class Foo:
    def __post_init__(self, bar: int = (x := 1)) -> None:
        pass
```
would be turned into
```python
from dataclasses import InitVar, dataclass

@dataclass
class Foo:
    x: InitVar[int] = x := 1
    def __post_init__(self, bar: int = (x := 1)) -> None:
        pass
```
instead of the syntactically correct
```python
# ...
x: InitVar[int] = (x := 1)
# ...
```
## Test Plan
The reproducer (plus some extra tests) has been added to the test
suite.
## Related
Fixes: https://github.com/astral-sh/ruff/issues/18950
This PR updates our iterator protocol machinery to return a tuple spec
describing the elements that are returned, instead of a type. That
allows us to track heterogeneous iterators more precisely, and
consolidates the logic in unpacking and splatting, which are the two
places where we can take advantage of that more precise information.
(Other iterator consumers, like `for` loops, have to collapse the
iterated elements down to a single type regardless, and we provide a new
helper method on `TupleSpec` to perform that summarization.)
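An illustrative sketch (my own example) of the difference between consumers that can use the element-wise information and those that summarize it:
```py
def f(pair: tuple[int, str]) -> None:
    # Unpacking can consult the element-wise tuple spec, so each target
    # gets its own element type.
    a, b = pair
    reveal_type(a)  # int
    reveal_type(b)  # str

    # A `for` loop has to collapse the iterated elements to a single type.
    for item in pair:
        reveal_type(item)  # int | str
```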
## Summary
Implements proper reachability analysis and — in effect — exhaustiveness
checking for `match` statements. This allows us to check the following
code without any errors (leads to *"can implicitly return `None`"* on
`main`):
```py
from enum import Enum, auto

class Color(Enum):
    RED = auto()
    GREEN = auto()
    BLUE = auto()

def hex(color: Color) -> str:
    match color:
        case Color.RED:
            return "#ff0000"
        case Color.GREEN:
            return "#00ff00"
        case Color.BLUE:
            return "#0000ff"
```
Note that code like this already worked fine if there was an
`assert_never(color)` call in a catch-all case, because we would
then consider that `assert_never` call terminal. But now this also works
without the wildcard case. Adding a member to the enum would still lead
to an error here, if that case would not be handled in `hex`.
What needed to happen to support this is a new way of evaluating match
pattern constraints. Previously, we would simply compare the type of the
subject expression against the patterns. For the last case here, the
subject type would still be `Color` and the value type would be
`Literal[Color.BLUE]`, so we would infer an ambiguous truthiness.
Now, before we compare the subject type against the pattern, we first
generate a union type that corresponds to the set of all values that
would have *definitely been matched* by previous patterns. Then, we
build a "narrowed" subject type by computing `subject_type &
~already_matched_type`, and compare *that* against the pattern type. For
the example here, `already_matched_type = Literal[Color.RED] |
Literal[Color.GREEN]`, and so we have a narrowed subject type of `Color
& ~(Literal[Color.RED] | Literal[Color.GREEN]) = Literal[Color.BLUE]`,
which allows us to infer a reachability of `AlwaysTrue`.
<details>
<summary>A note on negated reachability constraints</summary>
It might seem that we now perform duplicate work, because we also record
*negated* reachability constraints. But that is still important for
cases like the following (and possibly also for more realistic
scenarios):
```py
from typing import Literal

def _(x: int | str):
    match x:
        case None:
            pass  # never reachable
        case _:
            y = 1
    y
```
</details>
closes https://github.com/astral-sh/ty/issues/99
## Test Plan
* I verified that this solves all examples from the linked ticket (the
first example needs a PEP 695 type alias, because we don't support
legacy type aliases yet)
* Verified that the ecosystem changes are all because of removed false
positives
* Updated tests
## Summary
I noticed that our type narrowing and reachability analysis was
incorrect for class patterns that are not irrefutable. The test cases
below compare the old and the new behavior:
```py
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

class Other: ...

def _(target: Point):
    y = 1
    match target:
        case Point(0, 0):
            y = 2
        case Point(x=0, y=1):
            y = 3
        case Point(x=1, y=0):
            y = 4
    reveal_type(y)  # revealed: Literal[1, 2, 3, 4] (previously: Literal[2])

def _(target: Point | Other):
    match target:
        case Point(0, 0):
            reveal_type(target)  # revealed: Point
        case Point(x=0, y=1):
            reveal_type(target)  # revealed: Point (previously: Never)
        case Point(x=1, y=0):
            reveal_type(target)  # revealed: Point (previously: Never)
        case Other():
            reveal_type(target)  # revealed: Other (previously: Other & ~Point)
```
## Test Plan
New Markdown test
## Summary
closes #19204
## Test Plan
1. A test case was added in a dedicated file.
2. Manually tested the code locally.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: CodeMan62 <sharmahimanshu150082007@gmail.com>