Previously, if the module was just `foo-stubs`, we'd skip over
stripping the `-stubs` suffix, which would lead to us returning
`None`.
This function is now a little convoluted and could be simpler
if we did an intermediate allocation. But I kept the iterative
approach and added a special case to handle `foo-stubs`.
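For illustration, here's a hedged Python sketch of the mapping described above; the real implementation is the iterative Rust code, and the function name here is made up:

```py
def stubs_module_to_runtime(module: str) -> str | None:
    """Hypothetical sketch: map `foo-stubs[.bar]` to `foo[.bar]`."""
    top, dot, rest = module.partition(".")
    if not top.endswith("-stubs"):
        return None
    runtime = top[: -len("-stubs")]
    # Special case: a bare `foo-stubs` maps to `foo` instead of `None`.
    return runtime if not dot else f"{runtime}.{rest}"
```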
These tests capture existing behavior.
I added these when I stumbled upon what I thought was an
oddity: we prioritize `foo.pyi` over `foo.py`, but
prioritize `foo/__init__.py` over `foo.pyi`.
(I plan to investigate this more closely in follow-up
work. Particularly, to look at other type checkers. It
seems like we may want to change this to always prioritize
stubs.)
This is a port of the logic in https://github.com/astral-sh/uv/pull/7691
The basic idea is we use CONDA_DEFAULT_ENV as a signal for whether
CONDA_PREFIX is just the ambient system conda install, or the user has
explicitly activated a custom one. If the former, then the conda is
treated like a system install (having lowest priority). If the latter,
the conda is treated like an activated venv (having priority over
everything but an actual activated venv).
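As a rough sketch of that priority decision (the exact condition is an assumption on my part; the real check lives in the ported uv logic and may differ in detail):

```py
import os

def conda_priority() -> str:
    # Assumption: the ambient/base install reports CONDA_DEFAULT_ENV == "base".
    prefix = os.environ.get("CONDA_PREFIX")
    default_env = os.environ.get("CONDA_DEFAULT_ENV")
    if not prefix:
        return "no-conda"
    if default_env in (None, "base"):
        return "system-install"      # lowest priority
    return "activated-environment"   # just below an actual activated venv
```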
Fixes https://github.com/astral-sh/ty/issues/611
## Summary
Closes: https://github.com/astral-sh/ty/issues/669
(This turned out to be simpler than I thought :))
## Test Plan
Update existing test cases.
### Ecosystem report
Most of these are because ty has now started inferring more precise types for the return type of an overloaded call, and a lot of the types involved are defined using type aliases. Here are some examples:
<details><summary>Details</summary>
<p>
> attrs (https://github.com/python-attrs/attrs)
> + tests/test_make.py:146:14: error[unresolved-attribute] Type
`Literal[42]` has no attribute `default`
> - Found 555 diagnostics
> + Found 556 diagnostics
This is accurate now that we infer the type as `Literal[42]` instead of
`Unknown` (Pyright infers it as `int`)
> optuna (https://github.com/optuna/optuna)
> + optuna/_gp/search_space.py:181:53: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `tuple[int | float, int | float]`, found `tuple[Unknown |
ndarray[Unknown, <class 'float'>], Unknown | ndarray[Unknown, <class
'float'>]]`
> + optuna/_gp/search_space.py:181:83: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `int | float`, found `Unknown | ndarray[Unknown, <class
'float'>]`
> + tests/gp_tests/test_search_space.py:109:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `tuple[int | float, int
| float]`, found `Unknown | ndarray[Unknown, <class 'float'>]`
> + tests/gp_tests/test_search_space.py:110:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `int | float`, found
`Unknown | ndarray[Unknown, <class 'float'>]`
> - Found 559 diagnostics
> + Found 563 diagnostics
Same as above: ty is now inferring a more precise type like `Unknown | ndarray[tuple[int, int], <class 'float'>]` instead of just `Unknown` as before.
> jinja (https://github.com/pallets/jinja)
> + src/jinja2/bccache.py:298:39: error[invalid-argument-type] Argument
to bound method `write_bytecode` is incorrect: Expected `IO[bytes]`,
found `_TemporaryFileWrapper[str]`
> - Found 186 diagnostics
> + Found 187 diagnostics
This requires support for type aliases to match the correct overload.
> hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
> + src/hydra_zen/wrapper/_implementations.py:945:16:
error[invalid-return-type] Return type does not match returned value:
expected `DataClass_ | type[@Todo(type[T] for protocols)] | ListConfig |
DictConfig`, found `@Todo(unsupported type[X] special form) | (((...) ->
Any) & dict[Unknown, Unknown]) | (DataClass_ & dict[Unknown, Unknown]) |
dict[Any, Any] | (ListConfig & dict[Unknown, Unknown]) | (DictConfig &
dict[Unknown, Unknown]) | (((...) -> Any) & list[Unknown]) | (DataClass_
& list[Unknown]) | list[Any] | (ListConfig & list[Unknown]) |
(DictConfig & list[Unknown])`
> + tests/annotations/behaviors.py:60:28: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/behaviors.py:64:21: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:167:17: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:524:17:
error[unresolved-attribute] Type `<class 'int'>` has no attribute
`_target_`
> - Found 561 diagnostics
> + Found 566 diagnostics
Same as above, this requires support for type aliases to match the
correct overload.
> paasta (https://github.com/yelp/paasta)
> + paasta_tools/utils.py:4188:19: warning[redundant-cast] Value is
already of type `list[str]`
> - Found 888 diagnostics
> + Found 889 diagnostics
This is correct.
> colour (https://github.com/colour-science/colour)
> + colour/plotting/diagrams.py:448:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/diagrams.py:462:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/models.py:419:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:230:9: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:474:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:495:17: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:513:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> + colour/plotting/temperature.py:514:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> - Found 480 diagnostics
> + Found 488 diagnostics
Most of these are correct, except for the last two diagnostics, where I'm not sure what's happening: the code indexes into an `np.ndarray` type (which is inferred correctly), but I think it might be picking up an incorrect overload for the `__getitem__` method.
Scipy's diagnostics also require support for type aliases to pick the correct overload.
</p>
</details>
While implementing partial stubs, I had observed that this `continue` in the namespace-package code seemed erroneous, since the same `continue` for partial stubs didn't work. Unfortunately, I wasn't confident enough to push on that hunch at the time; fortunately, I remembered it, which made this an easy fix.
The issue with the `continue` is that it bails out of the current search path without testing any `.py` files. This breaks when, for example, `google` and `google-stubs`/`types-google` are both in the same site-packages directory: failing to find a module in `types-google` had us completely skip over `google`!
Fixes https://github.com/astral-sh/ty/issues/520
fix https://github.com/astral-sh/ty/issues/1047
## Summary
This PR fixes how `KW_ONLY` is applied in dataclasses. Previously, the
sentinel leaked into subclasses and incorrectly marked their fields as
keyword-only; now it only affects fields declared in the same class.
```py
from dataclasses import dataclass, KW_ONLY

@dataclass
class D:
    x: int
    _: KW_ONLY
    y: str

@dataclass
class E(D):
    z: bytes

# This should work: x=1 (positional), z=b"foo" (positional), y="foo" (keyword-only)
E(1, b"foo", y="foo")

reveal_type(E.__init__)  # revealed: (self: E, x: int, z: bytes, *, y: str) -> None
```
## Test Plan
mdtests
Requires some iteration, but this includes the most tedious part --
threading a new concept of DisplaySettings through every type display
impl. Currently it only holds a boolean for multiline, but in the future
it could also take other things like "render to markdown" or "here's
your base indent if you make a newline".
For types that have exposed display functions, I've left the old signature as a compatibility polyfill to avoid having to audit everywhere that prints types right off the bat. (Notably, I originally tried doing multiline function display unconditionally, and a ton of things that clearly weren't ready for multiline output, such as diagnostics, churned.)
The only real use of this API in this PR is to render function types in hovers as multiline, which is the highest-impact case (see snapshot changes).
Fixes https://github.com/astral-sh/ty/issues/1000
This change rejiggers how we register globs for file watching with the
LSP client. Previously, we registered a few globs like `**/*.py`,
`**/pyproject.toml` and more. There were two problems with this
approach.
Firstly, it only watches files within the project root, but search paths may be outside the project root, such as a virtualenv directory.
Secondly, there is variation on how tools interact with virtual
environments. In the case of uv, depending on its link mode, we might
not get any file change notifications after running `uv add foo` or
`uv remove foo`.
To remedy this, we instead just listen for file change notifications on
all files for all search paths. This simplifies the globs we use, but
does potentially increase the number of notifications we'll get.
However, given the somewhat simplistic interface supported by the LSP
protocol, I think this is unavoidable (unless we used our own file
watcher, which has its own considerable downsides). Moreover, this is
seemingly consistent with how `ty check --watch` works.
This also required moving file watcher registration to *after*
workspaces are initialized, or else we don't know what the right search
paths are.
This change is in service of #19883: for cache invalidation to work right there, the LSP client needs to send notifications whenever a dependency is added or removed. This change should make that possible.
I tried this patch with #19883 in addition to my work to activate Salsa
caching, and everything seems to work as I'd expect. That is,
completions no longer show stale results after a dependency is added or
removed.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1046
We special-case iteration of certain types because they may have a more
detailed tuple-spec. Now that type aliases are a distinct type variant,
we need to handle them as well.
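Roughly the shape of behavior this covers (illustrative, not the exact mdtest):

```py
type Pair = tuple[int, str]

def f(p: Pair) -> None:
    # Unpacking through the alias should use the detailed tuple spec:
    x, y = p
    reveal_type(x)  # revealed: int
    reveal_type(y)  # revealed: str
```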
I don't love that `Type::TypeAlias` means we have to remember to add a
case for it basically anywhere we are special-casing a certain kind of
type, but at the moment I don't have a better plan. It's another
argument for avoiding fallback cases in `Type` matches, which we usually
prefer; I've updated this match statement to be comprehensive.
## Test Plan
Added mdtest.
`Type::TypeVar` now distinguishes whether the typevar in question is
inferable or not.
A typevar is _not inferable_ inside the body of the generic class or
function that binds it:
```py
def f[T](t: T) -> T:
    return t
```
The inferred type of `t` in the function body is `TypeVar(T,
NotInferable)`. This represents how e.g. assignability checks need to be
valid for all possible specializations of the typevar. Most of the
existing assignability/etc logic only applies to non-inferable typevars.
Outside of the function body, the typevar is _inferable_:
```py
f(4)
```
Here, the parameter type of `f` is `TypeVar(T, Inferable)`. This
represents how e.g. assignability doesn't need to hold for _all_
specializations; instead, we need to find the constraints under which
this specific assignability check holds.
This is in support of starting to perform specialization inference _as
part of_ performing the assignability check at the call site.
In the [[POPL2015][]] paper, this concept is called _monomorphic_ /
_polymorphic_, but I thought _non-inferable_ / _inferable_ would be
clearer for us.
Depends on #19784
[POPL2015]: https://doi.org/10.1145/2676726.2676991
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
**Stacked on top of #19849; diff will include that PR until it is
merged.**
---
## Summary
As part of #19849, I noticed this fix could be implemented.
## Test Plan
Tests added based on CPython behaviour.
## Summary
This PR adds a new lint, `invalid-await`, for all sorts of reasons why
an object may not be `await`able, as discussed in astral-sh/ty#919.
Precisely, `__await__` is guarded against being missing, possibly
unbound, or improperly defined (expects additional arguments or doesn't
return an iterator).
Of course, diagnostics need to be fine-tuned. If `__await__` cannot be
called with no extra arguments, it indicates an error (or a quirk?) in
the method signature, not at the call site. Without any doubt, such an
object is not `Awaitable`, but I feel like talking about arguments for
an *implicit* call is a bit leaky.
I didn't reference any actual diagnostic messages in the lint
definition, because I want to hear feedback first.
Also, there's no mention of the actual required method signature for
`__await__` anywhere in the docs. The only reference I had is the
`typing` stub. I basically ended up linking `[Awaitable]` to ["must
implement
`__await__`"](https://docs.python.org/3/library/collections.abc.html#collections.abc.Awaitable),
which is insufficient on its own.
## Test Plan
The following code was tested:
```python
import asyncio
import typing

class Awaitable:
    def __await__(self) -> typing.Generator[typing.Any, None, int]:
        yield None
        return 5

class NoDunderMethod:
    pass

class InvalidAwaitArgs:
    def __await__(self, value: int) -> int:
        return value

class InvalidAwaitReturn:
    def __await__(self) -> int:
        return 5

class InvalidAwaitReturnImplicit:
    def __await__(self):
        pass

async def main() -> None:
    result = await Awaitable()  # valid
    result = await NoDunderMethod()  # `__await__` is missing
    result = await InvalidAwaitReturn()  # `__await__` returns `int`, which is not a valid iterator
    result = await InvalidAwaitArgs()  # `__await__` expects additional arguments and cannot be called implicitly
    result = await InvalidAwaitReturnImplicit()  # `__await__` returns `Unknown`, which is not a valid iterator

asyncio.run(main())
```
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This PR renames `ty.inlayHints.functionArgumentNames` to
`ty.inlayHints.callArgumentNames`, which covers both function calls and class initialization calls, i.e., any call expression.
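For example (illustrative; the exact hint text is approximate):

```py
def greet(name: str, excited: bool = False) -> str:
    return f"Hello, {name}{'!' if excited else '.'}"

class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

greet("world", True)  # hints: name=, excited=
Point(1, 2)           # hints: x=, y=  (class initialization calls are covered too)
```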
## Summary
This PR changes the default of `ty.inlayHints.*` settings to `true`.
I somehow missed this in my initial PR.
This is marked as `internal` because it's not yet released.
## Summary
For PEP 695 generic functions and classes, there is an extra "type
params scope" (a child of the outer scope, and wrapping the body scope)
in which the type parameters are defined; class bases and function
parameter/return annotations are resolved in that type-params scope.
This PR fixes some longstanding bugs in how we resolve name loads from
inside these PEP 695 type parameter scopes, and also defers type
inference of PEP 695 typevar bounds/constraints/defaults, so we can handle cycles without panicking.
We were previously treating these type-param scopes as lazy nested
scopes, which is wrong. In fact they are eager nested scopes; the class
`C` here inherits `int`, not `str`, and previously we got that wrong:
```py
Base = int
class C[T](Base): ...
Base = str
```
But certain syntactic positions within type param scopes (typevar
bounds/constraints/defaults) are lazy at runtime, and we should use
deferred name resolution for them. This also means they can have cycles;
in order to handle that without panicking in type inference, we need to
actually defer their type inference until after we have constructed the
`TypeVarInstance`.
PEP 695 does specify that typevar bounds and constraints cannot be
generic, and that typevar defaults can only reference prior typevars,
not later ones. This reduces the scope of (valid from the type-system
perspective) cycles somewhat, although cycles are still possible (e.g.
`class C[T: list[C]]`). And this is a type-system-only restriction; from
the runtime perspective an "invalid" case like `class C[T: T]` actually
works fine.
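A small runtime illustration of the eager/lazy split (requires Python 3.12+; names are made up):

```py
# Bases are eager: `class C[T](Base): ...` needs `Base` to already be defined.
# Bounds are lazy: the bound expression is only evaluated when accessed.
class D[T: Bound]: ...

Bound = int
print(D.__type_params__[0].__bound__)  # <class 'int'>
```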
I debated whether to implement the PEP 695 restrictions as a way to
avoid some cycles up-front, but I ended up deciding against that; I'd
rather model the runtime name-resolution semantics accurately, and
implement the PEP 695 restrictions as a separate diagnostic on top.
(This PR doesn't yet implement those diagnostics, thus some `# TODO:
error` in the added tests.)
Introducing the possibility of cyclic typevars meant that typevar display could stack overflow. For now I've handled this by simply removing
typevar details (bounds/constraints/default) from typevar display. This
impacts display of two kinds of types. If you `reveal_type(T)` on an
unbound `T` you now get just `typing.TypeVar` instead of
`typing.TypeVar("T", ...)` where `...` is the bound/constraints/default.
This matches pyright and mypy; pyrefly uses `type[TypeVar[T]]` which
seems a bit confusing, but does include the name. (We could easily
include the name without cycle issues, if there's a syntax we like for
that.)
It also means that displaying a generic function type like `def f[T:
int](x: T) -> T: ...` now displays as `f[T](x: T) -> T` instead of `f[T:
int](x: T) -> T`. This matches pyright and pyrefly; mypy does include
bound/constraints/defaults of typevars in function/callable type
display. If we wanted to add this, we would either need to thread a
visitor through all the type display code, or add a `decycle` type
transformation that replaced recursive reoccurrence of a type with a
marker.
## Test Plan
Added mdtests and modified existing tests to improve their correctness.
After this PR, there's only a single remaining py-fuzzer seed in the
0-500 range that panics! (Before this PR, there were 10; the fuzzer
likes to generate cyclic PEP 695 syntax.)
## Ecosystem report
It's all just the changes to `TypeVar` display.
This PR adds a type tag to the `CycleDetector` visitor (and its
aliases).
There are some places where we implement e.g. an equivalence check by
making a disjointness check. Both `is_equivalent_to` and
`is_disjoint_from` use a `PairVisitor` to handle cycles, but they should
not use the same visitor. I was finding it tedious to remember when it
was appropriate to pass on a visitor and when not to. This adds a
`PhantomData` type tag to ensure that we can't pass on one method's
visitor to a different method.
For `has_relation` and `apply_type_mapping`, we have an existing type
that we can use as the tag. For the other methods, I've added empty
structs (`Normalized`, `IsDisjointFrom`, `IsEquivalentTo`) to use as
tags.
## Summary
- Refactored `BLE001` logic for clarity and minor speed-up.
- Improved documentation and comments (previously, `BLE001` docs claimed
it catches bare `except:`s, but it doesn't).
- Fixed a false-positive bug with `from None` cause:
```python
# somefile.py
try:
pass
except BaseException as e:
raise e from None
```
### main branch
```
somefile.py:3:8: BLE001 Do not catch blind exception: `BaseException`
|
1 | try:
2 | pass
3 | except BaseException as e:
| ^^^^^^^^^^^^^ BLE001
4 | raise e from None
|
Found 1 error.
```
### this change
```
> cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
All checks passed!
```
## Test Plan
- Added a test case to cover `raise X from Y` clause
- Added a test case to cover `raise X from None` clause
This also reintroduces the `ResolvedDefinition::Module` variant because
reverse-engineering it in several places is a bit confusing. In an ideal
world we wouldn't have `ResolvedDefinition::FileWithRange` as it kinda
kills the ability to do richer analysis, so I want to chip away at its
scope wherever I can (currently it's used to point at asname parts of
import statements when doing `ImportAliasResolution::PreserveAliases`,
and also keyword arguments).
This also makes a kind of odd change to allow a hover to *only* produce
a docstring. This works around an oddity where hovering over a module
name in an import fails to resolve to a `ty` even though hovering over
uses of that imported name *does*.
The two fixed tests reflect the two interesting cases here.
## Summary
Fixes #19881. While I was here, I also made a couple of related tweaks
to the output format. First, we don't need to strip the `SyntaxError: `
prefix anymore since that's not added directly to the diagnostic message
after #19644. Second, we can use `secondary_code_or_id` to fall back on
the lint ID for syntax errors, which changes the `check_name` from
`syntax-error` to `invalid-syntax`. And then the main change requested
in the issue, prepending the `check_name` to the description.
## Test Plan
Existing tests and a new screenshot from GitLab:
<img width="362" height="113" alt="image"
src="https://github.com/user-attachments/assets/97654ad4-a639-4489-8c90-8661c7355097"
/>
This PR has several components:
* Introduce a Docstring String wrapper type that has render_plaintext
and render_markdown methods, to force docstring handlers to pick a
rendering format
* Implement [PEP-257](https://peps.python.org/pep-0257/) docstring
trimming for it
* The markdown rendering just renders the content in a plaintext
codeblock for now (followup work)
* Introduce a `DefinitionsOrTargets` type representing the partial
evaluation of `GotoTarget::get_definition_targets` to ideally stop at
getting `ResolvedDefinitions`
* Add `declaration_targets`, `definition_targets`, and `docstring`
methods to `DefinitionsOrTargets` for the 3 use cases we have for this
operation
* `docstring` is of course the key addition here; it uses the same basic logic that `signature_help` was using: first check the goto-declaration for docstrings, then check the goto-definition for docstrings.
* Refactor `signature_help` to use the new APIs instead of implementing
it itself
* Not fixed in this PR: an issue I found where `signature_help` will
erroneously cache docs between functions that have the same type (hover
docs don't have this bug)
* A handful of new tests and additions to tests to add docstrings in
various places and see which get caught
Examples of it working with stdlib, third party, and local definitions:
<img width="597" height="120" alt="Screenshot 2025-08-12 at 2 13 55 PM"
src="https://github.com/user-attachments/assets/eae54efd-882e-4b50-b5b4-721595224232"
/>
<img width="598" height="281" alt="Screenshot 2025-08-12 at 2 14 06 PM"
src="https://github.com/user-attachments/assets/5c9740d5-a06b-4c22-9349-da6eb9a9ba5a"
/>
<img width="327" height="180" alt="Screenshot 2025-08-12 at 2 14 18 PM"
src="https://github.com/user-attachments/assets/3b5647b9-2cdd-4c5b-bb7d-da23bff1bcb5"
/>
Notably modules don't work yet (followup work):
<img width="224" height="83" alt="Screenshot 2025-08-12 at 2 14 37 PM"
src="https://github.com/user-attachments/assets/7e9dcb70-a10e-46d9-a85c-9fe52c3b7e7b"
/>
Notably we don't show docs for an item if you hover its actual
definition (followup work, but also, not the most important):
<img width="324" height="69" alt="Screenshot 2025-08-12 at 2 16 54 PM"
src="https://github.com/user-attachments/assets/d4ddcdd8-c3fc-4120-ac93-cefdf57933b4"
/>
## Summary
A [passing
comment](https://github.com/astral-sh/ruff/pull/19711#issuecomment-3169312014)
led me to explore why we didn't report a class attribute as possibly
unbound if it was a method and defined in two different conditional
branches.
I found that the reason was our handling of "conflicting declarations" in `place_from_declarations`: it returned a `Result` which would be `Err` in the case of conflicting declarations.
But we only care about conflicting declarations when we are actually doing type inference on that scope and might emit a diagnostic about it. In all cases (including that one), we otherwise want to proceed with the union of the declared types, as if there were no conflict.
In several cases we were failing to handle the union of declared types
in the same way as a normal declared type if there was a declared-types
conflict. The `Result` return type made this mistake really easy to
make, as we'd match on e.g. `Ok(Place::Type(...))` and do one thing,
then match on `Err(...)` and do another, even though really both of
those cases should be handled the same.
This PR refactors `place_from_declarations` to instead return a struct
which always represents the declared type we should use in the same way,
as well as carrying the conflicting declared types, if any. This struct
has a method to allow us to explicitly ignore the declared-types
conflict (which is what we want in most cases), as well as a method to
get the declared type and the conflict information, in the case where we
want to emit a diagnostic on the conflict.
## Test Plan
Existing CI; added a test showing that we now understand a
multiply-conditionally-defined method as possibly-unbound.
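A sketch of the kind of case the new test covers (illustrative only):

```py
import random

class C:
    if random.random() > 0.5:
        def method(self) -> int:
            return 1
    elif random.random() > 0.5:
        def method(self) -> int:
            return 2

C().method()  # ty can now flag this as possibly unbound: no branch is guaranteed to run
```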
This does trigger issues on a couple new fuzzer seeds, but the issues
are just new instances of an already-known (and rarely occurring)
problem which I already plan to address in a future PR, so I think it's
OK to land as-is.
I happened to build this initially on top of
https://github.com/astral-sh/ruff/pull/19711, which adds invalid-await
diagnostics, so I also updated some invalid-syntax tests to not await on
an invalid type, since the purpose of those tests is to check the
syntactic location of the `await`, not the validity of the awaited type.
Summary
--
To take advantage of the new diagnostics, we need to update our caching
model to include all of the information supported by `ruff_db`'s
diagnostic type. Instead of trying to serialize all of this information,
Micha suggested simply not caching files with diagnostics, like we
already do for files with syntax errors. This PR is an attempt at that
approach.
This has the added benefit of trimming down our `Rule` derives since
this was the last place the `FromStr`/`strum_macros::EnumString`
implementation was used, as well as the (de)serialization macros and
`CacheKey`.
Test Plan
--
Existing tests, with their input updated not to include a diagnostic,
plus a new test showing that files with lint diagnostics are not cached.
Benchmarks
--
In addition to tests, we wanted to check that this doesn't degrade
performance too much. I posted part of this new analysis in
https://github.com/astral-sh/ruff/issues/18198#issuecomment-3175048672,
but I'll duplicate it here. In short, there's not much difference
between `main` and this branch for projects with few diagnostics
(`home-assistant`, `airflow`), as expected. The difference for projects
with many diagnostics (`cpython`) is quite a bit bigger (~300 ms vs ~220
ms), but most projects that run ruff regularly are likely to have very
few diagnostics, so this may not be a problem practically.
I guess GitHub isn't really rendering this as I intended, but the extra
separator line is meant to separate the benchmarks on `main` (above the
line) from this branch (below the line).
| Command | Mean [ms] | Min [ms] | Max [ms] |
|:---------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 322.0 | 317.5 | 326.2 |
| `ruff check cpython --isolated --exit-zero` | 217.3 | 209.8 | 237.9 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 279.5 | 277.0 | 283.6 |
| `ruff check home-assistant --isolated --exit-zero` | 37.2 | 35.7 | 40.6 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 133.1 | 130.4 | 146.4 |
| `ruff check airflow --isolated --exit-zero` | 34.7 | 32.9 | 41.6 |
|:---------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 330.1 | 324.5 | 333.6 |
| `ruff check cpython --isolated --exit-zero` | 309.2 | 306.1 | 314.7 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 288.6 | 279.4 | 302.3 |
| `ruff check home-assistant --isolated --exit-zero` | 39.8 | 36.9 | 42.4 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 134.5 | 131.3 | 140.6 |
| `ruff check airflow --isolated --exit-zero` | 39.1 | 37.2 | 44.3 |
I had Claude adapt one of the
[scripts](https://github.com/sharkdp/hyperfine/blob/master/scripts/plot_whisker.py)
from the hyperfine repo to make this plot, so it's not quite perfect,
but maybe it's still useful. The table is probably more reliable for
close comparisons. I'll put more details about the benchmarks below for
the sake of future reproducibility.
<img width="4472" height="2368" alt="image"
src="https://github.com/user-attachments/assets/1c42d13e-818a-44e7-b34c-247340a936d7"
/>
<details><summary>Benchmark details</summary>
<p>
The versions of each project:
- CPython: 6322edd260e8cad4b09636e05ddfb794a96a0451, the 3.10 branch
from the contributing docs
- `home-assistant`: 5585376b406f099fb29a970b160877b57e5efcb0
- `airflow`: 29a1cb0cfde9d99b1774571688ed86cb60123896
The last two are just the main branches at the time I cloned the repos.
I don't think our Ruff config should be applied since I used `--isolated`, but these are cloned into my copy of Ruff at `crates/ruff_linter/resources/test`. I trimmed the `./target/release/` prefix from each of the commands, but these are builds of Ruff in release mode.
And here's the script with the `hyperfine` invocation:
```shell
#!/bin/bash
cargo build --release --bin ruff
# git clone --depth 1 https://github.com/home-assistant/core crates/ruff_linter/resources/test/home-assistant
# git clone --depth 1 https://github.com/apache/airflow crates/ruff_linter/resources/test/airflow
bin=./target/release/ruff
resources=./crates/ruff_linter/resources/test
cpython=$resources/cpython
home_assistant=$resources/home-assistant
airflow=$resources/airflow
base=${1:-bench}
hyperfine --warmup 10 --export-json $base.json --export-markdown $base.md \
"$bin check $cpython --no-cache --isolated --exit-zero" \
"$bin check $cpython --isolated --exit-zero" \
"$bin check $home_assistant --no-cache --isolated --exit-zero" \
"$bin check $home_assistant --isolated --exit-zero" \
"$bin check $airflow --no-cache --isolated --exit-zero" \
"$bin check $airflow --isolated --exit-zero"
```
I ran this once on `main` (`baseline` in the graph, top half of the
table) and once on this branch (`nocache` and bottom of the table).
</p>
</details>
## Summary
Support recursive type aliases by adding a `Type::TypeAlias` type
variant, which allows referring to a type alias directly as a type
without eagerly unpacking it to its value.
We still unpack type aliases when they are added to intersections and
unions, so that we can simplify the intersection/union appropriately
based on the unpacked value of the type alias.
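For example, an alias like the following can now be represented without eagerly (and infinitely) expanding its value (illustrative):

```py
type Json = str | int | float | bool | None | list[Json] | dict[str, Json]

def dump(value: Json) -> None:
    if isinstance(value, list):
        for item in value:
            dump(item)  # `item` resolves through the alias without infinite expansion
```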
This introduces new possible recursive types, and so also requires
expanding our usage of recursion-detecting visitors in Type methods. The
use of these visitors is still not fully comprehensive in this PR, and
will require further expansion to support recursion in more kinds of
types (I already have further work on this locally), but I think it may
be better to do this incrementally in multiple PRs.
## Test Plan
Added some recursive type-alias tests and made them pass.
## Summary
After https://github.com/astral-sh/ruff/pull/19871, I realized that now
that we are passing around shared references to `CycleDetector`
visitors, we can now also simplify the `visit` callback signature; we
don't need to smuggle a single visitor reference through it anymore.
This is a pretty minor simplification, and it doesn't really make
anything shorter since I typically used a very short name (`v`) for the
smuggled reference, but I think it reduces cognitive overhead in reading
these `visit` usages; the extra variable would likely be confusing
otherwise for a reader.
## Test Plan
Existing CI.
## Summary
Type visitors are conceptually immutable, they just internally track the
types they've seen (and some maintain a cache of results.) Passing
around mutable visitors everywhere can get us into borrow-checker
trouble in some cases, where we need to recursively pass along the
visitor inside more than one closure with non-disjoint lifetime.
Use interior mutability (via `RefCell` and `Cell`) inside the visitors
instead, to allow us to pass around shared references.
## Test Plan
Existing tests.
## Summary
Add "airflow.secrets.cache.SecretCache" →
"airflow.sdk.cache.SecretCache" rule
## Test Plan
---------
Co-authored-by: Wei Lee <weilee.rx@gmail.com>
Summary
--
This fixes a regression caused by the BOM handling in #19806. Most
diagnostics already account for the BOM in their ranges, but those that
use `TextRange::default` to mean the beginning of the file do not,
causing an underflow in `RenderableAnnotation::new` when subtracting the
BOM-shifted `snippet_start` from the annotation range.
I ran into this when trying to run benchmarks on CPython in preparation
for caching work. The file `cpython/Lib/test/bad_coding2.py` was causing
a crash because it had a default-range `I002` diagnostic, with a BOM.
7cc3f1ebe9/crates/ruff_linter/src/rules/isort/rules/add_required_imports.rs (L122-L126)
The fix here is just to saturate to zero instead of panicking. I
considered adding a `TextRange::saturating_sub` method, but I wasn't
sure it was worth it for this one use. I'm happy to do that if
preferred, though.
Saturating seemed easier than shifting the affected annotations over,
but that could be another solution.
Test Plan
--
A new `ruff_db` test that reproduced the issue and manual testing
against the CPython file mentioned above
## Summary
This is a follow-up to
https://github.com/astral-sh/ruff/pull/19415#discussion_r2263456740 to
remove some unused code. As Micha noticed,
`GroupedEmitter::with_show_source` was only used in local unit tests[^1]
and was safe to remove. This allowed deleting `MessageCodeFrame` and a
lot more helper code previously shared with the `full` output format.
I also moved some other code from `text.rs` and `message/mod.rs` into
`grouped.rs` that is now only used for the `grouped` format. With a
little refactoring of the `concise` rendering logic in `ruff_db`, we
could probably remove `RuleCodeAndBody` too. The only difference I see
from the `concise` output is whether we print the filename next to the
row and column or not:
```shell
> ruff check --output-format concise
try.py:1:8: F401 [*] `math` imported but unused
> ruff check --output-format grouped
try.py:
1:8 F401 [*] `math` imported but unused
```
But I didn't try to do that here.
## Test Plan
Existing tests, with the source code no longer displayed. I also deleted
one test, as it was now a duplicate of the `default` test.
[^1]: "Local unit tests" as opposed to all of our linter snapshot tests,
as is the case for `TextEmitter::with_show_fix_diff`. We also want to
expose that to users eventually
(https://github.com/astral-sh/ruff/issues/7352), which I don't believe
is the case for the `grouped` format.
The [minimal
reproduction](https://gist.github.com/dcreager/fc53c59b30d7ce71d478dcb2c1c56444)
of https://github.com/astral-sh/ty/issues/948 is an example of a class
with implicit attributes whose types end up depending on themselves. Our
existing cycle detection for `infer_expression_types` is usually enough
to handle this situation correctly, but when there are very many of
these implicit attributes, we get a combinatorial explosion of running
time and memory usage.
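A toy illustration of the pattern (the linked gist is the real reproduction; this class is made up):

```py
class Config:
    def copy_from(self, other: "Config") -> None:
        # Each implicit attribute's inferred type depends on another implicit
        # attribute, so inference is mutually recursive across attributes.
        self.a = other.b
        self.b = other.c
        self.c = other.a
```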
Adding a separate cycle handler for `ClassLiteral::implicit_attribute`
lets us catch and recover from this situation earlier.
Closes https://github.com/astral-sh/ty/issues/948
The stub mapper wasn't being passed into this codepath; it is now. A test result that I had previously (and intentionally) checked in with incorrect output is now fixed as a result.
This works by using essentially the same logic for system site-packages, on the assumption that system site-packages are always a subdir of the stdlib we were looking for.
fix https://github.com/astral-sh/ty/issues/943
## Summary
Add module-level `__getattr__` support for ty's type checker, fixing
issue https://github.com/astral-sh/ty/issues/943.
Module-level `__getattr__` functions ([PEP
562](https://peps.python.org/pep-0562/)) are now respected when
resolving dynamic attributes, matching the behavior of mypy and pyright.
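A minimal PEP 562 example of the kind of code this now handles (module and attribute names are made up):

```py
# plugins.py
def __getattr__(name: str) -> str:
    if name == "version":
        return "1.0.0"
    raise AttributeError(name)

# main.py
import plugins

reveal_type(plugins.version)  # falls back to the module-level `__getattr__`: str
```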
## Implementation
Thanks @sharkdp for the guidance in
https://github.com/astral-sh/ty/issues/943#issuecomment-3157566579
- Adds module-specific `__getattr__` resolution in
`ModuleLiteral.static_member()`
- Maintains proper attribute precedence: explicit attributes >
submodules > `__getattr__`
## Test Plan
- New mdtest covering basic functionality, type annotations, attribute
precedence, and edge cases
(run `cargo nextest run -p ty_python_semantic mdtest__import_module_getattr`)
- All new tests pass, verifying `__getattr__` is called correctly and
returns proper types
- Existing test suite passes, ensuring no regressions introduced
## Summary
This PR switches the `full` output format in Ruff over to use the
rendering code
in `ruff_db`. As proposed in the design doc, this involves a lot of
changes to the snapshot output.
I also had to comment out this assertion with a TODO to replace it after
https://github.com/astral-sh/ruff/issues/19688 because many of Ruff's
"file-level" annotations aren't actually file-level. They just happen to
occur at the start of the file, especially in tests with very short
snippets.
529d81daca/crates/ruff_annotate_snippets/src/renderer/display_list.rs (L1204-L1208)
I broke up the snapshot commits at the end into several blocks, but I
don't think it's enough to help with review. The first few (notebooks,
syntax errors, and test rules) are small enough to look at, but I
couldn't really think of other categories beyond that. I'm happy to
break those up or pick out specific examples beyond what I have below,
if that would help.
The minimal code changes are in this
[range](abd28f1e77),
with the snapshot commits following. Moving the `FullRenderer` and
updating the `EmitterFlags` aren't strictly necessary either. I even
dropped the renderer commit this morning but figured it made sense to
keep it since we have the `full` module for tests. I don't feel strongly
either way.
## Test Plan
I did actually click through all 1700 snapshots individually instead of
accepting them all at once, although I moved through them quickly. There
are a
few main categories:
### Lint diagnostics
```diff
-unused.py:8:19: F401 [*] `pathlib` imported but unused
+F401 [*] `pathlib` imported but unused
+ --> unused.py:8:19
|
7 | # Unused, _not_ marked as required (due to the alias).
8 | import pathlib as non_alias
- | ^^^^^^^^^ F401
+ | ^^^^^^^^^
9 |
10 | # Unused, marked as required.
|
- = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
```
- The filename and line numbers are moved to the second line
- The second noqa code next to the underline is removed
### Syntax errors
These are much like the above.
```diff
- -:1:16: invalid-syntax: Expected one or more symbol names after import
+ invalid-syntax: Expected one or more symbol names after import
+ --> -:1:16
|
1 | from foo import
| ^
```
One thing I noticed while reviewing some of these (though I don't think it's strictly syntax-error-related) is that some of the new diagnostics have
a little less context after the error. I don't think this is a problem,
but it's one small discrepancy I hadn't noticed before. Here's a minor
example:
```diff
-syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
+invalid-syntax: Expected one or more symbol names after import
+ --> syntax_errors.py:1:15
|
1 | from os import
| ^
2 |
3 | if call(foo
-4 | def bar():
|
```
And one of the biggest examples:
```diff
-E30_syntax_error.py:18:11: invalid-syntax: Expected ')', found newline
+invalid-syntax: Expected ')', found newline
+ --> E30_syntax_error.py:18:11
|
16 | pass
17 |
18 | foo = Foo(
| ^
-19 |
-20 |
-21 | def top(
|
```
Similarly, a few of the lint diagnostics showed that the cut indicator
calculation for overly long lines is also slightly different, but I
think that's okay too.
### Full-file diagnostics
```diff
-comment.py:1:1: I002 [*] Missing required import: `from __future__ import annotations`
+I002 [*] Missing required import: `from __future__ import annotations`
+--> comment.py:1:1
+help: Insert required import: `from __future__ import annotations`
+
```
As noted above, these will be much more rare after #19688 too. This case
isn't a true full-file diagnostic and will render a snippet in the
future, but you can see that we're now rendering the help message that
would have been discarded before. In contrast, this is a true full-file
diagnostic and should still look like this after #19688:
```diff
-__init__.py:1:1: A005 Module `logging` shadows a Python standard-library module
+A005 Module `logging` shadows a Python standard-library module
+--> __init__.py:1:1
```
### Jupyter notebooks
There's nothing particularly different about these, just showing off the
cell index again.
```diff
- Jupyter.ipynb:cell 3:1:7: F821 Undefined name `x`
+ F821 Undefined name `x`
+ --> Jupyter.ipynb:cell 3:1:7
|
1 | print(x)
- | ^ F821
+ | ^
|
```
## Summary
Fixes the remaining range reporting differences between the `ruff_db`
diagnostic rendering and Ruff's existing rendering, as noted in
https://github.com/astral-sh/ruff/pull/19415#issuecomment-3160525595.
This PR is structured as a series of three pairs. The first commit in
each pair adds a test showing the previous behavior, followed by a fix
and the updated snapshot. It's quite a small PR, but that might be
helpful just for the contrast.
You can also look at [this
range](052e656c6c..c3ea51030d)
of commits from #19415 to see the impact on real Ruff diagnostics. I
spun these commits out of that PR.
## Test Plan
New `ruff_db` tests
PLE2513's `--fix` changes ESC and SUB to uppercase hexadecimal values such as `\x1B`, while the formatter changes them to lowercase `\x1b`.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
This is a small refactor to update the server to send a single request
to perform registrations and unregistrations of dynamic capabilities.
## Test Plan
Existing E2E test cases pass; added a new test case to verify multiple registrations.