## Summary
Implements new rule `B912` that requires the `strict=` argument for
`map(...)` calls with two or more iterables on Python 3.14+, following
the same pattern as `B905` for `zip()`.
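A hedged sketch of the code the new rule targets (assuming the behavior mirrors B905; exact messages and fix behavior may differ):
```python
import operator

xs = [1, 2, 3]
ys = [4, 5, 6]

# B912: two or more iterables without `strict=` (on Python 3.14+, which
# added the `strict` keyword to `map()`)
list(map(operator.add, xs, ys))

# OK: explicit about how length mismatches are handled
list(map(operator.add, xs, ys, strict=True))

# Not flagged: only one iterable
list(map(str, xs))
```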
Closes #20057
---------
Co-authored-by: dylwil3 <dylwil3@gmail.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [hashbrown](https://redirect.github.com/rust-lang/hashbrown) | workspace.dependencies | minor | `0.15.0` -> `0.16.0` |
### Release Notes
<details>
<summary>rust-lang/hashbrown (hashbrown)</summary>
### [`v0.16.0`](https://redirect.github.com/rust-lang/hashbrown/blob/HEAD/CHANGELOG.md#0160---2025-08-28)
[Compare
Source](https://redirect.github.com/rust-lang/hashbrown/compare/v0.15.5...v0.16.0)
##### Changed
- Bump foldhash, the default hasher, to 0.2.0.
- Replaced `DefaultHashBuilder` with a newtype wrapper around `foldhash`
instead
of re-exporting it directly.
</details>
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: David Peter <mail@david-peter.de>
Co-authored-by: Ibraheem Ahmed <ibraheem@ibraheem.ca>
Now that imports are actually inserted, this should give us some
valuable dog-fooding experience.
Note that we don't currently do any ranking on completions, so until
that is improved, even in-scope completions could suffer. With that
said, this shouldn't have any impact at all in several scenarios (like
completions for attributes on objects).
We don't attempt to fix these yet. I think there are bigger fish to fry.
I came up with these based on this discussion:
https://github.com/astral-sh/ruff/pull/20439#discussion_r2357769518
Here's one example:
```
if ...:
from foo import MAGIC
else:
from bar import MAGIC
MAG<CURSOR>
```
Now in this example, completions will include `MAGIC` from the local
scope. That is, auto-import is involved with that completion. But at
present, auto-import will suggest importing `foo` and `bar` because we
haven't de-duplicated completions yet. Which is fine.
Here's another example:
```
if ...:
import foo as fubar
else:
import bar as fubar
MAG<CURSOR>
```
Now here, there is no `MAGIC` symbol in scope. So auto-import is in
play. Let's assume that the user selects `MAGIC` from `foo` in this
example. (`bar` also has `MAGIC`.)
Since we currently ignore the declaration site for symbols with
multiple possible bindings, the importer today doesn't know that
`fubar` _could_ contain `MAGIC`. But even if it did, what would we do
with that information? Should we do this?
```
if ...:
import foo as fubar
from foo import MAGIC
else:
import bar as fubar
MAGIC
```
Or could we reason that `bar` also has `MAGIC`?
```
if ...:
import foo as fubar
else:
import bar as fubar
fubar.MAGIC
```
But if we did that, we're making an assumption of user intent, since
they *selected* `foo.MAGIC` but not `bar.MAGIC`.
Anyway, I don't think we need to settle on an answer today, but I
wanted to capture some of these tricky cases in tests at the very
least.
## Summary
This PR adds support for unpacking the `**kwargs` argument.
This can be matched against any standard (positional or keyword),
keyword-only, or keyword-variadic parameter that hasn't been matched
yet.
This PR also takes care of special-casing `TypedDict`, because the key
names and the corresponding value types are known, so we can be more
precise in our matching and type-checking steps. In the future, this
special casing would be extended to include `ParamSpec` as well.
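A rough illustration of the two cases (hypothetical functions; the exact diagnostics may differ):
```python
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int

def greet(name: str, age: int) -> None: ...

def untyped(**kwargs: str) -> None:
    # `**kwargs` may match both `name` and `age`; with only `str` known as
    # the value type, a mismatch against `age: int` can be reported.
    greet(**kwargs)

def typed(person: Person) -> None:
    # TypedDict special case: key names and value types are known, so the
    # matching and type checking can be precise for each parameter.
    greet(**person)
```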
Part of astral-sh/ty#247
## Test Plan
Add test cases for various scenarios.
## Summary
Resolves #20033
## Test Plan
unit tests added to the new split function, existing snapshot test
updated.
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Fixes #19887
- flynt(FLY002): When joining only string constants, upgrade raw
single-quoted strings to raw triple-quoted if the resulting
content contains a newline.
- Choose a safe triple-quote delimiter by switching to the opposite
quote style if the preferred triple appears inside the
content.
- Update FLY002 snapshot to include the `\n'.join([r'line1','line2'])`
case.
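The case described above, roughly (the "after" form is illustrative; the exact prefix and delimiter come from the fix logic):
```python
# Before: FLY002 flags the static join of string constants
joined = "\n".join([r"line1", r"line2"])

# After (illustrative): the joined content contains a newline, so a
# triple-quoted string is used; if the preferred quote character appears
# in the content, the opposite quote style is chosen for the delimiter
joined = r"""line1
line2"""
```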
## Test Plan
I've added one test case to FLY002.py.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Fixes #20255
Mark single-item-membership-test fixes as always unsafe
- Always set `Applicability::Unsafe` for FURB171 fixes
- Update “Fix safety” docs to reflect always-unsafe behavior
- Expand tests (not in, nested set/frozenset, commented args)
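A sketch of why the fix can change behavior (illustrative values; this is why the fix is always unsafe):
```python
x = float("nan")

# FURB171 rewrites the single-item membership test...
if x in {x}:   # True: container membership falls back to an identity check
    print("member")

# ...to a direct comparison, which is not always equivalent:
if x == x:     # False for NaN: `==` dispatches to `__eq__` only
    print("equal")
```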
## Test Plan
I have added new test cases to
`crates/ruff_linter/resources/test/fixtures/refurb/FURB171_0.py` and
`crates/ruff_linter/resources/test/fixtures/refurb/FURB171_1.py`.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Fixes https://github.com/astral-sh/ruff/issues/20134
## Test Plan
`cargo nextest run flake8_use_pathlib`
---------
Co-authored-by: Dan Parizher <danparizher@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
This seems to be more consistent with how other LSPs work (like
`rust-analyzer`), and also I think is more consistent with how
`CompletionItem.detail` is itself rendered. Namely, in VS Code, it
is right-aligned. And it's also where we put the type signature.
But `CompletionItemLabelDetails.detail` is left-aligned, whereas
`CompletionItemLabelDetails.description` is right-aligned. So let's
swap them such that type signatures go in the latter and not the
former.
This also adds a space before the module name and contextualizes
it with `(import <name>)` to help the end user figure out what
selecting the completion will do.
Fixes #1200
## Summary
This change reduces MD test compilation time from 6s to 3s on my laptop.
We don't need to build the unit tests and the corpus tests when we're
only interested in Markdown-based tests.
## Test Plan
local benchmarks
## Summary
Part of https://github.com/astral-sh/ty/issues/168. Infer more precise types for collection literals (currently, only `list` and `set`). For example,
```py
x = [1, 2, 3] # revealed: list[Unknown | int]
y: list[int] = [1, 2, 3] # revealed: list[int]
```
This could easily be extended to `dict` literals, but I am intentionally limiting scope for now.
## Summary
Part of https://github.com/astral-sh/ruff/issues/2331
## Test Plan
`cargo nextest run flake8_use_pathlib`
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Fixes: https://github.com/astral-sh/ty/issues/1173
## Summary
This PR changes the logic of binding `Self` type variables so that
`Self` is bound to the immediately enclosing function it's used in.
Since we bind `Self` to methods and not to the class itself, we need
to ensure that we bind it consistently.
The fix is to traverse the scopes containing the `Self` reference, find
the first function inside a class, and use that function to bind the
typevar.
If no such scope is found, we fall back to the normal behavior; using
`Self` outside of a class scope is not legal anyway.
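A rough sketch of the kind of nesting this affects (hypothetical code, not the mdtest):
```python
from typing import Self

class C:
    def outer(self) -> Self:
        # `Self` below refers to instances of `C`: the typevar is bound by
        # walking the enclosing scopes up to the first function defined
        # inside a class (here, `outer`).
        def helper(value: Self) -> Self:
            return value

        return helper(self)
```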
## Test Plan
Added a new mdtest.
Checked the diagnostics that are not emitted anymore in [primer
results](https://github.com/astral-sh/ruff/pull/20366#issuecomment-3289411424).
It looks good although I don't completely understand what was wrong
before.
---------
Co-authored-by: Douglas Creager <dcreager@dcreager.net>
This is somewhat inspired by a similar abstraction in
`ruff_linter`. The main idea is to create an importer once
for a module that you want to add imports to. And then call
`import` to generate an edit for each symbol you want to
add.
I haven't done any performance profiling here yet. I don't
know if it will be a bottleneck. In particular, I do expect
`Importer::import` (but not `Importer::new`) to get called
many times for a single completion request when auto-import
is enabled. Particularly in projects with a lot of unimported
symbols. Because I don't know the perf impact, I didn't do
any premature optimization here. But there are surely some
low hanging fruit if this does prove to be a problem.
New tests make up a big portion of the diff here. I tried to
think of a bunch of different cases, although I'm sure there
are more.
This rejiggers some stuff in the main completions entrypoint
in `ty_ide`. A more refined `Completion` type is defined
with more information. In particular, to support auto-import,
we now include a module name and an "edit" for inserting an
import.
This also rolls the old "detailed completion" into the new
completion type. Previously, we were relying on the completion
type for `ty_python_semantic`. But `ty_ide` is really the code
that owns completions.
Note that this code doesn't build as-is. The next commit will
add the importer used here in `add_unimported_completions`.
Based on how this API is currently implemented, this doesn't
really cost us anything. But it gives us access to more
information about where the symbol is defined.
I think this is a better home for it. This way, `ty_ide`
more clearly owns how the "kind" of a completion is computed.
In particular, it is computed differently for things where
we know its type versus unimported symbols.
In the course of writing the "add an import" implementation,
I realized that we needed to know which symbols were in scope
and how they were defined. This was necessary to be able to
determine how to add a new import in a way that (minimally)
does not conflict with existing symbols.
I'm not sure that this is fully correct (especially for
symbol bindings) and it's unclear to me in which cases a
definition site will be missing. But this seems to work for
some of the basic cases that I tried.
The names of the submodules returned should be *complete*. This
is the contract of `Module::name`. However, we were previously
only returning the basename of the submodule.
This can already be accomplished via a `From` impl (and indeed,
that's how this is implemented). But in a generic context, the
turbo-fishing that needs to be applied is quite annoying.
Basically, given a `from module import name1, name2, ...` statement,
we'd like to be able to insert another name in that list.
This new `Insertion::existing_import` API provides such
functionality. There isn't much to it, although we are careful
to try and avoid inserting nonsense for import statements
that are already invalid.
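For illustration, the kind of edit this enables (where the new name lands inside the list is up to the implementation):
```python
# Before: an existing `from` import in the target module
from collections import OrderedDict, defaultdict

# After inserting `Counter` into that same statement (illustrative output)
from collections import Counter, OrderedDict, defaultdict
```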
This refactors the importer abstraction to use a shared
`Insertion`. This is mostly just moving some code around
with some slight tweaks.
The plan here is to keep the rest of the importing code
in `ruff_linter` and then write something ty-specific on
top of `Insertion`. This ends up sharing some code, but
not as much as would be ideal. In particular, the
`ruff_linter` importer is pretty tightly coupled with
ruff's semantic model. So to share the code, we'd need to
abstract over that.
## Summary
This PR wires up the GitHub output format moved to `ruff_db` in #20320
to the ty CLI.
It's a bit smaller than the GitLab version (#20155) because some of the
helpers were already in place, but I did factor out a few
`DisplayDiagnosticConfig` constructor calls in Ruff. I also exposed the
`GithubRenderer` and a wrapper `DisplayGithubDiagnostics` type because
we needed a way to configure the program name displayed in the GitHub
diagnostics. This was previously hard-coded to `Ruff`:
<img width="675" height="247" alt="image"
src="https://github.com/user-attachments/assets/592da860-d2f5-4abd-bc5a-66071d742509"
/>
Another option would be to drop the program name in the output format,
but I think it can be helpful in workflows with multiple programs
emitting annotations (such as Ruff and ty!)
## Test Plan
New CLI test, and a manual test with `--config 'terminal.output-format =
"github"'`
## Summary
Catch infinite recursion in binary-compare inference.
Fixes the stack overflow in `graphql-core` in mypy-primer.
## Test Plan
Added two tests that stack-overflowed before this PR.
## Summary
Use `Type::Divergent` to short-circuit diverging types in type
expressions. This avoids panicking in a wide variety of cases of
recursive type expressions.
Avoids many panics (but not yet all -- I'll be tracking down the rest)
from https://github.com/astral-sh/ty/issues/256 by falling back to
Divergent. For many of these recursive type aliases, we'd like to
support them properly (i.e. really understand the recursive nature of
the type, not just fall back to Divergent) but that will be future work.
This switches `Type::has_divergent_type` from using `any_over_type` to a
custom set of visit methods, because `any_over_type` visits more than we
need to visit, and exercises some lazy attributes of type, causing
significantly more work. This change means this diff doesn't regress
perf; it even reclaims some of the perf regression from
https://github.com/astral-sh/ruff/pull/20333.
## Test Plan
Added mdtest for recursive type alias that panics on main.
Verified that we can now type-check `packaging` (and projects depending
on it) without panic; this will allow moving a number of mypy-primer
projects from `bad.txt` to `good.txt` in a subsequent PR.
## Summary
This PR implements F406
https://docs.astral.sh/ruff/rules/undefined-local-with-nested-import-star-usage/
as a semantic syntax error
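For reference, the pattern F406 reports (which CPython itself rejects at compile time):
```python
def configure():
    # F406: wildcard imports are only allowed at module level
    from os import *
```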
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
- Converts panics to diagnostics with id `Panic`, severity `Fatal`, and
the error as the diagnostic message, annotated with a `Span` with an
empty code block and no range.
- Updates the post-linting diagnostic handling to track the
maximum severity seen, and then prints the "report a bug in ruff"
message only if the max severity was `Fatal`.
This depends on the sorting changes since it creates diagnostics with no
range specified.
Previously, we used a very fine-grained representation for individual
constraints: each constraint was _either_ a range constraint, a
not-equivalent constraint, or an incomparable constraint. These three
pieces are enough to represent all of the "real" constraints we need to
create — range constraints and their negation.
However, it meant that we weren't picking up as many chances to simplify
constraint sets as we could. Our simplification logic depends on being
able to look at _pairs_ of constraints or clauses to see if they
simplify relative to each other. With our fine-grained representation,
we could easily encounter situations that we should have been able to
simplify, but that would require looking at three or more individual
constraints.
For instance, negating a range constraint would produce:
```
¬(Base ≤ T ≤ Super) = ((T ≤ Base) ∧ (T ≠ Base)) ∨ (T ≁ Base) ∨
((Super ≤ T) ∧ (T ≠ Super)) ∨ (T ≁ Super)
```
That is, `T` must be (strictly) less than `Base`, (strictly) greater
than `Super`, or incomparable to either.
If we tried to union those back together, we should get `always`, since
`x ∨ ¬x` should always be true, no matter what `x` is. But instead we
would get:
```
(Base ≤ T ≤ Super) ∨ ((T ≤ Base) ∧ (T ≠ Base)) ∨ (T ≁ Base) ∨ ((Super ≤ T) ∧ (T ≠
Super)) ∨ (T ≁ Super)
```
Nothing would simplify relative to each other, because we'd have to look
at all five union elements to see that together they do in fact combine
to `always`.
The fine-grained representation was nice, because it made it easier to
[work out the math](https://dcreager.net/theory/constraints/) for
intersections and unions of each kind of constraint. But being able to
simplify is more important, since the example above comes up immediately
in #20093 when trying to handle constrained typevars.
The fix in this PR is to go back to a more coarse-grained
representation, where each individual constraint consists of a positive
range (which might be `always` / `Never ≤ T ≤ object`), and zero or more
negative ranges. The intuition is to think of a constraint as a region
of the type space (representable as a range) with zero or more "holes"
removed from it.
With this representation, negating a range constraint produces:
```
¬(Base ≤ T ≤ Super) = (always ∧ ¬(Base ≤ T ≤ Super))
```
(That looks trivial, because it is! We just move the positive range to
the negative side.)
The math is not that much harder than before, because there are only
three combinations to consider (each for intersection and union) —
though the fact that there can be multiple holes in a constraint does
require some nested loops. But the mdtest suite gives me confidence that
this is not introducing any new issues, and it definitely removes a
troublesome TODO.
(As an aside, this change also means that we are back to having each
clause contain no more than one individual constraint for any typevar.
This turned out to be important, because part of our simplification
logic was also depending on that!)
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This mainly removes an internal inconsistency, where we didn't remove
the `Self` type variable when eagerly binding `Self` to an instance
type. It has no observable effect, apparently.
builds on top of https://github.com/astral-sh/ruff/pull/20328
## Test Plan
None
## Summary
Fixes https://github.com/astral-sh/ty/issues/1161
Include `NamedTupleFallback` members in `NamedTuple` instance
completions.
- Augment instance attribute completions when completing on NamedTuple
instances by merging members from
`_typeshed._type_checker_internals.NamedTupleFallback`
## Test Plan
Adds a minimal completion test `namedtuple_fallback_instance_methods`
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This project was [recently removed from
mypy_primer](https://github.com/astral-sh/ruff/pull/20378), so we need
to remove it from `good.txt` in order for ecosystem-analyzer to work
correctly.
## Test Plan
Run mypy_primer and ecosystem-analyzer on this branch.
## Summary
https://github.com/astral-sh/ruff/pull/20165 added a lot of false
positives around calls to `builtins.open()`, because our missing support
for PEP-613 type aliases means that we don't understand typeshed's
overloads for `builtins.open()` at all yet, and therefore always select
the first overload. This didn't use to matter very much, but now that we
have a much stricter implementation of protocol assignability/subtyping
it matters a lot, because most of the stdlib functions dealing with I/O
(`pickle`, `marshal`, `io`, `json`, etc.) are annotated in typeshed as
taking in protocols of some kind.
In lieu of full PEP-613 support, which is blocked on various things and
might not land in time for our next alpha release, this PR adds some
temporary special-casing for `builtins.open()` to avoid the false
positives. We just infer `Todo` for anything that isn't meant to match
typeshed's first `open()` overload. This should be easy to rip out again
once we have proper support for PEP-613 type aliases, which hopefully
should be pretty soon!
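The general shape of the calls affected, as a rough sketch (not taken from the issue reports):
```python
import json

# Functions in `json`, `pickle`, `io`, etc. are annotated in typeshed with
# I/O protocols, so the inferred type of `open(...)` must be assignable to
# them. Always selecting the first `open()` overload caused false positives
# here; the temporary special-casing infers `Todo` for calls that don't
# match that first overload.
with open("data.json") as f:
    data = json.load(f)
```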
## Test Plan
Added an mdtest
## Summary
Fixes https://github.com/astral-sh/ty/issues/377.
We were treating any function as being assignable to any callback
protocol, because we were trying to figure out a type's `Callable`
supertype by looking up the `__call__` attribute on the type's
meta-type. But a function-literal's meta-type is `types.FunctionType`,
and `types.FunctionType.__call__` is `(...) -> Any`, which is not very
helpful!
While working on this PR, I also realised that assignability between
class-literals and callback protocols was somewhat broken too, so I
fixed that at the same time.
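A minimal sketch of the kind of mismatch that should now be caught (hypothetical protocol and function):
```python
from typing import Protocol

class BinaryOp(Protocol):
    def __call__(self, x: int, y: int) -> int: ...

def negate(x: int) -> int:
    return -x

# Previously accepted because `types.FunctionType.__call__` is `(...) -> Any`;
# now the function's own signature is compared against the protocol, so this
# assignment can be reported as invalid.
op: BinaryOp = negate
```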
## Test Plan
Added mdtests
## Summary
Resolves #20266
The definition of a frozen dataclass attribute can be an instantiation
of a nested frozen dataclass as well as a non-nested one.
### Problem explanation
The `function_call_in_dataclass_default` function is invoked during the
"defined scope" stage, after all scopes have been processed. At this
point, the semantic model references the top-level scope. When
`SemanticModel::lookup_attribute` executes, it searches for bindings in
the top-level module scope rather than the class scope, resulting in an
error.
To solve this issue, the lookup should be evaluated through the class
scope.
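A hedged sketch of the nested case (illustrative, not the exact fixture from the issue):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    @dataclass(frozen=True)
    class Retry:
        attempts: int = 3

    # Instantiating the *nested* frozen dataclass as a default previously
    # produced a false positive, because `Retry` was looked up in the module
    # scope instead of the class scope.
    retry: Retry = Retry()
```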
## Test Plan
- Added test case from issue
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
## Summary
Fixes #19842
Prevent infinite loop with I002 and UP026
- Implement isort-aware handling for UP026 (deprecated mock import)
- Add CLI integration tests in `crates/ruff/tests/lint.rs`
## Test Plan
I have added two integration tests
`pyupgrade_up026_respects_isort_required_import_fix` and
`pyupgrade_up026_respects_isort_required_import_from_fix` in
`crates/ruff/tests/lint.rs`.
## Summary
This looks like it should fix the errors that we've been seeing in sympy
in recent mypy-primer runs.
## Test Plan
I wasn't able to reproduce the sympy failures locally; it looks like
there is probably a dependency on the order in which files are checked.
So I don't have a minimal reproducible example, and wasn't able to add a
test :/ Obviously I would be happier if we could commit a regression
test here, but since the change is straightforward and clearly
desirable, I'm not sure how many hours it's worth trying to track it
down.
Mypy-primer is still failing in CI on this PR, because it fails on the
"old" ty commit already (i.e. on main). But it passes [on a no-op PR
stacked on top of this](https://github.com/astral-sh/ruff/pull/20370),
which strongly suggests this PR fixes the problem.
## Summary
Resolves #20282
Makes the rule fix always unsafe, because the replacement may not be
semantically equivalent to the original expression, potentially changing
the behavior of the code.
Updated docstring with examples.
## Test Plan
- Added two tests from issue and regenerated the snapshot
---------
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Fixes #20204
Recognize t-strings, generators, and lambdas in RUF016
- Accept boolean literals as valid index and slice bounds.
- Add TString, Generator, and Lambda to `CheckableExprType`.
- Expand RUF016.py fixture and update snapshots accordingly.
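A rough sketch of the kinds of cases involved, based on the bullets above (which side of the subscript each change applies to is my reading of the summary):
```python
items = [1, 2, 3]

# Boolean literals are accepted as valid indices and slice bounds
# (`bool` is a subclass of `int`), so these should no longer be flagged:
items[True]
items[False:True]

# Generators and lambdas (and t-strings on Python 3.14+) are now recognized
# expression types, so using one where an integer index is expected can be
# reported:
items[(i for i in items)]  # TypeError at runtime; RUF016 can flag it statically
```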
Our token-based rules and `noqa` extraction used an `Indexer` that kept
track of f-string ranges but not t-strings. We've updated the `Indexer`
and downstream uses thereof to handle both f-strings and t-strings.
Most of the diff is renaming and adding tests.
Note that much of the "new" logic gets to be naive because the lexer has
already ensured that f and t-string "starts" are paired with their
respective "ends", even amidst nesting and so on.
Finally: one could imagine wanting to know if a given interpolated
string range corresponds to an f-string or a t-string, but I didn't find
a place where we actually needed this.
Closes #20310
## Summary
This PR adds support for building loongarch64 binaries in CI. As such
support has been merged in uv (astral-sh/uv#15387) it's time to consider
adding it to ruff.
Please note that as Ubuntu is not yet available for loongarch64, I have
elected to use a Debian Trixie container maintained by community
members. In addition, as Debian's pip does not allow installing modules
system-wide, I have modified the workflow to install additional modules
in a virtual environment.
Since the workflow is shared between all targets, the only way to handle
this difference (between Debian and Ubuntu) is just to install pip in a
venv for all targets. If there is a better (and less intrusive) way to
work around this, please let me know.
## Test Plan
Tests are included in CI and the loongarch64 artifacts built in [this
workflow](https://github.com/SkyBird233/ruff/actions/runs/17640270032/job/50125471548)
have been smoke tested.
## Summary
This PR addresses an issue for a variadic argument when involved in
argument type expansion of overload call evaluation.
The issue is that the expansion of the variadic argument could result in
argument lists of different arities. For example, with `*args: tuple[int] |
tuple[int, str]`, the expansion leads to the variadic argument
being unpacked into 1 and 2 elements, respectively. This means that the
parameter matching that was performed initially isn't sufficient, and
each expanded argument list needs to redo the parameter matching.
This is currently done by redoing the parameter matching directly,
maintaining the state of argument forms (and the conflicting forms), and
updating the `Bindings` values if it changes.
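A sketch of the shape of the problem (hypothetical overloads):
```python
from typing import overload

@overload
def f(x: int) -> int: ...
@overload
def f(x: int, y: str) -> str: ...
def f(x: int, y: str | None = None) -> int | str:
    return x if y is None else y

def call(args: tuple[int] | tuple[int, str]) -> None:
    # Expanding the union yields argument lists of arity 1 and 2, so the
    # parameter matching done up front has to be redone per expanded list.
    f(*args)
```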
Closes: astral-sh/ty#735
## Test Plan
Update existing mdtest.
This PR removes the `Constraints` trait. We removed the `bool`
implementation several weeks back, and are using `ConstraintSet`
everywhere. There have been discussions about trying to include the
reason for an assignability failure as part of the result, but that
there are no concrete plans to do so soon, and it's not clear that we'll
need the `Constraints` trait to do that. (We can ideally just update the
`ConstraintSet` type directly.)
In the meantime, this just complicates the code for no good reason.
This PR is a pure refactoring, and contains no behavioral changes.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This is the GitHub analog to #20117. This PR prepares to add a GitHub
output format to ty by moving the implementation from `ruff_linter` to
`ruff_db`. Hopefully this one is a bit easier to review
commit-by-commit. Almost all of the refactoring this time is in the
first commit, then the second commit adds the new `OutputFormat` variant
and moves the file into `ruff_db`. The third commit is just a small
touch up to use a private method that accommodates ty files so that we
can run the tests and update/move the snapshots.
I had to push a fourth commit to fix and test diagnostics without a
span/file.
## Test Plan
Existing tests
Previously, `Type::object` would find the definition of the `object`
class in typeshed, load that in (to produce a `ClassLiteral` and
`ClassType`), and then create a `NominalInstance` of that class.
It's possible that we are using a typeshed that doesn't define `object`.
We will not be able to do much useful work with that kind of typeshed,
but it's still a possibility that we have to support at least without
panicking. Previously, we would handle this situation by falling back on
`Unknown`.
In most cases, that's a perfectly fine fallback! But `object` is also
our top type — the type of all values. `Unknown` is _not_ an acceptable
stand-in for the top type.
This PR adds a new `NominalInstance` variant for "instances of
`object`". Unlike other nominal instances, we do not need to load in
`object`'s `ClassType` to instantiate this variant. We will use this new
variant even when the current typeshed does not define an `object`
class, ensuring that we have a fully static representation of our top
type at all times.
There are several operations that need access to a nominal instance's
class, and for this new `object` variant we load it lazily only when
it's needed. That means this operation is now fallible, since this is
where the "typeshed doesn't define `object`" failure shows up.
This new approach also has the benefit of avoiding some salsa cycles
that were cropping up while I was debugging #20093, since the new
constraint set representation was trying to instantiate `Type::object`
while in the middle of processing its definition in typeshed. Cycle
handling was kicking in correctly and returning the `Unknown` fallback
mentioned above. But the constraint set implementation depends on
`Type::object` being a distinct and fully static type, highlighting that
this is a correctness fix, not just an optimization fix.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Use `Type::Divergent` to avoid "too many iterations" panic on an
infinitely-nested tuple in an implicit instance attribute.
The regression here is from checking all tuple elements to see if they
contain a Divergent type. It's 5% on one project, 1% on another, and
zero on the rest. I spent some time looking into eliminating this
regression by tracking a flag on inference results to note if they could
possibly contain any Divergent type, but this doesn't really work --
there are too many different ways a type containing a Divergent type
could enter an inference result. Still thinking about whether there are
other ways to reduce this. One option is if we see certain kinds of
non-atomic types that are commonly expensive to check for Divergent, we
could make `has_divergent_type` a Salsa query on those types.
## Test Plan
Added mdtest.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
The debug representation isn't as useful as calling `.display(db)`, but
it's still kind-of annoying when `dbg!()` calls don't compile locally
due to the compiler not being able to guarantee that an object of type
`impl Constraints` implements `Debug`.
## Summary
Fixes #20235
- Fix `RUF102` to properly handle rule redirects when validating noqa
codes
- Update `code_is_valid` to check redirect targets before determining
validity
- Add test case for rule redirects (TCH002 in this case)
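For illustration, the redirect case now handled (`TCH002` is an old code that redirects to `TC002`):
```python
from collections import OrderedDict  # noqa: TCH002
# RUF102 now resolves the redirect `TCH002` -> `TC002` before deciding
# whether the suppressed code is valid, instead of reporting it as unknown.
```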
## Test Plan
I have added a test case for rule redirects to
`crates/ruff_linter/resources/test/fixtures/ruff/RUF102.py`.
## Summary
`CallableTypeOf[bound_method]` would previously bind `self` to the
bound method type itself, instead of binding it to the instance type
stored inside the bound method type.
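A minimal sketch, as it would appear in an mdtest (the exact signature rendering may differ):
```python
from ty_extensions import CallableTypeOf

class C:
    def method(self, x: int) -> str:
        return str(x)

c = C()

def takes(f: CallableTypeOf[c.method]) -> None:
    # `self` is bound to the instance stored in the bound-method type, so
    # the callable accepts only the remaining `int` argument.
    f(1)
```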
## Test Plan
Added regression test
This PR adds a new `ty_extensions.ConstraintSet` class, which is used to
expose constraint sets to our mdtest framework. This lets us write a
large collection of unit tests that exercise the invariants and rewrite
rules of our constraint set implementation.
As part of this, `is_assignable_to` and friends are updated to return a
`ConstraintSet` instead of a `bool`, and we implement
`ConstraintSet.__bool__` to return when a constraint set is always
satisfied. That lets us still use
`static_assert(is_assignable_to(...))`, since the assertion will coerce
the constraint set to a bool, and also lets us
`reveal_type(is_assignable_to(...))` to see more detail about
whether/when the two types are assignable. That lets us get rid of
`reveal_when_assignable_to` and friends, since they are now redundant
with the expanded capabilities of `is_assignable_to`.
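A hedged sketch of how this reads in an mdtest (the revealed form is illustrative):
```python
from ty_extensions import is_assignable_to, static_assert

# Coercion via `__bool__`: the assertion holds when the set is always satisfied.
static_assert(is_assignable_to(int, object))

# Revealing the result shows *when* the property holds, not just whether it
# always does (the exact rendering of the ConstraintSet is illustrative).
reveal_type(is_assignable_to(int, object))
```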
## Summary
When adding an enum literal `E = Literal[Color.RED]` to a union which
already contained a subtype of that enum literal(!), we were previously
not simplifying the union correctly. My assumption is that our property
tests didn't catch that earlier, because the only possible non-trivial
subtype of an enum literal that I can think of is `Any & E`. And in
order for that to be detected by the property tests, it would have to
randomly generate `Any & E | E` and then also compare that with `E` on
the other side (in an equivalence test, or the subtyping-antisymmetry
test).
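The shape of the case, sketched with `ty_extensions.Intersection` standing in for `Any & E` (assumed to be available for this purpose; the comment describes what simplification should produce):
```python
from enum import Enum
from typing import Any, Literal
from ty_extensions import Intersection

class Color(Enum):
    RED = 1

def f(x: Intersection[Any, Literal[Color.RED]] | Literal[Color.RED]) -> None:
    # `Any & Literal[Color.RED]` is a subtype of `Literal[Color.RED]`, so the
    # union should simplify to `Literal[Color.RED]`.
    reveal_type(x)
```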
closes https://github.com/astral-sh/ty/issues/1155
## Test Plan
* Added a regression test.
* I also ran the property tests for a while, but probably not for two
months worth of daily CI runs.
## Summary
Closes #18349
After this change:
- All deprecated rules are deselected by default
- They are only selected if the user specifically selects them by code,
e.g. `--select UP038`
- Thus, `--select ALL --select UP --select UP0` won't select the
deprecated rule UP038
- Documented the change in version policy. From now on, deprecating a
rule should increase the minor version
## Test Plan
Integration tests in "integration_tests.rs"
Also tested with a temporary test package:
```
~> ../../ruff/target/debug/ruff.exe check --select UP038
warning: Rule `UP038` is deprecated and will be removed in a future release.
warning: Detected debug build without --no-cache.
UP038 Use `X | Y` in `isinstance` call instead of `(X, Y)`
--> main.py:2:11
|
1 | def main():
2 | print(isinstance(25, (str, int)))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Convert to `X | Y`
Found 1 error.
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
~> ../../ruff/target/debug/ruff.exe check --select UP03
warning: Detected debug build without --no-cache.
All checks passed!
~> ../../ruff/target/debug/ruff.exe check --select UP0
warning: Detected debug build without --no-cache.
All checks passed!
~> ../../ruff/target/debug/ruff.exe check --select UP
warning: Detected debug build without --no-cache.
All checks passed!
~> ../../ruff/target/debug/ruff.exe check --select ALL
# warnings and errors, but because of other errors, UP038 was deselected
```
- **Stabilize `airflow3-suggested-update` (`AIR311`)**
- **Stabilize `airflow3-suggested-to-move-to-provider` (`AIR312`)**
- **Stabilize `airflow3-removal` (`AIR301`)**
- **Stabilize `airflow3-moved-to-provider` (`AIR302`)**
- **Stabilize `airflow-dag-no-schedule-argument` (`AIR002`)**
I put this all in one PR to make it easier to double check with @Lee-W
before we merge this. I also made a few minor documentation changes and
updated one error message that I want to make sure are okay. But for the
most part this just moves the rules from `RuleGroup::Preview` to
`RuleGroup::Stable`!
Fixes #17749
This stabilizes the behavior introduced in #16565 which (roughly) tries
to match an import like `import a.b.c` to an actual directory path
`a/b/c` in order to label it as first-party, rather than simply looking
for a directory `a`.
Mainly this affects the sorting of imports in the presence of namespace
packages, but a few other rules are affected as well.
This one has been a bit contentious in the past. It usually uncovers
~700 ecosystem hits. See:
- https://github.com/astral-sh/ruff/pull/16657
- https://github.com/astral-sh/ruff/issues/16690
But I think there's consensus that it's okay to merge as-is. We'd love
an
autofix since it's so common, but we can't reliably tell what a user
meant. The
pattern is ambiguous after all 😆
This is the first rule that actually needed its test case relocated, but
the
docs looked good.
## Summary
This PR removes the deprecated rule UP038, as instructed in #18727. Closes #18727
## Test Plan
I have run the tests; none of them are failing.
One question I have: do we have to document that UP038 is removed?
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
closes #7710
## Test Plan
It is a removal, so I don't think we have to add tests; otherwise, I
have followed the test plan mentioned in contributing.md.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
In #11115 we moved from defaulting to $HOME/Library/Application Support
to $XDG_HOME on macOS and added a deprecation warning if we find files
in the old location. So this PR removes the warning
closes #19145
## Test Plan
Ran `cargo test`
## Summary
Rule and test/snapshot updated, the docs look good
My one hesitation here is that we could hold off stabilizing the rule
until its fix is also ready for stabilization, but this is also the only
preview PTH rule, so I think it's okay to stabilize the rule and later
(probably in the next minor release) stabilize the fixes together.
The tests looked good. For the docs, I added a `## See also` section
pointing to
the closely-related F841 (unused-variable) and the corresponding section
to F841
pointing back to RUF059. It seems like you'd probably want both of these
active
or at least to know about the other when reading the docs.
The constraint representation that we added in #19997 was subtly wrong,
in that it didn't correctly model that type assignability is a _partial_
order — it's possible for two types to be incomparable, with neither a
subtype of the other. That means the negation of a constraint like `T ≤
t` (typevar `T` must be a subtype of `t`) is **_not_** `t < T`, but
rather `t < T ∨ T ≁ t` (using ≁ to mean "not comparable to").
That means we need to update our constraint representation to be an
enum, so that we can track both _range_ constraints (upper/lower bound
on the typevar), and these new _incomparable_ constraints.
Since we need an enum now, that also lets us simplify how we were
modeling range constraints. Before, we let the lower/upper bounds be
either open (<) or closed (≤). Now, range constraints are always closed,
and we add a third kind of constraint for _not equivalent_ (≠). We can
translate an open upper bound `T < t` into `T ≤ t ∧ T ≠ t`.
We already had the logic for adding _clauses_ to a _set_ by doing
a pairwise simplification. We copy that over to where we add
_constraints_ to a _clause_. To calculate the intersection or union of
two constraints, the new enum representation makes it easy to break down
all of the possibilities into a small number of cases: intersect range
with range, intersect range with not-equivalent, etc. I've done the math
[here](https://dcreager.net/theory/constraints/) to show that the
simplifications for each of these cases is correct.
## Summary
Resolves #19357
Skip UP008 diagnostic for `builtins.super(P, self)` calls when
`__class__` is not referenced locally, preventing incorrect fixes.
**Note:** I haven't found concrete information about which cases
`__class__` will be loaded into the scope. Let me know if anyone has
references, it would be useful to enhance the implementation. I did a
lot of tests to determine when `__class__` is loaded. Considered
sources:
1. [Python doc
super](https://docs.python.org/3/library/functions.html#super)
2. [Python doc classes](https://docs.python.org/3/tutorial/classes.html)
3. [pep-3135](https://peps.python.org/pep-3135/#specification)
As I understand it, Python will inject at runtime into local scope a
`__class__` variable if it detects references to `super` or `__class__`.
This allows calling `super()` and passing appropriate parameters.
However, the compiler doesn't do the same for `builtins.super`, so we
need to somehow introduce `__class__` into the local scope.
I figured out that `__class__` will be in scope with a valid value when
two conditions are met:
1. `super` or `__class__` names have been loaded within the function scope
2. `__class__` is not overridden.
I think my solution isn't elegant, so I would appreciate a detailed
review.
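A hedged sketch of the pattern that is now skipped (illustrative class names, not the fixture):
```python
import builtins

class Base:
    def greet(self) -> str:
        return "hello"

class Child(Base):
    def greet(self) -> str:
        # Neither `super` nor `__class__` is referenced by name in this scope,
        # so CPython does not create the implicit `__class__` cell and a
        # zero-argument `super()` would fail at runtime. UP008 therefore no
        # longer offers to rewrite this call.
        return builtins.super(Child, self).greet().upper()
```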
## Test Plan
Added 19 test cases, updated snapshots.
---------
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
- Renames functions to drop `expect_` from their names.
- Makes functions return `Option<LineColumn>` to appropriately signal
when a range is not available.
- Updates existing consumers to use `unwrap_or_default()`. Uncertain if
there are better fallback behaviors for individual consumers.
This adds a new `backend: internal | uv` option to the LSP
`FormatOptions` allowing users to perform document and range formatting
operations through uv. The idea here is to prototype a solution for users
to transition to a `uv format` command without encountering version
mismatches (and consequently, formatting differences) between the LSP's
version of `ruff` and uv's version of `ruff`.
The primary alternative to this would be to use uv to discover the
`ruff` version used to start the LSP in the first place. However, this
would increase the scope of a minimal `uv format` command beyond "run a
formatter", and raise larger questions about how uv should be used to
coordinate toolchain discovery. I think those are good things to
explore, but I'm hesitant to let them block a `uv format`
implementation. Another downside of using uv to discover `ruff` is that
it needs to be implemented _outside_ the LSP; e.g., we'd need to change
the instructions on how to run the LSP and implement it in each editor
integration, like the VS Code plugin.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Specifically, the [`if_not_else`] lint will sometimes flag
code to change the order of `if` and `else` bodies if this
would allow a `!` to be removed. While perhaps tasteful in
some cases, there are many cases in my experience where this
bows to other competing concerns that impact readability.
(Such as the relative sizes of the `if` and `else` bodies,
or perhaps an ordering that just makes the code flow in a
more natural way.)
[`if_not_else`]: https://rust-lang.github.io/rust-clippy/master/index.html#/if_not_else
## Summary
This is a follow-up to https://github.com/astral-sh/ruff/pull/19321.
Now lazy snapshots are updated to take into account new bindings on
every symbol reassignment.
```python
def outer(x: A | None):
if x is None:
x = A()
reveal_type(x) # revealed: A
def inner() -> None:
# lazy snapshot: {x: A}
reveal_type(x) # revealed: A
inner()
def outer() -> None:
x = None
x = 1
def inner() -> None:
# lazy snapshot: {x: Literal[1]} -> {x: Literal[1, 2]}
reveal_type(x) # revealed: Literal[1, 2]
inner()
x = 2
```
Closes astral-sh/ty#559.
## Test Plan
Some TODOs in `public_types.md` now work properly.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Adds support for generic PEP695 type aliases, e.g.,
```python
type A[T] = T
reveal_type(A[int]) # A[int]
```
Resolves https://github.com/astral-sh/ty/issues/677.
## Summary
Support cases like the following, where we need the generic context to
include both `Self` and `T` (not just `T`):
```py
from typing import Self
class C:
def method[T](self: Self, arg: T): ...
C().method(1)
```
closes https://github.com/astral-sh/ty/issues/1131
## Test Plan
Added regression test
## Summary
Noticed this was not escaped when writing a project that parses the
result of `ruff rule --output-format json`. This is visible here:
<https://docs.astral.sh/ruff/rules/mixed-case-variable-in-global-scope/#why-is-this-bad>
## Test Plan
documentation only
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
The sub-checks for assignability and subtyping of materializations
performed in `has_relation_in_invariant_position` and
`is_subtype_in_invariant_position` need to propagate the
`HasRelationToVisitor`, or we can stack overflow.
A side effect of this change is that we also propagate the
`ConstraintSet` through, rather than using `C::from_bool`, which I think
may also become important for correctness in cases involving type
variables (though it isn't testable yet, since we aren't yet actually
creating constraints other than always-true and always-false.)
## Test Plan
Added mdtest (derived from code found in pydantic) which
stack-overflowed before this PR.
With this change incorporated, pydantic now checks successfully on my
draft PR for PEP 613 TypeAlias support.
Now that https://github.com/astral-sh/ruff/pull/20263 is merged, we can
update mypy_primer and add the new `egglog-python` project to
`good.txt`. The ecosystem-analyzer run shows that we now add 1,356
diagnostics (where we had over 5,000 previously, due to the unsupported
project layout).
## Summary
I felt it was safer to add the `python` folder *in addition* to a
possibly-existing `src` folder, even though the `src` folder only
contains Rust code for `maturin`-based projects. There might be
non-maturin projects where a `python` folder exists for other reasons,
next to a normal `src` layout.
closes https://github.com/astral-sh/ty/issues/1120
## Test Plan
Tested locally on the egglog-python project.
## Summary
Add backreferences to the original item declaration in TypedDict
diagnostics.
Thanks to @AlexWaygood for the suggestion.
## Test Plan
Updated snapshots
## Summary
An annotated assignment `name: annotation` without a right-hand side was
previously not covered by the range returned from
`DefinitionKind::full_range`, because we did expand the range to include
the right-hand side (if there was one), but failed to include the
annotation.
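A tiny illustration of the definition in question:
```python
count: int  # the definition's full range now covers `count: int`, not just `count`
```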
## Test Plan
Updated snapshot tests
## Summary
Add support for `typing.ReadOnly` as a type qualifier to mark
`TypedDict` fields as being read-only. If you try to mutate them, you
get a new diagnostic:
<img width="787" height="234" alt="image"
src="https://github.com/user-attachments/assets/f62fddf9-4961-4bcd-ad1c-747043ebe5ff"
/>
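For illustration (the diagnostic wording may differ from the screenshot above):
```python
from typing import ReadOnly, TypedDict

class Movie(TypedDict):
    title: ReadOnly[str]
    year: int

def update(m: Movie) -> None:
    m["year"] = 1999       # allowed: `year` is a regular, mutable field
    m["title"] = "Brazil"  # error: `title` is marked `ReadOnly`
```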
## Test Plan
* New Markdown tests
* The typing conformance changes are all correct. There are some false
negatives, but those are related to the missing support for the
functional form of `TypedDict`, or to overriding of fields via
inheritance. Both of these topics are tracked in
https://github.com/astral-sh/ty/issues/154
Closes astral-sh/ty#456. Part of astral-sh/ty#994.
After all the foundational work, this is only a small change, but let's
see if it exposes any unresolved issues.
## Summary
Part of astral-sh/ty#994. The goal of this PR was to add correct
behavior for attribute access on the top and bottom materializations.
This is necessary for the end goal of using the top materialization for
narrowing generics (`isinstance(x, list)`): we want methods like
`x.append` to work correctly in that case.
It turned out to be convenient to represent materialization as a
TypeMapping, so it can be stashed in the `type_mappings` list of a
function object. This also allowed me to remove most concrete
`materialize` methods, since they usually just delegate to the subparts
of the type, the same as other type mappings. That is why the net effect
of this PR is to remove a few hundred lines.
## Test Plan
I added a few more tests. Much of this PR is refactoring and covered by
existing tests.
## Followups
Assigning to attributes of top materializations is not yet covered. This
seems less important so I'd like to defer it.
I noticed that the `materialize` implementation of `Parameters` was
wrong; it did the same for the top and bottom materializations. This PR
makes the bottom materialization slightly more reasonable, but
implementing this correctly will require extending the struct.
## Summary
Two minor cleanups:
- Return `Option<ClassType>` rather than `Option<ClassLiteral>` from
`TypeInferenceBuilder::class_context_of_current_method`. Now that
`ClassType::is_protocol` exists as a method as well as
`ClassLiteral::is_protocol`, this simplifies most of the call-sites of
the `class_context_of_current_method()` method.
- Make more use of the `MethodDecorator::try_from_fn_type` method in
`class.rs`. Under the hood, this method uses the new methods
`FunctionType::is_classmethod()` and `FunctionType::is_staticmethod()`
that @sharkdp recently added, so it gets the semantics more precisely
correct than the code it's replacing in `infer.rs` (by accounting for
implicit staticmethods/classmethods as well as explicit ones). By using
these methods we can delete some code elsewhere (the
`FunctionDecorators::from_decorator_types()` constructor)
## Test Plan
Existing tests
## Summary
A small set of additional tests for `TypedDict` that I wrote while going
through the spec. Note that this certainly doesn't make the test suite
exhaustive (see remaining open points in the updated list here:
https://github.com/astral-sh/ty/issues/154).
This PR adds two new `ty_extensions` functions,
`reveal_when_assignable_to` and `reveal_when_subtype_of`. These are
closely related to the existing `is_assignable_to` and `is_subtype_of`,
but instead of returning when the property (always) holds, it produces a
diagnostic that describes _when_ the property holds. (This will let us
construct mdtests that print out constraints that are not always true or
always false — though we don't currently have any instances of those.)
I did not replace _every_ occurrence of the `is_property` variants in
the mdtest suite, instead focusing on the generics-related tests where
it will be important to see the full detail of the constraint sets.
As part of this, I also updated the mdtest harness to accept the shorter
`# revealed:` assertion format for more than just `reveal_type`, and
updated the existing uses of `reveal_protocol_interface` to take
advantage of this.
## Summary
Pull this out of https://github.com/astral-sh/ruff/pull/18473 as an
isolated change to make sure it has no adverse effects.
The wrong behavior is observable on `main` for something like
```py
class C:
def __new__(cls) -> "C":
cls.x = 1
C.x # previously: Attribute `x` can only be accessed on instances
# now: Type `<class 'C'>` has no attribute `x`
```
where we currently treat `x` as an *instance* attribute (because we
consider `__new__` to be a normal function and `cls` to be the "self"
attribute). With this PR, we do not consider `x` to be an attribute,
neither on the class nor on instances of `C`. If this turns out to be an
important feature, we should add it intentionally, instead of
accidentally.
## Test Plan
Ecosystem checks.
## Summary
I'm trying to reduce code complexity for
[RustPython](https://github.com/RustPython/RustPython), we have this
file:
056795eed4/compiler/codegen/src/unparse.rs
which can be replaced entirely by `ruff_python_codegen::Generator`.
Unfortunately we can not create an instance of `Generator` easily,
because `Indentation` is not exported at
cda376afe0/crates/ruff_python_codegen/src/lib.rs (L3)
I have managed to bypass this restriction by doing:
```rust
let contents = r"x = 1";
let module = ruff_python_parser::parse_module(contents).unwrap();
let stylist = ruff_python_codegen::Stylist::from_tokens(module.tokens(), contents);
stylist.indentation()
```
But ideally I'd rather use:
```rust
ruff_python_codegen::Indentation::default()
```
## Summary
### Why
Removal should be grouped into the same category. It doesn't matter
whether it's from a provider or not (and the only case we used to have
was not anyway).
`ProviderReplacement` is used to indicate that we have a replacement and
we might need to install an extra Python package to cater to it.
### What
Move `airflow.operators.postgres_operator.Mapping` from AIR302 to AIR301
and get rid of `ProviderReplace::None`
## Test Plan
Update the test fixtures accordingly in the first commit and reorganize
them in the second commit
## Summary
This PR implements
https://docs.astral.sh/ruff/rules/yield-from-in-async-function/ as a
semantic syntax error
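The construct in question (CPython itself rejects this at compile time):
```python
async def numbers():
    # error: `yield from` is not allowed inside an async function
    yield from range(3)
```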
## Test Plan
I have written a simple inline test as directed in
[https://github.com/astral-sh/ruff/issues/17412](https://github.com/astral-sh/ruff/issues/17412)
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Update the argument `datasets` to `assets`.
## Test Plan
update fixture accordingly
It's almost certainly bad juju to show literally every single possible
symbol when completions are requested but there is nothing typed yet.
Moreover, since there are so many symbols, it is likely beneficial to
try and winnow them down before sending them to the client.
This change tries to extract text that has been typed and then uses
that as a query to listing all available symbols.
Instead of waiting to land auto-import until it is "ready
for users," it'd be nicer to get incremental progress merged
to `main`. By making it an experimental opt-in, we avoid making
the default completion experience worse but permit developers
and motivated users to try it.
This re-works the `all_symbols` support added previously to work across
all available modules, not just what is directly in the workspace.
Note that we always pass an empty string as a query, which makes the
results always empty. We'll fix this in a subsequent commit.
This is to facilitate recursive traversal of all modules in an
environment. This way, we can keep asking for submodules.
This also simplifies how this is used in completions, and probably makes
it faster. Namely, since we return the `Module` itself, callers don't
need to invoke the full module resolver just to get the module type.
Note that this doesn't include namespace packages. (Which were
previously not supported in `Module::all_submodules`.) Given how they
can be spread out across multiple search paths, they will likely require
special consideration here.
This is similar to a change made in the "list top-level modules"
implementation that had been masked by poor Salsa failure modes.
Basically, if we can't find a root here, it *must* be a bug. And if we
just silently skip over it, we risk voiding Salsa's purity contract,
leading to more difficult to debug panics.
This did cause one test to fail, but only because the test wasn't
properly setting up roots.
## Summary
### What
Change the message from "DAG should have an explicit `schedule`
argument" to "`DAG` or `@dag` should have an explicit `schedule`
argument"
### Why
We're trying to move away from the idea that DAG in Airflow means
Directed Acyclic Graph. Changing the message to refer to the class `DAG`
or the decorator `@dag` might help a bit.
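For reference, a minimal sketch of the two spellings the message now names (whether a given call is actually flagged is determined by the rule's checks, not by this sketch):
```python
from airflow import DAG
from airflow.decorators import dag

DAG(dag_id="example")  # instantiating the `DAG` class without an explicit `schedule`

@dag()  # using the `@dag` decorator without an explicit `schedule`
def my_dag(): ...
```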
## Test Plan
Update the test fixtures accordingly.
## Summary
This wires up the GitLab output format moved into `ruff_db` in
https://github.com/astral-sh/ruff/pull/20117 to the ty CLI.
While I was here, I made one unrelated change to the CLI docs. Clap was
rendering the escapes around the `\[default\]` brackets for the `full`
output, so I just switched those to parentheses:
```
--output-format <OUTPUT_FORMAT>
The format to use for printing diagnostic messages
Possible values:
- full: Print diagnostics verbosely, with context and helpful hints \[default\]
- concise: Print diagnostics concisely, one per line
- gitlab: Print diagnostics in the JSON format expected by GitLab Code Quality reports
```
## Test Plan
New CLI test, and a manual test with `--config 'terminal.output-format =
"gitlab"'` to make sure this works as a configuration option too. I also
tried piping the output through jq to make sure it's at least valid JSON
This introduces `GotoTarget::Call` that represents the kind of
ambiguous/overloaded click of a callable-being-called:
```py
x = mymodule.MyClass(1, 2)
^^^^^^^
```
This is equivalent to `GotoTarget::Expression` for the same span but
enriched
with information about the actual callable implementation.
That is, if you click on `MyClass` in `MyClass()` it is *both* a
reference to the class and to the initializer of the class. Therefore
it would be ideal for goto-* and docstrings to be some intelligent
merging of both the class and the initializer.
In particular the callable-implementation (initializer) is prioritized
over the callable-itself (class) so when showing docstrings we will
preferentially show the docs of the initializer if it exists, and then
fallback to the docs of the class.
For goto-definition/goto-declaration we will yield both the class and
the initializer, requiring you to pick which you want (this is perhaps
needlessly pedantic but...).
Fixes https://github.com/astral-sh/ty/issues/898
Fixes https://github.com/astral-sh/ty/issues/1010
I decided to split out the addition of these tests from other PRs so
that it's easier to follow changes to the LSP's function call handling.
I'm not particularly concerned with whether the results produced by
these tests are "good" or "bad" in this PR, I'm just establishing a
baseline.
## Summary
Thread visitors through the rest of `apply_type_mapping`: callable and
protocol types.
## Test Plan
Added mdtest that previously stack overflowed.
## Summary
We have the ability to defer type inference of some parts of
definitions, so as to allow us to create a type that may need to be
recursively referenced in those other parts of the definition.
We also have the ability to do type inference in a context where all
name resolution should be deferred (that is, names should be looked up
from all-reachable-definitions rather than from the location of use.)
This is used for all annotations in stubs, or if `from __future__ import
annotations` is active.
Previous to this PR, these two concepts were linked: deferred-inference
always implied deferred-name-resolution, though we also supported
deferred-name-resolution without deferred-inference, via
`DeferredExpressionState`.
For the upcoming `typing.TypeAlias` support, I will defer inference of
the entire RHS of the alias (so as to support cycles), but that doesn't
imply deferred name resolution; at runtime, the RHS of a name annotated
as `typing.TypeAlias` is executed eagerly.
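For illustration (a hypothetical example, not taken from this PR), the kind of alias this is aimed at:
```python
from typing import TypeAlias

# At runtime the right-hand side is executed eagerly, so the self-reference
# has to be spelled as a string; inference of the whole RHS can still be
# deferred so that the cycle can be resolved.
Json: TypeAlias = int | str | list["Json"] | dict[str, "Json"]
```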
So this PR fully de-couples the two concepts, instead explicitly setting
the `DeferredExpressionState` in those cases where we should defer name
resolution.
It also fixes a long-standing related bug, where we were deferring name
resolution of all names in class bases, if any of the class bases
contained a stringified annotation.
## Test Plan
Added test that failed before this PR.
## Summary
Fuzzer seed 208 seems to be timing out all fuzzer runs on PRs today.
This has happened on multiple unrelated PRs, as well as on an initial
version of this PR that made a comment-only change in ty and didn't skip
any seeds, so the timeout appears to be consistent in CI, on ty main
branch, as of today, but it started happening due to some change in a
factor outside ty; not sure what.
I checked the code generated for seed 208 locally, and it takes about
30s to check on current ty main branch. This is slow for a fuzzer seed,
but shouldn't be slow enough to make it time out after 20min in CI (even
accounting for GH runners being slower than my laptop.)
I tried to bisect the slowness of checking that code locally, but I
didn't go back far enough to find the change that made it slow. In fact
it seems like it became significantly faster in the last few days (on an
older checkout I had to stop it after several minutes.) So whatever the
cause of the slowness, it's not a recent change in ty.
I don't want to rabbit-hole on this right now (fuzzer-discovered issues
are lower-priority than real-world-code issues), and need a working CI,
so skip this seed for now until we can investigate it.
## Test Plan
CI. This PR contains a no-op (comment) change in ty, so that the fuzz
test is triggered in CI and we can verify it now works (as well as
verify, on the previous commit, that the fuzzer job is timing out on
that seed, even with just a no-op change in ty.)
Reverts astral-sh/ruff#20156. As @sharkdp noted in his post-merge
review, there were several issues with that PR that I didn't spot before
merging — but I'm out for four days now, and would rather not leave
things in an inconsistent state for that long. I'll revisit this on
Wednesday.
## Summary
These projects all check successfully now.
(Pandas still takes 9s, as the comment in `bad.txt` said, but I don't
think this is slow enough to exclude it; mypy-primer overall still runs
in 4 minutes, faster than e.g. the test suite on Windows.)
## Test Plan
mypy-primer CI.
## Summary
This error is about assigning to attributes rather than reading
attributes, so I think `invalid-assignment` makes more sense than
`invalid-attribute-access`
## Test Plan
existing mdtests updated
## Summary
In `is_disjoint_from_impl`, we should unpack type aliases before we
check `TypedDict`. This change probably doesn't have any visible effect
until we have a more discriminating implementation of disjointness for
`TypedDict`, but making the change now can avoid some confusion/bugs in
future.
In `type_ordering.rs`, we should order `TypedDict` near more similar
types, and leave Union/Intersection together at the end of the list.
This is not necessary for correctness, but it's more consistent and it
could have saved me some confusion trying to figure out why I was only
getting an unreachable panic when my code example included a `TypedDict`
type.
## Test Plan
None besides existing tests.
## Summary
Now that we have `Type::TypeAlias`, which can wrap a union, and the
possibility of unions including non-unpacked type aliases (which is
necessary to support recursive type aliases), we can no longer assume in
`UnionType::normalized_impl` that normalizing each element of an
existing union will result in a set of elements that we can order and
then place raw into `UnionType` to create a normalized union. It's now
possible for those elements to themselves include union types (unpacked
from an alias). So instead, we need to feed those elements into the full
`UnionBuilder` (with alias-unpacking turned on) to flatten/normalize
them, and then order them.
## Test Plan
Added mdtest.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR fixes#7352 by exposing the `show_fix_diff` option used in our
snapshot tests in the CLI. As the issue suggests, we plan to make this
the default output format in the future, so this is added to the `full`
output format in preview for now.
This turned out to be pretty straightforward. I just used our existing
`Applicability` settings to determine whether or not to print the diff.
The snapshot differences are because we now set
`Applicability::DisplayOnly` for our snapshot tests. This
`Applicability` is also used to determine whether or not the fix icon
(`[*]`) is rendered, so this is now shown for display-only fixes in our
snapshots. This was already the case previously, but we were only
setting `Applicability::Unsafe` in these tests and ignoring the
`Applicability` when rendering fix diffs. CLI users can't enable
display-only fixes, so this is only a test change for now, but this
should work smoothly if we decide to expose a `--display-only-fixes`
flag or similar in the future.
I also deleted the `PrinterFlags::SHOW_FIX_DIFF` flag. This was
completely unused before, and it seemed less confusing just to delete it
than to enable it in the right place and check it along with the
`OutputFormat` and `preview`.
## Test Plan
I only added one CLI test for now. I'm kind of assuming that we have
decent coverage of the cases where this shouldn't be firing, especially
the `output_format` CLI test, which shows that this definitely doesn't
affect non-preview `full` output. I'm happy to add more tests with
different combinations of options, if we're worried about any in
particular. I did try `--diff` and `--preview` and a few other
combinations manually.
And here's a screenshot using our trusty UP049 example from the design
discussion confirming that all the colors and other formatting still
look as expected:
<img width="786" height="629" alt="image"
src="https://github.com/user-attachments/assets/94e408bc-af7b-4573-b546-a5ceac2620f2"
/>
And one with an unsafe fix to see the footer:
<img width="782" height="367" alt="image"
src="https://github.com/user-attachments/assets/bbb29e47-310b-4293-b2c2-cc7aee3baff4"
/>
## Related issues and PR
- https://github.com/astral-sh/ruff/issues/7352
- https://github.com/astral-sh/ruff/pull/12595
- https://github.com/astral-sh/ruff/issues/12598
- https://github.com/astral-sh/ruff/issues/12599
- https://github.com/astral-sh/ruff/issues/12600
I think we could probably close all of these issues now. I think we've
either resolved or avoided most of them, and if we encounter them again
with the new output format, it would probably make sense to open new
ones anyway.
## Summary
This PR fixes various TODOs around overload call when a variadic
argument is used.
The reason this bug existed is because the specialization wouldn't
account for unpacking the type of the variadic argument.
This is fixed by expanding `MatchedArgument` to contain the type of that
argument _only_ when it is a variadic argument. The reason is that
there's a split in when argument types are inferred: the non-variadic
arguments are inferred using `infer_argument_types` _after_ parameter
matching, while the variadic argument type is inferred _during_ parameter
matching. And since `MatchedArgument` is populated _during_ parameter
matching, the unpacking needs to happen during parameter matching as
well.
This split seems a bit inconsistent but I don't want to spend a lot of
time on trying to merge them such that all argument type inference
happens in a single place. I might look into it while adding support for
`**kwargs`.
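A rough sketch of the shape of call this affects (names and types are illustrative, not taken from the PR's tests):
```python
from typing import overload, reveal_type

@overload
def pick(x: int, y: int) -> int: ...
@overload
def pick(x: str, y: str) -> str: ...
def pick(x, y):
    return x

def call(args: tuple[int, int]) -> None:
    # Overload matching has to see the unpacked element types of `args`
    # (int, int) rather than treating the splatted argument opaquely.
    reveal_type(pick(*args))  # expected: int
```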
## Test Plan
Update existing tests by resolving the todos.
The ecosystem changes look correct to me except for the `slice` call,
but that seems unrelated to this PR, as we infer `slice[Any, Any,
Any]` for a `slice(1, 2, 3)` call on `main` as well
([playground](https://play.ty.dev/9eacce00-c7d5-4dd5-a932-4265cb2bb4f6)).
This PR adds an implementation of constraint sets.
An individual constraint restricts the specialization of a single
typevar to be within a particular lower and upper bound: the typevar can
only specialize to types that are a supertype of the lower bound, and a
subtype of the upper bound. (Note that lower and upper bounds are fully
static; we take the bottom and top materializations of the bounds to
remove any gradual forms if needed.) Either bound can be “closed” (where
the bound is a valid specialization), or “open” (where it is not).
You can then build up more complex constraint sets using union,
intersection, and negation operations. We use a disjunctive normal form
(DNF) representation, just like we do for types: a _constraint set_ is
the union of zero or more _clauses_, each of which is the intersection
of zero or more individual constraints. Note that the constraint set
that contains no clauses is never satisfiable (`⋃ {} = 0`); and the
constraint set that contains a single clause, which contains no
constraints, is always satisfiable (`⋃ {⋂ {}} = 1`).
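As a loose, non-authoritative sketch of that DNF shape in Python (the real implementation is the Rust `ConstraintSet` described here; all names below are made up):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    """One atomic constraint: lower <= T <= upper for a single typevar (illustrative)."""
    typevar: str
    lower: str
    upper: str

# A clause is the intersection of constraints; a constraint set is a union of clauses.
Clause = frozenset[Constraint]
ConstraintSet = frozenset[Clause]

UNSATISFIABLE: ConstraintSet = frozenset()        # union of no clauses: 0
ALWAYS: ConstraintSet = frozenset({frozenset()})  # one empty clause: 1

def union(a: ConstraintSet, b: ConstraintSet) -> ConstraintSet:
    return a | b

def intersection(a: ConstraintSet, b: ConstraintSet) -> ConstraintSet:
    # Distribute clause-by-clause: (c1 ∪ c2) ∩ (d1 ∪ d2) = (c1∩d1) ∪ (c1∩d2) ∪ (c2∩d1) ∪ (c2∩d2)
    return frozenset(ca | cb for ca in a for cb in b)
```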
One thing to note is that this PR does not change the logic of the
actual assignability checks, and in particular, we still aren't ever
trying to create an "individual constraint" that constrains a typevar.
Technically we're still operating only on `bool`s, since we only ever
instantiate `C::always_satisfiable` (i.e., `true`) and
`C::unsatisfiable` (i.e., `false`) in the `has_relation_to` methods. So
if you thought that #19838 introduced an unnecessarily complex stand-in
for `bool`, well here you go, this one is worse! (But still seemingly
not yielding a performance regression!) The next PR in this series,
#20093, is where we will actually create some non-trivial constraint
sets and use them in anger.
That said, the PR does go ahead and update the assignability checks to
use the new `ConstraintSet` type instead of `bool`. That part is fairly
straightforward since we had already updated the assignability checks to
use the `Constraints` trait; we just have to actively choose a different
impl type. (For the `is_whatever` variants, which still return a `bool`,
we have to convert the constraint set, but the explicit
`is_always_satisfiable` calls serve as nice documentation of our
intent.)
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
This pull request fixes the bug described in issue
[#19153](https://github.com/astral-sh/ruff/issues/19153).
The issue occurred when `PERF403` incorrectly flagged cases involving
tuple unpacking in a for loop. For example:
```python
def f():
v = {}
for (o, p), x in [("op", "x")]:
v[x] = o, p
```
This code was wrongly suggested to be rewritten into a dictionary
comprehension, which changes the semantics.
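For contrast, a loop like this illustrative one remains semantically equivalent to a dict comprehension and should keep being flagged:
```python
def g(pairs):
    v = {}
    for key, value in pairs:
        v[key] = value  # still flagged: rewritable as `{key: value for key, value in pairs}`
    return v
```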
Changes in this PR:
- Updated the `PERF403` rule to correctly handle tuple unpacking in loop
  targets.
- Added regression tests to ensure this case (and similar ones) are no
  longer flagged incorrectly.
Why:
This ensures that `PERF403` only triggers when a dictionary
comprehension is semantically equivalent to the original loop,
preventing false positives.
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
This is a variant of #20076 that moves some complexity out of
`apply_type_mapping_impl` in `generics.rs`. The tradeoff is that now
every place that applies `TypeMapping::Specialization` must take care to
call `.materialize()` afterwards. (A previous version of this didn't
work because I had missed a spot where I had to call `.materialize()`.)
@carljm as asked in
https://github.com/astral-sh/ruff/pull/20076#discussion_r2305385298 .
## Summary
Adds a new rule to catch use of the builtin `input()` in async functions.
Issue #8451
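A minimal sketch of the pattern the rule reports (illustrative):
```python
async def prompt() -> str:
    return input("name: ")  # blocking `input()` call inside an async function
```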
## Test Plan
New snapshots in `ASYNC250.py` with `cargo insta test`.
## Summary
Resolves https://github.com/astral-sh/ty/issues/1103
## Test Plan
Edited `crates/ty_python_semantic/resources/mdtest/literal/boolean.md`
with:
````
# Boolean literals

```python
reveal_type(True) # revealed: Literal[False]
reveal_type(False) # revealed: Literal[False]
```
````
Ran `cargo test -p ty_python_semantic --test mdtest -- mdtest__literal_boolean`
And we get a test failure:
```
running 1 test
test mdtest__literal_boolean ... FAILED
failures:
---- mdtest__literal_boolean stdout ----
boolean.md - Boolean literals (c336e1af3d538acd)
crates/ty_python_semantic/resources/mdtest/literal/boolean.md:4
unmatched assertion: revealed: Literal[False]
crates/ty_python_semantic/resources/mdtest/literal/boolean.md:4
unexpected error: 13 [revealed-type] "Revealed type: `Literal[True]`"
To rerun this specific test, set the environment variable:
MDTEST_TEST_FILTER='boolean.md - Boolean literals (c336e1af3d538acd)'
MDTEST_TEST_FILTER='boolean.md - Boolean literals (c336e1af3d538acd)'
cargo test -p ty_python_semantic --test mdtest --
mdtest__literal_boolean
--------------------------------------------------
thread 'mdtest__literal_boolean' panicked at
crates/ty_test/src/lib.rs:138:5:
Some tests failed.
note: run with `RUST_BACKTRACE=1` environment variable to display a
backtrace
failures:
mdtest__literal_boolean
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 263
filtered out; finished in 0.18s
error: test failed, to rerun pass `-p ty_python_semantic --test mdtest`
```
As expected.
And when I check out main and keep the same mdtest file, all tests pass (as in the repro).
## Summary
I spun this off from #19919 to separate the rendering code change and
snapshot updates from the (much smaller) changes to expose this in the
CLI. I grouped all of the `ruff_linter` snapshot changes in the final
commit in an effort to make this easier to review. The code changes are
in [this
range](619395eb41).
I went through all of the snapshots, albeit fairly quickly, and they all
looked correct to me. In the last few commits I was trying to resolve an
existing issue in the alignment of the line number separator:
73720c73be/crates/ruff_linter/src/rules/flake8_comprehensions/snapshots/ruff_linter__rules__flake8_comprehensions__tests__C409_C409.py.snap (L87-L89)
In the snapshot above on `main`, you can see that a double-digit line
number at the end of the context lines for a snippet was causing a
misalignment with the other separators. That's now resolved. The one
downside is that this can lead to a mismatch with the diagnostic above:
```
C409 [*] Unnecessary list literal passed to `tuple()` (rewrite as a tuple literal)
--> C409.py:4:6
|
2 | t2 = tuple([1, 2])
3 | t3 = tuple((1, 2))
4 | t4 = tuple([
| ______^
5 | | 1,
6 | | 2
7 | | ])
| |__^
8 | t5 = tuple(
9 | (1, 2)
|
help: Rewrite as a tuple literal
1 | t1 = tuple([])
2 | t2 = tuple([1, 2])
3 | t3 = tuple((1, 2))
- t4 = tuple([
4 + t4 = (
5 | 1,
6 | 2
- ])
7 + )
8 | t5 = tuple(
9 | (1, 2)
10 | )
note: This is an unsafe fix and may remove comments or change runtime behavior
```
But I don't think we can avoid that without really reworking this
rendering to make the diagnostic and diff rendering aware of each other.
Anyway, this should only happen in relatively rare cases where the
diagnostic is near a digit boundary and also near a context boundary.
Most of our diagnostics line up nicely.
Another potential downside of the new rendering format is its handling
of long stretches of `+` or `-` lines:
```
help: Replace with `Literal[...] | None`
21 | ...
22 |
23 |
- def func6(arg1: Literal[
- "hello",
- None # Comment 1
- , "world"
- ]):
24 + def func6(arg1: Literal["hello", "world"] | None):
25 | ...
26 |
27 |
note: This is an unsafe fix and may remove comments or change runtime behavior
```
To me it just seems a little hard to tell what's going on with just a
long streak of `-`-prefixed lines. I saw an even more exaggerated
example at some point, but I think this is also fairly rare. Most of the
snapshots seem more like the examples we looked at on Discord with
plenty of `|` lines and pairs of `+` and `-` lines.
## Test Plan
Existing tests plus one new test in `ruff_db` to isolate a line
separator alignment issue
## Summary
Decrease the maximum number of literals in a union before we collapse to
the supertype. The better fix for this will be
https://github.com/astral-sh/ty/issues/957, but it is very tempting to
solve this for now by simply decreasing the limit by one, to get below
the salsa limit of 200.
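Illustratively (the exact limit and the revealed types here are made up to show the effect):
```python
from typing import reveal_type

def f(x: int) -> None:
    if x == 0:
        y = 0
    elif x == 1:
        y = 1
    # ... imagine a few hundred more branches, each binding a distinct int literal ...
    else:
        y = 999
    # Below the limit this is a union of int literals; past the limit (now just
    # under 200 per the description above) it collapses to the supertype `int`.
    reveal_type(y)
```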
closes https://github.com/astral-sh/ty/issues/660
## Test Plan
Added a regression test that would previously lead to a "too many cycle
iterations" panic.
## Summary
With this PR, we stop performing boundness analysis for implicit
instance attributes:
```py
class C:
def __init__(self):
if False:
self.x = 1
C().x # would previously show an error, with this PR we pretend the attribute exists
```
This PR is potentially just a temporary measure until we find a better
fix. But I have already invested a lot of time trying to find the root
cause of https://github.com/astral-sh/ty/issues/758 (and [this
example](https://github.com/astral-sh/ty/issues/758#issuecomment-3206108262),
which I'm not entirely sure is related) and I still don't understand
what is going on. This PR fixes the performance problems in both of
these cases (in a rather crude way).
The impact of the proposed change on the ecosystem is small, and the
three new diagnostics are arguably true positives (previously hidden
because we considered the code unreachable, based on e.g. `assert`ions
that depended on implicit instance attributes). So this seems like a
reasonable fix for now.
Note that we still support cases like these:
```py
class D:
if False: # or any other expression that statically evaluates to `False`
x: int = 1
D().x # still an error
class E:
if False: # or any other expression that statically evaluates to `False`
def f(self):
self.x = 1
E().x # still an error
```
closes https://github.com/astral-sh/ty/issues/758
## Test Plan
Updated tests, benchmark results
## Summary
Fixes#19664
Fix allowed unused imports matching for top-level modules.
I've simply replaced `from_dotted_name` with `user_defined`. Since
QualifiedName for imports is created in
crates/ruff_python_semantic/src/imports.rs, I guess it's acceptable to
use `user_defined` here. Please tell me if there is a better way.
0c5089ed9e/crates/ruff_python_semantic/src/imports.rs (L62)
## Test Plan
I've added a snapshot test
`f401_allowed_unused_imports_top_level_module`.
## Summary
This PR is a first step toward adding a GitLab output format to ty. It
converts the `GitlabEmitter` from `ruff_linter` to a `GitlabRenderer` in
`ruff_db` and updates its implementation to handle non-Ruff files and
diagnostics without primary spans. I tried to break up the changes here
so that they're easy to review commit-by-commit, or at least in groups
of commits:
- [preparatory changes in-place in `ruff_linter` and a `ruff_db`
skeleton](0761b73a61)
- [moving the code over with no implementation changes mixed
in](0761b73a61..8f909ea0bb)
- [tidying up the code now in
`ruff_db`](9f047c4f9f..e5e217fcd6)
This wasn't strictly necessary, but I also added some `Serialize`
structs instead of calling `json!` to make it a little clearer that we
weren't modifying the schema (e4c4bee35d).
I plan to follow this up with a separate PR exposing this output format
in the ty CLI, which should be quite straightforward.
## Test Plan
Existing tests, especially the two that show up in the diff as renamed
nearly without changes
## Summary
closes https://github.com/astral-sh/ty/issues/692
If the expression (or any child expressions) is not definitely bound the
reachability constraint evaluation is determined as ambiguous.
This fixes the infinite cycles panic in the following code:
```py
from typing import Literal
class Toggle:
def __init__(self: "Toggle"):
if not self.x:
self.x: Literal[True] = True
```
Credit of this solution is for David.
## Test Plan
- Added a test case with too many cycle iterations panic.
- Previous tests.
---------
Co-authored-by: David Peter <mail@david-peter.de>
Part of #994. This adds a new field to the Specialization struct to
record when we're dealing with the top or bottom materialization of an
invariant generic. It also implements subtyping and assignability for
these objects.
Next planned steps after this is done are to implement other operations
on top/bottom materializations; probably attribute access is an
important one.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Adds a new rule to find and report use of the synchronous `httpx.Client`
in async functions.
See issue #8451
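A sketch of the pattern the rule is aimed at (the exact set of flagged methods is defined by the rule's implementation):
```python
import httpx

async def fetch(url: str) -> int:
    client = httpx.Client()
    response = client.get(url)  # blocking httpx call inside an async function
    return response.status_code
```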
## Test Plan
New snapshots for `ASYNC212.py` with `cargo insta test`.
There are some situations where we produce confusing diagnostics due to
identical class names.
## Class with same name from different modules
```python
import pandas
import polars
df: pandas.DataFrame = polars.DataFrame()
```
This yields the following error:
**Actual:**
error: [invalid-assignment] "Object of type `DataFrame` is not
assignable to `DataFrame`"
**Expected**:
error: [invalid-assignment] "Object of type `polars.DataFrame` is not
assignable to `pandas.DataFrame`"
## Nested classes
```python
from enum import Enum
class A:
class B(Enum):
ACTIVE = "active"
INACTIVE = "inactive"
class C:
class B(Enum):
ACTIVE = "active"
INACTIVE = "inactive"
```
**Actual**:
error: [invalid-assignment] "Object of type `Literal[B.ACTIVE]` is not
assignable to `B`"
**Expected**:
error: [invalid-assignment] "Object of type
`Literal[my_module.C.B.ACTIVE]` is not assignable to `my_module.A.B`"
## Solution
In this PR we added a heuristic to detect when to use a fully
qualified name:
- There is an invalid assignment, and
- they are two different classes, and
- they have the same name
The fully qualified name always includes:
- module name
- nested classes name
- actual class name
There was no `QualifiedDisplay`, so I had to implement it from scratch.
I'm very new to the codebase and might have done things inefficiently,
so I'd appreciate feedback.
Should we pre-compute the fully qualified name or compute it on demand?
## Not implemented
### Function-local classes
Should we approach this in a different PR?
**Example**:
```python
# t.py
from __future__ import annotations
def function() -> A:
class A:
pass
return A()
class A:
pass
a: A = function()
```
#### mypy
```console
t.py:8: error: Incompatible return value type (got "t.A@5", expected "t.A") [return-value]
```
From my testing, the 5 in `A@5` comes from the line number.
#### ty
```console
error[invalid-return-type]: Return type does not match returned value
--> t.py:4:19
|
4 | def function() -> A:
| - Expected `A` because of return type
5 | class A:
6 | pass
7 |
8 | return A()
| ^^^ expected `A`, found `A`
|
info: rule `invalid-return-type` is enabled by default
```
Fixes https://github.com/astral-sh/ty/issues/848
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Fixes#19581
I decided to add an `indent_first_line` function to
[`textwrap.rs`](https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_trivia/src/textwrap.rs),
as that module solely focuses on text manipulation utilities. It follows
the same design as `indent()`, and there may be situations in the future
where it can be reused as well.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
While looking at some logging output that I added to
`ReachabilityConstraintBuilder::add_and_constraint` in order to debug
https://github.com/astral-sh/ty/issues/1091, I noticed that it seemed to
suggest that the TDD was built in an imbalanced way for code like the
following, where we have a sequence of non-nested `if` conditions:
```py
def f(t1, t2, t3, t4, …):
x = 0
if t1:
x = 1
if t2:
x = 2
if t3:
x = 3
if t4:
x = 4
…
```
To understand this a bit better, I added some code to the
`ReachabilityConstraintBuilder` to render the resulting TDD. On `main`,
we get a tree that looks like the following, where you can see a pattern
of N sub-trees that grow linearly with N (number of `if` statements).
This results in an overall tree structure that has N² nodes (see graph
below):
<img alt="normal order"
src="https://github.com/user-attachments/assets/aab40ce9-e82a-4fcd-823a-811f05f15f66"
/>
If we zoom in to one of these subgraphs, we can see what the problem is.
When we add new constraints that represent combinations like `t1 AND ~t2
AND ~t3 AND t4 AND …`, they start with the evaluation of "early"
conditions (`t1`, `t2`, …). This means that we have to create new
subgraphs for each new `if` condition because there is little sharing
with the previous structure. We evaluate the Boolean condition in a
right-associative way: `t1 AND (~t2 AND (~t3 AND t4)))`:
<img width="500" align="center"
src="https://github.com/user-attachments/assets/31ea7182-9e00-4975-83df-d980464f545d"
/>
If we change the ordering of TDD atoms, we can change that to a
left-associative evaluation: `(((t1 AND ~t2) AND ~t3) AND t4) …`. This
means that we can re-use previous subgraphs `(t1 AND ~t2)`, which
results in a much more compact graph structure overall (note how "late"
conditions are now at the top, and "early" conditions are further down
in the graph):
<img alt="reverse order"
src="https://github.com/user-attachments/assets/96a6b7c1-3d35-4192-a917-0b2d24c6b144"
/>
If we count the number of TDD nodes for a growing number of `if`
statements, we can see that this change results in slower growth. It's
worth noting that the growth is still superlinear, though:
<img width="800" height="600" alt="plot"
src="https://github.com/user-attachments/assets/22e8394f-e74e-4a9e-9687-0d41f94f2303"
/>
On the actual code from the referenced ticket (the `t_main.py` file
reduced to its main function, with the main function limited to 2000
lines instead of 11000 to allow the version on `main` to run to
completion), the effect is much more dramatic. Instead of 26 million TDD
nodes (`main`), we now only create 250 thousand (this branch), which is
slightly less than 1%.
The change in this PR allows us to build the semantic index and
type-check the problematic `t_main.py` file in
https://github.com/astral-sh/ty/issues/1091 in 9 seconds. This is still
not great, but an obvious improvement compared to running out of memory
after *minutes* of execution.
An open question remains whether this change is beneficial for all kinds
of code patterns, or just this linear sequence of `if` statements. It
does not seem unreasonable to think that referring to "earlier"
conditions is generally a good idea, but I learned from Doug that it's
generally not possible to find a TDD-construction heuristic that is
non-pathological for all kinds of inputs. Fortunately, it seems like
this change here results in performance improvements across *all of our
benchmarks*, which should increase the confidence in this change:
| Benchmark | Improvement |
|---------------------|-------------------------|
| hydra-zen | +13% |
| DateType | +5% |
| sympy (walltime) | +4% |
| attrs | +4% |
| pydantic (walltime) | +2% |
| pandas (walltime) | +2% |
| altair (walltime) | +2% |
| static-frame | +2% |
| anyio | +1% |
| freqtrade | +1% |
| colour-science | +1% |
| tanjun | +1% |
closes https://github.com/astral-sh/ty/issues/1091
---------
Co-authored-by: Douglas Creager <dcreager@dcreager.net>
## Summary
Extend the following rules.
### AIR311
* `airflow.sensors.base.BaseSensorOperator` →
`airflow.sdk.bases.sensor.BaseSensorOperator`
* `airflow.sensors.base.PokeReturnValue` →
`airflow.sdk.bases.sensor.PokeReturnValue`
* `airflow.sensors.base.poke_mode_only` →
`airflow.sdk.bases.sensor.poke_mode_only`
* `airflow.decorators.base.DecoratedOperator` →
`airflow.sdk.bases.decorator.DecoratedOperator`
* `airflow.models.param.Param` → `airflow.sdk.definitions.param.Param`
* `airflow.decorators.base.DecoratedMappedOperator` →
`airflow.sdk.bases.decorator.DecoratedMappedOperator`
* `airflow.decorators.base.TaskDecorator` →
`airflow.sdk.bases.decorator.TaskDecorator`
* `airflow.decorators.base.get_unique_task_id` →
`airflow.sdk.bases.decorator.get_unique_task_id`
* `airflow.decorators.base.task_decorator_factory` →
`airflow.sdk.bases.decorator.task_decorator_factory`
### AIR312
* `airflow.sensors.bash.BashSensor` →
`airflow.providers.standard.sensor.bash.BashSensor`
* `airflow.sensors.python.PythonSensor` →
`airflow.providers.standard.sensors.python.PythonSensor`
## Test Plan
update the test fixture accordingly in the second commit and reorg in
the third
## Summary
Properly preserve type qualifiers when accessing attributes on unions
and intersections. This is a prerequisite for
https://github.com/astral-sh/ruff/pull/19579.
Also fix a completely wrong implementation of
`map_with_boundness_and_qualifiers`. It now closely follows
`map_with_boundness` (just above).
## Test Plan
I thought about it, but didn't find any easy way to test this. This only
affected `Type::member`. Things like validation of attribute writes
(where type qualifiers like `ClassVar` and `Final` are important) were
already handling things correctly.
## Summary
Add a subtly different test case for recursive PEP 695 type aliases,
which does require that we relax our union simplification, so we don't
eagerly unpack aliases from user-provided union annotations.
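A sketch of the flavour of case involved (illustrative, not the PR's exact test; assumes PEP 695 syntax on Python 3.12+):
```python
type Json = int | str | list[Json] | dict[str, Json]

# A user-written union that mentions the alias should not have the alias
# eagerly unpacked away during simplification.
def f(x: Json | None) -> None: ...
```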
## Test Plan
Added mdtest.
## Summary
Part of https://github.com/astral-sh/ruff/pull/20100 |
https://github.com/astral-sh/ruff/pull/20100#issuecomment-3225349156
## Summary
Fixes https://github.com/astral-sh/ruff/issues/20088
## Test Plan
`cargo nextest run flake8_use_pathlib`
---------
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Part of #20009 (I forgot to delete it in this PR)
## Test Plan
Closes#19302
## Summary
This adds an auto-fix for `Logging statement uses f-string` Ruff G004,
so users don't have to resolve it manually.
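For illustration, the kind of call the rule flags, and the general shape a fix might move toward (the exact replacement is whatever the implemented fix produces):
```python
import logging

name = "world"

logging.info(f"Hello {name}")    # G004: logging statement uses an f-string
logging.info("Hello %s", name)   # the kind of lazy-formatting call a fix would aim for
```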
## Test Plan
I ran the auto-fixes on a Python file locally and it worked as
expected.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Summary
--
This PR aims to resolve (or help to resolve) #18442 and #19357 by
encoding the CPython semantics around the `__class__` cell in our
semantic model. Namely,
> `__class__` is an implicit closure reference created by the compiler
if any methods in a class body refer to either `__class__` or super.
from the Python
[docs](https://docs.python.org/3/reference/datamodel.html#creating-the-class-object).
As noted in the variant docs by @AlexWaygood, we don't fully model this
behavior, opting always to create the `__class__` cell binding in a new
`ScopeKind::DunderClassCell` around each method definition, without
checking if any method in the class body actually refers to `__class__`
or `super`.
As such, this PR fixes#18442 but not #19357.
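For reference, an illustrative example of the two spellings that cause CPython to create the implicit cell:
```python
class Widget:
    def describe(self) -> str:
        # Zero-argument `super()` relies on the implicit `__class__` cell the
        # compiler creates when a method mentions `super` or `__class__`.
        return super().__str__()

    def kind(self) -> str:
        return __class__.__name__  # the same cell, referenced directly
```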
Test Plan
--
Existing tests, plus the tests from #19783, which now pass without any
rule-specific code.
Note that we opted not to alter the behavior of F841 here because
flagging `__class__` in these cases still seems helpful. See the
discussion in
https://github.com/astral-sh/ruff/pull/20048#discussion_r2296252395 and
in the test comments for more information.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Mikko Leppänen <mleppan23@gmail.com>
## Summary
Our internal inlay hints structure (`ty_ide::InlayHint`) now more
closely resembles `lsp_types::InlayHint`.
This mainly allows us to convert to `lsp_types::InlayHint` with less
hassle, but it also allows us to manage the different parts of the inlay
hint better, which in the future will allow us to implement features
like goto on the type part of the type inlay hint.
It also really isn't important to store a specific `Type` instance in
the `InlayHintContent`. So we remove this and use `InlayHintLabel`
instead which just shows the representation of the type (along with
other information).
We see a similar structure used in rust-analyzer too.
## Summary
This has been here for awhile (since our initial PEP 695 type alias
support) but isn't really correct. The right-hand-side of a PEP 695 type
alias is a distinct scope, and we don't mark it as an "eager" nested
scope, so it automatically gets "deferred" resolution of names from
outer scopes (just like a nested function). Thus it's
redundant/unnecessary for us to use `DeferredExpressionState::Deferred`
for resolving that RHS expression -- that's for deferring resolution of
individual names within a scope. Using it here causes us to wrongly
ignore applicable outer-scope narrowing.
## Test Plan
Added mdtest that failed before this PR (the second snippet -- the first
snippet always passed.)
## Summary
As noted in a code TODO, our `Diff` rendering code previously didn't
have any
special handling for notebooks. This was particularly obvious when the
diffs
were rendered right next to the corresponding diagnostic because the
diagnostic
used cell-based line numbers, while the diff was still using line
numbers from
the concatenated source. This PR updates the diff rendering to handle
notebooks
too.
The main improvements shown in the example below are:
- Line numbers are now remapped to be relative to their cell
- Context lines from other cells are suppressed
```
error[unused-import][*]: `math` imported but unused
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^
3 |
4 | print('hello world')
|
help: Remove unused import: `math`
ℹ Safe fix
1 1 | # cell 2
2 |-import math
3 2 |
4 3 | print('hello world')
```
I tried a few different approaches here before finally just splitting
the notebook into separate text ranges by cell and diffing each one
separately. It seems to work and passes all of our tests, but I don't
know if it's actually enforced anywhere that a single edit doesn't span
cells. Such an edit would silently be dropped right now since it would
fail the `contains_range` check. I also feel like I may have overlooked
an existing way to partition a file into cells like this.
## Test Plan
Existing notebook tests, plus a new one in `ruff_db`
## Summary
Implement validation for `TypedDict` constructor calls and dictionary
literal assignments, including support for `total=False` and proper
field management.
Also add support for `Required` and `NotRequired` type qualifiers in
`TypedDict` classes, along with proper inheritance behavior and the
`total=` parameter.
Support both constructor calls and dict literal syntax
part of https://github.com/astral-sh/ty/issues/154
### Basic Required Field Validation
```py
class Person(TypedDict):
name: str
age: int | None
# Error: Missing required field 'name' in TypedDict `Person` constructor
incomplete = Person(age=25)
# Error: Invalid argument to key "name" with declared type `str` on TypedDict `Person`
wrong_type = Person(name=123, age=25)
# Error: Invalid key access on TypedDict `Person`: Unknown key "extra"
extra_field = Person(name="Bob", age=25, extra=True)
```
<img width="773" height="191" alt="Screenshot 2025-08-07 at 17 59 22"
src="https://github.com/user-attachments/assets/79076d98-e85f-4495-93d6-a731aa72a5c9"
/>
### Support for `total=False`
```py
class OptionalPerson(TypedDict, total=False):
name: str
age: int | None
# All valid - all fields are optional with total=False
charlie = OptionalPerson()
david = OptionalPerson(name="David")
emily = OptionalPerson(age=30)
frank = OptionalPerson(name="Frank", age=25)
# But type validation and extra fields still apply
invalid_type = OptionalPerson(name=123) # Error: Invalid argument type
invalid_extra = OptionalPerson(extra=True) # Error: Invalid key access
```
### Dictionary Literal Validation
```py
# Type checking works for both constructors and dict literals
person: Person = {"name": "Alice", "age": 30}
reveal_type(person["name"]) # revealed: str
reveal_type(person["age"]) # revealed: int | None
# Error: Invalid key access on TypedDict `Person`: Unknown key "non_existing"
reveal_type(person["non_existing"]) # revealed: Unknown
```
### `Required`, `NotRequired`, `total`
```python
from typing import TypedDict
from typing_extensions import Required, NotRequired
class PartialUser(TypedDict, total=False):
name: Required[str] # Required despite total=False
age: int # Optional due to total=False
email: NotRequired[str] # Explicitly optional (redundant)
class User(TypedDict):
name: Required[str] # Explicitly required (redundant)
age: int # Required due to total=True
bio: NotRequired[str] # Optional despite total=True
# Valid constructions
partial = PartialUser(name="Alice") # name required, age optional
full = User(name="Bob", age=25) # name and age required, bio optional
# Inheritance maintains original field requirements
class Employee(PartialUser):
department: str # Required (new field)
# name: still Required (inherited)
# age: still optional (inherited)
emp = Employee(name="Charlie", department="Engineering") # ✅
Employee(department="Engineering") # ❌
e: Employee = {"age": 1} # ❌
```
<img width="898" height="683" alt="Screenshot 2025-08-11 at 22 02 57"
src="https://github.com/user-attachments/assets/4c1b18cd-cb2e-493a-a948-51589d121738"
/>
## Implementation
The implementation reuses existing validation logic done in
https://github.com/astral-sh/ruff/pull/19782
### ℹ️ Why I did NOT synthesize an `__init__` for `TypedDict`:
`TypedDict` inherits `dict.__init__(self, *args, **kwargs)` that accepts
all arguments.
The type resolution system finds this inherited signature **before**
looking for synthesized members.
So `own_synthesized_member()` is never called because a signature
already exists.
To force synthesis, you'd have to override Python’s inheritance
mechanism, which would break compatibility with the existing ecosystem.
This is why I went with ad-hoc validation. IMO it's the only viable
approach that respects Python’s
inheritance semantics while providing the required validation.
### Refactor of `Field`
**Before:**
```rust
struct Field<'db> {
declared_ty: Type<'db>,
default_ty: Option<Type<'db>>, // NamedTuple and dataclass only
init_only: bool, // dataclass only
init: bool, // dataclass only
is_required: Option<bool>, // TypedDict only
}
```
**After:**
```rust
struct Field<'db> {
declared_ty: Type<'db>,
kind: FieldKind<'db>,
}
enum FieldKind<'db> {
NamedTuple { default_ty: Option<Type<'db>> },
Dataclass { default_ty: Option<Type<'db>>, init_only: bool, init: bool },
TypedDict { is_required: bool },
}
```
## Test Plan
Updated Markdown tests
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This PR limits the argument type expansion size for an overload call
evaluation to 512.
The limit chosen is arbitrary but I've taken the 256 limit from Pyright
into account and bumped it x2 to start with.
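As a rough illustration of why a cap is needed: each union-typed argument multiplies the number of expanded argument lists, so the count grows exponentially (the overloads and types below are made up):
```python
from typing import overload

@overload
def f(*args: int) -> int: ...
@overload
def f(*args: str) -> str: ...
def f(*args):
    return args

def g(a: int | str, b: int | str, c: int | str) -> None:
    # Expanding each two-member union yields 2 * 2 * 2 = 8 argument lists to
    # try against the overloads; ten such arguments would already exceed the
    # 512 limit (2**10 = 1024).
    f(a, b, c)
```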
Initially, I started out by trying to refactor the entire argument type
expansion to be lazy. Currently, expanding a single argument at any
position eagerly creates the combinations (argument lists) and returns
them (`Vec<CallArguments>`). I thought we could make it lazier by
converting the return type of `expand` from `Iterator<Item =
Vec<CallArguments>>` to `Iterator<Item = Iterator<Item =
CallArguments>>`, but that's proving difficult to implement, mainly
because we **need** to maintain the previous expansion to generate the
next one, which is the main reason to use `std::iter::successors` in the
first place.
Another approach would be to eagerly expand all the argument types and
then use `combinations` from `itertools` to generate the combinations,
but we would need to find the "boundary" between argument lists produced
from expanding the argument at position 1 versus position 2, because
that's important for the algorithm.
Closes: https://github.com/astral-sh/ty/issues/868
## Test Plan
Add test case to demonstrate the limit along with the diagnostic
snapshot stating that the limit has been reached.
Part of astral-sh/ty#994
## Summary
Add new special forms to `ty_extensions`, `Top[T]` and `Bottom[T]`.
Remove `ty_extensions.top_materialization` and
`ty_extensions.bottom_materialization`.
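Roughly, in mdtest-style pseudocode (the revealed descriptions are illustrative, not the actual output):
```python
from typing import Any, reveal_type
from ty_extensions import Top, Bottom

def f(top: Top[list[Any]], bottom: Bottom[list[Any]]) -> None:
    reveal_type(top)     # the most general fully static materialization of `list[Any]`
    reveal_type(bottom)  # the most specific fully static materialization of `list[Any]`
```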
## Test Plan
Converted the existing `materialization.md` mdtest to the new syntax.
Added some tests for invalid use of the new special form.
In effect, we make the Salsa query aspect keyed only on whether we want
global symbols. We move everything else (hierarchical and querying) to
an aggregate step *after* the query.
This was a somewhat involved change since we want to return a flattened
list from visiting the source while also preserving enough information
to reform the symbols into a hierarchical structure that the LSP
expects. But I think overall the API has gotten simpler and we encode
more invariants into the type system. (For example, previously you got a
runtime assertion if you tried to provide a query string while enabling
hierarchical mode. But now that's prevented by construction.)
Basically, this splits the implementation into two pieces:
the first piece does the traversal and finds *all* symbols
across the workspace. The second piece does filtering based
on a user provided query string. Only the first piece is
cached by Salsa.
This brings warm "workspace symbols" requests down from
500-600ms to 100-200ms.
While this doesn't typically matter, when ty returns a very
large list of symbols, this can have an impact. Specifically,
when searching `async` in home-assistant, this gets times
closer to 500ms versus closer to 600ms before this change.
It looks like an overall ~50ms improvement (so around 10%),
but variance is all over the place and I didn't do any
statistical tests.
But this does make intuitive sense. Previously, we were
allocating intermediate strings, doing UTF-8 decoding and
consulting Unicode casing tables. Now we're just doing what
is likely a single DFA scan. In effect, we front load all
of the Unicode junk into regex compilation.
There is a small amount of subtlety to this matching routine,
and it could be implemented in a faster way. So let's right some
tests for what we have to ensure we don't break anything when
we optimize it.
## Summary
Looks like an oversight at some point led to two identical globals;
the one in `ty_project` just calls `ty_python_semantic::register_lints`.
## Summary
Removes the `module_ptr` field from `AstNodeRef` in release mode, and
changes `NodeIndex` to a `NonZeroU32` to reduce the size of
`Option<AstNodeRef<_>>` fields.
I believe CI runs in debug mode, so this won't show up in the memory
report, but this reduces memory by ~2% in release mode.
## Summary
Previously we held off from doing this because we weren't sure that it
was worth the added complexity cost. But our code has changed in the
months since we made that initial decision, and I think the structure of
the code is such that it no longer really leads to much added complexity
to add precise inference when unpacking a string literal or a bytes
literal.
The improved inference we gain from this has real benefits to users (see
the mypy_primer report), and this PR doesn't appear to have a
performance impact.
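Illustratively (the exact revealed types are whatever the new inference produces; this is just the shape of code that benefits):
```python
from typing import reveal_type

a, b, c = "abc"
reveal_type(a)  # previously just `str`; with precise unpacking, something like `Literal["a"]`
```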
## Test plan
mdtests
## Summary
We use the `System` abstraction in ty to abstract away the host/system
on which ty runs.
This has a few benefits:
* Tests can run in full isolation using a memory system (that uses an
in-memory file system)
* The LSP has a custom implementation where `read_to_string` returns the
content as seen by the editor (e.g. unsaved changes) instead of always
returning the content as it is stored on disk
* We don't require any file system polyfills for wasm in the browser
However, it does require extra care that we don't accidentally use
`std::fs` or `std::env` (etc.) methods in ty's code base (which is very
easy).
This PR sets up Clippy and disallows the most common methods, instead
pointing users towards the corresponding `System` methods.
The setup is a bit awkward because clippy doesn't support inheriting
configurations. That means a crate can only override the entire
workspace configuration, or not override it at all.
The approach taken in this PR is:
* Configure the disallowed methods at the workspace level
* Allow `disallowed_methods` at the workspace level
* Enable the lint at the crate level using the warn attribute (in code)
The obvious downside is that it won't work if we ever want to disallow
other methods, but we can figure that out once we reach that point.
What about false positives: Just add an `allow` and move on with your
life :) This isn't something that we have to enforce strictly; the goal
is to catch accidental misuse.
## Test Plan
Clippy found a place where we incorrectly used `std::fs::read_to_string`
Adds a method to `TStringValue` to detect whether the t-string is empty
_as an iterable_. Note the subtlety here that, unlike f-strings, an
empty t-string is still truthy (i.e. `bool(t"")==True`).
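A small sketch of the distinction, assuming Python 3.14 template strings (PEP 750):
```python
empty = t""
print(bool(empty))  # True: unlike an f-string, an empty t-string is still truthy
print(list(empty))  # []: but it yields nothing when iterated, i.e. it is empty "as an iterable"
```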
Closes#19951
## Summary
Rename `TypeAliasType::Bare` to `TypeAliasType::ManualPEP695`, and
`BareTypeAliasType` to `ManualPEP695TypeAliasType`.
Why?
Both existing variants of `TypeAliasType` are specific to features added
in PEP 695 (which introduced both the `type` statement and
`types.TypeAliasType`), so it doesn't make sense to name one with the
name `PEP695` and not the other.
A "bare" type alias, in my mind, is a legacy type alias like `IntOrStr =
int | str`, which is "bare" in that there is nothing at all
distinguishing it as a type alias. I will want to use the "bare" name
for this variant, in a future PR.
The renamed variant here describes a type alias created with `IntOrStr =
types.TypeAliasType("IntOrStr", int | str)`, which is not "bare", it's
just "manually" instantiated instead of using the `type` statement
syntax sugar. (This is useful when using the `typing_extensions`
backport of `TypeAliasType` on older Python versions.)
## Test Plan
Pure rename, existing tests pass.
## Summary
This PR fixes https://github.com/astral-sh/ty/issues/1071
The core issue is that `CallableType` is a salsa interned but
`Signature` (which `CallableType` stores) ignores the `Definition` in
its `Eq` and `Hash` implementation.
This PR tries to simplest fix by removing the custom `Eq` and `Hash`
implementation. The main downside of this fix is that it can increase
memory usage because `CallableType`s that are equal except for their
`Definition` are now interned separately.
The alternative is to remove `Definition` from `CallableType` and
instead, call `bindings` directly on the callee (call_expression.func).
However, this would require addressing the TODO here:
39ee71c2a5/crates/ty_python_semantic/src/types.rs (L4582-L4586)
This might probably be worth addressing anyway, but is the more involved
fix. That's why I opted for removing the custom `Eq` implementation.
We already "ignore" the definition during normalization, thank's to
Alex's work in https://github.com/astral-sh/ruff/pull/19615
## Test Plan
https://github.com/user-attachments/assets/248d1cb1-12fd-4441-adab-b7e0866d23eb
While implementing similar logic for initializers I noticed that this
code appeared to be walking the ancestors in the wrong direction, and so
if you have nested function calls it would always grab the outermost one
instead of the closest-ancestor.
The four copies of the test are because there's something really evil in
our caching that can't seem to be demonstrated in our cursor testing
framework, which I'm filing a followup for.
Summary
--
This is a preparatory PR in support of #19919. It moves our `Diff`
rendering code from `ruff_linter` to `ruff_db`, where we have direct
access to the `DiagnosticStylesheet` used by our other diagnostic
rendering code. As shown by the tests, this shouldn't cause any visible
changes. The colors aren't exactly the same, as I note in a TODO
comment, but I don't think there's any existing way to see those, even
in tests.
The `Diff` implementation is mostly unchanged. I just switched from a
Ruff-specific `SourceFile` to a `DiagnosticSource` (removing an
`expect_ruff_source_file` call) and updated the `LineStyle` struct and
other styling calls to use `fmt_styled` and our existing stylesheet.
In support of these changes, I added three styles to our stylesheet:
`insertion` and `deletion` for the corresponding diff operations, and
`underline`, which apparently we _can_ use, as I hoped on Discord. This
isn't supported in all terminals, though. It worked in ghostty but not
in st for me.
I moved the `calculate_print_width` function from the now-deleted
`diff.rs` to a method on `OneIndexed`, where it was available everywhere
we needed it. I'm not sure if that's desirable, or if my other changes
to the function are either (using `ilog10` instead of a loop). This does
make it `const` and slightly simplifies things in my opinion, but I'm
happy to revert it if preferred.
I also inlined a version of `show_nonprinting` from the
`ShowNonprinting` trait in `ruff_linter`:
f4be05a83b/crates/ruff_linter/src/text_helpers.rs (L3-L5)
This trait is now only used in `source_kind.rs`, so I'm not sure it's
worth having the trait or the macro-generated implementation (which is
only called once). This is obviously closely related to our unprintable
character handling in diagnostic rendering, but the usage seems
different enough not to try to combine them.
f4be05a83b/crates/ruff_db/src/diagnostic/render.rs (L990-L998)
We could also move the trait to another crate where we can use it in
`ruff_db` instead of inlining here, of course.
Finally, this PR makes `TextEmitter` a very thin wrapper around a
`DisplayDiagnosticsConfig`. It's still used in a few places, though,
unlike the other emitters we've replaced, so I figured it was worth
keeping around. It's a pretty nice API for setting all of the options on
the config and then passing that along to a `DisplayDiagnostics`.
Test Plan
--
Existing snapshot tests with diffs
"Why would you do this? This looks like you just replaced `bool` with an
overly complex trait"
Yes that's correct!
This should be a no-op refactoring. It replaces all of the logic in our
assignability, subtyping, equivalence, and disjointness methods to work
over an arbitrary `Constraints` trait instead of only working on `bool`.
The methods that `Constraints` provides look very much like what we get
from `bool`. But soon we will add a new impl of this trait, and some new
methods, that let us express "fuzzy" constraints that aren't always true
or false. (In particular, a constraint will express the upper and lower
bounds of the allowed specializations of a typevar.)
Even once we have that, most of the operations that we perform on
constraint sets will be the usual boolean operations, just on sets.
(`false` becomes empty/never; `true` becomes universe/always; `or`
becomes union; `and` becomes intersection; `not` becomes negation.) So
it's helpful to have this separate PR to refactor how we invoke those
operations without introducing the new functionality yet.
Note that we also have translations of `Option::is_some_and` and
`is_none_or`, and of `Iterator::any` and `all`, and that the `and`,
`or`, `when_any`, and `when_all` methods are meant to short-circuit,
just like the corresponding boolean operations. For constraint sets,
that depends on being able to implement the `is_always` and `is_never`
trait methods.
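As a rough illustration of that mapping (a hypothetical Python analogue of the trait, not the actual Rust code), the `bool` case looks like this:
```python
from typing import Callable


class ConstraintSet:
    """Hypothetical stand-in for the `Constraints` trait, in the case where
    a set is still just "always"/"never" (i.e. plain `bool`)."""

    def __init__(self, always: bool) -> None:
        self._always = always

    @classmethod
    def always(cls) -> "ConstraintSet":  # `true` -> universe/always
        return cls(True)

    @classmethod
    def never(cls) -> "ConstraintSet":  # `false` -> empty/never
        return cls(False)

    def is_always(self) -> bool:
        return self._always

    def is_never(self) -> bool:
        return not self._always

    # `and` -> intersection; the callable is only invoked when needed,
    # mirroring boolean short-circuiting.
    def and_(self, other: Callable[[], "ConstraintSet"]) -> "ConstraintSet":
        return self if self.is_never() else other()

    # `or` -> union, short-circuiting when the set is already "always".
    def or_(self, other: Callable[[], "ConstraintSet"]) -> "ConstraintSet":
        return self if self.is_always() else other()

    # `not` -> negation.
    def negate(self) -> "ConstraintSet":
        return ConstraintSet(not self._always)
```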
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Part of: https://github.com/astral-sh/ty/issues/868
This PR adds a heuristic to avoid argument type expansion if it's going
to eventually lead to no matching overload.
This is done by checking whether the non-expandable argument types are
assignable to the corresponding annotated parameter types. If one of
them is not assignable under any of the remaining overloads, then
argument type expansion isn't going to help.
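A hypothetical example of the shape this heuristic targets (names made up): expanding `x` below can never produce a match, because the second argument is not assignable to `bool` in any remaining overload, so expansion is skipped.
```python
from typing import overload


@overload
def f(x: int, flag: bool) -> int: ...
@overload
def f(x: str, flag: bool) -> str: ...
def f(x: int | str, flag: bool) -> int | str:
    return x


def call(x: int | str) -> None:
    # `x` is expandable (a union), but `None` is not assignable to `bool`
    # for either overload, so expanding `x` into `int` and `str` can't help;
    # the call is reported as unmatched without trying the expansions.
    f(x, None)  # error: no matching overload
```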
## Test Plan
Add an mdtest that would otherwise take a long time because of the
number of arguments (30) that it would need to expand.
This is a fairly simple but effective way to add docstrings to roughly
95% of completions, based on initial experimentation.
Fixes https://github.com/astral-sh/ty/issues/1036
Although, ironically, this approach *does not* work for `print`
specifically, and I haven't looked into why.
## Summary
Resolves #19561
Fixes the [unnecessary-future-import
(UP010)](https://docs.astral.sh/ruff/rules/unnecessary-future-import/)
rule to correctly identify when names imported from `__future__` are
actually used in the code, preventing false positives.
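For example (illustrative only), the feature object imported below is referenced later, so removing the import would break the code and it should no longer be flagged:
```python
from __future__ import nested_scopes, generators

# `nested_scopes` is used as a regular name (it's a `__future__._Feature`
# instance at runtime), so only `generators` should be reported by UP010.
print(nested_scopes)
```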
I assume there is no way to check usage in `analyze::statements`,
because we don't have any usage bindings for imports. To determine
unused imports, we have to fully scan the file to create bindings and
then check usage, similar to [unused-import
(F401)](https://docs.astral.sh/ruff/rules/unused-import/#unused-import-f401).
So, `Rule::UnnecessaryFutureImport` was moved from the
`analyze::statements` stage to the `analyze::deferred_scopes` stage.
This required changing the future-import handling to a bindings-based
approach.
The diagnostic range was also changed.
Before
```
|
1 | from __future__ import nested_scopes, generators
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP010
```
After
```
|
1 | from __future__ import nested_scopes, generators
| ^^^^^^^^^^^^^ UP010
```
I believe this is the correct way to report it, because `generators`
may be used while `nested_scopes` is not.
### Special case
I've found a specific case:
```python
from __future__ import nested_scopes
nested_scopes = 1
```
Here we can treat `nested_scopes` as an unused import, because the
variable `nested_scopes` shadows it, and we can safely remove the future
import (my fix does this).
But
[F401](https://docs.astral.sh/ruff/rules/unused-import/#unused-import-f401)
is not triggered for such a case
([sandbox](https://play.ruff.rs/296d9c7e-0f02-4659-b0c0-78cc21f3de76)):
```
from foo import print_function
print_function = 1
```
In my mind, `print_function` here is an unused import and should be
deleted (my IDE highlights it). What do you think?
## Test Plan
Added test cases and snapshots:
- Split test file into separate _0 and _1 files for appropriate checks.
- Added test cases to verify fixes when the imported future features are used.
---------
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
This commit corrects the type checker's behavior when handling
`dataclass_transform` decorators that don't explicitly specify
`field_specifiers`. According to [PEP 681 (Data Class
Transforms)](https://peps.python.org/pep-0681/#dataclass-transform-parameters),
when `field_specifiers` is not provided, it defaults to an empty tuple,
meaning no field specifiers are supported and
`dataclasses.field`/`dataclasses.Field` calls should be ignored.
Fixes https://github.com/astral-sh/ty/issues/980
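A minimal illustration of the expected behavior (the decorator name `model` is made up):
```python
import dataclasses
from typing import dataclass_transform


@dataclass_transform()  # `field_specifiers` omitted => defaults to ()
def model(cls):
    return cls


@model
class Point:
    # With no field specifiers declared, this `dataclasses.field(...)` call
    # must be ignored by the type checker rather than interpreted as a
    # field specifier.
    x: int = dataclasses.field(default=0)
    y: int = 1
```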
This basically splits `list_modules` into a higher level "aggregation"
routine and a lower level "get modules for one search path" routine.
This permits Salsa to cache the lower level components; e.g., many
search paths refer to directories that rarely change. This saves us
interactions with the file system.
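Roughly, the shape of the split looks like this (a Python analogy with `lru_cache` standing in for Salsa; names and details are illustrative):
```python
from functools import lru_cache
from pathlib import Path


@lru_cache(maxsize=None)  # stand-in for a Salsa-cached query
def list_modules_in_search_path(search_path: Path) -> tuple[str, ...]:
    # Lower level: modules visible through a single search path. Because
    # this is cached per path, search paths whose directories rarely change
    # don't require touching the file system again.
    return tuple(sorted(p.stem for p in search_path.glob("*.py")))


def list_modules(search_paths: tuple[Path, ...]) -> list[str]:
    # Higher level aggregation: earlier search paths shadow later ones.
    seen: dict[str, None] = {}
    for path in search_paths:
        for name in list_modules_in_search_path(path):
            seen.setdefault(name, None)
    return list(seen)
```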
The split did require a fair bit of surgery in terms of being careful
about adding file roots. Namely, now that we rely even more on file
roots existing for correct cache invalidation, there were several spots
in our code that needed to be updated to add roots that we weren't
previously adding. This feels Not Great, and it would be better if we
had some kind of abstraction that handled this for us. But it isn't
clear to me at this time what that would look like.
This ensures there is some level of consistency between the APIs.
This did require exposing a couple more things on `Module` for good
error messages. This also motivated a switch to an interned struct
instead of a tracked struct. This ensures that `list_modules` and
`resolve_modules` reuse the same `Module` values when the inputs are the
same.
Ref https://github.com/astral-sh/ruff/pull/19883#discussion_r2272520194
This makes `import <CURSOR>` and `from <CURSOR>` completions work.
This also makes `import os.<CURSOR>` and `from os.<CURSOR>`
completions work. In this case, we are careful to only offer
submodule completions.
These tests were added as a regression check that a panic
didn't occur, so we were asserting a bit more than necessary.
In particular, these will soon return completions for modules,
which creates large snapshots that we don't need.
So modify them to just check that there is sensible output
and no panic.
The actual implementation wasn't too bad. It's not long
but pretty fiddly. I copied over the tests from the existing
module resolver and adapted them to work with this API. Then
I added a number of my own tests as well.
Previously, if the module was just `foo-stubs`, we'd skip over
stripping the `-stubs` suffix, which would lead to us returning
`None`.
This function is now a little convoluted and could be simpler
if we did an intermediate allocation. But I kept the iterative
approach and added a special case to handle `foo-stubs`.
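Roughly, the intended mapping (a simplified Python sketch, not the actual implementation):
```python
def strip_stubs_suffix(module: str) -> str | None:
    # Map a stubs package's module path back to the runtime package it
    # provides stubs for, e.g. "foo-stubs.bar" -> "foo.bar".
    # The bug: a bare "foo-stubs" (no trailing components) previously fell
    # through the suffix stripping and returned `None` instead of "foo".
    head, sep, rest = module.partition(".")
    if not head.endswith("-stubs"):
        return None
    return head[: -len("-stubs")] + sep + rest
```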
These tests capture existing behavior.
I added these when I stumbled upon what I thought was an
oddity: we prioritize `foo.pyi` over `foo.py`, but
prioritize `foo/__init__.py` over `foo.pyi`.
(I plan to investigate this more closely in follow-up
work. Particularly, to look at other type checkers. It
seems like we may want to change this to always prioritize
stubs.)
This is a port of the logic in https://github.com/astral-sh/uv/pull/7691
The basic idea is that we use `CONDA_DEFAULT_ENV` as a signal for
whether `CONDA_PREFIX` is just the ambient system conda install or the
user has explicitly activated a custom environment. If the former, the
conda environment is treated like a system install (having lowest
priority). If the latter, it is treated like an activated venv (having
priority over everything but an actual activated venv).
Fixes https://github.com/astral-sh/ty/issues/611
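As a hypothetical sketch of that signal (the ported logic lives in Rust and the exact check may differ; treating `base` as the ambient install is an assumption here):
```python
import os


def conda_prefix_kind() -> str:
    # Hypothetical sketch only.
    prefix = os.environ.get("CONDA_PREFIX")
    if prefix is None:
        return "no-conda"
    default_env = os.environ.get("CONDA_DEFAULT_ENV")
    if default_env is None or default_env == "base":
        # Ambient system conda install: lowest priority.
        return "system-install"
    # Explicitly activated environment: prioritized like an activated venv.
    return "activated-env"
```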
## Summary
Closes: https://github.com/astral-sh/ty/issues/669
(This turned out to be simpler than I thought :))
## Test Plan
Update existing test cases.
### Ecosystem report
Most of them are basically because ty now infers more precise return
types for overloaded calls, and a lot of the types involved are defined
using type aliases; here are some examples:
<details><summary>Details</summary>
<p>
> attrs (https://github.com/python-attrs/attrs)
> + tests/test_make.py:146:14: error[unresolved-attribute] Type
`Literal[42]` has no attribute `default`
> - Found 555 diagnostics
> + Found 556 diagnostics
This is accurate now that we infer the type as `Literal[42]` instead of
`Unknown` (Pyright infers it as `int`).
> optuna (https://github.com/optuna/optuna)
> + optuna/_gp/search_space.py:181:53: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `tuple[int | float, int | float]`, found `tuple[Unknown |
ndarray[Unknown, <class 'float'>], Unknown | ndarray[Unknown, <class
'float'>]]`
> + optuna/_gp/search_space.py:181:83: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `int | float`, found `Unknown | ndarray[Unknown, <class
'float'>]`
> + tests/gp_tests/test_search_space.py:109:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `tuple[int | float, int
| float]`, found `Unknown | ndarray[Unknown, <class 'float'>]`
> + tests/gp_tests/test_search_space.py:110:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `int | float`, found
`Unknown | ndarray[Unknown, <class 'float'>]`
> - Found 559 diagnostics
> + Found 563 diagnostics
Same as above: ty now infers a more precise type like
`Unknown | ndarray[tuple[int, int], <class 'float'>]` instead of just
`Unknown` as before.
> jinja (https://github.com/pallets/jinja)
> + src/jinja2/bccache.py:298:39: error[invalid-argument-type] Argument
to bound method `write_bytecode` is incorrect: Expected `IO[bytes]`,
found `_TemporaryFileWrapper[str]`
> - Found 186 diagnostics
> + Found 187 diagnostics
This requires support for type aliases to match the correct overload.
> hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
> + src/hydra_zen/wrapper/_implementations.py:945:16:
error[invalid-return-type] Return type does not match returned value:
expected `DataClass_ | type[@Todo(type[T] for protocols)] | ListConfig |
DictConfig`, found `@Todo(unsupported type[X] special form) | (((...) ->
Any) & dict[Unknown, Unknown]) | (DataClass_ & dict[Unknown, Unknown]) |
dict[Any, Any] | (ListConfig & dict[Unknown, Unknown]) | (DictConfig &
dict[Unknown, Unknown]) | (((...) -> Any) & list[Unknown]) | (DataClass_
& list[Unknown]) | list[Any] | (ListConfig & list[Unknown]) |
(DictConfig & list[Unknown])`
> + tests/annotations/behaviors.py:60:28: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/behaviors.py:64:21: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:167:17: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:524:17:
error[unresolved-attribute] Type `<class 'int'>` has no attribute
`_target_`
> - Found 561 diagnostics
> + Found 566 diagnostics
Same as above, this requires support for type aliases to match the
correct overload.
> paasta (https://github.com/yelp/paasta)
> + paasta_tools/utils.py:4188:19: warning[redundant-cast] Value is
already of type `list[str]`
> - Found 888 diagnostics
> + Found 889 diagnostics
This is correct.
> colour (https://github.com/colour-science/colour)
> + colour/plotting/diagrams.py:448:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/diagrams.py:462:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/models.py:419:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:230:9: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:474:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:495:17: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:513:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> + colour/plotting/temperature.py:514:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> - Found 480 diagnostics
> + Found 488 diagnostics
Most of them are correct, except for the last two diagnostics, where
I'm not sure what's happening: the code indexes into an `np.ndarray`
type (which is inferred correctly), but I think it might be picking up
an incorrect overload for the `__getitem__` method.
SciPy's diagnostics also require support for type aliases to pick the
correct overload.
</p>
</details>