## Summary
prek allows you to set priorities, and can run tasks of the same
priority concurrently (e.g., we can run Ruff's Python formatting and
`cargo fmt` at the same time). On my machine, this takes `uvx prek run
-a` from 19.4s to 5.0s (a ~4x speed-up).
This PR adjusts the logic for skipping formatting so that a `fmt: skip`
can affect multiple statements if they lie on the same line.
Specifically, a `fmt: skip` comment now suppresses formatting for every
statement in the suite in which it appears whose range intersects the line
containing the skip directive. For example:
```python
x=[
'1'
];x=2 # fmt: skip
```
remains unchanged after formatting.
(Note that compound statements are somewhat special and were handled in
a previous PR - see #20633).
Closes #17331 and #11430.
Simplest to review commit by commit - the key diffs of interest are the
commit introducing the core logic, and the diff between the snapshots
introduced in the last commit (compared to the second commit).
## Implementation
On `main` we format a suite of statements by iterating through them. If
we meet a statement with a leading or trailing (own-line) `fmt: off`
comment, then we suppress formatting until we meet a `fmt: on` comment.
Otherwise we format the statement using its own formatting rule.
How are `fmt: skip` comments handled, then? They are handled within the
formatting of each individual statement: calling `.fmt` on a statement node
first checks whether there is a trailing, end-of-line `fmt: skip` (or
`fmt: off`/`yapf: off`) comment, and if so writes the node with formatting
suppressed.
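For instance, on `main` a trailing `fmt: skip` keeps a single statement exactly as written (illustrative snippet):
```python
# The trailing directive suppresses formatting for just this statement, so the
# irregular spacing survives `ruff format`.
x = {  'a':37,'b':42}  # fmt: skip
```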
In this PR we move the responsibility for handling `fmt: skip` into the
formatting logic of the suite itself. This is done as follows:
- Before formatting the suite, we do a pass through the statements and
collect the ranges whose formatting is skipped. More specifically, we build a
map whose keys are the _first_ skipped statement of each block and whose
values are pairs of the _last_ skipped statement and the _range_ to write
verbatim.
- We iterate as before, but if we meet a statement that is a key in the
map constructed above, we pause to write the associated range verbatim.
We then advance the iterator to the last statement in the block and
proceed as before (see the sketch below).
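A minimal Python sketch of these two passes (the actual implementation is in Rust; every name here is hypothetical):
```python
# Hypothetical sketch only; not the real formatter API.
def format_suite(suite, skip_map, source, format_statement):
    """Format `suite`, writing `fmt: skip` blocks verbatim.

    `skip_map` maps the first skipped statement of each block to a pair of
    (last skipped statement, (start, end) source range to copy verbatim).
    """
    output = []
    statements = iter(suite)
    for stmt in statements:
        if stmt in skip_map:
            last_stmt, (start, end) = skip_map[stmt]
            # Write the whole skipped block exactly as it appears in the source...
            output.append(source[start:end])
            # ...then advance the iterator past the last skipped statement.
            while stmt is not last_stmt:
                stmt = next(statements)
        else:
            output.append(format_statement(stmt))
    return "\n".join(output)
```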
## Addendum on range formatting
We also had to make some changes to range formatting in order to support
this new behavior. For example, we want to make sure that
```python
<RANGE_START>x=1<RANGE_END>;x=2 # fmt: skip
```
formats verbatim, rather than becoming
```python
x = 1;x=2 # fmt: skip
```
Recall that range formatting proceeds in two steps:
1. Find the smallest enclosing node containing the range AND that has
enough info to format the range (so it may be larger than you think,
e.g. a docstring has enclosing node given by the suite, not the string
itself.)
2. Carve out the formatted range from the result of formatting that
enclosing node.
We had to modify (1), since the suite knows how to format skipped nodes,
but nodes may not "know" they are skipped. To do this we altered the
`visit_body` method of the `FindEnclosingNode` visitor: now we iterate
through the statements and check for skipped ranges intersecting the
format range. If we find them, we return without descending. The result
is to consider the statement containing the suite as the enclosing node
in this case.
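In rough Python terms (the real code is a Rust visitor; the names and signature below are purely illustrative):
```python
def ranges_intersect(a, b):
    """True if the half-open ranges `a` and `b` overlap."""
    return a[0] < b[1] and b[0] < a[1]

def visit_body(visitor, suite, format_range, skipped_ranges):
    # If any `fmt: skip` range in this suite touches the range being formatted,
    # stop descending: the statement containing the suite then serves as the
    # enclosing node, since only the suite knows how to emit the skipped
    # statements verbatim.
    if any(ranges_intersect(r, format_range) for r in skipped_ranges(suite)):
        return
    for stmt in suite:
        visitor.visit(stmt)
```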
## Summary
If parent violates LSP against grandparent, and child has the same
violation (but matches parent), we no longer flag the LSP violation on
child, since it can't be fixed without violating parent.
If parent violates LSP against grandparent, and child violates LSP
against both parent and grandparent, we emit two diagnostics (one for
each violation).
If parent violates LSP against grandparent, and child violates LSP
against parent (but not grandparent), we flag it.
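For example (hypothetical classes, not taken from the issue):
```python
class Grandparent:
    def method(self, x: int) -> None: ...

class Parent(Grandparent):
    # Violates LSP against Grandparent (drops a required parameter): flagged.
    def method(self) -> None: ...

class ChildA(Parent):
    # Same violation against Grandparent, but compatible with Parent: no longer flagged.
    def method(self) -> None: ...

class ChildB(Parent):
    # Violates both Parent and Grandparent: two diagnostics, one per violation.
    def method(self, x: int, y: int) -> None: ...

class ChildC(Parent):
    # Violates Parent but not Grandparent: flagged.
    def method(self, x: int) -> None: ...
```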
Closes https://github.com/astral-sh/ty/issues/2000.
## Summary
Ruff's `--fix` for `RUF100` can inadvertently remove trailing comments
(e.g., `pylint` or `mypy` suppressions) by interpreting them as
descriptions. This PR adds a "Conflict with other linters" section to
the rule documentation to clarify this behavior and provide the
double-hash (`# noqa # pylint`) workaround.
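Illustratively (the specific rule codes here are arbitrary):
```python
# If the `noqa` turns out to be unused, the RUF100 fix treats the trailing text
# as a description and can remove the pylint suppression along with it:
x = 1  # noqa: E501 pylint: disable=invalid-name

# Double-hash workaround: a second `#` keeps the suppressions as separate
# comments, so only the unused `noqa` is removed:
x = 1  # noqa: E501  # pylint: disable=invalid-name
```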
## Fixes
Fixes #20762
This makes it so, e.g., `os<CURSOR>` will suggest the top-level stdlib
`os` module even if there is an `os` symbol elsewhere in your project.
The way this is done is somewhat overwrought, but it's done to avoid
suggesting top-level modules over other symbols already in scope.
Fixes astral-sh/issues#1852
Part of this was already done, but it was half-assed. We now look at the
search path that a symbol came from and centralize a symbol's origin
classification.
The preference ordering here is maybe not the right one, but we can
iterate as users give us feedback. Note also that the preference
ordering based on the origin is pretty low in the relevance sorting. This
means that other, more specific criteria can and will override it.
This results in some nice improvements to our evaluation tasks.
This commit adds two new tests. One checks that a symbol in the current
project gets priority over a symbol in the standard library. Another
checks that a symbol in a third party dependency gets priority over a
symbol in the standard library. We don't get either of these right
today.
Note that these comparisons are done ceteris paribus. A symbol from the
standard library could still be ranked above a symbol elsewhere.
(Although I believe currently this is somewhat rare.)
I apparently don't know how to use my own API. Previously,
we would skip, e.g., `.venv`, but still descend into it.
This was annoying in practice because I sometimes have an
environment in one of the truth task directories. The eval
command should ignore that entirely, but it ended up
choking on it without properly ignoring hidden files
and directories.
Previously, we would only sort by name and file path (unless the symbol
refers to an entire module). This led to somewhat confusing ordering,
since the file path is absolute and can be anything.
Sorting by module name after symbol name gives a more predictable
ordering in common practice.
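As a rough sketch of the change to the sort key (the types and fields are illustrative, not the real ones):
```python
from typing import NamedTuple

class Sym(NamedTuple):
    name: str
    module_name: str
    file_path: str

symbols = [
    Sym("loads", "json", "/usr/lib/python3.12/json/__init__.py"),
    Sym("loads", "pickle", "/usr/lib/python3.12/pickle.py"),
]

# Before (roughly): tie-break on the absolute file path, which can be anything.
by_path = sorted(symbols, key=lambda s: (s.name, s.file_path))

# After: tie-break on the module name, which is predictable in practice.
by_module = sorted(symbols, key=lambda s: (s.name, s.module_name))
```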
This is mostly just an improvement for debugging purposes, i.e., when
looking at the output of `all_symbols` directly. This mostly shouldn't
impact completion ordering since completions do their own ranking.
This moves the information we want to use to rank completions into
`Completion` itself. (This was the primary motivation for creating a
`CompletionBuilder` in the first place.)
The principal advantage here is that we now only need to compute the
relevance information for each completion exactly once. Previously, we
were computing it on every comparison, which might end up doing
redundant work for a not insignificant number of completions.
The relevance information is also specifically constructed from the
builder so that, in the future, we might choose to short-circuit
construction if we know we'll never send it back to the client (e.g.,
its ranking is worse than the lowest ranked completion). But we don't
implement that optimization yet.
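A minimal sketch of the shape of this change (hypothetical types, not the real API):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Completion:
    name: str
    relevance: int  # precomputed ranking key; a plain int for illustration

def build_completion(name: str, in_current_scope: bool) -> Completion:
    # Relevance is computed exactly once, when the completion is built...
    relevance = 0 if in_current_scope else 1
    return Completion(name=name, relevance=relevance)

completions = [build_completion("ossify", True), build_completion("os", False)]
# ...so sorting never recomputes it on each comparison.
completions.sort(key=lambda c: (c.relevance, c.name))
```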
I want to be able to attach extra data to each `Completion`, but not
burden callers with the need to construct it. This commit helps get us
to that point by requiring callers to use a `CompletionBuilder` for
construction instead of a `Completion` itself.
I think this will also help in the future if it proves to be the case
that we can improve performance by delaying work until we actually build
a `Completion`, which might only happen if we know we won't throw it
out. But we aren't quite there yet.
This also lets us tighten things up a little bit and makes completion
construction less noisy. The downside is that callers no longer need to
consider "every" completion field.
There should not be any behavior changes here.
I went through https://github.com/astral-sh/ty/issues/1274 and tried to
extract what I could into eval tasks. Some of the suggestions from that
issue have already been done, but most haven't.
This captures the status quo.
This TODO is very old -- we have long since recorded this definition.
Updating the test to actually assert the declaration requires a new
helper method for declarations, to complement the existing
`first_public_binding` helper.
---------
Co-authored-by: Claude <noreply@anthropic.com>
When working with constraint sets, we track transitive relationships
between the constraints in the set. For instance, in `S ≤ int ∧ int ≤
T`, we can infer that `S ≤ T`. However, we should only consider fully
static types when looking for a "pivot" for this kind of transitive
relationship. The same pattern does not hold for `S ≤ Any ∧ Any ≤ T`;
because the two `Any`s can materialize to different types, we cannot
infer that `S ≤ T`.
Fixes https://github.com/astral-sh/ty/issues/2371
Resolves #21892
## Summary
This PR updates `docs/editors/features.md` to clarify that Jupyter
Notebooks are now included by default as of version 0.6.0.
## Summary
Updates the fix title for RUF102 to either specify which rule code to
remove, or clarify
that the entire suppression comment should be removed.
## Test Plan
Updated test snapshots.
## Summary
This PR fixes `super()` handling when the first parameter (`self` or
`cls`) is annotated with a TypeVar, like `Self`.
Previously, `super()` would incorrectly resolve TypeVars to their bounds
before creating the `BoundSuperType`. So if you had `self: Self` where
`Self` is bounded by `Parent`, we'd process `Parent` as a
`NominalInstance` and end up with `SuperOwnerKind::Instance(Parent)`.
As a result:
```python
from typing import Self

class Parent:
    @classmethod
    def create(cls) -> Self:
        return cls()

class Child(Parent):
    @classmethod
    def create(cls) -> Self:
        return super().create()  # Error: Argument type `Self@create` does not satisfy upper bound `Parent`
```
We now track two additional variants on `SuperOwnerKind` for TypeVar
owners:
- `InstanceTypeVar`: for instance methods where self is a TypeVar (e.g.,
`self: Self`).
- `ClassTypeVar`: for classmethods where `cls` is a `TypeVar` wrapped in
`type[...]` (e.g., `cls: type[Self]`).
Closes https://github.com/astral-sh/ty/issues/2122.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
We enabled [`CompletionList::isIncomplete`] in our LSP server a while back
in order to have more of a say in how we rank and filter completions.
When it isn't set, the client tends to ask for completions less
frequently and will instead do its own filtering.
But... we did not enable it for the playground. Which I guess didn't
result in anything noticeably bad until we started limiting completions
to 1,000 suggestions. This meant that if the _initial_ completion
response didn't include the ultimate desired answer, then it would never
show up in the results until the client requested completions again.
This in turn led to some very poor completions in some cases.
This all gets fixed by simply enabling `isIncomplete` for Monaco.
Fixes astral-sh/ty#2340
[`CompletionList::isIncomplete`](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#completionList)
## Summary
Fixes https://github.com/astral-sh/ty/issues/2363
Fixes https://github.com/astral-sh/ty/issues/2013
And several other bugs with the same root cause. And makes any similar
bugs impossible by construction.
Previously we distinguished "no annotation" (Rust `None`) from
"explicitly annotated with something of type `Unknown`" (which is not an
error, and results in the annotation being of Rust type
`Some(Type::DynamicType(Unknown))`), even though semantically these
should be treated the same.
This was a bit of a bug magnet, because it was easy to forget to make
this `None` -> `Unknown` translation everywhere we needed to. And in
fact we did fail to do it in the case of materializing a callable,
leading to a top-materialized callable still having (Rust) `None` return
type, which should have instead materialized to `object`.
This also fixes several other bugs related to not handling un-annotated
return types correctly:
1. We previously considered the return type of an unannotated `async
def` to be `Unknown`, whereas it should be `CoroutineType[Any, Any,
Unknown]` (see the example after this list).
2. We previously failed to infer a ParamSpec if the return type of the
callable we are inferring against was not annotated.
3. We previously wrongly returned `Unknown` from `some_dict.get("key",
None)` if the value type of `some_dict` included a callable type with
un-annotated return type.
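For the first of these, an illustrative example (written in the style of the mdtests):
```python
async def f(): ...

# Previously the call was considered to have type `Unknown`; now it has type
# `CoroutineType[Any, Any, Unknown]`: a coroutine whose result type is the
# unannotated (hence `Unknown`) return type.
reveal_type(f())
```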
We now make signature return types and annotated parameter types
required, and we eagerly insert `Unknown` if there's no annotation. Most
of the diff is just a bunch of mechanical code changes where we
construct these types, and simplifications where we use them.
One exception is type display: when a callable type has un-annotated
parameters, we want to display them as un-annotated, but if it has a
parameter explicitly annotated with something of `Unknown` type, we want
to display that parameter as `x: Unknown` (it would be confusing if it
looked like your annotation just disappeared entirely).
Fortunately, we already have a mechanism in place for handling this: the
`inferred_annotation` flag, which suppresses display of an annotation.
Previously we used it only for `self` and `cls` parameters with an
inferred annotated type -- but we now also set it for any un-annotated
parameter, for which we infer `Unknown` type.
We also need to normalize `inferred_annotation`, since it's display-only
and shouldn't impact type equivalence. (This is technically a
previously-existing bug, it just never came up when it only affected
self types -- now it comes up because we have tests asserting that `def
f(x)` and `def g(x: Unknown)` are equivalent.)
## Test Plan
Added mdtests.