@beartype 0.19.0 gently glides into your CI workflow for a crash miracle landing. Engines go brrrrrrrr.
@beartype 0.19.0 narrowly avoids the grazing sheep in this terrifying metaphor.
@beartype 0.19.0 invites you to experience either the future of QA or a new catastrophe for QA – all from the comfort of your (t)rusty keyboard. It now thrums with untold power and the lurid afterglow of our out-of-control release cycle.
```bash
pip install --upgrade beartype  # <-- engines hit full throttle, stomach hits full empty
```
@beartype 0.19.0 is proudly brought to you by...
GitHub Sponsors: When You Befriend the Bear, You Got a Bear
This release comes courtesy of these proud GitHub Sponsors, without whom @leycec's cats would currently be eating grasshoppers:
- @sesco-llc (SESCO Enterprises), "The Power of Innovation in Trading": this inspires me to get out of the house and do something. https://sescollc.com
- @DylanModesitt (Dylan Modesitt), quantitative strategies energy trading associate: ...wikipedia, don't fail me now! https://dylanmodesitt.com
- @tactile-metrology (Tactile Metrology), "Software and hardware that you can touch." When I want to be touched by software and hardware, I call @tactile-metrology: https://metrolo.gy imagine if this domain actually worked. how cool would that be!?
Thanks so much, masters of fintech and metrology.
The Masters of Fintech and Metrology. That's who.
@beartype 0.19.0: What Broke This Time?
Probably, a whole lot. Hopefully, a whole little. The truth lies in the middle.
@beartype 0.19.0 sidles up to your codebase in its blind spot with something suspicious in its paws. Questionable new features include:
- `beartype.door.infer_hint()`: let BeartypeAI™ write your type hints for you, because you no longer have der Wille zur Macht to constantly deal with all this [redacted pejorative]:

  ```python
  # I've got a crazy object here, @beartype. What's the crazy type hint that
  # matches my crazy object? This is gonna really suck. I can *FEEL* it coming
  # through my monitor tonight.
  >>> beartype.door.infer_hint(pygments.lexers.PythonLexer().tokens["root"])
  list[typing.Union[tuple[str | collections.abc.Callable[
      typing.Concatenate[object, object, ...], object], ...], tuple[str |
      pygments.token._TokenType[str], ...], typing.Annotated[
      collections.abc.Collection[str], beartype.vale.IsInstance[
      pygments.lexer.include]]]]  # <-- I have no idea. Neither does that cute intern.
  ```
- `beartype.claw.beartype_all()` + `BeartypeConf(claw_skip_package_names)`: a single one-liner type-checks your entire app stack at runtime or test-time while ignoring problematic third-party packages that inexplicably hate @beartype for "reasons":

  ```python
  beartype_all(conf=BeartypeConf(claw_skip_package_names=('bad_package', 'dumb.submodule')))
  ```
- `**kwargs: int | str`: @beartype type-checks annotated variadic keyword arguments! yay! uhh... wait. wasn't @beartype always doing that? these emoji suggest otherwise: 😄 → 😭

  ```python
  def i_am_simply_shocked(**kwargs: int | str): ...  # <-- @beartype actually checks this now
  ```
- Deeper `O(1)` type-checking. @beartype 0.19.0 now deeply type-checks type hints like:
  - `frozenset[...]`
  - `set[...]`
  - `collections.ChainMap[...]`
  - `collections.Counter[...]`
  - `collections.deque[...]`
  - `collections.abc.Collection[...]`
  - `collections.abc.ItemsView[...]`
  - `collections.abc.KeysView[...]`
  - `collections.abc.MutableSet[...]`
  - `collections.abc.Set[...]`
  - `collections.abc.ValuesView[...]`
  - `typing.AbstractSet[...]`
  - `typing.ChainMap[...]`
  - `typing.Collection[...]`
  - `typing.Counter[...]`
  - `typing.Deque[...]`
  - `typing.FrozenSet[...]`
  - `typing.ItemsView[...]`
  - `typing.KeysView[...]`
  - `typing.MutableSet[...]`
  - `typing.Set[...]`
  - `typing.ValuesView[...]`
- Shallow `O(1)` type-checking support for exciting (yet wildly unpopular) PEP standards that nobody uses. @beartype 0.19.0 now quietly ignores these PEPs without throwing up everywhere:
  - PEP 612 – Parameter Specification Variables (e.g., `def muh_decorator_closure(*args: P.args, **kwargs: P.kwargs):`).
  - PEP 646 – Variadic Generics (e.g., `Ts = typing.TypeVarTuple('Ts')`).
  - PEP 692 – Using `TypedDict` for more precise `**kwargs` typing (e.g., `def muh_kwargs_func(**kwargs: typing.Unpack[MuhTypedDict]):`).
- Official `multiprocessing` support. @beartype 0.19.0 now officially supports fork-based distributed workloads based on the awful `pickle` module. 🤣
- Third-party decorator integration. @beartype 0.19.0 now officially supports popular Just-in-Time (JIT) decorators for machine learning (ML) like:
  - `@equinox.filter_jit`
  - `@jax.jit`
  - `@numba.njit`
- Python 3.13 + `--disable-gil` + `--enable-experimental-jit`. Pinch me, I must be hyperventilating into a paper bag again.
- Sane build toolchain: from `setuptools` + `setup.py` 🤮 to Hatch + `pyproject.toml`. 🥂
- Sane publishing toolchain: from antiquated GitHub Actions tokens 🤮 to PyPI-specific "Trusted Publishers". 🥂
- Critical bug resolutions: blah, blah. Who cares. I'm tired. So are you.
@beartype 0.19.0 feature list mollifies even the unruly pirate crowd in the back
`infer_hint()`: Introducing BeartypeAI™, Your Chummy QA Pal
Chummy Unpaid QA Pal BeartypeAI™ is on the job and grumbling already about union overtime. Because your team hates type hints (and you're reluctantly starting to admit they might be onto something), BeartypeAI™ does what nobody else wants to do. Authoring type hints is a thankless janitorial fetch quest that smells bad and consumes your last will to code.
Allow our new `beartype.door.infer_hint()` function to automate your ongoing stomach pain away. Type hints may be like that kidney stone the size of your mother-in-law's big head, but that's no reason to curl up on a gurney clutching your side in blinding agony. But first:
"What are type hints, really – aside from the second-worst ongoing maintenance nightmare in Python next to
pyproject.toml
semantic versioning dependency bumps?"
Type hints are the most compact description of the internal structure of your objects. If you know the type hint of an object, you know the object better than the object knows itself. Type hints are the ultimate self-documentation. Unlike docstrings, type hints never lie – because when they do, @beartype breaks your app. Type hints are both human-readable and machine-readable. They're literally the only thing that is.
Many type hints are trivial to write. All of us can sling around breezy `list[int] | None`
type hints while yawning. It's not impressive. My cats can write that type hint with one sleepless eye open. Seriously. Why do cats sleep with one eye open, anyway? Doesn't that kinda defeat the purpose of... I dunno, sleeping? Must suck to be a paranoid cat. Uhhh. Back to the discussion.
Some type hints, however, are non-trivial. You can't write them. Nobody can. They contain more square brackets than an 80's ASCII roguelike with an understated name like Death Gehenna or Eternal Furnaces of NetSwargy. When your type hint looks like this, the end of code maintainability cannot be far:
that feeling when your type hints resemble incomprehensible toddlers
Moreover, you don't even know the internal structure of most objects. Somebody else wrote those objects. They forgot how those objects worked a hot minute after clocking out at 4:12AM seven days deep into a crunch-time death march last January. They documented how those objects worked, but their documentation doesn't make sense and lies about everything. Now nobody knows how those objects work.
But what if somebody did know how those objects work? What if somebody knew Python better than Python knew itself? Introducing... somebody.
`infer_hint()`: Deep Introspection for the Deep Code Diver
Let `@beartype` ease your weary burden, traveller. It is dangerous to go alone:
```python
# Crazy object you could understand. But... ain't nobody got that kinda time.
>>> from pygments.lexers import PythonLexer
>>> root_tokens = PythonLexer().tokens["root"]

# I've got a crazy object here, @beartype. What's the crazy type hint that
# matches my crazy object? This is gonna really suck. I can *FEEL* it coming
# through my monitor tonight.
>>> from beartype.door import infer_hint
>>> infer_hint(root_tokens)
list[  # <-- what could possibly go wrong?
    typing.Union[  # <-- sucky stuff starts
        tuple[str | collections.abc.Callable[typing.Concatenate[object, object, ...], object], ...],  # <-- sucky stuff intensifies
        tuple[str | pygments.token._TokenType[str], ...],  # <-- so much sucky stuff
        typing.Annotated[collections.abc.Collection[str], beartype.vale.IsInstance[pygments.lexer.include]]  # <-- go to heck, typing!
    ]  # <-- a pox on your square brackets
]  # <-- i have no idea and neither do you
```
...uhh. If you say so, `@beartype`. I guess? </weeps_in_square_bracket_hell>
codebase narrowly dodges another inexpert potshot from pygments
BeartypeAI™: Even a Broken Algorithm is Right Twice a Commit
`beartype.door.infer_hint()` (i.e., the algorithm hereafter known simply as BeartypeAI™) knows all about deep introspection of arbitrarily complex objects. Here's what BeartypeAI™ knows:
BeartypeAI™: "I know that your type hints kinda suck. Wait... where are your type hints? Oh, Gods. You don't type hint. I'm panicking. I'm panicking."
You: "Tell my team something my team don't know, BeartypeAI™. We hate type hints. So we don't type hint. We code instead. You should try it sometime. But... hey, aren't you a type-checking bear? Can you even code with those fat paws?"
BeartypeAI™: "Hold my QA beer."
BeartypeAI™ is here to tell you something you don't know and wouldn't care about even if you did.
BeartypeAI™ is here to write your type hints for you. Why? Because you hate type hints. Just:
- Feed `beartype.door.infer_hint()` arbitrarily complex objects.
- Annotate those objects with the type hints it regurgitates.
It's impossible to unpack how much madness is happening inside BeartypeAI™. Let's try anyway.
```python
>>> from beartype.door import infer_hint  # <-- all your dreams begin here

# Show me the type hint describing a useless object, @beartype!
>>> infer_hint(object())
<class 'object'>  # <-- makes sense

# Show me the type hint describing a useless type, @beartype!
>>> infer_hint(object)
type[object]  # <-- so. cool.

# Show me the type hint describing a list of strings, @beartype!
>>> infer_hint(['expose', 'extreme', 'explosions!',])
list[str]  # <-- hole in one

# Show me the type hint describing a tuple of crazy stuff, @beartype!
>>> infer_hint((b'heh', [0xBEEEEEEEF, 'ohnoyoudont',]))
tuple[bytes, list[int | str]]  # <-- no idea, but i trust it

# Show me the type hint describing an insane recursive list, @beartype!
>>> recursive_list = ['this is fine', b'but...',]
>>> recursive_list.append(recursive_list)
>>> infer_hint(recursive_list)
list[str | bytes | beartype.door._func.infer.inferhint.BeartypeInferHintContainerRecursion]  # <-- just go with it
```
BeartypeAI™ pounds back another as your `git log` explodes in the distance
Homebrew Collection Classes: Annotate the Unannotatable
We've all been there. Some "genius"-tier wise guy devbro invented their own homebrew pure-Python collection called `WeirdoCustomList` without subclassing a standard `collections.abc` abstract base class. Sure, they could have just subclassed `collections.abc.MutableSequence` to write their special-needs alternative to the builtin `list` type. That would have been too easy. They're a masochist, so they wrote everything from scratch.
Homebrew collections are more common than you think. They're friggin' everywhere! They're multiplying like meat flies! The cat's choking on bloody homebrew collections! Look above, for example. See that nasty `typing.Annotated[collections.abc.Collection[str], IsInstance[pygments.lexer.include]]` type hint that BeartypeAI™ wrote for you? Yeah.
That's right. The public `pygments.lexer.include` type is actually a homebrew collection. They thought they were being smart. Sadly, they were being dumb. Because they wrote a homebrew collection from scratch, their collection type is unsubscriptable. You can't subscript it with child type hints like with `list[str]`, so you can't use their collection type as a type hint factory to write type hints, so you can't actually validate their homebrew collection. In fact, you can't validate any homebrew collections.
...until now. BeartypeAI™ knows all about homebrew collections. BeartypeAI™ knows that you can actually validate homebrew collections – but only if you write a custom beartype validator leveraging the PEP 593-compliant typing.Annotated[...]
type hint factory in concert with the @beartype-specific beartype.vale.IsInstance[...]
validator. Please don't do this manually. You value your precious life force that is leaking all over your keyboard as we speak.
That's what the typing.Annotated[collections.abc.Collection[str], IsInstance[pygments.lexer.include]]
type hint is all about. BeartypeAI™ correctly detected that this homebrew collection is actually just an instance of the pygments.lexer.include
class that is externally usable as a collection of strings.
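If you're skeptical, you can sanity-check the inferred hint yourself. A minimal sketch (assuming `pygments` is installed; `include` subclasses `str`, so any instance is trivially a collection of strings):

```python
from collections.abc import Collection
from typing import Annotated

from beartype.door import is_bearable
from beartype.vale import IsInstance
from pygments.lexer import include

# The hint BeartypeAI™ inferred above, rebuilt by hand.
include_hint = Annotated[Collection[str], IsInstance[include]]

# An "include" instance really does satisfy that hint...
assert is_bearable(include('root'), include_hint)

# ...while a plain string (not an "include") does not.
assert not is_bearable('root', include_hint)
```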
Let's get more explicit. Nobody understands `pygments` – not even `pygments`. Instead, consider...
BeartypeAI™ casually flings itself above another picturesque yet ultimately incomprehensible codebase
Exhibit (A) – Weirdo Custom List
It's... it's hideous!
```python
from beartype.door import infer_hint
from collections.abc import Iterable, Iterator

# Define a weirdo custom list.
class WeirdoCustomList(object):
    '''
    Weirdo custom list. The one. The only.

    Weirdo custom list pretends to mean no harm. Weirdo custom list is
    lonely at night and only wants to be snuggle-friends. *How could you
    refuse weirdo custom list in its hour of need?*
    '''

    def __init__(self, items: list) -> None: self._items = items
    def __contains__(self, item: object) -> bool: return item in self._items
    def __iter__(self) -> Iterator: return iter(self._items)
    def __len__(self) -> int: return len(self._items)
    def __getitem__(self, index: int) -> object: return self._items[index]
    def __reversed__(self) -> Iterator: return reversed(self._items)
    def count(self, item: object) -> int: return self._items.count(item)
    def index(self, *args, **kwargs) -> int:
        return self._items.index(*args, **kwargs)
    def __delitem__(self, index: int) -> None: del self._items[index]
    def __setitem__(self, index: int, item: object) -> None:
        self._items[index] = item
    def __iadd__(self, items: Iterable) -> 'WeirdoCustomList':
        self._items += items
        return self  # <-- "__iadd__" must return "self", or "+=" rebinds to None
    def append(self, item: object) -> None: self._items.append(item)
    def clear(self) -> None: self._items.clear()
    def extend(self, items: Iterable) -> None: self._items.extend(items)
    def insert(self, index: int, item: object) -> None:
        self._items.insert(index, item)
    def pop(self, *args, **kwargs) -> object:
        return self._items.pop(*args, **kwargs)
    def remove(self, item: object) -> None: self._items.remove(item)
    def reverse(self) -> None: self._items.reverse()

# Infer the type hint for a weirdo custom list of strings.
print(infer_hint(WeirdoCustomList([
    'No way,', '@beartype.', 'No.', "Friggin'.", 'Way.'])))
```
...which prints:
```python
typing.Annotated[collections.abc.MutableSequence[str], IsInstance[WeirdoCustomList]]
```
"What's so hot about that?", you may now be thinking. Allow me to now pontificate boringly.
WeirdoCustomList
isn't subscriptable. It's not a type hint factory. Moreover, despite being a mutable sequence, WeirdoCustomList
doesn't actually subclass the standard collections.abc.MutableSequence
protocol. Yet, BeartypeAI™ correctly detected that this particular weirdo custom list is a mutable sequence of strings. How? It's best not to ask weirdo custom list these questions. 🤣
wake up BeartypeAI™ when those type hints start making sense
`Callable[...]` Type Hints: Gods, They Suck.
Annotating callables (especially callbacks) with PEP-compliant Callable[...]
type hints is basically impossible. Personally, I've never gotten a single Callable[...]
type hint to work right. They never match the callables they're supposed to when I write them myself. mypy
and pyright
always vomit all over themselves and then me. I mostly just give up now and use the unsubscripted collections.abc.Callable
abstract base class instead of full-blown Callable[...]
type hints...
...until now. BeartypeAI™ knows literally everything there is to know about annotating callables. What doesn't BeartypeAI™ know? Well, friends:
- BeartypeAI™ knows that a `lambda` function accepting no parameters is annotated as...

  ```python
  >>> infer_hint(lambda: "No. Friggin'. Way.")
  collections.abc.Callable[[], object]  # <-- woah
  ```

- BeartypeAI™ knows that a `lambda` function accepting multiple parameters is annotated as...

  ```python
  >>> infer_hint(lambda you, will, believe: "@beartype, I am your code father.")
  collections.abc.Callable[[object, object, object], object]  # <-- i don't know what's happening here, but i like it
  ```

- BeartypeAI™ knows that a normal function accepting two mandatory annotated parameters and one optional annotated parameter is "best" annotated with a PEP 612-compliant `typing.Concatenate[...]` subscription as...

  ```python
  >>> def i_am_tired(this: float, so: int, boring: str = "y u so boring, @beartype!?"): ...
  >>> infer_hint(i_am_tired)
  collections.abc.Callable[typing.Concatenate[float, int, ...], object]  # <-- don't ask, just accept.
  ```

- BeartypeAI™ knows that a decorator wrapper function accepting a PEP 612-compliant parameter specification is annotated as...

  ```python
  >>> from typing import ParamSpec
  >>> P = ParamSpec('P')
  >>> def so_param_so_spec(*args: P.args, **kwargs: P.kwargs): ...
  >>> infer_hint(so_param_so_spec)
  collections.abc.Callable[~P, object]  # <-- yer frickin' blowin' mah mind here, yo
  ```

- BeartypeAI™ knows that a decorator wrapper function accepting two mandatory annotated parameters followed by a PEP 612-compliant parameter specification is annotated with a PEP 612-compliant `typing.Concatenate[...]` subscription as...

  ```python
  >>> from typing import ParamSpec
  >>> P = ParamSpec('P')
  >>> def more_param_more_spec(
  ...     go_crazy: int,
  ...     dont_mind_if_i_do: str,
  ...     *args: P.args,
  ...     **kwargs: P.kwargs
  ... ): ...
  >>> infer_hint(more_param_more_spec)
  collections.abc.Callable[typing.Concatenate[int, str, ~P], object]  # <-- pretty sure the universe just exploded
  ```
What I'm trying to say here is that BeartypeAI™ knows all and sees all and doesn't like what it sees, but is still doing its best for everybody. It knows more than me. It probably knows more than even you, even though you know everything. That's how much BeartypeAI™ knows.
BeartypeAI™ takes the high road when it comes to `Callable[...]` type hints
Tensor Type Hints: Gods, They Suck Too.
Tensor type hints really suck. So your team wants to annotate NumPy, JAX, PyTorch, or TensorFlow arrays, huh? That's a perfectly reasonable request. Too bad, though. Because tensor type hints suck.
Tensor type hints suck so bad you have to use third-party packages like jaxtyping
just to make them work, despite the fact that both NumPy and JAX ship type hint-centric subpackages like numpy.typing
and jax.typing
that are supposed to make tensor type hints "just work." Of course, tensor type hints don't "just work." They don't even work...
...until now. BeartypeAI™ knows literally everything there is to know about annotating tensor type hints. Actually, that's a lie. I really wanted BeartypeAI™ to know literally everything there is to know about annotating tensor type hints in time for @beartype `0.19.0rc1`. Sadly, I played video games instead. I only got around to implementing BeartypeAI™ support for inferring NumPy tensor type hints.
Still, NumPy is better than nothing. One out of four ain't bad. Right? ...anybody? 😮💨
```python
# Define the greatest NumPy array that has ever existed.
>>> from numpy import asarray
>>> best_array_is_best = asarray((1, 0, 3, 5, 2, 6, 4, 9, 2, 3, 8, 4, 1, 3, 7, 7, 5, 0,))

# Create a type hint validating that array, @beartype! Look. Just do it.
>>> from beartype.door import infer_hint
>>> infer_hint(best_array_is_best)
typing.Annotated[numpy.NDArray[int], beartype.vale.IsAttr['ndim', beartype.vale.IsEqual[1]]]  # <-- wtf, @beartype
```
And... that's the type hint. That type hint requires no third-party dependencies. It's all BeartypeAI™, all one-liner. Nobody's writing that sort of gruelling bracket hell on their own. Not even @leycec. Just let somebody else do your suffering for you. That somebody is BeartypeAI™. Who knew?
your codebase grips its hat as BeartypeAI™ boldly shows off for no reason
`infer_hint()` Time Complexity: All Roads Lead to O(1)
`infer_hint()` is the first @beartype API to respect the long-standing `BeartypeConf(strategy=BeartypeStrategy.O*)` configuration option. Previously, all @beartype APIs defaulted to `O(1)` constant-time behaviour by randomly sampling container items for improved scalability. In the future, all @beartype APIs will allow you to customize this behaviour by specifying alternate iteration strategies like:

- `O(n)` linear-time behaviour, in which @beartype exhaustively examines all possible container items with recursion.
- `O(log n)` logarithmic-time behaviour, in which @beartype recursively examines only a logarithmic subset of all possible container items – a scalable compromise between non-deterministic `O(1)` immediacy and deterministic `O(n)` lethargy.
Now, `infer_hint()` is the first @beartype API to fully support two of those three strategies, as sketched after this list. Witness as history unfolds with a discomfiting "plop!":

- (Default) `infer_hint(obj)` is equivalent to `infer_hint(obj, conf=BeartypeConf(strategy=BeartypeStrategy.On))`. Under the `O(n)` strategy, `infer_hint()` exhaustively examines all possible container items with recursion. To infer authoritative type hints from interactive REPLs and Jupyter Notebooks, `infer_hint()` differs from the remainder of the @beartype codebase by defaulting to `O(n)`-style linear-time iteration. This is generally what most users "probably" want when inferring type hints. Since you are reading this, you are not one of those users.
- `infer_hint(obj, conf=BeartypeConf(strategy=BeartypeStrategy.O1))`. Under the `O(1)` strategy, `infer_hint()` pseudo-randomly examines only a single container item at each nesting level. This is generally what algorithms like structural similarity (see below) and multiple dispatch (also see below) want. To activate the Hyperlight Drive, these use cases want to explicitly pass a `conf` enabling the `O1` strategy.
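A minimal sketch of both strategies in action (the million-item list is hypothetical; the API is exactly as described above):

```python
from beartype import BeartypeConf, BeartypeStrategy
from beartype.door import infer_hint

# Hypothetical data: a big homogeneous list.
big_list = list(range(1_000_000))

# Default O(n) strategy: exhaustively examines every item. Authoritative, slow.
print(infer_hint(big_list))  # list[int]

# Explicit O(1) strategy: pseudo-randomly samples one item per nesting level. Fast.
print(infer_hint(big_list, conf=BeartypeConf(strategy=BeartypeStrategy.O1)))  # list[int]
```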
`BeartypeStrategy.O1`: punch it, bald man!
@leycec punches it with fear in his heart
`infer_hint()` Use Cases: Where We Pontificate Both Laconically and Loquaciously
What do those words even mean? Doesn't matter. Thankfully, what does matter is that type hint inference has real-world use cases that far exceed just "write my type hints for me, cause i h8 type hints m8. fr!" Just take a gander at these algorithmic goodies:
- Worst-case `O(1)` structural similarity comparison – faster even than `==`-based equality comparison between arbitrary objects, which has worst-case `O(n)` linear-time complexity and thus scales poorly. Hyper-fast object comparison is what we sayin'.
- Best- and average-case `O(1)` and worst-case `O(k)` multiple dispatch for `k` the number of callables being dispatched to – probably the fastest multiple-dispatch algorithm in any language. Hyper-fast dispatch is what we still sayin'.
Let's plumb these depths like Mario on a Piranha plant pipe bender.
@beartype smokes two bug-filled joints. then, @beartype smokes two more.
Use Case #1: Structural Similarity (So It Does That Too Now, Huh?)
In the beginning, there was:
# The "is" operator. Test whether two objects are literally identical.
>>> "I like big bugs and I cannot lie." is "I like big bugs and I cannot lie."
True
# The "==" operator. Test whether two objects are semantically identical.
>>> ['Other', 'devbros', 'may', 'deny',] == ['Other', 'devbros', 'may', 'deny',]
True
But what if you want to test whether two objects are merely structurally similar (i.e., have a similar internal structure but are neither literally nor semantically identical)? Without BeartypeAI™, you can't do that. But you have BeartypeAI™. You no longer have to accept the mouldy table scraps that the standard Python library has left you.
Structural similarity compares the large-scale "shape" of two objects without regard for the small-scale minutiae (like the exact items) in those objects.
Structural similarity thus combines:
- All the benefits of the `is` operator (like `O(1)` time complexity when calling `infer_hint(obj, conf=BeartypeConf(strategy=BeartypeStrategy.O1))`) with...
- All the benefits of the `==` operator (like actually computing meaningful work) with...
- None of the disadvantages of either.
Tensors offer a useful way to understand structural similarity: "Do two tensors have the same dtype
(i.e., type of all items in a tensor) and ndim
(i.e., dimensionality)? If yay, those two tensors are structurally similar; if nay, those two tensors are structurally dissimilar."
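A quick sketch of that tensor intuition using the NumPy support shown earlier (this assumes inferred `Annotated[...]` hints compare by value, which is exactly the exact-similarity idiom in the cheatsheet below):

```python
from beartype.door import infer_hint
from numpy import asarray

# Same dtype (int) and same ndim (1): structurally similar.
assert infer_hint(asarray([1, 2, 3])) == infer_hint(asarray([4, 5]))

# Same dtype (int) but different ndim (1 vs. 2): structurally dissimilar.
assert infer_hint(asarray([1, 2])) != infer_hint(asarray([[1], [2]]))
```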
There are two different kinds of structural similarity, broadly speaking:
- Exact structural similarity, in which you test whether two objects have the exact same "shape": e.g.,

  ```python
  >>> from beartype.door import infer_hint, is_bearable  # <-- boring stuff

  # Excitement builds. Declare data structures for great glory of your code.
  >>> awesome_data_structure = [{"hoh, boy": int, "lol, golgo 13": [lambda: None]}, 0xFAAAAAACE]
  >>> baleful_data_structure = [0xDEAFDEFF, {"nopleaseno": [lambda: False], "NOOOO!": object}]

  # Describe the internal structure of your final masterpiece.
  >>> awesome_hint = infer_hint(awesome_data_structure)
  >>> awesome_hint
  list[int | dict[str, list[collections.abc.Callable[[], object]] | type[int]]]  # <-- ok
  >>> baleful_hint = infer_hint(baleful_data_structure)
  >>> baleful_hint
  list[int | dict[str, type[object] | list[collections.abc.Callable[[], object]]]]  # <-- whateva you say, bear

  # Do these two objects have the exact same internal structure?
  >>> awesome_hint == baleful_hint
  False  # <----- no dice, huh? *sigh*
  ```
- Substructural similarity, in which you test whether one object has a "shape" that "fits inside" that of another object. As the prior example illustrates, two data structures can have a similar internal structure but ultimately differ on a specific detail that nobody particularly cares about. Whereas `awesome_hint` contains a dictionary mapping to integers, `baleful_hint` contains a dictionary mapping to mere objects. You are now thinking: "Integers are objects, you doltish man. Shouldn't we be able to ignore these awkward trivialities?" I object to being called a dolt while admitting you make a point. Make things vaguer by harnessing the perfidious power of `beartype.door.infer_hint()` + `beartype.door.is_bearable()`. Arise, substructural similarity! A new darkness!

  ```python
  >>> from beartype.door import infer_hint, is_bearable  # <-- boring stuff

  # Excitement builds. Declare data structures for great glory of your code.
  >>> awesome_data_structure = [{"uhh...": int, "wat!?!": [lambda: None]}, 0xFEEEEEEED]
  >>> baleful_data_structure = [0xBABEEEE, {"ohgod": [lambda: True], "NOOOO!": object}]

  # Describe the internal structure of your great glory.
  >>> infer_hint(awesome_data_structure)
  list[int | dict[str, list[collections.abc.Callable[[], object]] | type[int]]]  # <-- ok
  >>> infer_hint(baleful_data_structure)
  list[int | dict[str, type[object] | list[collections.abc.Callable[[], object]]]]  # <-- i don't know. sure, i guess?

  # Do these two objects have a similar internal structure?
  >>> is_bearable(baleful_data_structure, infer_hint(awesome_data_structure))
  True  # <----- WTF-F-F-F-F-
  ```
Structural similarity cheatsheet, because the one-liner is a harsh mistress (see the helpers sketched after this list):

- Exact structural similarity is `infer_hint(obj_1) == infer_hint(obj_2)`.
- Substructural similarity is `is_bearable(obj_1, infer_hint(obj_2))`.
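If you'd rather not retype the one-liners, here's a minimal pair of hypothetical helpers wrapping that cheatsheet:

```python
from beartype.door import infer_hint, is_bearable

def is_structurally_equal(obj_1: object, obj_2: object) -> bool:
    '''Exact structural similarity: the two objects have the same "shape".'''
    return infer_hint(obj_1) == infer_hint(obj_2)

def is_substructurally_similar(obj_1: object, obj_2: object) -> bool:
    '''Substructural similarity: obj_1's "shape" fits inside obj_2's.'''
    return is_bearable(obj_1, infer_hint(obj_2))

assert is_structurally_equal([1, 2], [3, 4])           # both infer as list[int]
assert is_substructurally_similar([1, 2], [object()])  # int fits inside object
```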
Structural similarity: when you care about what your objects care about.
say goodbye to expensive comparisons that never really liked you anyway
Use Case #2: Single Dispatch (Dis Ain't Yo Momma's Dispatch)
"Dispatch" is a common decision problem in... well, basically any modern language that matters. So, not C. i have no regrets for igniting this flame war
Everyone's familiar with single-dispatch polymorphism, whereby an object-oriented language dynamically routes a call of an object's method to that object's "deepest" subclass overriding that method. In Python, we call this the method-resolution order (MRO) of an object. It's quite boring and pedantic stuff, really. I personally wouldn't click any of those links – especially not on the weekend.
But what if you want to perform single-dispatch outside of a class hierarchy that you directly control? Moreover, what if you want to single-dispatch on arbitrary type hints deeply describing the internal structures of objects? Classes are superficial; they fail to fully convey the types of items contained in instances of those classes, which is why we do this type hint thing.
Moreover, what if we want to perform multiple-dispatch, whereby the callable that is dynamically routed to (i.e., called) depends not simply on the type of a single object but an arbitrary number of objects? This is the dynamical Hell we now find ourselves in.
Interestingly, it turns out that combining the beartype.door.infer_hint()
+ beartype.door.is_bearable()
functions trivially yields highly efficient O(1)
algorithms that transparently implement both single- and multiple-dispatch. First, the full-throttle single dispatch algorithm:
```python
from beartype import BeartypeConf, BeartypeStrategy, beartype
from beartype.door import infer_hint, is_bearable
from collections.abc import Callable

_CONF_STRATEGY_O1 = BeartypeConf(strategy=BeartypeStrategy.O1)
'''
Beartype configuration enabling :math:`O(1)` constant-time random sampling.
'''

class DispatchException(Exception):
    '''Exception raised when no callable matches the passed object.'''

@beartype
def single_dispatch(obj: object, dispatcher: dict[object, Callable]) -> Callable:
    '''
    Callable suitable for dispatching the passed object from a type hint of the
    passed dispatch dictionary.

    Parameters
    ----------
    obj : object
        Object to be dispatched.
    dispatcher : dict[object, Callable]
        **Dispatch dictionary** (i.e., dictionary mapping from various type hints
        to corresponding callables dispatching the passed object when that object
        is validated by those type hints).
    '''

    # O(1) type hint inference, I choose you!
    obj_hint = infer_hint(obj, conf=_CONF_STRATEGY_O1)

    # Go for the O(1) short-circuit, @beartype. Do it.
    dispatch_callable = dispatcher.get(obj_hint)
    if dispatch_callable and is_bearable(obj, obj_hint):
        return dispatch_callable

    # Oh, noes! Disaster. Fallback to the O(k) iteration. Pretend this is okay.
    for dispatch_hint, dispatch_callable in dispatcher.items():
        if is_bearable(obj, dispatch_hint):
            # Inject this inferred type hint and corresponding callable back into
            # the dispatch dictionary, reducing the next call of this function
            # passed a similar object to the O(1) short-circuit above. This
            # guarantees amortized O(1) time complexity. </high_fives_all_around>
            dispatcher[obj_hint] = dispatch_callable
            return dispatch_callable

    raise DispatchException(f'Passed object {repr(obj)} sucks. Blowing everything up!')
```
This exhibits time complexity:

- Amortized worst-case `O(1)`. oh by gods
- Non-amortized best- and average-case `O(1)`. the gods glare with envy
- Non-amortized worst-case `O(k)` for `k` callables being dispatched to. the gods smugly look down and snicker
Generally speaking, we expect k
to be small in the average case – like, k < 10
small. So this is basically O(1)
single-dispatch in even the non-amortized worst case.
Totally realistic and compelling usage resembles something like:
```python
# User-defined callables to be dispatched to. Excitement builds.
def join_list_of_strs(lst: list[str]) -> str:
    return ''.join(lst)

def join_list_of_bytes(lst: list[bytes]) -> bytes:
    return b''.join(lst)

# User-defined object to be dispatched on. Excitement peaks.
list_of_things = [b'This. ', b'String. ', b'Bytes.']

# User-defined callable suitable for this object. Excitement subsides.
join_list_of_things = single_dispatch(list_of_things, dispatcher={
    list[str]: join_list_of_strs,
    list[bytes]: join_list_of_bytes,
})

# Pass this object to this callable. Excitement is in the gutter now.
assert join_list_of_things(list_of_things) == b'This. String. Bytes.'
```
That's probably the fastest possible single-dispatch algorithm in any language. But here's where the bullet train really goes off the rails...
@beartype flies foolishly close to the fathomless void so you don't have to
Use Case #3: Multiple Dispatch (Dat Sasquatch Ain't Got Nuthin' on Us)
The single-dispatch algorithm trivially generalizes to multiple-dispatch as well. How? With cleverness, grit, and twin handlebar moustaches. 👨 👨 ← gritty moustache twins
Let's reduce the multiple-dispatch case (of dispatching over multiple objects) to the single-dispatch case (of dispatching on a single object) by concatenating those multiple objects into a single object. Specifically, let's encapsulate those multiple objects into a tuple. Tuples are highly space- and time-efficient in Python. More importantly, tuples whose items are hashable are themselves hashable. By subscripting fixed-length tuple[...]
types by the multiple type hints (almost all of which are hashable) to be dispatched across, we can actually leverage the almost exact same algorithm as above to perform O(1)
multiple dispatch:
```python
from beartype import BeartypeConf, BeartypeStrategy, beartype
from beartype.door import infer_hint, is_bearable
from collections.abc import Callable

_CONF_STRATEGY_O1 = BeartypeConf(strategy=BeartypeStrategy.O1)
'''
Beartype configuration enabling :math:`O(1)` constant-time random sampling.
'''

class DispatchException(Exception):
    '''Exception raised when no callable matches the passed objects.'''

@beartype
def multiple_dispatch(
    *args: object, dispatcher: dict[object, Callable]) -> Callable:
    '''
    Callable suitable for dispatching all passed objects from a type hint of
    the passed dispatch dictionary.

    Parameters
    ----------
    *args : object
        Tuple of all objects to be dispatched.
    dispatcher : dict[object, Callable]
        **Dispatch dictionary** (i.e., dictionary mapping from various type hints
        to corresponding callables dispatching the passed objects when those
        objects are validated by those type hints).
    '''

    # O(1) type hint inference, I choose you!
    obj_hint = infer_hint(args, conf=_CONF_STRATEGY_O1)

    # Go for the O(1) short-circuit, @beartype. Do it.
    dispatch_callable = dispatcher.get(obj_hint)
    if dispatch_callable and is_bearable(args, obj_hint):
        return dispatch_callable

    # Oh, noes! Disaster. Fallback to the O(k) iteration. Pretend this is okay.
    for dispatch_hint, dispatch_callable in dispatcher.items():
        if is_bearable(args, dispatch_hint):
            # Inject this inferred type hint and corresponding callable back into
            # the dispatch dictionary, reducing the next call of this function
            # passed similar objects to the O(1) short-circuit above. This
            # guarantees amortized O(1) time complexity. </high_fives_all_around>
            dispatcher[obj_hint] = dispatch_callable
            return dispatch_callable

    raise DispatchException(f'Passed objects {repr(args)} suck. Blowing everything up!')
```
That's... literally the exact same function. The signature just accepts variadic positional arguments *args
rather than a single obj
. Whatevah!
Crucially, this is still amortized worst-case O(1)
and non-amortized worst-case O(k)
multiple-dispatch for k
the number of callables being dispatched. In other words, we have a cardinality-invariant dispatch algorithm. The time complexity of this algorithm is unconditionally O(k)
regardless of the number of objects being dispatched over or the size of those objects. Since we generally expect k
to be small, this is still basically O(1)
multiple-dispatch. wuuuuuuuuuut
Totally realistic and compelling usage intensifies holistically:
```python
# User-defined callables to be dispatched to. Excitement builds.
def join_list_of_strs_plus_str(lst: list[str], text: str) -> str:
    return ''.join(lst) + text

def join_list_of_bytes_plus_bytes(lst: list[bytes], text: bytes) -> bytes:
    return b''.join(lst) + text

# User-defined objects to be dispatched on. Excitement peaks.
list_of_things = [b'This. ', b'String. ', b'Still. ',]
plus_thing = b'Bytes.'

# Tuple of all user-defined objects to be dispatched over.
list_of_things_plus_thing = (list_of_things, plus_thing)

# User-defined callable suitable for these objects. Excitement subsides.
join_list_of_things_plus_thing = multiple_dispatch(*list_of_things_plus_thing, dispatcher={
    tuple[list[str], str]: join_list_of_strs_plus_str,
    tuple[list[bytes], bytes]: join_list_of_bytes_plus_bytes,
})

# Pass these objects to this callable. Excitement is in the gutter now.
assert join_list_of_things_plus_thing(*list_of_things_plus_thing) == b'This. String. Still. Bytes.'
```
That's definitely the fastest possible multiple-dispatch algorithm in any language. Can't do better than `O(1)`. Suck it, Julia. Suck it.
I've tested that rickety jerry-rigged shadow madness. Against all odds... it somehow works. No idea how, honestly. Probably falls down in edge cases, honestly. But at least for one fleeting moment in the rain, we had a dream of something beautiful. 🤣
`O(1)` multiple dispatch: the skull means it wants to help you
`**kwargs`: The Type-checking Chickens Come Home to Roost
A few bicycle trips ago, my wife asked me a real eye-opener as the stinging sweat trickled down:
Why does "his chickens came home to roost" always mean that something bad just happened?
Isn't it a good thing when the chickens come home to roost?
Isn't that what chickens are supposed to do at night? Roost?
Me:
I see that you too have autism.
Seriously. Metaphors, folks. What good are they if they make less sense than cats wearing pizza hats? Which leads us straight to...
Variadic keyword arguments. All this time, it was reasonable to believe that @beartype was type-checking annotated variadic keyword arguments like `def func_in_a_funk(**kwargs: int)`. In actuality, @beartype was type-checking nothing there. Annotated variadic keyword arguments were silently ignored. Our flimsy reasons for doing nothing were fivefold:
- (A) It was hard, because...
- (B) @leycec is lazy, because...
- (C) Nobody noticed, because...
- (D) Nobody annotates variadic keyword arguments, because...
- (E) The types of the values of excess keyword arguments vary by keyword. In other words, annotating variadic keyword arguments is hard, increases fragility, usually increases ambiguity, and never quite works as intended.
The last reason is a good reason. The rest are bad reasons. Therefore, @beartype now type-checks annotated variadic keyword arguments as standardized by PEP 484 a decade ago.
The syntax is a bit odd, though. Although boring, this is worth belabouring. Python usually wants you to explicitly spell everything out. "Explicit is better than implicit" – except when it's not, apparently. All variadic keyword arguments are dictionaries mapping from strings (i.e., excess parameter names passed by keyword) to arbitrary objects (i.e., the values of those parameters). So far, so boring.
Since all variadic keyword arguments necessarily satisfy the type hint `dict[str, object]`, however, Python interprets the type hint annotating a variadic keyword argument as the child value type hint of that dictionary. Thus, `def oh_boy(**kwargs: float)` is semantically equivalent to `def oh_boy(kwargs: dict[str, float])` from the perspective of the body of `oh_boy()`. So far, still boring... albeit kinda weird and meandering now.
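A minimal sketch of that semantic equivalence (the function is hypothetical; note that under @beartype's sampling strategy, a violating keyword may not be caught on every single call):

```python
from beartype import beartype
from beartype.roar import BeartypeCallHintParamViolation

@beartype
def oh_boy(**kwargs: float) -> float:
    # Inside the body, "kwargs" behaves as a "dict[str, float]".
    return sum(kwargs.values())

# Every keyword value is a float: fine and dandy.
assert oh_boy(x=1.0, y=2.0) == 3.0

# A keyword value is a string: @beartype roars.
try:
    oh_boy(x=1.0, oops='not a float')
except BeartypeCallHintParamViolation:
    print('Chickens, roosted.')
```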
Above, I self-importantly wrote that "The types of the values of excess keyword arguments vary by keyword." The real-life embodiment of this is:
```python
def fugly_muffin(**kwargs) -> int:
    return kwargs['hear_nuffin'] + len(kwargs['see_nuffin'])

assert fugly_muffin(hear_nuffin=0xDEAF, see_nuffin='NOPE.') == 57012
```
Above, the first excess keyword parameter `hear_nuffin` has type `int` while the second excess keyword parameter `see_nuffin` has type `str`. How do you type this madness as a single type hint when the types vary by keyword? Simple:
- The Beartype-Friendly Way, which is also the awful way. This approach has the benefit of being supported by @beartype, but the drawback of sucking. You ambiguously type `**kwargs` as the union of the types of the values of all excess keyword arguments: e.g.,

  ```python
  # This is trash. What you goin' do?
  def fugly_muffin(**kwargs: int | str) -> int:
      return kwargs['hear_nuffin'] + len(kwargs['see_nuffin'])
  ```
- The PEP 692 Way, which is honestly also kinda awful. This has the benefit of being unambiguous (unlike the "Beartype-Friendly Way" above), but the multiple drawbacks of not currently being supported by @beartype, requiring Python ≥ 3.12, and expanding into 500 lines of sad-faced boilerplate that will break your will to sling code on Friday nights. Under PEP 692, you type `**kwargs` as the `typing.Unpack` of a `typing.TypedDict` subclass that you define just to unambiguously constrain the types of the values of all excess keyword arguments: e.g.,

  ```python
  # This is still trash -- merely a different and larger kind of trash.
  from typing import TypedDict, Unpack

  class _GodsThisIsTrashy(TypedDict):
      '''
      You didn' see nuffin' except 500 lines of boilerplate that burn the eyes.
      '''
      hear_nuffin: int
      see_nuffin: str

  def fugly_muffin(**kwargs: Unpack[_GodsThisIsTrashy]) -> int:
      '''
      Still trashy after all these years.
      '''
      return kwargs['hear_nuffin'] + len(kwargs['see_nuffin'])
  ```
Even if @beartype supports PEP 692 by the time you read this (spoiler from the sad future: ...it still doesn't!?), I'd still personally opt for the `@beartype`-friendly `def fugly_muffin(**kwargs: int | str):` approach. Sure, it's ambiguous. But it's also a trivial 9 characters rather than 500 lines of sad-faced boilerplate. Consider the number of callables that accept `**kwargs` in your codebase. Are you really gonna define one unique `TypedDict` subclass just to unambiguously annotate each `**kwargs` parameter? Really? Some of us might think we are. But then we try and fall down clutching our rib cages. The finger-breaking reality of that much boilerplate has broken greater devs than us before.
I scoff into my "Revenge of the Nerds"-era pocket protector. Annotating variadic keyword arguments may still suck after all these years – but at least @beartype now supports the slightly less sucktastic way.
five out of ten men who are pigs support this feature. do you?
Theory Crafting Time: @leycec Spins Fake News Faster Than a Turboprop
Let's create a fake PEP and pretend it exists. In other words, this subsection is of no value to anyone whatsoever. Still, unhinged dreams exist for a reason. If I was a CPython typing
dev, this is the Mirror World PEP 692 that I personally would have written to trivialize **kwargs
type hints:
```python
from typing import Unpack

def fugly_muffin(**kwargs: Unpack['hear_nuffin': int, 'see_nuffin': str]) -> int:
    return kwargs['hear_nuffin'] + len(kwargs['see_nuffin'])
```
Trivial. Right? Just subscript `typing.Unpack[...]` with dictionary-like key-value pairs of the names and types of all excess keyword arguments accepted by that callable. This is already valid Python syntax as shown above. No changes to the CPython PEG (Parser Expression Grammar) or parser are required. This syntax promotes trivial, readable, maintainable, debuggable one-line type hints for `**kwargs`. No extraneous 500-line `TypedDict` subclasses or whatevah boilerplate are required.
Moreover, this same syntax easily generalizes to Callable[...]
type hints. Currently, Callable[...]
type hints fail to support keyword arguments; they only support positional-only parameters. Since nobody uses positional-only arguments, Callable[...]
type hints are basically useless as defined. Instead, Callable[...]
type hints could be readily extended using the exact same syntactic mechanism to support both keyword and keyword-only arguments: e.g.,
```python
from collections.abc import Callable

# This should, like, totally be the way you annotate callbacks in Python and stuff.
def call_back(callback: Callable[['trust_me_bro': int, 'grifter': str], int]) -> int:
    return callback(trust_me_bro=42, grifter='y u so grifty, Grifty McGrift?')
```
Dictionary-like syntax should totally be the standard way to type keyword parameters. We know I mean standardization business, because I just used the word "totally."
man-pig ponders the existential nature of bad standards, as nobody cares
`beartype_all()`: It Actually Works Now, Kinda
Ah, yes. The venerable beartype.claw.beartype_all()
import hook. Anybody remember that thing? Me neither. Nobody uses that thing, because that thing blows up whenever you look at it. Let's back up.
beartype_all()
unconditionally type-checks literally everything. Whereas beartype_this_package()
only type-checks your package, beartype_all()
type-checks both your package and everybody else's packages too. Fake footnote: Technically, this includes even the standard CPython library. Pragmatically, the standard CPython library contains no type hints whatsoever. Why? Because CPython devs hate runtime typing. This fake footnote means nothing. I wasted my time writing this. You wasted your time reading this. We cry tears in the rain together.
It's the "everybody else's packages too" part that is the problem there. Although you expect your package to be type-checked with @beartype
, nobody else does. Nobody expects their package to be type-checked with @beartype
without their permission or knowledge – at least, not until you open 317 pending issues on their issue tracker enumerating every shocking yet mundane "error" in their package when type-checked with @beartype
. This is why beartype_all()
fails you when you need it most. You can't control other people's intransigence towards the Bear... until now.
Introducing the new `BeartypeConf(claw_skip_package_names: Collection[str] = ())` configuration option! `claw_skip_package_names` is a package name blacklist (i.e., a ban, deny, ignore, or omit list of the names of all packages and modules to be excluded from consideration), enabling you to selectively ignore one or more problematic third-party packages when type-checking the entire Universe via `beartype_all()`. The Universe just got a little smaller and a lot smarter, folks.
Because `claw_skip_package_names` is sane:

- This blacklist accepts any valid Python collection (e.g., `list`, `tuple`, `set`, whatevahs).
- The items of this blacklist are the absolute names of any (sub)package or (sub)module you want ignored. This includes both:
  - Top-level package names (e.g., `'bad_apple'`).
  - Sub-level module names (e.g., `'lovely_cat.ugly_dog'`).
Because reality is even more disappointing than public education prepared us for, the `claw_skip_package_names` "option" is basically mandatory. It's not optional despite being called an option. Whenever you call `beartype_all()`, you also need to pass `claw_skip_package_names`: e.g.,
```python
from beartype import BeartypeConf
from beartype.claw import beartype_all

# Beartype everything except sucky packages that hate everything good in the world.
beartype_all(conf=BeartypeConf(claw_skip_package_names=(
    'some_sucky_package',
    'another_package_hates_you',
    'this_package_is_great.this_submodule_is_trash',
)))
```
`claw_skip_package_names`: because the Universe is kinda like in the Aliens franchise.
when you fly with @beartype, you fly with a man who is a pig
`beartype_all()` + `pytest-beartype`: They Were Meant for Each Other
pytest-beartype
plugin users are now thoughtfully chewing their upper lips and thinking:
What about us!? We deserve better, too. Neglect us and we'll burn this issue tracker down.
You do you. That's why pytest-beartype
maintainer (and all-around Python-Zig tooling God) @tusharsadhwani has already implemented support for command-line equivalents of both the beartype_all()
import hook and the claw_skip_package_names
option. In short, just:
```bash
pytest --beartype-packages='*' --beartype-skip-packages='awful_package, horrible.submodule'
```
Let's unpack this. CLI stuff is always so crufty, isn't it? Passing:

- `--beartype-packages='*'` instructs `pytest-beartype` to internally call the universal `beartype_all()` import hook rather than the local `beartype_packages()` import hook. Good.
- `--beartype-skip-packages` blacklists those packages and modules from consideration. Good.
`--beartype-skip-packages`: because not all heroes write one-liners.
sick burn, pig-man. but what does this have to do with @beartype? the answer may shock somebody.
A Deeper Shade of Gray: `O(1)` Type-checking
@beartype 0.19.0 deeply type-checks a ton of fun containers I've loosely dubbed reiterables.
A reiterable is a collection satisfying the collections.abc.Collection
protocol with guaranteed O(1)
read-only access to only the first collection item. Reiterables include sets, frozen sets, dictionary views, deques (i.e., double-ended queues), and all other containers matched by one or more of the following PEP 484- or 585-compliant type hints:
- `frozenset[...]`
- `set[...]`
- `collections.ChainMap[...]`
- `collections.Counter[...]`
- `collections.deque[...]`
- `collections.abc.Collection[...]`
- `collections.abc.ItemsView[...]`
- `collections.abc.KeysView[...]`
- `collections.abc.MutableSet[...]`
- `collections.abc.Set[...]`
- `collections.abc.ValuesView[...]`
- `typing.AbstractSet[...]`
- `typing.ChainMap[...]`
- `typing.Collection[...]`
- `typing.Counter[...]`
- `typing.Deque[...]`
- `typing.FrozenSet[...]`
- `typing.ItemsView[...]`
- `typing.KeysView[...]`
- `typing.MutableSet[...]`
- `typing.Set[...]`
- `typing.ValuesView[...]`
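For instance, here's a minimal sketch of the newly deep-checked `frozenset[...]` support (the function is hypothetical; under the default `O(1)` strategy, @beartype samples one item per call, so a violation may take a call or two to surface):

```python
from beartype import beartype
from beartype.roar import BeartypeCallHintParamViolation

@beartype
def count_bears(dens: frozenset[int]) -> int:
    # "frozenset[int]" is now deeply type-checked, not just checked as "frozenset".
    return sum(dens)

assert count_bears(frozenset({3, 1, 4})) == 8

try:
    count_bears(frozenset({'this', 'is', 'not', 'an', 'int'}))
except BeartypeCallHintParamViolation:
    print('Deeply type-checked, as promised.')
```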
@beartype now deeply type-checks almost all of the core PEP 484 and 585 standards, resolving feature request #167 kindly submitted by the perennially brilliant @langfield (...how I miss that awesome guy!) several lifetimes ago back when I was probably a wandering vagabond Buddhist monk with a bad attitude, a begging bowl the size of my emaciated torso, and an honestly pretty cool straw hat that glinted dangerously in the firelight.
There's still a bit of low-hanging fruit dangling its juicy skin here and there – but not much. The biggest offenders that have yet to be deeply type-checked are:
- `Iterable[...]` type hints. Not hard, so I claim. Just needs a bit of spit and polish, so I claim. I'm claiming lots of things without hard evidence here.
- Callable type hints (e.g., `collections.abc.Callable[...]`, `typing.Callable[...]`). Thankfully, BeartypeAI™ now provides a trivial `O(1)` one-liner for deeply type-checking any callable type hint `hint` against any arbitrary callable `func`:

  ```python
  def is_func_bearable(func: Callable, hint: object) -> bool:
      return is_subhint(infer_hint(func), hint)  # <-- lolbro
  ```

- Type variables (e.g., `typing.TypeVar('T')`). Still no idea how to dynamically generate efficient code type-checking type variables, honestly. It's feasible, but let's avoid thinking about this until there's absolutely nothing left to do. 😅
...how is it possible that so much and yet so little has changed? Please manage our time better or we're never gonna cross that finish line, GitHub. @leycec assumes no responsibility for just playing video games for a year.
@beartype: It's actually starting to do stuff, now.
if i'm reading these schematics right, @beartype actually does stuff now
PEPs 612 + 646 + 692: @beartype Now Shallowly Loves You!
@beartype's love for PEP 612 – Parameter Specification Variables, PEP 646 – Variadic Generics, and PEP 692 – Using TypedDict for more precise `**kwargs` typing may be a tepid pool of mucky brackish water you can barely dip your toes into – but at least @beartype 0.19.0 tried, daggumit.
PEP 612: Make Your Decorator Closure So Complex It Explodes
@beartype 0.19.0 now supports you in your aspirations to obfuscate decorator closures beyond the dark horizon of MIT Python obfuscation competitions by silently ignoring those aspirations:
```python
# Guido himself defined a decorator so complex it "logs to a database"
# while exploding @beartype with soul-sucking parameter specifications.
from typing import Awaitable, Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def add_logging(f: Callable[P, R]) -> Callable[P, Awaitable[R]]:
    async def inner(*args: P.args, **kwargs: P.kwargs) -> R:
        await log_to_database()  # <-- hand-wave from the original PEP 612 example
        return f(*args, **kwargs)
    return inner
```
@beartype doesn't pretend to understand what typing.ParamSpec('P').kwargs
means, but @beartype doesn't have to. @beartype is here to crush bugs and play video games... and @beartype is all outta video games.
that one unforgettable moment when @beartype 0.19.0 reveals its true nature
PEP 692: Finally, `TypedDict` Is Useful for Something
@beartype 0.19.0 now supports you in your aspirations to precisely type-check **kwargs
by silently ignoring those aspirations:
```python
from beartype import beartype
from typing import TypedDict, Unpack

class Kwargs(TypedDict):
    this_kwarg_must_be_a_string: str
    this_kwarg_must_be_a_complex_number_just_kidding_its_actually_an_integer: int

@beartype
def function_accepts_two_kwargs(**kwargs: Unpack[Kwargs]) -> None: ...
```
That's better than @beartype used to do (which was blow chunks everywhere). We don't deeply type-check this yet, but we will. Would @leycec lie!? 😓
newer and sleeker @beartype does a surprise fly-by over your codebase. hats are almost lost.
PEP 646: Even Type Variables Are Now Tuples, Huh?
It's all tuples all the way down with PEP 646.
@beartype 0.19.0 now supports you in your aspirations to precisely type-check... actually, I really have no idea. But that's okay, because neither does @beartype 0.19.0. What is PEP 646 besides really confusing? Couldn't tell ya. All I know is that @beartype now silently ignores PEP 646-compliant type variable tuples (i.e., typing.TypeVarTuple
objects):
```python
from __future__ import annotations  # <-- let "Array" refer to itself below

from beartype import beartype
from typing import Generic, TypeVar, TypeVarTuple

DType = TypeVar('DType')
Shape = TypeVarTuple('Shape')

@beartype
class Array(Generic[DType, *Shape]):
    def __abs__(self) -> Array[DType, *Shape]: ...
    def __add__(self, other: Array[DType, *Shape]) -> Array[DType, *Shape]: ...
```
I kinda get it, but I kinda don't. Since @beartype 0.19.0 doesn't get it any more than I do, @beartype doesn't deeply type-check TypeVarTuple
objects yet. Will it ever? No idea. Let's pretend:
"Yes! Absolutely! All your dreams will be realized by... @beartype 42.42.42!?"
@beartype doesn't even deeply type-check `TypeVar` objects yet – which is the slightly lower-hanging fruit here. Oh, when will free time materialize for @leycec? What has @beartype done to deserve this punishing development schedule? I fear for your immortal `git log`, @beartype. 😨
@beartype 42.42.42 chortles as it contemplates the darkness of the past
Multiprocessing Queues: @beartype No Longer Hates You!
@beartype 0.19.0 officially supports the standard multiprocessing
API for fork-based distributed workloads. All beartype exceptions (i.e., exception subclasses published by the beartype.roar
subpackage) now support pickling and unpickling via the standard pickle
module, which then suffices to support the standard multiprocessing
package, which shockingly leverages pickle
rather than dill
in 2024.
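That pickling claim is directly testable. A minimal sketch (assuming, as these exceptions ordinarily do, that they accept a plain message string):

```python
import pickle
from beartype.roar import BeartypeCallHintParamViolation

# Round-trip a beartype exception through pickle, exactly as "multiprocessing"
# must when propagating a violation across process boundaries.
exc = BeartypeCallHintParamViolation('your tensor is a string. again.')
assert pickle.loads(pickle.dumps(exc)).args == exc.args
```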
@beartype 0.19.0: "Why is pickle
still even a thing!?"
You're stupid, `multiprocessing`. I hate that in an API.
Third-party Decorators: @beartype No Longer Hates You!
@beartype 0.19.0 goes hard on integration with external decorators published by third-party packages that previously hated @beartype. This includes popular Just-in-Time (JIT) decorators for machine learning (ML) like:

- `@equinox.filter_jit`
- `@jax.jit`
- `@numba.njit`
@beartype should now support almost everybody else's decorators. In fact, @beartype now generically supports all pseudo-callable wrapper objects (i.e., objects defining both the __call__()
and __wrapped__
dunder attributes).
The `@beartype` decorator should also now be context-free. You may now chain (i.e., list) `@beartype` above or below most third-party decorators. Since `beartype.claw` import hooks (like `beartype_this_package()` and `beartype_package()`) inject `@beartype` above all other decorators, `beartype.claw` import hooks now transparently support all other decorators... probably. 😬
If you previously blacklisted @beartype from type-checking callables decorated by any of the above with `@typing.no_type_check`, let us give thanks as you remove `@typing.no_type_check` everywhere.
Examples or it only happened in the DMT hyperplane:
```python
from beartype import beartype
from jax import (
    jit,
    numpy as jax_numpy,
)
from jaxtyping import (
    Array,
    Float,
)

@beartype  # <-- *GOOD*. @beartype goes last! patiently suffer in silence, @beartype.
@jit       # <-- *GOOD*. @jax.jit goes first! yoink.
def what_would_chat_gpt_do(
    probably_hallucinate_everything: Float[Array, '']) -> Float[Array, '']:

    # One-liner: "Do what I say, not what I code."
    return probably_hallucinate_everything + 1

assert what_would_chat_gpt_do(jax_numpy.array(1.0)) == jax_numpy.array(2.0)
what_would_chat_gpt_do('If this is a JAX array, we all have serious problems.')
```
...which raises the expected type-checking violation:
```text
Traceback (most recent call last):
  File "/home/leycec/tmp/mopy.py", line 22, in <module>
    what_would_chat_gpt_do('If this is a JAX array, we all have serious problems.')
  File "<@beartype(PjitFunction.__call__) at 0x7fd9afc96140>", line 29, in __call__
beartype.roar.BeartypeCallHintParamViolation: Object
PjitFunction.__call__() parameter probably_hallucinate_everything='If
this is a JAX array, we all have serious problems.' violates type hint
<class 'jaxtyping.Float[Array, '']'>, as str 'If this is a JAX array, we
all have serious problems.' not instance of <protocol
"jaxtyping.Float[Array, '']">.
```
The perspicacious user may now be thinking:
"WAIT. What is a PjitFunction.__call__()? That's ambiguous and means less than my cat licking itself. Your type-checking violation message sucks, huh?"
You're not wrong. But we're tired. At least @beartype works now for various definitions of "works." If you just hit this ambiguous type-checking violation message in your workflow and want @beartype to justifiably do something about it, bang on our issue tracker until the cats start squalling and biting @leycec in the face. Works every time.
@beartype 0.19.0: we broke our sanity for your security.
@beartype 0.19.0: it's been a long journey, fam.
CPython 3.13: Finally, No Longer Feel Embarrassed about CPython
@beartype 0.19.0 officially supports Python 3.13, the first CPython release you no longer need to feel ashamed of running in public.
Python 3.13 supports an official LLVM-based Just-in-Time (JIT) compiler via the PEP 744-compliant --enable-experimental-jit compile-time option. OMMMMMMMMMMMMMMMMG..... It's happening. It's really happening. My breathing is now laboured and making awkwardly squishy noises that upset the cat.
Python 3.13 also supports GIL-free multi-threading via the PEP 703-compliant --disable-gil compile-time option. Yes! YES! YEEEEEEESSSSS!!!! Wait. Where am I? What are these fingers on this keyboard? This must be what Xanadu is typed of.
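For the curious, a minimal sketch assuming a CPython 3.13 interpreter: the new sys._is_gil_enabled() introspection reports whether your build actually runs free-threaded.
import sys

# Added in CPython 3.13: False under a --disable-gil build actually running
# free-threaded, True otherwise.
if hasattr(sys, '_is_gil_enabled'):
    print(f'GIL enabled: {sys._is_gil_enabled()}')
else:
    print('Pre-3.13 interpreter detected. The GIL abides.')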
If I worked on a proprietary Python package, I'd have money. I'd also:
- Hard-require python >= 3.13 for production workloads as soon as CPython 3.13 lands in October.
- Hand-compile CPython 3.13 with --enable-experimental-jit for development workloads.
It's time to go fast. Finally, it's time to feel shameless.
OMG IZ @beartype + CPython 3.13 + --enable-experimental-jit WTF FAFO!!!
Bear Beta Fan Club: Announcing Release Candidates Your CI Cares About
@beartype 0.18.0 broke the entire world. @leycec can now admit that to himself while clutching his Maine Coon teddy cat. If your codebase survived @beartype 0.18.0, you deserve an "I Survived @beartype 0.18.0 and All I Got Was This Lousy Badge" badge. The ill-fated @beartype 0.18.0 release cycle that nearly broke my fingers taught me many things: suffering, pain, agony, blah, blah... You know. Just the standard stuff, really.
@beartype 0.18.0 taught me that @beartype has become a lot bigger than me. Other people and people-like AI that are doing meaningful things with their lives and synthetic lives (respectively) now depend on new @beartype releases not throwing up all over everybody.
@beartype ≥ 0.19.0 intends to avoid that throw-up. Several days before releasing any new minor version like 0.19.0, 0.20.0, or 0.21.0: ...we see the number sequence I trust
1. I will officially publish at least one release candidate on PyPI.
2. Everyone in the Bear Beta Fan Club (...the pro bono lawyers say this means you) will be encouraged to download, install, and exercise this release candidate against your downstream use cases, codebases, apps, APIs, workflows, and test suites. Those who fail to do this will be mocked as their code burns against the night sky. I mock them even as I cry a little. 🥲
3. I will wait several days for the radioactive fallout to subside.
4. Assuming no issues or regressions arise, ...lolbro I will officially publish a new minor version.
5. Else:
   - I will resolve all issues and regressions that arise like red-headed dolls clutching sharp implements. Night of the Living QA: The Bear is Baaaaaack.
   - I will recursively travel back in time to step 1 by releasing yet another release candidate.
   - Repeat as needed for pain.
Basically, I'm just doing standard beta releases now. That's all I had to say. Instead, I laboriously enumerated a workflow that doesn't really make sense when you squint at it. Oh, well. This too was wasted time.
The sins of the fathers must never be repeated. Never forget @beartype 0.18.0! Never forgive @leycec! Wait. Shouldn't @leycec be forgiven already at some point!? <-- dat poor guy
@beartype 0.18.0: shocking behind-the-scenes tell-all reveals sordid truth of what went wrong that fateful day
Beartype Release Motto: "Release late. Release rarely. Release safely."
In discussion thread #433, @jedie wisely asks the question we're all wondering:
There are many, many commits since last release: v0.18.5...main
What's the release cycle? Seems it's not "Release early, release often", isn't it?
Indeed, @jedie. It isn't. You're right about everything. I now quote myself like a narcissist. gods what am i become
For ordinary packages, "Release early, release often" is the best possible advice. For @beartype, this is the worst possible advice. Why? Because @beartype is mission-critical. When @beartype breaks, increasingly the entire Python ecosystem breaks. This includes PyTorch – which then transitively includes ChatGPT, OpenAI, Microsoft, and by extension the entirety of American late-stage capitalism. Do we grok the stakes here? The stakes somehow become a whole lot more bigly than "one bald autist has fun smashing code together in a remote Canadian cottage."
I should probably be paid to do this hyper-cuboidal tesseract we call @beartype. Imagine if all neurosurgeons were unpaid volunteers. This is hyperbole, but it's also not. @beartype is the neurosurgeon that fixes bugs during LLM training. Much like Soviets under the USSR, I pretend that I'm being paid by behaving responsibly towards the rest of humanity. "Release early, release often" is what I used to believe. Then I broke PyTorch with the ill-fated @beartype 0.18.0 release. Now, I choose wisely.
The new motto is:
Release late. Release rarely. Release safely.
On the bright side, "Release safely." is good! We can all agree. On the dark side, "Release late." and "Release rarely." are both bad. We still agree. But one out of two ain't bad. Right?
New @beartype releases will probably land as follows:
- A new stable minor release (like @beartype 0.19.0) once every six months or so.
- A new unstable minor prerelease (e.g., @beartype 0.19.0rc0) once every month or so.
This broadly parallels CPython's shift to a yearly release schedule with intermittent mid-yearly alpha and beta pre-releases. Since @beartype isn't as bigly as CPython, we can and should go faster and harder than CPython on releases – but we can't go that much faster or harder. Realistically speaking, @beartype releases will remain slower than your average open-source Python package.
Sucks, huh? I know and commiserate by blowing smoke out of gigantic nostrils on a picturesque beach.
that feeling when you're only in month 1 of an interminable 6-month release cycle
Hatch + pyproject.toml: The Build System We Deserved 10 Years Ago, Today
@beartype 0.19.0 now sports a sane build system. It's sporty! Somehow, we found the strength to refactor the archaic @beartype 0.18.0 toolchain from setuptools + setup.py 🤮 to Hatch + pyproject.toml 🥂. This includes support for modern packaging standards like PEP 517 and PEP 621.
It went great, actually. Thanks for asking. I highly recommend Hatch for all projects – new and curmudgeonly alike. It's like Rust's Cargo, only Python. It actually works, unlike everything else.
Hatch: because you're too bald to fight Python anymore.
Caveat emptor:
- For most users, this doesn't matter. Celebrate.
- For package maintainers like @harens (...I'm so sorry), this means that all third-party @beartype packages in the wild now need to be manually bumped to depend on Hatch (rather than setuptools) at build time. If your packaging ecosystem also packages Hatch, this is trivial. Else, I sympathize with your growing toothache but can do nothing for you. Emoji man sighs. 😮‍💨
@beartype 0.19.0 now gives thanks for this build system it is about to blow up
PyPI Trusted Publishers: Because Deployment Wasn't Hard Enough
@beartype 0.19.0 now sports a sane publishing system. It's less insane! Somehow, we found the stamina to refactor the archaic @beartype 0.18.0 release workflow from antiquated (and unsurprisingly insecure) GitHub Actions tokens 🤮 to PyPI-specific "Trusted Publishers" (i.e., PyPI's modern implementation of OpenID Connect (OIDC)). 🤷
In theory, doing so should resolve the plethora of "Unverified details" warnings currently polluting @beartype's PyPI project page. We're not unverified, PyPI! You're unverified.
In practice, doing so will almost certainly change nothing and thus benefit nobody. Indeed, doing so will probably prevent our entire release workflow from behaving as expected – further squandering scarce open-source volunteerism for no particularly good reason.
Bureaucracy: "What is it good for when @leycec could just be playing video games about robot assassins who insist they meant well instead?"
@beartype 0.19.0: on its way to a PyPI project page near you
You. Are. Beartype.
Announcing all the fave @beartype users from the ashes of our issue tracker:
@posita, @wesselb, @iamrecursion, @patrick-kidger, @langfield, @JelleZijlstra, @RobPasMue, @GithubCamouflaged, @kloczek, @uriyasama, @danielgafni, @JWCS, @rbroderi, @AlanCoding, @tvdboom, @crypdick, @jvesely, @komodovaran, @kaparoo, @MaximilienLC, @fleimgruber, @EtaoinWu, @alexoshin, @gabrieldemarmiesse, @James4Ever0, @NLPShenanigans, @rtbs-dev, @yurivict, @st--, @murphyk, @dosisod, @Rogdham, @alisaifee, @denisrosset, @damarro3, @ruancomelli, @jondequinor, @harshita-gupta, @jakebailey, @denballakh, @jaanli, @creatorrr, @msvensson222, @avolchek, @femtomc, @AdrienPensart, @jakelongo, @Artur-Galstyan, @ArneBachmann, @danielward27, @WeepingClown13, @rbnhd, @radomirgr, @rwiegan, @brettc, @spagdoon0411, @helderco, @paulwouters, @jamesbraza, @dcharatan, @kasium, @sunildkumar, @peske, @mentalisttraceur, @awf, @PhilipVinc, @empyrealapp, @rlkelly, @KyleKing, @skeggse, @RomainBrault, @pablovela5620, @thiswillbeyourgithub, @Logan-Pageler, @knyazer, @Moosems, @frrad, @minmax, @jonnyhyman, @f-fuchs, @jennydaman, @bionicles, @taranlu-houzz, @adamtheturtle
center right: your codebase. center left: @beartype. everybody else: @beartype's competition, which doesn't of course exist.