It’s very weird to me that Python, as an inherently untyped language, is trying to bolt on stronger typing through the typing module and constructs like Protocol.
If typing is a good thing, why not make it an optional first-class part of the language? Maybe make strong typing a one-liner option at the top of a source file? This growing maze of modules and recommendations is so unwieldy. For example, the typing module works kind of in conjunction with language elements that aren’t what newbs learn in Python 101, like type annotations in function args. I feel like this stuff is driving Python away from simplicity.
I don’t get this complaint.
Python is not adding typing, it’s just improving its support for static type checking. Nothing is really changing at runtime. Even if your type annotations are completely wrong, your code will run just fine. It’s up to the developers and the team to know how much they will benefit from adopting it.
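For instance (a minimal sketch; the function is made up):

```python
def shout(msg: int) -> int:   # annotations are deliberately wrong
    return msg.upper()

# Runs fine: Python ignores the annotations at runtime.
print(shout("hello"))  # HELLO
# Only a static checker (mypy, pyright) flags the mismatch.
```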
I’m not complaining, just reflecting that it is weird to me. The static type checking support is almost an admission that type checking is a Good Thing, but Python continues to resist adding runtime checking. Modules like typing and constructs like Protocol don’t seem to do anything at runtime, and because of that are deeply weird to me - what kind of import doesn’t have runtime behavior? I haven’t seen anything quite like it in any other language I’ve coded in. It just seems included so the coder’s IDE can throw warnings, and that’s it.
Then again, it’s entirely possible I just don’t get around much. I’m not a software guy, I’m hardware, and occasionally I’ll write a tool or website to help with a specific task.
I suppose the alternative is just as weird or weirder, where there are almost two separate languages, one strongly typed and one not typed at all.
How would they add runtime checking without breaking all existing code?
But I think warning people is a good start, because those checks can be added to your CI pipeline and reject any incoming code that contains warnings. That way you can enforce type checking for a subset of modules and keep backwards compatibility.
By making it opt-in. But then that’s not much different from static typing, except that it won’t actually work when you screw up the typing.
It’s very weird to me that Python, as an inherently untyped language
I don’t think this is true. Python is dynamically typed, but types exist. More importantly, Python is the poster child of duck typing. What is duck typing if not a way to implicitly specify protocols? If you’re relying on protocols to work, why not have tests for it? If you want to test protocols, aren’t you actually doing type checks?
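A quick illustration of both points, using only the standard library:

```python
import io

# Types very much exist at runtime; they're just not declared up front:
print(type("hi"))          # <class 'str'>
print(isinstance(3, int))  # True

# Duck typing: anything with a .read() method works here,
# whether it's a real file, a StringIO, or something else entirely.
def first_line(f):
    return f.read().splitlines()[0]

print(first_line(io.StringIO("hello\nworld")))  # hello
```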
If typing is a good thing,
…which it undoubtedly is.
(…) why not make it an optional first-class part of the language?
It already is, isn’t it?
But some people already have Python code that does not do type checking. What would be the point of refusing to run that code?
Python provides flexibility. You want to use it for fast experiments in a Jupyter notebook? Skip types. You’re doing a months-long backend project? Add types, either through implicit types (protocols) or explicit types.
I don’t see how flexibility is a bad thing. Don’t want it? Don’t use it. You can still use it the simple way, like 30 years ago; the language is just providing more options for different contexts.
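As a sketch of that flexibility, here is the same function written both ways (names invented):

```python
# Quick-experiment style: no annotations, nothing to satisfy.
def mean(xs):
    return sum(xs) / len(xs)

# Long-lived-project style: the same function, with explicit types.
from typing import Sequence

def mean_typed(xs: Sequence[float]) -> float:
    return sum(xs) / len(xs)

# Both run identically; only the second gives a checker something to verify.
```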
Considering Python is almost as old as C++, I would say that Python has done a much better job of incorporating new features in a sound architectural way than C++ has, while keeping new features at the same complexity level as the older ones. Or compared to JavaScript, which let a spinoff language emerge (something Van Rossum very much wanted to avoid with mypy) and drive it towards adopting features that had been asked for for years (e.g. classes in ES6).
This is pretty cool, but I have no idea what the significance is.
The key part is this:
it turns out protocols really are just a formalization of Python’s duck typing.
Meaning, this is just a way to say that if you are defining some system that needs to conform to some interface, you can have type checking even if you have different objects from different classes. No need for TypeVar or a crazy hierarchy: as long as the types implement the methods defined by the Protocol, the type checker will be happy.
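A minimal sketch of what that looks like, with invented class and protocol names:

```python
from typing import Protocol

class SupportsQuack(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:  # shares no base class with Duck
    def quack(self) -> str:
        return "beep"

def speak(thing: SupportsQuack) -> str:
    return thing.quack()

speak(Duck())   # the checker accepts this: Duck matches structurally
speak(Robot())  # and this, with no TypeVar or common hierarchy
```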
It’s amazing how often people celebrate some new feature in a language and I’m like: TypeScript has been doing this for years now.
[Edit: This is also how interfaces work in Go, and it’s just the old advice “Code against interfaces, not classes.”]
It’s amazing how often people celebrate some new feature in a language and I’m like: TypeScript has been doing this for years now.
Whatever another programming language supports or does is entirely irrelevant if what you’re working with is Python.
In practice, Protocols are a way to make “superclasses” that you can never add features to (for example, readinto, despite being critical for performance, is utterly broken in Python). This should normally be avoided at almost all costs, but for some reason people hate real base classes?
If you really want to do something like the original article, where there’s a C-implemented class that you can’t change, you’re best off using a (named) Union of two similar types, not a Protocol.
I suppose they are useful for operator overloading, but that’s about it. But I’m not sure if type checkers actually implement that properly anyway; overloading is really nasty in a dynamically-typed language.
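For reference, the Union alternative being suggested might look roughly like this (the alias name is invented):

```python
from typing import Union
import io

# Name a Union of the concrete types instead of defining a Protocol.
Readable = Union[io.BytesIO, io.BufferedReader]

def head(f: Readable, n: int) -> bytes:
    return f.read(n)

print(head(io.BytesIO(b"hello world"), 5))  # b'hello'
```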
In practice, Protocols are a way to make “superclasses” that you can never add features to (for example, readinto, despite being critical for performance, is utterly broken in Python).
Not really, and this interpretation is oblivious to the concept of protocols and misses the whole point of them.
The point of a protocol is to specify that a type supports a specific set of methods with specific signatures, aka duck typing, and to provide the necessary and sufficient infrastructure to check whether objects comply with a protocol and throw an error in case they don’t.
Also, protocol classes can be inherited, and protocols can be extended.
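A sketch of both properties, using typing.runtime_checkable for the runtime check (protocol names invented):

```python
from typing import Protocol, runtime_checkable
import io

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

# Protocols can be extended by inheriting from them (and from Protocol again):
@runtime_checkable
class Reader(Closable, Protocol):
    def read(self, size: int = -1) -> bytes: ...

# isinstance() on a @runtime_checkable protocol checks method presence
# at runtime (though not signatures):
print(isinstance(io.BytesIO(), Reader))  # True
print(isinstance(42, Reader))            # False
```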
Then - ignoring dunders that have weird rules - what, pray tell, is the point of protocols, other than backward compatibility with historical fragile ducks (at the cost of future backwards compatibility)? Why are people afraid of using real base classes?
The fact that it is possible to subclass a Protocol is useless since you can’t enforce subclassing, which is necessary for maintainable software refactoring, unless it’s a purely internal interface (in which case the Union approach is probably still better).
That PEP link includes broken examples, so it’s really not worth much as a reference.
(For that matter, the Sequence interface is also broken in Python, in case you need another historical example of why protocols are a bad idea.)
(…) what, pray tell, is the point of protocols, other than backward compatibility with historical fragile ducks (at the cost of future backwards compatibility)?
Got both of those wrong. The point of protocols is to have a way to catch duck typing errors by adding a single definition of the duck. This is not something that applies only to backwards compatibility, nor does it affect backwards compatibility.
Why are people afraid of using real base classes?
You’re missing the whole point of protocols. The point of a protocol is that you want duck typing, not inheritance. Those are entirely different things, and with protocols you only need to specify a single protocol and use it in a function definition; it then automatically validates any object passed in, without you having to touch that object’s definition or add a base class.
That still doesn’t explain why duck typing is ever a thing beyond “I’m too lazy to write extends BaseClass”. There’s simply no reason to want it.
I already explained it to you: protocols apply to types you do not own or control, let alone can add a base class to.
You just specify a protocol, specify that a function requires it, and afterwards you can pass anything to it as-is and you’ll be able to validate your calls.
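A sketch of that: built-in types you can’t edit still satisfy a protocol you define yourself (the protocol name is invented):

```python
from typing import Protocol

class HasHex(Protocol):
    def hex(self) -> str: ...

def dump(value: HasHex) -> str:
    return value.hex()

# bytes and float are built-ins we can't retrofit a base class onto,
# yet both satisfy HasHex structurally:
print(dump(b"\x00\xff"))  # 00ff
print(dump(1.5))          # 0x1.8000000000000p+0
```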
and I already explained that Union is a thing.