Python Enhancement Proposals


PEP 526 – Syntax for Variable Annotations

Contents: Notice for reviewers · Global and local variable annotations · Class and instance variable annotations · Annotating expressions · Where annotations aren't allowed · Variable annotations in stub files · Preferred coding style for variable annotations · Changes to standard library and documentation · Other uses of annotations · Rejected/postponed proposals · Backwards compatibility · Implementation

This PEP is a historical document: see Annotated assignment statements, ClassVar, and typing.ClassVar for up-to-date specs and documentation. Canonical typing specs are maintained at the typing specs site; runtime typing behaviour is described in the CPython documentation.

See the typing specification update process for how to propose changes to the typing spec.

This PEP has been provisionally accepted by the BDFL. See the acceptance message for more color.

This PEP was drafted in a separate repo.

There was preliminary discussion on python-ideas.

Before you bring up an objection in a public forum please at least read the summary of rejected ideas listed at the end of this PEP.

PEP 484 introduced type hints, a.k.a. type annotations. While its main focus was function annotations, it also introduced the notion of type comments to annotate variables:
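For example, PEP 484 type comments look like this:

```python
from typing import Dict, List

primes = []  # type: List[int]

class Starship:
    stats = {}  # type: Dict[str, int]
```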

This PEP aims at adding syntax to Python for annotating the types of variables (including class variables and instance variables), instead of expressing them through comments:
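With the proposed syntax, the same declarations become:

```python
from typing import ClassVar, Dict, List

primes: List[int] = []

captain: str  # Note: no initial value!

class Starship:
    stats: ClassVar[Dict[str, int]] = {}
```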

PEP 484 explicitly states that type comments are intended to help with type inference in complex cases, and this PEP does not change this intention. However, since in practice type comments have also been adopted for class variables and instance variables, this PEP also discusses the use of type annotations for those variables.

Although type comments work well enough, the fact that they’re expressed through comments has some downsides:

  • Text editors often highlight comments differently from type annotations.
  • There’s no way to annotate the type of an undefined variable; one needs to initialize it to None (e.g. a = None  # type: int).
  • Variables annotated in a conditional branch are difficult to read:

        if some_value:
            my_var = function()  # type: Logger
        else:
            my_var = another_function()  # Why isn't there a type here?
  • Since type comments aren’t actually part of the language, if a Python script wants to parse them, it requires a custom parser instead of just using ast .
  • Type comments are used a lot in typeshed. Migrating typeshed to use the variable annotation syntax instead of type comments would improve readability of stubs.
  • In situations where normal comments and type comments are used together, it is difficult to distinguish them:

        path = None  # type: Optional[str]  # Path to module source
  • It’s impossible to retrieve the annotations at runtime outside of attempting to find the module’s source code and parse it at runtime, which is inelegant, to say the least.

The majority of these issues can be alleviated by making the syntax a core part of the language. Moreover, having a dedicated annotation syntax for class and instance variables (in addition to method annotations) will pave the way to static duck-typing as a complement to nominal typing defined by PEP 484 .

While the proposal is accompanied by an extension of the typing.get_type_hints standard library function for runtime retrieval of annotations, variable annotations are not designed for runtime type checking. Third party packages will have to be developed to implement such functionality.

It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention. Type annotations should not be confused with variable declarations in statically typed languages. The goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools.

This PEP does not require type checkers to change their type checking rules. It merely provides a more readable syntax to replace type comments.


A type annotation can be added to an assignment statement or to a single expression, indicating to a third party type checker the desired type of the annotation target:

This syntax does not introduce any new semantics beyond PEP 484 , so that the following three statements are equivalent:
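All three spellings convey the same information to a type checker:

```python
from typing import List

var = []          # type: List[int]  (PEP 484 type comment)
var: List[int]; var = []
var: List[int] = []
```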

Below we specify the syntax of type annotations in different contexts and their runtime effects.

We also suggest how type checkers might interpret annotations, but compliance to these suggestions is not mandatory. (This is in line with the attitude towards compliance in PEP 484 .)

The types of locals and globals can be annotated as follows:
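For example:

```python
from typing import List

some_number: int           # annotated, without an initial value
some_list: List[int] = []  # annotated, with an initial value
```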

Being able to omit the initial value allows for easier typing of variables assigned in conditional branches:
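For example:

```python
sane_world: bool
if 2 + 2 == 4:
    sane_world = True
else:
    sane_world = False
```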

Note that, although the syntax does allow tuple packing, it does not allow one to annotate the types of variables when tuple unpacking is used:
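A sketch of the distinction (the compile check demonstrates that annotated unpacking is rejected):

```python
import ast
from typing import Tuple

t: Tuple[int, ...] = (1, 2, 3)  # annotated tuple *packing*: allowed

# Annotating tuple *unpacking* targets is rejected at compile time:
try:
    ast.parse("x, y: int = 1, 2")
    unpack_allowed = True
except SyntaxError:  # only single targets can be annotated
    unpack_allowed = False
```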

Omitting the initial value leaves the variable uninitialized:

However, annotating a local variable will cause the interpreter to always make it a local:
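For example:

```python
x = 0  # an outer x exists

def f():
    x: int    # the bare annotation makes x local to f...
    print(x)  # ...so this raises UnboundLocalError instead of printing 0

try:
    f()
    raised = False
except UnboundLocalError:
    raised = True
```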

as if the code were:

Duplicate type annotations will be ignored. However, static type checkers may issue a warning for annotations of the same variable by a different type:
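For example, at runtime the second annotation simply overwrites the first, while a static checker may warn:

```python
class C:
    a: int
    a: str  # a checker may warn; at runtime the later annotation wins
```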

Type annotations can also be used to annotate class and instance variables in class bodies and methods. In particular, the value-less notation a: int allows one to annotate instance variables that should be initialized in __init__ or __new__ . The proposed syntax is as follows:
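For example:

```python
from typing import ClassVar, Dict

class BasicStarship:
    captain: str = 'Picard'               # instance variable with default
    damage: int                           # instance variable without default
    stats: ClassVar[Dict[str, int]] = {}  # class variable
```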

Here ClassVar is a special class defined by the typing module that indicates to the static type checker that this variable should not be set on instances.

Note that a ClassVar parameter cannot include any type variables, regardless of the level of nesting: ClassVar[T] and ClassVar[List[Set[T]]] are both invalid if T is a type variable.

This could be illustrated with a more detailed example. In this class:
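A sketch of the class under discussion (the __init__ body is illustrative):

```python
class Starship:
    captain = 'Picard'  # instance variable default
    stats = {}          # class variable, shared by all instances

    def __init__(self, damage, captain=None):
        self.damage = damage
        if captain:
            self.captain = captain
```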

stats is intended to be a class variable (keeping track of many different per-game statistics), while captain is an instance variable with a default value set in the class. This difference might not be seen by a type checker: both get initialized in the class, but captain serves only as a convenient default value for the instance variable, while stats is truly a class variable – it is intended to be shared by all instances.

Since both variables happen to be initialized at the class level, it is useful to distinguish them by marking class variables as annotated with types wrapped in ClassVar[...] . In this way a type checker may flag accidental assignments to attributes with the same name on instances.

For example, annotating the discussed class:
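```python
from typing import ClassVar, Dict

class Starship:
    captain: str = 'Picard'
    damage: int
    stats: ClassVar[Dict[str, int]] = {}  # marked as a true class variable
```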

As a matter of convenience (and convention), instance variables can be annotated in __init__ or other methods, rather than in the class:
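A minimal sketch (the Box class is a hypothetical example):

```python
class Box:
    def __init__(self, width: int) -> None:
        self.width: int = width  # instance variable annotated where first assigned
```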

The target of the annotation can be any valid single assignment target, at least syntactically (it is up to the type checker what to do with this):
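For example:

```python
class Cls:
    pass

c = Cls()
c.x: int = 0     # annotates c.x with int and assigns 0
c.y: int         # annotates c.y; no assignment takes place

d = {}
d['a']: int = 0  # annotates d['a'] with int
d['b']: int      # evaluates d (and int) but assigns nothing
```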

Note that even a parenthesized name is considered an expression, not a simple name:

It is illegal to attempt to annotate variables subject to global or nonlocal in the same function scope:
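For example, this is rejected at compile time:

```python
src = """
def f():
    global x
    x: int = 1
"""
try:
    compile(src, "<example>", "exec")
    is_error = False
except SyntaxError:  # annotated name 'x' can't be global
    is_error = True
```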

The reason is that global and nonlocal don’t own variables; therefore, the type annotations belong in the scope owning the variable.

Only single assignment targets and single right hand side values are allowed. In addition, one cannot annotate variables used in a for or with statement; they can be annotated ahead of time, in a similar manner to tuple unpacking:
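For example:

```python
i: int  # annotate the loop variable ahead of the for statement
for i in range(3):
    pass
```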

As variable annotations are more readable than type comments, they are preferred in stub files for all versions of Python, including Python 2.7. Note that stub files are not executed by Python interpreters, and therefore using variable annotations will not lead to errors. Type checkers should support variable annotations in stubs for all versions of Python. For example:

Annotations for module-level variables, class and instance variables, and local variables should have a single space after the corresponding colon. There should be no space before the colon. If an assignment has a right hand side, then the equality sign should have exactly one space on both sides. Examples:

  • Yes:

        code: int

        class Point:
            coords: Tuple[int, int]
            label: str = '<unknown>'

  • No:

        code:int  # No space after colon
        code : int  # Space before colon

        class Test:
            result: int=0  # No spaces around equality sign
  • A new covariant type ClassVar[T_co] is added to the typing module. It accepts only a single argument that should be a valid type, and is used to annotate class variables that should not be set on class instances. This restriction is ensured by static checkers, but not at runtime. See the classvar section for examples and explanations for the usage of ClassVar , and see the rejected section for more information on the reasoning behind ClassVar .
  • Function get_type_hints in the typing module will be extended, so that one can retrieve type annotations at runtime from modules and classes as well as functions. Annotations are returned as a dictionary mapping from variables or arguments to their type hints, with forward references evaluated. For classes it returns a mapping (perhaps collections.ChainMap ) constructed from annotations in method resolution order.
  • Recommended guidelines for using annotations will be added to the documentation, containing a pedagogical recapitulation of specifications described in this PEP and in PEP 484 . In addition, a helper script for translating type comments into type annotations will be published separately from the standard library.

Runtime Effects of Type Annotations

Annotating a local variable will cause the interpreter to treat it as a local, even if it was never assigned to. Annotations for local variables will not be evaluated:
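For example, an undefined name in a local annotation raises no error, because the annotation is never evaluated:

```python
def f():
    x: NonexistentType  # never evaluated: no NameError when f is called
    return "ok"

result = f()
```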

However, if it is at a module or class level, then the type will be evaluated:
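For example, the same undefined name in a class body fails immediately:

```python
try:
    class C:
        x: NonexistentType  # evaluated at class creation time
    raised = False
except NameError:
    raised = True
```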

In addition, at the module or class level, if the item being annotated is a simple name , then it and the annotation will be stored in the __annotations__ attribute of that module or class (mangled if private) as an ordered mapping from names to evaluated annotations. Here is an example:
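A class-level example (the names are illustrative):

```python
class Game:
    players: int = 0
    __points: int = 0  # private names are stored mangled

# Game.__annotations__ == {'players': int, '_Game__points': int}
```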

__annotations__ is writable, so this is permitted:

But attempting to update __annotations__ to something other than an ordered mapping may result in a TypeError:
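A sketch, using exec so the snippet behaves like module-level code:

```python
ns = {}
bad = "__annotations__ = 42\nx: int = 1"
try:
    exec(bad, ns)
    raised = False
except TypeError:  # 42 does not support __annotations__['x'] = int
    raised = True
```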

(Note that the assignment to __annotations__ , which is the culprit, is accepted by the Python interpreter without questioning it – but the subsequent type annotation expects it to be a MutableMapping and will fail.)

The recommended way of getting annotations at runtime is by using typing.get_type_hints function; as with all dunder attributes, any undocumented use of __annotations__ is subject to breakage without warning:
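For example, get_type_hints evaluates forward references that __annotations__ stores as plain strings:

```python
from typing import get_type_hints

class C:
    x: 'int'      # stored as the string 'int' in __annotations__
    y: str = ''

hints = get_type_hints(C)
```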

Note that if annotations are not found statically, then the __annotations__ dictionary is not created at all. Also the value of having annotations available locally does not offset the cost of having to create and populate the annotations dictionary on every function call. Therefore, annotations at function level are not evaluated and not stored.

While Python with this PEP will not object to:

since it will not care about the type annotation beyond “it evaluates without raising”, a type checker that encounters it will flag it, unless disabled with # type: ignore or @no_type_check .

However, since Python won’t care what the “type” is, if the above snippet is at the global level or in a class, __annotations__ will include {'alice': 'well done', 'bob': 'what a shame'} .
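A runnable reconstruction of that snippet, using exec so it behaves like module-level code:

```python
ns = {}
exec("alice: 'well done' = 'A+'\nbob: 'what a shame' = 'F-'", ns)
# ns['__annotations__'] == {'alice': 'well done', 'bob': 'what a shame'}
```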

These stored annotations might be used for other purposes, but with this PEP we explicitly recommend type hinting as the preferred use of annotations.

  • Should we introduce variable annotations at all? Variable annotations have already been around for almost two years in the form of type comments, sanctioned by PEP 484 . They are extensively used by third party type checkers (mypy, pytype, PyCharm, etc.) and by projects using the type checkers. However, the comment syntax has many downsides listed in Rationale. This PEP is not about the need for type annotations, it is about what should be the syntax for such annotations.
  • Introduce a new keyword: The choice of a good keyword is hard, e.g. it can’t be var because that is way too common a variable name, and it can’t be local if we want to use it for class variables or globals. Second, no matter what we choose, we’d still need a __future__ import.
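One spelling considered in this discussion reused def, which is, of course, not valid Python:

```python
# The rejected spelling would have looked like:
#     def primes: List[int] = []
src = "def primes: List[int] = []"
try:
    compile(src, "<example>", "exec")
    valid = True
except SyntaxError:
    valid = False
```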

The problem with this is that def means “define a function” to generations of Python programmers (and tools!), and using it also to define variables does not increase clarity. (Though this is of course subjective.)

  • Use function based syntax : It was proposed to annotate types of variables using var = cast(annotation[, value]) . Although this syntax alleviates some problems with type comments like absence of the annotation in AST, it does not solve other problems such as readability and it introduces possible runtime overhead.
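The ambiguous form discussed next annotated several names at once; such syntax is rejected by the compiler:

```python
src = "x, y: int = 1, 2"
try:
    compile(src, "<example>", "exec")
    valid = True
except SyntaxError:  # only a single target can be annotated
    valid = False
```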

Are x and y both of type T , or do we expect T to be a tuple type of two items that are distributed over x and y , or perhaps x has type Any and y has type T ? (The latter is what this would mean if this occurred in a function signature.) Rather than leave the (human) reader guessing, we forbid this, at least for now.

  • Parenthesized form (var: type) for annotations: It was brought up on python-ideas as a remedy for the above-mentioned ambiguity, but it was rejected since such syntax would be hairy, the benefits are slight, and the readability would be poor.

it is ambiguous what the types of y and z should be; also, the second line is difficult to parse.

  • Allow annotations in with and for statements: This was rejected because in for it would make it hard to spot the actual iterable, and in with it would confuse CPython’s LL(1) parser.
  • Evaluate local annotations at function definition time: This has been rejected by Guido because the placement of the annotation strongly suggests that it’s in the same scope as the surrounding code.
  • Store variable annotations also in function scope: The value of having the annotations available locally is just not enough to significantly offset the cost of creating and populating the dictionary on each function call.
  • Initialize variables annotated without assignment: It was proposed on python-ideas to initialize x in x: int to None or to an additional special constant like JavaScript’s undefined . However, adding yet another singleton value to the language would mean it needed to be checked for everywhere in the code. Therefore, Guido just said plain “No” to this.
  • Add also InstanceVar to the typing module: This is redundant because instance variables are way more common than class variables. The more common usage deserves to be the default.
  • Allow instance variable annotations only in methods: The problem is that many __init__ methods do a lot of things besides initializing instance variables, and it would be harder (for a human) to find all the instance variable annotations. And sometimes __init__ is factored into more helper methods so it’s even harder to chase them down. Putting the instance variable annotations together in the class makes it easier to find them, and helps a first-time reader of the code.
  • Use syntax x: class t = v for class variables: This would require a more complicated parser and the class keyword would confuse simple-minded syntax highlighters. Anyway we need to have ClassVar store class variables to __annotations__ , so a simpler syntax was chosen.
  • Forget about ClassVar altogether: This was proposed since mypy seems to be getting along fine without a way to distinguish between class and instance variables. But a type checker can do useful things with the extra information, for example flag accidental assignments to a class variable via the instance (which would create an instance variable shadowing the class variable). It could also flag instance variables with mutable defaults, a well-known hazard.
  • Use ClassAttr instead of ClassVar : The main reason why ClassVar is better is following: many things are class attributes, e.g. methods, descriptors, etc. But only specific attributes are conceptually class variables (or maybe constants).
  • Do not evaluate annotations, treat them as strings: This would be inconsistent with the behavior of function annotations that are always evaluated. Although this might be reconsidered in future, it was decided in PEP 484 that this would have to be a separate PEP.
  • Annotate variable types in class docstring: Many projects already use various docstring conventions, often without much consistency and generally without conforming to the PEP 484 annotation syntax yet. Also this would require a special sophisticated parser. This, in turn, would defeat the purpose of the PEP – collaborating with the third party type checking tools.
  • Implement __annotations__ as a descriptor: This was proposed to prohibit setting __annotations__ to something non-dictionary or non-None. Guido has rejected this idea as unnecessary; instead a TypeError will be raised if an attempt is made to update __annotations__ when it is anything other than a mapping.
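The example the following passage refers to annotates an attribute of a misspelled self; the base of a non-simple annotation target is still evaluated at runtime:

```python
class Cls:
    def __init__(self) -> None:
        slef.name: str  # 'slef' is deliberately misspelled

try:
    Cls()
    raised = False
except NameError:  # the target's base, slef, is evaluated and found undefined
    raised = True
```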

the name slef should be evaluated, just so that if it is not defined (as is likely in this example :-), the error will be caught at runtime. This is more in line with what happens when there is an initial value, and thus is expected to lead to fewer surprises. (Also note that if the target was self.name (this time correctly spelled :-), an optimizing compiler has no obligation to evaluate self as long as it can prove that it will definitely be defined.)

This PEP is fully backwards compatible.

An implementation for Python 3.6 can be found on GitHub.

This document has been placed in the public domain.


Last modified: 2024-06-11 22:12:09 GMT

Understanding type annotation in Python


Python is highly recognized for being a dynamically typed language, which implies that the datatype of a variable is determined at runtime. In other words, as a Python developer, you are not mandated to declare the data type of the value that a variable accepts because Python realizes the data type of this variable based on the current value it holds.


The flexibility of this feature, however, comes with some disadvantages that you typically would not experience when using a statically typed language like Java or C++:

  • More errors will be detected at runtime that could have been avoided at development time
  • The absence of compilation can lead to poorly performing code
  • Verbose variables make code harder to read
  • Incorrect assumptions about the behavior of specific functions
  • Errors due to type mismatch

Python 3.5 introduced type hints , which you can add to your code using the annotation syntax introduced in Python 3.0. With type hints, you can annotate variables and functions with datatypes. Tools like mypy , pyright , pytype , or pyre perform static type-checking and provide hints or warnings when these types are used inconsistently.

This tutorial will explore type hints and how you can add them to your Python code. It will focus on the mypy static type-checking tool and its operations in your code. You’ll learn how to annotate variables, functions, lists, dictionaries, and tuples. You’ll also learn how to work with the Protocol class, function overloading, and annotating constants.

Contents:

  • What is static type checking?
  • Adding type hints to variables
  • Adding type hints to functions
  • The Any type
  • Configuring mypy for type checking
  • Adding type hints to functions without return statements
  • Adding union type hints in function parameters
  • When to use the Iterable type to annotate function parameters
  • When to use the Sequence type
  • When to use the Mapping class
  • Using the MutableMapping class as a type hint
  • Using the TypedDict class as a type hint
  • Adding type hints to tuples
  • Creating and using protocols
  • Annotating overloaded functions
  • Annotating constants with Final
  • Dealing with type-checking in third-party packages

Before you begin

To get the most out of this tutorial, you should have:

  • Python ≥3.10 installed
  • Knowledge of how to write functions, f-strings , and running Python code
  • Knowledge of how to use the command-line

We recommend Python ≥3.10, as those versions have new and better type-hinting features. If you’re using Python ≤3.9, Python provides an alternative type-hint syntax that I’ll demonstrate in the tutorial.

When declaring a variable in statically-typed languages like C and Java, you are mandated to declare the data type of the variable. As a result, you cannot assign a value that does not conform to the data type you specified for the variable. For example, if you declare a variable to be an integer, you can’t assign a string value to it at any point in time.

In statically-typed languages, a compiler monitors the code as it is written and strictly ensures that the developer abides by the rules of the language. If no issues are found, the program can be run.

Using static type-checkers has numerous advantages; some of which include:

  • Detecting type errors
  • Preventing bugs
  • Documenting your code — anyone who wants to use an annotated function will know the type of parameters it accepts and the return value type at a glance
  • Additionally, IDEs understand your code much better and offer good autocompletion suggestions

Static typing in Python is optional and can be introduced gradually (this is known as gradual typing). With gradual typing, you can choose which portions of your code should be dynamically or statically typed. Static type-checkers will ignore the dynamically typed portions of your code, emitting no warnings for code without type hints; nothing prevents inconsistently typed code from running.

What is mypy?

Since Python is, by default, a dynamically typed language, tools like mypy were created to give you the benefits of a statically typed environment. mypy is an optional static type checker created by Jukka Lehtosalo. It checks annotated code in Python and emits warnings if annotated types are used inconsistently.

mypy also checks the code syntax and issues syntax errors when it encounters invalid syntax. Additionally, it supports gradual typing, allowing you to add type hints to your code slowly, at your own pace.

In Python, you can define a variable with a type hint using the following syntax:

Let’s look at the following variable:
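An unannotated variable (the name matches the surrounding text):

```python
name = "rocket"
```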

You assign a string value "rocket" to the name variable.

To annotate the variable, you need to append a colon ( : ) after the variable name, and declare a type str :
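```python
name: str = "rocket"
```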

In Python, you can read the type hints defined on variables using the __annotations__ dictionary:
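A sketch, using exec so the snippet behaves like module-level code regardless of where it runs:

```python
ns = {}
exec('name: str = "rocket"', ns)

print(ns["__annotations__"])  # {'name': <class 'str'>}
```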

The __annotations__ dictionary will show you the type hints on all global variables.


As mentioned earlier, the Python interpreter does not enforce types, so defining a variable with a wrong type won’t trigger an error:
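```python
name: str = 10  # inconsistent with the hint, but Python runs it anyway
```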

On the other hand, a static type checker like mypy will flag this as an error:

Declaring type hints for other data types follows the same syntax. The following are some of the simple types you can use to annotate variables:

  • float : float values, such as 3.10
  • int : integers, such as 3 , 7
  • str : strings, such as 'hello'
  • bool : boolean value, which can be True or False
  • bytes : represents byte values, such as b'hello'

Annotating variables with simple types like int or str may not be necessary because mypy can infer the type. However, when working with complex datatypes like lists, dictionaries, or tuples, it is important that you declare type hints on the corresponding variables, because mypy may struggle to infer types on those variables.

Adding type hints to functions

To annotate a function, declare the annotation after each parameter and the return value:

Let’s annotate the following function that returns a message:
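A function matching that description (the function name and message text are illustrative):

```python
def announcement(language, version):
    return f"{language} version {version} has been released"
```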

The function accepts a string as the first parameter, a float as the second parameter, and returns a string. To annotate the function parameters, we will append a colon (:) after each parameter and follow it with the parameter type:

  • language: str
  • version: float

To annotate the return value type, add -> immediately after closing the parameter parentheses, just before the function definition colon (:):
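The fully annotated version (same illustrative names as above):

```python
def announcement(language: str, version: float) -> str:
    return f"{language} version {version} has been released"
```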

The function now has type hints showing that it receives str and float arguments, and returns str .

When you invoke the function, the output should be similar to what is obtained as follows:

Although our code has type hints, the Python interpreter won’t provide warnings if you invoke the function with wrong arguments:
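For example (same illustrative function):

```python
def announcement(language: str, version: float) -> str:
    return f"{language} version {version} has been released"

# The interpreter does not enforce the hints: this still executes
result = announcement(True, "Python")
```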

The function executes successfully, even though you passed a Boolean True as the first argument and a string "Python" as the second argument. To receive warnings about these mistakes, we need to use a static type-checker like mypy.

Static type-checking with mypy

We will now begin our tutorial on static type-checking with mypy to get warnings about type errors in our code.

Create a directory called type_hints and move into it:

Create and activate the virtual environment:

Install the latest version of mypy with pip :

With mypy installed, create a file called and enter the following code:

Save the file and exit. We’re going to reuse the same function from the previous section.

Next, run the file with mypy:

As you can see, mypy does not emit any warnings. Static typing in Python is optional, and with gradual typing, you should not receive any warnings unless you opt in by adding type hints to functions. This allows you to annotate your code slowly.

Let’s now understand why mypy doesn’t show us any warnings.

The Any type

As we noted, mypy ignores code with no type hints. This is because it assumes the Any type on code without hints.

The following is how mypy sees the function:
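Conceptually, mypy treats the unannotated function as if it were written like this (same illustrative names):

```python
from typing import Any

def announcement(language: Any, version: Any) -> Any:
    return f"{language} version {version} has been released"
```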

The Any type is a dynamic type that’s compatible with, well, any type. So mypy will not complain whether the function argument types are bool , int , bytes , etc.

Now that we know why mypy doesn’t always issue warnings, let’s configure it to do that.

mypy can be configured to suit your workflow and code practices. You can run mypy in strict mode, using the --strict option to flag any code without type hints:

The --strict option is the most restrictive option and doesn’t support gradual typing. Most of the time, you won’t need to be this strict. Instead, adopt gradual typing to add the type hints in phases.

mypy also provides a --disallow-incomplete-defs option. This option flags functions that don’t have all of their parameters and return values annotated. It is handy when you forget to annotate a return value or a newly added parameter, causing mypy to warn you. You can think of it as a compiler that reminds you to abide by the rules of static typing during your code development.

To understand this, add the type hints to the parameters only and omit the return value types (pretending you forgot):
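The partially annotated function would look like this (illustrative names, return type deliberately omitted):

```python
def announcement(language: str, version: float):  # return type forgotten
    return f"{language} version {version} has been released"
```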

Run the file with mypy without any command-line option:

As you can see, mypy does not warn us that we forgot to annotate the return type. It assumes the Any type on the return value. If the function was large, it would be difficult to figure out the type of value it returns. To know the type, we would have to inspect the return value, which is time-consuming.

To protect ourselves from these issues, pass the --disallow-incomplete-defs option to mypy:

Now run the file again with the --disallow-incomplete-defs option enabled:

Beyond warning you about missing type hints, mypy also flags any datatype-value mismatch. Consider the example below, where bool and str values are passed as arguments to a function that accepts str and float respectively:

Let’s see if mypy will warn us about this now:

Great! mypy warns us that we passed the wrong arguments to the function.

Now, let’s eliminate the need to type the --disallow-incomplete-defs option each time we run mypy.

mypy allows you to save options in a mypy.ini file. When running, mypy will check that file and apply the options saved there, so you don’t need to add the --disallow-incomplete-defs option on the command line each time.

Create the mypy.ini file in your project root directory and enter the following code:
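A minimal mypy.ini matching the options discussed here:

```ini
[mypy]
python_version = 3.10
disallow_incomplete_defs = True
```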

In the mypy.ini file, we tell mypy that we are using Python 3.10 and that we want to disallow incomplete function definitions.

Save the file in your project, and next time you can run mypy without any command-line options:

mypy has many options you can add to the mypy.ini file. I recommend referring to the mypy command line documentation to learn more.

Not all functions have a return statement. When you create a function with no return statement, it still returns a None value:

The None value isn’t totally useful, as you may not be able to perform an operation with it; it only shows that the function executed successfully. You can hint that a function has no return value by annotating the return type with None :
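For example (the function name is illustrative):

```python
def show(message: str) -> None:
    print(message)

result = show("hello")  # a function without a return statement returns None
```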

When a function accepts a parameter of more than one type, you can use the union character ( | ) to separate the types.

For example, the following function accepts a parameter that can be either str or int :

You can invoke the function show_type  with a string or an integer, and the output depends on the data type of the argument it receives.

To annotate the parameter, we will use the union character | , which was introduced in Python 3.10, to separate the types as follows:

The union | now shows that the parameter num is either str or int .

If you’re using Python ≤3.9, you need to import Union from the typing module. The parameter can be annotated as follows:
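```python
from typing import Union

def show_type(num: Union[str, int]) -> None:
    print(f"{num} is of type {type(num).__name__}")
```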

Adding type hints to optional function parameters

Not all parameters in a function are required; some are optional. Here’s an example of a function that takes an optional parameter:

The second parameter title is an optional parameter that has a default value of None if it receives no argument at the point of invoking the function. The typing module provides the Optional[<datatype>] annotation to annotate this optional parameter with a type hint:

Below is an example of how you can perform this annotation:
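A sketch (the function name and behavior are illustrative):

```python
from typing import Optional

def greet(name: str, title: Optional[str] = None) -> str:
    # title defaults to None when no argument is supplied
    return f"{title} {name}" if title else name
```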

Adding type hints to lists

Python lists are annotated based on the types of the elements they have or expect to have. Starting with Python ≥3.9, to annotate a list you use the list type followed by [] , which contains the element’s data type.

For example, a list of strings can be annotated as follows:

If you’re using Python ≤3.8, you need to import List from the typing module:
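A sketch showing both spellings (the variable names are illustrative); the lowercase list form requires Python ≥3.9:

```python
from typing import List  # only needed on Python <= 3.8

names: list[str] = ["Ada", "Grace"]   # Python >= 3.9
legacy_names: List[str] = ["Alan"]    # Python <= 3.8 style, still valid later
```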

In function definitions, the Python documentation recommends that the list type should be used to annotate the return types:

However, for function parameters, the documentation recommends using these abstract collection types:

The Iterable type should be used when the function takes an iterable and iterates over it.

An iterable is an object that can return one item at a time. Examples range from lists, tuples, and strings to anything that implements the __iter__ method.

You can annotate an Iterable as follows, in Python ≥3.9:
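A sketch of the double_elements function described below (the body is assumed from the surrounding description):

```python
from collections.abc import Iterable

def double_elements(items: Iterable[int]) -> list[int]:
    # Accepts any object implementing __iter__, e.g. lists and tuples
    return [item * 2 for item in items]
```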

In the function, we define the items parameter and assign it an Iterable[int] type hint, which specifies that the Iterable contains int elements.

The Iterable type hint accepts anything that has the __iter__ method implemented. Lists and tuples have the method implemented, so you can invoke the double_elements function with a list or a tuple, and the function will iterate over them.

To use Iterable in Python ≤3.8, you have to import it from the typing module:

Using Iterable for parameters is more flexible than a list type hint, because it accepts any object that implements the __iter__ method. You wouldn’t need to convert a tuple, for example, or any other iterable, to a list before passing it into the function.

A sequence is a collection of elements that allows you to access an item or compute its length.

A Sequence type hint can accept a list, string, or tuple. This is because they have special methods: __getitem__ and __len__ . When you access an item from a sequence using  items[index] , the __getitem__ method is used. When getting the length of the sequence len(items) , the __len__ method is used.

In the following example, we use the Sequence[int] type to accept a sequence that has integer items:
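A sketch of such a function (the function name is assumed; the parameter name data matches the description below):

```python
from collections.abc import Sequence

def get_last_element(data: Sequence[int]) -> int:
    # Negative indexing goes through __getitem__
    return data[-1]
```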

This function accepts a sequence and accesses the last element from it with data[-1] . This uses the __getitem__ method on the sequence to access the last element.

As you can see, we can call the function with a tuple or list and the function works properly. We don’t have to limit parameters to list if all the function does is get an item.

For Python ≤3.8, you need to import Sequence from the typing module:

Adding type hints to dictionaries

To add type hints to dictionaries, you use the dict type followed by [key_type, value_type] :

For example, the following dictionary has both the key and the value as a string:

You can annotate it as follows:
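A sketch of the person dictionary with its annotation (the key/value contents are illustrative assumptions):

```python
# Both keys and values are annotated as str
person: dict[str, str] = {
    "first_name": "Ada",
    "last_name": "Lovelace",
}
```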

The dict type specifies that the person dictionary keys are of type str and values are of type str .

If you’re using Python ≤3.8, you need to import Dict from the typing module.

In function definitions, the documentation recommends using dict as a return type:

For function parameters, it recommends using these abstract base classes:

  • Mapping
  • MutableMapping

In function parameters, when you use the dict type hint, you limit the arguments the function can take to only dict , defaultdict , or OrderedDict . But there are many dictionary subtypes, such as UserDict and ChainMap , that can be used similarly.

You can access an element of these, iterate over them, or compute their length just like you can with a dictionary. This is because they implement:

  • __getitem__ : for accessing an element
  • __iter__ : for iterating
  • __len__ : for computing the length

So instead of limiting the structures the parameter accepts, you can use the more generic Mapping type, since it accepts:

  • dict
  • defaultdict
  • OrderedDict
  • UserDict
  • ChainMap

Another benefit of the Mapping type is that it specifies that you are only reading the dictionary and not mutating it.

The following example is a function that accesses item values from a dictionary:
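A sketch of such a read-only function (the function name and the keys it reads are assumptions; the parameter name student matches the description below):

```python
from collections.abc import Mapping

def get_full_name(student: Mapping[str, str]) -> str:
    # Only reads from the mapping; no mutation happens here
    return f'{student["first_name"]} {student["last_name"]}'
```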

The Mapping type hint in the above function is parameterized as [str, str] , which specifies that the student data structure has keys and values that are both of type str .

If you’re using Python ≤3.8, import Mapping from the typing module:

Use MutableMapping as a type hint in a parameter when the function needs to mutate the dictionary or its subtypes. Examples of mutation are deleting items or changing item values.

The MutableMapping class accepts any instance that implements the following special methods:

  • __getitem__
  • __setitem__
  • __delitem__

The __delitem__ and __setitem__ methods are used for mutation, and they are the methods that distinguish the Mapping type from the MutableMapping type.

In the following example, the function accepts a dictionary and mutates it:
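A sketch consistent with the description below (the function name is assumed; student and first_name match the surrounding text):

```python
from collections.abc import MutableMapping

def update_first_name(student: MutableMapping[str, str], first_name: str) -> None:
    # Assigning to a key invokes __setitem__, i.e. a mutation
    student["first_name"] = first_name
```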

In the function body, the value in the first_name variable is assigned to the dictionary and replaces the value paired to the first_name key. Changing a dictionary key value invokes the __setitem__ method.

If you are on Python ≤3.8, import MutableMapping from the typing module.

So far, we have looked at how to annotate dictionaries with dict , Mapping , and MutableMapping , but the dictionaries in those examples had keys and values of a single type: str . However, dictionaries can contain a combination of other data types.

Here is an example of a dictionary whose values are of different types:

The dictionary values range from str , int , and list . To annotate the dictionary, we will use a TypedDict that was introduced in Python 3.8. It allows us to annotate the value types for each property with a class-like syntax:
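A sketch of such a TypedDict; the class name StudentDict comes from the text below, but the specific field names and sample values are assumptions for illustration:

```python
from typing import TypedDict

class StudentDict(TypedDict):
    first_name: str
    last_name: str
    age: int
    subjects: list[str]

# Annotating a dictionary variable with the TypedDict
student: StudentDict = {
    "first_name": "Ada",
    "last_name": "Lovelace",
    "age": 25,
    "subjects": ["physics", "math"],
}

# Annotating a function parameter that expects such a dictionary
def get_student_info(data: StudentDict) -> None:
    print(data)
```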

We define a class StudentDict that inherits from TypedDict . Inside the class, we define each field and its expected type.

With the TypedDict defined, you can use it to annotate a dictionary variable as follows:

You can also use it to annotate a function parameter that expects a dictionary as follows:

If the dictionary argument doesn’t match StudentDict , mypy will show a warning.

Adding type hints to tuples

A tuple stores a fixed number of elements. To add type hints to it, you use the tuple type, followed by [] , which takes the type of each element.

The following is an example of how to annotate a tuple with two elements:

Regardless of the number of elements the tuple contains, you’re required to declare the type for each one of them.

The tuple type can be used as a type hint for a parameter or return type value:

If your tuple is expected to have an unknown number of elements of the same type, you can use tuple[type, ...] to annotate them:
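A sketch of both forms (variable names and values are illustrative assumptions):

```python
# Fixed-size tuples: one type per element position
point: tuple[int, int] = (3, 4)
record: tuple[str, int, bool] = ("Ada", 36, True)

# Variable-length tuple of a single element type
scores: tuple[int, ...] = (90, 85, 77)
```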

To annotate a named tuple, you need to define a class that inherits from NamedTuple . The class fields define the elements and their types:

If you have a function that takes a named tuple as a parameter, you can annotate the parameter with the named tuple:
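A sketch of a named tuple and a function annotated with it (the class name, fields, and function are illustrative assumptions):

```python
from typing import NamedTuple

class StudentTuple(NamedTuple):
    name: str
    age: int

def display(student: StudentTuple) -> str:
    return f"{student.name} is {student.age} years old"
```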

There are times when you don’t care about the argument a function takes. You only care if it has the method you want.

To implement this behavior, you’d use a protocol. A protocol is a class that inherits from the Protocol class in the typing module. In the protocol class, you define one or more methods that the static type checker should look for anywhere the protocol type is used.

Any object that implements the methods on the protocol class will be accepted. You can think of a protocol as an interface found in programming languages such as Java or TypeScript. Python provides predefined protocols; a good example is the Sequence type. It doesn’t matter what kind of object it is: as long as it implements the __getitem__ and __len__ methods, it is accepted.

Let’s consider the following code snippets. Here is an example of a function that calculates age by subtracting the birth year from the current year:

The function takes two parameters: current_year , an integer, and data , an object. Within the function body, we find the difference between current_year and the value returned from the get_birthyear() method.

Here is an example of a class that implements the get_birthyear method:

This is one example of such a class, but there could be other classes, such as Dog or Cat , that implement the get_birthyear method. Annotating all the possible types would be cumbersome.

Since we only care about the get_birthyear() method, let’s create a protocol for it:

The class HasBirthYear inherits from Protocol , which is part of the typing module. To make the protocol aware of the get_birthyear method, we redefine the method exactly as it is done in the Person class example we saw earlier. The only difference is the function body, which we replace with an ellipsis ( ... ).

With the Protocol defined, we can use it on the calc_age function to add a type hint to the data parameter:

Now the data parameter has been annotated with the HasBirthYear protocol. The function can now accept any object as long as it has the get_birthyear method.

Here is the full implementation of the code using Protocol :
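A sketch of the full example as described above; the names HasBirthYear, Person, get_birthyear, calc_age, and current_year all come from the text, while Person's constructor details are assumptions:

```python
from typing import Protocol

class HasBirthYear(Protocol):
    def get_birthyear(self) -> int: ...

class Person:
    def __init__(self, name: str, birthyear: int) -> None:
        self.name = name
        self.birthyear = birthyear

    def get_birthyear(self) -> int:
        return self.birthyear

def calc_age(current_year: int, data: HasBirthYear) -> int:
    # Any object with a get_birthyear() method is accepted
    return current_year - data.get_birthyear()
```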

Running the code with mypy will give you no issues.

Some functions produce different outputs based on the inputs you give them. For example, let’s look at the following function:

When you call the function with an integer as the first argument, it returns an integer. If you invoke it with a list as the first argument, it returns a list in which each element has been incremented by the second argument’s value.

Now, how can we annotate this function? Based on what we know so far, our first instinct would be to use the union syntax:

However, this could be misleading due to its ambiguity. The above code describes a function that accepts an integer as the first argument, and the function returns either a list or an int . Similarly, when you pass a list as the first argument, the function will return either a list or an int .

You can implement function overloading to properly annotate this function. With function overloading, you define multiple signatures of the same function without bodies, add type hints to them, and place them before the main function implementation.

To do this, annotate the function with the overload decorator from the typing module. Let’s define two overloads before the add_number function implementation:
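A sketch of the overloaded add_number function described in the text (the implementation body is assumed from the behavior described above):

```python
from typing import overload

@overload
def add_number(value: int, num: int) -> int: ...
@overload
def add_number(value: list, num: int) -> list: ...

def add_number(value, num):
    # The implementation itself carries no hints; the overloads above do
    if isinstance(value, list):
        return [v + num for v in value]
    return value + num
```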

We define two overloads before the main function add_number . The overloads’ parameters are annotated with the appropriate types, along with their return value types. Their function bodies contain an ellipsis ( ... ).

The first overload shows that if you pass int as the first argument, the function will return int .

The second overload shows that if you pass a list as the first argument, the function will return a list .

Finally, the main add_number implementation does not have any type hints.

As you can now see, the overloads annotate the function behavior much better than using unions.

At the time of writing, Python does not have an inbuilt way of defining constants. Starting with Python 3.8, however, you can use the Final type from the typing module, and mypy will emit warnings if there are attempts to change the variable’s value.
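A minimal sketch using the MIN variable discussed below:

```python
from typing import Final

MIN: Final = 10

# mypy flags the next line, e.g.: Cannot assign to final name "MIN".
# At runtime, however, the reassignment succeeds.
MIN = MIN + 3
```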

Running the code with mypy will issue a warning:

This is because we are trying to modify the MIN variable value to MIN = MIN + 3 .

Note that, without mypy or any other static type checker, Python won’t enforce this, and the code will run without any issues:

As you can see, at runtime you can change the value of MIN at any time. To enforce a constant variable in your codebase, you have to depend on mypy.

While you may be able to add annotations to your code, the third-party modules you use may not have any type hints. As a result, mypy will warn you.

If you receive those warnings, you can add a # type: ignore comment to the import line so that mypy ignores the third-party module code:

You also have the option of adding type hints with stubs. To learn how to use stubs, see Stub files in the mypy documentation.

This tutorial explored the differences between statically typed and dynamically typed code. You learned the different approaches you can use to add type hints to your functions and classes. You also learned about static type checking with mypy and how to add type hints to variables, functions, lists, dictionaries, and tuples, as well as how to work with protocols, function overloading, and annotating constants.

To continue building your knowledge, visit typing — Support for type hints . To learn more about mypy, visit the mypy documentation .



Annotations Best Practices

Larry Hastings

Accessing The Annotations Dict Of An Object In Python 3.10 And Newer

Python 3.10 adds a new function to the standard library: inspect.get_annotations() . In Python versions 3.10 and newer, calling this function is the best practice for accessing the annotations dict of any object that supports annotations. This function can also “un-stringize” stringized annotations for you.

If for some reason inspect.get_annotations() isn’t viable for your use case, you may access the __annotations__ data member manually. Best practice for this changed in Python 3.10 as well: as of Python 3.10, o.__annotations__ is guaranteed to always work on Python functions, classes, and modules. If you’re certain the object you’re examining is one of these three specific objects, you may simply use o.__annotations__ to get at the object’s annotations dict.

However, other types of callables–for example, callables created by functools.partial() –may not have an __annotations__ attribute defined. When accessing the __annotations__ of a possibly unknown object, best practice in Python versions 3.10 and newer is to call getattr() with three arguments, for example getattr(o, '__annotations__', None) .

Before Python 3.10, accessing __annotations__ on a class that defines no annotations but that has a parent class with annotations would return the parent’s __annotations__ . In Python 3.10 and newer, the child class’s annotations will be an empty dict instead.

Accessing The Annotations Dict Of An Object In Python 3.9 And Older

In Python 3.9 and older, accessing the annotations dict of an object is much more complicated than in newer versions. The problem is a design flaw in these older versions of Python, specifically to do with class annotations.

Best practice for accessing the annotations dict of other objects–functions, other callables, and modules–is the same as best practice for 3.10, assuming you aren’t calling inspect.get_annotations() : you should use three-argument getattr() to access the object’s __annotations__ attribute.

Unfortunately, this isn’t best practice for classes. The problem is that, since __annotations__ is optional on classes, and because classes can inherit attributes from their base classes, accessing the __annotations__ attribute of a class may inadvertently return the annotations dict of a base class. As an example:
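A minimal sketch of the pitfall (class names and annotations are assumptions for illustration):

```python
class Base:
    a: int = 3
    b: str = 'abc'

class Derived(Base):
    pass

# On Python 3.9 and older this prints Base's annotations, because
# Derived defines none of its own; on 3.10+ it prints an empty dict.
print(Derived.__annotations__)
```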

This will print the annotations dict from Base , not Derived .

Your code will have to have a separate code path if the object you’re examining is a class ( isinstance(o, type) ). In that case, best practice relies on an implementation detail of Python 3.9 and before: if a class has annotations defined, they are stored in the class’s __dict__ dictionary. Since the class may or may not have annotations defined, best practice is to call the get method on the class dict.

To put it all together, here is some sample code that safely accesses the __annotations__ attribute on an arbitrary object in Python 3.9 and before:
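A sketch of such safe access, wrapped in a hypothetical helper function for self-containment:

```python
def safe_annotations(o):
    # Classes need __dict__.get so we don't accidentally pick up
    # a base class's annotations via attribute inheritance
    if isinstance(o, type):
        return o.__dict__.get('__annotations__', None)
    # For functions, other callables, and modules, three-argument
    # getattr() is safe in all versions
    return getattr(o, '__annotations__', None)
```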

After running this code, ann should be either a dictionary or None . You’re encouraged to double-check the type of ann using isinstance() before further examination.

Note that some exotic or malformed type objects may not have a __dict__ attribute, so for extra safety you may also wish to use getattr() to access __dict__ .

Manually Un-Stringizing Stringized Annotations

In situations where some annotations may be “stringized”, and you wish to evaluate those strings to produce the Python values they represent, it really is best to call inspect.get_annotations() to do this work for you.

If you’re using Python 3.9 or older, or if for some reason you can’t use inspect.get_annotations() , you’ll need to duplicate its logic. You’re encouraged to examine the implementation of inspect.get_annotations() in the current Python version and follow a similar approach.

In a nutshell, if you wish to evaluate a stringized annotation on an arbitrary object o :

If o is a module, use o.__dict__ as the globals when calling eval() .

If o is a class, use sys.modules[o.__module__].__dict__ as the globals , and dict(vars(o)) as the locals , when calling eval() .

If o is a wrapped callable using functools.update_wrapper() , functools.wraps() , or functools.partial() , iteratively unwrap it by accessing either o.__wrapped__ or o.func as appropriate, until you have found the root unwrapped function.

If o is a callable (but not a class), use o.__globals__ as the globals when calling eval() .

However, not all string values used as annotations can be successfully turned into Python values by eval() . String values could theoretically contain any valid string, and in practice there are valid use cases for type hints that require annotating with string values that specifically can’t be evaluated. For example:

PEP 604 union types using | , before support for this was added to Python 3.10.

Definitions that aren’t needed at runtime, only imported when typing.TYPE_CHECKING is true.

If eval() attempts to evaluate such values, it will fail and raise an exception. So, when designing a library API that works with annotations, it’s recommended to only attempt to evaluate string values when explicitly requested to by the caller.

Best Practices For __annotations__ In Any Python Version

You should avoid assigning to the __annotations__ member of objects directly. Let Python manage setting __annotations__ .

If you do assign directly to the __annotations__ member of an object, you should always set it to a dict object.

If you directly access the __annotations__ member of an object, you should ensure that it’s a dictionary before attempting to examine its contents.

You should avoid modifying __annotations__ dicts.

You should avoid deleting the __annotations__ attribute of an object.

__annotations__ Quirks

In all versions of Python 3, function objects lazy-create an annotations dict if no annotations are defined on that object. You can delete the __annotations__ attribute using del fn.__annotations__ , but if you then access fn.__annotations__ the object will create a new empty dict that it will store and return as its annotations. Deleting the annotations on a function before it has lazily created its annotations dict will throw an AttributeError ; using del fn.__annotations__ twice in a row is guaranteed to always throw an AttributeError .

Everything in the above paragraph also applies to class and module objects in Python 3.10 and newer.

In all versions of Python 3, you can set __annotations__ on a function object to None . However, subsequently accessing the annotations on that object using fn.__annotations__ will lazy-create an empty dictionary as per the first paragraph of this section. This is not true of modules and classes, in any Python version; those objects permit setting __annotations__ to any Python value, and will retain whatever value is set.

If Python stringizes your annotations for you (using from __future__ import annotations ), and you specify a string as an annotation, the string will itself be quoted. In effect the annotation is quoted twice. For example:
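A sketch of the double-quoting effect (function and parameter names assumed):

```python
from __future__ import annotations

def foo(a: "str"):
    pass

print(foo.__annotations__)
```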

This prints {'a': "'str'"} . This shouldn’t really be considered a “quirk”; it’s mentioned here simply because it might be surprising.


1. Type Annotations And Hints

By Bernd Klein. Last modified: 13 Jul 2023.


This chapter of our Python tutorial is about type hints, also called type annotations. The two terms are synonyms and can be used interchangeably; both refer to the practice of adding type information to variables, function parameters, and return values in Python code.

But let's start by looking at how Python is designed: Python is both a strongly typed and a dynamically typed language. Strong typing means that variables do have a type and that the type matters when performing operations on a variable. Dynamic typing means that the type of the variable is determined only during runtime. This means that types don't have to be declared in the program.


Yet, type annotations were introduced with Python 3.5. Are they really necessary? Do we have to use them?


The Python language itself doesn’t care: the compiler does not enforce or check type annotations. Python remains a dynamically typed language, and type annotations are considered optional metadata. The Python interpreter does not perform any type checking based on these annotations during runtime.

Situation in C and C++

If you know another programming language such as C or C++, you are used to declaring what data type you are working with. For example, this is how you would declare an integer variable in C or C++:

This is known as type declaration. From this moment on, we - and the C/C++ compiler - know that "a" is of type integer. We can assign integers to a:

However, this is completely different in Python. Python doesn’t have type declarations. Variables are just references to objects, as we have seen in our chapter on Data Types and Variables.


Adding Type Hints to Variables

If you know another programming language such as C or C++, you are used to declaring what data type you are working with. For example, this is how you would declare an integer in C.

Yet, Python is a dynamically typed programming language, whereas C and C++ are statically typed. When we assign a value to a variable, Python automatically creates an object of the corresponding class:

Guessing by the variable name 'programming_language', we assume that whoever wrote the Python code intended this variable to reference strings. Yet, Python doesn't "care": all kinds of data types can be assigned to this variable name:

We will now demonstrate what Python offers to take care of these "type intentions", or, as they are called in Python jargon, "type hints", aka "type annotations". It's possible to define a variable with a type hint using the following syntax in Python:

We can change our previous variable declaration accordingly with a Python type hint:

Alternatively, we could have written this code like this:
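A sketch of both variants, using the programming_language variable from above:

```python
# Annotation with an immediate assignment
programming_language: str = "Python"

# Alternatively: annotate first, assign later
language: str
language = "Python"
```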

Even though this now looks very similar to C or C++, one shouldn't be misled. The behaviour of Python hasn't changed: we can still assign any data type to this variable. Python doesn't care, as we see in the following code:

Who Cares About Type Hints / Annotations

Tools that do care about type hints include mypy, pyright, pydantic, and IDEs.

It's important to note that type annotations in Python are purely optional and do not affect the runtime behavior of your code. They are simply a way to provide additional information to tools that can help improve the quality and maintainability of your code.

However, there are external tools like

  • static type checkers like mypy and pyright
  • libraries like pydantic
  • IDEs like PyCharm, Spyder, and VS Code

that can analyze your code and perform static type checking based on the type annotations. These tools parse the code, interpret the type hints, and provide feedback on potential type-related errors and inconsistencies.

mypy is a static type checker that checks annotated Python code. It emits warnings if annotated types are used inconsistently. It allows gradual typing, which means you can add type hints bit by bit, as you like.

mypy is an external program, which needs to be installed first, for example with pip:

After this, it can be run in a shell (e.g. bash under Linux) with the Python program to be checked as an argument:
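For example (the file name example.py is just a placeholder):

```shell
pip install mypy
mypy example.py
```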

We will demonstrate how to use mypy with the following Python code. You have to know a few things about Jupyter Notebook cells (the IPython shell): with the cell magic %%writefile we can write the content of a cell into a file. In IPython syntax, the exclamation mark (!) allows users to run shell commands (from the operating system) from inside a Jupyter Notebook code cell. Simply start a line of code with ! and it will run the command in the shell. We use this to call mypy on the Python file.

Let's test this annotated code now with mypy:

Displaying the type of an expression


If you find yourself unsure about how mypy handles a specific section of code, you can use reveal_type(expr) to ask mypy to display the inferred static type of an expression, which offers helpful insight.

The mypy documentation says:

"reveal_type is only understood by mypy and doesn’t exist in Python, if you try to run your program. You’ll have to remove any reveal_type calls before you can run your code. reveal_type is always available and you don’t need to import it."

This means that you will get an exception if you run a Python program containing a reveal_type call:

It only makes sense if you use it with a mypy call:

There is a way to use it in Python programs: set reveal_type to a function that does nothing when we are not in TYPE_CHECKING mode.
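A sketch of this trick (the variable number is an illustrative assumption):

```python
from typing import TYPE_CHECKING

if not TYPE_CHECKING:
    # At runtime, provide a no-op replacement so the program still runs
    def reveal_type(obj):
        return obj

number = 42
reveal_type(number)  # mypy would report: Revealed type is "builtins.int"
```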

Reveal Locals

At any line in a file, you can use reveal_locals() to view the types of all local variables at once.

pyright can also be used instead of mypy .

It's important to note that both pyright and mypy are actively maintained and have a strong community backing. They share many common features and can both provide valuable static type checking for Python projects. The choice between them depends on your specific needs, preferences, and the particular characteristics of your project.

Difference between pyright and mypy

Pyright implements its own parser, which recovers gracefully from syntax errors and continues parsing the remainder of the source file. By comparison, mypy uses the parser built in to the Python interpreter, and it does not support recovery after a syntax error.

From the README: Pyright is typically 5x or more faster than mypy and other type checkers that are written in Python. It is meant for large Python source bases. It can run in a “watch” mode and performs fast incremental updates when files are modified.

Type Comments

No first-class syntax support for explicitly marking variables as being of a specific type is added by this PEP. To help with type inference in complex cases, a comment of the following format may be used:

Type annotations for variables in Python are a way to provide explicit type information about the expected type of a variable. They can help improve code readability, provide documentation, and enable static type checking with tools like mypy . Here's how you can use type annotations for variables:

As we have mentioned before: It's important to note that type annotations for variables in Python are optional and do not enforce the type at runtime. Python remains a dynamically typed language, so the actual type of a variable can still change during runtime.

Tuples and Lists

Type annotations for tuples and lists in Python allow you to specify the expected types of the elements within these data structures. Here's how you can use type annotations for tuples and lists:

To make the code more readable, we can annotate these containers. Since Python 3.9+, we can write the built-in list and tuple types directly, without importing List and Tuple from the typing module:

Literal Ellipsis

In Python type annotations, the literal ellipsis (...) is a special type hint called "ellipsis" or "ellipsis type". It represents an unspecified or unknown type. The ellipsis type hint is often used when the specific type of a value or a part of a type is not known or is intentionally left unspecified.

Type Aliases

PEP 613 summarizes Type Aliases like this:

It's recommended to capitalize type aliases! They are user-defined types, just like classes, and class names are also usually capitalized.

Necessity for TypeVars:

Readability Counts (Zen of Python)

Type aliases can be used in annotated function definitions. By using the alias Url in the following example, we clearly improve the readability of the code:
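A sketch using the Url alias from the text (the fetch function and its behavior are illustrative assumptions):

```python
# Url is an alias: just another, more descriptive name for str
Url = str

def fetch(url: Url, timeout: int) -> str:
    return f"fetching {url} with timeout {timeout}"
```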

Another example:

Type aliases shouldn't be confused with "untyped global expressions" and "typed global" expressions:

TypeVar is a Python construct used in type hints to indicate a placeholder for a generic type, allowing for flexible and reusable code.


Imagine we would like to define three functions with the same name but different signatures (type annotations). We would like to do the following, which is not possible in Python:

We will start with an extremely simple function. This is a function which takes one object as an input and returns the object without any changes.

The above function definition is untyped. We can annotate it by using Any :

There is a problem with this way of annotating. Any can be really anything, which is fine in general, but not in the case of this function. The nature of the function is this: the argument can be any type, but the return type depends on the input type; more precisely, it has to be the same type as the input argument.

In the following code snippet we use a TypeVar. The function identity now takes an argument obj of type T and returns an object of the same type T. The T in the function signature is a type variable, which serves as a placeholder for a specific type that will be determined when the function is used.
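A sketch of the identity function with a TypeVar, as described:

```python
from typing import TypeVar

T = TypeVar("T")

def identity(obj: T) -> T:
    # Whatever type goes in is the type that comes out
    return obj
```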

By using the type variable T in the function signature and as the return type, the function maintains type safety and allows for a wide range of types to be handled without sacrificing type checking. It provides flexibility and code reusability, as the function can be used with different types while ensuring consistency in the return type.

We define now another simple function with a TypeVar:

mypy cannot find out if 42 * x has the same type as x . We can use a constraint for the types.

If we define

The T can stand for anything, as we have seen.

If we write

the types must be str or bytes
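A sketch of such a constrained type variable, using the concat function discussed below (the exact original code is not shown here):

```python
from typing import TypeVar

# T may only ever stand for str or bytes.
T = TypeVar("T", str, bytes)

def concat(a: T, b: T) -> T:
    return a + b

concat("Hello, ", "world")    # OK: both str
concat(b"Hello, ", b"world")  # OK: both bytes
# concat("Hello, ", b"world") # rejected by mypy: str/bytes mix
```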

A TypeVar() expression must always directly be assigned to a variable (it should not be used as part of a larger expression). The argument to TypeVar() must be a string equal to the variable name to which it is assigned. Type variables must not be redefined .


TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parameterized by type variables). For example, we can define a type variable that ranges over just str and bytes. By default, a type variable ranges over all possible types. (PEP 484, Type Hints)

We called the function concat with two str arguments and with two bytes arguments, but not with a mix of str and bytes arguments. A mix is not possible, as we can see in the following code example:

Using Union

Union[X, Y] is equivalent to X | Y and means either X or Y.

To define a union, use e.g. Union[int, str] or the shorthand int | str . Using that shorthand is recommended.

The following rules apply:

  • The arguments must be types and there must be at least one.
  • Unions of unions are flattened, e.g.: Union[Union[int, str], float] == Union[int, str, float]
  • Unions of a single argument vanish, e.g.: Union[int] == int # The constructor actually returns int
  • Redundant arguments are skipped, e.g.: Union[int, str, int] == Union[int, str] == int | str
  • When comparing unions, the argument order is ignored, e.g.: Union[int, str] == Union[str, int]
  • You cannot subclass or instantiate a Union.

Let's rewrite the previous example with unions:
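A possible rewrite with Union (a sketch; since the two parameters are now independent, the body has to handle mixed inputs itself):

```python
from typing import Union

def concat(a: Union[str, bytes], b: Union[str, bytes]) -> str:
    # With a Union the parameters are independent: unlike the TypeVar
    # version, a str and a bytes argument may now be mixed, so we
    # normalise both sides to str before joining them.
    left = a if isinstance(a, str) else a.decode()
    right = b if isinstance(b, str) else b.decode()
    return left + right

concat("Hello, ", b"world")  # a mix is now allowed
```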

Have you noticed the difference between TypeVar and Union ? In the case of TypeVar, we had "The type getting in has to get out" whereas in Union they can be different!

Creating New Types

The new types will be treated by the type checker as if they were subclasses of the original types.

By using NewType you can declare a type without actually creating new class instances. In the type checker, NewType('UserId', int) creates a subclass of int named "UserId"

NewType('UserId', int) is not a class but essentially the identity function, so x is NewType('UserId', int)(x) is always true.

UserId behaves similarly to a type that had been created with
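A sketch showing the NewType declaration and the class it conceptually corresponds to:

```python
from typing import NewType

UserId = NewType("UserId", int)

# For the type checker only, this is roughly equivalent to:
#     class UserId(int): ...
# At runtime, however, UserId(x) simply returns x unchanged.

some_id = UserId(524313)
type(some_id)  # <class 'int'> -- no new class exists at runtime
```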

Recall that the use of a type alias declares two types to be equivalent to one another. Doing

Alias = Original

will make the static type checker treat Alias as being exactly equivalent to Original in all cases. This is useful when you want to simplify complex type signatures.

In contrast, NewType declares one type to be a subtype of another. Doing

Derived = NewType('Derived', Original)

will make the static type checker treat Derived as a subclass of Original, which means a value of type Original cannot be used in places where a value of type Derived is expected. This is useful when you want to prevent logic errors with minimal runtime cost.

The cast function is a utility provided by the typing module in Python. It allows you to explicitly specify the type of an expression or variable, providing a hint to the type checker without affecting the runtime behavior of the code.

The cast function has the following signature:

The first argument, typ, is the type that you want to cast the value to. The second argument, val, is the value that you want to cast.

The following is a useful example illustrating the usage of cast :
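A reconstruction matching the description that follows (the function name calculate_average and its behaviour are taken from the text):

```python
from typing import cast

def calculate_average(numbers: list[int]) -> float:
    total = sum(numbers)   # inferred as int
    count = len(numbers)
    # Tell the checker to treat `total` as float for the division;
    # cast() does nothing at runtime.
    return cast(float, total) / count

calculate_average([2, 4, 6, 8])
```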

In this example, the function calculate_average takes a list of numbers as input and calculates the average value. The variable total represents the sum of the numbers, and count represents the number of elements in the list. To calculate the average, we divide total by count .

The use of cast(float, total) is an explicit type cast annotation. It tells mypy to treat total as a float type in the context of the division, even though it was originally calculated as the sum of integers. This helps to avoid potential type mismatch warnings or errors from mypy .

Note that cast is a runtime no-op, meaning it has no effect on the actual execution of the code. Its purpose is to provide a hint to the type checker (e.g., mypy) about the intended type of a value in a specific context.


Python typing.Annotated examples


Type hints in Python provide a syntax for declaring the types of variables, function parameters, and return values. They help in early detection of potential errors, make code more understandable, and facilitate better auto-completion and type inference in IDEs. With the advent of Annotated in the typing module, developers can go a step further by attaching custom metadata to their type hints.

With the growing complexity of Python applications, ensuring code clarity and reducing the chances for bugs have become paramount. The typing module, introduced in Python 3.5, brought static type checking to the language, enhancing predictability and readability. This guide explores how to use the typing module, focusing on the Annotated class, which allows for more detailed type definitions. Through annotated examples, we aim to simplify your understanding of Python’s type annotations and their practical applications.

Basic Usage of Annotated

Annotated can be used to add extra information about the type hint. This is especially useful for scenarios where simple type hints are not enough. The first example shows how to use Annotated to add a simple description to a type hint.
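A minimal sketch (the Celsius alias and the function are invented for illustration):

```python
from typing import Annotated

# The second argument is arbitrary metadata -- here, a plain description.
Celsius = Annotated[float, "temperature in degrees Celsius"]

def set_thermostat(temp: Celsius) -> str:
    return f"thermostat set to {temp} C"

set_thermostat(21.5)
```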

This shows how Annotated could be used to add a descriptive comment to a type hint, making the code more readable and providing documentation inline.

Using Annotated for Validation

In more advanced scenarios, Annotated can also incorporate validation or conversion logic. This example illustrates annotating a parameter to indicate it should be validated as a positive integer.
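A sketch of the idea; the PositiveInt alias and the manual check are illustrative, since Annotated itself performs no validation:

```python
from typing import Annotated, get_type_hints

PositiveInt = Annotated[int, "must be > 0"]

def take(n: PositiveInt) -> int:
    # The annotation enforces nothing by itself; a runtime check (or a
    # library such as pydantic) must read the metadata and act on it.
    if n <= 0:
        raise ValueError("n must be positive")
    return n

# The metadata survives and is retrievable at runtime:
get_type_hints(take, include_extras=True)["n"].__metadata__  # ('must be > 0',)
```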

Here, Annotated serves both as a documentation tool and a rudimentary form of input validation, though actual runtime checks depend on external libraries or custom implementations.

Combining Annotated with Other Type Hints

The real power of Annotated is unleashed when it is used in combination with other types from the typing module, like Generics. The following example demonstrates its application with a generic list.

This example highlights how Annotated can be used to provide comprehensive type hints that include both the type of the container and requirements for the elements it contains, enhancing understandability and maintainability.

Given its adaptable and informative nature, Annotated in Python’s typing module is an invaluable tool for enhancing code readability and robustness. By embedding additional information directly within type hints, developers can create more expressive and self-documented code bases. While the static checking tools’ support for Annotated is still evolving, its utility in documenting and designing clearer APIs is undeniable. As the Python ecosystem continues to mature, embracing advanced typing features like Annotated becomes not just advantageous but necessary for high-quality software development.



Christopher Bailey

In this lesson, you’ll learn about annotations in Python. Annotations were introduced in Python 3.0 originally without any specific purpose. They were simply a way to associate arbitrary expressions to function arguments and return values.

Years later, PEP 484 defined how to add type hints to your Python code, based on work that Jukka Lehtosalo had done on his Ph.D. project, Mypy. The main way to add type hints is using annotations. As type checking is becoming more and more common, this also means that annotations should mainly be reserved for type hints.

Function Annotations

For functions, you can annotate arguments and the return value. This is done as follows:

For arguments, the syntax is argument: annotation , while the return type is annotated using -> annotation . Note that the annotation must be a valid Python expression.

When running the code, you can also inspect the annotations. They are stored in a special .__annotations__ attribute on the function:
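For example (reconstructed to match the circumference function used later in this lesson):

```python
import math

def circumference(radius: float) -> float:
    return 2 * math.pi * radius

circumference.__annotations__
# {'radius': <class 'float'>, 'return': <class 'float'>}
```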

Sometimes you might be confused by how Mypy is interpreting your type hints. For those cases, there are special Mypy expressions: reveal_type() and reveal_locals() . You can add these to your code before running Mypy, and Mypy will dutifully report which types it has inferred. For example, save the following code to :

Next, run this code through Mypy:

Remember that the expressions reveal_type() and reveal_locals() are for troubleshooting in Mypy. If you were to run the script with the regular Python interpreter, it would crash with a NameError :

Variable Annotations

In the definition of circumference() in the previous section, you only annotated the arguments and the return value. You did not add any annotations inside the function body. More often than not, this is enough.

However, sometimes the type checker needs help in figuring out the types of variables as well. Variable annotations were defined in PEP 526 and introduced in Python 3.6. The syntax is the same as for function argument annotations. Annotations of variables are stored in the module level __annotations__ dictionary:

00:00 This video is about using annotations in your Python code. A little background on annotations. First introduced in Python 3.0, they were used as a way to associate arbitrary expressions to function arguments and return values. A few years passed and then PEP 484 defined how to add type hints to your Python code.

00:24 The main way to add the type hints is by using the annotations, like you saw before in the type hint videos. And although there may have been other uses for annotations in the past, as type checking is becoming more and more common, annotations should mainly be reserved for type hints.

00:42 Now this may seem as a bit of a review, annotation style is the same as you did for the type hints before. To do function annotations, you annotate the arguments and the return value.

00:56 The syntax for arguments is the argument’s name, colon ( : ), space, and then whatever the annotation is. And then the syntax for return types is a space, the arrow symbol ( -> ), and then the annotation. As a note, annotations must be a valid Python expression.

01:17 One kind of interesting note is that you can do inspections of your annotations. To inspect the annotations while running your code, you can use the special .__annotations__ attribute on the function.

01:29 So again, that would be a dot ( . ) with two underscores ( __ ) and then the word annotations and then two underscores ( __ ).

01:36 Let me have you work with annotations a little bit more with an example. Working here in the REPL, I’ll have you import math and then define a function named circumference() .

01:49 circumference() takes an argument of radius , which is a float , and this function returns a float .

01:59 Then the return is 2 * math.pi * radius . Great. So, here’s circumference() . It’s expecting a radius . When running the code, you can actually inspect these annotations that you’ve put in already.

02:16 They’re stored in a special dunder attribute, .__annotations__ , so you could type circumference. and then two underscores and you’ll see .__annotations__ as the very first choice here. And here you can see, there’s the two annotations that you put in for the types.

02:35 The radius being of a float and the return value being of a float . If you’re to use circumference() and give it a value of 1.23 , you can see there a float going in and a float coming out.

02:52 So, how is Mypy interpreting your hints? If you have questions about that, there are a couple special Mypy expressions that can provide a little more clarity. One is reveal_type() and the other is reveal_locals() . Let me give you an example.

03:09 Let’s create a new file.

03:13 This file is going to be named .

03:18 Let me adjust everything here. Okay. So, for , again import math . And here you’re going to use the Mypy expression of reveal_type() of math.pi .

03:33 I’ll have you set the radius = 1 .

03:45 And then you can use another statement that’s part of Mypy of reveal_locals() . So, please note that this is a troubleshooting step. If you were to try to run this Python script, , you’re going to end up with errors because those are not valid Python functions. But let’s look at what happens in the terminal when you run mypy . Make sure you’ve saved .

04:09 Oh, I need to exit my REPL. There you go. So here, mypy . Here at line 4, Revealed type is 'builtins.float' , from the math library.

04:24 And here on line 9, it’s asking to reveal the locals, and the local types are the circumference , which is a float , and the radius , which is an int in this case.

04:33 So Mypy has correctly inferred the types of all these built-in floats and ints. And again, if you were to try to run with these two statements in it, you are going to get some errors.

04:46 Here it’s going to say 'reveal_type' is not defined . So, you need to make sure you remove those statements before saving your code. A recent development is the ability to add annotations for variables, also.

05:00 This is defined in PEP 526 and introduced in Python 3.6. It uses the same syntax as for function arguments, with the name—in this case, pi —colon, then the type. If you’re going to assign a value, then you would have an equal sign ( = ) with the spaces before and after, and the value.

05:22 You might remember that .__annotations__ dictionary that you checked out before for a function. Well, annotations of variables are stored at the module level in an __annotations__ dictionary.

05:33 Let me have you try this out with some code. So, say you had a variable named pi . Here, you’d put a colon after it and say pi is a float , and you can set its value. Again, when you’re defining circumference() here,

05:49 you’d do it the same way you did before, circumference(radius: float) with a return of a float .

05:55 This time, you’re going to use pi instead of math.pi , times the radius . So here, the annotations for circumference() would be stored in its own dunder attribute, .__annotations__ , that you saw before.

06:08 But if you type __annotations__ by itself, you’ll see the module-level __annotations__ dictionary. And here you can see pi and its annotation of <class 'float'> .

06:29 You’re also allowed to annotate a variable, even without giving it a value. This will add the annotation to the __annotations__ dictionary also, while the variable still is going to remain undefined.

06:42 So make a variable named nothing , with a colon, and say that it’s of type str (string) as an annotation. So if you type nothing by itself, you’re going to get a NameError , 'nothing' is not defined . But if you were to look it up in __annotations__ , you’ll see here along with 'pi' of <class 'float'> , 'nothing' is of a <class 'str'> .

07:06 In the next video, I’ll take you back to looking at type comments.

Become a Member to join the conversation.


Python Type Annotations Full Guide


This notebook explores how and when to use Python’s type annotations to enhance your code. Note: Unless otherwise specified, any code written in this notebook is written in the Python 3.10 version of the Python programming language. Lines of code that are feature-specific to versions 3.9 and 3.10 will be annotated accordingly. IMPORTANT: Type annotations only need to be done on the first occurrence of a variable’s name in a scope.

Table of Contents

  • Introduction
  • How to Use Union-Typed Variables
  • Optional Variables
  • Nested Collections
  • Tuple Unpacking
  • Inheritance
  • NamedTuples
  • Dataclasses
  • Shape Typing
  • Data Type Typing
  • Other Advanced Types
  • Type Aliases
  • Type Variables
  • Structural Subtyping and Generic Collections (ABC)
  • User-Defined Generics

Python Type Annotations , also known as type signatures or “type hints”, are a way to indicate the intended data types associated with variable names. In addition to writing more readable, understandable, and maintainable code, type annotations can also be used by static type checkers like mypy to verify type consistency and to catch programming errors before they are found the traditional way, at runtime. It should be noted that type annotations create no new logic at runtime and thus are designed to generate nearly zero runtime overhead, so there’s no risk of decreased performance.

The typing module is the core Python module to handle advanced type annotations. Introduced in Python 3.5, this module adds extra functionality on top of the built-in type annotations to account for more specific type circumstances such as pre-python 3.9 structural subtyping, pre-python 3.10 union types, callables, generics, and others.

Basic Variable Type Annotations

General form (with or without assigned value):

Here are some examples of basic annotated types:

Dynamically (Union) Typed Variables

If the dynamic typing is needed, use Union (pre-Python 3.10, imported from typing ) or the pipe operator, | (Python 3.10+):

Since with Union types you have no way of knowing exactly what type a variable holds until runtime, you must use assert isinstance(...) or if isinstance(...) statements to narrow the type before operating on it, supplying the runtime type safety that static checkers cannot verify on their own. See examples below.

Oftentimes, values need the option to end up in a “null” or empty state. These are known as optional values, which use the type format Optional[T] where T is the possible non-None type. Alternatively, new to Python 3.10, the new T | None syntax may be used, as seen below.

For the same reason as union types, optional types should be only used after its exact type has been resolved at runtime. As a best practice, this means utilizing Python’s is operator instead of the == operator to check for identity instead of equality. See example below.
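A sketch (the find_index function is illustrative):

```python
from typing import Optional

def find_index(items: list[str], target: str) -> Optional[int]:
    # Optional[int] is the same as int | None (Python 3.10+).
    for i, item in enumerate(items):
        if item == target:
            return i
    return None

idx = find_index(["a", "b", "c"], "b")
if idx is not None:  # narrow with `is`, not `==`
    print(idx)
```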

See PEP 526 - Syntax for Variable Type Annotations for more info.


When making type annotations for a collection, it is important to also annotate the type of data that is stored within that collection. While collections should almost always be typed to their “deepest” known sub-type, there’s a point where type annotations lose their elegance and instead may transform into monstrous nested strings of death. In such cases, Type Aliases may be used to reduce clutter (more on that later).


In the case of JSON files and other cases where a function call returns values of unknown types, annotate as much of the result as is known (e.g., at the least, we know json.load will return a dict mapping str to objects):

Note: For pre-Python 3.9 code, built-in collection types’ annotations are imported from the typing module as their uppercase variants (i.e. List[int] )

  • See TypeAliases for more info on TypeAliases.
  • See NewTypes for more info on NewTypes.

Note: This is the only real way to do tuple unpacking right now (see PEP 526 ). Hopefully in a future release they devise a more elegant method.

See PEP 526 - Syntax for Variable Type Annotations for more info on variable type annotations.

Function Signatures and Callables

Functions’ arguments are all typed normally, and the return type is typed with an arrow ( -> ) followed by the return type followed by the colon terminating the function signature. Here are some examples.

Simple function:

Function with default values:

Slightly more complex function:

Functions with *args (variable-length positional arguments) or **kwargs (variable-length keyword arguments) are typed a little differently than usual in that the collection that stores them does not need annotation. Here’s a simple example from the mypy docs:
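The mypy docs use a function of roughly this shape (adapted here to return a value so the behaviour is visible):

```python
def stars(*args: int, **kwargs: float) -> float:
    # Annotate the *element* types: args is then a tuple[int, ...] and
    # kwargs a dict[str, float]; the containers themselves are implied.
    return sum(args) + sum(kwargs.values())

stars(1, 2, x=0.5)
```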

Functions designed to never return look like this:

See Function Signatures from mypy docs for more info.

Callables are special types of objects that can be called. The type annotation is written as Callable[[P], R] where P is a comma-separated list of types corresponding to the types of the input parameters of the callable, in order, and R is the return type. Here are some examples of callables in practice:
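A sketch (the apply function is illustrative):

```python
from typing import Callable

# A callable taking two ints and returning an int:
def apply(op: Callable[[int, int], int], a: int, b: int) -> int:
    return op(a, b)

apply(lambda x, y: x + y, 2, 3)  # 5
```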

Note: Callables, when used for Decorators , need a way to specify generic parameters, so use ParamSpec from the typing module in the event that’s necessary.

Class Type Annotations

Classes are typed as you would expect although there are some nuances that are handled more explicitly. For instance, class variables must be explicitly typed as ClassVar[T] where T is the type of the class variable.
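A sketch (the Counter class is illustrative):

```python
from typing import ClassVar

class Counter:
    # Shared across all instances; mypy rejects assigning to it via self.
    total: ClassVar[int] = 0
    name: str  # ordinary per-instance attribute

    def __init__(self, name: str) -> None:
        self.name = name
        Counter.total += 1
```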

Note: the method return-typed with the class name is a feature added in Python 3.10. Pre-Python 3.10 code can use this feature as well if the line from __future__ import annotations is written at the top of the file to enable it.

In cases where subtypes of a class are used, the subtype must be annotated with the supertype if the intention is to re-assign the variable between the subtypes of the supertype.

  • Python typing - ClassVars
  • Structural Subtyping and Generic Collections
  • User-defined Generics

Iterators and Generators

Iterators and generators are objects that implement __next__ or are functions that include the keyword yield in the body.

Iterators are classes in Python that implement __iter__ and __next__ . These are usually iterated over with for loops, but you can use them in other ways such as casting them directly to a sequence or even using the built-in next function to iterate manually. Iterator types are annotated as Iterator[T] where T is the type(s) of the items yielded.

Here is an example of an iterator that counts up in triplets until the max_val passed.

Generators are like iterators in that they continuously “return” a next value, but they differ in that they can return objects instead of just numerical values, and they can be written as functions. Generator return types are annotated as Generator[Y, S, R] where Y is the type of the yielded values, S is the type of the values expected to be sent to the generator (if applicable), and R is the type of the return value of the generator (if applicable). Not all generators have send or return values, so these may be replaced with None if not applicable.
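A sketch of both styles (countdown and accumulate are illustrative functions, not the originals from this guide):

```python
from typing import Generator, Iterator

def countdown(n: int) -> Iterator[int]:
    # Only yields ints, never sends or returns: Iterator[int] suffices.
    while n > 0:
        yield n
        n -= 1

def accumulate() -> Generator[int, int, str]:
    # Yields ints, accepts ints via send(), and returns a str.
    total = 0
    for _ in range(2):
        received = yield total
        total += received
    return f"final total: {total}"
```

Calling next() on accumulate() runs to the first yield; each send() both delivers a value into the generator and receives the next yielded one.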

In the case below, the generator function only returns integers, so we can type it as an iterator of integers for simplicity’s sake. However, if desired, it can also use the traditional method of generator typing.

This next example cannot be typed as an iterator because it returns objects, so it’s a generator.

Here we have a generator with yield, send, and return support. This example's parameter max_num starts at infinity, and can be passed as either an int or a float.

Note: the pipe ( | ) operator between types used above is not supported pre-Python3.10 (See Dynamically Typed (Union) Variables ).

Advanced Python Data Types

Enums don’t need to be typed in their construction since their type is inherently being defined. As such, they do need to be annotated when referenced (see example below).

NamedTuples are typed normally and constructed as a class inheriting from typing.NamedTuple .

Note: while there is an alternative (legacy) method of creating namedtuples using collections.namedtuple , it is not recommended to use this method and is recommended to use the typing.NamedTuple instead as the former requires you to enter the type name as an argument string and does not support type annotation.

Dataclasses are also typed as expected.

Numpy Arrays

Numpy arrays are typed with the PyPI package nptyping (ver. 2.0.0+). This is so that we get explicit shape typing, and an overall cleaner annotation system. Unfortunately, this means that type checkers like mypy can’t actually check the details of the typed numpy array (only whether the variable is or is not an ndarray), so at the moment, it’s almost purely a glorified comment.

Type annotations are formatted as NDArray[S, T] where S is the intended shape of the array ( see nptyping Shape expressions ), and T is the intended data type of the array ( see nptyping dtypes ). Additionally, the structure of an array can also be annotated ( see nptyping Structure expressions ).

Shapes are represented as strings containing a comma-separated list of integers corresponding to the shape of the ndarray. For example, a 2D array of shape (3, 4) would be represented as "3, 4" . In addition, shapes can also be more dynamically typed with wildcards (*) in place of single dimension numbers to represent any length for that dimension, and they can also be labeled and named. A full detailing of Shape expressions can be found here .

Data types are imported from nptyping explicitly. Some commonly used types that can be imported are Int , UInt8 , Float , Bool , String , and Any . A full list of available dtypes can be found here

Here are some examples of type annotated numpy arrays.

See Nptyping Documentation for more info on how to use nptyping.

Note: While numpy does have its own numpy.typing library, for a variety of reasons, we no longer use this library and thus do not recommend it.

Here are a list of other advanced types that are not covered in the above sections with links to their type annotation documentation:

  • Awaitables & Asynchronous Iterators/Generators
  • Final (Uninheritable) Attributes
  • metaclasses

Advanced Python Type Annotations

A Type Alias is simply a synonym for a type, and is used to make the code more readable. To create one, simply assign a type annotation to a variable name. Beginning in Python 3.10, this assigned variable can be typed with typing.TypeAlias . Here is an example.

New Types are a way to define types that wrap existing types in Python. What this means is that you can define a new type that is a subtype of an existing type with almost no class/inheritance overhead, and then use that new type in place of the existing type.

See Python typing - NewType for more info.

Type Variables are placeholders that can stand in for a range of possible types (with or without constraints on what those types may be) without being concrete types themselves. This is useful for defining generic types. Let's take a look at the class signature of typing.TypeVar.

Here’s an example of TypeVar in practice:

  • Python typing - TypeVars

Also known as “duck types”, generic collections are a way of defining a type of collection that fits a certain set of operations. These types are all the Abstract Base Classes (ABCs) of common Python collections. For example, a list is a generic Sequence , and a dict is a generic Mapping . Here are some examples of common generic collections:

More information and other abstract base classes can be found here .

  • Python typing - Generics
  • mypy - Protocols and Structural Subtyping

Oftentimes when you want to create your own collection, you want it to be adaptable as to what types it can take. In this case, we combine TypeVar and Generic to create a generic collection. Here are some simple examples:
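One such sketch (the Stack class is illustrative):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A stack whose element type is fixed at instantiation."""

    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
s.push(2)
s.pop()  # 2
```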

Note: the usage of T as a type variable is a convention and can be substituted with any name. Similarly, the convention for user-defined generic mappings or other paired values is K , V (usually used as Key, Value)

Type inference and type annotations ¶

Type inference ¶.

For most variables, if you do not explicitly specify its type, mypy will infer the correct type based on what is initially assigned to the variable.

Note that mypy will not use type inference in dynamically typed functions (those without a function type annotation) — every local variable type defaults to Any in such functions. For more details, see Dynamically typed code .

Explicit types for variables ¶

You can override the inferred type of a variable by using a variable type annotation:

Without the type annotation, the type of x would be just int . We use an annotation to give it a more general type Union[int, str] (this type means that the value can be either an int or a str ).

The best way to think about this is that the type annotation sets the type of the variable, not the type of the expression. For instance, mypy will complain about the following code:

To explicitly override the type of an expression you can use cast(<type>, <expression>) . See Casts for details.

Note that you can explicitly declare the type of a variable without giving it an initial value:

Explicit types for collections ¶

The type checker cannot always infer the type of a list or a dictionary. This often arises when creating an empty list or dictionary and assigning it to a new variable that doesn’t have an explicit variable type. Here is an example where mypy can’t infer the type without some help:

In these cases you can give the type explicitly using a type annotation:
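For example (a sketch; the variable names are illustrative):

```python
# Without annotations, mypy cannot infer element types for empty containers.
names: list[str] = []
votes: dict[str, int] = {}

names.append("alice")
votes["alice"] = 1
```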

Using type arguments (e.g. list[int] ) on builtin collections like list , dict , tuple , and set only works in Python 3.9 and later. For Python 3.8 and earlier, you must use List (e.g. List[int] ), Dict , and so on.

Compatibility of container types ¶

A quick note: container types can sometimes be unintuitive. We’ll discuss this more in Invariance vs covariance . For example, the following program generates a mypy error, because mypy treats list[int] as incompatible with list[object] :

The reason why the above assignment is disallowed is that allowing the assignment could result in non-int values stored in a list of int :

Other container types like dict and set behave similarly.

You can still run the above program; it prints x . This illustrates the fact that static types do not affect the runtime behavior of programs. You can run programs with type check failures, which is often very handy when performing a large refactoring. Thus you can always ‘work around’ the type system, and it doesn’t really limit what you can do in your program.

Context in type inference ¶

Type inference is bidirectional and takes context into account.

Mypy will take into account the type of the variable on the left-hand side of an assignment when inferring the type of the expression on the right-hand side. For example, the following will type check:

The value expression [1, 2] is type checked with the additional context that it is being assigned to a variable of type list[object] . This is used to infer the type of the expression as list[object] .

Declared argument types are also used for type context. In this program mypy knows that the empty list [] should have type list[int] based on the declared type of arg in foo :
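A sketch of that program (function and argument names are illustrative):

```python
def foo(arg: list[int]) -> None:
    print('Items:', arg)

foo([])  # the empty list is inferred as list[int] from foo's signature
```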

However, context only works within a single statement. Here mypy requires an annotation for the empty list, since the context would only be available in the following statement:
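A sketch of the failing pattern (names are illustrative):

```python
def foo(arg: list[int]) -> None:
    print('Items:', arg)

a = []    # mypy: Need type annotation for "a"
foo(a)    # the context from this call comes one statement too late
```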

Working around the issue is easy by adding a type annotation:
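For example:

```python
def foo(arg: list[int]) -> None:
    print('Items:', arg)

a: list[int] = []   # annotation supplies the element type up front
foo(a)
```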

Silencing type errors ¶

You might want to disable type checking on specific lines, or within specific files in your codebase. To do that, you can use a # type: ignore comment.

For example, say in its latest update, the web framework you use can now take an integer argument to run() , which starts it on localhost on that port. Like so:
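A self-contained sketch of the hypothetical framework call (the class and method names are illustrative, not a real library):

```python
class App:
    def run(self, port):
        # in the framework's latest update, an int port is accepted
        print(f"Serving on localhost:{port}")

app = App()
app.run(8000)
```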

However, the devs forgot to update their type annotations for run , so mypy still thinks run only expects str types. This would give you the following error:
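The message would resemble the following (the exact wording is illustrative):

```text
error: Argument 1 to "run" has incompatible type "int"; expected "str"
```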

If you cannot directly fix the web framework yourself, you can temporarily disable type checking on that line, by adding a # type: ignore :
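A self-contained sketch (the stale str-only signature stands in for the framework's outdated annotations):

```python
def run(host: str) -> None:   # stale stub-style signature; newer releases accept int too
    print(f"Starting on {host}")

run(8000)  # type: ignore
```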

This will suppress any mypy errors that would otherwise have been reported on that specific line.

You should probably add some more information to the # type: ignore comment, explaining why the ignore was added in the first place. This could be a link to an issue on the repository responsible for the type stubs, or it could be a short explanation of the bug. To do that, use this format:
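A sketch of the convention (the reason text is illustrative; an issue link would go in its place):

```python
def run(host: str) -> None:   # pretend this signature comes from stale stubs
    print(host)

# Format: the code, the ignore comment, then a short reason after it.
run(8000)  # type: ignore  # stubs not yet updated; run() accepts int ports
```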

Type ignore error codes ¶

By default, mypy displays an error code for each error:
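For example, the output looks something like this (file and message are illustrative):

```text
prog.py:1: error: "str" has no attribute "trim"  [attr-defined]
```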

It is possible to add a specific error-code in your ignore comment (e.g. # type: ignore[attr-defined] ) to clarify what’s being silenced. You can find more information about error codes here .

Other ways to silence errors ¶

You can get mypy to silence errors about a specific variable by dynamically typing it with Any . See Dynamically typed code for more information.

You can ignore all mypy errors in a file by adding a # mypy: ignore-errors at the top of the file:
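A sketch of a file silenced this way (the comment affects mypy only; the code still runs normally):

```python
# mypy: ignore-errors
# Everything in this file is skipped by mypy.

x: int = "not an int"   # no mypy error reported for this file
print(x)
```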

You can also specify per-module configuration options in the mypy configuration file. For example:
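To silence all errors reported against a particular package (the section name is illustrative), mypy.ini could contain:

```ini
[mypy-somelibrary.*]
ignore_errors = True
```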

Finally, adding a @typing.no_type_check decorator to a class, method or function causes mypy to avoid type checking that class, method or function and to treat it as not having any type annotations.

(The following section is from “7. Simple statements” in The Python Language Reference, Python 3.6.3 documentation.)

7. Simple statements ¶

A simple statement is comprised within a single logical line. Several simple statements may occur on a single line separated by semicolons. The syntax for simple statements is:

7.1. Expression statements ¶

Expression statements are used (mostly interactively) to compute and write a value, or (usually) to call a procedure (a function that returns no meaningful result; in Python, procedures return the value None ). Other uses of expression statements are allowed and occasionally useful. The syntax for an expression statement is:

An expression statement evaluates the expression list (which may be a single expression).

In interactive mode, if the value is not None , it is converted to a string using the built-in repr() function and the resulting string is written to standard output on a line by itself (except if the result is None , so that procedure calls do not cause any output.)

7.2. Assignment statements ¶

Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects:

(See section Primaries for the syntax definitions for attributeref , subscription , and slicing .)

An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.

Assignment is defined recursively depending on the form of the target (list). When a target is part of a mutable object (an attribute reference, subscription or slicing), the mutable object must ultimately perform the assignment and decide about its validity, and may raise an exception if the assignment is unacceptable. The rules observed by various types and the exceptions raised are given with the definition of the object types (see section The standard type hierarchy ).

Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows.

  • If the target list is empty: The object must also be an empty iterable.
  • If the target list is a single target in parentheses: The object is assigned to that target.
  • If the target list contains one target prefixed with an asterisk, called a “starred” target: The object must be an iterable with at least as many items as there are targets in the target list, minus one. The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty).
  • Else: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.
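A small example of starred and plain targets in one assignment:

```python
first, *middle, last = [1, 2, 3, 4, 5]
print(first)   # 1
print(middle)  # [2, 3, 4] (always a list, possibly empty)
print(last)    # 5
```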

Assignment of an object to a single target is recursively defined as follows.

If the target is an identifier (name):

  • If the name does not occur in a global or nonlocal statement in the current code block: the name is bound to the object in the current local namespace.
  • Otherwise: the name is bound to the object in the global namespace or the outer namespace determined by nonlocal , respectively.

The name is rebound if it was already bound. This may cause the reference count for the object previously bound to the name to reach zero, causing the object to be deallocated and its destructor (if it has one) to be called.

If the target is an attribute reference: The primary expression in the reference is evaluated. It should yield an object with assignable attributes; if this is not the case, TypeError is raised. That object is then asked to assign the assigned object to the given attribute; if it cannot perform the assignment, it raises an exception (usually but not necessarily AttributeError ).

Note: If the object is a class instance and the attribute reference occurs on both sides of the assignment operator, the right-hand side expression, a.x , can access either an instance attribute or (if no instance attribute exists) a class attribute. The left-hand side target a.x is always set as an instance attribute, creating it if necessary. Thus, the two occurrences of a.x do not necessarily refer to the same attribute: if the right-hand side expression refers to a class attribute, the left-hand side creates a new instance attribute as the target of the assignment:
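A small sketch of this behavior:

```python
class Cls:
    x = 3               # class attribute

inst = Cls()
inst.x = inst.x + 1     # RHS reads the class attribute (3)...
print(inst.x)           # 4: a new instance attribute now shadows it
print(Cls.x)            # 3: the class attribute is unchanged
```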

This description does not necessarily apply to descriptor attributes, such as properties created with property() .

If the target is a subscription: The primary expression in the reference is evaluated. It should yield either a mutable sequence object (such as a list) or a mapping object (such as a dictionary). Next, the subscript expression is evaluated.

If the primary is a mutable sequence object (such as a list), the subscript must yield an integer. If it is negative, the sequence’s length is added to it. The resulting value must be a nonnegative integer less than the sequence’s length, and the sequence is asked to assign the assigned object to its item with that index. If the index is out of range, IndexError is raised (assignment to a subscripted sequence cannot add new items to a list).

If the primary is a mapping object (such as a dictionary), the subscript must have a type compatible with the mapping’s key type, and the mapping is then asked to create a key/datum pair which maps the subscript to the assigned object. This can either replace an existing key/value pair with the same key value, or insert a new key/value pair (if no key with the same value existed).

For user-defined objects, the __setitem__() method is called with appropriate arguments.

If the target is a slicing: The primary expression in the reference is evaluated. It should yield a mutable sequence object (such as a list). The assigned object should be a sequence object of the same type. Next, the lower and upper bound expressions are evaluated, insofar they are present; defaults are zero and the sequence’s length. The bounds should evaluate to integers. If either bound is negative, the sequence’s length is added to it. The resulting bounds are clipped to lie between zero and the sequence’s length, inclusive. Finally, the sequence object is asked to replace the slice with the items of the assigned sequence. The length of the slice may be different from the length of the assigned sequence, thus changing the length of the target sequence, if the target sequence allows it.
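A small sketch of slice assignment changing the target's length:

```python
nums = [0, 1, 2, 3, 4]
nums[1:3] = [10, 20, 30]   # two items replaced by three; the list grows
print(nums)                # [0, 10, 20, 30, 3, 4]
```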

CPython implementation detail: In the current implementation, the syntax for targets is taken to be the same as for expressions, and invalid syntax is rejected during the code generation phase, causing less detailed error messages.

Although the definition of assignment implies that overlaps between the left-hand side and the right-hand side are ‘simultaneous’ (for example a, b = b, a swaps two variables), overlaps within the collection of assigned-to variables occur left-to-right, sometimes resulting in confusion. For instance, the following program prints [0, 2] :
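The program in question reads:

```python
x = [0, 1]
i = 0
i, x[i] = 1, 2   # i is updated to 1 before x[i] is evaluated
print(x)         # [0, 2]
```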

7.2.1. Augmented assignment statements ¶

Augmented assignment is the combination, in a single statement, of a binary operation and an assignment statement:

(See section Primaries for the syntax definitions of the last three symbols.)

An augmented assignment evaluates the target (which, unlike normal assignment statements, cannot be an unpacking) and the expression list, performs the binary operation specific to the type of assignment on the two operands, and assigns the result to the original target. The target is only evaluated once.

An augmented assignment expression like x += 1 can be rewritten as x = x + 1 to achieve a similar, but not exactly equal effect. In the augmented version, x is only evaluated once. Also, when possible, the actual operation is performed in-place , meaning that rather than creating a new object and assigning that to the target, the old object is modified instead.
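A short demonstration of the in-place distinction:

```python
a = [1, 2]
b = a
b += [3]        # list.__iadd__ mutates the list in place
print(a)        # [1, 2, 3]: a and b are still the same object

c = (1, 2)
d = c
d += (3,)       # tuples are immutable, so a new object is created
print(c)        # (1, 2): c is unchanged
```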

Unlike normal assignments, augmented assignments evaluate the left-hand side before evaluating the right-hand side. For example, a[i] += f(x) first looks-up a[i] , then it evaluates f(x) and performs the addition, and lastly, it writes the result back to a[i] .

With the exception of assigning to tuples and multiple targets in a single statement, the assignment done by augmented assignment statements is handled the same way as normal assignments. Similarly, with the exception of the possible in-place behavior, the binary operation performed by augmented assignment is the same as the normal binary operations.

For targets which are attribute references, the same caveat about class and instance attributes applies as for regular assignments.

7.2.2. Annotated assignment statements ¶

Annotation assignment is the combination, in a single statement, of a variable or attribute annotation and an optional assignment statement:

The difference from normal Assignment statements is that only a single target and only a single right-hand side value are allowed.

For simple names as assignment targets, if in class or module scope, the annotations are evaluated and stored in a special class or module attribute __annotations__ that is a dictionary mapping from variable names (mangled if private) to evaluated annotations. This attribute is writable and is automatically created at the start of class or module body execution, if annotations are found statically.
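For example, in a class body:

```python
class Point:
    x: int        # annotation only; no class attribute 'x' is created
    y: int = 0    # annotation plus an ordinary assignment

print(Point.__annotations__)   # {'x': <class 'int'>, 'y': <class 'int'>}
```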

For expressions as assignment targets, the annotations are evaluated if in class or module scope, but not stored.

If a name is annotated in a function scope, then this name is local for that scope. Annotations are never evaluated and stored in function scopes.
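A small illustration: annotations in the function body leave no runtime trace, unlike annotations in the signature:

```python
def f() -> None:
    n: int = 1   # n is local; this annotation is neither evaluated nor stored

f()
print(f.__annotations__)   # {'return': None}: only signature annotations survive
```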

If the right hand side is present, an annotated assignment performs the actual assignment before evaluating annotations (where applicable). If the right hand side is not present for an expression target, then the interpreter evaluates the target except for the last __setitem__() or __setattr__() call.

PEP 526 - Variable and attribute annotation syntax PEP 484 - Type hints

7.3. The assert statement ¶

Assert statements are a convenient way to insert debugging assertions into a program:

The simple form, assert expression , is equivalent to
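The documented expansion, shown here with a concrete expression so it runs:

```python
expression = (2 + 2 == 4)

# "assert expression" is equivalent to:
if __debug__:
    if not expression:
        raise AssertionError
```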

The extended form, assert expression1, expression2 , is equivalent to
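The documented expansion of the extended form, again with concrete values so it runs (the message text is illustrative):

```python
expression1, expression2 = False, "helpful message"

# "assert expression1, expression2" is equivalent to:
try:
    if __debug__:
        if not expression1:
            raise AssertionError(expression2)
except AssertionError as exc:
    msg = str(exc)

print(msg)   # helpful message
```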

These equivalences assume that __debug__ and AssertionError refer to the built-in variables with those names. In the current implementation, the built-in variable __debug__ is True under normal circumstances, False when optimization is requested (command line option -O). The current code generator emits no code for an assert statement when optimization is requested at compile time. Note that it is unnecessary to include the source code for the expression that failed in the error message; it will be displayed as part of the stack trace.

Assignments to __debug__ are illegal. The value for the built-in variable is determined when the interpreter starts.

7.4. The pass statement ¶

pass is a null operation — when it is executed, nothing happens. It is useful as a placeholder when a statement is required syntactically, but no code needs to be executed, for example:

7.5. The del statement ¶

Deletion is recursively defined very similarly to the way assignment is defined. Rather than spelling it out in full detail, here are some hints.

Deletion of a target list recursively deletes each target, from left to right.

Deletion of a name removes the binding of that name from the local or global namespace, depending on whether the name occurs in a global statement in the same code block. If the name is unbound, a NameError exception will be raised.

Deletion of attribute references, subscriptions and slicings is passed to the primary object involved; deletion of a slicing is in general equivalent to assignment of an empty slice of the right type (but even this is determined by the sliced object).

Changed in version 3.2: Previously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block.

7.6. The return statement ¶

return may only occur syntactically nested in a function definition, not within a nested class definition.

If an expression list is present, it is evaluated, else None is substituted.

return leaves the current function call with the expression list (or None ) as return value.

When return passes control out of a try statement with a finally clause, that finally clause is executed before really leaving the function.

In a generator function, the return statement indicates that the generator is done and will cause StopIteration to be raised. The returned value (if any) is used as an argument to construct StopIteration and becomes the StopIteration.value attribute.

In an asynchronous generator function, an empty return statement indicates that the asynchronous generator is done and will cause StopAsyncIteration to be raised. A non-empty return statement is a syntax error in an asynchronous generator function.

7.7. The yield statement ¶

A yield statement is semantically equivalent to a yield expression . The yield statement can be used to omit the parentheses that would otherwise be required in the equivalent yield expression statement. For example, the yield statements

are equivalent to the yield expression statements
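Both forms side by side, in a sketch of a generator:

```python
def g():
    yield 1        # yield statement
    (yield 2)      # equivalent yield expression, used as an expression statement

print(list(g()))   # [1, 2]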

Yield expressions and statements are only used when defining a generator function, and are only used in the body of the generator function. Using yield in a function definition is sufficient to cause that definition to create a generator function instead of a normal function.

For full details of yield semantics, refer to the Yield expressions section.

7.8. The raise statement ¶

If no expressions are present, raise re-raises the last exception that was active in the current scope. If no exception is active in the current scope, a RuntimeError exception is raised indicating that this is an error.

Otherwise, raise evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException . If it is a class, the exception instance will be obtained when needed by instantiating the class with no arguments.

The type of the exception is the exception instance’s class, the value is the instance itself.

A traceback object is normally created automatically when an exception is raised and attached to it as the __traceback__ attribute, which is writable. You can create an exception and set your own traceback in one step using the with_traceback() exception method (which returns the same exception instance, with its traceback set to its argument), like so:
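A runnable sketch (the exception types and message are illustrative):

```python
import sys

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]   # grab a traceback object to attach

try:
    raise RuntimeError("wrapped").with_traceback(tb)
except RuntimeError as exc:
    caught = exc

print(caught)                             # wrapped
print(caught.__traceback__ is not None)   # True
```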

The from clause is used for exception chaining: if given, the second expression must be another exception class or instance, which will then be attached to the raised exception as the __cause__ attribute (which is writable). If the raised exception is not handled, both exceptions will be printed:
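For example (exception types and messages are illustrative):

```python
try:
    try:
        raise ValueError("original")
    except ValueError as exc:
        raise RuntimeError("follow-up") from exc   # sets __cause__
except RuntimeError as err:
    chained = err

print(repr(chained.__cause__))   # ValueError('original')
```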

A similar mechanism works implicitly if an exception is raised inside an exception handler or a finally clause: the previous exception is then attached as the new exception’s __context__ attribute:
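A sketch of the implicit form:

```python
try:
    try:
        raise KeyError("first")
    except KeyError:
        raise ValueError("second")   # raised while handling KeyError
except ValueError as err:
    e = err

print(repr(e.__context__))   # KeyError('first'), attached automatically
```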

Exception chaining can be explicitly suppressed by specifying None in the from clause:
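For example:

```python
try:
    try:
        raise KeyError("first")
    except KeyError:
        raise ValueError("second") from None   # suppress the chained display
except ValueError as err:
    e = err

print(e.__cause__)            # None
print(e.__suppress_context__) # True: the context is kept but not displayed
```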

Additional information on exceptions can be found in section Exceptions , and information about handling exceptions is in section The try statement .

Changed in version 3.3: None is now permitted as Y in raise X from Y .

New in version 3.3: The __suppress_context__ attribute to suppress automatic display of the exception context.

7.9. The break statement ¶

break may only occur syntactically nested in a for or while loop, but not nested in a function or class definition within that loop.

It terminates the nearest enclosing loop, skipping the optional else clause if the loop has one.

If a for loop is terminated by break , the loop control target keeps its current value.

When break passes control out of a try statement with a finally clause, that finally clause is executed before really leaving the loop.

7.10. The continue statement ¶

continue may only occur syntactically nested in a for or while loop, but not nested in a function or class definition or finally clause within that loop. It continues with the next cycle of the nearest enclosing loop.

When continue passes control out of a try statement with a finally clause, that finally clause is executed before really starting the next loop cycle.

7.11. The import statement ¶

The basic import statement (no from clause) is executed in two steps:

  • find a module, loading and initializing it if necessary
  • define a name or names in the local namespace for the scope where the import statement occurs.

When the statement contains multiple clauses (separated by commas) the two steps are carried out separately for each clause, just as though the clauses had been separated out into individual import statements.

The details of the first step, finding and loading modules are described in greater detail in the section on the import system , which also describes the various types of packages and modules that can be imported, as well as all the hooks that can be used to customize the import system. Note that failures in this step may indicate either that the module could not be located, or that an error occurred while initializing the module, which includes execution of the module’s code.

If the requested module is retrieved successfully, it will be made available in the local namespace in one of three ways:

  • If the module name is followed by as , then the name following as is bound directly to the imported module.
  • If no other name is specified, and the module being imported is a top level module, the module’s name is bound in the local namespace as a reference to the imported module
  • If the module being imported is not a top level module, then the name of the top level package that contains the module is bound in the local namespace as a reference to the top level package. The imported module must be accessed using its full qualified name rather than directly

The from form uses a slightly more complex process:

  • find the module specified in the from clause, loading and initializing it if necessary;
  • check if the imported module has an attribute by that name
  • if not, attempt to import a submodule with that name and then check the imported module again for that attribute
  • if the attribute is not found, ImportError is raised.
  • otherwise, a reference to that value is stored in the local namespace, using the name in the as clause if it is present, otherwise using the attribute name

If the list of identifiers is replaced by a star ( '*' ), all public names defined in the module are bound in the local namespace for the scope where the import statement occurs.

The public names defined by a module are determined by checking the module’s namespace for a variable named __all__ ; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character ( '_' ). __all__ should contain the entire public API. It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module).

The wild card form of import — from module import * — is only allowed at the module level. Attempting to use it in class or function definitions will raise a SyntaxError .

When specifying what module to import you do not have to specify the absolute name of the module. When a module or package is contained within another package it is possible to make a relative import within the same top package without having to mention the package name. By using leading dots in the specified module or package after from you can specify how high to traverse up the current package hierarchy without specifying exact names. One leading dot means the current package where the module making the import exists. Two dots means up one package level. Three dots is up two levels, etc. So if you execute from . import mod from a module in the pkg package then you will end up importing pkg.mod . If you execute from ..subpkg2 import mod from within pkg.subpkg1 you will import pkg.subpkg2.mod . The specification for relative imports is contained within PEP 328 .

importlib.import_module() is provided to support applications that determine dynamically the modules to be loaded.

7.11.1. Future statements ¶

A future statement is a directive to the compiler that a particular module should be compiled using syntax or semantics that will be available in a specified future release of Python where the feature becomes standard.

The future statement is intended to ease migration to future versions of Python that introduce incompatible changes to the language. It allows use of the new features on a per-module basis before the release in which the feature becomes standard.

A future statement must appear near the top of the module. The only lines that can appear before a future statement are:

  • the module docstring (if any),
  • blank lines, and
  • other future statements.

The features recognized by Python 3.0 are absolute_import , division , generators , unicode_literals , print_function , nested_scopes and with_statement . They are all redundant because they are always enabled, and only kept for backwards compatibility.

A future statement is recognized and treated specially at compile time: Changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. Such decisions cannot be pushed off until runtime.

For any given release, the compiler knows which feature names have been defined, and raises a compile-time error if a future statement contains a feature not known to it.

The direct runtime semantics are the same as for any import statement: there is a standard module __future__ , described later, and it will be imported in the usual way at the time the future statement is executed.

The interesting runtime semantics depend on the specific feature enabled by the future statement.

Note that there is nothing special about the statement:

That is not a future statement; it’s an ordinary import statement with no special semantics or syntax restrictions.

Code compiled by calls to the built-in functions exec() and compile() that occur in a module M containing a future statement will, by default, use the new syntax or semantics associated with the future statement. This can be controlled by optional arguments to compile() — see the documentation of that function for details.

A future statement typed at an interactive interpreter prompt will take effect for the rest of the interpreter session. If an interpreter is started with the -i option, is passed a script name to execute, and the script includes a future statement, it will be in effect in the interactive session started after the script is executed.

7.12. The global statement ¶

The global statement is a declaration which holds for the entire current code block. It means that the listed identifiers are to be interpreted as globals. It would be impossible to assign to a global variable without global , although free variables may refer to globals without being declared global.

Names listed in a global statement must not be used in the same code block textually preceding that global statement.

Names listed in a global statement must not be defined as formal parameters or in a for loop control target, class definition, function definition, import statement, or variable annotation.

CPython implementation detail: The current implementation does not enforce some of these restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program.

Programmer’s note: global is a directive to the parser. It applies only to code parsed at the same time as the global statement. In particular, a global statement contained in a string or code object supplied to the built-in exec() function does not affect the code block containing the function call, and code contained in such a string is unaffected by global statements in the code containing the function call. The same applies to the eval() and compile() functions.

7.13. The nonlocal statement ¶

The nonlocal statement causes the listed identifiers to refer to previously bound variables in the nearest enclosing scope excluding globals. This is important because the default behavior for binding is to search the local namespace first. The statement allows encapsulated code to rebind variables outside of the local scope besides the global (module) scope.

Names listed in a nonlocal statement, unlike those listed in a global statement, must refer to pre-existing bindings in an enclosing scope (the scope in which a new binding should be created cannot be determined unambiguously).

Names listed in a nonlocal statement must not collide with pre-existing bindings in the local scope.

Type Hinting and Annotations in Python


Python is a dynamically typed language. We don’t have to explicitly declare data types for variables or functions: the Python interpreter assigns a type to each variable at runtime, based on the variable’s value at that moment. By contrast, statically typed languages like Java, C, or C++ require the variable type to be declared up front, so types are known at compile time.

Python 3.5 introduced type hints, described in PEP 484 and PEP 483 . This addition helps structure our code and makes it feel closer to a statically typed language, which helps avoid bugs, at the cost of making the code more verbose.

However, the Python runtime does not enforce function and variable type annotations. They can be used by third-party tools such as type checkers, IDEs, linters, etc.

Also read: The Magic Methods in Python

Type Checking, Type Hints, and Code Compilation

At first, type hinting lived in external third-party tools, most notably the static type checker mypy. Many of mypy’s ideas were later brought into Python itself and integrated directly into the language.

The important thing about type hints is that they do not modify how Python itself runs. The hints are compiled along with the rest of the code, but they do not affect how Python executes your code.

Let’s go through an example and get an overview by assigning type hints to a function.


In the function declared above, we assign built-in data types to the arguments. It’s an ordinary function, but the syntax is slightly different: each argument is followed by a colon and a data type (num1: int, num2: int) .

This function takes two arguments, num1 and num2 ; that’s all Python sees when it runs the code. Python would be just as happy without the type hints specifying that num1 and num2 should be integers .

So according to it, we should be passing two integer values to our code and that would work fine. However, what if we try to pass an integer and a string ?
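A self-contained sketch (the function name is an assumption carried over from the earlier example):

```python
def multiply(num1: int, num2: int) -> int:
    return num1 * num2

# Passing a str where an int is hinted: Python runs it anyway.
print(multiply(3, "hi"))   # hihihi
```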

Type hinting tells us to pass in int values, yet we are passing a str . When we try to run the code, it runs fine with no issues: the Python interpreter has no problem compiling and executing it, as long as the type hints name valid types like int , str , or dict ; the hints themselves are not enforced.

Why use Type Hinting at all?

In the example above, we saw that the code runs fine even if we pass a string value to it. Python has no problem multiplying an int by a str . However, there are some really good reasons to use type hints even though Python ignores them.

  • Type hints help IDEs display context-sensitive information, such as not only a function’s parameters but also their expected types.
  • Type hints are often used for code documentation. Many automated documentation generators pick them up, which is especially useful when writing libraries with lots of functions and classes.
  • Even though Python itself ignores the hints, they let us write code in a more declarative style and enable runtime validation through external libraries.

Using a Type Checker

There are several type checkers for Python. One of them is mypy.

Let’s use the same code that we ran before with an int and a str. This time we will run it through the static type checker mypy and see what it has to say about our code.

  • Installing mypy
  • Running the type-hinted code through the type checker

In the terminal run the file with the type checker prefixed:
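The exact commands are not shown; assuming the code lives in a file named main.py (a hypothetical filename), a typical invocation is:

```shell
pip install mypy   # install the type checker
mypy main.py       # check the file instead of executing it
```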

When we run the file through the type checker, there is now a problem with our code: both arguments are expected to be of type int, but we are passing a string for one of them. The type checker tracked the bug and reports it in the output, helping us catch the problem before runtime.

More Examples of Type Hinting in Python

In the above example, we used the int and str types in hints. Other data types can be used in the same way. We can also declare a type hint for a function's return type.

Let’s go through the code and see some examples.
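The example code is missing here; a sketch with a default parameter value and an annotated str return type (a hypothetical greeting function) could be:

```python
def greeting(name: str, excited: bool = False) -> str:
    # "-> str" declares the return type; excited has a default value.
    suffix = "!" if excited else "."
    return f"Hello, {name}{suffix}"

print(greeting("Ada"))                # Hello, Ada.
print(greeting("Ada", excited=True))  # Hello, Ada!
```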

Here we are type hinting at different data types for our arguments. Note that we can also assign default values to our parameters if there is no argument provided.

In the above code, the return type has also been declared. When we check it with a type checker like mypy, it reports no problems: the function returns a string, which matches the annotated return type.

This code declares a return type of None. When we check it with mypy, an error is reported, since the annotation promises a return type of None while the code actually returns a string.
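A sketch of the described mismatch (a hypothetical log_message function):

```python
def log_message(message: str) -> None:
    # Annotated to return None, but actually returns a str.
    # Python runs this fine; mypy flags the inconsistent return type.
    return message
```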

Just as we provided type hints for functions in the examples above, variables can carry the same information. These hints are usually referred to as Variable Annotations, and they help make code more declarative and self-documenting.
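For instance, variable annotations use the same colon syntax:

```python
count: int = 0
name: str = "Python"
ratio: float = 0.5
# As with function hints, these are not enforced at runtime.
```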

The typing Module

Often we need to pass more advanced or complicated types as arguments to a function. Python's standard-library typing module provides the names needed to annotate such values, making the code even better documented. We import the required types from typing and use them in our annotations; they include container types such as List, Dict, Set, and Tuple.

Let’s look at the code and get an overview, with comments as explanations.
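The example is not shown; a sketch using several typing containers (a hypothetical summarize function) might be:

```python
from typing import Dict, List, Set, Tuple

def summarize(scores: Dict[str, List[int]]) -> Tuple[str, float]:
    # Return the name with the highest average score, plus that average.
    best = max(scores, key=lambda n: sum(scores[n]) / len(scores[n]))
    avg = sum(scores[best]) / len(scores[best])
    return best, avg

unique_tags: Set[str] = {"python", "typing"}
print(summarize({"a": [1, 2], "b": [3, 5]}))  # ('b', 4.0)
```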

There are numerous other ways to use type hints in Python. Using them does not affect the performance of the code, nor does it add any runtime functionality. However, type hints make our code more robust and provide documentation for people who read it later.

It certainly helps to avoid introducing difficult-to-find bugs. Writing typed code is becoming increasingly popular, and Python follows the trend by providing easy-to-use tools for it. For more information, please refer to the official documentation.

Python Type Hints Documentation

Proposal: Annotate types in multiple assignment

In the latest version of Python (3.12.3), type annotation for single variable assignment is available:

However, in some scenarios like when we want to annotate the tuple of variables in return, the syntax of type annotation is invalid:
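The snippets are missing; the contrast is roughly the following (the invalid spelling is fed to compile() so the snippet itself still runs):

```python
x: int = 1  # annotating a single assignment target: valid since PEP 526

# Annotating both targets of an unpacking assignment is a SyntaxError,
# demonstrated here by compiling the source text at runtime:
try:
    compile("a: int, b: str = fun()", "<example>", "exec")
    syntax_ok = True
except SyntaxError:
    syntax_ok = False
print(syntax_ok)  # False
```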

In this case, I propose two new syntaxes to support this feature:

  • Annotate directly after each variable:
  • Annotate the tuple of return:
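The proposed snippets are not shown; a hypothetical reconstruction of the two spellings (both invalid in current Python) might be:

```
a: int, b: str = fun()            # annotate each variable directly
(a, b): tuple[int, str] = fun()   # annotate the returned tuple
```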

As far as I know, other programming languages such as Julia and Rust support this feature in their own ways:

I’m pretty sure this has already been suggested. Did you go through the mailing list and search for existing topics here? Without doing that, there’s nothing to discuss (besides linking to them).

Secondly, try to not edit posts, but post a followup. Some people read these topics in mailing list mode and don’t see your edits.



For reference, PEP 526 has a note about this in the “Rejected/Postponed Proposals” section:

Allow type annotations for tuple unpacking: This causes ambiguity: it’s not clear what this statement means:

    x, y: T

Are x and y both of type T, or do we expect T to be a tuple type of two items that are distributed over x and y, or perhaps x has type Any and y has type T? (The latter is what this would mean if this occurred in a function signature.) Rather than leave the (human) reader guessing, we forbid this, at least for now.

Personally I think the meaning of this is rather clear, especially when combined with an assignment, and I would like to see this.

Thank you for your valuable response, both regarding the discussion convention for Python development and the history of this feature.

I have found a related topic here:[email protected]/thread/5NZNHBDWK6EP67HSK4VNDTZNIVUOXMRS/

Here’s the part I find unconvincing:

Under what circumstances will fun() be hard to annotate, but a, b will be easy?

It’s better to annotate function arguments and return values, not variables. The preferred scenario is that fun() has a well-defined return type, so the types of a, b can be inferred (and there is no reason to annotate them). This idea presupposes there are cases where that’s difficult, but I’d like to see some examples where that applies.

Does this not work?
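The snippet in this reply is not shown; presumably it annotates the function's return type so that the unpacked variables are inferred, along the lines of:

```python
from __future__ import annotations  # unneeded on Python 3.9+ for tuple[...]

def fun() -> tuple[int, str]:
    return 1, "x"

a, b = fun()  # type checkers infer a: int and b: str from the signature
print(a, b)
```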

You don’t need from __future__ as of… 3.9, I think?


3.10 if you want A | B too: PEP 604 , although I’m not sure which version the OP is using and 3.9 hasn’t reached end of life yet.

We can’t always infer it, so annotating a variable is sometimes necessary or useful. But if the function’s return type is annotated then a, b = fun() allows type-checkers to infer the types of a and b . This stuff isn’t built in to Python and is evolving as typing changes, so what was inferred in the past might be better in the future.

So my question above was: are there any scenarios where annotating the function is difficult, but annotating the results would be easy? That seems like the motivating use case.

Would it be a solution to put it on the line above? And not allow assigning on the same line? Then it better mirrors function definitions.

It’s a long thread, so it might have been suggested already.

Actually, in cases where the called function differs from the user-defined function, we do need to declare the types when unpacking the assignment.

Here is a simplified MWE:
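The MWE itself is missing; a torch-free sketch of the pattern (hypothetical Module and Net classes mimicking how torch.nn.Module's __call__ delegates to forward) might be:

```python
from typing import Any

class Module:
    # Stand-in for torch.nn.Module: __call__ wraps forward with untyped
    # pre/post-processing, so its declared return type is Any.
    def __call__(self, *args: Any, **kwargs: Any) -> Any:
        return self.forward(*args, **kwargs)

class Net(Module):
    def forward(self, x: int) -> "tuple[int, int]":
        return x, x * 2

net = Net()
# Checkers see Any here, so the element types of the result are lost,
# which is why one might want to annotate the unpacked targets:
a, b = net(3)
```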

NOTE: In PyTorch, the __call__ function is internally wrapped from forward .

Can’t you write this? That’s shorter than writing the type annotations.

This is the kind of example I was asking for, thanks. Is the problem that typing tools don’t trace the return type through the call because the wrapping isn’t in python?

I still suggest to read the thread you linked, like I’m doing right now.

The __call__ function is not the same as forward . There might be many other preprocessing and postprocessing steps involved inside it.

Yeah, quite a bit of pre-processing in fact… unless you don’t have hooks by the looks of it:


TorchScript Language Reference ¶

This reference manual describes the syntax and core semantics of the TorchScript language. TorchScript is a statically typed subset of the Python language. This document explains the supported features of Python in TorchScript and also how the language diverges from regular Python. Any features of Python that are not mentioned in this reference manual are not part of TorchScript. TorchScript focuses specifically on the features of Python that are needed to represent neural network models in PyTorch.

  • Terminology
  • Type System
  • Type Annotation
  • Simple Statements
  • Compound Statements
  • Python Values
  • torch.* APIs

Terminology ¶

This document uses the following terminology:

::=
  Indicates that the given symbol is defined as.
"keyword"
  Represents real keywords and delimiters that are part of the syntax.
A | B
  Indicates either A or B.
( )
  Indicates grouping.
[ ]
  Indicates optional.
A+
  Indicates a regular expression where term A is repeated at least once.
A*
  Indicates a regular expression where term A is repeated zero or more times.

Type System ¶

TorchScript is a statically typed subset of Python. The largest difference between TorchScript and the full Python language is that TorchScript only supports a small set of types that are needed to express neural net models.

TorchScript Types ¶

The TorchScript type system consists of TSType and TSModuleType as defined below.

TSType represents the majority of TorchScript types that are composable and that can be used in TorchScript type annotations. TSType refers to any of the following:

Meta Types, e.g., Any

Primitive Types, e.g., int , float , and str

Structural Types, e.g., Optional[int] or List[MyClass]

Nominal Types (Python classes), e.g., MyClass (user-defined), torch.tensor (built-in)

TSModuleType represents torch.nn.Module and its subclasses. It is treated differently from TSType because its type schema is inferred partly from the object instance and partly from the class definition. As such, instances of a TSModuleType may not follow the same static type schema. TSModuleType cannot be used as a TorchScript type annotation or be composed with TSType for type safety considerations.

Meta Types ¶

Meta types are so abstract that they are more like type constraints than concrete types. Currently TorchScript defines one meta-type, Any , that represents any TorchScript type.

The Any type represents any TorchScript type. Any specifies no type constraints, thus there is no type-checking on Any. As such, it can be bound to any Python or TorchScript data type (e.g., int, a TorchScript tuple, or an arbitrary Python class that is not scripted).

Any is the Python class name from the typing module. Therefore, to use the Any type, you must import it from typing (e.g., from typing import Any ).

Since Any can represent any TorchScript type, the set of operators that are allowed to operate on values of this type is limited.

Operators Supported for Any Type ¶

Assignment to data of Any type.

Binding to parameter or return of Any type.

x is , x is not where x is of Any type.

isinstance(x, Type) where x is of Any type.

Data of Any type is printable.

Data of List[Any] type may be sortable if the data is a list of values of the same type T and that T supports comparison operators.

Compared to Python

Any is the least constrained type in the TorchScript type system. In that sense, it is quite similar to the object class in Python. However, Any only supports a subset of the operators and methods that object supports.

Design Notes ¶

When we script a PyTorch module, we may encounter data that is not involved in the execution of the script. Nevertheless, it has to be described by a type schema. It is not only cumbersome to describe static types for unused data (in the context of the script), but also may lead to unnecessary scripting failures. Any is introduced to describe the type of the data where precise static types are not necessary for compilation.

This example illustrates how Any can be used to allow the second element of the tuple parameter to be of any type. This is possible because x[1] is not involved in any computation that requires knowing its precise type.

The example above produces the following output:

The second element of the tuple is of Any type, thus can bind to multiple types. For example, (1, 2.0) binds a float type to Any as in Tuple[int, Any] , whereas (1, (100, 200)) binds a tuple to Any in the second invocation.

This example illustrates how we can use isinstance to dynamically check the type of the data that is annotated as Any type:
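The example code is missing; a torch-free sketch of the idea (the scripting step with torch.jit.script is omitted):

```python
from typing import Any

def describe(x: Any) -> str:
    # isinstance narrows the Any-typed value at runtime.
    if isinstance(x, int):
        return "int"
    if isinstance(x, tuple):
        return "tuple"
    return "other"

print(describe(1), describe((1, 2)), describe("s"))  # int tuple other
```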

Primitive Types ¶

Primitive TorchScript types are types that represent a single type of value and go with a single pre-defined type name.

Structural Types ¶

Structural types are types that are structurally defined without a user-defined name (unlike nominal types), such as Future[int] . Structural types are composable with any TSType .

Tuple , List , Optional , Union , Future , Dict represent Python type class names that are defined in the module typing . To use these type names, you must import them from typing (e.g., from typing import Tuple ).

namedtuple represents the Python class collections.namedtuple or typing.NamedTuple .

Future and RRef represent the Python classes torch.futures.Future and torch.distributed.rpc.RRef .

Await represents the Python class torch._awaits._Await .

Apart from being composable with TorchScript types, these TorchScript structural types often support a common subset of the operators and methods of their Python counterparts.

This example uses typing.NamedTuple syntax to define a tuple:

This example uses collections.namedtuple syntax to define a tuple:
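Both snippets are missing; in plain Python (leaving out the torch.jit.script step) the two styles look like:

```python
import collections
from typing import NamedTuple

class Point(NamedTuple):  # typing.NamedTuple style
    x: int
    y: int

Coord = collections.namedtuple("Coord", ["x", "y"])  # collections style

p = Point(1, 2)
c = Coord(3, 4)
print(p.x + p.y, c.x + c.y)  # 3 7
```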

This example illustrates a common mistake of annotating structural types, i.e., not importing the composite type classes from the typing module:

Running the above code yields the following scripting error:

The remedy is to add the line from typing import Tuple to the beginning of the code.

Nominal Types ¶

Nominal TorchScript types are Python classes. These types are called nominal because they are declared with a custom name and are compared using class names. Nominal classes are further classified into the following categories:

Among them, TSCustomClass and TSEnum must be compilable to TorchScript Intermediate Representation (IR). This is enforced by the type-checker.

Built-in Class ¶

Built-in nominal types are Python classes whose semantics are built into the TorchScript system (e.g., tensor types). TorchScript defines the semantics of these built-in nominal types, and often supports only a subset of the methods or attributes of its Python class definition.

Special Note on torch.nn.ModuleList and torch.nn.ModuleDict ¶

Although torch.nn.ModuleList and torch.nn.ModuleDict are defined as a list and dictionary in Python, they behave more like tuples in TorchScript:

In TorchScript, instances of torch.nn.ModuleList or torch.nn.ModuleDict are immutable.

Code that iterates over torch.nn.ModuleList or torch.nn.ModuleDict is completely unrolled so that elements of torch.nn.ModuleList or keys of torch.nn.ModuleDict can be of different subclasses of torch.nn.Module .

The following example highlights the use of a few built-in Torchscript classes ( torch.* ):

Custom Class ¶

Unlike built-in classes, semantics of custom classes are user-defined and the entire class definition must be compilable to TorchScript IR and subject to TorchScript type-checking rules.

Classes must be new-style classes. Python 3 has only new-style classes; in Python 2.x, a new-style class is declared by subclassing object.

Instance data attributes are statically typed, and instance attributes must be declared by assignments inside the __init__() method.

Method overloading is not supported (i.e., you cannot have multiple methods with the same method name).

MethodDefinition must be compilable to TorchScript IR and adhere to TorchScript’s type-checking rules, (i.e., all methods must be valid TorchScript functions and class attribute definitions must be valid TorchScript statements).

torch.jit.ignore and torch.jit.unused can be used to ignore the method or function that is not fully torchscriptable or should be ignored by the compiler.

TorchScript custom classes are quite limited compared to their Python counterpart. Torchscript custom classes:

Do not support class attributes.

Do not support subclassing except for subclassing an interface type or object.

Do not support method overloading.

Must initialize all its instance attributes in __init__() ; this is because TorchScript constructs a static schema of the class by inferring attribute types in __init__() .

Must contain only methods that satisfy TorchScript type-checking rules and are compilable to TorchScript IRs.

Python classes can be used in TorchScript if they are annotated with @torch.jit.script , similar to how a TorchScript function would be declared:

A TorchScript custom class type must “declare” all its instance attributes by assignments in __init__() . If an instance attribute is not defined in __init__() but accessed in other methods of the class, the class cannot be compiled as a TorchScript class, as shown in the following example:

The class will fail to compile and issue the following error:

In this example, a TorchScript custom class defines a class variable name, which is not allowed:

It leads to the following compile-time error:

Enum Type ¶

Like custom classes, semantics of the enum type are user-defined and the entire class definition must be compilable to TorchScript IR and adhere to TorchScript type-checking rules.

Value must be a TorchScript literal of type int , float , or str , and must be of the same TorchScript type.

TSEnumType is the name of a TorchScript enumerated type. Similar to Python enum, TorchScript allows restricted Enum subclassing: subclassing an enumerated type is allowed only if it does not define any members.

TorchScript supports only enum.Enum . It does not support other variations such as enum.IntEnum , enum.Flag , and enum.IntFlag .

Values of TorchScript enum members must be of the same type and can only be int , float , or str types, whereas Python enum members can be of any type.

Enums containing methods are ignored in TorchScript.

The following example defines the class Color as an Enum type:

The following example shows the case of restricted enum subclassing, where BaseColor does not define any member, thus can be subclassed by Color :

TorchScript Module Class ¶

TSModuleType is a special class type that is inferred from object instances that are created outside TorchScript. TSModuleType is named by the Python class of the object instance. The __init__() method of the Python class is not considered a TorchScript method, so it does not have to comply with TorchScript’s type-checking rules.

The type schema of a module instance class is constructed directly from an instance object (created outside the scope of TorchScript) rather than inferred from __init__() like custom classes. It is possible that two objects of the same instance class type follow two different type schemas.

In this sense, TSModuleType is not really a static type. Therefore, for type safety considerations, TSModuleType cannot be used in a TorchScript type annotation or be composed with TSType .

Module Instance Class ¶

TorchScript module type represents the type schema of a user-defined PyTorch module instance. When scripting a PyTorch module, the module object is always created outside TorchScript (i.e., passed in as parameter to forward ). The Python module class is treated as a module instance class, so the __init__() method of the Python module class is not subject to the type-checking rules of TorchScript.

forward() and other methods decorated with @torch.jit.export must be compilable to TorchScript IR and subject to TorchScript’s type-checking rules.

Unlike custom classes, only the forward method and other methods decorated with @torch.jit.export of the module type need to be compilable. Most notably, __init__() is not considered a TorchScript method. Consequently, module type constructors cannot be invoked within the scope of TorchScript. Instead, TorchScript module objects are always constructed outside and passed into torch.jit.script(ModuleObj) .

This example illustrates a few features of module types:

The TestModule instance is created outside the scope of TorchScript (i.e., before invoking torch.jit.script ).

__init__() is not considered a TorchScript method, therefore, it does not have to be annotated and can contain arbitrary Python code. In addition, the __init__() method of an instance class cannot be invoked in TorchScript code. Because TestModule instances are instantiated in Python, in this example, TestModule(2.0) and TestModule(2) create two instances with different types for their data attributes: self.x is of type float for TestModule(2.0) , whereas self.x is of type int for TestModule(2) .

TorchScript automatically compiles other methods (e.g., mul() ) invoked by methods annotated via @torch.jit.export or forward() methods.

Entry-points to a TorchScript program are either forward() of a module type, functions annotated as torch.jit.script , or methods annotated as torch.jit.export .

The following example shows an incorrect usage of module type. Specifically, this example invokes the constructor of TestModule inside the scope of TorchScript:

Type Annotation ¶

Since TorchScript is statically typed, programmers need to annotate types at strategic points of TorchScript code so that every local variable or instance data attribute has a static type, and every function and method has a statically typed signature.

When to Annotate Types ¶

In general, type annotations are only needed in places where static types cannot be automatically inferred (e.g., parameters or sometimes return types to methods or functions). Types of local variables and data attributes are often automatically inferred from their assignment statements. Sometimes an inferred type may be too restrictive, e.g., x being inferred as NoneType through assignment x = None , whereas x is actually used as an Optional . In such cases, type annotations may be needed to overwrite auto inference, e.g., x: Optional[int] = None . Note that it is always safe to type annotate a local variable or data attribute even if its type can be automatically inferred. The annotated type must be congruent with TorchScript’s type-checking.

When a parameter, local variable, or data attribute is not type annotated and its type cannot be automatically inferred, TorchScript assumes it to be a default type of TensorType , List[TensorType] , or Dict[str, TensorType] .

Annotate Function Signature ¶

Since parameter types cannot be automatically inferred from the body of a function (this applies to both functions and methods), they need to be type annotated. Otherwise, they assume the default type TensorType .

TorchScript supports two styles for method and function signature type annotation:

Python3-style annotates types directly on the signature. As such, it allows individual parameters to be left unannotated (whose type will be the default type of TensorType ), or allows the return type to be left unannotated (whose type will be automatically inferred).

Note that when using Python3 style, the type of self is automatically inferred and should not be annotated.

Mypy style annotates types as a comment right below the function/method declaration. In the Mypy style, since parameter names do not appear in the annotation, all parameters have to be annotated.

In this example:

a is not annotated and assumes the default type of TensorType .

b is annotated as type int .

The return type is not annotated and is automatically inferred as type TensorType (based on the type of the value being returned).

The following example uses Mypy style annotation. Note that parameters or return values must be annotated even if some of them assume the default type.

Annotate Variables and Data Attributes ¶

In general, types of data attributes (including class and instance data attributes) and local variables can be automatically inferred from assignment statements. Sometimes, however, if a variable or attribute is associated with values of different types (e.g., as None or TensorType ), then they may need to be explicitly type annotated as a wider type such as Optional[int] or Any .

Local Variables ¶

Local variables can be annotated according to Python3 typing module annotation rules, i.e.,

In general, types of local variables can be automatically inferred. In some cases, however, you may need to annotate a multi-type for local variables that may be associated with different concrete types. Typical multi-types include Optional[T] and Any .
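For example, in plain Python (the same annotation style TorchScript accepts):

```python
from typing import List, Optional

def find(values: List[int], target: int) -> Optional[int]:
    # Without the annotation, result would be inferred as NoneType
    # from its initial assignment; Optional[int] widens it.
    result: Optional[int] = None
    for i, v in enumerate(values):
        if v == target:
            result = i
    return result
```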

Instance Data Attributes ¶

For ModuleType classes, instance data attributes can be annotated according to Python3 typing module annotation rules. Instance data attributes can be annotated (optionally) as final via Final .

InstanceAttrIdentifier is the name of an instance attribute.

Final indicates that the attribute cannot be re-assigned outside of __init__ or overridden in subclasses.

Type Annotation APIs ¶

torch.jit.annotate(T, expr) ¶

This API annotates type T to an expression expr . This is often used when the default type of an expression is not the type intended by the programmer. For instance, an empty list (dictionary) has the default type of List[TensorType] ( Dict[TensorType, TensorType] ), but sometimes it may be used to initialize a list of some other types. Another common use case is for annotating the return type of tensor.tolist() . Note, however, that it cannot be used to annotate the type of a module attribute in __init__ ; torch.jit.Attribute should be used for this instead.

In this example, [] is declared as a list of integers via torch.jit.annotate (instead of assuming [] to be the default type of List[TensorType] ):

See torch.jit.annotate() for more information.

Type Annotation Appendix ¶

TorchScript Type System Definition ¶

Unsupported Typing Constructs ¶

TorchScript does not support all features and types of the Python3 typing module. Any functionality from the typing module that is not explicitly specified in this documentation is unsupported. The following table summarizes typing constructs that are either unsupported or supported with restrictions in TorchScript.



In development

Not supported

Not supported

Not supported

Not supported

Supported for module attributes, class attribute, and annotations, but not for functions.

Not supported

In development

Type aliases

Not supported

Nominal typing

In development

Structural typing

Not supported


Not supported


Not supported

Expressions ¶

The following section describes the grammar of expressions that are supported in TorchScript. It is modeled after the expressions chapter of the Python language reference .

Arithmetic Conversions ¶

There are a number of implicit type conversions that are performed in TorchScript:

A Tensor with a float or int data type can be implicitly converted to an instance of FloatType or IntType provided that it has a size of 0, does not have requires_grad set to True , and will not require narrowing.

Instances of StringType can be implicitly converted to DeviceType .

The implicit conversion rules from the two bullet points above can be applied to instances of TupleType to produce instances of ListType with the appropriate contained type.

Explicit conversions can be invoked using the float , int , bool , and str built-in functions that accept primitive data types as arguments and can accept user-defined types if they implement __bool__ , __str__ , etc.

Atoms are the most basic elements of expressions.

Identifiers ¶

The rules that dictate what is a legal identifier in TorchScript are the same as their Python counterparts .

Evaluation of a literal yields an object of the appropriate type with the specific value (with approximations applied as necessary for floats). Literals are immutable, and multiple evaluations of identical literals may obtain the same object or distinct objects with the same value. stringliteral , integer , and floatnumber are defined in the same way as their Python counterparts.

Parenthesized Forms ¶

A parenthesized expression list yields whatever the expression list yields. If the list contains at least one comma, it yields a Tuple ; otherwise, it yields the single expression inside the expression list. An empty pair of parentheses yields an empty Tuple object ( Tuple[] ).

List and Dictionary Displays ¶

Lists and dicts can be constructed by either listing the container contents explicitly or by providing instructions on how to compute them via a set of looping instructions (i.e. a comprehension ). A comprehension is semantically equivalent to using a for loop and appending to an ongoing list. Comprehensions implicitly create their own scope to make sure that the items of the target list do not leak into the enclosing scope. In the case that container items are explicitly listed, the expressions in the expression list are evaluated left-to-right. If a key is repeated in a dict_display that has a key_datum_list , the resultant dictionary uses the value from the rightmost datum in the list that uses the repeated key.

Primaries ¶

Attribute references ¶.

The primary must evaluate to an object of a type that supports attribute references that have an attribute named identifier .

Subscriptions ¶

The primary must evaluate to an object that supports subscription.

If the primary is a List , Tuple , or str , the expression list must evaluate to an integer or slice.

If the primary is a Dict , the expression list must evaluate to an object of the same type as the key type of the Dict .

If the primary is a ModuleList , the expression list must be an integer literal.

If the primary is a ModuleDict , the expression must be a stringliteral .

A slicing selects a range of items in a str , Tuple , List , or Tensor . Slicings may be used as expressions or targets in assignment or del statements.

Slicings with more than one slice item in their slice lists can only be used with primaries that evaluate to an object of type Tensor .

The primary must desugar or evaluate to a callable object. All argument expressions are evaluated before the call is attempted.

Power Operator ¶

The power operator has the same semantics as the built-in pow function (not supported); it computes its left argument raised to the power of its right argument. It binds more tightly than unary operators on the left, but less tightly than unary operators on the right; i.e. -2 ** -3 == -(2 ** (-3)) . The left and right operands can be int , float or Tensor . Scalars are broadcast in the case of scalar-tensor/tensor-scalar exponentiation operations, and tensor-tensor exponentiation is done elementwise without any broadcasting.
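These binding rules can be checked directly in plain Python:

```python
# ** binds tighter than unary minus on its left...
print(-2 ** 2)   # -4, parsed as -(2 ** 2)
# ...but looser than unary minus on its right:
print(2 ** -1)   # 0.5, parsed as 2 ** (-1)
print(-2 ** -3 == -(2 ** -3))  # True
```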

Unary and Arithmetic Bitwise Operations ¶

The unary - operator yields the negation of its argument. The unary ~ operator yields the bitwise inversion of its argument. - can be used with int , float , and Tensor of int and float . ~ can only be used with int and Tensor of int .

Binary Arithmetic Operations ¶

The binary arithmetic operators can operate on Tensor , int , and float . For tensor-tensor ops, both arguments must have the same shape. For scalar-tensor or tensor-scalar ops, the scalar is usually broadcast to the size of the tensor. Division ops can only accept scalars as their right-hand side argument, and do not support broadcasting. The @ operator is for matrix multiplication and only operates on Tensor arguments. The multiplication operator ( * ) can be used with a list and integer in order to get a result that is the original list repeated a certain number of times.

Shifting Operations ¶

These operators accept two int arguments, two Tensor arguments, or a Tensor argument and an int or float argument. In all cases, a right shift by n is defined as floor division by pow(2, n) , and a left shift by n is defined as multiplication by pow(2, n) . When both arguments are Tensors , they must have the same shape. When one is a scalar and the other is a Tensor , the scalar is logically broadcast to match the size of the Tensor .
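The shift identities can be checked with plain int scalars:

```python
x, n = 13, 2
assert (x >> n) == x // 2 ** n == 3    # right shift: floor division by pow(2, n)
assert (x << n) == x * 2 ** n == 52    # left shift: multiplication by pow(2, n)
```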

Binary Bitwise Operations ¶

The & operator computes the bitwise AND of its arguments, the ^ the bitwise XOR, and the | the bitwise OR. Both operands must be int or Tensor , or the left operand must be Tensor and the right operand must be int . When both operands are Tensor , they must have the same shape. When the right operand is int , and the left operand is Tensor , the right operand is logically broadcast to match the shape of the Tensor .

Comparisons ¶

A comparison yields a boolean value ( True or False ), or if one of the operands is a Tensor , a boolean Tensor . Comparisons can be chained arbitrarily as long as they do not yield boolean Tensors that have more than one element. a op1 b op2 c ... is equivalent to a op1 b and b op2 c and ... .
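The chaining equivalence, sketched with plain scalars:

```python
a, b, c = 1, 2, 3
assert (a < b < c) == (a < b and b < c)
assert not (3 < 2 < 5)    # the chain is False as soon as one link is False
```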

Value Comparisons ¶

The operators < , > , == , >= , <= , and != compare the values of two objects. The two objects generally need to be of the same type, unless there is an implicit type conversion available between the objects. User-defined types can be compared if rich comparison methods (e.g., __lt__ ) are defined on them. Built-in type comparison works like Python:

Numbers are compared mathematically.

Strings are compared lexicographically.

lists , tuples , and dicts can be compared only to other lists , tuples , and dicts of the same type and are compared using the comparison operator of corresponding elements.

Membership Test Operations ¶

The operators in and not in test for membership. x in s evaluates to True if x is a member of s and False otherwise. x not in s is equivalent to not x in s . This operator is supported for lists , dicts , and tuples , and can be used with user-defined types if they implement the __contains__ method.
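A plain-Python sketch of __contains__ -based membership on a user-defined type (the Bag class here is hypothetical):

```python
class Bag:
    def __init__(self, items):
        self.items = list(items)

    def __contains__(self, x):      # backs both `in` and `not in`
        return x in self.items

b = Bag([1, 2, 3])
assert 2 in b
assert 4 not in b
```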

Identity Comparisons ¶

For all types except int , double , bool , and torch.device , the operators is and is not test for the object’s identity; x is y is True if and only if x and y are the same object. For the excepted types, is is equivalent to comparing them using == . x is not y yields the inverse of x is y .

Boolean Operations ¶

User-defined objects can customize their conversion to bool by implementing a __bool__ method. The operator not yields True if its operand is false, False otherwise. The expression x and y first evaluates x ; if x is false, its value is returned; otherwise, y is evaluated and its value is returned. The expression x or y first evaluates x ; if x is true, its value is returned; otherwise, y is evaluated and its value is returned.

Conditional Expressions ¶

The expression x if c else y first evaluates the condition c , not x . If c is True , x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned. As with if statements, x and y must evaluate to values of the same type.

Expression Lists ¶

A starred item can only appear on the left-hand side of an assignment statement, e.g., a, *b, c = ... .

Simple Statements ¶

The following section describes the syntax of simple statements that are supported in TorchScript. It is modeled after the simple statements chapter of the Python language reference .

Expression Statements ¶

Assignment Statements ¶

Augmented Assignment Statements ¶

Annotated Assignment Statements ¶

The raise Statement ¶

Raise statements in TorchScript do not support try/except/finally .

The assert Statement ¶

Assert statements in TorchScript do not support try/except/finally .

The return Statement ¶

Return statements in TorchScript do not support try/except/finally .

The del Statement ¶

The pass Statement ¶

The print Statement ¶

The break Statement ¶

The continue Statement ¶

Compound Statements ¶

The following section describes the syntax of compound statements that are supported in TorchScript. The section also highlights how Torchscript differs from regular Python statements. It is modeled after the compound statements chapter of the Python language reference .

The if Statement ¶

TorchScript supports both basic if/else and ternary if/else .

Basic if/else Statement ¶

elif clauses can be repeated an arbitrary number of times, but they must appear before the else clause.

Ternary if/else Statement ¶

A tensor with one dimension is promoted to bool :

A tensor with multiple dimensions is not promoted to bool :

Running the above code yields the following RuntimeError .

If a conditional variable is annotated as final , either the true or false branch is evaluated depending on the evaluation of the conditional variable.

In this example, only the True branch is evaluated, since a is annotated as final and set to True :

The while Statement ¶

while...else statements are not supported in TorchScript. They result in a RuntimeError .

The for-in Statement ¶

for...else statements are not supported in TorchScript. They result in a RuntimeError .

For loops on tuples: these unroll the loop, generating a body for each member of the tuple. The body must type-check correctly for each member.

For loops on lists: a for loop over an nn.ModuleList unrolls the body of the loop at compile time, once for each member of the module list.

The with Statement ¶

The with statement is used to wrap the execution of a block with methods defined by a context manager.

If a target was included in the with statement, the return value from the context manager’s __enter__() is assigned to it. Unlike Python, if an exception causes the suite to be exited, its type, value, and traceback are not passed as arguments to __exit__() ; three None arguments are supplied instead.

try , except , and finally statements are not supported inside with blocks.

Exceptions raised within a with block cannot be suppressed.

The tuple Statement ¶

Iterable types in TorchScript include Tensors , lists , tuples , dictionaries , strings , torch.nn.ModuleList , and torch.nn.ModuleDict .

You cannot convert a List to Tuple by using this built-in function.

Unpacking all outputs into a tuple is covered by:

The getattr Statement ¶

Attribute name must be a literal string.

Module type object is not supported (e.g., torch._C).

Custom class object is not supported (e.g., torch.classes.*).

The hasattr Statement ¶

The zip Statement ¶

Arguments must be iterables.

Two iterables of the same outer container type but different lengths are supported.

Both the iterables must be of the same container type:

This example fails because the iterables are of different container types:

Two iterables of the same container type but different data types are supported:

The enumerate Statement ¶

Iterable types in TorchScript include Tensors , lists , tuples , dictionaries , strings , torch.nn.ModuleList and torch.nn.ModuleDict .

Python Values ¶

Resolution Rules ¶

When given a Python value, TorchScript attempts to resolve it in one of the following five ways:

When a Python value is backed by a Python implementation that can be compiled by TorchScript, TorchScript compiles and uses the underlying Python implementation.

Example: torch.jit.Attribute

When a Python value is a wrapper of a native PyTorch op, TorchScript emits the corresponding operator.

Example: torch.jit._logging.add_stat_value

For a limited set of torch.* API calls (in the form of Python values) that TorchScript supports, TorchScript attempts to match a Python value against each item in the set.

When matched, TorchScript generates a corresponding SugaredValue instance that contains lowering logic for these values.

Example: torch.jit.isinstance()

For Python built-in functions and constants, TorchScript identifies them by name, and creates a corresponding SugaredValue instance that implements their functionality.

Example: all()

For Python values from unrecognized modules, TorchScript attempts to take a snapshot of the value and converts it to a constant in the graph of the function(s) or method(s) that are being compiled.

Example: math.pi

Python Built-in Functions Support ¶

TorchScript Support for Python Built-in Functions

Most Python built-in functions are supported, subject to recurring caveats: several accept only a restricted set of input types; some do not honor user-defined overrides; string-handling functions support only the ASCII character set; a number of keyword arguments are unsupported; and the attribute names passed to getattr / hasattr must be string literals.

Python Built-in Values Support ¶

TorchScript Support for Python Built-in Values

torch.* APIs ¶

Remote Procedure Calls ¶

TorchScript supports a subset of the RPC APIs, which support running a function on a specified remote worker instead of locally.

Specifically, the following APIs are fully supported:

rpc_sync() makes a blocking RPC call to run a function on a remote worker. RPC messages are sent and received in parallel to execution of Python code.

More details about its usage and examples can be found in rpc_sync() .

rpc_async() makes a non-blocking RPC call to run a function on a remote worker. RPC messages are sent and received in parallel to execution of Python code.

More details about its usage and examples can be found in rpc_async() .

remote() executes a remote call on a worker and gets a Remote Reference ( RRef ) as the return value.

More details about its usage and examples can be found in remote() .

Asynchronous Execution ¶

TorchScript enables you to create asynchronous computation tasks to make better use of computation resources. This is done via supporting a list of APIs that are only usable within TorchScript:

Creates an asynchronous task that executes func and returns a reference to the value of the result of this execution. Fork returns immediately.

Synonymous to torch.jit._fork() , which is only kept for backward compatibility reasons.

More details about its usage and examples can be found in fork() .

Forces completion of a torch.jit.Future[T] asynchronous task, returning the result of the task.

Synonymous to torch.jit._wait() , which is only kept for backward compatibility reasons.

More details about its usage and examples can be found in wait() .

Type Annotations ¶

TorchScript is statically typed. It provides a set of utilities to help annotate variables and attributes:

Provides a type hint to TorchScript where Python 3 style type hints do not work well.

One common example is annotating the type of an expression like [] , which is treated as List[torch.Tensor] by default. When a different type is needed, you can hint TorchScript with torch.jit.annotate(List[int], []) .
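In eager Python the same hint can be written as a PEP 526 annotation; torch.jit.annotate is the fallback where such annotations do not fit. A plain-Python sketch, with no TorchScript involved:

```python
from typing import List

def make_ids() -> List[int]:
    # without the hint, TorchScript would infer List[torch.Tensor] for []
    ids: List[int] = []
    ids.append(7)
    return ids

assert make_ids() == [7]
```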

More details can be found in annotate()

Common use cases include providing type hint for torch.nn.Module attributes. Because their __init__ methods are not parsed by TorchScript, torch.jit.Attribute should be used instead of torch.jit.annotate in the module’s __init__ methods.

More details can be found in Attribute()

An alias for Python’s typing.Final . torch.jit.Final is kept only for backward compatibility reasons.

Meta Programming ¶

TorchScript provides a set of utilities to facilitate meta programming:

Returns a boolean value indicating whether the current program is compiled by torch.jit.script or not.

When used in an assert or an if statement, the scope or branch where torch.jit.is_scripting() evaluates to False is not compiled.

Its value can be evaluated statically at compile time, thus commonly used in if statements to stop TorchScript from compiling one of the branches.

More details and examples can be found in is_scripting()

Returns a boolean value indicating whether the current program is traced by torch.jit.trace / torch.jit.trace_module or not.

More details can be found in is_tracing()

This decorator indicates to the compiler that a function or method should be ignored and left as a Python function.

This allows you to leave code in your model that is not yet TorchScript compatible.

If a function decorated by @torch.jit.ignore is called from TorchScript, ignored functions will dispatch the call to the Python interpreter.

Models with ignored functions cannot be exported.

More details and examples can be found in ignore()

This decorator indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception.

This allows you to leave code in your model that is not yet TorchScript compatible and still export your model.

If a function decorated by @torch.jit.unused is called from TorchScript, a runtime error will be raised.

More details and examples can be found in unused()

Type Refinement ¶

Returns a boolean indicating whether a variable is of the specified type.

More details about its usage and examples can be found in isinstance() .

  • TorchScript Types
  • Operators Supported for Any Type
  • Design Notes
  • Primitive Types
  • Structural Types
  • Nominal Types
  • Special Note on torch.nn.ModuleList and torch.nn.ModuleDict
  • Custom Class
  • TorchScript Module Class
  • Module Instance Class
  • When to Annotate Types
  • Annotate Function Signature
  • Local Variables
  • Instance Data Attributes
  • torch.jit.annotate(T, expr)
  • TorchScript Type System Definition
  • Unsupported Typing Constructs
  • Arithmetic Conversions
  • Identifiers
  • Parenthesized Forms
  • List and Dictionary Displays
  • Attribute References
  • Subscriptions
  • Power Operator
  • Unary and Arithmetic Bitwise Operations
  • Binary Arithmetic Operations
  • Shifting Operations
  • Binary Bitwise Operations
  • Value Comparisons
  • Membership Test Operations
  • Identity Comparisons
  • Boolean Operations
  • Conditional Expressions
  • Expression Lists
  • Expression Statements
  • Assignment Statements
  • Augmented Assignment Statements
  • Annotated Assignment Statements
  • The raise Statement
  • The assert Statement
  • The return Statement
  • The del Statement
  • The pass Statement
  • The print Statement
  • The break Statement
  • The continue Statement
  • Basic if/else Statement
  • Ternary if/else Statement
  • The while Statement
  • The for-in Statement
  • The with Statement
  • The tuple Statement
  • The getattr Statement
  • The hasattr Statement
  • The zip Statement
  • The enumerate Statement
  • Resolution Rules
  • Python Built-in Functions Support
  • Python Built-in Values Support
  • Remote Procedure Calls
  • Asynchronous Execution
  • Type Annotations
  • Meta Programming
  • Type Refinement



Annotated assignment in a match block does not generate SETUP_ANNOTATIONS #105164


martindemello commented May 31, 2023 (edited by bedevere-bot)

Compiling the following code:


There is no opcode generated despite the access later on. (Found via - for some reason this does not generate a runtime error, but it does cause issues for tools like pytype.)

cpython 3.10



yilei commented May 31, 2023

FWIW, it's a runtime error when the module is imported, not run as main:

    match 0:
        case 0:
            x: int = 1

    $ python
    Traceback (most recent call last):
      File "", line 1, in <module>
        import lib
      File "", line 3, in <module>
        x: int = 1
    NameError: name '__annotations__' is not defined



brandtbucher commented May 31, 2023 (edited)

I think in needs to be taught about .

Looks like picked this up already. Feel free to ping me for review!



hauntsaninja commented Sep 7, 2023

Thanks, looks like this is fixed





Assigning Multiple Variables in One Line in Python

In this video, we will explore how to assign multiple variables in one line in Python. This technique allows for concise and readable code, especially when you need to initialize multiple variables simultaneously. This tutorial is perfect for students, professionals, or anyone interested in enhancing their Python programming skills.

Why Assign Multiple Variables in One Line?

Assigning multiple variables in one line helps to:

  • Write more concise and readable code.
  • Initialize multiple variables simultaneously.
  • Simplify code maintenance and debugging.

Key Concepts

1. Multiple Assignment:

  • The ability to assign values to multiple variables in a single line of code.

2. Tuple Unpacking:

  • A technique that allows you to assign values from a tuple to multiple variables in one line.

How to Assign Multiple Variables in One Line

1. Basic Multiple Assignment:

  • Assign values to multiple variables separated by commas.

2. Tuple Unpacking:

  • Use tuples to assign multiple variables in a single line.

3. List Unpacking:

  • Similar to tuple unpacking, but using lists.

Practical Examples

Example 1: Basic Multiple Assignment

  • Assign values to multiple variables in one line.
  • Example: a, b, c = 1, 2, 3

Example 2: Tuple Unpacking

  • Use tuple unpacking to assign values.
  • Example: x, y = (4, 5)

Example 3: List Unpacking

  • Use list unpacking to assign values.
  • Example: m, n, o = [6, 7, 8]

Practical Applications

Data Initialization:

  • Initialize multiple variables with values in one line for cleaner and more concise code.

Function Returns:

  • Assign multiple return values from a function call to separate variables in one line.

Swapping Variables:

  • Swap values of two variables in one line using tuple unpacking.
  • Example: a, b = b, a
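The patterns above, including the swap, in one runnable sketch:

```python
a, b, c = 1, 2, 3      # basic multiple assignment
x, y = (4, 5)          # tuple unpacking
m, n, o = [6, 7, 8]    # list unpacking
a, b = b, a            # swap via tuple packing/unpacking
assert (a, b) == (2, 1)
assert (x, y, m, n, o) == (4, 5, 6, 7, 8)
```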



What is “except Exception as e” in Python?


except Exception as e is a construct in Python used for exception handling. It allows you to catch exceptions that occur during the execution of a block of code by using a try block to wrap the code that might raise an exception, and an except block to catch and handle the exception.

The Exception part specifies that any exception of this type or its subclasses should be caught, and the as e part assigns the caught exception to a variable e , which you can then use to access details about the exception.

Take a look at this example:
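A minimal version of the example, reconstructed from the step-by-step walkthrough below:

```python
try:
    result = 10 / 0
except Exception as e:
    print(f"An error occurred: {e}")  # prints: An error occurred: division by zero
```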

Running that will print: An error occurred: division by zero

This is what happens step-by-step:

  • The try block attempts to execute result = 10 / 0 .
  • Division by zero is not allowed so a ZeroDivisionError is raised.
  • The except Exception as e block catches the ZeroDivisionError .
  • The exception is assigned to the variable e , which contains the error message "division by zero".
  • The print(f"An error occurred: {e}") statement prints the error message to the console.

When using except Exception as e , here are a few things to keep in mind for handling exceptions effectively:

Catch specific exceptions rather than all exceptions

Catching all exceptions with except Exception as e can mask unexpected errors and make debugging more difficult.

💡Best Practice: Whenever possible, catch specific exceptions (e.g., except ZeroDivisionError as e ) to handle different error conditions appropriately.

Clean up resources

Ensure that resources (e.g., files or network connections) are properly released even if an exception occurs.

💡Best Practice: Use a finally block to clean up resources. Like this:
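A minimal sketch of the finally pattern (the file-reading helper here is hypothetical):

```python
import os
import tempfile

def read_first_line(path):
    f = open(path)
    try:
        return f.readline()
    finally:
        f.close()          # always runs, even if readline() raises

# usage
fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")
os.close(fd)
assert read_first_line(path) == "hello\n"
os.remove(path)
```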

Use chained exceptions to catch one exception and raise another

Chained exceptions allow you to catch one exception and raise another while preserving the original exception's context. This is helpful for debugging because it provides a clear trail of what went wrong.

💡Best Practice: Each function should handle its own specific concerns but communicate issues up the call stack with chained exceptions.

Imagine a scenario where you have a function that validates user input and another function that processes that input. If the input is invalid, the validation function raises a specific error, and the processing function raises a more general error to be handled higher up in the call stack.
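A sketch of that scenario (the function names validate and process_input are illustrative, not from the original post):

```python
def validate(text):
    # specific concern: reject non-numeric input
    if not text.isdigit():
        raise ValueError(f"not a number: {text!r}")
    return int(text)

def process_input(text):
    try:
        return validate(text)
    except ValueError as e:
        # raise a more general error, preserving the original as __cause__
        raise ValueError("processing failed") from e

try:
    process_input("abc")
except ValueError as e:
    assert str(e) == "processing failed"
    assert isinstance(e.__cause__, ValueError)   # original context preserved
```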

When you run that code, the main function catches the ValueError raised by process_input and prints both the general error message and the original exception.

Log exceptions

Logging exceptions helps with debugging and maintaining a record of errors.

💡Best Practice: Use the exception monitoring SDK Rollbar which gives you a real-time feed of all errors, including unhandled exceptions. Rollbar's revolutionary grouping engine uses machine learning to determine patterns on an ongoing basis and identify unique errors.

When you run this code, any exceptions caught in the except block will be logged to Rollbar, allowing you to find and fix errors in your code faster. Try Rollbar today!

Related Resources

How to Catch Multiple Exceptions in Python

How to Fix “IndexError: List Assignment Index Out of Range” in Python

How to Fix Invalid SyntaxError in Python



What is this odd colon behavior doing? [duplicate]

I am using Python 3.6.1, and I have come across something very strange. I had a simple dictionary assignment typo that took me a long time to find.

What is the code context["a"]: 2 doing? It doesn't raise a SyntaxError when it should IMO. At first I thought it was creating a slice. However, typing repr(context["a"]: 2) raises a SyntaxError . I also typed context["a"]: 2 in the console and the console didn't print anything. I thought maybe it returned None , but I'm not so sure.

I've also thought it could be a single line if statement, but that shouldn't be the right syntax either.

Additionally, context["a"] should raise a KeyError .

I am perplexed. What is going on?


  • 2 Does this answer your question? What are variable annotations in Python 3.6? –  Georgy Commented Aug 8, 2020 at 11:28

2 Answers 2

You have accidentally written a syntactically correct variable annotation . That feature was introduced in Python 3.6 (see PEP 526 ).

Although a variable annotation is parsed as part of an annotated assignment , the assignment statement is optional :

Thus, in context["a"]: 2

  • context["a"] is the annotation target
  • 2 is the annotation itself
  • context["a"] is left uninitialised

The PEP states that "the target of the annotation can be any valid single assignment target, at least syntactically (it is up to the type checker what to do with this)" , which means that the key doesn't need to exist to be annotated (hence no KeyError ). Here's an example from the original PEP:
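A sketch in the spirit of the PEP's example (the dict name is illustrative):

```python
d = {}
d['a']: int      # annotates the subscript target; the final lookup is never performed
assert d == {}   # no KeyError, and the dict is unchanged
```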

Normally, the annotation expression should evaluate to a Python type -- after all the main use of annotations is type hinting, but it is not enforced. The annotation can be any valid Python expression, regardless of the type or value of the result.

As you can see, at this time type hints are very permissive and rarely useful, unless you have a static type checker such as mypy .


  • 15 Shouldn't this require an = assignment operator then? The key doesn't exist. This just feels wrong to me. –  justengel Commented Jan 18, 2018 at 14:29
  • 1 In this case, : is the assignment operator. We're just "assigning" a type annotation alone, not a key. I doubt there's any reason for allowing it, just an unintended side affect of adding the annotation syntax. –  chepner Commented Jan 18, 2018 at 15:04
  • 7 It's weird that it'll allow you to annotate a target that hasn't yet been defined though. If my very first line is x: str and immediately followed by type(x) , the interpreter will raise a NameError . IMO the syntax should enforce the object is pre-defined, or is defined on the spot. This just introduces confusion. –  r.ook Commented Jan 18, 2018 at 15:21
  • 3 @Idlehands This defeats the purpose though. Having x = 'i am a string' prior to x: str makes the latter kind of redundant.. This shouldn't have been added at all. It was fine as comment; I never show it used one way or the other. –  Ma0 Commented Jan 18, 2018 at 15:25
  • 3 @Idlehands Allowing this syntax probably enables you to use type hinting in function definitions. In def f(x: str): ... , x is also not defined at the moment of being annotated. –  Graipher Commented Jan 18, 2018 at 22:09

The annotations are automatically stored in __annotations__ , which is a dict. For x: y , y must be a valid expression: whatever is on the right side of the : has to evaluate. On the other hand, x must, at a minimum, be able to serve as a dict key, and thus be hashable.

In addition, the LHS cannot be a set, because sets are unhashable:

>>> {2}: 8
SyntaxError: illegal target for annotation

nor a list:

>>> [2]: 8
SyntaxError: only single target (not list) can be annotated

nor a tuple:

>>> (2,3): 8
SyntaxError: only single target (not tuple) can be annotated
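The storage behavior is easy to see at class scope, where a bare annotation records an entry in __annotations__ without defining the name (a sketch):

```python
class C:
    x: int            # annotated but never assigned
    y: str = 'hi'

assert C.__annotations__ == {'x': int, 'y': str}
assert not hasattr(C, 'x')    # a bare annotation does not define the name
assert C.y == 'hi'
```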



annotated assignment python


  1. Anotações de função em Python

    annotated assignment python

  2. Function Annotations in Python

    annotated assignment python

  3. Python Assignment

    annotated assignment python

  4. Assignment-1 (Python Programming)

    annotated assignment python


    annotated assignment python

  6. 7. Type Annotations

    annotated assignment python


  1. Assignment

  2. Assignment #3 Annotated Bibliography Introduction

  3. Variables and Multiple Assignment

  4. Assignment #3 Annotated bibliography Structure

  5. Annotated Bibliography

  6. Python Tip! Assignment operator in Python #python #programming #basic #shorts #django #coding

