
Field Types

Where possible pydantic uses standard library types to define fields, thus smoothing the learning curve. For many useful applications, however, no standard library type exists, so pydantic implements many commonly used types.

If no existing type suits your purpose you can also implement your own pydantic-compatible types with custom properties and validation.

Standard Library Types

pydantic supports many common types from the Python standard library. If you need stricter processing see Strict Types; if you need to constrain the values allowed (e.g. to require a positive int) see Constrained Types.

None, type(None) or Literal[None] (equivalent according to PEP 484)
allows only None value
bool
see Booleans below for details on how bools are validated and what values are permitted
int
pydantic uses int(v) to coerce types to an int; see this warning on loss of information during data conversion
float
similarly, float(v) is used to coerce values to floats
str
strings are accepted as-is, int, float, and Decimal are coerced using str(v), bytes and bytearray are converted using v.decode(), enums inheriting from str are converted using v.value, and all other types cause an error
bytes
bytes are accepted as-is, bytearray is converted using bytes(v), str are converted using v.encode(), and int, float, and Decimal are coerced using str(v).encode()
list
allows list, tuple, set, frozenset, deque, or generators and casts to a list; see typing.List below for sub-type constraints
tuple
allows list, tuple, set, frozenset, deque, or generators and casts to a tuple; see typing.Tuple below for sub-type constraints
dict
dict(v) is used to attempt to convert a dictionary; see typing.Dict below for sub-type constraints
set
allows list, tuple, set, frozenset, deque, or generators and casts to a set; see typing.Set below for sub-type constraints
frozenset
allows list, tuple, set, frozenset, deque, or generators and casts to a frozen set; see typing.FrozenSet below for sub-type constraints
deque
allows list, tuple, set, frozenset, deque, or generators and casts to a deque; see typing.Deque below for sub-type constraints
datetime.date
see Datetime Types below for more detail on parsing and validation
datetime.time
see Datetime Types below for more detail on parsing and validation
datetime.datetime
see Datetime Types below for more detail on parsing and validation
datetime.timedelta
see Datetime Types below for more detail on parsing and validation
typing.Any
allows any value including None, thus an Any field is optional
typing.Annotated
allows wrapping another type with arbitrary metadata, as per PEP-593. The Annotated hint may contain a single call to the Field function, but otherwise the additional metadata is ignored and the root type is used.
typing.TypeVar
constrains the values allowed based on constraints or bound, see TypeVar
typing.Union
see Unions below for more detail on parsing and validation
typing.Optional
Optional[x] is simply short hand for Union[x, None]; see Unions below for more detail on parsing and validation and Required Fields for details about required fields that can receive None as a value.
typing.List
see Typing Iterables below for more detail on parsing and validation
typing.Tuple
see Typing Iterables below for more detail on parsing and validation
subclass of typing.NamedTuple
Same as tuple but instantiates with the given namedtuple and validates fields since they are annotated. See Annotated Types below for more detail on parsing and validation
subclass of collections.namedtuple
Same as subclass of typing.NamedTuple but all fields will have type Any since they are not annotated
typing.Dict
see Typing Iterables below for more detail on parsing and validation
subclass of typing.TypedDict
Same as dict but pydantic will validate the dictionary since keys are annotated. See Annotated Types below for more detail on parsing and validation
typing.Set
see Typing Iterables below for more detail on parsing and validation
typing.FrozenSet
see Typing Iterables below for more detail on parsing and validation
typing.Deque
see Typing Iterables below for more detail on parsing and validation
typing.Sequence
see Typing Iterables below for more detail on parsing and validation
typing.Iterable
this is reserved for iterables that shouldn't be consumed. See Infinite Generators below for more detail on parsing and validation
typing.Type
see Type below for more detail on parsing and validation
typing.Callable
see Callable below for more detail on parsing and validation
typing.Pattern
will cause the input value to be passed to re.compile(v) to create a regex pattern
ipaddress.IPv4Address
simply uses the type itself for validation by passing the value to IPv4Address(v); see Pydantic Types for other custom IP address types
ipaddress.IPv4Interface
simply uses the type itself for validation by passing the value to IPv4Interface(v); see Pydantic Types for other custom IP address types
ipaddress.IPv4Network
simply uses the type itself for validation by passing the value to IPv4Network(v); see Pydantic Types for other custom IP address types
ipaddress.IPv6Address
simply uses the type itself for validation by passing the value to IPv6Address(v); see Pydantic Types for other custom IP address types
ipaddress.IPv6Interface
simply uses the type itself for validation by passing the value to IPv6Interface(v); see Pydantic Types for other custom IP address types
ipaddress.IPv6Network
simply uses the type itself for validation by passing the value to IPv6Network(v); see Pydantic Types for other custom IP address types
enum.Enum
checks that the value is a valid Enum instance
subclass of enum.Enum
checks that the value is a valid member of the enum; see Enums and Choices for more details
enum.IntEnum
checks that the value is a valid IntEnum instance
subclass of enum.IntEnum
checks that the value is a valid member of the integer enum; see Enums and Choices for more details
decimal.Decimal
pydantic attempts to convert the value to a string, then passes the string to Decimal(v)
pathlib.Path
simply uses the type itself for validation by passing the value to Path(v); see Pydantic Types for other more strict path types
uuid.UUID
strings and bytes (converted to strings) are passed to UUID(v), with a fallback to UUID(bytes=v) for bytes and bytearray; see Pydantic Types for other stricter UUID types
ByteSize
converts a bytes string with units to bytes
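
For example, a minimal sketch of the ByteSize behaviour described in the last item above (the FileInfo model name is illustrative):

from pydantic import BaseModel, ByteSize


class FileInfo(BaseModel):
    size: ByteSize


print(FileInfo(size=52000).size)
#> 52000
print(FileInfo(size='3 KiB').size)
#> 3072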

Typing Iterables

pydantic uses standard library typing types as defined in PEP 484 to define complex objects.

from typing import (
    Deque, Dict, FrozenSet, List, Optional, Sequence, Set, Tuple, Union
)

from pydantic import BaseModel


class Model(BaseModel):
    simple_list: list = None
    list_of_ints: List[int] = None

    simple_tuple: tuple = None
    tuple_of_different_types: Tuple[int, float, str, bool] = None

    simple_dict: dict = None
    dict_str_float: Dict[str, float] = None

    simple_set: set = None
    set_bytes: Set[bytes] = None
    frozen_set: FrozenSet[int] = None

    str_or_bytes: Union[str, bytes] = None
    none_or_str: Optional[str] = None

    sequence_of_ints: Sequence[int] = None

    compound: Dict[Union[str, bytes], List[Set[int]]] = None

    deque: Deque[int] = None


print(Model(simple_list=['1', '2', '3']).simple_list)
#> ['1', '2', '3']
print(Model(list_of_ints=['1', '2', '3']).list_of_ints)
#> [1, 2, 3]

print(Model(simple_dict={'a': 1, b'b': 2}).simple_dict)
#> {'a': 1, b'b': 2}
print(Model(dict_str_float={'a': 1, b'b': 2}).dict_str_float)
#> {'a': 1.0, 'b': 2.0}

print(Model(simple_tuple=[1, 2, 3, 4]).simple_tuple)
#> (1, 2, 3, 4)
print(Model(tuple_of_different_types=[4, 3, 2, 1]).tuple_of_different_types)
#> (4, 3.0, '2', True)

print(Model(sequence_of_ints=[1, 2, 3, 4]).sequence_of_ints)
#> [1, 2, 3, 4]
print(Model(sequence_of_ints=(1, 2, 3, 4)).sequence_of_ints)
#> (1, 2, 3, 4)

print(Model(deque=[1, 2, 3]).deque)
#> deque([1, 2, 3])

The same example using standard collection types as generics (Python 3.9 and above):

from typing import Deque, Optional, Union
from collections.abc import Sequence

from pydantic import BaseModel


class Model(BaseModel):
    simple_list: list = None
    list_of_ints: list[int] = None

    simple_tuple: tuple = None
    tuple_of_different_types: tuple[int, float, str, bool] = None

    simple_dict: dict = None
    dict_str_float: dict[str, float] = None

    simple_set: set = None
    set_bytes: set[bytes] = None
    frozen_set: frozenset[int] = None

    str_or_bytes: Union[str, bytes] = None
    none_or_str: Optional[str] = None

    sequence_of_ints: Sequence[int] = None

    compound: dict[Union[str, bytes], list[set[int]]] = None

    deque: Deque[int] = None


print(Model(simple_list=['1', '2', '3']).simple_list)
#> ['1', '2', '3']
print(Model(list_of_ints=['1', '2', '3']).list_of_ints)
#> [1, 2, 3]

print(Model(simple_dict={'a': 1, b'b': 2}).simple_dict)
#> {'a': 1, b'b': 2}
print(Model(dict_str_float={'a': 1, b'b': 2}).dict_str_float)
#> {'a': 1.0, 'b': 2.0}

print(Model(simple_tuple=[1, 2, 3, 4]).simple_tuple)
#> (1, 2, 3, 4)
print(Model(tuple_of_different_types=[4, 3, 2, 1]).tuple_of_different_types)
#> (4, 3.0, '2', True)

print(Model(sequence_of_ints=[1, 2, 3, 4]).sequence_of_ints)
#> [1, 2, 3, 4]
print(Model(sequence_of_ints=(1, 2, 3, 4)).sequence_of_ints)
#> (1, 2, 3, 4)

print(Model(deque=[1, 2, 3]).deque)
#> deque([1, 2, 3])

And with the | union operator (Python 3.10 and above):

from typing import Deque
from collections.abc import Sequence

from pydantic import BaseModel


class Model(BaseModel):
    simple_list: list = None
    list_of_ints: list[int] = None

    simple_tuple: tuple = None
    tuple_of_different_types: tuple[int, float, str, bool] = None

    simple_dict: dict = None
    dict_str_float: dict[str, float] = None

    simple_set: set = None
    set_bytes: set[bytes] = None
    frozen_set: frozenset[int] = None

    str_or_bytes: str | bytes = None
    none_or_str: str | None = None

    sequence_of_ints: Sequence[int] = None

    compound: dict[str | bytes, list[set[int]]] = None

    deque: Deque[int] = None


print(Model(simple_list=['1', '2', '3']).simple_list)
#> ['1', '2', '3']
print(Model(list_of_ints=['1', '2', '3']).list_of_ints)
#> [1, 2, 3]

print(Model(simple_dict={'a': 1, b'b': 2}).simple_dict)
#> {'a': 1, b'b': 2}
print(Model(dict_str_float={'a': 1, b'b': 2}).dict_str_float)
#> {'a': 1.0, 'b': 2.0}

print(Model(simple_tuple=[1, 2, 3, 4]).simple_tuple)
#> (1, 2, 3, 4)
print(Model(tuple_of_different_types=[4, 3, 2, 1]).tuple_of_different_types)
#> (4, 3.0, '2', True)

print(Model(sequence_of_ints=[1, 2, 3, 4]).sequence_of_ints)
#> [1, 2, 3, 4]
print(Model(sequence_of_ints=(1, 2, 3, 4)).sequence_of_ints)
#> (1, 2, 3, 4)

print(Model(deque=[1, 2, 3]).deque)
#> deque([1, 2, 3])

(This script is complete, it should run "as is")

Infinite Generators

If you have a generator you can use Sequence as described above. In that case, the generator will be consumed and stored on the model as a list and its values will be validated with the sub-type of Sequence (e.g. int in Sequence[int]).

But if you have a generator that you don't want to be consumed, e.g. an infinite generator or a remote data loader, you can define its type with Iterable:

from typing import Iterable
from pydantic import BaseModel


class Model(BaseModel):
    infinite: Iterable[int]


def infinite_ints():
    i = 0
    while True:
        yield i
        i += 1


m = Model(infinite=infinite_ints())
print(m)
#> infinite=<generator object infinite_ints at 0x7f758f852c70>

for i in m.infinite:
    print(i)
    #> 0
    #> 1
    #> 2
    #> 3
    #> 4
    #> 5
    #> 6
    #> 7
    #> 8
    #> 9
    #> 10
    if i == 10:
        break

The same example using collections.abc.Iterable (Python 3.9 and above):

from collections.abc import Iterable
from pydantic import BaseModel


class Model(BaseModel):
    infinite: Iterable[int]


def infinite_ints():
    i = 0
    while True:
        yield i
        i += 1


m = Model(infinite=infinite_ints())
print(m)
#> infinite=<generator object infinite_ints at 0x7f758f852e30>

for i in m.infinite:
    print(i)
    #> 0
    #> 1
    #> 2
    #> 3
    #> 4
    #> 5
    #> 6
    #> 7
    #> 8
    #> 9
    #> 10
    if i == 10:
        break

(This script is complete, it should run "as is")

Warning

Iterable fields only perform a simple check that the argument is iterable and won't be consumed.

No validation of their values is performed as it cannot be done without consuming the iterable.

Tip

If you want to validate the values of an infinite generator you can create a separate model and use it while consuming the generator, reporting the validation errors as appropriate.

pydantic can't validate the values automatically for you because it would require consuming the infinite generator.
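
A minimal sketch of that approach (the Item model is illustrative, not part of pydantic):

from typing import Iterable

from pydantic import BaseModel


class Item(BaseModel):
    value: int


class Model(BaseModel):
    infinite: Iterable[int]


def infinite_ints():
    i = 0
    while True:
        yield i
        i += 1


m = Model(infinite=infinite_ints())

for raw in m.infinite:
    # validate each value only as it is consumed
    item = Item(value=raw)
    print(item.value)
    #> 0
    #> 1
    #> 2
    if item.value == 2:
        break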

Validating the first value

You can create a validator to validate the first value in an infinite generator and still not consume it entirely.

import itertools
from typing import Iterable
from pydantic import BaseModel, validator, ValidationError
from pydantic.fields import ModelField


class Model(BaseModel):
    infinite: Iterable[int]

    @validator('infinite')
    # You don't need to add the "ModelField", but it will help your
    # editor give you completion and catch errors
    def infinite_first_int(cls, iterable, field: ModelField):
        first_value = next(iterable)
        if field.sub_fields:
            # The Iterable had a parameter type, in this case it's int
            # We use it to validate the first value
            sub_field = field.sub_fields[0]
            v, error = sub_field.validate(first_value, {}, loc='first_value')
            if error:
                raise ValidationError([error], cls)
        # This creates a new generator that returns the first value and then
        # the rest of the values from the (already started) iterable
        return itertools.chain([first_value], iterable)


def infinite_ints():
    i = 0
    while True:
        yield i
        i += 1


m = Model(infinite=infinite_ints())
print(m)
#> infinite=<itertools.chain object at 0x7f758f520340>


def infinite_strs():
    while True:
        yield from 'allthesingleladies'


try:
    Model(infinite=infinite_strs())
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    infinite -> first_value
      value is not a valid integer (type=type_error.integer)
    """

The same example using collections.abc.Iterable (Python 3.9 and above):

import itertools
from collections.abc import Iterable
from pydantic import BaseModel, validator, ValidationError
from pydantic.fields import ModelField


class Model(BaseModel):
    infinite: Iterable[int]

    @validator('infinite')
    # You don't need to add the "ModelField", but it will help your
    # editor give you completion and catch errors
    def infinite_first_int(cls, iterable, field: ModelField):
        first_value = next(iterable)
        if field.sub_fields:
            # The Iterable had a parameter type, in this case it's int
            # We use it to validate the first value
            sub_field = field.sub_fields[0]
            v, error = sub_field.validate(first_value, {}, loc='first_value')
            if error:
                raise ValidationError([error], cls)
        # This creates a new generator that returns the first value and then
        # the rest of the values from the (already started) iterable
        return itertools.chain([first_value], iterable)


def infinite_ints():
    i = 0
    while True:
        yield i
        i += 1


m = Model(infinite=infinite_ints())
print(m)
#> infinite=<itertools.chain object at 0x7f758f5032e0>


def infinite_strs():
    while True:
        yield from 'allthesingleladies'


try:
    Model(infinite=infinite_strs())
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    infinite -> first_value
      value is not a valid integer (type=type_error.integer)
    """

(This script is complete, it should run "as is")

Unions

The Union type allows a model attribute to accept different types, e.g.:

Info

You may get unexpected coercion with Union; see below.
Note that you can also make the check slower but stricter by using Smart Union

from uuid import UUID
from typing import Union
from pydantic import BaseModel


class User(BaseModel):
    id: Union[int, str, UUID]
    name: str


user_01 = User(id=123, name='John Doe')
print(user_01)
#> id=123 name='John Doe'
print(user_01.id)
#> 123
user_02 = User(id='1234', name='John Doe')
print(user_02)
#> id=1234 name='John Doe'
print(user_02.id)
#> 1234
user_03_uuid = UUID('cf57432e-809e-4353-adbd-9d5c0d733868')
user_03 = User(id=user_03_uuid, name='John Doe')
print(user_03)
#> id=275603287559914445491632874575877060712 name='John Doe'
print(user_03.id)
#> 275603287559914445491632874575877060712
print(user_03_uuid.int)
#> 275603287559914445491632874575877060712

The same example using the | union operator (Python 3.10 and above):

from uuid import UUID
from pydantic import BaseModel


class User(BaseModel):
    id: int | str | UUID
    name: str


user_01 = User(id=123, name='John Doe')
print(user_01)
#> id=123 name='John Doe'
print(user_01.id)
#> 123
user_02 = User(id='1234', name='John Doe')
print(user_02)
#> id=1234 name='John Doe'
print(user_02.id)
#> 1234
user_03_uuid = UUID('cf57432e-809e-4353-adbd-9d5c0d733868')
user_03 = User(id=user_03_uuid, name='John Doe')
print(user_03)
#> id=275603287559914445491632874575877060712 name='John Doe'
print(user_03.id)
#> 275603287559914445491632874575877060712
print(user_03_uuid.int)
#> 275603287559914445491632874575877060712

(This script is complete, it should run "as is")

However, as can be seen above, pydantic will attempt to 'match' any of the types defined under Union and will use the first one that matches. In the above example the id of user_03 was defined as a uuid.UUID class (which is defined under the attribute's Union annotation) but as the uuid.UUID can be marshalled into an int it chose to match against the int type and disregarded the other types.

Warning

typing.Union also ignores order when defined, so Union[int, float] == Union[float, int] which can lead to unexpected behaviour when combined with matching based on the Union type order inside other type definitions, such as List and Dict types (because Python treats these definitions as singletons). For example, Dict[str, Union[int, float]] == Dict[str, Union[float, int]] with the order based on the first time it was defined. Please note that this can also be affected by third party libraries and their internal type definitions and the import orders.

As such, it is recommended that, when defining Union annotations, the most specific type is included first and followed by less specific types.

In the above example, the UUID class should precede the int and str classes to preclude the unexpected representation as such:

from uuid import UUID
from typing import Union
from pydantic import BaseModel


class User(BaseModel):
    id: Union[UUID, int, str]
    name: str


user_03_uuid = UUID('cf57432e-809e-4353-adbd-9d5c0d733868')
user_03 = User(id=user_03_uuid, name='John Doe')
print(user_03)
#> id=UUID('cf57432e-809e-4353-adbd-9d5c0d733868') name='John Doe'
print(user_03.id)
#> cf57432e-809e-4353-adbd-9d5c0d733868
print(user_03_uuid.int)
#> 275603287559914445491632874575877060712

Or with the | union operator (Python 3.10 and above):

from uuid import UUID
from pydantic import BaseModel


class User(BaseModel):
    id: UUID | int | str
    name: str


user_03_uuid = UUID('cf57432e-809e-4353-adbd-9d5c0d733868')
user_03 = User(id=user_03_uuid, name='John Doe')
print(user_03)
#> id=UUID('cf57432e-809e-4353-adbd-9d5c0d733868') name='John Doe'
print(user_03.id)
#> cf57432e-809e-4353-adbd-9d5c0d733868
print(user_03_uuid.int)
#> 275603287559914445491632874575877060712

(This script is complete, it should run "as is")
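
Alternatively, as mentioned in the Info box above, the smart_union config option (available since pydantic v1.9) avoids the coercion without re-ordering the union; a minimal sketch:

from uuid import UUID
from typing import Union
from pydantic import BaseModel


class User(BaseModel):
    id: Union[int, str, UUID]
    name: str

    class Config:
        smart_union = True


user = User(id=UUID('cf57432e-809e-4353-adbd-9d5c0d733868'), name='John Doe')
print(user.id)
#> cf57432e-809e-4353-adbd-9d5c0d733868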

Tip

The type Optional[x] is a shorthand for Union[x, None].

Optional[x] can also be used to specify a required field that can take None as a value.

See more details in Required Fields.

Discriminated Unions (a.k.a. Tagged Unions)

When Union is used with multiple submodels, you sometimes know exactly which submodel needs to be checked and validated and want to enforce this. To do that you can set the same field - let's call it my_discriminator - in each of the submodels with a discriminated value, which is one (or many) Literal value(s). For your Union, you can set the discriminator in its value: Field(discriminator='my_discriminator').

Setting a discriminated union has many benefits:

  • validation is faster since it is only attempted against one model
  • only one explicit error is raised in case of failure
  • the generated JSON schema implements the associated OpenAPI specification
from typing import Literal, Union

from pydantic import BaseModel, Field, ValidationError


class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Lizard(BaseModel):
    pet_type: Literal['reptile', 'lizard']
    scales: bool


class Model(BaseModel):
    pet: Union[Cat, Dog, Lizard] = Field(..., discriminator='pet_type')
    n: int


print(Model(pet={'pet_type': 'dog', 'barks': 3.14}, n=1))
#> pet=Dog(pet_type='dog', barks=3.14) n=1
try:
    Model(pet={'pet_type': 'dog'}, n=1)
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Dog -> barks
      field required (type=value_error.missing)
    """

The same example using the | union operator (Python 3.10 and above):

from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class Cat(BaseModel):
    pet_type: Literal['cat']
    meows: int


class Dog(BaseModel):
    pet_type: Literal['dog']
    barks: float


class Lizard(BaseModel):
    pet_type: Literal['reptile', 'lizard']
    scales: bool


class Model(BaseModel):
    pet: Cat | Dog | Lizard = Field(..., discriminator='pet_type')
    n: int


print(Model(pet={'pet_type': 'dog', 'barks': 3.14}, n=1))
#> pet=Dog(pet_type='dog', barks=3.14) n=1
try:
    Model(pet={'pet_type': 'dog'}, n=1)
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Dog -> barks
      field required (type=value_error.missing)
    """

(This script is complete, it should run "as is")

Note

Using the Annotated Fields syntax can be handy to regroup the Union and discriminator information. See below for an example!

Warning

Discriminated unions cannot be used with only a single variant, such as Union[Cat].

Python changes Union[T] into T at interpretation time, so it is not possible for pydantic to distinguish fields of Union[T] from T.

Nested Discriminated Unions

Only one discriminator can be set for a field but sometimes you want to combine multiple discriminators. In this case you can always create "intermediate" models with __root__ and add your discriminator.

from typing import Literal, Union

from typing_extensions import Annotated

from pydantic import BaseModel, Field, ValidationError


class BlackCat(BaseModel):
    pet_type: Literal['cat']
    color: Literal['black']
    black_name: str


class WhiteCat(BaseModel):
    pet_type: Literal['cat']
    color: Literal['white']
    white_name: str


# Can also be written with a custom root type
#
# class Cat(BaseModel):
#   __root__: Annotated[Union[BlackCat, WhiteCat], Field(discriminator='color')]

Cat = Annotated[Union[BlackCat, WhiteCat], Field(discriminator='color')]


class Dog(BaseModel):
    pet_type: Literal['dog']
    name: str


Pet = Annotated[Union[Cat, Dog], Field(discriminator='pet_type')]


class Model(BaseModel):
    pet: Pet
    n: int


m = Model(pet={'pet_type': 'cat', 'color': 'black', 'black_name': 'felix'}, n=1)
print(m)
#> pet=BlackCat(pet_type='cat', color='black', black_name='felix') n=1
try:
    Model(pet={'pet_type': 'cat', 'color': 'red'}, n='1')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Union[BlackCat, WhiteCat]
      No match for discriminator 'color' and value 'red' (allowed values:
    'black', 'white')
    (type=value_error.discriminated_union.invalid_discriminator;
    discriminator_key=color; discriminator_value=red; allowed_values='black',
    'white')
    """
try:
    Model(pet={'pet_type': 'cat', 'color': 'black'}, n='1')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Union[BlackCat, WhiteCat] -> BlackCat -> black_name
      field required (type=value_error.missing)
    """

The same example importing Annotated from typing (Python 3.9 and above):

from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, ValidationError


class BlackCat(BaseModel):
    pet_type: Literal['cat']
    color: Literal['black']
    black_name: str


class WhiteCat(BaseModel):
    pet_type: Literal['cat']
    color: Literal['white']
    white_name: str


# Can also be written with a custom root type
#
# class Cat(BaseModel):
#   __root__: Annotated[Union[BlackCat, WhiteCat], Field(discriminator='color')]

Cat = Annotated[Union[BlackCat, WhiteCat], Field(discriminator='color')]


class Dog(BaseModel):
    pet_type: Literal['dog']
    name: str


Pet = Annotated[Union[Cat, Dog], Field(discriminator='pet_type')]


class Model(BaseModel):
    pet: Pet
    n: int


m = Model(pet={'pet_type': 'cat', 'color': 'black', 'black_name': 'felix'}, n=1)
print(m)
#> pet=BlackCat(pet_type='cat', color='black', black_name='felix') n=1
try:
    Model(pet={'pet_type': 'cat', 'color': 'red'}, n='1')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Union[BlackCat, WhiteCat]
      No match for discriminator 'color' and value 'red' (allowed values:
    'black', 'white')
    (type=value_error.discriminated_union.invalid_discriminator;
    discriminator_key=color; discriminator_value=red; allowed_values='black',
    'white')
    """
try:
    Model(pet={'pet_type': 'cat', 'color': 'black'}, n='1')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet -> Union[BlackCat, WhiteCat] -> BlackCat -> black_name
      field required (type=value_error.missing)
    """

(This script is complete, it should run "as is")

Enums and Choices

pydantic uses Python's standard enum classes to define choices.

from enum import Enum, IntEnum

from pydantic import BaseModel, ValidationError


class FruitEnum(str, Enum):
    pear = 'pear'
    banana = 'banana'


class ToolEnum(IntEnum):
    spanner = 1
    wrench = 2


class CookingModel(BaseModel):
    fruit: FruitEnum = FruitEnum.pear
    tool: ToolEnum = ToolEnum.spanner


print(CookingModel())
#> fruit=<FruitEnum.pear: 'pear'> tool=<ToolEnum.spanner: 1>
print(CookingModel(tool=2, fruit='banana'))
#> fruit=<FruitEnum.banana: 'banana'> tool=<ToolEnum.wrench: 2>
try:
    CookingModel(fruit='other')
except ValidationError as e:
    print(e)
    """
    1 validation error for CookingModel
    fruit
      value is not a valid enumeration member; permitted: 'pear', 'banana'
    (type=type_error.enum; enum_values=[<FruitEnum.pear: 'pear'>,
    <FruitEnum.banana: 'banana'>])
    """

(This script is complete, it should run "as is")

Datetime Types

Pydantic supports the following datetime types:

  • datetime fields can be:

    • datetime, existing datetime object
    • int or float, assumed as Unix time, i.e. seconds (if >= -2e10 and <= 2e10) or milliseconds (if < -2e10 or > 2e10) since 1 January 1970
    • str, following formats work:

      • YYYY-MM-DD[T]HH:MM[:SS[.ffffff]][Z or [±]HH[:]MM]
      • int or float as a string (assumed as Unix time)
  • date fields can be:

    • date, existing date object
    • int or float, see datetime
    • str, following formats work:

      • YYYY-MM-DD
      • int or float, see datetime
  • time fields can be:

    • time, existing time object
    • str, following formats work:

      • HH:MM[:SS[.ffffff]][Z or [±]HH[:]MM]
  • timedelta fields can be:

    • timedelta, existing timedelta object
    • int or float, assumed as seconds
    • str, following formats work:

      • [-][DD ][HH:MM]SS[.ffffff]
      • [±]P[DD]DT[HH]H[MM]M[SS]S (ISO 8601 format for timedelta)
from datetime import date, datetime, time, timedelta
from pydantic import BaseModel


class Model(BaseModel):
    d: date = None
    dt: datetime = None
    t: time = None
    td: timedelta = None


m = Model(
    d=1966280412345.6789,
    dt='2032-04-23T10:20:30.400+02:30',
    t=time(4, 8, 16),
    td='P3DT12H30M5S',
)

print(m.dict())
"""
{
    'd': datetime.date(2032, 4, 22),
    'dt': datetime.datetime(2032, 4, 23, 10, 20, 30, 400000,
tzinfo=datetime.timezone(datetime.timedelta(seconds=9000))),
    't': datetime.time(4, 8, 16),
    'td': datetime.timedelta(days=3, seconds=45005),
}
"""

(This script is complete, it should run "as is")

Booleans

Warning

The logic for parsing bool fields has changed as of version v1.0.

Prior to v1.0, bool parsing never failed, leading to some unexpected results. The new logic is described below.

A standard bool field will raise a ValidationError if the value is not one of the following:

  • A valid boolean (i.e. True or False),
  • The integers 0 or 1,
  • a str which when converted to lower case is one of '0', 'off', 'f', 'false', 'n', 'no', '1', 'on', 't', 'true', 'y', 'yes'
  • a bytes which is valid (per the previous rule) when decoded to str

Note

If you want stricter boolean logic (e.g. a field which only permits True and False) you can use StrictBool.

Here is a script demonstrating some of these behaviors:

from pydantic import BaseModel, ValidationError


class BooleanModel(BaseModel):
    bool_value: bool


print(BooleanModel(bool_value=False))
#> bool_value=False
print(BooleanModel(bool_value='False'))
#> bool_value=False
try:
    BooleanModel(bool_value=[])
except ValidationError as e:
    print(str(e))
    """
    1 validation error for BooleanModel
    bool_value
      value could not be parsed to a boolean (type=type_error.bool)
    """

(This script is complete, it should run "as is")
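
As mentioned in the note above, StrictBool only permits actual bool instances; a minimal sketch (the model name is illustrative):

from pydantic import BaseModel, StrictBool, ValidationError


class StrictBooleanModel(BaseModel):
    bool_value: StrictBool


print(StrictBooleanModel(bool_value=True))
#> bool_value=True
try:
    StrictBooleanModel(bool_value='False')
except ValidationError as e:
    # the string 'False' is rejected because it is not a bool instance
    print(str(e))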

Callable

Fields can also be of type Callable:

from typing import Callable
from pydantic import BaseModel


class Foo(BaseModel):
    callback: Callable[[int], int]


m = Foo(callback=lambda x: x)
print(m)
#> callback=<function <lambda> at 0x7f758f5eec20>

The same example using collections.abc.Callable (Python 3.9 and above):

from collections.abc import Callable
from pydantic import BaseModel


class Foo(BaseModel):
    callback: Callable[[int], int]


m = Foo(callback=lambda x: x)
print(m)
#> callback=<function <lambda> at 0x7f758f5eedd0>

(This script is complete, it should run "as is")

Warning

Callable fields only perform a simple check that the argument is callable; no validation of arguments, their types, or the return type is performed.

Type

pydantic supports the use of Type[T] to specify that a field may only accept classes (not instances) that are subclasses of T.

from typing import Type

from pydantic import BaseModel
from pydantic import ValidationError


class Foo:
    pass


class Bar(Foo):
    pass


class Other:
    pass


class SimpleModel(BaseModel):
    just_subclasses: Type[Foo]


SimpleModel(just_subclasses=Foo)
SimpleModel(just_subclasses=Bar)
try:
    SimpleModel(just_subclasses=Other)
except ValidationError as e:
    print(e)
    """
    1 validation error for SimpleModel
    just_subclasses
      subclass of Foo expected (type=type_error.subclass; expected_class=Foo)
    """

The same example using the builtin type (Python 3.9 and above):

from pydantic import BaseModel
from pydantic import ValidationError


class Foo:
    pass


class Bar(Foo):
    pass


class Other:
    pass


class SimpleModel(BaseModel):
    just_subclasses: type[Foo]


SimpleModel(just_subclasses=Foo)
SimpleModel(just_subclasses=Bar)
try:
    SimpleModel(just_subclasses=Other)
except ValidationError as e:
    print(e)
    """
    1 validation error for SimpleModel
    just_subclasses
      subclass of Foo expected (type=type_error.subclass; expected_class=Foo)
    """

(This script is complete, it should run "as is")

You may also use Type to specify that any class is allowed.

from typing import Type

from pydantic import BaseModel, ValidationError


class Foo:
    pass


class LenientSimpleModel(BaseModel):
    any_class_goes: Type


LenientSimpleModel(any_class_goes=int)
LenientSimpleModel(any_class_goes=Foo)
try:
    LenientSimpleModel(any_class_goes=Foo())
except ValidationError as e:
    print(e)

    """
    1 validation error for LenientSimpleModel
    any_class_goes
      a class is expected (type=type_error.class)
    """

(This script is complete, it should run "as is")

TypeVar

TypeVar is supported either unconstrained, constrained or with a bound.

from typing import TypeVar
from pydantic import BaseModel

Foobar = TypeVar('Foobar')
BoundFloat = TypeVar('BoundFloat', bound=float)
IntStr = TypeVar('IntStr', int, str)


class Model(BaseModel):
    a: Foobar  # equivalent of ": Any"
    b: BoundFloat  # equivalent of ": float"
    c: IntStr  # equivalent of ": Union[int, str]"


print(Model(a=[1], b=4.2, c='x'))
#> a=[1] b=4.2 c='x'

# a may be None and is therefore optional
print(Model(b=1, c=1))
#> a=None b=1.0 c=1

(This script is complete, it should run "as is")

Literal Type

Note

This is a new feature of the Python standard library as of Python 3.8; prior to Python 3.8, it requires the typing-extensions package.

pydantic supports the use of typing.Literal (or typing_extensions.Literal prior to Python 3.8) as a lightweight way to specify that a field may accept only specific literal values:

from typing import Literal

from pydantic import BaseModel, ValidationError


class Pie(BaseModel):
    flavor: Literal['apple', 'pumpkin']


Pie(flavor='apple')
Pie(flavor='pumpkin')
try:
    Pie(flavor='cherry')
except ValidationError as e:
    print(str(e))
    """
    1 validation error for Pie
    flavor
      unexpected value; permitted: 'apple', 'pumpkin'
    (type=value_error.const; given=cherry; permitted=('apple', 'pumpkin'))
    """

(This script is complete, it should run "as is")

One benefit of this field type is that it can be used to check for equality with one or more specific values without needing to declare custom validators:

from typing import ClassVar, List, Literal, Union

from pydantic import BaseModel, ValidationError


class Cake(BaseModel):
    kind: Literal['cake']
    required_utensils: ClassVar[List[str]] = ['fork', 'knife']


class IceCream(BaseModel):
    kind: Literal['icecream']
    required_utensils: ClassVar[List[str]] = ['spoon']


class Meal(BaseModel):
    dessert: Union[Cake, IceCream]


print(type(Meal(dessert={'kind': 'cake'}).dessert).__name__)
#> Cake
print(type(Meal(dessert={'kind': 'icecream'}).dessert).__name__)
#> IceCream
try:
    Meal(dessert={'kind': 'pie'})
except ValidationError as e:
    print(str(e))
    """
    2 validation errors for Meal
    dessert -> kind
      unexpected value; permitted: 'cake' (type=value_error.const; given=pie;
    permitted=('cake',))
    dessert -> kind
      unexpected value; permitted: 'icecream' (type=value_error.const;
    given=pie; permitted=('icecream',))
    """

The same example using builtin generics (Python 3.9 and above):

from typing import ClassVar, Literal, Union

from pydantic import BaseModel, ValidationError


class Cake(BaseModel):
    kind: Literal['cake']
    required_utensils: ClassVar[list[str]] = ['fork', 'knife']


class IceCream(BaseModel):
    kind: Literal['icecream']
    required_utensils: ClassVar[list[str]] = ['spoon']


class Meal(BaseModel):
    dessert: Union[Cake, IceCream]


print(type(Meal(dessert={'kind': 'cake'}).dessert).__name__)
#> Cake
print(type(Meal(dessert={'kind': 'icecream'}).dessert).__name__)
#> IceCream
try:
    Meal(dessert={'kind': 'pie'})
except ValidationError as e:
    print(str(e))
    """
    2 validation errors for Meal
    dessert -> kind
      unexpected value; permitted: 'cake' (type=value_error.const; given=pie;
    permitted=('cake',))
    dessert -> kind
      unexpected value; permitted: 'icecream' (type=value_error.const;
    given=pie; permitted=('icecream',))
    """

And with the | union operator (Python 3.10 and above):

from typing import ClassVar, Literal

from pydantic import BaseModel, ValidationError


class Cake(BaseModel):
    kind: Literal['cake']
    required_utensils: ClassVar[list[str]] = ['fork', 'knife']


class IceCream(BaseModel):
    kind: Literal['icecream']
    required_utensils: ClassVar[list[str]] = ['spoon']


class Meal(BaseModel):
    dessert: Cake | IceCream


print(type(Meal(dessert={'kind': 'cake'}).dessert).__name__)
#> Cake
print(type(Meal(dessert={'kind': 'icecream'}).dessert).__name__)
#> IceCream
try:
    Meal(dessert={'kind': 'pie'})
except ValidationError as e:
    print(str(e))
    """
    2 validation errors for Meal
    dessert -> kind
      unexpected value; permitted: 'cake' (type=value_error.const; given=pie;
    permitted=('cake',))
    dessert -> kind
      unexpected value; permitted: 'icecream' (type=value_error.const;
    given=pie; permitted=('icecream',))
    """

(This script is complete, it should run "as is")

With proper ordering in an annotated Union, you can use this to parse types of decreasing specificity:

from typing import Literal, Optional, Union

from pydantic import BaseModel


class Dessert(BaseModel):
    kind: str


class Pie(Dessert):
    kind: Literal['pie']
    flavor: Optional[str]


class ApplePie(Pie):
    flavor: Literal['apple']


class PumpkinPie(Pie):
    flavor: Literal['pumpkin']


class Meal(BaseModel):
    dessert: Union[ApplePie, PumpkinPie, Pie, Dessert]


print(type(Meal(dessert={'kind': 'pie', 'flavor': 'apple'}).dessert).__name__)
#> ApplePie
print(type(Meal(dessert={'kind': 'pie', 'flavor': 'pumpkin'}).dessert).__name__)
#> PumpkinPie
print(type(Meal(dessert={'kind': 'pie'}).dessert).__name__)
#> Pie
print(type(Meal(dessert={'kind': 'cake'}).dessert).__name__)
#> Dessert

The same example using the | union operator (Python 3.10 and above):

from typing import Literal

from pydantic import BaseModel


class Dessert(BaseModel):
    kind: str


class Pie(Dessert):
    kind: Literal['pie']
    flavor: str | None


class ApplePie(Pie):
    flavor: Literal['apple']


class PumpkinPie(Pie):
    flavor: Literal['pumpkin']


class Meal(BaseModel):
    dessert: ApplePie | PumpkinPie | Pie | Dessert


print(type(Meal(dessert={'kind': 'pie', 'flavor': 'apple'}).dessert).__name__)
#> ApplePie
print(type(Meal(dessert={'kind': 'pie', 'flavor': 'pumpkin'}).dessert).__name__)
#> PumpkinPie
print(type(Meal(dessert={'kind': 'pie'}).dessert).__name__)
#> Pie
print(type(Meal(dessert={'kind': 'cake'}).dessert).__name__)
#> Dessert

(This script is complete, it should run "as is")

Annotated Types

NamedTuple

from typing import NamedTuple

from pydantic import BaseModel, ValidationError


class Point(NamedTuple):
    x: int
    y: int


class Model(BaseModel):
    p: Point


print(Model(p=('1', '2')))
#> p=Point(x=1, y=2)

try:
    Model(p=('1.3', '2'))
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    p -> x
      value is not a valid integer (type=type_error.integer)
    """

(This script is complete, it should run "as is")

TypedDict

Note

This is a new feature of the Python standard library as of Python 3.8. Prior to Python 3.8, it requires the typing-extensions package. But required and optional fields are properly differentiated only since Python 3.9. We therefore recommend using typing-extensions with Python 3.8 as well.

from typing_extensions import TypedDict

from pydantic import BaseModel, Extra, ValidationError


# `total=False` means keys are non-required
class UserIdentity(TypedDict, total=False):
    name: str
    surname: str


class User(TypedDict):
    identity: UserIdentity
    age: int


class Model(BaseModel):
    u: User

    class Config:
        extra = Extra.forbid


print(Model(u={'identity': {'name': 'Smith', 'surname': 'John'}, 'age': '37'}))
#> u={'identity': {'name': 'Smith', 'surname': 'John'}, 'age': 37}

print(Model(u={'identity': {'name': None, 'surname': 'John'}, 'age': '37'}))
#> u={'identity': {'name': None, 'surname': 'John'}, 'age': 37}

print(Model(u={'identity': {}, 'age': '37'}))
#> u={'identity': {}, 'age': 37}


try:
    Model(u={'identity': {'name': ['Smith'], 'surname': 'John'}, 'age': '24'})
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    u -> identity -> name
      str type expected (type=type_error.str)
    """

try:
    Model(
        u={
            'identity': {'name': 'Smith', 'surname': 'John'},
            'age': '37',
            'email': 'john.smith@example.com',
        }
    )
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    u -> email
      extra fields not permitted (type=value_error.extra)
    """

(This script is complete, it should run "as is")

Pydantic Types

pydantic also provides a variety of other useful types:

FilePath
like Path, but the path must exist and be a file
DirectoryPath
like Path, but the path must exist and be a directory
PastDate
like date, but the date should be in the past
FutureDate
like date, but the date should be in the future
EmailStr
requires email-validator to be installed; the input string must be a valid email address, and the output is a simple string
NameEmail
requires email-validator to be installed; the input string must be either a valid email address or in the format Fred Bloggs <fred.bloggs@example.com>, and the output is a NameEmail object which has two properties: name and email. For Fred Bloggs <fred.bloggs@example.com> the name would be "Fred Bloggs"; for fred.bloggs@example.com it would be "fred.bloggs".
PyObject
expects a string and loads the Python object importable at that dotted path; e.g. if 'math.cos' was provided, the resulting field value would be the function cos (see the example after this list)
Color
for parsing HTML and CSS colors; see Color Type
Json
a special type wrapper which loads JSON before parsing; see JSON Type
PaymentCardNumber
for parsing and validating payment cards; see payment cards
AnyUrl
any URL; see URLs
AnyHttpUrl
an HTTP URL; see URLs
HttpUrl
a stricter HTTP URL; see URLs
FileUrl
a file path URL; see URLs
PostgresDsn
a postgres DSN style URL; see URLs
CockroachDsn
a cockroachdb DSN style URL; see URLs
AmqpDsn
an AMQP DSN style URL as used by RabbitMQ, StormMQ, ActiveMQ etc.; see URLs
RedisDsn
a redis DSN style URL; see URLs
MongoDsn
a MongoDB DSN style URL; see URLs
KafkaDsn
a kafka DSN style URL; see URLs
stricturl
a type method for arbitrary URL constraints; see URLs
UUID1
requires a valid UUID of type 1; see UUID above
UUID3
requires a valid UUID of type 3; see UUID above
UUID4
requires a valid UUID of type 4; see UUID above
UUID5
requires a valid UUID of type 5; see UUID above
SecretBytes
bytes where the value is kept partially secret; see Secrets
SecretStr
string where the value is kept partially secret; see Secrets
IPvAnyAddress
allows either an IPv4Address or an IPv6Address
IPvAnyInterface
allows either an IPv4Interface or an IPv6Interface
IPvAnyNetwork
allows either an IPv4Network or an IPv6Network
NegativeFloat
allows a float which is negative; uses standard float parsing then checks the value is less than 0; see Constrained Types
NegativeInt
allows an int which is negative; uses standard int parsing then checks the value is less than 0; see Constrained Types
PositiveFloat
allows a float which is positive; uses standard float parsing then checks the value is greater than 0; see Constrained Types
PositiveInt
allows an int which is positive; uses standard int parsing then checks the value is greater than 0; see Constrained Types
conbytes
type method for constraining bytes; see Constrained Types
condecimal
type method for constraining Decimals; see Constrained Types
confloat
type method for constraining floats; see Constrained Types
conint
type method for constraining ints; see Constrained Types
condate
type method for constraining dates; see Constrained Types
conlist
type method for constraining lists; see Constrained Types
conset
type method for constraining sets; see Constrained Types
confrozenset
type method for constraining frozen sets; see Constrained Types
constr
type method for constraining strs; see Constrained Types
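
As an example of one of the types above, a minimal sketch of PyObject (the model name is illustrative):

from pydantic import BaseModel, PyObject


class FunctionHolder(BaseModel):
    callback: PyObject


m = FunctionHolder(callback='math.cos')
print(m.callback)
#> <built-in function cos>
print(m.callback(0))
#> 1.0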

URLs

For URI/URL validation the following types are available:

  • AnyUrl: any scheme allowed, TLD not required, host required
  • AnyHttpUrl: scheme http or https, TLD not required, host required
  • HttpUrl: scheme http or https, TLD required, host required, max length 2083
  • FileUrl: scheme file, host not required
  • PostgresDsn: user info required, TLD not required, host required; as of v1.10, PostgresDsn supports multiple hosts. The following schemes are supported:
    • postgres
    • postgresql
    • postgresql+asyncpg
    • postgresql+pg8000
    • postgresql+psycopg
    • postgresql+psycopg2
    • postgresql+psycopg2cffi
    • postgresql+py-postgresql
    • postgresql+pygresql
  • CockroachDsn: scheme cockroachdb, user info required, TLD not required, host required. Also, its supported DBAPI dialects:
    • cockroachdb+asyncpg
    • cockroachdb+psycopg2
  • AmqpDsn: scheme amqp or amqps, user info not required, TLD not required, host not required
  • RedisDsn: scheme redis or rediss, user info not required, tld not required, host not required (CHANGED: user info) (e.g., rediss://:pass@localhost)
  • MongoDsn: scheme mongodb, user info not required, database name not required, port not required (from v1.6 onwards); user info may be passed without user part (e.g., mongodb://mongodb0.example.com:27017)
  • stricturl: method with the following keyword arguments:
    • strip_whitespace: bool = True
    • min_length: int = 1
    • max_length: int = 2 ** 16
    • tld_required: bool = True
    • host_required: bool = True
    • allowed_schemes: Optional[Set[str]] = None
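
A minimal sketch of stricturl in use (the restriction to the ftp scheme is just an illustrative choice):

from pydantic import BaseModel, stricturl


class Download(BaseModel):
    url: stricturl(allowed_schemes={'ftp'})


print(Download(url='ftp://example.com').url)
#> ftp://example.com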

Warning

In v1.10.0 and v1.10.1 stricturl also took an optional quote_plus argument and URL components were percent encoded in some cases. This feature was removed in v1.10.2; see #4470 for explanation and more details.

The above types (which all inherit from AnyUrl) will attempt to give descriptive errors when invalid URLs are provided:

from pydantic import BaseModel, HttpUrl, ValidationError


class MyModel(BaseModel):
    url: HttpUrl


m = MyModel(url='http://www.example.com')
print(m.url)
#> http://www.example.com

try:
    MyModel(url='ftp://invalid.url')
except ValidationError as e:
    print(e)
    """
    1 validation error for MyModel
    url
      URL scheme not permitted (type=value_error.url.scheme;
    allowed_schemes={'https', 'http'})
    """

try:
    MyModel(url='not a url')
except ValidationError as e:
    print(e)
    """
    1 validation error for MyModel
    url
      invalid or missing URL scheme (type=value_error.url.scheme)
    """

(This script is complete, it should run "as is")

If you require a custom URI/URL type, it can be created in a similar way to the types defined above.

URL Properties

Assuming an input URL of http://samuel:pass@example.com:8000/the/path/?query=here#fragment=is;this=bit, the above types export the following properties:

  • scheme: always set - the url scheme (http above)
  • host: always set - the url host (example.com above)
  • host_type: always set - describes the type of host, either:

    • domain: e.g. example.com,
    • int_domain: international domain, see below, e.g. exampl£e.org,
    • ipv4: an IP V4 address, e.g. 127.0.0.1, or
    • ipv6: an IP V6 address, e.g. 2001:db8:ff00:42
  • user: optional - the username if included (samuel above)
  • password: optional - the password if included (pass above)
  • tld: optional - the top level domain (com above), Note: this will be wrong for any two-level domain, e.g. "co.uk". You'll need to implement your own list of TLDs if you require full TLD validation
  • port: optional - the port (8000 above)
  • path: optional - the path (/the/path/ above)
  • query: optional - the URL query (aka GET arguments or "search string") (query=here above)
  • fragment: optional - the fragment (fragment=is;this=bit above)

If further validation is required, these properties can be used by validators to enforce specific behaviour:

from pydantic import BaseModel, HttpUrl, PostgresDsn, ValidationError, validator


class MyModel(BaseModel):
    url: HttpUrl


m = MyModel(url='http://www.example.com')

# the repr() method for a url will display all properties of the url
print(repr(m.url))
#> HttpUrl('http://www.example.com', )
print(m.url.scheme)
#> http
print(m.url.host)
#> www.example.com
print(m.url.host_type)
#> domain
print(m.url.port)
#> 80


class MyDatabaseModel(BaseModel):
    db: PostgresDsn

    @validator('db')
    def check_db_name(cls, v):
        assert v.path and len(v.path) > 1, 'database must be provided'
        return v


m = MyDatabaseModel(db='postgres://user:pass@localhost:5432/foobar')
print(m.db)
#> postgres://user:pass@localhost:5432/foobar

try:
    MyDatabaseModel(db='postgres://user:pass@localhost:5432')
except ValidationError as e:
    print(e)
    """
    1 validation error for MyDatabaseModel
    db
      database must be provided (type=assertion_error)
    """

(This script is complete, it should run "as is")

International Domains

"International domains" (e.g. a URL where the host or TLD includes non-ascii characters) will be encoded via punycode (see this article for a good description of why this is important):

from pydantic import BaseModel, HttpUrl


class MyModel(BaseModel):
    url: HttpUrl


m1 = MyModel(url='http://puny£code.com')
print(m1.url)
#> http://xn--punycode-eja.com
print(m1.url.host_type)
#> int_domain
m2 = MyModel(url='https://www.аррӏе.com/')
print(m2.url)
#> https://www.xn--80ak6aa92e.com/
print(m2.url.host_type)
#> int_domain
m3 = MyModel(url='https://www.example.珠宝/')
print(m3.url)
#> https://www.example.xn--pbt977c/
print(m3.url.host_type)
#> int_domain

(This script is complete, it should run "as is")

Warning

Underscores in Hostnames

In pydantic underscores are allowed in all parts of a domain except the tld. Technically this might be wrong - in theory the hostname cannot have underscores, but subdomains can.

To explain this; consider the following two cases:

  • exam_ple.co.uk: the hostname is exam_ple, which should not be allowed since it contains an underscore
  • foo_bar.example.com the hostname is example, which should be allowed since the underscore is in the subdomain

Without having an exhaustive list of TLDs, it would be impossible to differentiate between these two. Therefore underscores are allowed, but you can always do further validation in a validator if desired.

Also, Chrome, Firefox, and Safari all currently accept http://exam_ple.com as a URL, so we're in good (or at least big) company.
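
If you do want to reject underscores, a small validator sketch along the lines described above (the rule itself is an assumption, not built-in pydantic behaviour):

from pydantic import BaseModel, HttpUrl, validator


class MyModel(BaseModel):
    url: HttpUrl

    @validator('url')
    def host_must_not_contain_underscores(cls, v):
        # reject underscores anywhere in the host, including subdomains
        if '_' in v.host:
            raise ValueError('underscores not allowed in the host')
        return v


print(MyModel(url='http://example.com').url)
#> http://example.com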

Color Type

You can use the Color data type for storing colors as per CSS3 specification. Colors can be defined via:

  • name (e.g. "Black", "azure")
  • hexadecimal value (e.g. "0x000", "#FFFFFF", "7fffd4")
  • RGB/RGBA tuples (e.g. (255, 255, 255), (255, 255, 255, 0.5))
  • RGB/RGBA strings (e.g. "rgb(255, 255, 255)", "rgba(255, 255, 255, 0.5)")
  • HSL strings (e.g. "hsl(270, 60%, 70%)", "hsl(270, 60%, 70%, .5)")
from pydantic import BaseModel, ValidationError
from pydantic.color import Color

c = Color('ff00ff')
print(c.as_named())
#> magenta
print(c.as_hex())
#> #f0f
c2 = Color('green')
print(c2.as_rgb_tuple())
#> (0, 128, 0)
print(c2.original())
#> green
print(repr(Color('hsl(180, 100%, 50%)')))
#> Color('cyan', rgb=(0, 255, 255))


class Model(BaseModel):
    color: Color


print(Model(color='purple'))
#> color=Color('purple', rgb=(128, 0, 128))
try:
    Model(color='hello')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    color
      value is not a valid color: string not recognised as a valid color
    (type=value_error.color; reason=string not recognised as a valid color)
    """

(This script is complete, it should run "as is")

Color has the following methods:

original
the original string or tuple passed to Color
as_named
returns a named CSS3 color; fails if the alpha channel is set or no such color exists unless fallback=True is supplied, in which case it falls back to as_hex
as_hex
returns a string in the format #fff or #ffffff; will contain 4 (or 8) hex values if the alpha channel is set, e.g. #7f33cc26
as_rgb
returns a string in the format rgb(<red>, <green>, <blue>), or rgba(<red>, <green>, <blue>, <alpha>) if the alpha channel is set
as_rgb_tuple
returns a 3- or 4-tuple in RGB(a) format. The alpha keyword argument can be used to define whether the alpha channel should be included; options: True - always include, False - never include, None (default) - include if set
as_hsl
string in the format hsl(<hue deg>, <saturation %>, <lightness %>) or hsl(<hue deg>, <saturation %>, <lightness %>, <alpha>) if the alpha channel is set
as_hsl_tuple
returns a 3- or 4-tuple in HSL(a) format. The alpha keyword argument can be used to define whether the alpha channel should be included; options: True - always include, False - never include, None (the default) - include if set

The __str__ method for Color returns self.as_named(fallback=True).
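
A small sketch of the alpha keyword behaviour described above, assuming an RGBA tuple input:

from pydantic.color import Color

c = Color((255, 255, 255, 0.5))
print(c.as_rgb_tuple())
#> (255, 255, 255, 0.5)
print(c.as_rgb_tuple(alpha=False))
#> (255, 255, 255)
print(c.as_rgb_tuple(alpha=True))
#> (255, 255, 255, 0.5)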

Note

the as_hsl* refer to hue, saturation, lightness "HSL" as used in html and most of the world, not "HLS" as used in Python's colorsys.

Secret Types

You can use the SecretStr and the SecretBytes data types for storing sensitive information that you do not want to be visible in logging or tracebacks. SecretStr and SecretBytes can be initialized idempotently or by using str or bytes literals respectively. The SecretStr and SecretBytes will be formatted as either '**********' or '' on conversion to json.

from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError


class SimpleModel(BaseModel):
    password: SecretStr
    password_bytes: SecretBytes


sm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')

# Standard access methods will not display the secret
print(sm)
#> password=SecretStr('**********') password_bytes=SecretBytes(b'**********')
print(sm.password)
#> **********
print(sm.dict())
"""
{
    'password': SecretStr('**********'),
    'password_bytes': SecretBytes(b'**********'),
}
"""
print(sm.json())
#> {"password": "**********", "password_bytes": "**********"}

# Use get_secret_value method to see the secret's content.
print(sm.password.get_secret_value())
#> IAmSensitive
print(sm.password_bytes.get_secret_value())
#> b'IAmSensitiveBytes'

try:
    SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])
except ValidationError as e:
    print(e)
    """
    2 validation errors for SimpleModel
    password
      str type expected (type=type_error.str)
    password_bytes
      byte type expected (type=type_error.bytes)
    """


# If you want the secret to be dumped as plain-text using the json method,
# you can use json_encoders in the Config class.
class SimpleModelDumpable(BaseModel):
    password: SecretStr
    password_bytes: SecretBytes

    class Config:
        json_encoders = {
            SecretStr: lambda v: v.get_secret_value() if v else None,
            SecretBytes: lambda v: v.get_secret_value() if v else None,
        }


sm2 = SimpleModelDumpable(
    password='IAmSensitive', password_bytes=b'IAmSensitiveBytes'
)

# Standard access methods will not display the secret
print(sm2)
#> password=SecretStr('**********') password_bytes=SecretBytes(b'**********')
print(sm2.password)
#> **********
print(sm2.dict())
"""
{
    'password': SecretStr('**********'),
    'password_bytes': SecretBytes(b'**********'),
}
"""

# But the json method will
print(sm2.json())
#> {"password": "IAmSensitive", "password_bytes": "IAmSensitiveBytes"}

(This script is complete, it should run "as is")

Json Type

You can use the Json data type to make pydantic first load a raw JSON string. It can also optionally be used to parse the loaded object into another type based on the type Json is parameterised with:

from typing import Any, List

from pydantic import BaseModel, Json, ValidationError


class AnyJsonModel(BaseModel):
    json_obj: Json[Any]


class ConstrainedJsonModel(BaseModel):
    json_obj: Json[List[int]]


print(AnyJsonModel(json_obj='{"b": 1}'))
#> json_obj={'b': 1}
print(ConstrainedJsonModel(json_obj='[1, 2, 3]'))
#> json_obj=[1, 2, 3]
try:
    ConstrainedJsonModel(json_obj=12)
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedJsonModel
    json_obj
      JSON object must be str, bytes or bytearray (type=type_error.json)
    """

try:
    ConstrainedJsonModel(json_obj='[a, b]')
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedJsonModel
    json_obj
      Invalid JSON (type=value_error.json)
    """

try:
    ConstrainedJsonModel(json_obj='["a", "b"]')
except ValidationError as e:
    print(e)
    """
    2 validation errors for ConstrainedJsonModel
    json_obj -> 0
      value is not a valid integer (type=type_error.integer)
    json_obj -> 1
      value is not a valid integer (type=type_error.integer)
    """

The same example using builtin generics (Python 3.9 and above):

from typing import Any

from pydantic import BaseModel, Json, ValidationError


class AnyJsonModel(BaseModel):
    json_obj: Json[Any]


class ConstrainedJsonModel(BaseModel):
    json_obj: Json[list[int]]


print(AnyJsonModel(json_obj='{"b": 1}'))
#> json_obj={'b': 1}
print(ConstrainedJsonModel(json_obj='[1, 2, 3]'))
#> json_obj=[1, 2, 3]
try:
    ConstrainedJsonModel(json_obj=12)
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedJsonModel
    json_obj
      JSON object must be str, bytes or bytearray (type=type_error.json)
    """

try:
    ConstrainedJsonModel(json_obj='[a, b]')
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedJsonModel
    json_obj
      Invalid JSON (type=value_error.json)
    """

try:
    ConstrainedJsonModel(json_obj='["a", "b"]')
except ValidationError as e:
    print(e)
    """
    2 validation errors for ConstrainedJsonModel
    json_obj -> 0
      value is not a valid integer (type=type_error.integer)
    json_obj -> 1
      value is not a valid integer (type=type_error.integer)
    """

(This script is complete, it should run "as is")

Payment Card Numbers

The PaymentCardNumber type validates payment cards (such as a debit or credit card).

from datetime import date

from pydantic import BaseModel
from pydantic.types import PaymentCardBrand, PaymentCardNumber, constr


class Card(BaseModel):
    name: constr(strip_whitespace=True, min_length=1)
    number: PaymentCardNumber
    exp: date

    @property
    def brand(self) -> PaymentCardBrand:
        return self.number.brand

    @property
    def expired(self) -> bool:
        return self.exp < date.today()


card = Card(
    name='Georg Wilhelm Friedrich Hegel',
    number='4000000000000002',
    exp=date(2023, 9, 30),
)

assert card.number.brand == PaymentCardBrand.visa
assert card.number.bin == '400000'
assert card.number.last4 == '0002'
assert card.number.masked == '400000******0002'

(This script is complete, it should run "as is")

PaymentCardBrand can be one of the following based on the BIN:

  • PaymentCardBrand.amex
  • PaymentCardBrand.mastercard
  • PaymentCardBrand.visa
  • PaymentCardBrand.other

The actual validation verifies the card number is:

  • a str of only digits
  • Luhn valid (see the example below)
  • the correct length based on the BIN, if Amex, Mastercard or Visa, and between 12 and 19 digits for all other brands
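
To illustrate the checks above, here is a minimal sketch (the Payment model is purely illustrative) in which a digits-only number of the right length is still rejected because it fails the Luhn check:

from pydantic import BaseModel, ValidationError
from pydantic.types import PaymentCardNumber


class Payment(BaseModel):  # illustrative model
    number: PaymentCardNumber


try:
    # '4000000000000001' differs from the valid Visa test number above only
    # in its last digit, so the Luhn checksum no longer matches
    Payment(number='4000000000000001')
except ValidationError as e:
    print(e)
    # the error reports that the card number is not luhn valid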

Constrained Types

The value of numerous common types can be restricted using con* type functions:

from decimal import Decimal

from pydantic import (
    BaseModel,
    NegativeFloat,
    NegativeInt,
    PositiveFloat,
    PositiveInt,
    NonNegativeFloat,
    NonNegativeInt,
    NonPositiveFloat,
    NonPositiveInt,
    conbytes,
    condecimal,
    confloat,
    conint,
    conlist,
    conset,
    constr,
    Field,
)


class Model(BaseModel):
    upper_bytes: conbytes(to_upper=True)
    lower_bytes: conbytes(to_lower=True)
    short_bytes: conbytes(min_length=2, max_length=10)
    strip_bytes: conbytes(strip_whitespace=True)

    upper_str: constr(to_upper=True)
    lower_str: constr(to_lower=True)
    short_str: constr(min_length=2, max_length=10)
    regex_str: constr(regex=r'^apple (pie|tart|sandwich)$')
    strip_str: constr(strip_whitespace=True)

    big_int: conint(gt=1000, lt=1024)
    mod_int: conint(multiple_of=5)
    pos_int: PositiveInt
    neg_int: NegativeInt
    non_neg_int: NonNegativeInt
    non_pos_int: NonPositiveInt

    big_float: confloat(gt=1000, lt=1024)
    unit_interval: confloat(ge=0, le=1)
    mod_float: confloat(multiple_of=0.5)
    pos_float: PositiveFloat
    neg_float: NegativeFloat
    non_neg_float: NonNegativeFloat
    non_pos_float: NonPositiveFloat

    short_list: conlist(int, min_items=1, max_items=4)
    short_set: conset(int, min_items=1, max_items=4)

    decimal_positive: condecimal(gt=0)
    decimal_negative: condecimal(lt=0)
    decimal_max_digits_and_places: condecimal(max_digits=2, decimal_places=2)
    mod_decimal: condecimal(multiple_of=Decimal('0.25'))

    bigger_int: int = Field(..., gt=10000)

(This script is complete, it should run "as is")

Where Field refers to the field function.

Arguments to conlist

The following arguments are available when using the conlist type function

  • item_type: Type[T]: type of the list items
  • min_items: int = None: minimum number of items in the list
  • max_items: int = None: maximum number of items in the list
  • unique_items: bool = None: enforces list elements to be unique (see the example below)
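
For instance, a minimal sketch of unique_items, which the larger example further up does not exercise (the model name is illustrative; unique_items needs a recent pydantic 1.x release):

from pydantic import BaseModel, ValidationError, conlist


class UniqueShortList(BaseModel):  # illustrative model
    values: conlist(int, min_items=1, max_items=4, unique_items=True)


print(UniqueShortList(values=[1, 2, 3]))
#> values=[1, 2, 3]

try:
    # duplicated items violate unique_items=True
    UniqueShortList(values=[1, 1, 2])
except ValidationError as e:
    print(e)
    # the error reports that the list has duplicated items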

Arguments to conset

The following arguments are available when using the conset type function

  • item_type: Type[T]: type of the set items
  • min_items: int = None: minimum number of items in the set
  • max_items: int = None: maximum number of items in the set

Arguments to confrozenset

The following arguments are available when using the confrozenset type function (see the example below)

  • item_type: Type[T]: type of the frozenset items
  • min_items: int = None: minimum number of items in the frozenset
  • max_items: int = None: maximum number of items in the frozenset
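
A minimal sketch showing conset and confrozenset side by side (confrozenset is not part of the larger example above; the model name is illustrative and confrozenset requires a recent pydantic 1.x release):

from pydantic import BaseModel, ValidationError, confrozenset, conset


class Collections(BaseModel):  # illustrative model
    tags: conset(str, min_items=1, max_items=3)
    frozen_tags: confrozenset(str, min_items=1, max_items=3)


m = Collections(tags=['a', 'b'], frozen_tags=['a', 'b'])
print(sorted(m.tags), sorted(m.frozen_tags))
#> ['a', 'b'] ['a', 'b']

try:
    # tags has more than max_items elements
    Collections(tags=['a', 'b', 'c', 'd'], frozen_tags=['a'])
except ValidationError as e:
    print(e)
    # the error reports that tags has too many items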

Arguments to conint

The following arguments are available when using the conint type function

  • strict: bool = False: controls type coercion
  • gt: int = None: enforces integer to be greater than the set value
  • ge: int = None: enforces integer to be greater than or equal to the set value
  • lt: int = None: enforces integer to be less than the set value
  • le: int = None: enforces integer to be less than or equal to the set value
  • multiple_of: int = None: enforces integer to be a multiple of the set value

Arguments to confloat

The following arguments are available when using the confloat type function

  • strict: bool = False: controls type coercion
  • gt: float = None: enforces float to be greater than the set value
  • ge: float = None: enforces float to be greater than or equal to the set value
  • lt: float = None: enforces float to be less than the set value
  • le: float = None: enforces float to be less than or equal to the set value
  • multiple_of: float = None: enforces float to be a multiple of the set value
  • allow_inf_nan: bool = True: whether to allow infinity (+inf and -inf) and NaN values; defaults to True, set to False for compatibility with JSON, see #3994 for more details; added in V1.10 (see the example below)
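
A minimal sketch of allow_inf_nan (added in V1.10 as noted above; the model name is illustrative):

from pydantic import BaseModel, ValidationError, confloat


class FiniteFloatModel(BaseModel):  # illustrative model
    # reject +inf, -inf and NaN so the value stays JSON compatible
    finite: confloat(allow_inf_nan=False)


print(FiniteFloatModel(finite=1.5))
#> finite=1.5

try:
    FiniteFloatModel(finite=float('nan'))
except ValidationError as e:
    print(e)
    # the error reports that the value is not a valid finite number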

Arguments to condecimal

The following arguments are available when using the condecimal type function

  • gt: Decimal = None: enforces decimal to be greater than the set value
  • ge: Decimal = None: enforces decimal to be greater than or equal to the set value
  • lt: Decimal = None: enforces decimal to be less than the set value
  • le: Decimal = None: enforces decimal to be less than or equal to the set value
  • max_digits: int = None: maximum number of digits within the decimal. It does not include a zero before the decimal point or trailing decimal zeroes
  • decimal_places: int = None: maximum number of decimal places allowed. It does not include trailing decimal zeroes
  • multiple_of: Decimal = None: enforces decimal to be a multiple of the set value

Arguments to constr

The following arguments are available when using the constr type function

  • strip_whitespace: bool = False: removes leading and trailing whitespace
  • to_upper: bool = False: turns all characters to uppercase
  • to_lower: bool = False: turns all characters to lowercase
  • strict: bool = False: controls type coercion
  • min_length: int = None: minimum length of the string
  • max_length: int = None: maximum length of the string
  • curtail_length: int = None: shrinks the string length to the set value when it is longer than the set value (see the example below)
  • regex: str = None: regex to validate the string against
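
For example, curtail_length truncates over-long input instead of rejecting it; a minimal sketch (the model name is illustrative):

from pydantic import BaseModel, constr


class Comment(BaseModel):  # illustrative model
    # values longer than 10 characters are silently cut down to 10
    text: constr(curtail_length=10)


print(Comment(text='hello world, this is rather long'))
#> text='hello worl'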

Arguments to conbytes

The following arguments are available when using the conbytes type function

  • strip_whitespace: bool = False: removes leading and trailing whitespace
  • to_upper: bool = False: turns all characters to uppercase
  • to_lower: bool = False: turns all characters to lowercase
  • min_length: int = None: minimum length of the byte string
  • max_length: int = None: maximum length of the byte string
  • strict: bool = False: controls type coercion

Arguments to condate

The following arguments are available when using the condate type function (see the example below)

  • gt: date = None: enforces date to be greater than the set value
  • ge: date = None: enforces date to be greater than or equal to the set value
  • lt: date = None: enforces date to be less than the set value
  • le: date = None: enforces date to be less than or equal to the set value
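
A minimal sketch of condate (the model and field names are illustrative):

from datetime import date

from pydantic import BaseModel, ValidationError, condate


class Event(BaseModel):  # illustrative model
    # only dates from 2000-01-01 onwards are accepted
    when: condate(ge=date(2000, 1, 1))


print(Event(when=date(2023, 9, 30)))
#> when=datetime.date(2023, 9, 30)

try:
    Event(when=date(1999, 12, 31))
except ValidationError as e:
    print(e)
    # the error reports that the date is before the allowed minimum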

Strict Types

You can use the StrictStr, StrictBytes, StrictInt, StrictFloat, and StrictBool types to prevent coercion from compatible types. These types will only pass validation when the validated value is of the respective type or is a subtype of that type. This behavior is also exposed via the strict field of the ConstrainedStr, ConstrainedBytes, ConstrainedFloat and ConstrainedInt classes and can be combined with a multitude of complex validation rules.

The following caveats apply:

  • StrictBytes (and the strict option of ConstrainedBytes) will accept both bytes and bytearray types.
  • StrictInt (and the strict option of ConstrainedInt) will not accept bool types, even though bool is a subclass of int in Python. Other subclasses will work.
  • StrictFloat (and the strict option of ConstrainedFloat) will not accept int.

from pydantic import (
    BaseModel,
    StrictBytes,
    StrictBool,
    StrictInt,
    ValidationError,
    confloat,
)


class StrictBytesModel(BaseModel):
    strict_bytes: StrictBytes


try:
    StrictBytesModel(strict_bytes='hello world')
except ValidationError as e:
    print(e)
    """
    1 validation error for StrictBytesModel
    strict_bytes
      byte type expected (type=type_error.bytes)
    """


class StrictIntModel(BaseModel):
    strict_int: StrictInt


try:
    StrictIntModel(strict_int=3.14159)
except ValidationError as e:
    print(e)
    """
    1 validation error for StrictIntModel
    strict_int
      value is not a valid integer (type=type_error.integer)
    """


class ConstrainedFloatModel(BaseModel):
    constrained_float: confloat(strict=True, ge=0.0)


try:
    ConstrainedFloatModel(constrained_float=3)
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedFloatModel
    constrained_float
      value is not a valid float (type=type_error.float)
    """

try:
    ConstrainedFloatModel(constrained_float=-1.23)
except ValidationError as e:
    print(e)
    """
    1 validation error for ConstrainedFloatModel
    constrained_float
      ensure this value is greater than or equal to 0.0
    (type=value_error.number.not_ge; limit_value=0.0)
    """


class StrictBoolModel(BaseModel):
    strict_bool: StrictBool


try:
    StrictBoolModel(strict_bool='False')
except ValidationError as e:
    print(str(e))
    """
    1 validation error for StrictBoolModel
    strict_bool
      value is not a valid boolean (type=value_error.strictbool)
    """

(This script is complete, it should run "as is")
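
The bytes and bool caveats above can be illustrated with a short additional sketch (the model name is illustrative):

from pydantic import BaseModel, StrictBytes, StrictInt, ValidationError


class CaveatModel(BaseModel):  # illustrative model
    data: StrictBytes
    count: StrictInt


# a bytearray passes StrictBytes validation
m = CaveatModel(data=bytearray(b'abc'), count=1)
print(m.count)
#> 1

try:
    # True is a bool, and bools are explicitly rejected by StrictInt
    CaveatModel(data=b'abc', count=True)
except ValidationError as e:
    print(e)
    """
    1 validation error for CaveatModel
    count
      value is not a valid integer (type=type_error.integer)
    """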

ByteSize

You can use the ByteSize data type to convert a string representation of a size (e.g. '3000 KiB') into a raw number of bytes, and to print out human readable versions of the value as well.

Info

Note that 1b will be parsed as "1 byte" and not "1 bit".

from pydantic import BaseModel, ByteSize


class MyModel(BaseModel):
    size: ByteSize


print(MyModel(size=52000).size)
#> 52000
print(MyModel(size='3000 KiB').size)
#> 3072000

m = MyModel(size='50 PB')
print(m.size.human_readable())
#> 44.4PiB
print(m.size.human_readable(decimal=True))
#> 50.0PB

print(m.size.to('TiB'))
#> 45474.73508864641

(This script is complete, it should run "as is")

Custom Data Types

You can also define your own custom data types. There are several ways to achieve this.

Classes with __get_validators__

You can use a custom class with a classmethod __get_validators__. It will be called to get the validators that parse and validate the input data.

Tip

These validators have the same semantics as in Validators; you can declare a parameter config, field, etc.

import re

from pydantic import BaseModel

# https://en.wikipedia.org/wiki/Postcodes_in_the_United_Kingdom#Validation
post_code_regex = re.compile(
    r'(?:'
    r'([A-Z]{1,2}[0-9][A-Z0-9]?|ASCN|STHL|TDCU|BBND|[BFS]IQQ|PCRN|TKCA) ?'
    r'([0-9][A-Z]{2})|'
    r'(BFPO) ?([0-9]{1,4})|'
    r'(KY[0-9]|MSR|VG|AI)[ -]?[0-9]{4}|'
    r'([A-Z]{2}) ?([0-9]{2})|'
    r'(GE) ?(CX)|'
    r'(GIR) ?(0A{2})|'
    r'(SAN) ?(TA1)'
    r')'
)


class PostCode(str):
    """
    Partial UK postcode validation. Note: this is just an example, and is not
    intended for use in production; in particular this does NOT guarantee
    a postcode exists, just that it has a valid format.
    """

    @classmethod
    def __get_validators__(cls):
        # one or more validators may be yielded which will be called, in order,
        # to validate the input; each validator will receive as input the
        # value returned from the previous validator
        yield cls.validate

    @classmethod
    def __modify_schema__(cls, field_schema):
        # __modify_schema__ should mutate the dict it receives in place,
        # the returned value will be ignored
        field_schema.update(
            # simplified regex here for brevity, see the wikipedia link above
            pattern='^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$',
            # some example postcodes
            examples=['SP11 9DG', 'w1j7bu'],
        )

    @classmethod
    def validate(cls, v):
        if not isinstance(v, str):
            raise TypeError('string required')
        m = post_code_regex.fullmatch(v.upper())
        if not m:
            raise ValueError('invalid postcode format')
        # you could also return a string here which would mean model.post_code
        # would be a string, pydantic won't care but you could end up with some
        # confusion since the value's type won't match the type annotation
        # exactly
        return cls(f'{m.group(1)} {m.group(2)}')

    def __repr__(self):
        return f'PostCode({super().__repr__()})'


class Model(BaseModel):
    post_code: PostCode


model = Model(post_code='sw8 5el')
print(model)
#> post_code=PostCode('SW8 5EL')
print(model.post_code)
#> SW8 5EL
print(Model.schema())
"""
{
    'title': 'Model',
    'type': 'object',
    'properties': {
        'post_code': {
            'title': 'Post Code',
            'pattern': '^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$',
            'examples': ['SP11 9DG', 'w1j7bu'],
            'type': 'string',
        },
    },
    'required': ['post_code'],
}
"""

(This script is complete, it should run "as is")

Similar validation could be achieved using constr(regex=...), except the value won't be formatted with a space, the schema would just include the full pattern, and the returned value would be a vanilla string.
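
For comparison, a minimal sketch of the constr(regex=...) approach, reusing the simplified pattern from __modify_schema__ above (the model name is illustrative):

from pydantic import BaseModel, constr


class ModelWithConstr(BaseModel):  # illustrative model
    post_code: constr(regex=r'^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$')


print(ModelWithConstr(post_code='SW8 5EL'))
#> post_code='SW8 5EL'
# unlike PostCode above, the value stays a plain str: lowercase input such as
# 'sw8 5el' is not upper-cased first, so it would fail this case-sensitive regex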

See schema for more details on how the model's schema is generated.

Arbitrary Types Allowed

You can allow arbitrary types using the arbitrary_types_allowed config in the Model Config.

from pydantic import BaseModel, ValidationError


# This is not a pydantic model, it's an arbitrary class
class Pet:
    def __init__(self, name: str):
        self.name = name


class Model(BaseModel):
    pet: Pet
    owner: str

    class Config:
        arbitrary_types_allowed = True


pet = Pet(name='Hedwig')
# A simple check of instance type is used to validate the data
model = Model(owner='Harry', pet=pet)
print(model)
#> pet=<types_arbitrary_allowed.Pet object at 0x7f758f71cc40> owner='Harry'
print(model.pet)
#> <types_arbitrary_allowed.Pet object at 0x7f758f71cc40>
print(model.pet.name)
#> Hedwig
print(type(model.pet))
#> <class 'types_arbitrary_allowed.Pet'>
try:
    # If the value is not an instance of the type, it's invalid
    Model(owner='Harry', pet='Hedwig')
except ValidationError as e:
    print(e)
    """
    1 validation error for Model
    pet
      instance of Pet expected (type=type_error.arbitrary_type;
    expected_arbitrary_type=Pet)
    """
# Nothing in the instance of the arbitrary type is checked
# Here name probably should have been a str, but it's not validated
pet2 = Pet(name=42)
model2 = Model(owner='Harry', pet=pet2)
print(model2)
#> pet=<types_arbitrary_allowed.Pet object at 0x7f758f71f940> owner='Harry'
print(model2.pet)
#> <types_arbitrary_allowed.Pet object at 0x7f758f71f940>
print(model2.pet.name)
#> 42
print(type(model2.pet))
#> <class 'types_arbitrary_allowed.Pet'>

(This script is complete, it should run "as is")

Generic Classes as Types

Warning

This is an advanced technique that you might not need in the beginning. In most cases you will probably be fine with standard pydantic models.

You can use Generic Classes as field types and perform custom validation based on the "type parameters" (or sub-types) with __get_validators__.

If the Generic class that you are using as a sub-type has a classmethod __get_validators__, you don't need to use arbitrary_types_allowed for it to work.

Because you can declare validators that receive the current field, you can extract the sub_fields (from the generic class type parameters) and validate data with them.

from typing import Generic, TypeVar

from pydantic import BaseModel, ValidationError
from pydantic.fields import ModelField

AgedType = TypeVar('AgedType')
QualityType = TypeVar('QualityType')


# This is not a pydantic model, it's an arbitrary generic class
class TastingModel(Generic[AgedType, QualityType]):
    def __init__(self, name: str, aged: AgedType, quality: QualityType):
        self.name = name
        self.aged = aged
        self.quality = quality

    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    # You don't need to add the "ModelField", but it will help your
    # editor give you completion and catch errors
    def validate(cls, v, field: ModelField):
        if not isinstance(v, cls):
            # The value is not even a TastingModel
            raise TypeError('Invalid value')
        if not field.sub_fields:
            # Generic parameters were not provided so we don't try to validate
            # them and just return the value as is
            return v
        aged_f = field.sub_fields[0]
        quality_f = field.sub_fields[1]
        errors = []
        # Here we don't need the validated value, but we want the errors
        valid_value, error = aged_f.validate(v.aged, {}, loc='aged')
        if error:
            errors.append(error)
        # Here we don't need the validated value, but we want the errors
        valid_value, error = quality_f.validate(v.quality, {}, loc='quality')
        if error:
            errors.append(error)
        if errors:
            raise ValidationError(errors, cls)
        # Validation passed without errors, return the same instance received
        return v


class Model(BaseModel):
    # for wine, "aged" is an int with years, "quality" is a float
    wine: TastingModel[int, float]
    # for cheese, "aged" is a bool, "quality" is a str
    cheese: TastingModel[bool, str]
    # for thing, "aged" is Any, "quality" is Any
    thing: TastingModel


model = Model(
    # This wine was aged for 20 years and has a quality of 85.6
    wine=TastingModel(name='Cabernet Sauvignon', aged=20, quality=85.6),
    # This cheese is aged (is mature) and has "Good" quality
    cheese=TastingModel(name='Gouda', aged=True, quality='Good'),
    # This Python thing has aged "Not much" and has a quality "Awesome"
    thing=TastingModel(name='Python', aged='Not much', quality='Awesome'),
)
print(model)
"""
wine=<types_generics.TastingModel object at 0x7f758f500610>
cheese=<types_generics.TastingModel object at 0x7f758f5005e0>
thing=<types_generics.TastingModel object at 0x7f758f500640>
"""
print(model.wine.aged)
#> 20
print(model.wine.quality)
#> 85.6
print(model.cheese.aged)
#> True
print(model.cheese.quality)
#> Good
print(model.thing.aged)
#> Not much
try:
    # If the values of the sub-types are invalid, we get an error
    Model(
        # For wine, aged should be an int with the years, and quality a float
        wine=TastingModel(name='Merlot', aged=True, quality='Kinda good'),
        # For cheese, aged should be a bool, and quality a str
        cheese=TastingModel(name='Gouda', aged='yeah', quality=5),
        # For thing, no type parameters are declared, and we skipped validation
        # in those cases in the TastingModel.validate() function
        thing=TastingModel(name='Python', aged='Not much', quality='Awesome'),
    )
except ValidationError as e:
    print(e)
    """
    2 validation errors for Model
    wine -> quality
      value is not a valid float (type=type_error.float)
    cheese -> aged
      value could not be parsed to a boolean (type=type_error.bool)
    """

(This script is complete, it should run "as is")