Exporting models
As well as accessing model attributes directly via their names (e.g. model.foobar),
models can be converted and exported in a number of ways:
model.dict(...)
This is the primary way of converting a model to a dictionary. Sub-models will be recursively converted to dictionaries.
Arguments:

- include: fields to include in the returned dictionary; see below
- exclude: fields to exclude from the returned dictionary; see below
- by_alias: whether field aliases should be used as keys in the returned dictionary; default False
- exclude_unset: whether fields which were not explicitly set when creating the model should be excluded from the returned dictionary; default False.
  Prior to v1.0, exclude_unset was known as skip_defaults; use of skip_defaults is now deprecated
- exclude_defaults: whether fields which are equal to their default values (whether set or otherwise) should be excluded from the returned dictionary; default False
- exclude_none: whether fields which are equal to None should be excluded from the returned dictionary; default False
Example:
from pydantic import BaseModel


class BarModel(BaseModel):
    whatever: int


class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel


m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

# returns a dictionary:
print(m.dict())
"""
{
    'banana': 3.14,
    'foo': 'hello',
    'bar': {'whatever': 123},
}
"""
print(m.dict(include={'foo', 'bar'}))
#> {'foo': 'hello', 'bar': {'whatever': 123}}
print(m.dict(exclude={'foo', 'bar'}))
#> {'banana': 3.14}
(This script is complete, it should run "as is")
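The example above only exercises include and exclude; as a minimal sketch of the remaining flags, assuming a small illustrative model (Spec below is not part of the example above):

from typing import Optional

from pydantic import BaseModel, Field


class Spec(BaseModel):
    # illustrative model: exported under its alias 'Size' when by_alias=True
    size: int = Field(10, alias='Size')
    note: Optional[str] = None


s = Spec()  # nothing was set explicitly, all fields keep their defaults

print(s.dict(by_alias=True))
#> {'Size': 10, 'note': None}
print(s.dict(exclude_unset=True))
#> {}
print(s.dict(exclude_defaults=True))
#> {}
print(s.dict(exclude_none=True))
#> {'size': 10}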
dict(model) and iteration
pydantic models can also be converted to dictionaries using dict(model), and you can also
iterate over a model's fields using for field_name, value in model:. With this approach the raw field
values are returned, so sub-models will not be converted to dictionaries.
Example:
from pydantic import BaseModel


class BarModel(BaseModel):
    whatever: int


class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel


m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(dict(m))
"""
{
    'banana': 3.14,
    'foo': 'hello',
    'bar': BarModel(
        whatever=123,
    ),
}
"""
for name, value in m:
    print(f'{name}: {value}')
    #> banana: 3.14
    #> foo: hello
    #> bar: whatever=123
(This script is complete, it should run "as is")
model.copy(...)
copy()
allows models to be duplicated, which is particularly useful for immutable models.
Arguments:

- include: fields to include in the new model; see below
- exclude: fields to exclude from the new model; see below
- update: a dictionary of values to change when creating the copied model
- deep: whether to make a deep copy of the new model; default False
Example:
from pydantic import BaseModel


class BarModel(BaseModel):
    whatever: int


class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel


m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(m.copy(include={'foo', 'bar'}))
#> foo='hello' bar=BarModel(whatever=123)
print(m.copy(exclude={'foo', 'bar'}))
#> banana=3.14
print(m.copy(update={'banana': 0}))
#> banana=0 foo='hello' bar=BarModel(whatever=123)
print(id(m.bar), id(m.copy().bar))
#> 140142912945888 140142912945888
# normal copy gives the same object reference for `bar`
print(id(m.bar), id(m.copy(deep=True).bar))
#> 140142912945888 140142912942528
# deep copy gives a new object reference for `bar`
(This script is complete, it should run "as is")
model.json(...)
The .json() method will serialise a model to JSON. (For models with a custom root type,
only the value for the __root__ key is serialised)
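For instance, a minimal sketch of the custom root type behaviour mentioned above (the Pets model is illustrative):

from typing import List

from pydantic import BaseModel


class Pets(BaseModel):
    __root__: List[str]


# only the value of __root__ is serialised, not a {"__root__": ...} wrapper
print(Pets(__root__=['dog', 'cat']).json())
#> ["dog", "cat"]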
Arguments:

- include: fields to include in the returned dictionary; see below
- exclude: fields to exclude from the returned dictionary; see below
- by_alias: whether field aliases should be used as keys in the returned dictionary; default False
- exclude_unset: whether fields which were not set when creating the model and have their default values should be excluded from the returned dictionary; default False.
  Prior to v1.0, exclude_unset was known as skip_defaults; use of skip_defaults is now deprecated
- exclude_defaults: whether fields which are equal to their default values (whether set or otherwise) should be excluded from the returned dictionary; default False
- exclude_none: whether fields which are equal to None should be excluded from the returned dictionary; default False
- encoder: a custom encoder function passed to the default argument of json.dumps(); defaults to a custom encoder designed to take care of all common types
- **dumps_kwargs: any other keyword arguments are passed to json.dumps(), e.g. indent.
pydantic can serialise many commonly used types to JSON (e.g. datetime, date or UUID) which would
normally fail with a simple json.dumps(foobar).
from datetime import datetime

from pydantic import BaseModel


class BarModel(BaseModel):
    whatever: int


class FooBarModel(BaseModel):
    foo: datetime
    bar: BarModel


m = FooBarModel(foo=datetime(2032, 6, 1, 12, 13, 14), bar={'whatever': 123})
print(m.json())
#> {"foo": "2032-06-01T12:13:14", "bar": {"whatever": 123}}
(This script is complete, it should run "as is")
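The example above does not use encoder or **dumps_kwargs; a minimal sketch of both, assuming an illustrative Event model and a simple lambda encoder:

from datetime import datetime

from pydantic import BaseModel


class Event(BaseModel):
    name: str
    when: datetime


e = Event(name='launch', when=datetime(2032, 6, 1))

# extra keyword arguments such as indent are forwarded to json.dumps()
print(e.json(indent=2))

# a custom encoder is used as the `default` fallback instead of pydantic's own;
# here it is only ever called for the datetime value
print(e.json(encoder=lambda v: v.strftime('%Y-%m-%d')))
#> {"name": "launch", "when": "2032-06-01"}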
json_encoders
Serialisation can be customised on a model using the json_encoders
config property; the keys should be types (or names of types for forward references), and
the values should be functions which serialise that type (see the example below):
from datetime import datetime, timedelta

from pydantic import BaseModel
from pydantic.json import timedelta_isoformat


class WithCustomEncoders(BaseModel):
    dt: datetime
    diff: timedelta

    class Config:
        json_encoders = {
            datetime: lambda v: v.timestamp(),
            timedelta: timedelta_isoformat,
        }


m = WithCustomEncoders(dt=datetime(2032, 6, 1), diff=timedelta(hours=100))
print(m.json())
#> {"dt": 1969660800.0, "diff": "P4DT4H0M0.000000S"}
(This script is complete, it should run "as is")
By default, timedelta
is encoded as a simple float of total seconds. The timedelta_isoformat
is provided
as an optional alternative which implements ISO 8601 time diff encoding.
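For comparison, a minimal sketch of the default encoding (no json_encoders configured) for the same 100-hour timedelta, using an illustrative Duration model:

from datetime import timedelta

from pydantic import BaseModel


class Duration(BaseModel):
    diff: timedelta


# without json_encoders, a timedelta is serialised as total seconds
print(Duration(diff=timedelta(hours=100)).json())
#> {"diff": 360000.0}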
The json_encoders are also merged during model inheritance, with the child
encoders taking precedence over the parent ones.
from datetime import datetime, timedelta

from pydantic import BaseModel
from pydantic.json import timedelta_isoformat


class BaseClassWithEncoders(BaseModel):
    dt: datetime
    diff: timedelta

    class Config:
        json_encoders = {
            datetime: lambda v: v.timestamp()
        }


class ChildClassWithEncoders(BaseClassWithEncoders):
    class Config:
        json_encoders = {
            timedelta: timedelta_isoformat
        }


m = ChildClassWithEncoders(dt=datetime(2032, 6, 1), diff=timedelta(hours=100))
print(m.json())
#> {"dt": 1969660800.0, "diff": "P4DT4H0M0.000000S"}
(This script is complete, it should run "as is")
Serialising self-reference or other models
By default, models are serialised as dictionaries.
If you want to serialise them differently, you can add models_as_dict=False when calling the json()
method and add the classes of the model in json_encoders.
In the case of forward references, you can use a string with the class name instead of the class itself:
from typing import List, Optional

from pydantic import BaseModel


class Address(BaseModel):
    city: str
    country: str


class User(BaseModel):
    name: str
    address: Address
    friends: Optional[List['User']] = None

    class Config:
        json_encoders = {
            Address: lambda a: f'{a.city} ({a.country})',
            'User': lambda u: f'{u.name} in {u.address.city} '
                              f'({u.address.country[:2].upper()})',
        }


User.update_forward_refs()

wolfgang = User(
    name='Wolfgang',
    address=Address(city='Berlin', country='Deutschland'),
    friends=[
        User(name='Pierre', address=Address(city='Paris', country='France')),
        User(name='John', address=Address(city='London', country='UK')),
    ],
)
print(wolfgang.json(models_as_dict=False))
#> {"name": "Wolfgang", "address": "Berlin (Deutschland)", "friends": ["Pierre
#> in Paris (FR)", "John in London (UK)"]}
(This script is complete, it should run "as is")
Serialising subclasses
Note
New in version v1.5.
Subclasses of common types were not automatically serialised to JSON before v1.5.
Subclasses of common types are automatically encoded like their super-classes:
from datetime import date, timedelta

from pydantic import BaseModel
from pydantic.validators import int_validator


class DayThisYear(date):
    """
    Contrived example of a special type of date that
    takes an int and interprets it as a day in the current year
    """

    @classmethod
    def __get_validators__(cls):
        yield int_validator
        yield cls.validate

    @classmethod
    def validate(cls, v: int):
        return date.today().replace(month=1, day=1) + timedelta(days=v)


class FooModel(BaseModel):
    date: DayThisYear


m = FooModel(date=300)
print(m.json())
#> {"date": "2024-10-27"}
(This script is complete, it should run "as is")
Custom JSON (de)serialisation
To improve the performance of encoding and decoding JSON, alternative JSON implementations
(e.g. ujson) can be used via the json_loads and json_dumps properties of Config.
from datetime import datetime

import ujson

from pydantic import BaseModel


class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = ujson.loads


user = User.parse_raw('{"id": 123,"signup_ts":1234567890,"name":"John Doe"}')
print(user)
#> id=123 signup_ts=datetime.datetime(2009, 2, 13, 23, 31, 30,
#> tzinfo=datetime.timezone.utc) name='John Doe'
(This script is complete, it should run "as is")
ujson
generally cannot be used to dump JSON since it doesn't support encoding of objects like datetimes and does
not accept a default
fallback function argument. To do this, you may use another library like
orjson.
from datetime import datetime

import orjson

from pydantic import BaseModel


def orjson_dumps(v, *, default):
    # orjson.dumps returns bytes, to match standard json.dumps we need to decode
    return orjson.dumps(v, default=default).decode()


class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = orjson.loads
        json_dumps = orjson_dumps


user = User.parse_raw('{"id":123,"signup_ts":1234567890,"name":"John Doe"}')
print(user.json())
#> {"id":123,"signup_ts":"2009-02-13T23:31:30+00:00","name":"John Doe"}
(This script is complete, it should run "as is")
Note that orjson takes care of datetime encoding natively, making it faster than json.dumps but
meaning you cannot always customise the encoding using Config.json_encoders.
pickle.dumps(model)
Using the same plumbing as copy()
, pydantic models support efficient pickling and unpickling.
import pickle

from pydantic import BaseModel


class FooBarModel(BaseModel):
    a: str
    b: int


m = FooBarModel(a='hello', b=123)
print(m)
#> a='hello' b=123
data = pickle.dumps(m)
print(data)
"""
b'\x80\x04\x95\x8e\x00\x00\x00\x00\x00\x00\x00\x8c\x17exporting_models_pickle
\x94\x8c\x0bFooBarModel\x94\x93\x94)\x81\x94}\x94(\x8c\x08__dict__\x94}\x94(\
x8c\x01a\x94\x8c\x05hello\x94\x8c\x01b\x94K{u\x8c\x0e__fields_set__\x94\x8f\x
94(h\x07h\t\x90\x8c\x1c__private_attribute_values__\x94}\x94ub.'
"""
m2 = pickle.loads(data)
print(m2)
#> a='hello' b=123
(This script is complete, it should run "as is")
Advanced include and exclude
The dict, json, and copy methods support include and exclude arguments which can either be
sets or dictionaries. This allows nested selection of which fields to export:
from pydantic import BaseModel, SecretStr


class User(BaseModel):
    id: int
    username: str
    password: SecretStr


class Transaction(BaseModel):
    id: str
    user: User
    value: int


t = Transaction(
    id='1234567890',
    user=User(
        id=42,
        username='JohnDoe',
        password='hashedpassword'
    ),
    value=9876543210,
)

# using a set:
print(t.dict(exclude={'user', 'value'}))
#> {'id': '1234567890'}

# using a dict:
print(t.dict(exclude={'user': {'username', 'password'}, 'value': True}))
#> {'id': '1234567890', 'user': {'id': 42}}

print(t.dict(include={'id': True, 'user': {'id'}}))
#> {'id': '1234567890', 'user': {'id': 42}}
(This script is complete, it should run "as is")
The True
indicates that we want to exclude or include an entire key, just as if we included it in a set.
Of course, the same can be done at any depth level.
Special care must be taken when including or excluding fields from a list or tuple of submodels or dictionaries. In this scenario,
dict
and related methods expect integer keys for element-wise inclusion or exclusion. To exclude a field from every
member of a list or tuple, the dictionary key '__all__'
can be used as follows:
import datetime
from typing import List

from pydantic import BaseModel, SecretStr


class Country(BaseModel):
    name: str
    phone_code: int


class Address(BaseModel):
    post_code: int
    country: Country


class CardDetails(BaseModel):
    number: SecretStr
    expires: datetime.date


class Hobby(BaseModel):
    name: str
    info: str


class User(BaseModel):
    first_name: str
    second_name: str
    address: Address
    card_details: CardDetails
    hobbies: List[Hobby]


user = User(
    first_name='John',
    second_name='Doe',
    address=Address(
        post_code=123456,
        country=Country(
            name='USA',
            phone_code=1
        )
    ),
    card_details=CardDetails(
        number=4212934504460000,
        expires=datetime.date(2020, 5, 1)
    ),
    hobbies=[
        Hobby(name='Programming', info='Writing code and stuff'),
        Hobby(name='Gaming', info='Hell Yeah!!!'),
    ],
)

exclude_keys = {
    'second_name': True,
    'address': {'post_code': True, 'country': {'phone_code'}},
    'card_details': True,
    # You can exclude fields from specific members of a tuple/list by index:
    'hobbies': {-1: {'info'}},
}

include_keys = {
    'first_name': True,
    'address': {'country': {'name'}},
    'hobbies': {0: True, -1: {'name'}},
}

# would be the same as user.dict(exclude=exclude_keys) in this case:
print(user.dict(include=include_keys))
"""
{
    'first_name': 'John',
    'address': {'country': {'name': 'USA'}},
    'hobbies': [
        {
            'name': 'Programming',
            'info': 'Writing code and stuff',
        },
        {'name': 'Gaming'},
    ],
}
"""

# To exclude a field from all members of a nested list or tuple, use "__all__":
print(user.dict(exclude={'hobbies': {'__all__': {'info'}}}))
"""
{
    'first_name': 'John',
    'second_name': 'Doe',
    'address': {
        'post_code': 123456,
        'country': {'name': 'USA', 'phone_code': 1},
    },
    'card_details': {
        'number': SecretStr('**********'),
        'expires': datetime.date(2020, 5, 1),
    },
    'hobbies': [{'name': 'Programming'}, {'name': 'Gaming'}],
}
"""
(This script is complete, it should run "as is")
The same holds for the json
and copy
methods.
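As a minimal sketch of this, reusing the Transaction and User models from the earlier example, the same nested include/exclude shapes can be passed to json() and copy():

from pydantic import BaseModel, SecretStr


class User(BaseModel):
    id: int
    username: str
    password: SecretStr


class Transaction(BaseModel):
    id: str
    user: User
    value: int


t = Transaction(
    id='1234567890',
    user=User(id=42, username='JohnDoe', password='hashedpassword'),
    value=9876543210,
)

# the same nested exclude shape works for json()...
print(t.json(exclude={'user': {'username', 'password'}, 'value': True}))
#> {"id": "1234567890", "user": {"id": 42}}

# ...and for copy()
print(t.copy(include={'id': True, 'user': {'id'}}))
#> id='1234567890' user=User(id=42)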
Model and field level include and exclude
In addition to the explicit arguments exclude and include passed to the dict, json and copy methods,
we can also pass the include/exclude arguments directly to the Field constructor or the
equivalent field entry in the model's Config class:
from pydantic import BaseModel, Field, SecretStr


class User(BaseModel):
    id: int
    username: str
    password: SecretStr = Field(..., exclude=True)


class Transaction(BaseModel):
    id: str
    user: User = Field(..., exclude={'username'})
    value: int

    class Config:
        fields = {'value': {'exclude': True}}


t = Transaction(
    id='1234567890',
    user=User(
        id=42,
        username='JohnDoe',
        password='hashedpassword'
    ),
    value=9876543210,
)

print(t.dict())
#> {'id': '1234567890', 'user': {'id': 42}}
(This script is complete, it should run "as is")
In the case where multiple strategies are used, exclude/include fields are merged according to the following rules:

- First, model config level settings (via the "fields" entry) are merged per field with the field constructor settings (i.e. Field(..., exclude=True)), with the field constructor taking priority.
- The resulting settings are merged per class with the explicit settings on dict, json, copy calls, with the explicit settings taking priority.
Note that while merging settings, exclude
entries are merged by computing the "union" of keys, while include
entries are merged by computing the "intersection" of keys.
The resulting merged exclude settings:
from pydantic import BaseModel, Field, SecretStr


class User(BaseModel):
    id: int
    username: str  # overridden by explicit exclude
    password: SecretStr = Field(exclude=True)


class Transaction(BaseModel):
    id: str
    user: User
    value: int


t = Transaction(
    id='1234567890',
    user=User(
        id=42,
        username='JohnDoe',
        password='hashedpassword'
    ),
    value=9876543210,
)

print(t.dict(exclude={'value': True, 'user': {'username'}}))
#> {'id': '1234567890', 'user': {'id': 42}}
(This script is complete, it should run "as is")
are the same as using merged include settings as follows:
from pydantic import BaseModel, Field, SecretStr


class User(BaseModel):
    id: int = Field(..., include=True)
    username: str = Field(..., include=True)  # overridden by explicit include
    password: SecretStr


class Transaction(BaseModel):
    id: str
    user: User
    value: int


t = Transaction(
    id='1234567890',
    user=User(
        id=42,
        username='JohnDoe',
        password='hashedpassword'
    ),
    value=9876543210,
)

print(t.dict(include={'id': True, 'user': {'id'}}))
#> {'id': '1234567890', 'user': {'id': 42}}
(This script is complete, it should run "as is")