datamodel_code_generator-0.26.4/LICENSE

MIT License

Copyright (c) 2019 Koudai Aono

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
datamodel_code_generator-0.26.4/README.md

# datamodel-code-generator
This code generator creates [pydantic v1 and v2](https://docs.pydantic.dev/) models, [dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html), [typing.TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict)
and [msgspec.Struct](https://github.com/jcrist/msgspec) from an OpenAPI file and other schema sources.
## Help
See [documentation](https://koxudaxi.github.io/datamodel-code-generator) for more details.
## Quick Installation
To install `datamodel-code-generator`:
```bash
$ pip install datamodel-code-generator
```
## Simple Usage
You can generate models from a local file.
```bash
$ datamodel-codegen --input api.yaml --output model.py
```
api.yaml
```yaml
openapi: "3.0.0"
info:
version: 1.0.0
title: Swagger Petstore
license:
name: MIT
servers:
- url: http://petstore.swagger.io/v1
paths:
/pets:
get:
summary: List all pets
operationId: listPets
tags:
- pets
parameters:
- name: limit
in: query
description: How many items to return at one time (max 100)
required: false
schema:
type: integer
format: int32
responses:
'200':
description: A paged array of pets
headers:
x-next:
description: A link to the next page of responses
schema:
type: string
content:
application/json:
schema:
$ref: "#/components/schemas/Pets"
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
post:
summary: Create a pet
operationId: createPets
tags:
- pets
responses:
'201':
description: Null response
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
/pets/{petId}:
get:
summary: Info for a specific pet
operationId: showPetById
tags:
- pets
parameters:
- name: petId
in: path
required: true
description: The id of the pet to retrieve
schema:
type: string
responses:
'200':
description: Expected response to a valid request
content:
application/json:
schema:
$ref: "#/components/schemas/Pets"
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
components:
schemas:
Pet:
required:
- id
- name
properties:
id:
type: integer
format: int64
name:
type: string
tag:
type: string
Pets:
type: array
items:
$ref: "#/components/schemas/Pet"
Error:
required:
- code
- message
properties:
code:
type: integer
format: int32
message:
type: string
apis:
type: array
items:
type: object
properties:
apiKey:
type: string
description: To be used as a dataset parameter value
apiVersionNumber:
type: string
description: To be used as a version parameter value
apiUrl:
type: string
format: uri
description: "The URL describing the dataset's fields"
apiDocumentationUrl:
type: string
format: uri
description: A URL to the API console for each API
```
model.py
```python
# generated by datamodel-codegen:
#   filename: api.yaml
#   timestamp: 2020-06-02T05:28:24+00:00

from __future__ import annotations

from typing import List, Optional

from pydantic import AnyUrl, BaseModel, Field


class Pet(BaseModel):
    id: int
    name: str
    tag: Optional[str] = None


class Pets(BaseModel):
    __root__: List[Pet]


class Error(BaseModel):
    code: int
    message: str


class Api(BaseModel):
    apiKey: Optional[str] = Field(
        None, description='To be used as a dataset parameter value'
    )
    apiVersionNumber: Optional[str] = Field(
        None, description='To be used as a version parameter value'
    )
    apiUrl: Optional[AnyUrl] = Field(
        None, description="The URL describing the dataset's fields"
    )
    apiDocumentationUrl: Optional[AnyUrl] = Field(
        None, description='A URL to the API console for each API'
    )


class Apis(BaseModel):
    __root__: List[Api]
```
## Supported input types
- OpenAPI 3 (YAML/JSON, [OpenAPI Data Type](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#data-types));
- JSON Schema ([JSON Schema Core](http://json-schema.org/draft/2019-09/json-schema-validation.html)/[JSON Schema Validation](http://json-schema.org/draft/2019-09/json-schema-validation.html));
- JSON/YAML/CSV Data (it will be converted to JSON Schema);
- Python dictionary (it will be converted to JSON Schema; see the sketch after this list);
- GraphQL schema ([GraphQL Schemas and Types](https://graphql.org/learn/schema/));
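For raw inputs such as the Python-dictionary case above, the library API can also be called directly. A minimal sketch based on the `generate()` signature shipped in this package; the sample payload and file names are made up:
```python
from pathlib import Path

from datamodel_code_generator import InputFileType, generate

# The dict is converted to JSON Schema internally (via genson) before a
# model is rendered; this payload is purely illustrative.
generate(
    {'id': 1, 'name': 'dolly', 'tags': ['sheep']},
    input_file_type=InputFileType.Dict,
    input_filename='example.py',
    output=Path('model.py'),
)
```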
## Supported output types
- [pydantic](https://docs.pydantic.dev/1.10/).BaseModel;
- [pydantic_v2](https://docs.pydantic.dev/2.0/).BaseModel;
- [dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html);
- [typing.TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict);
- [msgspec.Struct](https://github.com/jcrist/msgspec);
- Custom type from your [jinja2](https://jinja.palletsprojects.com/en/3.1.x/) template;
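For example, to emit one of the alternative output types above, select it with the `--output-model-type` option documented in the command options below:
```bash
$ datamodel-codegen --input api.yaml --output model.py --output-model-type dataclasses.dataclass
```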
## Projects that use datamodel-code-generator
These OSS projects use datamodel-code-generator to generate their models.
See the following linked projects for real-world examples and inspiration.
- [airbytehq/airbyte](https://github.com/airbytehq/airbyte)
  - *[Generate Python, Java/Kotlin, and Typescript protocol models](https://github.com/airbytehq/airbyte-protocol/tree/main/protocol-models/bin)*
- [apache/iceberg](https://github.com/apache/iceberg)
  - *[Generate Python code](https://github.com/apache/iceberg/blob/d2e1094ee0cc6239d43f63ba5114272f59d605d2/open-api/README.md?plain=1#L39)*
    *[`make generate`](https://github.com/apache/iceberg/blob/d2e1094ee0cc6239d43f63ba5114272f59d605d2/open-api/Makefile#L24-L34)*
- [argoproj-labs/hera](https://github.com/argoproj-labs/hera)
  - *[`Makefile`](https://github.com/argoproj-labs/hera/blob/c8cbf0c7a676de57469ca3d6aeacde7a5e84f8b7/Makefile#L53-L62)*
- [awslabs/aws-lambda-powertools-python](https://github.com/awslabs/aws-lambda-powertools-python)
  - *Recommended for [advanced-use-cases](https://awslabs.github.io/aws-lambda-powertools-python/2.6.0/utilities/parser/#advanced-use-cases) in the official documentation*
- [DataDog/integrations-core](https://github.com/DataDog/integrations-core)
  - *[Config models](https://github.com/DataDog/integrations-core/blob/master/docs/developer/meta/config-models.md)*
- [hashintel/hash](https://github.com/hashintel/hash)
  - *[`codegen.sh`](https://github.com/hashintel/hash/blob/9762b1a1937e14f6b387677e4c7fe4a5f3d4a1e1/libs/%40local/hash-graph-client/python/scripts/codegen.sh#L21-L39)*
- [IBM/compliance-trestle](https://github.com/IBM/compliance-trestle)
  - *[Building the models from the OSCAL schemas.](https://github.com/IBM/compliance-trestle/blob/develop/docs/contributing/website.md#building-the-models-from-the-oscal-schemas)*
- [Netflix/consoleme](https://github.com/Netflix/consoleme)
  - *[How do I generate models from the Swagger specification?](https://github.com/Netflix/consoleme/blob/master/docs/gitbook/faq.md#how-do-i-generate-models-from-the-swagger-specification)*
- [Nike-Inc/brickflow](https://github.com/Nike-Inc/brickflow)
  - *[Code generate tools](https://github.com/Nike-Inc/brickflow/blob/e3245bf638588867b831820a6675ada76b2010bf/tools/README.md?plain=1#L8)*
    *[`./tools/gen-bundle.sh`](https://github.com/Nike-Inc/brickflow/blob/e3245bf638588867b831820a6675ada76b2010bf/tools/gen-bundle.sh#L15-L22)*
- [open-metadata/OpenMetadata](https://github.com/open-metadata/OpenMetadata)
  - *[Makefile](https://github.com/open-metadata/OpenMetadata/blob/main/Makefile)*
- [PostHog/posthog](https://github.com/PostHog/posthog)
  - *[Generate models via `npm run`](https://github.com/PostHog/posthog/blob/e1a55b9cb38d01225224bebf8f0c1e28faa22399/package.json#L41)*
- [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer)
  - *[generate-types.sh](https://github.com/SeldonIO/MLServer/blob/master/hack/generate-types.sh)*
## Installation
To install `datamodel-code-generator`:
```bash
$ pip install datamodel-code-generator
```
### `http` extra option
If you want to resolve `$ref` for remote files, install the `http` extra option.
```bash
$ pip install 'datamodel-code-generator[http]'
```
### `graphql` extra option
If you want to generate data models from a GraphQL schema, install the `graphql` extra option.
```bash
$ pip install 'datamodel-code-generator[graphql]'
```
### Docker Image
The Docker image is available on [Docker Hub](https://hub.docker.com/r/koxudaxi/datamodel-code-generator).
```bash
$ docker pull koxudaxi/datamodel-code-generator
```
## Advanced Uses
You can generate models from a URL.
```bash
$ datamodel-codegen --url https://<INPUT FILE URL> --output model.py
```
This method requires the [http extra option](#http-extra-option).
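For remote schemas behind authentication, headers can be passed along; the URL and token below are placeholders:
```bash
$ datamodel-codegen --url https://example.com/api.yaml \
    --http-headers "Authorization: Bearer <token>" --output model.py
```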
## All Command Options
The `datamodel-codegen` command:
```bash
usage:
  datamodel-codegen [options]

Generate Python data models from schema definitions or structured data

Options:
  --additional-imports ADDITIONAL_IMPORTS
                        Custom imports for output (delimited list input). For example
                        "datetime.date,datetime.datetime"
  --custom-formatters CUSTOM_FORMATTERS
                        List of modules with custom formatter (delimited list input).
  --http-headers HTTP_HEADER [HTTP_HEADER ...]
                        Set headers in HTTP requests to the remote host. (example:
                        "Authorization: Basic dXNlcjpwYXNz")
  --http-ignore-tls     Disable verification of the remote host's TLS certificate
  --http-query-parameters HTTP_QUERY_PARAMETERS [HTTP_QUERY_PARAMETERS ...]
                        Set query parameters in HTTP requests to the remote host. (example:
                        "ref=branch")
  --input INPUT         Input file/directory (default: stdin)
  --input-file-type {auto,openapi,jsonschema,json,yaml,dict,csv,graphql}
                        Input file type (default: auto)
  --output OUTPUT       Output file (default: stdout)
  --output-model-type {pydantic.BaseModel,pydantic_v2.BaseModel,dataclasses.dataclass,typing.TypedDict,msgspec.Struct}
                        Output model type (default: pydantic.BaseModel)
  --url URL             Input file URL. `--input` is ignored when `--url` is used

Typing customization:
  --base-class BASE_CLASS
                        Base Class (default: pydantic.BaseModel)
  --enum-field-as-literal {all,one}
                        Parse enum field as literal. all: all enum field type are Literal.
                        one: field type is Literal when an enum has only one possible value
  --field-constraints   Use field constraints and not con* annotations
  --set-default-enum-member
                        Set enum members as default values for enum field
  --strict-types {str,bytes,int,float,bool} [{str,bytes,int,float,bool} ...]
                        Use strict types
  --use-annotated       Use typing.Annotated for Field(). Also, `--field-constraints` option
                        will be enabled.
  --use-generic-container-types
                        Use generic container types for type hinting (typing.Sequence,
                        typing.Mapping). If `--use-standard-collections` option is set, then
                        import from collections.abc instead of typing
  --use-non-positive-negative-number-constrained-types
                        Use the Non{Positive,Negative}{FloatInt} types instead of the
                        corresponding con* constrained types.
  --use-one-literal-as-default
                        Use one literal as default value for one literal field
  --use-standard-collections
                        Use standard collections for type hinting (list, dict)
  --use-subclass-enum   Define Enum class as subclass with field type when enum has type
                        (int, float, bytes, str)
  --use-union-operator  Use | operator for Union type (PEP 604).
  --use-unique-items-as-set
                        define field type as `set` when the field attribute has
                        `uniqueItems`

Field customization:
  --capitalise-enum-members, --capitalize-enum-members
                        Capitalize field names on enum
  --empty-enum-field-name EMPTY_ENUM_FIELD_NAME
                        Set field name when enum value is empty (default: `_`)
  --field-extra-keys FIELD_EXTRA_KEYS [FIELD_EXTRA_KEYS ...]
                        Add extra keys to field parameters
  --field-extra-keys-without-x-prefix FIELD_EXTRA_KEYS_WITHOUT_X_PREFIX [FIELD_EXTRA_KEYS_WITHOUT_X_PREFIX ...]
                        Add extra keys with `x-` prefix to field parameters. The extra keys
                        are stripped of the `x-` prefix.
  --field-include-all-keys
                        Add all keys to field parameters
  --force-optional      Force optional for required fields
  --no-alias            Do not add a field alias. E.g., if --snake-case-field is used along
                        with a base class, which has an alias_generator
  --original-field-name-delimiter ORIGINAL_FIELD_NAME_DELIMITER
                        Set delimiter to convert to snake case. This option only can be used
                        with --snake-case-field (default: `_` )
  --remove-special-field-name-prefix
                        Remove field name prefix if it has a special meaning e.g.
                        underscores
  --snake-case-field    Change camel-case field name to snake-case
  --special-field-name-prefix SPECIAL_FIELD_NAME_PREFIX
                        Set field name prefix when first character can't be used as Python
                        field name (default: `field`)
  --strip-default-none  Strip default None on fields
  --union-mode {smart,left_to_right}
                        Union mode for only pydantic v2 field
  --use-default         Use default value even if a field is required
  --use-default-kwarg   Use `default=` instead of a positional argument for Fields that have
                        default values.
  --use-field-description
                        Use schema description to populate field docstring

Model customization:
  --allow-extra-fields  Allow to pass extra fields, if this flag is not passed, extra fields
                        are forbidden.
  --allow-population-by-field-name
                        Allow population by field name
  --class-name CLASS_NAME
                        Set class name of root model
  --collapse-root-models
                        Models generated with a root-type field will be merged into the
                        models using that root-type model
  --disable-appending-item-suffix
                        Disable appending `Item` suffix to model name in an array
  --disable-timestamp   Disable timestamp on file headers
  --enable-faux-immutability
                        Enable faux immutability
  --enable-version-header
                        Enable package version on file headers
  --keep-model-order    Keep generated models' order
  --keyword-only        Defined models as keyword only (for example
                        dataclass(kw_only=True)).
  --output-datetime-class {datetime,AwareDatetime,NaiveDatetime}
                        Choose Datetime class between AwareDatetime, NaiveDatetime or
                        datetime. Each output model has its default mapping (for example
                        pydantic: datetime, dataclass: str, ...)
  --reuse-model         Reuse models on the field when a module has the model with the same
                        content
  --target-python-version {3.6,3.7,3.8,3.9,3.10,3.11,3.12}
                        target python version (default: 3.8)
  --treat-dot-as-module
                        treat dotted module names as modules
  --use-exact-imports   import exact types instead of modules, for example: "from .foo
                        import Bar" instead of "from . import foo" with "foo.Bar"
  --use-pendulum        use pendulum instead of datetime
  --use-schema-description
                        Use schema description to populate class docstring
  --use-title-as-name   use titles as class names of models

Template customization:
  --aliases ALIASES     Alias mapping file
  --custom-file-header CUSTOM_FILE_HEADER
                        Custom file header
  --custom-file-header-path CUSTOM_FILE_HEADER_PATH
                        Custom file header file path
  --custom-formatters-kwargs CUSTOM_FORMATTERS_KWARGS
                        A file with kwargs for custom formatters.
  --custom-template-dir CUSTOM_TEMPLATE_DIR
                        Custom template directory
  --encoding ENCODING   The encoding of input and output (default: utf-8)
  --extra-template-data EXTRA_TEMPLATE_DATA
                        Extra template data
  --use-double-quotes   Model generated with double quotes. Single quotes or your black
                        config skip_string_normalization value will be used without this
                        option.
  --wrap-string-literal
                        Wrap string literal by using black `experimental-string-processing`
                        option (require black 20.8b0 or later)

OpenAPI-only options:
  --openapi-scopes {schemas,paths,tags,parameters} [{schemas,paths,tags,parameters} ...]
                        Scopes of OpenAPI model generation (default: schemas)
  --strict-nullable     Treat default field as a non-nullable field (Only OpenAPI)
  --use-operation-id-as-name
                        use operation id of OpenAPI as class names of models
  --validation          Deprecated: Enable validation (Only OpenAPI). this option is
                        deprecated. it will be removed in future releases

General options:
  --debug               show debug message (require "debug". `$ pip install 'datamodel-code-
                        generator[debug]'`)
  --disable-warnings    disable warnings
  --no-color            disable colorized output
  --version             show version
  -h, --help            show this help message and exit
```
## Related projects
### fastapi-code-generator
This code generator creates a [FastAPI](https://github.com/tiangolo/fastapi) app from an OpenAPI file.
[https://github.com/koxudaxi/fastapi-code-generator](https://github.com/koxudaxi/fastapi-code-generator)
### pydantic-pycharm-plugin
[A JetBrains PyCharm plugin](https://plugins.jetbrains.com/plugin/12861-pydantic) for [`pydantic`](https://github.com/samuelcolvin/pydantic).
[https://github.com/koxudaxi/pydantic-pycharm-plugin](https://github.com/koxudaxi/pydantic-pycharm-plugin)
## PyPI
[https://pypi.org/project/datamodel-code-generator](https://pypi.org/project/datamodel-code-generator)
## Contributing
See `docs/development-contributing.md` for how to get started!
## License
datamodel-code-generator is released under the MIT License. http://www.opensource.org/licenses/mit-license
datamodel_code_generator-0.26.4/datamodel_code_generator/__init__.py

from __future__ import annotations

import contextlib
import os
import sys
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import (
    IO,
    TYPE_CHECKING,
    Any,
    Callable,
    DefaultDict,
    Dict,
    Iterator,
    List,
    Mapping,
    Optional,
    Sequence,
    Set,
    TextIO,
    Tuple,
    Type,
    TypeVar,
    Union,
)
from urllib.parse import ParseResult

import yaml

import datamodel_code_generator.pydantic_patch  # noqa: F401
from datamodel_code_generator.format import DatetimeClassType, PythonVersion
from datamodel_code_generator.model.pydantic_v2 import UnionMode
from datamodel_code_generator.parser import DefaultPutDict, LiteralType
from datamodel_code_generator.parser.base import Parser
from datamodel_code_generator.types import StrictTypes
from datamodel_code_generator.util import SafeLoader  # type: ignore

T = TypeVar('T')

try:
    import pysnooper

    pysnooper.tracer.DISABLED = True
except ImportError:  # pragma: no cover
    pysnooper = None

DEFAULT_BASE_CLASS: str = 'pydantic.BaseModel'


def load_yaml(stream: Union[str, TextIO]) -> Any:
    return yaml.load(stream, Loader=SafeLoader)


def load_yaml_from_path(path: Path, encoding: str) -> Any:
    with path.open(encoding=encoding) as f:
        return load_yaml(f)


if TYPE_CHECKING:

    def get_version() -> str: ...

else:

    def get_version() -> str:
        package = 'datamodel-code-generator'

        from importlib.metadata import version

        return version(package)


def enable_debug_message() -> None:  # pragma: no cover
    if not pysnooper:
        raise Exception(
            "Please run `$pip install 'datamodel-code-generator[debug]'` to use debug option"
        )

    pysnooper.tracer.DISABLED = False


def snooper_to_methods(  # type: ignore
    output=None,
    watch=(),
    watch_explode=(),
    depth=1,
    prefix='',
    overwrite=False,
    thread_info=False,
    custom_repr=(),
    max_variable_length=100,
) -> Callable[..., Any]:
    def inner(cls: Type[T]) -> Type[T]:
        if not pysnooper:
            return cls
        import inspect

        methods = inspect.getmembers(cls, predicate=inspect.isfunction)
        for name, method in methods:
            snooper_method = pysnooper.snoop(
                output,
                watch,
                watch_explode,
                depth,
                prefix,
                overwrite,
                thread_info,
                custom_repr,
                max_variable_length,
            )(method)
            setattr(cls, name, snooper_method)
        return cls

    return inner


@contextlib.contextmanager
def chdir(path: Optional[Path]) -> Iterator[None]:
    """Changes working directory and returns to previous on exit."""
    if path is None:
        yield
    else:
        prev_cwd = Path.cwd()
        try:
            os.chdir(path if path.is_dir() else path.parent)
            yield
        finally:
            os.chdir(prev_cwd)


def is_openapi(text: str) -> bool:
    return 'openapi' in load_yaml(text)


JSON_SCHEMA_URLS: Tuple[str, ...] = (
    'http://json-schema.org/',
    'https://json-schema.org/',
)


def is_schema(text: str) -> bool:
    data = load_yaml(text)
    if not isinstance(data, dict):
        return False
    schema = data.get('$schema')
    if isinstance(schema, str) and any(
        schema.startswith(u) for u in JSON_SCHEMA_URLS
    ):  # pragma: no cover
        return True
    if isinstance(data.get('type'), str):
        return True
    if any(
        isinstance(data.get(o), list)
        for o in (
            'allOf',
            'anyOf',
            'oneOf',
        )
    ):
        return True
    if isinstance(data.get('properties'), dict):
        return True
    return False
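
# Illustrative behaviour of the heuristics above (comments added here; not part of
# the original module):
#   is_schema('{"type": "object"}')         -> True   ('type' is a str)
#   is_schema('{"properties": {"a": {}}}')  -> True   ('properties' is a dict)
#   is_schema('[1, 2, 3]')                  -> False  (top level is not a dict)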

class InputFileType(Enum):
    Auto = 'auto'
    OpenAPI = 'openapi'
    JsonSchema = 'jsonschema'
    Json = 'json'
    Yaml = 'yaml'
    Dict = 'dict'
    CSV = 'csv'
    GraphQL = 'graphql'


RAW_DATA_TYPES: List[InputFileType] = [
    InputFileType.Json,
    InputFileType.Yaml,
    InputFileType.Dict,
    InputFileType.CSV,
    InputFileType.GraphQL,
]


class DataModelType(Enum):
    PydanticBaseModel = 'pydantic.BaseModel'
    PydanticV2BaseModel = 'pydantic_v2.BaseModel'
    DataclassesDataclass = 'dataclasses.dataclass'
    TypingTypedDict = 'typing.TypedDict'
    MsgspecStruct = 'msgspec.Struct'


class OpenAPIScope(Enum):
    Schemas = 'schemas'
    Paths = 'paths'
    Tags = 'tags'
    Parameters = 'parameters'


class GraphQLScope(Enum):
    Schema = 'schema'


class Error(Exception):
    def __init__(self, message: str) -> None:
        self.message: str = message

    def __str__(self) -> str:
        return self.message


class InvalidClassNameError(Error):
    def __init__(self, class_name: str) -> None:
        self.class_name = class_name
        message = f'title={repr(class_name)} is invalid class name.'
        super().__init__(message=message)


def get_first_file(path: Path) -> Path:  # pragma: no cover
    if path.is_file():
        return path
    elif path.is_dir():
        for child in path.rglob('*'):
            if child.is_file():
                return child
    raise Error('File not found')


def generate(
    input_: Union[Path, str, ParseResult, Mapping[str, Any]],
    *,
    input_filename: Optional[str] = None,
    input_file_type: InputFileType = InputFileType.Auto,
    output: Optional[Path] = None,
    output_model_type: DataModelType = DataModelType.PydanticBaseModel,
    target_python_version: PythonVersion = PythonVersion.PY_38,
    base_class: str = '',
    additional_imports: Optional[List[str]] = None,
    custom_template_dir: Optional[Path] = None,
    extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
    validation: bool = False,
    field_constraints: bool = False,
    snake_case_field: bool = False,
    strip_default_none: bool = False,
    aliases: Optional[Mapping[str, str]] = None,
    disable_timestamp: bool = False,
    enable_version_header: bool = False,
    allow_population_by_field_name: bool = False,
    allow_extra_fields: bool = False,
    apply_default_values_for_required_fields: bool = False,
    force_optional_for_required_fields: bool = False,
    class_name: Optional[str] = None,
    use_standard_collections: bool = False,
    use_schema_description: bool = False,
    use_field_description: bool = False,
    use_default_kwarg: bool = False,
    reuse_model: bool = False,
    encoding: str = 'utf-8',
    enum_field_as_literal: Optional[LiteralType] = None,
    use_one_literal_as_default: bool = False,
    set_default_enum_member: bool = False,
    use_subclass_enum: bool = False,
    strict_nullable: bool = False,
    use_generic_container_types: bool = False,
    enable_faux_immutability: bool = False,
    disable_appending_item_suffix: bool = False,
    strict_types: Optional[Sequence[StrictTypes]] = None,
    empty_enum_field_name: Optional[str] = None,
    custom_class_name_generator: Optional[Callable[[str], str]] = None,
    field_extra_keys: Optional[Set[str]] = None,
    field_include_all_keys: bool = False,
    field_extra_keys_without_x_prefix: Optional[Set[str]] = None,
    openapi_scopes: Optional[List[OpenAPIScope]] = None,
    graphql_scopes: Optional[List[GraphQLScope]] = None,
    wrap_string_literal: Optional[bool] = None,
    use_title_as_name: bool = False,
    use_operation_id_as_name: bool = False,
    use_unique_items_as_set: bool = False,
    http_headers: Optional[Sequence[Tuple[str, str]]] = None,
    http_ignore_tls: bool = False,
    use_annotated: bool = False,
    use_non_positive_negative_number_constrained_types: bool = False,
    original_field_name_delimiter: Optional[str] = None,
    use_double_quotes: bool = False,
    use_union_operator: bool = False,
    collapse_root_models: bool = False,
    special_field_name_prefix: Optional[str] = None,
    remove_special_field_name_prefix: bool = False,
    capitalise_enum_members: bool = False,
    keep_model_order: bool = False,
    custom_file_header: Optional[str] = None,
    custom_file_header_path: Optional[Path] = None,
    custom_formatters: Optional[List[str]] = None,
    custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
    use_pendulum: bool = False,
    http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
    treat_dots_as_module: bool = False,
    use_exact_imports: bool = False,
    union_mode: Optional[UnionMode] = None,
    output_datetime_class: Optional[DatetimeClassType] = None,
    keyword_only: bool = False,
    no_alias: bool = False,
) -> None:
    remote_text_cache: DefaultPutDict[str, str] = DefaultPutDict()
    if isinstance(input_, str):
        input_text: Optional[str] = input_
    elif isinstance(input_, ParseResult):
        from datamodel_code_generator.http import get_body

        input_text = remote_text_cache.get_or_put(
            input_.geturl(),
            default_factory=lambda url: get_body(
                url, http_headers, http_ignore_tls, http_query_parameters
            ),
        )
    else:
        input_text = None

    if isinstance(input_, Path) and not input_.is_absolute():
        input_ = input_.expanduser().resolve()
    if input_file_type == InputFileType.Auto:
        try:
            input_text_ = (
                get_first_file(input_).read_text(encoding=encoding)
                if isinstance(input_, Path)
                else input_text
            )
            assert isinstance(input_text_, str)
            input_file_type = infer_input_type(input_text_)
            print(
                inferred_message.format(input_file_type.value),
                file=sys.stderr,
            )
        except:  # noqa
            raise Error('Invalid file format')

    kwargs: Dict[str, Any] = {}
    if input_file_type == InputFileType.OpenAPI:
        from datamodel_code_generator.parser.openapi import OpenAPIParser

        parser_class: Type[Parser] = OpenAPIParser
        kwargs['openapi_scopes'] = openapi_scopes
    elif input_file_type == InputFileType.GraphQL:
        from datamodel_code_generator.parser.graphql import GraphQLParser

        parser_class: Type[Parser] = GraphQLParser
    else:
        from datamodel_code_generator.parser.jsonschema import JsonSchemaParser

        parser_class = JsonSchemaParser

        if input_file_type in RAW_DATA_TYPES:
            import json

            try:
                if isinstance(input_, Path) and input_.is_dir():  # pragma: no cover
                    raise Error(f'Input must be a file for {input_file_type}')
                obj: Dict[Any, Any]
                if input_file_type == InputFileType.CSV:
                    import csv

                    def get_header_and_first_line(csv_file: IO[str]) -> Dict[str, Any]:
                        csv_reader = csv.DictReader(csv_file)
                        return dict(zip(csv_reader.fieldnames, next(csv_reader)))  # type: ignore

                    if isinstance(input_, Path):
                        with input_.open(encoding=encoding) as f:
                            obj = get_header_and_first_line(f)
                    else:
                        import io

                        obj = get_header_and_first_line(io.StringIO(input_text))
                elif input_file_type == InputFileType.Yaml:
                    obj = load_yaml(
                        input_.read_text(encoding=encoding)  # type: ignore
                        if isinstance(input_, Path)
                        else input_text
                    )
                elif input_file_type == InputFileType.Json:
                    obj = json.loads(
                        input_.read_text(encoding=encoding)  # type: ignore
                        if isinstance(input_, Path)
                        else input_text
                    )
                elif input_file_type == InputFileType.Dict:
                    import ast

                    # Input can be a dict object stored in a python file
                    obj = (
                        ast.literal_eval(
                            input_.read_text(encoding=encoding)  # type: ignore
                        )
                        if isinstance(input_, Path)
                        else input_
                    )
                else:  # pragma: no cover
                    raise Error(f'Unsupported input file type: {input_file_type}')
            except:  # noqa
                raise Error('Invalid file format')

            from genson import SchemaBuilder

            builder = SchemaBuilder()
            builder.add_object(obj)
            input_text = json.dumps(builder.to_schema())

    if isinstance(input_, ParseResult) and input_file_type not in RAW_DATA_TYPES:
        input_text = None

    if union_mode is not None:
        if output_model_type == DataModelType.PydanticV2BaseModel:
            default_field_extras = {'union_mode': union_mode}
        else:  # pragma: no cover
            raise Error('union_mode is only supported for pydantic_v2.BaseModel')
    else:
        default_field_extras = None

    from datamodel_code_generator.model import get_data_model_types

    data_model_types = get_data_model_types(
        output_model_type, target_python_version, output_datetime_class
    )
    parser = parser_class(
        source=input_text or input_,
        data_model_type=data_model_types.data_model,
        data_model_root_type=data_model_types.root_model,
        data_model_field_type=data_model_types.field_model,
        data_type_manager_type=data_model_types.data_type_manager,
        base_class=base_class,
        additional_imports=additional_imports,
        custom_template_dir=custom_template_dir,
        extra_template_data=extra_template_data,
        target_python_version=target_python_version,
        dump_resolve_reference_action=data_model_types.dump_resolve_reference_action,
        validation=validation,
        field_constraints=field_constraints,
        snake_case_field=snake_case_field,
        strip_default_none=strip_default_none,
        aliases=aliases,
        allow_population_by_field_name=allow_population_by_field_name,
        allow_extra_fields=allow_extra_fields,
        apply_default_values_for_required_fields=apply_default_values_for_required_fields,
        force_optional_for_required_fields=force_optional_for_required_fields,
        class_name=class_name,
        use_standard_collections=use_standard_collections,
        base_path=input_.parent
        if isinstance(input_, Path) and input_.is_file()
        else None,
        use_schema_description=use_schema_description,
        use_field_description=use_field_description,
        use_default_kwarg=use_default_kwarg,
        reuse_model=reuse_model,
        enum_field_as_literal=LiteralType.All
        if output_model_type == DataModelType.TypingTypedDict
        else enum_field_as_literal,
        use_one_literal_as_default=use_one_literal_as_default,
        set_default_enum_member=True
        if output_model_type == DataModelType.DataclassesDataclass
        else set_default_enum_member,
        use_subclass_enum=use_subclass_enum,
        strict_nullable=strict_nullable,
        use_generic_container_types=use_generic_container_types,
        enable_faux_immutability=enable_faux_immutability,
        remote_text_cache=remote_text_cache,
        disable_appending_item_suffix=disable_appending_item_suffix,
        strict_types=strict_types,
        empty_enum_field_name=empty_enum_field_name,
        custom_class_name_generator=custom_class_name_generator,
        field_extra_keys=field_extra_keys,
        field_include_all_keys=field_include_all_keys,
        field_extra_keys_without_x_prefix=field_extra_keys_without_x_prefix,
        wrap_string_literal=wrap_string_literal,
        use_title_as_name=use_title_as_name,
        use_operation_id_as_name=use_operation_id_as_name,
        use_unique_items_as_set=use_unique_items_as_set,
        http_headers=http_headers,
        http_ignore_tls=http_ignore_tls,
        use_annotated=use_annotated,
        use_non_positive_negative_number_constrained_types=use_non_positive_negative_number_constrained_types,
        original_field_name_delimiter=original_field_name_delimiter,
        use_double_quotes=use_double_quotes,
        use_union_operator=use_union_operator,
        collapse_root_models=collapse_root_models,
        special_field_name_prefix=special_field_name_prefix,
        remove_special_field_name_prefix=remove_special_field_name_prefix,
        capitalise_enum_members=capitalise_enum_members,
        keep_model_order=keep_model_order,
        known_third_party=data_model_types.known_third_party,
        custom_formatters=custom_formatters,
        custom_formatters_kwargs=custom_formatters_kwargs,
        use_pendulum=use_pendulum,
        http_query_parameters=http_query_parameters,
        treat_dots_as_module=treat_dots_as_module,
        use_exact_imports=use_exact_imports,
        default_field_extras=default_field_extras,
        target_datetime_class=output_datetime_class,
        keyword_only=keyword_only,
        no_alias=no_alias,
        **kwargs,
    )

    with chdir(output):
        results = parser.parse()
    if not input_filename:  # pragma: no cover
        if isinstance(input_, str):
            input_filename = '<stdin>'
        elif isinstance(input_, ParseResult):
            input_filename = input_.geturl()
        elif input_file_type == InputFileType.Dict:
            # input_ might be a dict object provided directly, and missing a name field
            input_filename = getattr(input_, 'name', '')
        else:
            input_filename = input_.name
    if not results:
        raise Error('Models not found in the input data')
    elif isinstance(results, str):
        modules = {output: (results, input_filename)}
    else:
        if output is None:
            raise Error('Modular references require an output directory')
        if output.suffix:
            raise Error('Modular references require an output directory, not a file')
        modules = {
            output.joinpath(*name): (
                result.body,
                str(result.source.as_posix() if result.source else input_filename),
            )
            for name, result in sorted(results.items())
        }

    timestamp = datetime.now(timezone.utc).replace(microsecond=0).isoformat()

    if custom_file_header is None and custom_file_header_path:
        custom_file_header = custom_file_header_path.read_text(encoding=encoding)

    header = """\
# generated by datamodel-codegen:
#   filename: {}"""
    if not disable_timestamp:
        header += f'\n#   timestamp: {timestamp}'
    if enable_version_header:
        header += f'\n#   version: {get_version()}'

    file: Optional[IO[Any]]
    for path, (body, filename) in modules.items():
        if path is None:
            file = None
        else:
            if not path.parent.exists():
                path.parent.mkdir(parents=True)
            file = path.open('wt', encoding=encoding)

        print(custom_file_header or header.format(filename), file=file)
        if body:
            print('', file=file)
            print(body.rstrip(), file=file)

        if file is not None:
            file.close()


def infer_input_type(text: str) -> InputFileType:
    if is_openapi(text):
        return InputFileType.OpenAPI
    elif is_schema(text):
        return InputFileType.JsonSchema
    return InputFileType.Json


inferred_message = (
    'The input file type was determined to be: {}\nThis can be specified explicitly with the '
    '`--input-file-type` option.'
)

__all__ = [
    'DefaultPutDict',
    'Error',
    'InputFileType',
    'InvalidClassNameError',
    'LiteralType',
    'PythonVersion',
    'generate',
]
datamodel_code_generator-0.26.4/datamodel_code_generator/__main__.py

#! /usr/bin/env python
"""
Main function.
"""
from __future__ import annotations
import json
import signal
import sys
import warnings
from collections import defaultdict
from enum import IntEnum
from io import TextIOBase
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
DefaultDict,
Dict,
List,
Optional,
Sequence,
Set,
Tuple,
Union,
cast,
)
from urllib.parse import ParseResult, urlparse
import argcomplete
import black
from pydantic import BaseModel
from datamodel_code_generator.model.pydantic_v2 import UnionMode
if TYPE_CHECKING:
from argparse import Namespace
from typing_extensions import Self
from datamodel_code_generator import (
DataModelType,
Error,
InputFileType,
InvalidClassNameError,
OpenAPIScope,
enable_debug_message,
generate,
)
from datamodel_code_generator.arguments import DEFAULT_ENCODING, arg_parser, namespace
from datamodel_code_generator.format import (
DatetimeClassType,
PythonVersion,
black_find_project_root,
is_supported_in_black,
)
from datamodel_code_generator.parser import LiteralType
from datamodel_code_generator.reference import is_url
from datamodel_code_generator.types import StrictTypes
from datamodel_code_generator.util import (
PYDANTIC_V2,
ConfigDict,
Model,
field_validator,
load_toml,
model_validator,
)

class Exit(IntEnum):
    """Exit reasons."""

    OK = 0
    ERROR = 1
    KeyboardInterrupt = 2


def sig_int_handler(_: int, __: Any) -> None:  # pragma: no cover
    exit(Exit.OK)


signal.signal(signal.SIGINT, sig_int_handler)


class Config(BaseModel):
    if PYDANTIC_V2:
        model_config = ConfigDict(arbitrary_types_allowed=True)

        def get(self, item: str) -> Any:
            return getattr(self, item)

        def __getitem__(self, item: str) -> Any:
            return self.get(item)

        if TYPE_CHECKING:

            @classmethod
            def get_fields(cls) -> Dict[str, Any]: ...

        else:

            @classmethod
            def parse_obj(cls: type[Model], obj: Any) -> Model:
                return cls.model_validate(obj)

            @classmethod
            def get_fields(cls) -> Dict[str, Any]:
                return cls.model_fields

    else:

        class Config:
            # validate_assignment = True
            # Pydantic 1.5.1 doesn't support validate_assignment correctly
            arbitrary_types_allowed = (TextIOBase,)

        if not TYPE_CHECKING:

            @classmethod
            def get_fields(cls) -> Dict[str, Any]:
                return cls.__fields__

    @field_validator(
        'aliases', 'extra_template_data', 'custom_formatters_kwargs', mode='before'
    )
    def validate_file(cls, value: Any) -> Optional[TextIOBase]:
        if value is None or isinstance(value, TextIOBase):
            return value
        return cast(TextIOBase, Path(value).expanduser().resolve().open('rt'))

    @field_validator(
        'input',
        'output',
        'custom_template_dir',
        'custom_file_header_path',
        mode='before',
    )
    def validate_path(cls, value: Any) -> Optional[Path]:
        if value is None or isinstance(value, Path):
            return value  # pragma: no cover
        return Path(value).expanduser().resolve()

    @field_validator('url', mode='before')
    def validate_url(cls, value: Any) -> Optional[ParseResult]:
        if isinstance(value, str) and is_url(value):  # pragma: no cover
            return urlparse(value)
        elif value is None:  # pragma: no cover
            return None
        raise Error(
            f'Only http/https protocols are supported. --input={value}'
        )  # pragma: no cover
    @model_validator(mode='after')
    def validate_use_generic_container_types(
        cls, values: Dict[str, Any]
    ) -> Dict[str, Any]:
        if values.get('use_generic_container_types'):
            target_python_version: PythonVersion = values['target_python_version']
            if target_python_version == target_python_version.PY_36:
                raise Error(
                    f'`--use-generic-container-types` can not be used with `--target-python-version` {target_python_version.PY_36.value}.\n'
                    ' The version will not be supported in a future release'
                )
        return values

    @model_validator(mode='after')
    def validate_original_field_name_delimiter(
        cls, values: Dict[str, Any]
    ) -> Dict[str, Any]:
        if values.get('original_field_name_delimiter') is not None:
            if not values.get('snake_case_field'):
                raise Error(
                    '`--original-field-name-delimiter` can not be used without `--snake-case-field`.'
                )
        return values

    @model_validator(mode='after')
    def validate_custom_file_header(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        if values.get('custom_file_header') and values.get('custom_file_header_path'):
            raise Error(
                '`--custom_file_header_path` can not be used with `--custom_file_header`.'
            )  # pragma: no cover
        return values

    @model_validator(mode='after')
    def validate_keyword_only(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        output_model_type: DataModelType = values.get('output_model_type')
        python_target: PythonVersion = values.get('target_python_version')
        if (
            values.get('keyword_only')
            and output_model_type == DataModelType.DataclassesDataclass
            and not python_target.has_kw_only_dataclass
        ):
            raise Error(
                f'`--keyword-only` requires `--target-python-version` {PythonVersion.PY_310.value} or higher.'
            )
        return values

    @model_validator(mode='after')
    def validate_output_datetime_class(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        datetime_class_type: Optional[DatetimeClassType] = values.get(
            'output_datetime_class'
        )
        if (
            datetime_class_type
            and datetime_class_type is not DatetimeClassType.Datetime
            and values.get('output_model_type') == DataModelType.DataclassesDataclass
        ):
            raise Error(
                '`--output-datetime-class` only allows "datetime" for '
                f'`--output-model-type` {DataModelType.DataclassesDataclass.value}'
            )
        return values
    # Pydantic 1.5.1 doesn't support each_item=True correctly
    @field_validator('http_headers', mode='before')
    def validate_http_headers(cls, value: Any) -> Optional[List[Tuple[str, str]]]:
        def validate_each_item(each_item: Any) -> Tuple[str, str]:
            if isinstance(each_item, str):  # pragma: no cover
                try:
                    field_name, field_value = each_item.split(':', maxsplit=1)  # type: str, str
                    return field_name, field_value.lstrip()
                except ValueError:
                    raise Error(f'Invalid http header: {each_item!r}')
            return each_item  # pragma: no cover

        if isinstance(value, list):
            return [validate_each_item(each_item) for each_item in value]
        return value  # pragma: no cover

    @field_validator('http_query_parameters', mode='before')
    def validate_http_query_parameters(
        cls, value: Any
    ) -> Optional[List[Tuple[str, str]]]:
        def validate_each_item(each_item: Any) -> Tuple[str, str]:
            if isinstance(each_item, str):  # pragma: no cover
                try:
                    field_name, field_value = each_item.split('=', maxsplit=1)  # type: str, str
                    return field_name, field_value.lstrip()
                except ValueError:
                    raise Error(f'Invalid http query parameter: {each_item!r}')
            return each_item  # pragma: no cover

        if isinstance(value, list):
            return [validate_each_item(each_item) for each_item in value]
        return value  # pragma: no cover

    @model_validator(mode='before')
    def validate_additional_imports(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        if values.get('additional_imports') is not None:
            values['additional_imports'] = values.get('additional_imports').split(',')
        return values

    @model_validator(mode='before')
    def validate_custom_formatters(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        if values.get('custom_formatters') is not None:
            values['custom_formatters'] = values.get('custom_formatters').split(',')
        return values

    if PYDANTIC_V2:

        @model_validator(mode='after')  # type: ignore
        def validate_root(self: Self) -> Self:
            if self.use_annotated:
                self.field_constraints = True
            return self

    else:

        @model_validator(mode='after')
        def validate_root(cls, values: Any) -> Any:
            if values.get('use_annotated'):
                values['field_constraints'] = True
            return values
    input: Optional[Union[Path, str]] = None
    input_file_type: InputFileType = InputFileType.Auto
    output_model_type: DataModelType = DataModelType.PydanticBaseModel
    output: Optional[Path] = None
    debug: bool = False
    disable_warnings: bool = False
    target_python_version: PythonVersion = PythonVersion.PY_38
    base_class: str = ''
    additional_imports: Optional[List[str]] = None
    custom_template_dir: Optional[Path] = None
    extra_template_data: Optional[TextIOBase] = None
    validation: bool = False
    field_constraints: bool = False
    snake_case_field: bool = False
    strip_default_none: bool = False
    aliases: Optional[TextIOBase] = None
    disable_timestamp: bool = False
    enable_version_header: bool = False
    allow_population_by_field_name: bool = False
    allow_extra_fields: bool = False
    use_default: bool = False
    force_optional: bool = False
    class_name: Optional[str] = None
    use_standard_collections: bool = False
    use_schema_description: bool = False
    use_field_description: bool = False
    use_default_kwarg: bool = False
    reuse_model: bool = False
    encoding: str = DEFAULT_ENCODING
    enum_field_as_literal: Optional[LiteralType] = None
    use_one_literal_as_default: bool = False
    set_default_enum_member: bool = False
    use_subclass_enum: bool = False
    strict_nullable: bool = False
    use_generic_container_types: bool = False
    use_union_operator: bool = False
    enable_faux_immutability: bool = False
    url: Optional[ParseResult] = None
    disable_appending_item_suffix: bool = False
    strict_types: List[StrictTypes] = []
    empty_enum_field_name: Optional[str] = None
    field_extra_keys: Optional[Set[str]] = None
    field_include_all_keys: bool = False
    field_extra_keys_without_x_prefix: Optional[Set[str]] = None
    openapi_scopes: Optional[List[OpenAPIScope]] = [OpenAPIScope.Schemas]
    wrap_string_literal: Optional[bool] = None
    use_title_as_name: bool = False
    use_operation_id_as_name: bool = False
    use_unique_items_as_set: bool = False
    http_headers: Optional[Sequence[Tuple[str, str]]] = None
    http_ignore_tls: bool = False
    use_annotated: bool = False
    use_non_positive_negative_number_constrained_types: bool = False
    original_field_name_delimiter: Optional[str] = None
    use_double_quotes: bool = False
    collapse_root_models: bool = False
    special_field_name_prefix: Optional[str] = None
    remove_special_field_name_prefix: bool = False
    capitalise_enum_members: bool = False
    keep_model_order: bool = False
    custom_file_header: Optional[str] = None
    custom_file_header_path: Optional[Path] = None
    custom_formatters: Optional[List[str]] = None
    custom_formatters_kwargs: Optional[TextIOBase] = None
    use_pendulum: bool = False
    http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None
    treat_dot_as_module: bool = False
    use_exact_imports: bool = False
    union_mode: Optional[UnionMode] = None
    output_datetime_class: Optional[DatetimeClassType] = None
    keyword_only: bool = False
    no_alias: bool = False
    def merge_args(self, args: Namespace) -> None:
        set_args = {
            f: getattr(args, f)
            for f in self.get_fields()
            if getattr(args, f) is not None
        }

        if set_args.get('output_model_type') == DataModelType.MsgspecStruct.value:
            set_args['use_annotated'] = True

        if set_args.get('use_annotated'):
            set_args['field_constraints'] = True

        parsed_args = Config.parse_obj(set_args)
        for field_name in set_args:
            setattr(self, field_name, getattr(parsed_args, field_name))

def main(args: Optional[Sequence[str]] = None) -> Exit:
    """Main function."""

    # add cli completion support
    argcomplete.autocomplete(arg_parser)

    if args is None:  # pragma: no cover
        args = sys.argv[1:]

    arg_parser.parse_args(args, namespace=namespace)

    if namespace.version:
        from datamodel_code_generator.version import version

        print(version)
        exit(0)

    root = black_find_project_root((Path().resolve(),))
    pyproject_toml_path = root / 'pyproject.toml'
    if pyproject_toml_path.is_file():
        pyproject_toml: Dict[str, Any] = {
            k.replace('-', '_'): v
            for k, v in load_toml(pyproject_toml_path)
            .get('tool', {})
            .get('datamodel-codegen', {})
            .items()
        }
    else:
        pyproject_toml = {}
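
    # An example `[tool.datamodel-codegen]` table consumed above (comment added
    # here for illustration; the values are made up):
    #
    #   [tool.datamodel-codegen]
    #   input = "api.yaml"
    #   input-file-type = "openapi"
    #   output = "model.py"
    #
    # Hyphenated keys are normalized to underscores to match Config field names.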
    try:
        config = Config.parse_obj(pyproject_toml)
        config.merge_args(namespace)
    except Error as e:
        print(e.message, file=sys.stderr)
        return Exit.ERROR

    if not config.input and not config.url and sys.stdin.isatty():
        print(
            'No input found: supply `stdin` or the `--input` or `--url` arguments',
            file=sys.stderr,
        )
        arg_parser.print_help()
        return Exit.ERROR

    if not is_supported_in_black(config.target_python_version):  # pragma: no cover
        print(
            f"Installed black doesn't support Python version {config.target_python_version.value}.\n"  # type: ignore
            f'You have to install a newer black.\n'
            f'Installed black version: {black.__version__}',
            file=sys.stderr,
        )
        return Exit.ERROR

    if config.debug:  # pragma: no cover
        enable_debug_message()

    if config.disable_warnings:
        warnings.simplefilter('ignore')

    extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]]
    if config.extra_template_data is None:
        extra_template_data = None
    else:
        with config.extra_template_data as data:
            try:
                extra_template_data = json.load(
                    data, object_hook=lambda d: defaultdict(dict, **d)
                )
            except json.JSONDecodeError as e:
                print(f'Unable to load extra template data: {e}', file=sys.stderr)
                return Exit.ERROR

    if config.aliases is None:
        aliases = None
    else:
        with config.aliases as data:
            try:
                aliases = json.load(data)
            except json.JSONDecodeError as e:
                print(f'Unable to load alias mapping: {e}', file=sys.stderr)
                return Exit.ERROR
        if not isinstance(aliases, dict) or not all(
            isinstance(k, str) and isinstance(v, str) for k, v in aliases.items()
        ):
            print(
                'Alias mapping must be a JSON string mapping (e.g. {"from": "to", ...})',
                file=sys.stderr,
            )
            return Exit.ERROR

    if config.custom_formatters_kwargs is None:
        custom_formatters_kwargs = None
    else:
        with config.custom_formatters_kwargs as data:
            try:
                custom_formatters_kwargs = json.load(data)
            except json.JSONDecodeError as e:  # pragma: no cover
                print(
                    f'Unable to load custom_formatters_kwargs mapping: {e}',
                    file=sys.stderr,
                )
                return Exit.ERROR
        if not isinstance(custom_formatters_kwargs, dict) or not all(
            isinstance(k, str) and isinstance(v, str)
            for k, v in custom_formatters_kwargs.items()
        ):  # pragma: no cover
            print(
                'Custom formatters kwargs mapping must be a JSON string mapping (e.g. {"from": "to", ...})',
                file=sys.stderr,
            )
            return Exit.ERROR

    try:
        generate(
            input_=config.url or config.input or sys.stdin.read(),
            input_file_type=config.input_file_type,
            output=config.output,
            output_model_type=config.output_model_type,
            target_python_version=config.target_python_version,
            base_class=config.base_class,
            additional_imports=config.additional_imports,
            custom_template_dir=config.custom_template_dir,
            validation=config.validation,
            field_constraints=config.field_constraints,
            snake_case_field=config.snake_case_field,
            strip_default_none=config.strip_default_none,
            extra_template_data=extra_template_data,
            aliases=aliases,
            disable_timestamp=config.disable_timestamp,
            enable_version_header=config.enable_version_header,
            allow_population_by_field_name=config.allow_population_by_field_name,
            allow_extra_fields=config.allow_extra_fields,
            apply_default_values_for_required_fields=config.use_default,
            force_optional_for_required_fields=config.force_optional,
            class_name=config.class_name,
            use_standard_collections=config.use_standard_collections,
            use_schema_description=config.use_schema_description,
            use_field_description=config.use_field_description,
            use_default_kwarg=config.use_default_kwarg,
            reuse_model=config.reuse_model,
            encoding=config.encoding,
            enum_field_as_literal=config.enum_field_as_literal,
            use_one_literal_as_default=config.use_one_literal_as_default,
            set_default_enum_member=config.set_default_enum_member,
            use_subclass_enum=config.use_subclass_enum,
            strict_nullable=config.strict_nullable,
            use_generic_container_types=config.use_generic_container_types,
            enable_faux_immutability=config.enable_faux_immutability,
            disable_appending_item_suffix=config.disable_appending_item_suffix,
            strict_types=config.strict_types,
            empty_enum_field_name=config.empty_enum_field_name,
            field_extra_keys=config.field_extra_keys,
            field_include_all_keys=config.field_include_all_keys,
            field_extra_keys_without_x_prefix=config.field_extra_keys_without_x_prefix,
            openapi_scopes=config.openapi_scopes,
            wrap_string_literal=config.wrap_string_literal,
            use_title_as_name=config.use_title_as_name,
            use_operation_id_as_name=config.use_operation_id_as_name,
            use_unique_items_as_set=config.use_unique_items_as_set,
            http_headers=config.http_headers,
            http_ignore_tls=config.http_ignore_tls,
            use_annotated=config.use_annotated,
            use_non_positive_negative_number_constrained_types=config.use_non_positive_negative_number_constrained_types,
            original_field_name_delimiter=config.original_field_name_delimiter,
            use_double_quotes=config.use_double_quotes,
            collapse_root_models=config.collapse_root_models,
            use_union_operator=config.use_union_operator,
            special_field_name_prefix=config.special_field_name_prefix,
            remove_special_field_name_prefix=config.remove_special_field_name_prefix,
            capitalise_enum_members=config.capitalise_enum_members,
            keep_model_order=config.keep_model_order,
            custom_file_header=config.custom_file_header,
            custom_file_header_path=config.custom_file_header_path,
            custom_formatters=config.custom_formatters,
            custom_formatters_kwargs=custom_formatters_kwargs,
            use_pendulum=config.use_pendulum,
            http_query_parameters=config.http_query_parameters,
            treat_dots_as_module=config.treat_dot_as_module,
            use_exact_imports=config.use_exact_imports,
            union_mode=config.union_mode,
            output_datetime_class=config.output_datetime_class,
            keyword_only=config.keyword_only,
            no_alias=config.no_alias,
        )
        return Exit.OK
    except InvalidClassNameError as e:
        print(f'{e} You have to set `--class-name` option', file=sys.stderr)
        return Exit.ERROR
    except Error as e:
        print(str(e), file=sys.stderr)
        return Exit.ERROR
    except Exception:
        import traceback

        print(traceback.format_exc(), file=sys.stderr)
        return Exit.ERROR


if __name__ == '__main__':
    sys.exit(main())
datamodel_code_generator-0.26.4/datamodel_code_generator/arguments.py

from __future__ import annotations

import locale
from argparse import ArgumentParser, FileType, HelpFormatter, Namespace
from operator import attrgetter
from typing import TYPE_CHECKING
from datamodel_code_generator import DataModelType, InputFileType, OpenAPIScope
from datamodel_code_generator.format import DatetimeClassType, PythonVersion
from datamodel_code_generator.model.pydantic_v2 import UnionMode
from datamodel_code_generator.parser import LiteralType
from datamodel_code_generator.types import StrictTypes
if TYPE_CHECKING:
from argparse import Action
from typing import Iterable, Optional
DEFAULT_ENCODING = locale.getpreferredencoding()
namespace = Namespace(no_color=False)
class SortingHelpFormatter(HelpFormatter):
def _bold_cyan(self, text: str) -> str:
return f'\x1b[36;1m{text}\x1b[0m'
def add_arguments(self, actions: Iterable[Action]) -> None:
actions = sorted(actions, key=attrgetter('option_strings'))
super().add_arguments(actions)
def start_section(self, heading: Optional[str]) -> None:
return super().start_section(
heading if namespace.no_color or not heading else self._bold_cyan(heading)
)
arg_parser = ArgumentParser(
usage='\n datamodel-codegen [options]',
description='Generate Python data models from schema definitions or structured data',
formatter_class=SortingHelpFormatter,
add_help=False,
)
base_options = arg_parser.add_argument_group('Options')
typing_options = arg_parser.add_argument_group('Typing customization')
field_options = arg_parser.add_argument_group('Field customization')
model_options = arg_parser.add_argument_group('Model customization')
template_options = arg_parser.add_argument_group('Template customization')
openapi_options = arg_parser.add_argument_group('OpenAPI-only options')
general_options = arg_parser.add_argument_group('General options')
# ======================================================================================
# Base options for input/output
# ======================================================================================
base_options.add_argument(
'--http-headers',
nargs='+',
metavar='HTTP_HEADER',
help='Set headers in HTTP requests to the remote host. (example: "Authorization: Basic dXNlcjpwYXNz")',
)
base_options.add_argument(
'--http-query-parameters',
nargs='+',
metavar='HTTP_QUERY_PARAMETERS',
help='Set query parameters in HTTP requests to the remote host. (example: "ref=branch")',
)
base_options.add_argument(
'--http-ignore-tls',
help="Disable verification of the remote host's TLS certificate",
action='store_true',
default=None,
)
base_options.add_argument(
'--input',
help='Input file/directory (default: stdin)',
)
base_options.add_argument(
'--input-file-type',
help='Input file type (default: auto)',
choices=[i.value for i in InputFileType],
)
base_options.add_argument(
'--output',
help='Output file (default: stdout)',
)
base_options.add_argument(
'--output-model-type',
help='Output model type (default: pydantic.BaseModel)',
choices=[i.value for i in DataModelType],
)
base_options.add_argument(
'--url',
help='Input file URL. `--input` is ignored when `--url` is used',
)
# ======================================================================================
# Customization options for generated models
# ======================================================================================
model_options.add_argument(
'--allow-extra-fields',
help='Allow to pass extra fields, if this flag is not passed, extra fields are forbidden.',
action='store_true',
default=None,
)
model_options.add_argument(
'--allow-population-by-field-name',
help='Allow population by field name',
action='store_true',
default=None,
)
model_options.add_argument(
'--class-name',
help='Set class name of root model',
default=None,
)
model_options.add_argument(
'--collapse-root-models',
action='store_true',
default=None,
help='Merge models generated from a root-type field into the '
'models that use that root type',
)
model_options.add_argument(
'--disable-appending-item-suffix',
help='Disable appending `Item` suffix to model name in an array',
action='store_true',
default=None,
)
model_options.add_argument(
'--disable-timestamp',
help='Disable timestamp on file headers',
action='store_true',
default=None,
)
model_options.add_argument(
'--enable-faux-immutability',
help='Enable faux immutability',
action='store_true',
default=None,
)
model_options.add_argument(
'--enable-version-header',
help='Include the package version in file headers',
action='store_true',
default=None,
)
model_options.add_argument(
'--keep-model-order',
help="Keep generated models' order",
action='store_true',
default=None,
)
model_options.add_argument(
'--keyword-only',
help='Define models as keyword-only (for example, dataclass(kw_only=True)).',
action='store_true',
default=None,
)
model_options.add_argument(
'--reuse-model',
help='Reuse an existing model for a field when the module already contains a model with the same content',
action='store_true',
default=None,
)
model_options.add_argument(
'--target-python-version',
help='Target Python version (default: 3.8)',
choices=[v.value for v in PythonVersion],
)
model_options.add_argument(
'--treat-dot-as-module',
help='Treat dotted module names as modules',
action='store_true',
default=False,
)
model_options.add_argument(
'--use-schema-description',
help='Use schema description to populate class docstring',
action='store_true',
default=None,
)
model_options.add_argument(
'--use-title-as-name',
help='Use titles as class names of models',
action='store_true',
default=None,
)
model_options.add_argument(
'--use-pendulum',
help='Use pendulum instead of datetime',
action='store_true',
default=False,
)
model_options.add_argument(
'--use-exact-imports',
help='import exact types instead of modules, for example: "from .foo import Bar" instead of '
'"from . import foo" with "foo.Bar"',
action='store_true',
default=False,
)
model_options.add_argument(
'--output-datetime-class',
help='Choose the datetime class: AwareDatetime, NaiveDatetime, or datetime. '
'Each output model type has its own default mapping (for example pydantic: datetime, dataclass: str, ...)',
choices=[i.value for i in DatetimeClassType],
default=None,
)
# ======================================================================================
# Typing options for generated models
# ======================================================================================
typing_options.add_argument(
'--base-class',
help='Base Class (default: pydantic.BaseModel)',
type=str,
)
typing_options.add_argument(
'--enum-field-as-literal',
help='Parse enum field as literal. '
'all: all enum field types are Literal. '
'one: field type is Literal when an enum has only one possible value',
choices=[lt.value for lt in LiteralType],
default=None,
)
typing_options.add_argument(
'--field-constraints',
help='Use field constraints and not con* annotations',
action='store_true',
default=None,
)
typing_options.add_argument(
'--set-default-enum-member',
help='Set enum members as default values for enum fields',
action='store_true',
default=None,
)
typing_options.add_argument(
'--strict-types',
help='Use strict types',
choices=[t.value for t in StrictTypes],
nargs='+',
)
typing_options.add_argument(
'--use-annotated',
help='Use typing.Annotated for Field(). This also enables the `--field-constraints` option.',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-generic-container-types',
help='Use generic container types for type hinting (typing.Sequence, typing.Mapping). '
'If `--use-standard-collections` option is set, then import from collections.abc instead of typing',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-non-positive-negative-number-constrained-types',
help='Use the Non{Positive,Negative}{Float,Int} types instead of the corresponding con* constrained types.',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-one-literal-as-default',
help='Use the single literal as the default value when a field has only one possible literal',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-standard-collections',
help='Use standard collections for type hinting (list, dict)',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-subclass-enum',
help='Define the Enum class as a subclass of the field type when the enum has a type (int, float, bytes, str)',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-union-operator',
help='Use | operator for Union type (PEP 604).',
action='store_true',
default=None,
)
typing_options.add_argument(
'--use-unique-items-as-set',
help='Define the field type as `set` when the field attribute has `uniqueItems`',
action='store_true',
default=None,
)
# ======================================================================================
# Customization options for generated model fields
# ======================================================================================
field_options.add_argument(
'--capitalise-enum-members',
'--capitalize-enum-members',
help='Capitalize enum member names',
action='store_true',
default=None,
)
field_options.add_argument(
'--empty-enum-field-name',
help='Set field name when enum value is empty (default: `_`)',
default=None,
)
field_options.add_argument(
'--field-extra-keys',
help='Add extra keys to field parameters',
type=str,
nargs='+',
)
field_options.add_argument(
'--field-extra-keys-without-x-prefix',
help='Add extra keys with `x-` prefix to field parameters. The extra keys are stripped of the `x-` prefix.',
type=str,
nargs='+',
)
field_options.add_argument(
'--field-include-all-keys',
help='Add all keys to field parameters',
action='store_true',
default=None,
)
field_options.add_argument(
'--force-optional',
help='Force optional for required fields',
action='store_true',
default=None,
)
field_options.add_argument(
'--original-field-name-delimiter',
help='Set the delimiter used to convert to snake case. This option can only be used with --snake-case-field (default: `_`)',
default=None,
)
field_options.add_argument(
'--remove-special-field-name-prefix',
help='Remove the field name prefix if it has a special meaning, e.g. underscores',
action='store_true',
default=None,
)
field_options.add_argument(
'--snake-case-field',
help='Change camel-case field name to snake-case',
action='store_true',
default=None,
)
field_options.add_argument(
'--special-field-name-prefix',
help="Set field name prefix when first character can't be used as Python field name (default: `field`)",
default=None,
)
field_options.add_argument(
'--strip-default-none',
help='Strip default None on fields',
action='store_true',
default=None,
)
field_options.add_argument(
'--use-default',
help='Use default value even if a field is required',
action='store_true',
default=None,
)
field_options.add_argument(
'--use-default-kwarg',
action='store_true',
help='Use `default=` instead of a positional argument for Fields that have default values.',
default=None,
)
field_options.add_argument(
'--use-field-description',
help='Use schema description to populate field docstring',
action='store_true',
default=None,
)
field_options.add_argument(
'--union-mode',
help='Union mode for pydantic v2 fields only',
choices=[u.value for u in UnionMode],
default=None,
)
field_options.add_argument(
'--no-alias',
help="""Do not add a field alias. E.g., if --snake-case-field is used along with a base class, which has an
alias_generator""",
action='store_true',
default=None,
)
# ======================================================================================
# Options for templating output
# ======================================================================================
template_options.add_argument(
'--aliases',
help='Alias mapping file',
type=FileType('rt'),
)
template_options.add_argument(
'--custom-file-header',
help='Custom file header',
type=str,
default=None,
)
template_options.add_argument(
'--custom-file-header-path',
help='Custom file header file path',
default=None,
type=str,
)
template_options.add_argument(
'--custom-template-dir',
help='Custom template directory',
type=str,
)
template_options.add_argument(
'--encoding',
help=f'The encoding of input and output (default: {DEFAULT_ENCODING})',
default=None,
)
template_options.add_argument(
'--extra-template-data',
help='Extra template data',
type=FileType('rt'),
)
template_options.add_argument(
'--use-double-quotes',
action='store_true',
default=None,
help='Generate models with double quotes. Without this option, single quotes or '
'your black config skip_string_normalization value will be used.',
)
template_options.add_argument(
'--wrap-string-literal',
help='Wrap string literals by using the black `experimental-string-processing` option (requires black 20.8b0 or later)',
action='store_true',
default=None,
)
base_options.add_argument(
'--additional-imports',
help='Custom imports for output (delimited list input). For example "datetime.date,datetime.datetime"',
type=str,
default=None,
)
base_options.add_argument(
'--custom-formatters',
help='List of modules with custom formatters (delimited list input).',
type=str,
default=None,
)
template_options.add_argument(
'--custom-formatters-kwargs',
help='A file with kwargs for custom formatters.',
type=FileType('rt'),
)
# ======================================================================================
# Options specific to OpenAPI input schemas
# ======================================================================================
openapi_options.add_argument(
'--openapi-scopes',
help='Scopes of OpenAPI model generation (default: schemas)',
choices=[o.value for o in OpenAPIScope],
nargs='+',
default=None,
)
openapi_options.add_argument(
'--strict-nullable',
help='Treat fields with a default as non-nullable (OpenAPI only)',
action='store_true',
default=None,
)
openapi_options.add_argument(
'--use-operation-id-as-name',
help='Use OpenAPI operation IDs as class names of models',
action='store_true',
default=None,
)
openapi_options.add_argument(
'--validation',
help='Deprecated: Enable validation (OpenAPI only). This option will be removed in '
'future releases',
action='store_true',
default=None,
)
# ======================================================================================
# General options
# ======================================================================================
general_options.add_argument(
'--debug',
help='Show debug messages (requires the "debug" extra: `$ pip install \'datamodel-code-generator[debug]\'`)',
action='store_true',
default=None,
)
general_options.add_argument(
'--disable-warnings',
help='Disable warnings',
action='store_true',
default=None,
)
general_options.add_argument(
'-h',
'--help',
action='help',
default='==SUPPRESS==',
help='show this help message and exit',
)
general_options.add_argument(
'--no-color',
action='store_true',
default=False,
help='Disable colorized output',
)
general_options.add_argument(
'--version',
action='store_true',
help='Show version',
)
__all__ = [
'arg_parser',
'DEFAULT_ENCODING',
'namespace',
]
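# Example (editor's sketch, not part of the module): exercising the parser defined
# above. The option names are real; the file names are hypothetical.
#     >>> args = arg_parser.parse_args(
#     ...     ['--input', 'api.yaml', '--input-file-type', 'openapi', '--output', 'model.py']
#     ... )
#     >>> args.input_file_type
#     'openapi'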
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/format.py ----
from __future__ import annotations
from enum import Enum
from importlib import import_module
from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence
from warnings import warn
import black
import isort
from datamodel_code_generator.util import cached_property, load_toml
try:
import black.mode
except ImportError: # pragma: no cover
black.mode = None
class DatetimeClassType(Enum):
Datetime = 'datetime'
Awaredatetime = 'AwareDatetime'
Naivedatetime = 'NaiveDatetime'
class PythonVersion(Enum):
PY_36 = '3.6'
PY_37 = '3.7'
PY_38 = '3.8'
PY_39 = '3.9'
PY_310 = '3.10'
PY_311 = '3.11'
PY_312 = '3.12'
PY_313 = '3.13'
@cached_property
def _is_py_38_or_later(self) -> bool: # pragma: no cover
return self.value not in {self.PY_36.value, self.PY_37.value} # type: ignore
@cached_property
def _is_py_39_or_later(self) -> bool: # pragma: no cover
return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value} # type: ignore
@cached_property
def _is_py_310_or_later(self) -> bool: # pragma: no cover
return self.value not in {
self.PY_36.value,
self.PY_37.value,
self.PY_38.value,
self.PY_39.value,
} # type: ignore
@cached_property
def _is_py_311_or_later(self) -> bool: # pragma: no cover
return self.value not in {
self.PY_36.value,
self.PY_37.value,
self.PY_38.value,
self.PY_39.value,
self.PY_310.value,
} # type: ignore
@property
def has_literal_type(self) -> bool:
return self._is_py_38_or_later
@property
def has_union_operator(self) -> bool: # pragma: no cover
return self._is_py_310_or_later
@property
def has_annotated_type(self) -> bool:
return self._is_py_39_or_later
@property
def has_typed_dict(self) -> bool:
return self._is_py_38_or_later
@property
def has_typed_dict_non_required(self) -> bool:
return self._is_py_311_or_later
@property
def has_kw_only_dataclass(self) -> bool:
return self._is_py_310_or_later
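# Example (editor's sketch): the feature gates above resolve straight from the
# target version value.
#     >>> PythonVersion.PY_310.has_union_operator
#     True
#     >>> PythonVersion.PY_38.has_union_operator
#     False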
if TYPE_CHECKING:
class _TargetVersion(Enum): ...
BLACK_PYTHON_VERSION: Dict[PythonVersion, _TargetVersion]
else:
BLACK_PYTHON_VERSION: Dict[PythonVersion, black.TargetVersion] = {
v: getattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
for v in PythonVersion
if hasattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
}
def is_supported_in_black(python_version: PythonVersion) -> bool: # pragma: no cover
return python_version in BLACK_PYTHON_VERSION
def black_find_project_root(sources: Sequence[Path]) -> Path:
if TYPE_CHECKING:
from typing import Iterable, Tuple, Union
def _find_project_root(
srcs: Union[Sequence[str], Iterable[str]],
) -> Union[Tuple[Path, str], Path]: ...
else:
from black import find_project_root as _find_project_root
project_root = _find_project_root(tuple(str(s) for s in sources))
if isinstance(project_root, tuple):
return project_root[0]
else: # pragma: no cover
return project_root
class CodeFormatter:
def __init__(
self,
python_version: PythonVersion,
settings_path: Optional[Path] = None,
wrap_string_literal: Optional[bool] = None,
skip_string_normalization: bool = True,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
) -> None:
if not settings_path:
settings_path = Path().resolve()
root = black_find_project_root((settings_path,))
path = root / 'pyproject.toml'
if path.is_file():
pyproject_toml = load_toml(path)
config = pyproject_toml.get('tool', {}).get('black', {})
else:
config = {}
black_kwargs: Dict[str, Any] = {}
if wrap_string_literal is not None:
experimental_string_processing = wrap_string_literal
else:
if black.__version__ < '24.1.0': # type: ignore
experimental_string_processing = config.get(
'experimental-string-processing'
)
else:
experimental_string_processing = config.get(
'preview', False
) and ( # pragma: no cover
config.get('unstable', False)
or 'string_processing' in config.get('enable-unstable-feature', [])
)
if experimental_string_processing is not None: # pragma: no cover
if black.__version__.startswith('19.'): # type: ignore
warn(
f"black doesn't support `experimental-string-processing` option" # type: ignore
f' for wrapping string literal in {black.__version__}'
)
elif black.__version__ < '24.1.0': # type: ignore
black_kwargs['experimental_string_processing'] = (
experimental_string_processing
)
elif experimental_string_processing:
black_kwargs['preview'] = True
black_kwargs['unstable'] = config.get('unstable', False)
black_kwargs['enabled_features'] = {
black.mode.Preview.string_processing
}
if TYPE_CHECKING:
self.black_mode: black.FileMode
else:
self.black_mode = black.FileMode(
target_versions={BLACK_PYTHON_VERSION[python_version]},
line_length=config.get('line-length', black.DEFAULT_LINE_LENGTH),
string_normalization=not skip_string_normalization
or not config.get('skip-string-normalization', True),
**black_kwargs,
)
self.settings_path: str = str(settings_path)
self.isort_config_kwargs: Dict[str, Any] = {}
if known_third_party:
self.isort_config_kwargs['known_third_party'] = known_third_party
if isort.__version__.startswith('4.'):
self.isort_config = None
else:
self.isort_config = isort.Config(
settings_path=self.settings_path, **self.isort_config_kwargs
)
self.custom_formatters_kwargs = custom_formatters_kwargs or {}
self.custom_formatters = self._check_custom_formatters(custom_formatters)
def _load_custom_formatter(
self, custom_formatter_import: str
) -> CustomCodeFormatter:
import_ = import_module(custom_formatter_import)
if not hasattr(import_, 'CodeFormatter'):
raise NameError(
f'Custom formatter module `{import_.__name__}` must contain an object named `CodeFormatter`'
)
formatter_class = import_.__getattribute__('CodeFormatter')
if not issubclass(formatter_class, CustomCodeFormatter):
raise TypeError(
f'The `CodeFormatter` class in {custom_formatter_import} must inherit from `CustomCodeFormatter`'
)
return formatter_class(formatter_kwargs=self.custom_formatters_kwargs)
def _check_custom_formatters(
self, custom_formatters: Optional[List[str]]
) -> List[CustomCodeFormatter]:
if custom_formatters is None:
return []
return [
self._load_custom_formatter(custom_formatter_import)
for custom_formatter_import in custom_formatters
]
def format_code(
self,
code: str,
) -> str:
code = self.apply_isort(code)
code = self.apply_black(code)
for formatter in self.custom_formatters:
code = formatter.apply(code)
return code
def apply_black(self, code: str) -> str:
return black.format_str(
code,
mode=self.black_mode,
)
if TYPE_CHECKING:
def apply_isort(self, code: str) -> str: ...
else:
if isort.__version__.startswith('4.'):
def apply_isort(self, code: str) -> str:
return isort.SortImports(
file_contents=code,
settings_path=self.settings_path,
**self.isort_config_kwargs,
).output
else:
def apply_isort(self, code: str) -> str:
return isort.code(code, config=self.isort_config)
class CustomCodeFormatter:
def __init__(self, formatter_kwargs: Dict[str, Any]) -> None:
self.formatter_kwargs = formatter_kwargs
def apply(self, code: str) -> str:
raise NotImplementedError
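# Example (editor's sketch): a minimal custom formatter module, loadable via the
# `--custom-formatters` CLI option. `_load_custom_formatter` above requires the
# module to expose a class literally named `CodeFormatter` that subclasses
# `CustomCodeFormatter`; the module name and header text here are hypothetical.
#
#     # my_formatters/license_header.py
#     from datamodel_code_generator.format import CustomCodeFormatter
#
#     class CodeFormatter(CustomCodeFormatter):
#         def apply(self, code: str) -> str:
#             header = self.formatter_kwargs.get('header', '# generated code')
#             return f'{header}\n{code}'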
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/http.py ----
from __future__ import annotations
from typing import Optional, Sequence, Tuple
try:
import httpx
except ImportError: # pragma: no cover
raise Exception(
"Please run `$ pip install 'datamodel-code-generator[http]'` to resolve URL references"
)
def get_body(
url: str,
headers: Optional[Sequence[Tuple[str, str]]] = None,
ignore_tls: bool = False,
query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
) -> str:
return httpx.get(
url,
headers=headers,
verify=not ignore_tls,
follow_redirects=True,
params=query_parameters,
).text
def join_url(url: str, ref: str = '.') -> str:
return str(httpx.URL(url).join(ref))
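# Example (editor's sketch): `join_url` defers to httpx.URL.join, so relative
# `$ref` targets resolve with standard RFC 3986 rules. The URL is hypothetical.
#     >>> join_url('https://example.com/schemas/api.yaml', 'common.yaml')
#     'https://example.com/schemas/common.yaml'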
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/imports.py ----
from __future__ import annotations
from collections import defaultdict
from functools import lru_cache
from typing import DefaultDict, Dict, Iterable, List, Optional, Set, Tuple, Union
from datamodel_code_generator.util import BaseModel
class Import(BaseModel):
from_: Optional[str] = None
import_: str
alias: Optional[str] = None
reference_path: Optional[str] = None
@classmethod
@lru_cache()
def from_full_path(cls, class_path: str) -> Import:
split_class_path: List[str] = class_path.split('.')
return Import(
from_='.'.join(split_class_path[:-1]) or None, import_=split_class_path[-1]
)
class Imports(DefaultDict[Optional[str], Set[str]]):
def __str__(self) -> str:
return self.dump()
def __init__(self, use_exact: bool = False) -> None:
super().__init__(set)
self.alias: DefaultDict[Optional[str], Dict[str, str]] = defaultdict(dict)
self.counter: Dict[Tuple[Optional[str], str], int] = defaultdict(int)
self.reference_paths: Dict[str, Import] = {}
self.use_exact: bool = use_exact
def _set_alias(self, from_: Optional[str], imports: Set[str]) -> List[str]:
return [
f'{i} as {self.alias[from_][i]}'
if i in self.alias[from_] and i != self.alias[from_][i]
else i
for i in sorted(imports)
]
def create_line(self, from_: Optional[str], imports: Set[str]) -> str:
if from_:
return f"from {from_} import {', '.join(self._set_alias(from_, imports))}"
return '\n'.join(f'import {i}' for i in self._set_alias(from_, imports))
def dump(self) -> str:
return '\n'.join(
self.create_line(from_, imports) for from_, imports in self.items()
)
def append(self, imports: Union[Import, Iterable[Import], None]) -> None:
if imports:
if isinstance(imports, Import):
imports = [imports]
for import_ in imports:
if import_.reference_path:
self.reference_paths[import_.reference_path] = import_
if '.' in import_.import_:
self[None].add(import_.import_)
self.counter[(None, import_.import_)] += 1
else:
self[import_.from_].add(import_.import_)
self.counter[(import_.from_, import_.import_)] += 1
if import_.alias:
self.alias[import_.from_][import_.import_] = import_.alias
def remove(self, imports: Union[Import, Iterable[Import]]) -> None:
if isinstance(imports, Import): # pragma: no cover
imports = [imports]
for import_ in imports:
if '.' in import_.import_: # pragma: no cover
self.counter[(None, import_.import_)] -= 1
if self.counter[(None, import_.import_)] == 0: # pragma: no cover
self[None].remove(import_.import_)
if not self[None]:
del self[None]
else:
self.counter[(import_.from_, import_.import_)] -= 1 # pragma: no cover
if (
self.counter[(import_.from_, import_.import_)] == 0
): # pragma: no cover
self[import_.from_].remove(import_.import_)
if not self[import_.from_]:
del self[import_.from_]
if import_.alias: # pragma: no cover
del self.alias[import_.from_][import_.import_]
if not self.alias[import_.from_]:
del self.alias[import_.from_]
def remove_referenced_imports(self, reference_path: str) -> None:
if reference_path in self.reference_paths:
self.remove(self.reference_paths[reference_path])
IMPORT_ANNOTATED = Import.from_full_path('typing.Annotated')
IMPORT_ANNOTATED_BACKPORT = Import.from_full_path('typing_extensions.Annotated')
IMPORT_ANY = Import.from_full_path('typing.Any')
IMPORT_LIST = Import.from_full_path('typing.List')
IMPORT_SET = Import.from_full_path('typing.Set')
IMPORT_UNION = Import.from_full_path('typing.Union')
IMPORT_OPTIONAL = Import.from_full_path('typing.Optional')
IMPORT_LITERAL = Import.from_full_path('typing.Literal')
IMPORT_TYPE_ALIAS = Import.from_full_path('typing.TypeAlias')
IMPORT_LITERAL_BACKPORT = Import.from_full_path('typing_extensions.Literal')
IMPORT_SEQUENCE = Import.from_full_path('typing.Sequence')
IMPORT_FROZEN_SET = Import.from_full_path('typing.FrozenSet')
IMPORT_MAPPING = Import.from_full_path('typing.Mapping')
IMPORT_ABC_SEQUENCE = Import.from_full_path('collections.abc.Sequence')
IMPORT_ABC_SET = Import.from_full_path('collections.abc.Set')
IMPORT_ABC_MAPPING = Import.from_full_path('collections.abc.Mapping')
IMPORT_ENUM = Import.from_full_path('enum.Enum')
IMPORT_ANNOTATIONS = Import.from_full_path('__future__.annotations')
IMPORT_DICT = Import.from_full_path('typing.Dict')
IMPORT_DECIMAL = Import.from_full_path('decimal.Decimal')
IMPORT_DATE = Import.from_full_path('datetime.date')
IMPORT_DATETIME = Import.from_full_path('datetime.datetime')
IMPORT_TIMEDELTA = Import.from_full_path('datetime.timedelta')
IMPORT_PATH = Import.from_full_path('pathlib.Path')
IMPORT_TIME = Import.from_full_path('datetime.time')
IMPORT_UUID = Import.from_full_path('uuid.UUID')
IMPORT_PENDULUM_DATE = Import.from_full_path('pendulum.Date')
IMPORT_PENDULUM_DATETIME = Import.from_full_path('pendulum.DateTime')
IMPORT_PENDULUM_DURATION = Import.from_full_path('pendulum.Duration')
IMPORT_PENDULUM_TIME = Import.from_full_path('pendulum.Time')
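# Example (editor's sketch): `Imports` deduplicates appended imports per source
# module and renders one line per `from` source.
#     >>> imports = Imports()
#     >>> imports.append([IMPORT_LIST, IMPORT_OPTIONAL, IMPORT_OPTIONAL])
#     >>> print(imports.dump())
#     from typing import List, Optional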
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/__init__.py ----
from __future__ import annotations
import sys
from typing import TYPE_CHECKING, Callable, Iterable, List, NamedTuple, Optional, Type
from .. import DatetimeClassType, PythonVersion
from ..types import DataTypeManager as DataTypeManagerABC
from .base import ConstraintsBase, DataModel, DataModelFieldBase
if TYPE_CHECKING:
from .. import DataModelType
DEFAULT_TARGET_DATETIME_CLASS = DatetimeClassType.Datetime
DEFAULT_TARGET_PYTHON_VERSION = PythonVersion(
f'{sys.version_info.major}.{sys.version_info.minor}'
)
class DataModelSet(NamedTuple):
data_model: Type[DataModel]
root_model: Type[DataModel]
field_model: Type[DataModelFieldBase]
data_type_manager: Type[DataTypeManagerABC]
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]]
known_third_party: Optional[List[str]] = None
def get_data_model_types(
data_model_type: DataModelType,
target_python_version: PythonVersion = DEFAULT_TARGET_PYTHON_VERSION,
target_datetime_class: DatetimeClassType = DEFAULT_TARGET_DATETIME_CLASS,
) -> DataModelSet:
from .. import DataModelType
from . import dataclass, msgspec, pydantic, pydantic_v2, rootmodel, typed_dict
from .types import DataTypeManager
if data_model_type == DataModelType.PydanticBaseModel:
return DataModelSet(
data_model=pydantic.BaseModel,
root_model=pydantic.CustomRootType,
field_model=pydantic.DataModelField,
data_type_manager=pydantic.DataTypeManager,
dump_resolve_reference_action=pydantic.dump_resolve_reference_action,
)
elif data_model_type == DataModelType.PydanticV2BaseModel:
return DataModelSet(
data_model=pydantic_v2.BaseModel,
root_model=pydantic_v2.RootModel,
field_model=pydantic_v2.DataModelField,
data_type_manager=pydantic_v2.DataTypeManager,
dump_resolve_reference_action=pydantic_v2.dump_resolve_reference_action,
)
elif data_model_type == DataModelType.DataclassesDataclass:
return DataModelSet(
data_model=dataclass.DataClass,
root_model=rootmodel.RootModel,
field_model=dataclass.DataModelField,
data_type_manager=dataclass.DataTypeManager,
dump_resolve_reference_action=None,
)
elif data_model_type == DataModelType.TypingTypedDict:
return DataModelSet(
data_model=(
typed_dict.TypedDict
if target_python_version.has_typed_dict
else typed_dict.TypedDictBackport
),
root_model=rootmodel.RootModel,
field_model=(
typed_dict.DataModelField
if target_python_version.has_typed_dict_non_required
else typed_dict.DataModelFieldBackport
),
data_type_manager=DataTypeManager,
dump_resolve_reference_action=None,
)
elif data_model_type == DataModelType.MsgspecStruct:
return DataModelSet(
data_model=msgspec.Struct,
root_model=msgspec.RootModel,
field_model=msgspec.DataModelField,
data_type_manager=msgspec.DataTypeManager,
dump_resolve_reference_action=None,
known_third_party=['msgspec'],
)
raise ValueError(
f'{data_model_type} is unsupported data model type'
) # pragma: no cover
__all__ = ['ConstraintsBase', 'DataModel', 'DataModelFieldBase']
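# Example (editor's sketch): picking the class set for an output model type.
#     >>> from datamodel_code_generator import DataModelType
#     >>> model_set = get_data_model_types(DataModelType.PydanticV2BaseModel)
#     >>> model_set.data_model.__name__
#     'BaseModel'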
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/base.py ----
from abc import ABC, abstractmethod
from collections import defaultdict
from copy import deepcopy
from functools import lru_cache
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
ClassVar,
DefaultDict,
Dict,
FrozenSet,
Iterator,
List,
Optional,
Set,
Tuple,
TypeVar,
Union,
)
from warnings import warn
from jinja2 import Environment, FileSystemLoader, Template
from pydantic import Field
from datamodel_code_generator.imports import (
IMPORT_ANNOTATED,
IMPORT_ANNOTATED_BACKPORT,
IMPORT_OPTIONAL,
IMPORT_UNION,
Import,
)
from datamodel_code_generator.reference import Reference, _BaseModel
from datamodel_code_generator.types import (
ANY,
NONE,
UNION_PREFIX,
DataType,
Nullable,
chain_as_tuple,
get_optional_type,
)
from datamodel_code_generator.util import PYDANTIC_V2, ConfigDict, cached_property
TEMPLATE_DIR: Path = Path(__file__).parents[0] / 'template'
ALL_MODEL: str = '#all#'
ConstraintsBaseT = TypeVar('ConstraintsBaseT', bound='ConstraintsBase')
class ConstraintsBase(_BaseModel):
unique_items: Optional[bool] = Field(None, alias='uniqueItems')
_exclude_fields: ClassVar[Set[str]] = {'has_constraints'}
if PYDANTIC_V2:
model_config = ConfigDict(
arbitrary_types_allowed=True, ignored_types=(cached_property,)
)
else:
class Config:
arbitrary_types_allowed = True
keep_untouched = (cached_property,)
@cached_property
def has_constraints(self) -> bool:
return any(v is not None for v in self.dict().values())
@staticmethod
def merge_constraints(
a: ConstraintsBaseT, b: ConstraintsBaseT
) -> Optional[ConstraintsBaseT]:
constraints_class = None
if isinstance(a, ConstraintsBase): # pragma: no cover
root_type_field_constraints = {
k: v for k, v in a.dict(by_alias=True).items() if v is not None
}
constraints_class = a.__class__
else:
root_type_field_constraints = {} # pragma: no cover
if isinstance(b, ConstraintsBase): # pragma: no cover
model_field_constraints = {
k: v for k, v in b.dict(by_alias=True).items() if v is not None
}
constraints_class = constraints_class or b.__class__
else:
model_field_constraints = {}
if constraints_class is None or not issubclass(constraints_class, ConstraintsBase):  # pragma: no cover
return None
return constraints_class.parse_obj(
{
**root_type_field_constraints,
**model_field_constraints,
}
)
class DataModelFieldBase(_BaseModel):
name: Optional[str] = None
default: Optional[Any] = None
required: bool = False
alias: Optional[str] = None
data_type: DataType
constraints: Any = None
strip_default_none: bool = False
nullable: Optional[bool] = None
parent: Optional[Any] = None
extras: Dict[str, Any] = {}
use_annotated: bool = False
has_default: bool = False
use_field_description: bool = False
const: bool = False
original_name: Optional[str] = None
use_default_kwarg: bool = False
use_one_literal_as_default: bool = False
_exclude_fields: ClassVar[Set[str]] = {'parent'}
_pass_fields: ClassVar[Set[str]] = {'parent', 'data_type'}
can_have_extra_keys: ClassVar[bool] = True
type_has_null: Optional[bool] = None
if not TYPE_CHECKING:
def __init__(self, **data: Any) -> None:
super().__init__(**data)
if self.data_type.reference or self.data_type.data_types:
self.data_type.parent = self
self.process_const()
def process_const(self) -> None:
if 'const' not in self.extras:
return None
self.default = self.extras['const']
self.const = True
self.required = False
self.nullable = False
@property
def type_hint(self) -> str:
type_hint = self.data_type.type_hint
if not type_hint:
return NONE
elif self.has_default_factory:
return type_hint
elif self.data_type.is_optional and self.data_type.type != ANY:
return type_hint
elif self.nullable is not None:
if self.nullable:
return get_optional_type(type_hint, self.data_type.use_union_operator)
return type_hint
elif self.required:
if self.type_has_null:
return get_optional_type(type_hint, self.data_type.use_union_operator)
return type_hint
elif self.fall_back_to_nullable:
return get_optional_type(type_hint, self.data_type.use_union_operator)
else:
return type_hint
@property
def imports(self) -> Tuple[Import, ...]:
type_hint = self.type_hint
has_union = not self.data_type.use_union_operator and UNION_PREFIX in type_hint
imports: List[Union[Tuple[Import], Iterator[Import]]] = [
(
i
for i in self.data_type.all_imports
if not (not has_union and i == IMPORT_UNION)
)
]
if self.fall_back_to_nullable:
if (
self.nullable or (self.nullable is None and not self.required)
) and not self.data_type.use_union_operator:
imports.append((IMPORT_OPTIONAL,))
else:
if (
self.nullable and not self.data_type.use_union_operator
): # pragma: no cover
imports.append((IMPORT_OPTIONAL,))
if self.use_annotated and self.annotated:
import_annotated = (
IMPORT_ANNOTATED
if self.data_type.python_version.has_annotated_type
else IMPORT_ANNOTATED_BACKPORT
)
imports.append((import_annotated,))
return chain_as_tuple(*imports)
@property
def docstring(self) -> Optional[str]:
if self.use_field_description:
description = self.extras.get('description', None)
if description is not None:
return f'{description}'
return None
@property
def unresolved_types(self) -> FrozenSet[str]:
return self.data_type.unresolved_types
@property
def field(self) -> Optional[str]:
"""for backwards compatibility"""
return None
@property
def method(self) -> Optional[str]:
return None
@property
def represented_default(self) -> str:
return repr(self.default)
@property
def annotated(self) -> Optional[str]:
return None
@property
def has_default_factory(self) -> bool:
return 'default_factory' in self.extras
@property
def fall_back_to_nullable(self) -> bool:
return True
@lru_cache()
def get_template(template_file_path: Path) -> Template:
loader = FileSystemLoader(str(TEMPLATE_DIR / template_file_path.parent))
environment: Environment = Environment(loader=loader)
return environment.get_template(template_file_path.name)
def get_module_path(name: str, file_path: Optional[Path]) -> List[str]:
if file_path:
return [
*file_path.parts[:-1],
file_path.stem,
*name.split('.')[:-1],
]
return name.split('.')[:-1]
def get_module_name(name: str, file_path: Optional[Path]) -> str:
return '.'.join(get_module_path(name, file_path))
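# Example (editor's sketch): the module-path helpers above, with hypothetical inputs.
#     >>> get_module_path('foo.Bar', Path('models/api.py'))
#     ['models', 'api', 'foo']
#     >>> get_module_name('foo.Bar', Path('models/api.py'))
#     'models.api.foo'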
class TemplateBase(ABC):
@property
@abstractmethod
def template_file_path(self) -> Path:
raise NotImplementedError
@cached_property
def template(self) -> Template:
return get_template(self.template_file_path)
@abstractmethod
def render(self) -> str:
raise NotImplementedError
def _render(self, *args: Any, **kwargs: Any) -> str:
return self.template.render(*args, **kwargs)
def __str__(self) -> str:
return self.render()
class BaseClassDataType(DataType): ...
UNDEFINED: Any = object()
class DataModel(TemplateBase, Nullable, ABC):
TEMPLATE_FILE_PATH: ClassVar[str] = ''
BASE_CLASS: ClassVar[str] = ''
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = ()
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
self.keyword_only = keyword_only
if not self.TEMPLATE_FILE_PATH:
raise Exception('TEMPLATE_FILE_PATH is undefined')
self._custom_template_dir: Optional[Path] = custom_template_dir
self.decorators: List[str] = decorators or []
self._additional_imports: List[Import] = []
self.custom_base_class = custom_base_class
if base_classes:
self.base_classes: List[BaseClassDataType] = [
BaseClassDataType(reference=b) for b in base_classes
]
else:
self.set_base_class()
self.file_path: Optional[Path] = path
self.reference: Reference = reference
self.reference.source = self
self.extra_template_data = (
# The supplied defaultdict will either create a new entry,
# or already contain a predefined entry for this type
extra_template_data[self.name]
if extra_template_data is not None
else defaultdict(dict)
)
self.fields = self._validate_fields(fields) if fields else []
for base_class in self.base_classes:
if base_class.reference:
base_class.reference.children.append(self)
if extra_template_data is not None:
all_model_extra_template_data = extra_template_data.get(ALL_MODEL)
if all_model_extra_template_data:
# The deepcopy is needed here to ensure that different models don't
# end up inadvertently sharing state (such as "base_class_kwargs")
self.extra_template_data.update(deepcopy(all_model_extra_template_data))
self.methods: List[str] = methods or []
self.description = description
for field in self.fields:
field.parent = self
self._additional_imports.extend(self.DEFAULT_IMPORTS)
self.default: Any = default
self._nullable: bool = nullable
def _validate_fields(
self, fields: List[DataModelFieldBase]
) -> List[DataModelFieldBase]:
names: Set[str] = set()
unique_fields: List[DataModelFieldBase] = []
for field in fields:
if field.name:
if field.name in names:
warn(f'Field name `{field.name}` is duplicated on {self.name}')
continue
else:
names.add(field.name)
unique_fields.append(field)
return unique_fields
def set_base_class(self) -> None:
base_class = self.custom_base_class or self.BASE_CLASS
if not base_class:
self.base_classes = []
return None
base_class_import = Import.from_full_path(base_class)
self._additional_imports.append(base_class_import)
self.base_classes = [BaseClassDataType.from_import(base_class_import)]
@cached_property
def template_file_path(self) -> Path:
template_file_path = Path(self.TEMPLATE_FILE_PATH)
if self._custom_template_dir is not None:
custom_template_file_path = self._custom_template_dir / template_file_path
if custom_template_file_path.exists():
return custom_template_file_path
return template_file_path
@property
def imports(self) -> Tuple[Import, ...]:
return chain_as_tuple(
(i for f in self.fields for i in f.imports),
self._additional_imports,
)
@property
def reference_classes(self) -> FrozenSet[str]:
return frozenset(
{r.reference.path for r in self.base_classes if r.reference}
| {t for f in self.fields for t in f.unresolved_types}
)
@property
def name(self) -> str:
return self.reference.name
@property
def duplicate_name(self) -> str:
return self.reference.duplicate_name or ''
@property
def base_class(self) -> str:
return ', '.join(b.type_hint for b in self.base_classes)
@staticmethod
def _get_class_name(name: str) -> str:
if '.' in name:
return name.rsplit('.', 1)[-1]
return name
@property
def class_name(self) -> str:
return self._get_class_name(self.name)
@class_name.setter
def class_name(self, class_name: str) -> None:
if '.' in self.reference.name:
self.reference.name = (
f"{self.reference.name.rsplit('.', 1)[0]}.{class_name}"
)
else:
self.reference.name = class_name
@property
def duplicate_class_name(self) -> str:
return self._get_class_name(self.duplicate_name)
@property
def module_path(self) -> List[str]:
return get_module_path(self.name, self.file_path)
@property
def module_name(self) -> str:
return get_module_name(self.name, self.file_path)
@property
def all_data_types(self) -> Iterator[DataType]:
for field in self.fields:
yield from field.data_type.all_data_types
yield from self.base_classes
@property
def nullable(self) -> bool:
return self._nullable
@cached_property
def path(self) -> str:
return self.reference.path
def render(self, *, class_name: Optional[str] = None) -> str:
response = self._render(
class_name=class_name or self.class_name,
fields=self.fields,
decorators=self.decorators,
base_class=self.base_class,
methods=self.methods,
description=self.description,
keyword_only=self.keyword_only,
**self.extra_template_data,
)
return response
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/dataclass.py ----
from pathlib import Path
from typing import (
Any,
ClassVar,
DefaultDict,
Dict,
List,
Optional,
Sequence,
Set,
Tuple,
)
from datamodel_code_generator import DatetimeClassType, PythonVersion
from datamodel_code_generator.imports import (
IMPORT_DATE,
IMPORT_DATETIME,
IMPORT_TIME,
IMPORT_TIMEDELTA,
Import,
)
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.model.imports import IMPORT_DATACLASS, IMPORT_FIELD
from datamodel_code_generator.model.pydantic.base_model import Constraints
from datamodel_code_generator.model.types import DataTypeManager as _DataTypeManager
from datamodel_code_generator.model.types import type_map_factory
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import DataType, StrictTypes, Types, chain_as_tuple
def _has_field_assignment(field: DataModelFieldBase) -> bool:
return bool(field.field) or not (
field.required
or (field.represented_default == 'None' and field.strip_default_none)
)
class DataClass(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'dataclass.jinja2'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_DATACLASS,)
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
super().__init__(
reference=reference,
fields=sorted(fields, key=_has_field_assignment, reverse=False),
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
class DataModelField(DataModelFieldBase):
_FIELD_KEYS: ClassVar[Set[str]] = {
'default_factory',
'init',
'repr',
'hash',
'compare',
'metadata',
'kw_only',
}
constraints: Optional[Constraints] = None
@property
def imports(self) -> Tuple[Import, ...]:
field = self.field
if field and field.startswith('field('):
return chain_as_tuple(super().imports, (IMPORT_FIELD,))
return super().imports
def self_reference(self) -> bool: # pragma: no cover
return isinstance(self.parent, DataClass) and self.parent.reference.path in {
d.reference.path for d in self.data_type.all_data_types if d.reference
}
@property
def field(self) -> Optional[str]:
"""for backwards compatibility"""
result = str(self)
if result == '':
return None
return result
def __str__(self) -> str:
data: Dict[str, Any] = {
k: v for k, v in self.extras.items() if k in self._FIELD_KEYS
}
if self.default != UNDEFINED and self.default is not None:
data['default'] = self.default
if self.required:
data = {
k: v
for k, v in data.items()
if k
not in (
'default',
'default_factory',
)
}
if not data:
return ''
if len(data) == 1 and 'default' in data:
default = data['default']
if isinstance(default, (list, dict)):
return f'field(default_factory=lambda: {repr(default)})'
return repr(default)
kwargs = [
f'{k}={v if k == "default_factory" else repr(v)}' for k, v in data.items()
]
return f'field({", ".join(kwargs)})'
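# Example (editor's sketch): how a dataclass field renders. A mutable default is
# wrapped in `default_factory`; a scalar default falls back to `repr()`.
#     >>> from datamodel_code_generator.types import DataType
#     >>> str(DataModelField(data_type=DataType(), default=[1, 2]))
#     'field(default_factory=lambda: [1, 2])'
#     >>> str(DataModelField(data_type=DataType(), default=3))
#     '3'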
class DataTypeManager(_DataTypeManager):
def __init__(
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
):
super().__init__(
python_version,
use_standard_collections,
use_generic_container_types,
strict_types,
use_non_positive_negative_number_constrained_types,
use_union_operator,
use_pendulum,
target_datetime_class,
)
datetime_map = (
{
Types.time: self.data_type.from_import(IMPORT_TIME),
Types.date: self.data_type.from_import(IMPORT_DATE),
Types.date_time: self.data_type.from_import(IMPORT_DATETIME),
Types.timedelta: self.data_type.from_import(IMPORT_TIMEDELTA),
}
if target_datetime_class is DatetimeClassType.Datetime
else {}
)
self.type_map: Dict[Types, DataType] = {
**type_map_factory(self.data_type),
**datetime_map,
}
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/enum.py ----
from __future__ import annotations
from pathlib import Path
from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Tuple
from datamodel_code_generator.imports import IMPORT_ANY, IMPORT_ENUM, Import
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED, BaseClassDataType
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import DataType, Types
_INT: str = 'int'
_FLOAT: str = 'float'
_BYTES: str = 'bytes'
_STR: str = 'str'
SUBCLASS_BASE_CLASSES: Dict[Types, str] = {
Types.int32: _INT,
Types.int64: _INT,
Types.integer: _INT,
Types.float: _FLOAT,
Types.double: _FLOAT,
Types.number: _FLOAT,
Types.byte: _BYTES,
Types.string: _STR,
}
class Enum(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'Enum.jinja2'
BASE_CLASS: ClassVar[str] = 'enum.Enum'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_ENUM,)
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
type_: Optional[Types] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
):
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
if not base_classes and type_:
base_class = SUBCLASS_BASE_CLASSES.get(type_)
if base_class:
self.base_classes: List[BaseClassDataType] = [
BaseClassDataType(type=base_class),
*self.base_classes,
]
@classmethod
def get_data_type(cls, types: Types, **kwargs: Any) -> DataType:
raise NotImplementedError
def get_member(self, field: DataModelFieldBase) -> Member:
return Member(self, field)
def find_member(self, value: Any) -> Optional[Member]:
repr_value = repr(value)
for field in self.fields: # pragma: no cover
if field.default == repr_value:
return self.get_member(field)
return None # pragma: no cover
@property
def imports(self) -> Tuple[Import, ...]:
return tuple(i for i in super().imports if i != IMPORT_ANY)
class Member:
def __init__(self, enum: Enum, field: DataModelFieldBase) -> None:
self.enum: Enum = enum
self.field: DataModelFieldBase = field
self.alias: Optional[str] = None
def __repr__(self) -> str:
return f'{self.alias or self.enum.name}.{self.field.name}'
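# Example (editor's sketch): `find_member` compares against `repr(value)` because
# generated enum defaults are stored as source literals; the returned `Member`
# renders as a usable reference. The enum and value below are hypothetical.
#     >>> member = color_enum.find_member('red')  # matches a field whose default is "'red'"
#     >>> repr(member)
#     'Color.RED'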
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/imports.py ----
from datamodel_code_generator.imports import Import
IMPORT_DATACLASS = Import.from_full_path('dataclasses.dataclass')
IMPORT_FIELD = Import.from_full_path('dataclasses.field')
IMPORT_CLASSVAR = Import.from_full_path('typing.ClassVar')
IMPORT_TYPED_DICT = Import.from_full_path('typing.TypedDict')
IMPORT_TYPED_DICT_BACKPORT = Import.from_full_path('typing_extensions.TypedDict')
IMPORT_NOT_REQUIRED = Import.from_full_path('typing.NotRequired')
IMPORT_NOT_REQUIRED_BACKPORT = Import.from_full_path('typing_extensions.NotRequired')
IMPORT_MSGSPEC_STRUCT = Import.from_full_path('msgspec.Struct')
IMPORT_MSGSPEC_FIELD = Import.from_full_path('msgspec.field')
IMPORT_MSGSPEC_META = Import.from_full_path('msgspec.Meta')
IMPORT_MSGSPEC_CONVERT = Import.from_full_path('msgspec.convert')
# ---- datamodel_code_generator-0.26.4/datamodel_code_generator/model/msgspec.py ----
from functools import wraps
from pathlib import Path
from typing import (
Any,
ClassVar,
DefaultDict,
Dict,
List,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
)
from pydantic import Field
from datamodel_code_generator import DatetimeClassType, PythonVersion
from datamodel_code_generator.imports import (
IMPORT_DATE,
IMPORT_DATETIME,
IMPORT_TIME,
IMPORT_TIMEDELTA,
Import,
)
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.model.imports import (
IMPORT_CLASSVAR,
IMPORT_MSGSPEC_CONVERT,
IMPORT_MSGSPEC_FIELD,
IMPORT_MSGSPEC_META,
)
from datamodel_code_generator.model.pydantic.base_model import (
Constraints as _Constraints,
)
from datamodel_code_generator.model.rootmodel import RootModel as _RootModel
from datamodel_code_generator.model.types import DataTypeManager as _DataTypeManager
from datamodel_code_generator.model.types import type_map_factory
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import (
DataType,
StrictTypes,
Types,
chain_as_tuple,
get_optional_type,
)
def _has_field_assignment(field: DataModelFieldBase) -> bool:
return not (
field.required
or (field.represented_default == 'None' and field.strip_default_none)
)
DataModelFieldBaseT = TypeVar('DataModelFieldBaseT', bound=DataModelFieldBase)
def import_extender(cls: Type[DataModelFieldBaseT]) -> Type[DataModelFieldBaseT]:
original_imports: property = getattr(cls, 'imports', None) # type: ignore
@wraps(original_imports.fget) # type: ignore
def new_imports(self: DataModelFieldBaseT) -> Tuple[Import, ...]:
extra_imports = []
field = self.field
# TODO: Improve field detection
if field and field.startswith('field('):
extra_imports.append(IMPORT_MSGSPEC_FIELD)
if self.field and 'lambda: convert' in self.field:
extra_imports.append(IMPORT_MSGSPEC_CONVERT)
if self.annotated:
extra_imports.append(IMPORT_MSGSPEC_META)
if self.extras.get('is_classvar'):
extra_imports.append(IMPORT_CLASSVAR)
return chain_as_tuple(original_imports.fget(self), extra_imports) # type: ignore
setattr(cls, 'imports', property(new_imports))
return cls
class RootModel(_RootModel):
pass
class Struct(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'msgspec.jinja2'
BASE_CLASS: ClassVar[str] = 'msgspec.Struct'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = ()
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
super().__init__(
reference=reference,
fields=sorted(fields, key=_has_field_assignment, reverse=False),
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
self.extra_template_data.setdefault('base_class_kwargs', {})
if self.keyword_only:
self.add_base_class_kwarg('kw_only', 'True')
def add_base_class_kwarg(self, name: str, value: Any) -> None:
self.extra_template_data['base_class_kwargs'][name] = value
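# Example (editor's sketch): with `--keyword-only`, the constructor above records
# `kw_only='True'` in `base_class_kwargs`, which the msgspec template is expected
# to render as class keywords, e.g. `class Model(Struct, kw_only=True):`.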
class Constraints(_Constraints):
# To override existing pattern alias
regex: Optional[str] = Field(None, alias='regex')
pattern: Optional[str] = Field(None, alias='pattern')
@import_extender
class DataModelField(DataModelFieldBase):
_FIELD_KEYS: ClassVar[Set[str]] = {
'default',
'default_factory',
}
_META_FIELD_KEYS: ClassVar[Set[str]] = {
'title',
'description',
'gt',
'ge',
'lt',
'le',
'multiple_of',
# 'min_items', # not supported by msgspec
# 'max_items', # not supported by msgspec
'min_length',
'max_length',
'pattern',
'examples',
# 'unique_items', # not supported by msgspec
}
_PARSE_METHOD = 'convert'
_COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le', 'multiple_of'}
constraints: Optional[Constraints] = None
def self_reference(self) -> bool: # pragma: no cover
return isinstance(self.parent, Struct) and self.parent.reference.path in {
d.reference.path for d in self.data_type.all_data_types if d.reference
}
def process_const(self) -> None:
if 'const' not in self.extras:
return None
self.const = True
self.nullable = False
const = self.extras['const']
if self.data_type.type == 'str' and isinstance(
const, str
): # pragma: no cover # Literal supports only str
self.data_type = self.data_type.__class__(literals=[const])
def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:
if value is None or constraint not in self._COMPARE_EXPRESSIONS:
return value
if any(
data_type.type == 'float' for data_type in self.data_type.all_data_types
):
return float(value)
return int(value)
@property
def field(self) -> Optional[str]:
"""for backwards compatibility"""
result = str(self)
if result == '':
return None
return result
def __str__(self) -> str:
data: Dict[str, Any] = {
k: v for k, v in self.extras.items() if k in self._FIELD_KEYS
}
if self.alias:
data['name'] = self.alias
if self.default != UNDEFINED and self.default is not None:
data['default'] = self.default
elif not self.required:
data['default'] = None
if self.required:
data = {
k: v
for k, v in data.items()
if k
not in (
'default',
'default_factory',
)
}
elif self.default and 'default_factory' not in data:
default_factory = self._get_default_as_struct_model()
if default_factory is not None:
data.pop('default')
data['default_factory'] = default_factory
if not data:
return ''
if len(data) == 1 and 'default' in data:
return repr(data['default'])
kwargs = [
f'{k}={v if k == "default_factory" else repr(v)}' for k, v in data.items()
]
return f'field({", ".join(kwargs)})'
@property
def annotated(self) -> Optional[str]:
if not self.use_annotated: # pragma: no cover
return None
data: Dict[str, Any] = {
k: v for k, v in self.extras.items() if k in self._META_FIELD_KEYS
}
if (
self.constraints is not None
and not self.self_reference()
and not self.data_type.strict
):
data = {
**data,
**{
k: self._get_strict_field_constraint_value(k, v)
for k, v in self.constraints.dict().items()
if k in self._META_FIELD_KEYS
},
}
meta_arguments = sorted(
f'{k}={repr(v)}' for k, v in data.items() if v is not None
)
if not meta_arguments:
return None
meta = f'Meta({", ".join(meta_arguments)})'
if not self.required and not self.extras.get('is_classvar'):
type_hint = self.data_type.type_hint
annotated_type = f'Annotated[{type_hint}, {meta}]'
return get_optional_type(annotated_type, self.data_type.use_union_operator)
annotated_type = f'Annotated[{self.type_hint}, {meta}]'
if self.extras.get('is_classvar'):
annotated_type = f'ClassVar[{annotated_type}]'
return annotated_type
def _get_default_as_struct_model(self) -> Optional[str]:
for data_type in self.data_type.data_types or (self.data_type,):
# TODO: Check nested data_types
if data_type.is_dict or self.data_type.is_union:
# TODO: Parse Union and dict model for default
continue # pragma: no cover
elif data_type.is_list and len(data_type.data_types) == 1:
data_type = data_type.data_types[0]
if ( # pragma: no cover
data_type.reference
and (
isinstance(data_type.reference.source, Struct)
or isinstance(data_type.reference.source, RootModel)
)
and isinstance(self.default, list)
):
return f'lambda: {self._PARSE_METHOD}({repr(self.default)}, type=list[{data_type.alias or data_type.reference.source.class_name}])'
elif data_type.reference and isinstance(data_type.reference.source, Struct):
return f'lambda: {self._PARSE_METHOD}({repr(self.default)}, type={data_type.alias or data_type.reference.source.class_name})'
return None
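A hedged sketch of the factory string built above, assuming `_PARSE_METHOD` resolves to `msgspec.convert`:
```python
import msgspec


class Point(msgspec.Struct):
    x: int
    y: int


class Shape(msgspec.Struct):
    # mutable struct defaults are deferred to a default_factory
    points: list[Point] = msgspec.field(
        default_factory=lambda: msgspec.convert([{'x': 0, 'y': 0}], type=list[Point])
    )


print(Shape().points)  # [Point(x=0, y=0)]
```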
class DataTypeManager(_DataTypeManager):
def __init__(
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
):
super().__init__(
python_version,
use_standard_collections,
use_generic_container_types,
strict_types,
use_non_positive_negative_number_constrained_types,
use_union_operator,
use_pendulum,
target_datetime_class,
)
datetime_map = (
{
Types.time: self.data_type.from_import(IMPORT_TIME),
Types.date: self.data_type.from_import(IMPORT_DATE),
Types.date_time: self.data_type.from_import(IMPORT_DATETIME),
Types.timedelta: self.data_type.from_import(IMPORT_TIMEDELTA),
}
if target_datetime_class is DatetimeClassType.Datetime
else {}
)
self.type_map: Dict[Types, DataType] = {
**type_map_factory(self.data_type),
**datetime_map,
}
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/__init__.py 0000644 0000000 0000000 00000002534 00000000000 026746 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import Iterable, Optional
from pydantic import BaseModel as _BaseModel
from .base_model import BaseModel, DataModelField
from .custom_root_type import CustomRootType
from .dataclass import DataClass
from .types import DataTypeManager
def dump_resolve_reference_action(class_names: Iterable[str]) -> str:
return '\n'.join(
f'{class_name}.update_forward_refs()' for class_name in class_names
)
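Restated standalone, the helper's output is concrete:
```python
from typing import Iterable


def dump_resolve_reference_action(class_names: Iterable[str]) -> str:
    return '\n'.join(
        f'{class_name}.update_forward_refs()' for class_name in class_names
    )


print(dump_resolve_reference_action(['Pet', 'Owner']))
# Pet.update_forward_refs()
# Owner.update_forward_refs()
```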
class Config(_BaseModel):
extra: Optional[str] = None
title: Optional[str] = None
allow_population_by_field_name: Optional[bool] = None
allow_extra_fields: Optional[bool] = None
allow_mutation: Optional[bool] = None
arbitrary_types_allowed: Optional[bool] = None
orm_mode: Optional[bool] = None
# def get_validator_template() -> Template:
# template_file_path: Path = Path('pydantic') / 'one_of_validator.jinja2'
# loader = FileSystemLoader(str(TEMPLATE_DIR / template_file_path.parent))
# environment: Environment = Environment(loader=loader, autoescape=True)
# return environment.get_template(template_file_path.name)
#
#
# VALIDATOR_TEMPLATE: Template = get_validator_template()
__all__ = [
'BaseModel',
'DataModelField',
'CustomRootType',
'DataClass',
'dump_resolve_reference_action',
'DataTypeManager',
# 'VALIDATOR_TEMPLATE',
]
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/base_model.py 0000644 0000000 0000000 00000027530 00000000000 027304 0 ustar 00 0000000 0000000 from abc import ABC
from pathlib import Path
from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple
from pydantic import Field
from datamodel_code_generator.imports import Import
from datamodel_code_generator.model import (
ConstraintsBase,
DataModel,
DataModelFieldBase,
)
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.model.pydantic.imports import (
IMPORT_ANYURL,
IMPORT_EXTRA,
IMPORT_FIELD,
)
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import UnionIntFloat, chain_as_tuple
from datamodel_code_generator.util import cached_property
class Constraints(ConstraintsBase):
gt: Optional[UnionIntFloat] = Field(None, alias='exclusiveMinimum')
ge: Optional[UnionIntFloat] = Field(None, alias='minimum')
lt: Optional[UnionIntFloat] = Field(None, alias='exclusiveMaximum')
le: Optional[UnionIntFloat] = Field(None, alias='maximum')
multiple_of: Optional[float] = Field(None, alias='multipleOf')
min_items: Optional[int] = Field(None, alias='minItems')
max_items: Optional[int] = Field(None, alias='maxItems')
min_length: Optional[int] = Field(None, alias='minLength')
max_length: Optional[int] = Field(None, alias='maxLength')
regex: Optional[str] = Field(None, alias='pattern')
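The aliases let `Constraints` be populated straight from JSON Schema keywords; a trimmed-down, runnable restatement (pydantic v1 semantics assumed):
```python
from typing import Optional

from pydantic import BaseModel, Field


class ConstraintsDemo(BaseModel):  # stand-in for the Constraints class above
    ge: Optional[float] = Field(None, alias='minimum')
    lt: Optional[float] = Field(None, alias='exclusiveMaximum')
    regex: Optional[str] = Field(None, alias='pattern')


c = ConstraintsDemo.parse_obj(
    {'minimum': 0, 'exclusiveMaximum': 10, 'pattern': r'^\d+$'}
)
print(c.ge, c.lt, c.regex)  # 0.0 10.0 ^\d+$
```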
class DataModelField(DataModelFieldBase):
_EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {
'alias',
'default',
'const',
'gt',
'ge',
'lt',
'le',
'multiple_of',
'min_items',
'max_items',
'min_length',
'max_length',
'regex',
}
_COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}
constraints: Optional[Constraints] = None
_PARSE_METHOD: ClassVar[str] = 'parse_obj'
@property
def method(self) -> Optional[str]:
return self.validator
@property
def validator(self) -> Optional[str]:
return None
# TODO refactor this method for other validation logic
# from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE
#
# return VALIDATOR_TEMPLATE.render(
# field_name=self.name, types=','.join([t.type_hint for t in self.data_types])
# )
@property
def field(self) -> Optional[str]:
"""for backwards compatibility"""
result = str(self)
if (
self.use_default_kwarg
and not result.startswith('Field(...')
and not result.startswith('Field(default_factory=')
):
# Use `default=` for fields that have a default value so that type
# checkers using @dataclass_transform can infer the field as
# optional in __init__.
result = result.replace('Field(', 'Field(default=')
if result == '':
return None
return result
def self_reference(self) -> bool:
return isinstance(
self.parent, BaseModelBase
) and self.parent.reference.path in {
d.reference.path for d in self.data_type.all_data_types if d.reference
}
def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:
if value is None or constraint not in self._COMPARE_EXPRESSIONS:
return value
if any(
data_type.type == 'float' for data_type in self.data_type.all_data_types
):
return float(value)
return int(value)
def _get_default_as_pydantic_model(self) -> Optional[str]:
for data_type in self.data_type.data_types or (self.data_type,):
# TODO: Check nested data_types
if data_type.is_dict or self.data_type.is_union:
# TODO: Parse Union and dict model for default
continue
elif data_type.is_list and len(data_type.data_types) == 1:
data_type = data_type.data_types[0]
if (
data_type.reference
and isinstance(data_type.reference.source, BaseModelBase)
and isinstance(self.default, list)
): # pragma: no cover
return f'lambda: [{data_type.alias or data_type.reference.source.class_name}.{self._PARSE_METHOD}(v) for v in {repr(self.default)}]'
elif data_type.reference and isinstance(
data_type.reference.source, BaseModelBase
): # pragma: no cover
return f'lambda: {data_type.alias or data_type.reference.source.class_name}.{self._PARSE_METHOD}({repr(self.default)})'
return None
def _process_data_in_str(self, data: Dict[str, Any]) -> None:
if self.const:
data['const'] = True
def _process_annotated_field_arguments(
self, field_arguments: List[str]
) -> List[str]:
return field_arguments
def __str__(self) -> str:
data: Dict[str, Any] = {
k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS
}
if self.alias:
data['alias'] = self.alias
if (
self.constraints is not None
and not self.self_reference()
and not self.data_type.strict
):
data = {
**data,
**(
{}
if any(
d.import_ == IMPORT_ANYURL
for d in self.data_type.all_data_types
)
else {
k: self._get_strict_field_constraint_value(k, v)
for k, v in self.constraints.dict(exclude_unset=True).items()
}
),
}
if self.use_field_description:
data.pop('description', None) # Description is part of field docstring
self._process_data_in_str(data)
discriminator = data.pop('discriminator', None)
if discriminator:
if isinstance(discriminator, str):
data['discriminator'] = discriminator
elif isinstance(discriminator, dict): # pragma: no cover
data['discriminator'] = discriminator['propertyName']
if self.required:
default_factory = None
elif self.default and 'default_factory' not in data:
default_factory = self._get_default_as_pydantic_model()
else:
default_factory = data.pop('default_factory', None)
field_arguments = sorted(
f'{k}={repr(v)}' for k, v in data.items() if v is not None
)
if not field_arguments and not default_factory:
if self.nullable and self.required:
return 'Field(...)' # Field() is for mypy
return ''
if self.use_annotated:
field_arguments = self._process_annotated_field_arguments(field_arguments)
elif self.required:
field_arguments = ['...', *field_arguments]
elif default_factory:
field_arguments = [f'default_factory={default_factory}', *field_arguments]
else:
field_arguments = [f'{repr(self.default)}', *field_arguments]
return f'Field({", ".join(field_arguments)})'
@property
def annotated(self) -> Optional[str]:
if not self.use_annotated or not str(self):
return None
return f'Annotated[{self.type_hint}, {str(self)}]'
@property
def imports(self) -> Tuple[Import, ...]:
if self.field:
return chain_as_tuple(super().imports, (IMPORT_FIELD,))
return super().imports
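Put together, `__str__` and `annotated` aim at output like the following (hedged illustration; model and field names invented, pydantic v1 style):
```python
from typing import Optional

from pydantic import BaseModel, Field


class Pet(BaseModel):
    pet_name: str = Field(..., alias='petName')  # required + alias
    age: int = Field(0, ge=0)                    # default folded into Field()
    owner: Optional[str] = Field(...)            # nullable yet required: 'Field(...) is for mypy'
```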
class BaseModelBase(DataModel, ABC):
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Any]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
methods: List[str] = [field.method for field in fields if field.method]
super().__init__(
fields=fields,
reference=reference,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
@cached_property
def template_file_path(self) -> Path:
# This property is for backward compatibility.
# The current version supports '{custom_template_dir}/BaseModel.jinja',
# but a future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'.
if self._custom_template_dir is not None:
custom_template_file_path = (
self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name
)
if custom_template_file_path.exists():
return custom_template_file_path
return super().template_file_path
class BaseModel(BaseModelBase):
TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'
BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Any]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
config_parameters: Dict[str, Any] = {}
additionalProperties = self.extra_template_data.get('additionalProperties')
allow_extra_fields = self.extra_template_data.get('allow_extra_fields')
if additionalProperties is not None or allow_extra_fields:
config_parameters['extra'] = (
'Extra.allow'
if additionalProperties or allow_extra_fields
else 'Extra.forbid'
)
self._additional_imports.append(IMPORT_EXTRA)
for config_attribute in 'allow_population_by_field_name', 'allow_mutation':
if config_attribute in self.extra_template_data:
config_parameters[config_attribute] = self.extra_template_data[
config_attribute
]
for data_type in self.all_data_types:
if data_type.is_custom_type:
config_parameters['arbitrary_types_allowed'] = True
break
if isinstance(self.extra_template_data.get('config'), dict):
for key, value in self.extra_template_data['config'].items():
config_parameters[key] = value
if config_parameters:
from datamodel_code_generator.model.pydantic import Config
self.extra_template_data['config'] = Config.parse_obj(config_parameters)
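With `additionalProperties: false` in the schema, the parameters collected above render through Config.jinja2 roughly as (pydantic v1 assumed):
```python
from pydantic import BaseModel, Extra


class Pet(BaseModel):
    class Config:
        extra = Extra.forbid

    name: str
```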
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/custom_root_type.py 0000644 0000000 0000000 00000000453 00000000000 030623 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import ClassVar
from datamodel_code_generator.model.pydantic.base_model import BaseModel
class CustomRootType(BaseModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel_root.jinja2'
BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/dataclass.py 0000644 0000000 0000000 00000000650 00000000000 027143 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import ClassVar, Tuple
from datamodel_code_generator.imports import Import
from datamodel_code_generator.model import DataModel
from datamodel_code_generator.model.pydantic.imports import IMPORT_DATACLASS
class DataClass(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/dataclass.jinja2'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_DATACLASS,)
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/imports.py 0000644 0000000 0000000 00000004215 00000000000 026702 0 ustar 00 0000000 0000000 from datamodel_code_generator.imports import Import
IMPORT_CONSTR = Import.from_full_path('pydantic.constr')
IMPORT_CONINT = Import.from_full_path('pydantic.conint')
IMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')
IMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')
IMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')
IMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')
IMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')
IMPORT_NON_POSITIVE_INT = Import.from_full_path('pydantic.NonPositiveInt')
IMPORT_NON_NEGATIVE_INT = Import.from_full_path('pydantic.NonNegativeInt')
IMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')
IMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')
IMPORT_NON_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NonNegativeFloat')
IMPORT_NON_POSITIVE_FLOAT = Import.from_full_path('pydantic.NonPositiveFloat')
IMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')
IMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')
IMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')
IMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')
IMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')
IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')
IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')
IMPORT_IPV4NETWORKS = Import.from_full_path('ipaddress.IPv4Network')
IMPORT_IPV6NETWORKS = Import.from_full_path('ipaddress.IPv6Network')
IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
IMPORT_FIELD = Import.from_full_path('pydantic.Field')
IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
IMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')
IMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')
IMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')
IMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')
IMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic/types.py 0000644 0000000 0000000 00000032447 00000000000 026361 0 ustar 00 0000000 0000000 from __future__ import annotations
from decimal import Decimal
from typing import Any, ClassVar, Dict, Optional, Sequence, Set, Type
from datamodel_code_generator.format import DatetimeClassType, PythonVersion
from datamodel_code_generator.imports import (
IMPORT_ANY,
IMPORT_DATE,
IMPORT_DATETIME,
IMPORT_DECIMAL,
IMPORT_PATH,
IMPORT_PENDULUM_DATE,
IMPORT_PENDULUM_DATETIME,
IMPORT_PENDULUM_DURATION,
IMPORT_PENDULUM_TIME,
IMPORT_TIME,
IMPORT_TIMEDELTA,
IMPORT_UUID,
)
from datamodel_code_generator.model.pydantic.imports import (
IMPORT_ANYURL,
IMPORT_CONBYTES,
IMPORT_CONDECIMAL,
IMPORT_CONFLOAT,
IMPORT_CONINT,
IMPORT_CONSTR,
IMPORT_EMAIL_STR,
IMPORT_IPV4ADDRESS,
IMPORT_IPV4NETWORKS,
IMPORT_IPV6ADDRESS,
IMPORT_IPV6NETWORKS,
IMPORT_NEGATIVE_FLOAT,
IMPORT_NEGATIVE_INT,
IMPORT_NON_NEGATIVE_FLOAT,
IMPORT_NON_NEGATIVE_INT,
IMPORT_NON_POSITIVE_FLOAT,
IMPORT_NON_POSITIVE_INT,
IMPORT_POSITIVE_FLOAT,
IMPORT_POSITIVE_INT,
IMPORT_SECRET_STR,
IMPORT_STRICT_BOOL,
IMPORT_STRICT_BYTES,
IMPORT_STRICT_FLOAT,
IMPORT_STRICT_INT,
IMPORT_STRICT_STR,
IMPORT_UUID1,
IMPORT_UUID2,
IMPORT_UUID3,
IMPORT_UUID4,
IMPORT_UUID5,
)
from datamodel_code_generator.types import DataType, StrictTypes, Types, UnionIntFloat
from datamodel_code_generator.types import DataTypeManager as _DataTypeManager
def type_map_factory(
data_type: Type[DataType],
strict_types: Sequence[StrictTypes],
pattern_key: str,
use_pendulum: bool,
target_datetime_class: DatetimeClassType,
) -> Dict[Types, DataType]:
data_type_int = data_type(type='int')
data_type_float = data_type(type='float')
data_type_str = data_type(type='str')
result = {
Types.integer: data_type_int,
Types.int32: data_type_int,
Types.int64: data_type_int,
Types.number: data_type_float,
Types.float: data_type_float,
Types.double: data_type_float,
Types.decimal: data_type.from_import(IMPORT_DECIMAL),
Types.time: data_type.from_import(IMPORT_TIME),
Types.string: data_type_str,
Types.byte: data_type_str, # base64 encoded string
Types.binary: data_type(type='bytes'),
Types.date: data_type.from_import(IMPORT_DATE),
Types.date_time: data_type.from_import(IMPORT_DATETIME),
Types.timedelta: data_type.from_import(IMPORT_TIMEDELTA),
Types.path: data_type.from_import(IMPORT_PATH),
Types.password: data_type.from_import(IMPORT_SECRET_STR),
Types.email: data_type.from_import(IMPORT_EMAIL_STR),
Types.uuid: data_type.from_import(IMPORT_UUID),
Types.uuid1: data_type.from_import(IMPORT_UUID1),
Types.uuid2: data_type.from_import(IMPORT_UUID2),
Types.uuid3: data_type.from_import(IMPORT_UUID3),
Types.uuid4: data_type.from_import(IMPORT_UUID4),
Types.uuid5: data_type.from_import(IMPORT_UUID5),
Types.uri: data_type.from_import(IMPORT_ANYURL),
Types.hostname: data_type.from_import(
IMPORT_CONSTR,
strict=StrictTypes.str in strict_types,
# https://github.com/horejsek/python-fastjsonschema/blob/61c6997a8348b8df9b22e029ca2ba35ef441fbb8/fastjsonschema/draft04.py#L31
kwargs={
pattern_key: r"r'^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]{0,61}[A-Za-z0-9])\Z'",
**({'strict': True} if StrictTypes.str in strict_types else {}),
},
),
Types.ipv4: data_type.from_import(IMPORT_IPV4ADDRESS),
Types.ipv6: data_type.from_import(IMPORT_IPV6ADDRESS),
Types.ipv4_network: data_type.from_import(IMPORT_IPV4NETWORKS),
Types.ipv6_network: data_type.from_import(IMPORT_IPV6NETWORKS),
Types.boolean: data_type(type='bool'),
Types.object: data_type.from_import(IMPORT_ANY, is_dict=True),
Types.null: data_type(type='None'),
Types.array: data_type.from_import(IMPORT_ANY, is_list=True),
Types.any: data_type.from_import(IMPORT_ANY),
}
if use_pendulum:
result[Types.date] = data_type.from_import(IMPORT_PENDULUM_DATE)
result[Types.date_time] = data_type.from_import(IMPORT_PENDULUM_DATETIME)
result[Types.time] = data_type.from_import(IMPORT_PENDULUM_TIME)
result[Types.timedelta] = data_type.from_import(IMPORT_PENDULUM_DURATION)
return result
def strict_type_map_factory(data_type: Type[DataType]) -> Dict[StrictTypes, DataType]:
return {
StrictTypes.int: data_type.from_import(IMPORT_STRICT_INT, strict=True),
StrictTypes.float: data_type.from_import(IMPORT_STRICT_FLOAT, strict=True),
StrictTypes.bytes: data_type.from_import(IMPORT_STRICT_BYTES, strict=True),
StrictTypes.bool: data_type.from_import(IMPORT_STRICT_BOOL, strict=True),
StrictTypes.str: data_type.from_import(IMPORT_STRICT_STR, strict=True),
}
number_kwargs: Set[str] = {
'exclusiveMinimum',
'minimum',
'exclusiveMaximum',
'maximum',
'multipleOf',
}
string_kwargs: Set[str] = {'minItems', 'maxItems', 'minLength', 'maxLength', 'pattern'}
bytes_kwargs: Set[str] = {'minLength', 'maxLength'}
escape_characters = str.maketrans(
{
"'": r'\'',
'\b': r'\b',
'\f': r'\f',
'\n': r'\n',
'\r': r'\r',
'\t': r'\t',
}
)
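The table above backslash-escapes characters that would otherwise break the single-quoted regex literal embedded in generated code; a two-entry excerpt:
```python
table = str.maketrans({"'": r'\'', '\n': r'\n'})
print("it's\ntwo lines".translate(table))  # it\'s\ntwo lines (now a single line)
```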
class DataTypeManager(_DataTypeManager):
PATTERN_KEY: ClassVar[str] = 'regex'
def __init__(
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
target_datetime_class: Optional[DatetimeClassType] = None,
):
super().__init__(
python_version,
use_standard_collections,
use_generic_container_types,
strict_types,
use_non_positive_negative_number_constrained_types,
use_union_operator,
use_pendulum,
target_datetime_class,
)
self.type_map: Dict[Types, DataType] = self.type_map_factory(
self.data_type,
strict_types=self.strict_types,
pattern_key=self.PATTERN_KEY,
target_datetime_class=target_datetime_class,
)
self.strict_type_map: Dict[StrictTypes, DataType] = strict_type_map_factory(
self.data_type,
)
self.kwargs_schema_to_model: Dict[str, str] = {
'exclusiveMinimum': 'gt',
'minimum': 'ge',
'exclusiveMaximum': 'lt',
'maximum': 'le',
'multipleOf': 'multiple_of',
'minItems': 'min_items',
'maxItems': 'max_items',
'minLength': 'min_length',
'maxLength': 'max_length',
'pattern': self.PATTERN_KEY,
}
def type_map_factory(
self,
data_type: Type[DataType],
strict_types: Sequence[StrictTypes],
pattern_key: str,
target_datetime_class: DatetimeClassType,
) -> Dict[Types, DataType]:
return type_map_factory(
data_type,
strict_types,
pattern_key,
self.use_pendulum,
self.target_datetime_class,
)
def transform_kwargs(
self, kwargs: Dict[str, Any], filter_: Set[str]
) -> Dict[str, str]:
return {
self.kwargs_schema_to_model.get(k, k): v
for (k, v) in kwargs.items()
if v is not None and k in filter_
}
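A standalone restatement of `transform_kwargs`: schema keywords are renamed to pydantic constraint kwargs, while unknown or `None`-valued keys are dropped.
```python
mapping = {'exclusiveMinimum': 'gt', 'maximum': 'le'}  # excerpt of kwargs_schema_to_model
allowed = {'exclusiveMinimum', 'maximum'}

kwargs = {'exclusiveMinimum': 0, 'maximum': None, 'format': 'int32'}
print({mapping.get(k, k): v for k, v in kwargs.items() if v is not None and k in allowed})
# {'gt': 0}
```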
def get_data_int_type(
self,
types: Types,
**kwargs: Any,
) -> DataType:
data_type_kwargs: Dict[str, Any] = self.transform_kwargs(kwargs, number_kwargs)
strict = StrictTypes.int in self.strict_types
if data_type_kwargs:
if not strict:
if data_type_kwargs == {'gt': 0}:
return self.data_type.from_import(IMPORT_POSITIVE_INT)
if data_type_kwargs == {'lt': 0}:
return self.data_type.from_import(IMPORT_NEGATIVE_INT)
if (
data_type_kwargs == {'ge': 0}
and self.use_non_positive_negative_number_constrained_types
):
return self.data_type.from_import(IMPORT_NON_NEGATIVE_INT)
if (
data_type_kwargs == {'le': 0}
and self.use_non_positive_negative_number_constrained_types
):
return self.data_type.from_import(IMPORT_NON_POSITIVE_INT)
kwargs = {k: int(v) for k, v in data_type_kwargs.items()}
if strict:
kwargs['strict'] = True
return self.data_type.from_import(IMPORT_CONINT, kwargs=kwargs)
if strict:
return self.strict_type_map[StrictTypes.int]
return self.type_map[types]
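Illustrative outcomes of the branching above (pydantic v1 type names; schema inputs hypothetical):
```python
from pydantic import BaseModel, PositiveInt, conint


class Demo(BaseModel):
    count: PositiveInt           # schema had only exclusiveMinimum: 0
    rating: conint(ge=1, le=10)  # other constraint combinations -> conint(...)


print(Demo(count=1, rating=5))
```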
def get_data_float_type(
self,
types: Types,
**kwargs: Any,
) -> DataType:
data_type_kwargs = self.transform_kwargs(kwargs, number_kwargs)
strict = StrictTypes.float in self.strict_types
if data_type_kwargs:
if not strict:
if data_type_kwargs == {'gt': 0}:
return self.data_type.from_import(IMPORT_POSITIVE_FLOAT)
if data_type_kwargs == {'lt': 0}:
return self.data_type.from_import(IMPORT_NEGATIVE_FLOAT)
if (
data_type_kwargs == {'ge': 0}
and self.use_non_positive_negative_number_constrained_types
):
return self.data_type.from_import(IMPORT_NON_NEGATIVE_FLOAT)
if (
data_type_kwargs == {'le': 0}
and self.use_non_positive_negative_number_constrained_types
):
return self.data_type.from_import(IMPORT_NON_POSITIVE_FLOAT)
kwargs = {k: float(v) for k, v in data_type_kwargs.items()}
if strict:
kwargs['strict'] = True
return self.data_type.from_import(IMPORT_CONFLOAT, kwargs=kwargs)
if strict:
return self.strict_type_map[StrictTypes.float]
return self.type_map[types]
def get_data_decimal_type(self, types: Types, **kwargs: Any) -> DataType:
data_type_kwargs = self.transform_kwargs(kwargs, number_kwargs)
if data_type_kwargs:
return self.data_type.from_import(
IMPORT_CONDECIMAL,
kwargs={
k: Decimal(str(v) if isinstance(v, UnionIntFloat) else v)
for k, v in data_type_kwargs.items()
},
)
return self.type_map[types]
def get_data_str_type(self, types: Types, **kwargs: Any) -> DataType:
data_type_kwargs: Dict[str, Any] = self.transform_kwargs(kwargs, string_kwargs)
strict = StrictTypes.str in self.strict_types
if data_type_kwargs:
if strict:
data_type_kwargs['strict'] = True
if self.PATTERN_KEY in data_type_kwargs:
escaped_regex = data_type_kwargs[self.PATTERN_KEY].translate(
escape_characters
)
# TODO: remove unneeded escaped characters
data_type_kwargs[self.PATTERN_KEY] = f"r'{escaped_regex}'"
return self.data_type.from_import(IMPORT_CONSTR, kwargs=data_type_kwargs)
if strict:
return self.strict_type_map[StrictTypes.str]
return self.type_map[types]
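Roughly what `get_data_str_type` emits for a constrained string (pydantic v1, where `PATTERN_KEY` is `regex`; schema input hypothetical):
```python
from pydantic import BaseModel, constr


class Demo(BaseModel):
    # schema: {"type": "string", "pattern": "^a+$", "maxLength": 3}
    code: constr(regex=r'^a+$', max_length=3)


print(Demo(code='aa'))
```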
def get_data_bytes_type(self, types: Types, **kwargs: Any) -> DataType:
data_type_kwargs: Dict[str, Any] = self.transform_kwargs(kwargs, bytes_kwargs)
strict = StrictTypes.bytes in self.strict_types
if data_type_kwargs:
if not strict:
return self.data_type.from_import(
IMPORT_CONBYTES, kwargs=data_type_kwargs
)
# conbytes doesn't accept strict argument
# https://github.com/samuelcolvin/pydantic/issues/2489
# if strict:
# data_type_kwargs['strict'] = True
# return self.data_type.from_import(IMPORT_CONBYTES, kwargs=data_type_kwargs)
if strict:
return self.strict_type_map[StrictTypes.bytes]
return self.type_map[types]
def get_data_type(
self,
types: Types,
**kwargs: Any,
) -> DataType:
if types == Types.string:
return self.get_data_str_type(types, **kwargs)
elif types in (Types.int32, Types.int64, Types.integer):
return self.get_data_int_type(types, **kwargs)
elif types in (Types.float, Types.double, Types.number, Types.time):
return self.get_data_float_type(types, **kwargs)
elif types == Types.decimal:
return self.get_data_decimal_type(types, **kwargs)
elif types == Types.binary:
return self.get_data_bytes_type(types, **kwargs)
elif types == Types.boolean:
if StrictTypes.bool in self.strict_types:
return self.strict_type_map[StrictTypes.bool]
return self.type_map[types]
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic_v2/__init__.py 0000644 0000000 0000000 00000001772 00000000000 027360 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import Iterable, Optional, Tuple
from pydantic import BaseModel as _BaseModel
from .base_model import BaseModel, DataModelField, UnionMode
from .root_model import RootModel
from .types import DataTypeManager
def dump_resolve_reference_action(class_names: Iterable[str]) -> str:
return '\n'.join(f'{class_name}.model_rebuild()' for class_name in class_names)
class ConfigDict(_BaseModel):
extra: Optional[str] = None
title: Optional[str] = None
populate_by_name: Optional[bool] = None
allow_extra_fields: Optional[bool] = None
from_attributes: Optional[bool] = None
frozen: Optional[bool] = None
arbitrary_types_allowed: Optional[bool] = None
protected_namespaces: Optional[Tuple[str, ...]] = None
regex_engine: Optional[str] = None
use_enum_values: Optional[bool] = None
__all__ = [
'BaseModel',
'DataModelField',
'RootModel',
'dump_resolve_reference_action',
'DataTypeManager',
'UnionMode',
]
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 011452 x ustar 00 0000000 0000000 28 mtime=1734283557.7040372
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic_v2/base_model.py 0000644 0000000 0000000 00000017541 00000000000 027714 0 ustar 00 0000000 0000000 import re
from enum import Enum
from pathlib import Path
from typing import (
Any,
ClassVar,
DefaultDict,
Dict,
List,
NamedTuple,
Optional,
Set,
)
from pydantic import Field
from typing_extensions import Literal
from datamodel_code_generator.model.base import UNDEFINED, DataModelFieldBase
from datamodel_code_generator.model.pydantic.base_model import (
BaseModelBase,
)
from datamodel_code_generator.model.pydantic.base_model import (
Constraints as _Constraints,
)
from datamodel_code_generator.model.pydantic.base_model import (
DataModelField as DataModelFieldV1,
)
from datamodel_code_generator.model.pydantic_v2.imports import IMPORT_CONFIG_DICT
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.util import field_validator, model_validator
class UnionMode(Enum):
smart = 'smart'
left_to_right = 'left_to_right'
class Constraints(_Constraints):
# To override existing pattern alias
regex: Optional[str] = Field(None, alias='regex')
pattern: Optional[str] = Field(None, alias='pattern')
@model_validator(mode='before')
def validate_min_max_items(cls, values: Any) -> Dict[str, Any]:
if not isinstance(values, dict): # pragma: no cover
return values
min_items = values.pop('minItems', None)
if min_items is not None:
values['minLength'] = min_items
max_items = values.pop('maxItems', None)
if max_items is not None:
values['maxLength'] = max_items
return values
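A standalone restatement of the migration: pydantic v2 expresses array length bounds as `min_length`/`max_length`, so JSON Schema's `minItems`/`maxItems` are renamed before validation.
```python
def migrate(values: dict) -> dict:
    for src, dst in (('minItems', 'minLength'), ('maxItems', 'maxLength')):
        item = values.pop(src, None)
        if item is not None:
            values[dst] = item
    return values


print(migrate({'minItems': 1, 'maxItems': 5}))  # {'minLength': 1, 'maxLength': 5}
```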
class DataModelField(DataModelFieldV1):
_EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {
'alias',
'default',
'gt',
'ge',
'lt',
'le',
'multiple_of',
'min_length',
'max_length',
'pattern',
}
_DEFAULT_FIELD_KEYS: ClassVar[Set[str]] = {
'default',
'default_factory',
'alias',
'alias_priority',
'validation_alias',
'serialization_alias',
'title',
'description',
'examples',
'exclude',
'discriminator',
'json_schema_extra',
'frozen',
'validate_default',
'repr',
'init_var',
'kw_only',
'pattern',
'strict',
'gt',
'ge',
'lt',
'le',
'multiple_of',
'allow_inf_nan',
'max_digits',
'decimal_places',
'min_length',
'max_length',
'union_mode',
}
constraints: Optional[Constraints] = None
_PARSE_METHOD: ClassVar[str] = 'model_validate'
can_have_extra_keys: ClassVar[bool] = False
@field_validator('extras')
def validate_extras(cls, values: Any) -> Dict[str, Any]:
if not isinstance(values, dict): # pragma: no cover
return values
if 'examples' in values:
return values
if 'example' in values:
values['examples'] = [values.pop('example')]
return values
def process_const(self) -> None:
if 'const' not in self.extras:
return None
self.const = True
self.nullable = False
const = self.extras['const']
self.data_type = self.data_type.__class__(literals=[const])
if not self.default:
self.default = const
def _process_data_in_str(self, data: Dict[str, Any]) -> None:
if self.const:
# const is removed in pydantic 2.0
data.pop('const')
# unique_items is not supported in pydantic 2.0
data.pop('unique_items', None)
if 'union_mode' in data:
if self.data_type.is_union:
data['union_mode'] = data.pop('union_mode').value
else:
data.pop('union_mode')
# **extra is not supported in pydantic 2.0
json_schema_extra = {
k: v for k, v in data.items() if k not in self._DEFAULT_FIELD_KEYS
}
if json_schema_extra:
data['json_schema_extra'] = json_schema_extra
for key in json_schema_extra.keys():
data.pop(key)
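A sketch of the folding step above: keys that are not pydantic v2 `Field()` parameters are collected under `json_schema_extra` (key set trimmed for the demo):
```python
DEFAULT_FIELD_KEYS = {'default', 'alias', 'title', 'description', 'json_schema_extra'}


def fold_extras(data: dict) -> dict:
    extra = {k: v for k, v in data.items() if k not in DEFAULT_FIELD_KEYS}
    if extra:
        data['json_schema_extra'] = extra
        for key in extra:
            data.pop(key)
    return data


print(fold_extras({'title': 'Pet', 'x-internal-id': 7}))
# {'title': 'Pet', 'json_schema_extra': {'x-internal-id': 7}}
```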
def _process_annotated_field_arguments(
self,
field_arguments: List[str],
) -> List[str]:
return field_arguments
class ConfigAttribute(NamedTuple):
from_: str
to: str
invert: bool
class BaseModel(BaseModelBase):
TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic_v2/BaseModel.jinja2'
BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
CONFIG_ATTRIBUTES: ClassVar[List[ConfigAttribute]] = [
ConfigAttribute('allow_population_by_field_name', 'populate_by_name', False),
ConfigAttribute('populate_by_name', 'populate_by_name', False),
ConfigAttribute('allow_mutation', 'frozen', True),
ConfigAttribute('frozen', 'frozen', False),
]
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Any]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
config_parameters: Dict[str, Any] = {}
extra = self._get_config_extra()
if extra:
config_parameters['extra'] = extra
for from_, to, invert in self.CONFIG_ATTRIBUTES:
if from_ in self.extra_template_data:
config_parameters[to] = (
not self.extra_template_data[from_]
if invert
else self.extra_template_data[from_]
)
for data_type in self.all_data_types:
if data_type.is_custom_type: # pragma: no cover
config_parameters['arbitrary_types_allowed'] = True
break
for field in self.fields:
# Check if a regex pattern uses lookarounds.
# Depending on the generation configuration, the pattern may end up in two different places.
pattern = (
isinstance(field.constraints, Constraints) and field.constraints.pattern
) or (field.data_type.kwargs or {}).get('pattern')
if pattern and re.search(r'\(\?[=!]', pattern):
config_parameters['regex_engine'] = '"python-re"'
break
if isinstance(self.extra_template_data.get('config'), dict):
for key, value in self.extra_template_data['config'].items():
config_parameters[key] = value
if config_parameters:
from datamodel_code_generator.model.pydantic_v2 import ConfigDict
self.extra_template_data['config'] = ConfigDict.parse_obj(config_parameters)
self._additional_imports.append(IMPORT_CONFIG_DICT)
def _get_config_extra(self) -> Optional[Literal["'allow'", "'forbid'"]]:
additionalProperties = self.extra_template_data.get('additionalProperties')
allow_extra_fields = self.extra_template_data.get('allow_extra_fields')
if additionalProperties is not None or allow_extra_fields:
return (
"'allow'" if additionalProperties or allow_extra_fields else "'forbid'"
)
return None
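The lookaround scan in `__init__` exists because pydantic v2's default Rust regex engine rejects `(?=...)`/`(?!...)`; a quick check of the detection pattern:
```python
import re

pattern = r'^(?!forbidden)\w+$'  # uses a negative lookahead
print(bool(re.search(r'\(\?[=!]', pattern)))  # True -> regex_engine="python-re" is emitted
```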
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic_v2/imports.py 0000644 0000000 0000000 00000000407 00000000000 027310 0 ustar 00 0000000 0000000 from datamodel_code_generator.imports import Import
IMPORT_CONFIG_DICT = Import.from_full_path('pydantic.ConfigDict')
IMPORT_AWARE_DATETIME = Import.from_full_path('pydantic.AwareDatetime')
IMPORT_NAIVE_DATETIME = Import.from_full_path('pydantic.NaiveDatetime')
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic_v2/root_model.py 0000644 0000000 0000000 00000001564 00000000000 027763 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import Any, ClassVar, Literal, Optional
from datamodel_code_generator.model.pydantic_v2.base_model import BaseModel
class RootModel(BaseModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic_v2/RootModel.jinja2'
BASE_CLASS: ClassVar[str] = 'pydantic.RootModel'
def __init__(
self,
**kwargs: Any,
) -> None:
# Remove custom_base_class for Pydantic V2 models; unlike Pydantic V1, such a model
# would not be treated as a root model. A custom_base_class cannot implement both
# BaseModel and RootModel!
if 'custom_base_class' in kwargs:
kwargs.pop('custom_base_class')
super().__init__(**kwargs)
def _get_config_extra(self) -> Optional[Literal["'allow'", "'forbid'"]]:
# PydanticV2 RootModels cannot have extra fields
return None
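The generated class shape this maps to (pydantic v2 assumed; names invented):
```python
from pydantic import RootModel


class Pets(RootModel[list[str]]):
    root: list[str]


print(Pets(['dog', 'cat']).model_dump())  # ['dog', 'cat']
```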
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/pydantic_v2/types.py 0000644 0000000 0000000 00000003601 00000000000 026756 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import ClassVar, Dict, Optional, Sequence, Type
from datamodel_code_generator.format import DatetimeClassType
from datamodel_code_generator.model.pydantic import DataTypeManager as _DataTypeManager
from datamodel_code_generator.model.pydantic.imports import IMPORT_CONSTR
from datamodel_code_generator.model.pydantic_v2.imports import (
IMPORT_AWARE_DATETIME,
IMPORT_NAIVE_DATETIME,
)
from datamodel_code_generator.types import DataType, StrictTypes, Types
class DataTypeManager(_DataTypeManager):
PATTERN_KEY: ClassVar[str] = 'pattern'
def type_map_factory(
self,
data_type: Type[DataType],
strict_types: Sequence[StrictTypes],
pattern_key: str,
target_datetime_class: Optional[DatetimeClassType] = None,
) -> Dict[Types, DataType]:
result = {
**super().type_map_factory(
data_type, strict_types, pattern_key, target_datetime_class
),
Types.hostname: self.data_type.from_import(
IMPORT_CONSTR,
strict=StrictTypes.str in strict_types,
# https://github.com/horejsek/python-fastjsonschema/blob/61c6997a8348b8df9b22e029ca2ba35ef441fbb8/fastjsonschema/draft04.py#L31
kwargs={
pattern_key: r"r'^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]{0,61}[A-Za-z0-9])$'",
**({'strict': True} if StrictTypes.str in strict_types else {}),
},
),
}
if target_datetime_class == DatetimeClassType.Awaredatetime:
result[Types.date_time] = data_type.from_import(IMPORT_AWARE_DATETIME)
if target_datetime_class == DatetimeClassType.Naivedatetime:
result[Types.date_time] = data_type.from_import(IMPORT_NAIVE_DATETIME)
return result
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/rootmodel.py 0000644 0000000 0000000 00000000312 00000000000 025370 0 ustar 00 0000000 0000000 from __future__ import annotations
from typing import ClassVar
from datamodel_code_generator.model import DataModel
class RootModel(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'root.jinja2'
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/scalar.py 0000644 0000000 0000000 00000005044 00000000000 024640 0 ustar 00 0000000 0000000 from __future__ import annotations
from collections import defaultdict
from pathlib import Path
from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Tuple
from datamodel_code_generator.imports import IMPORT_TYPE_ALIAS, Import
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.reference import Reference
_INT: str = 'int'
_FLOAT: str = 'float'
_BOOLEAN: str = 'bool'
_STR: str = 'str'
# default graphql scalar types
DEFAULT_GRAPHQL_SCALAR_TYPE = _STR
DEFAULT_GRAPHQL_SCALAR_TYPES: Dict[str, str] = {
'Boolean': _BOOLEAN,
'String': _STR,
'ID': _STR,
'Int': _INT,
'Float': _FLOAT,
}
class DataTypeScalar(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'Scalar.jinja2'
BASE_CLASS: ClassVar[str] = ''
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_TYPE_ALIAS,)
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
):
extra_template_data = extra_template_data or defaultdict(dict)
scalar_name = reference.name
if scalar_name not in extra_template_data:
extra_template_data[scalar_name] = defaultdict(dict)
# py_type
py_type = extra_template_data[scalar_name].get(
'py_type',
DEFAULT_GRAPHQL_SCALAR_TYPES.get(
reference.name, DEFAULT_GRAPHQL_SCALAR_TYPE
),
)
extra_template_data[scalar_name]['py_type'] = py_type
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/Enum.jinja2 0000644 0000000 0000000 00000000572 00000000000 026640 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
class {{ class_name }}({{ base_class }}):
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- for field in fields %}
{{ field.name }} = {{ field.default }}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/Scalar.jinja2 0000644 0000000 0000000 00000000150 00000000000 027131 0 ustar 00 0000000 0000000 {{ class_name }}: TypeAlias = {{ py_type }}
{%- if description %}
"""
{{ description }}
"""
{%- endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/TypedDict.jinja2 0000644 0000000 0000000 00000000207 00000000000 027620 0 ustar 00 0000000 0000000 {%- if is_functional_syntax %}
{% include 'TypedDictFunction.jinja2' %}
{%- else %}
{% include 'TypedDictClass.jinja2' %}
{%- endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/TypedDictClass.jinja2 0000644 0000000 0000000 00000000571 00000000000 030612 0 ustar 00 0000000 0000000 class {{ class_name }}({{ base_class }}):
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields and not description %}
pass
{%- endif %}
{%- for field in fields %}
{{ field.name }}: {{ field.type_hint }}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/TypedDictFunction.jinja2 0000644 0000000 0000000 00000000501 00000000000 031323 0 ustar 00 0000000 0000000 {%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{{ class_name }} = TypedDict('{{ class_name }}', {
{%- for field in all_fields %}
'{{ field.key }}': {{ field.type_hint }},
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
})
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/Union.jinja2 0000644 0000000 0000000 00000000402 00000000000 027014 0 ustar 00 0000000 0000000 {%- if description %}
# {{ description }}
{%- endif %}
{%- if fields|length > 1 %}
{{ class_name }}: TypeAlias = Union[
{%- for field in fields %}
'{{ field.name }}',
{%- endfor %}
]{% else %}
{{ class_name }}: TypeAlias = {{ fields[0].name }}{% endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/dataclass.jinja2 0000644 0000000 0000000 00000001541 00000000000 027670 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
@dataclass{%- if keyword_only -%}(kw_only=True){%- endif %}
{%- if base_class %}
class {{ class_name }}({{ base_class }}):
{%- else %}
class {{ class_name }}:
{%- endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields and not description %}
pass
{%- endif %}
{%- for field in fields -%}
{%- if field.field %}
{{ field.name }}: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{{ field.name }}: {{ field.type_hint }}
{%- if not (field.required or (field.represented_default == 'None' and field.strip_default_none))
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/msgspec.jinja2 0000644 0000000 0000000 00000002226 00000000000 027373 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
{%- if base_class %}
class {{ class_name }}({{ base_class }}{%- for key, value in (base_class_kwargs|default({})).items() -%}
, {{ key }}={{ value }}
{%- endfor -%}):
{%- else %}
class {{ class_name }}:
{%- endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields and not description %}
pass
{%- endif %}
{%- for field in fields -%}
{%- if not field.annotated and field.field %}
{{ field.name }}: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{%- if field.annotated and not field.field %}
{{ field.name }}: {{ field.annotated }}
{%- elif field.annotated and field.field %}
{{ field.name }}: {{ field.annotated }} = {{ field.field }}
{%- else %}
{{ field.name }}: {{ field.type_hint }}
{%- endif %}
{%- if not field.field and (not field.required or field.data_type.is_optional or field.nullable)
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic/BaseModel.jinja2 0000644 0000000 0000000 00000002074 00000000000 031401 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
class {{ class_name }}({{ base_class }}):{% if comment is defined %} # {{ comment }}{% endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields and not description %}
pass
{%- endif %}
{%- if config %}
{%- filter indent(4) %}
{% include 'Config.jinja2' %}
{%- endfilter %}
{%- endif %}
{%- for field in fields -%}
{%- if not field.annotated and field.field %}
{{ field.name }}: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{%- if field.annotated %}
{{ field.name }}: {{ field.annotated }}
{%- else %}
{{ field.name }}: {{ field.type_hint }}
{%- endif %}
{%- if not (field.required or (field.represented_default == 'None' and field.strip_default_none))
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- for method in methods -%}
{{ method }}
{%- endfor -%}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000213 00000000000 011451 x ustar 00 0000000 0000000 112 path=datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic/BaseModel_root.jinja2
27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic/BaseModel_root.jinj0000644 0000000 0000000 00000001750 00000000000 032221 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
class {{ class_name }}({{ base_class }}):{% if comment is defined %} # {{ comment }}{% endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if config %}
{%- filter indent(4) %}
{% include 'Config.jinja2' %}
{%- endfilter %}
{%- endif %}
{%- if not fields and not description %}
pass
{%- else %}
{%- set field = fields[0] %}
{%- if not field.annotated and field.field %}
__root__: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{%- if field.annotated %}
__root__: {{ field.annotated }}
{%- else %}
__root__: {{ field.type_hint }}
{%- endif %}
{%- if not (field.required or (field.represented_default == 'None' and field.strip_default_none))
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic/Config.jinja2 0000644 0000000 0000000 00000000207 00000000000 030747 0 ustar 00 0000000 0000000 class Config:
{%- for field_name, value in config.dict(exclude_unset=True).items() %}
{{ field_name }} = {{ value }}
{%- endfor %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic/dataclass.jinja2 0000644 0000000 0000000 00000001165 00000000000 031505 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
@dataclass
{%- if base_class %}
class {{ class_name }}({{ base_class }}):
{%- else %}
class {{ class_name }}:
{%- endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields %}
pass
{%- endif %}
{%- for field in fields -%}
{%- if field.default %}
{{ field.name }}: {{ field.type_hint }} = {{field.default}}
{%- else %}
{{ field.name }}: {{ field.type_hint }}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic_v2/BaseModel.jinja20000644 0000000 0000000 00000002137 00000000000 032010 0 ustar 00 0000000 0000000 {% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
class {{ class_name }}({{ base_class }}):{% if comment is defined %} # {{ comment }}{% endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if not fields and not description %}
pass
{%- endif %}
{%- if config %}
{%- filter indent(4) %}
{% include 'ConfigDict.jinja2' %}
{%- endfilter %}
{%- endif %}
{%- for field in fields -%}
{%- if not field.annotated and field.field %}
{{ field.name }}: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{%- if field.annotated %}
{{ field.name }}: {{ field.annotated }}
{%- else %}
{{ field.name }}: {{ field.type_hint }}
{%- endif %}
{%- if not (field.required or (field.represented_default == 'None' and field.strip_default_none)) or field.data_type.is_optional
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- for method in methods -%}
{{ method }}
{%- endfor -%}
{%- endfor -%}
././@PaxHeader 0000000 0000000 0000000 00000000212 00000000000 011450 x ustar 00 0000000 0000000 111 path=datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic_v2/ConfigDict.jinja2
27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic_v2/ConfigDict.jinja0000644 0000000 0000000 00000000225 00000000000 032100 0 ustar 00 0000000 0000000 model_config = ConfigDict(
{%- for field_name, value in config.dict(exclude_unset=True).items() %}
{{ field_name }}={{ value }},
{%- endfor %}
)
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/pydantic_v2/RootModel.jinja20000644 0000000 0000000 00000002307 00000000000 032060 0 ustar 00 0000000 0000000 {%- macro get_type_hint(_fields) -%}
{%- if _fields -%}
{# There will only ever be a single field for RootModel #}
{{- _fields[0].type_hint}}
{%- endif -%}
{%- endmacro -%}
{% for decorator in decorators -%}
{{ decorator }}
{% endfor -%}
class {{ class_name }}({{ base_class }}{%- if fields -%}[{{get_type_hint(fields)}}]{%- endif -%}):{% if comment is defined %} # {{ comment }}{% endif %}
{%- if description %}
"""
{{ description | indent(4) }}
"""
{%- endif %}
{%- if config %}
{%- filter indent(4) %}
{% include 'ConfigDict.jinja2' %}
{%- endfilter %}
{%- endif %}
{%- if not fields and not description %}
pass
{%- else %}
{%- set field = fields[0] %}
{%- if not field.annotated and field.field %}
root: {{ field.type_hint }} = {{ field.field }}
{%- else %}
{%- if field.annotated %}
root: {{ field.annotated }}
{%- else %}
root: {{ field.type_hint }}
{%- endif %}
{%- if not (field.required or (field.represented_default == 'None' and field.strip_default_none))
%} = {{ field.represented_default }}
{%- endif -%}
{%- endif %}
{%- if field.docstring %}
"""
{{ field.docstring | indent(4) }}
"""
{%- endif %}
{%- endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/template/root.jinja2 0000644 0000000 0000000 00000000242 00000000000 026711 0 ustar 00 0000000 0000000 {%- set field = fields[0] %}
{%- if field.annotated %}
{{ class_name }} = {{ field.annotated }}
{%- else %}
{{ class_name }} = {{ field.type_hint }}
{%- endif %}
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1734283557.705037
datamodel_code_generator-0.26.4/datamodel_code_generator/model/typed_dict.py 0000644 0000000 0000000 00000011270 00000000000 025521 0 ustar 00 0000000 0000000 import keyword
from pathlib import Path
from typing import (
Any,
ClassVar,
DefaultDict,
Dict,
Iterator,
List,
Optional,
Tuple,
)
from datamodel_code_generator.imports import Import
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.model.imports import (
IMPORT_NOT_REQUIRED,
IMPORT_NOT_REQUIRED_BACKPORT,
IMPORT_TYPED_DICT,
IMPORT_TYPED_DICT_BACKPORT,
)
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import NOT_REQUIRED_PREFIX
escape_characters = str.maketrans(
{
'\\': r'\\',
"'": r'\'',
'\b': r'\b',
'\f': r'\f',
'\n': r'\n',
'\r': r'\r',
'\t': r'\t',
}
)
def _is_valid_field_name(field: DataModelFieldBase) -> bool:
name = field.original_name or field.name
if name is None: # pragma: no cover
return False
return name.isidentifier() and not keyword.iskeyword(name)
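Why the check matters: keys that are not valid identifiers, or that shadow keywords, cannot appear in class-based TypedDict syntax, which forces the functional form handled below.
```python
import keyword

print('pet-name'.isidentifier())   # False -> TypedDict('Model', {'pet-name': str})
print(keyword.iskeyword('class'))  # True  -> also needs the functional form
```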
class TypedDict(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'TypedDict.jinja2'
BASE_CLASS: ClassVar[str] = 'typing.TypedDict'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_TYPED_DICT,)
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
) -> None:
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
@property
def is_functional_syntax(self) -> bool:
return any(not _is_valid_field_name(f) for f in self.fields)
@property
def all_fields(self) -> Iterator[DataModelFieldBase]:
for base_class in self.base_classes:
if base_class.reference is None: # pragma: no cover
continue
data_model = base_class.reference.source
if not isinstance(data_model, DataModel): # pragma: no cover
continue
if isinstance(data_model, TypedDict): # pragma: no cover
yield from data_model.all_fields
yield from self.fields
def render(self, *, class_name: Optional[str] = None) -> str:
response = self._render(
class_name=class_name or self.class_name,
fields=self.fields,
decorators=self.decorators,
base_class=self.base_class,
methods=self.methods,
description=self.description,
is_functional_syntax=self.is_functional_syntax,
all_fields=self.all_fields,
**self.extra_template_data,
)
return response
class TypedDictBackport(TypedDict):
BASE_CLASS: ClassVar[str] = 'typing_extensions.TypedDict'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_TYPED_DICT_BACKPORT,)
class DataModelField(DataModelFieldBase):
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_NOT_REQUIRED,)
@property
def key(self) -> str:
return (self.original_name or self.name or '').translate( # pragma: no cover
escape_characters
)
@property
def type_hint(self) -> str:
type_hint = super().type_hint
if self._not_required:
return f'{NOT_REQUIRED_PREFIX}{type_hint}]'
return type_hint
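    # Example (sketch, assuming NOT_REQUIRED_PREFIX == 'NotRequired['): an
    # optional key whose base hint is 'str' renders as 'NotRequired[str]'.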
@property
def _not_required(self) -> bool:
return not self.required and isinstance(self.parent, TypedDict)
@property
def fall_back_to_nullable(self) -> bool:
return not self._not_required
@property
def imports(self) -> Tuple[Import, ...]:
return (
*super().imports,
*(self.DEFAULT_IMPORTS if self._not_required else ()),
)
class DataModelFieldBackport(DataModelField):
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_NOT_REQUIRED_BACKPORT,)
datamodel_code_generator-0.26.4/datamodel_code_generator/model/types.py
from typing import Any, Dict, Optional, Sequence, Type
from datamodel_code_generator import DatetimeClassType, PythonVersion
from datamodel_code_generator.imports import (
IMPORT_ANY,
IMPORT_DECIMAL,
IMPORT_TIMEDELTA,
)
from datamodel_code_generator.types import DataType, StrictTypes, Types
from datamodel_code_generator.types import DataTypeManager as _DataTypeManager
def type_map_factory(data_type: Type[DataType]) -> Dict[Types, DataType]:
data_type_int = data_type(type='int')
data_type_float = data_type(type='float')
data_type_str = data_type(type='str')
return {
        # TODO: Should we support a special type such as UUID?
Types.integer: data_type_int,
Types.int32: data_type_int,
Types.int64: data_type_int,
Types.number: data_type_float,
Types.float: data_type_float,
Types.double: data_type_float,
Types.decimal: data_type.from_import(IMPORT_DECIMAL),
Types.time: data_type_str,
Types.string: data_type_str,
Types.byte: data_type_str, # base64 encoded string
Types.binary: data_type(type='bytes'),
Types.date: data_type_str,
Types.date_time: data_type_str,
Types.timedelta: data_type.from_import(IMPORT_TIMEDELTA),
Types.password: data_type_str,
Types.email: data_type_str,
Types.uuid: data_type_str,
Types.uuid1: data_type_str,
Types.uuid2: data_type_str,
Types.uuid3: data_type_str,
Types.uuid4: data_type_str,
Types.uuid5: data_type_str,
Types.uri: data_type_str,
Types.hostname: data_type_str,
Types.ipv4: data_type_str,
Types.ipv6: data_type_str,
Types.ipv4_network: data_type_str,
Types.ipv6_network: data_type_str,
Types.boolean: data_type(type='bool'),
Types.object: data_type.from_import(IMPORT_ANY, is_dict=True),
Types.null: data_type(type='None'),
Types.array: data_type.from_import(IMPORT_ANY, is_list=True),
Types.any: data_type.from_import(IMPORT_ANY),
}
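# Example (sketch, assuming DataType keeps the `type` it was constructed
# with): string-like formats collapse to plain str for these models:
#   type_map_factory(DataType)[Types.uuid].type  ->  'str'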
class DataTypeManager(_DataTypeManager):
def __init__(
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
):
super().__init__(
python_version,
use_standard_collections,
use_generic_container_types,
strict_types,
use_non_positive_negative_number_constrained_types,
use_union_operator,
use_pendulum,
target_datetime_class,
)
self.type_map: Dict[Types, DataType] = type_map_factory(self.data_type)
def get_data_type(
self,
types: Types,
**_: Any,
) -> DataType:
return self.type_map[types]
datamodel_code_generator-0.26.4/datamodel_code_generator/model/union.py
from __future__ import annotations
from pathlib import Path
from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Tuple
from datamodel_code_generator.imports import IMPORT_TYPE_ALIAS, IMPORT_UNION, Import
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.reference import Reference
class DataTypeUnion(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'Union.jinja2'
BASE_CLASS: ClassVar[str] = ''
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (
IMPORT_TYPE_ALIAS,
IMPORT_UNION,
)
def __init__(
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
):
super().__init__(
reference=reference,
fields=fields,
decorators=decorators,
base_classes=base_classes,
custom_base_class=custom_base_class,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
methods=methods,
path=path,
description=description,
default=default,
nullable=nullable,
keyword_only=keyword_only,
)
datamodel_code_generator-0.26.4/datamodel_code_generator/parser/__init__.py
from __future__ import annotations
from enum import Enum
from typing import Callable, Dict, Optional, TypeVar
TK = TypeVar('TK')
TV = TypeVar('TV')
class LiteralType(Enum):
All = 'all'
One = 'one'
class DefaultPutDict(Dict[TK, TV]):
def get_or_put(
self,
key: TK,
default: Optional[TV] = None,
default_factory: Optional[Callable[[TK], TV]] = None,
) -> TV:
if key in self:
return self[key]
elif default: # pragma: no cover
value = self[key] = default
return value
elif default_factory:
value = self[key] = default_factory(key)
return value
        raise ValueError('Neither default nor default_factory was provided')  # pragma: no cover
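# Example (illustrative sketch, not part of the library): lazily populate a
# cached entry with get_or_put:
#   cache: DefaultPutDict[str, str] = DefaultPutDict()
#   cache.get_or_put('pets', default_factory=lambda key: key.upper())  # -> 'PETS'
#   cache.get_or_put('pets', default='ignored')                        # -> 'PETS' (cached)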
__all__ = ['LiteralType']
datamodel_code_generator-0.26.4/datamodel_code_generator/parser/base.py
import re
import sys
from abc import ABC, abstractmethod
from collections import OrderedDict, defaultdict
from itertools import groupby
from pathlib import Path
from typing import (
Any,
Callable,
DefaultDict,
Dict,
Iterable,
Iterator,
List,
Mapping,
NamedTuple,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
)
from urllib.parse import ParseResult
from pydantic import BaseModel
from datamodel_code_generator.format import (
CodeFormatter,
DatetimeClassType,
PythonVersion,
)
from datamodel_code_generator.imports import (
IMPORT_ANNOTATIONS,
IMPORT_LITERAL,
IMPORT_LITERAL_BACKPORT,
Import,
Imports,
)
from datamodel_code_generator.model import dataclass as dataclass_model
from datamodel_code_generator.model import msgspec as msgspec_model
from datamodel_code_generator.model import pydantic as pydantic_model
from datamodel_code_generator.model import pydantic_v2 as pydantic_model_v2
from datamodel_code_generator.model.base import (
ALL_MODEL,
UNDEFINED,
BaseClassDataType,
ConstraintsBase,
DataModel,
DataModelFieldBase,
)
from datamodel_code_generator.model.enum import Enum, Member
from datamodel_code_generator.parser import DefaultPutDict, LiteralType
from datamodel_code_generator.reference import ModelResolver, Reference
from datamodel_code_generator.types import DataType, DataTypeManager, StrictTypes
from datamodel_code_generator.util import Protocol, runtime_checkable
SPECIAL_PATH_FORMAT: str = '#-datamodel-code-generator-#-{}-#-special-#'
def get_special_path(keyword: str, path: List[str]) -> List[str]:
return [*path, SPECIAL_PATH_FORMAT.format(keyword)]
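# Example (sketch):
#   get_special_path('array', ['components', 'schemas'])
#   -> ['components', 'schemas', '#-datamodel-code-generator-#-array-#-special-#']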
escape_characters = str.maketrans(
{
'\\': r'\\',
"'": r'\'',
'\b': r'\b',
'\f': r'\f',
'\n': r'\n',
'\r': r'\r',
'\t': r'\t',
}
)
def to_hashable(item: Any) -> Any:
if isinstance(
item,
(
list,
tuple,
),
):
return tuple(sorted(to_hashable(i) for i in item))
elif isinstance(item, dict):
return tuple(
sorted(
(
k,
to_hashable(v),
)
for k, v in item.items()
)
)
elif isinstance(item, set): # pragma: no cover
return frozenset(to_hashable(i) for i in item)
elif isinstance(item, BaseModel):
return to_hashable(item.dict())
return item
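# Example (sketch): nested containers are normalized into sorted tuples so
# they can serve as cache keys:
#   to_hashable({'b': [2, 1], 'a': 1})  ->  (('a', 1), ('b', (1, 2)))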
def dump_templates(templates: List[DataModel]) -> str:
return '\n\n\n'.join(str(m) for m in templates)
ReferenceMapSet = Dict[str, Set[str]]
SortedDataModels = Dict[str, DataModel]
MAX_RECURSION_COUNT: int = sys.getrecursionlimit()
def sort_data_models(
unsorted_data_models: List[DataModel],
sorted_data_models: Optional[SortedDataModels] = None,
require_update_action_models: Optional[List[str]] = None,
recursion_count: int = MAX_RECURSION_COUNT,
) -> Tuple[List[DataModel], SortedDataModels, List[str]]:
if sorted_data_models is None:
sorted_data_models = OrderedDict()
if require_update_action_models is None:
require_update_action_models = []
sorted_model_count: int = len(sorted_data_models)
unresolved_references: List[DataModel] = []
for model in unsorted_data_models:
if not model.reference_classes:
sorted_data_models[model.path] = model
elif (
model.path in model.reference_classes and len(model.reference_classes) == 1
): # only self-referencing
sorted_data_models[model.path] = model
require_update_action_models.append(model.path)
elif (
not model.reference_classes - {model.path} - set(sorted_data_models)
): # reference classes have been resolved
sorted_data_models[model.path] = model
if model.path in model.reference_classes:
require_update_action_models.append(model.path)
else:
unresolved_references.append(model)
if unresolved_references:
if sorted_model_count != len(sorted_data_models) and recursion_count:
try:
return sort_data_models(
unresolved_references,
sorted_data_models,
require_update_action_models,
recursion_count - 1,
)
except RecursionError: # pragma: no cover
pass
# sort on base_class dependency
while True:
ordered_models: List[Tuple[int, DataModel]] = []
unresolved_reference_model_names = [m.path for m in unresolved_references]
for model in unresolved_references:
indexes = [
unresolved_reference_model_names.index(b.reference.path)
for b in model.base_classes
if b.reference
and b.reference.path in unresolved_reference_model_names
]
if indexes:
ordered_models.append(
(
max(indexes),
model,
)
)
else:
ordered_models.append(
(
-1,
model,
)
)
sorted_unresolved_models = [
m[1] for m in sorted(ordered_models, key=lambda m: m[0])
]
if sorted_unresolved_models == unresolved_references:
break
unresolved_references = sorted_unresolved_models
# circular reference
unsorted_data_model_names = set(unresolved_reference_model_names)
for model in unresolved_references:
unresolved_model = (
model.reference_classes - {model.path} - set(sorted_data_models)
)
base_models = [
getattr(s.reference, 'path', None) for s in model.base_classes
]
update_action_parent = set(require_update_action_models).intersection(
base_models
)
if not unresolved_model:
sorted_data_models[model.path] = model
if update_action_parent:
require_update_action_models.append(model.path)
continue
if not unresolved_model - unsorted_data_model_names:
sorted_data_models[model.path] = model
require_update_action_models.append(model.path)
continue
# unresolved
unresolved_classes = ', '.join(
f'[class: {item.path} references: {item.reference_classes}]'
for item in unresolved_references
)
        raise Exception(f'A Parser cannot resolve classes: {unresolved_classes}.')
return unresolved_references, sorted_data_models, require_update_action_models
def relative(current_module: str, reference: str) -> Tuple[str, str]:
"""Find relative module path."""
current_module_path = current_module.split('.') if current_module else []
*reference_path, name = reference.split('.')
if current_module_path == reference_path:
return '', ''
i = 0
for x, y in zip(current_module_path, reference_path):
if x != y:
break
i += 1
left = '.' * (len(current_module_path) - i)
right = '.'.join(reference_path[i:])
if not left:
left = '.'
if not right:
right = name
elif '.' in right:
extra, right = right.rsplit('.', 1)
left += extra
return left, right
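# Examples (sketch) of the relative-import parts computed above:
#   relative('models.pet', 'models.error.Error')  -> ('.', 'error')
#   relative('models.pet', 'models.pet.Pet')      -> ('', '')
#   relative('', 'models.Pet')                    -> ('.', 'models')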
def exact_import(from_: str, import_: str, short_name: str) -> Tuple[str, str]:
if from_ == len(from_) * '.':
# Prevents "from . import foo" becoming "from ..foo import Foo"
# or "from .. import foo" becoming "from ...foo import Foo"
# when our imported module has the same parent
return f'{from_}{import_}', short_name
return f'{from_}.{import_}', short_name
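# Examples (sketch):
#   exact_import('.', 'models', 'Pet')     -> ('.models', 'Pet')
#   exact_import('.pkg', 'models', 'Pet')  -> ('.pkg.models', 'Pet')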
@runtime_checkable
class Child(Protocol):
@property
def parent(self) -> Optional[Any]:
raise NotImplementedError
T = TypeVar('T')
def get_most_of_parent(value: Any, type_: Optional[Type[T]] = None) -> Optional[T]:
if isinstance(value, Child) and (type_ is None or not isinstance(value, type_)):
return get_most_of_parent(value.parent, type_)
return value
def title_to_class_name(title: str) -> str:
classname = re.sub('[^A-Za-z0-9]+', ' ', title)
classname = ''.join(x for x in classname.title() if not x.isspace())
return classname
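# Example (sketch): non-alphanumeric characters become word boundaries and the
# words are joined in PascalCase:
#   title_to_class_name('pet store-api')  ->  'PetStoreApi'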
def _find_base_classes(model: DataModel) -> List[DataModel]:
return [
b.reference.source
for b in model.base_classes
if b.reference and isinstance(b.reference.source, DataModel)
]
def _find_field(
original_name: str, models: List[DataModel]
) -> Optional[DataModelFieldBase]:
def _find_field_and_base_classes(
model_: DataModel,
) -> Tuple[Optional[DataModelFieldBase], List[DataModel]]:
for field_ in model_.fields:
if field_.original_name == original_name:
return field_, []
return None, _find_base_classes(model_) # pragma: no cover
for model in models:
field, base_models = _find_field_and_base_classes(model)
if field:
return field
models.extend(base_models) # pragma: no cover
return None # pragma: no cover
def _copy_data_types(data_types: List[DataType]) -> List[DataType]:
copied_data_types: List[DataType] = []
for data_type_ in data_types:
if data_type_.reference:
copied_data_types.append(
data_type_.__class__(reference=data_type_.reference)
)
elif data_type_.data_types: # pragma: no cover
copied_data_type = data_type_.copy()
copied_data_type.data_types = _copy_data_types(data_type_.data_types)
copied_data_types.append(copied_data_type)
else:
copied_data_types.append(data_type_.copy())
return copied_data_types
class Result(BaseModel):
body: str
source: Optional[Path] = None
class Source(BaseModel):
path: Path
text: str
@classmethod
def from_path(cls, path: Path, base_path: Path, encoding: str) -> 'Source':
return cls(
path=path.relative_to(base_path),
text=path.read_text(encoding=encoding),
)
class Parser(ABC):
def __init__(
self,
source: Union[str, Path, List[Path], ParseResult],
*,
data_model_type: Type[DataModel] = pydantic_model.BaseModel,
data_model_root_type: Type[DataModel] = pydantic_model.CustomRootType,
data_type_manager_type: Type[DataTypeManager] = pydantic_model.DataTypeManager,
data_model_field_type: Type[DataModelFieldBase] = pydantic_model.DataModelField,
base_class: Optional[str] = None,
additional_imports: Optional[List[str]] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
target_python_version: PythonVersion = PythonVersion.PY_38,
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]] = None,
validation: bool = False,
field_constraints: bool = False,
snake_case_field: bool = False,
strip_default_none: bool = False,
aliases: Optional[Mapping[str, str]] = None,
allow_population_by_field_name: bool = False,
apply_default_values_for_required_fields: bool = False,
allow_extra_fields: bool = False,
force_optional_for_required_fields: bool = False,
class_name: Optional[str] = None,
use_standard_collections: bool = False,
base_path: Optional[Path] = None,
use_schema_description: bool = False,
use_field_description: bool = False,
use_default_kwarg: bool = False,
reuse_model: bool = False,
encoding: str = 'utf-8',
enum_field_as_literal: Optional[LiteralType] = None,
set_default_enum_member: bool = False,
use_subclass_enum: bool = False,
strict_nullable: bool = False,
use_generic_container_types: bool = False,
enable_faux_immutability: bool = False,
remote_text_cache: Optional[DefaultPutDict[str, str]] = None,
disable_appending_item_suffix: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
empty_enum_field_name: Optional[str] = None,
custom_class_name_generator: Optional[
Callable[[str], str]
] = title_to_class_name,
field_extra_keys: Optional[Set[str]] = None,
field_include_all_keys: bool = False,
field_extra_keys_without_x_prefix: Optional[Set[str]] = None,
wrap_string_literal: Optional[bool] = None,
use_title_as_name: bool = False,
use_operation_id_as_name: bool = False,
use_unique_items_as_set: bool = False,
http_headers: Optional[Sequence[Tuple[str, str]]] = None,
http_ignore_tls: bool = False,
use_annotated: bool = False,
use_non_positive_negative_number_constrained_types: bool = False,
original_field_name_delimiter: Optional[str] = None,
use_double_quotes: bool = False,
use_union_operator: bool = False,
allow_responses_without_content: bool = False,
collapse_root_models: bool = False,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
keep_model_order: bool = False,
use_one_literal_as_default: bool = False,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
use_pendulum: bool = False,
http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
treat_dots_as_module: bool = False,
use_exact_imports: bool = False,
default_field_extras: Optional[Dict[str, Any]] = None,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
keyword_only: bool = False,
no_alias: bool = False,
) -> None:
self.keyword_only = keyword_only
self.data_type_manager: DataTypeManager = data_type_manager_type(
python_version=target_python_version,
use_standard_collections=use_standard_collections,
use_generic_container_types=use_generic_container_types,
strict_types=strict_types,
use_union_operator=use_union_operator,
use_pendulum=use_pendulum,
target_datetime_class=target_datetime_class,
)
self.data_model_type: Type[DataModel] = data_model_type
self.data_model_root_type: Type[DataModel] = data_model_root_type
self.data_model_field_type: Type[DataModelFieldBase] = data_model_field_type
self.imports: Imports = Imports(use_exact_imports)
self.use_exact_imports: bool = use_exact_imports
self._append_additional_imports(additional_imports=additional_imports)
self.base_class: Optional[str] = base_class
self.target_python_version: PythonVersion = target_python_version
self.results: List[DataModel] = []
self.dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]] = (
dump_resolve_reference_action
)
self.validation: bool = validation
self.field_constraints: bool = field_constraints
self.snake_case_field: bool = snake_case_field
self.strip_default_none: bool = strip_default_none
self.apply_default_values_for_required_fields: bool = (
apply_default_values_for_required_fields
)
self.force_optional_for_required_fields: bool = (
force_optional_for_required_fields
)
self.use_schema_description: bool = use_schema_description
self.use_field_description: bool = use_field_description
self.use_default_kwarg: bool = use_default_kwarg
self.reuse_model: bool = reuse_model
self.encoding: str = encoding
self.enum_field_as_literal: Optional[LiteralType] = enum_field_as_literal
self.set_default_enum_member: bool = set_default_enum_member
self.use_subclass_enum: bool = use_subclass_enum
self.strict_nullable: bool = strict_nullable
self.use_generic_container_types: bool = use_generic_container_types
self.use_union_operator: bool = use_union_operator
self.enable_faux_immutability: bool = enable_faux_immutability
self.custom_class_name_generator: Optional[Callable[[str], str]] = (
custom_class_name_generator
)
self.field_extra_keys: Set[str] = field_extra_keys or set()
self.field_extra_keys_without_x_prefix: Set[str] = (
field_extra_keys_without_x_prefix or set()
)
self.field_include_all_keys: bool = field_include_all_keys
self.remote_text_cache: DefaultPutDict[str, str] = (
remote_text_cache or DefaultPutDict()
)
self.current_source_path: Optional[Path] = None
self.use_title_as_name: bool = use_title_as_name
self.use_operation_id_as_name: bool = use_operation_id_as_name
self.use_unique_items_as_set: bool = use_unique_items_as_set
if base_path:
self.base_path = base_path
elif isinstance(source, Path):
self.base_path = (
source.absolute() if source.is_dir() else source.absolute().parent
)
else:
self.base_path = Path.cwd()
self.source: Union[str, Path, List[Path], ParseResult] = source
self.custom_template_dir = custom_template_dir
self.extra_template_data: DefaultDict[str, Any] = (
extra_template_data or defaultdict(dict)
)
if allow_population_by_field_name:
self.extra_template_data[ALL_MODEL]['allow_population_by_field_name'] = True
if allow_extra_fields:
self.extra_template_data[ALL_MODEL]['allow_extra_fields'] = True
if enable_faux_immutability:
self.extra_template_data[ALL_MODEL]['allow_mutation'] = False
self.model_resolver = ModelResolver(
base_url=source.geturl() if isinstance(source, ParseResult) else None,
singular_name_suffix='' if disable_appending_item_suffix else None,
aliases=aliases,
empty_field_name=empty_enum_field_name,
snake_case_field=snake_case_field,
custom_class_name_generator=custom_class_name_generator,
base_path=self.base_path,
original_field_name_delimiter=original_field_name_delimiter,
special_field_name_prefix=special_field_name_prefix,
remove_special_field_name_prefix=remove_special_field_name_prefix,
capitalise_enum_members=capitalise_enum_members,
no_alias=no_alias,
)
self.class_name: Optional[str] = class_name
self.wrap_string_literal: Optional[bool] = wrap_string_literal
self.http_headers: Optional[Sequence[Tuple[str, str]]] = http_headers
self.http_query_parameters: Optional[Sequence[Tuple[str, str]]] = (
http_query_parameters
)
self.http_ignore_tls: bool = http_ignore_tls
self.use_annotated: bool = use_annotated
if self.use_annotated and not self.field_constraints: # pragma: no cover
raise Exception(
'`use_annotated=True` has to be used with `field_constraints=True`'
)
self.use_non_positive_negative_number_constrained_types = (
use_non_positive_negative_number_constrained_types
)
self.use_double_quotes = use_double_quotes
self.allow_responses_without_content = allow_responses_without_content
self.collapse_root_models = collapse_root_models
self.capitalise_enum_members = capitalise_enum_members
self.keep_model_order = keep_model_order
self.use_one_literal_as_default = use_one_literal_as_default
self.known_third_party = known_third_party
self.custom_formatter = custom_formatters
self.custom_formatters_kwargs = custom_formatters_kwargs
self.treat_dots_as_module = treat_dots_as_module
self.default_field_extras: Optional[Dict[str, Any]] = default_field_extras
@property
def iter_source(self) -> Iterator[Source]:
if isinstance(self.source, str):
yield Source(path=Path(), text=self.source)
elif isinstance(self.source, Path): # pragma: no cover
if self.source.is_dir():
for path in sorted(self.source.rglob('*'), key=lambda p: p.name):
if path.is_file():
yield Source.from_path(path, self.base_path, self.encoding)
else:
yield Source.from_path(self.source, self.base_path, self.encoding)
elif isinstance(self.source, list): # pragma: no cover
for path in self.source:
yield Source.from_path(path, self.base_path, self.encoding)
else:
yield Source(
path=Path(self.source.path),
text=self.remote_text_cache.get_or_put(
self.source.geturl(), default_factory=self._get_text_from_url
),
)
def _append_additional_imports(
self, additional_imports: Optional[List[str]]
) -> None:
if additional_imports is None:
additional_imports = []
for additional_import_string in additional_imports:
if additional_import_string is None:
continue
new_import = Import.from_full_path(additional_import_string)
self.imports.append(new_import)
def _get_text_from_url(self, url: str) -> str:
from datamodel_code_generator.http import get_body
return self.remote_text_cache.get_or_put(
url,
default_factory=lambda url_: get_body(
url, self.http_headers, self.http_ignore_tls, self.http_query_parameters
),
)
@classmethod
def get_url_path_parts(cls, url: ParseResult) -> List[str]:
return [
f'{url.scheme}://{url.hostname}',
*url.path.split('/')[1:],
]
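    # Example (sketch): for urlparse('https://example.com/schemas/pet.json')
    # this yields ['https://example.com', 'schemas', 'pet.json'].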
@property
def data_type(self) -> Type[DataType]:
return self.data_type_manager.data_type
@abstractmethod
def parse_raw(self) -> None:
raise NotImplementedError
def __delete_duplicate_models(self, models: List[DataModel]) -> None:
model_class_names: Dict[str, DataModel] = {}
model_to_duplicate_models: DefaultDict[DataModel, List[DataModel]] = (
defaultdict(list)
)
for model in models[:]:
if isinstance(model, self.data_model_root_type):
root_data_type = model.fields[0].data_type
                # For backward compatibility, remove the duplicated root model.
if (
root_data_type.reference
and not root_data_type.is_dict
and not root_data_type.is_list
and root_data_type.reference.source in models
and root_data_type.reference.name
== self.model_resolver.get_class_name(
model.reference.original_name, unique=False
).name
):
# Replace referenced duplicate model to original model
for child in model.reference.children[:]:
child.replace_reference(root_data_type.reference)
models.remove(model)
for data_type in model.all_data_types:
if data_type.reference:
data_type.remove_reference()
continue
            # Custom root models cannot be inherited because of a Pydantic restriction
for child in model.reference.children:
# inheritance model
if isinstance(child, DataModel):
for base_class in child.base_classes[:]:
if base_class.reference == model.reference:
child.base_classes.remove(base_class)
if not child.base_classes: # pragma: no cover
child.set_base_class()
class_name = model.duplicate_class_name or model.class_name
if class_name in model_class_names:
model_key = tuple(
to_hashable(v)
for v in (
model.render(class_name=model.duplicate_class_name),
model.imports,
)
)
original_model = model_class_names[class_name]
original_model_key = tuple(
to_hashable(v)
for v in (
original_model.render(
class_name=original_model.duplicate_class_name
),
original_model.imports,
)
)
if model_key == original_model_key:
model_to_duplicate_models[original_model].append(model)
continue
model_class_names[class_name] = model
for model, duplicate_models in model_to_duplicate_models.items():
for duplicate_model in duplicate_models:
for child in duplicate_model.reference.children[:]:
child.replace_reference(model.reference)
models.remove(duplicate_model)
@classmethod
def __replace_duplicate_name_in_module(cls, models: List[DataModel]) -> None:
scoped_model_resolver = ModelResolver(
exclude_names={i.alias or i.import_ for m in models for i in m.imports},
duplicate_name_suffix='Model',
)
model_names: Dict[str, DataModel] = {}
for model in models:
class_name: str = model.class_name
generated_name: str = scoped_model_resolver.add(
[model.path], class_name, unique=True, class_name=True
).name
if class_name != generated_name:
model.class_name = generated_name
model_names[model.class_name] = model
for model in models:
duplicate_name = model.duplicate_class_name
# check only first desired name
if duplicate_name and duplicate_name not in model_names:
del model_names[model.class_name]
model.class_name = duplicate_name
model_names[duplicate_name] = model
def __change_from_import(
self,
models: List[DataModel],
imports: Imports,
scoped_model_resolver: ModelResolver,
init: bool,
) -> None:
for model in models:
scoped_model_resolver.add([model.path], model.class_name)
for model in models:
before_import = model.imports
imports.append(before_import)
for data_type in model.all_data_types:
# To change from/import
if not data_type.reference or data_type.reference.source in models:
                    # No need to import a non-reference model,
                    # and a referenced model in the same file needs no import either.
continue
if isinstance(data_type, BaseClassDataType):
left, right = relative(model.module_name, data_type.full_name)
from_ = (
''.join([left, right])
if left.endswith('.')
else '.'.join([left, right])
)
import_ = data_type.reference.short_name
full_path = from_, import_
else:
from_, import_ = full_path = relative(
model.module_name, data_type.full_name
)
if imports.use_exact: # pragma: no cover
from_, import_ = exact_import(
from_, import_, data_type.reference.short_name
)
import_ = import_.replace('-', '_')
if (
len(model.module_path) > 1
and model.module_path[-1].count('.') > 0
and not self.treat_dots_as_module
):
rel_path_depth = model.module_path[-1].count('.')
from_ = from_[rel_path_depth:]
alias = scoped_model_resolver.add(full_path, import_).name
name = data_type.reference.short_name
if from_ and import_ and alias != name:
data_type.alias = (
alias
if data_type.reference.short_name == import_
else f'{alias}.{name}'
)
if init:
from_ = '.' + from_
imports.append(
Import(
from_=from_,
import_=import_,
alias=alias,
reference_path=data_type.reference.path,
),
)
after_import = model.imports
if before_import != after_import:
imports.append(after_import)
@classmethod
def __extract_inherited_enum(cls, models: List[DataModel]) -> None:
for model in models[:]:
if model.fields:
continue
enums: List[Enum] = []
for base_model in model.base_classes:
if not base_model.reference:
continue
source_model = base_model.reference.source
if isinstance(source_model, Enum):
enums.append(source_model)
if enums:
models.insert(
models.index(model),
enums[0].__class__(
fields=[f for e in enums for f in e.fields],
description=model.description,
reference=model.reference,
),
)
models.remove(model)
def __apply_discriminator_type(
self,
models: List[DataModel],
imports: Imports,
) -> None:
for model in models:
for field in model.fields:
discriminator = field.extras.get('discriminator')
if not discriminator or not isinstance(discriminator, dict):
continue
property_name = discriminator.get('propertyName')
if not property_name: # pragma: no cover
continue
mapping = discriminator.get('mapping', {})
for data_type in field.data_type.data_types:
if not data_type.reference: # pragma: no cover
continue
discriminator_model = data_type.reference.source
if not isinstance( # pragma: no cover
discriminator_model,
(
pydantic_model.BaseModel,
pydantic_model_v2.BaseModel,
dataclass_model.DataClass,
msgspec_model.Struct,
),
):
continue # pragma: no cover
type_names: List[str] = []
def check_paths(
model: Union[
pydantic_model.BaseModel,
pydantic_model_v2.BaseModel,
Reference,
],
mapping: Dict[str, str],
type_names: List[str] = type_names,
) -> None:
"""Helper function to validate paths for a given model."""
for name, path in mapping.items():
if (
model.path.split('#/')[-1] != path.split('#/')[-1]
) and (
path.startswith('#/')
or model.path[:-1] != path.split('/')[-1]
):
t_path = path[str(path).find('/') + 1 :]
t_disc = model.path[: str(model.path).find('#')].lstrip(
'../'
)
t_disc_2 = '/'.join(t_disc.split('/')[1:])
if t_path != t_disc and t_path != t_disc_2:
continue
type_names.append(name)
# Check the main discriminator model path
if mapping:
check_paths(discriminator_model, mapping)
# Check the base_classes if they exist
if len(type_names) == 0:
for base_class in discriminator_model.base_classes:
check_paths(base_class.reference, mapping)
else:
type_names = [discriminator_model.path.split('/')[-1]]
if not type_names: # pragma: no cover
raise RuntimeError(
f'Discriminator type is not found. {data_type.reference.path}'
)
has_one_literal = False
for discriminator_field in discriminator_model.fields:
if (
discriminator_field.original_name
or discriminator_field.name
) != property_name:
continue
literals = discriminator_field.data_type.literals
if len(literals) == 1 and literals[0] == (
type_names[0] if type_names else None
):
has_one_literal = True
if isinstance(
discriminator_model, msgspec_model.Struct
): # pragma: no cover
discriminator_model.add_base_class_kwarg(
'tag_field', f"'{property_name}'"
)
discriminator_model.add_base_class_kwarg(
'tag', discriminator_field.represented_default
)
discriminator_field.extras['is_classvar'] = True
# Found the discriminator field, no need to keep looking
break
for (
field_data_type
) in discriminator_field.data_type.all_data_types:
if field_data_type.reference: # pragma: no cover
field_data_type.remove_reference()
discriminator_field.data_type = self.data_type(
literals=type_names
)
discriminator_field.data_type.parent = discriminator_field
discriminator_field.required = True
imports.append(discriminator_field.imports)
has_one_literal = True
if not has_one_literal:
discriminator_model.fields.append(
self.data_model_field_type(
name=property_name,
data_type=self.data_type(literals=type_names),
required=True,
)
)
literal = (
IMPORT_LITERAL
if self.target_python_version.has_literal_type
else IMPORT_LITERAL_BACKPORT
)
has_imported_literal = any(
literal == import_ # type: ignore [comparison-overlap]
for import_ in imports
)
if has_imported_literal: # pragma: no cover
imports.append(literal)
@classmethod
def _create_set_from_list(cls, data_type: DataType) -> Optional[DataType]:
if data_type.is_list:
new_data_type = data_type.copy()
new_data_type.is_list = False
new_data_type.is_set = True
for data_type_ in new_data_type.data_types:
data_type_.parent = new_data_type
return new_data_type
elif data_type.data_types: # pragma: no cover
for index, nested_data_type in enumerate(data_type.data_types[:]):
set_data_type = cls._create_set_from_list(nested_data_type)
if set_data_type: # pragma: no cover
data_type.data_types[index] = set_data_type
return data_type
return None # pragma: no cover
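    # Example (sketch): for a data type rendered as List[str] on a field with
    # uniqueItems, this returns a copy flagged is_list=False/is_set=True so it
    # renders as Set[str]; nested lists are converted recursively.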
def __replace_unique_list_to_set(self, models: List[DataModel]) -> None:
for model in models:
for model_field in model.fields:
if not self.use_unique_items_as_set:
continue
if not (
model_field.constraints and model_field.constraints.unique_items
):
continue
set_data_type = self._create_set_from_list(model_field.data_type)
if set_data_type: # pragma: no cover
model_field.data_type.parent = None
model_field.data_type = set_data_type
set_data_type.parent = model_field
@classmethod
def __set_reference_default_value_to_field(cls, models: List[DataModel]) -> None:
for model in models:
for model_field in model.fields:
if not model_field.data_type.reference or model_field.has_default:
continue
if isinstance(
model_field.data_type.reference.source, DataModel
): # pragma: no cover
if model_field.data_type.reference.source.default != UNDEFINED:
model_field.default = (
model_field.data_type.reference.source.default
)
def __reuse_model(
self, models: List[DataModel], require_update_action_models: List[str]
) -> None:
if not self.reuse_model:
return None
model_cache: Dict[Tuple[str, ...], Reference] = {}
duplicates = []
for model in models[:]:
model_key = tuple(
to_hashable(v) for v in (model.render(class_name='M'), model.imports)
)
cached_model_reference = model_cache.get(model_key)
if cached_model_reference:
if isinstance(model, Enum):
for child in model.reference.children[:]:
                    # each child is a data_type resolved via this reference
data_model = get_most_of_parent(child)
# TODO: replace reference in all modules
if data_model in models: # pragma: no cover
child.replace_reference(cached_model_reference)
duplicates.append(model)
else:
index = models.index(model)
inherited_model = model.__class__(
fields=[],
base_classes=[cached_model_reference],
description=model.description,
reference=Reference(
name=model.name,
path=model.reference.path + '/reuse',
),
custom_template_dir=model._custom_template_dir,
)
if cached_model_reference.path in require_update_action_models:
require_update_action_models.append(inherited_model.path)
models.insert(index, inherited_model)
models.remove(model)
else:
model_cache[model_key] = model.reference
for duplicate in duplicates:
models.remove(duplicate)
def __collapse_root_models(
self,
models: List[DataModel],
unused_models: List[DataModel],
imports: Imports,
scoped_model_resolver: ModelResolver,
) -> None:
if not self.collapse_root_models:
return None
for model in models:
for model_field in model.fields:
for data_type in model_field.data_type.all_data_types:
reference = data_type.reference
if not reference or not isinstance(
reference.source, self.data_model_root_type
):
continue
# Use root-type as model_field type
root_type_model = reference.source
root_type_field = root_type_model.fields[0]
if (
self.field_constraints
and isinstance(root_type_field.constraints, ConstraintsBase)
and root_type_field.constraints.has_constraints
and any(
d
for d in model_field.data_type.all_data_types
if d.is_dict or d.is_union
)
):
continue # pragma: no cover
# set copied data_type
copied_data_type = root_type_field.data_type.copy()
if isinstance(data_type.parent, self.data_model_field_type):
# for field
# override empty field by root-type field
model_field.extras = {
**root_type_field.extras,
**model_field.extras,
}
model_field.process_const()
if self.field_constraints:
model_field.constraints = ConstraintsBase.merge_constraints(
root_type_field.constraints, model_field.constraints
)
data_type.parent.data_type = copied_data_type
elif data_type.parent.is_list:
if self.field_constraints:
model_field.constraints = ConstraintsBase.merge_constraints(
root_type_field.constraints, model_field.constraints
)
if isinstance(
root_type_field,
pydantic_model.DataModelField,
) and not model_field.extras.get('discriminator'):
discriminator = root_type_field.extras.get('discriminator')
if discriminator:
model_field.extras['discriminator'] = discriminator
data_type.parent.data_types.remove(
data_type
) # pragma: no cover
data_type.parent.data_types.append(copied_data_type)
elif isinstance(data_type.parent, DataType):
# for data_type
data_type_id = id(data_type)
data_type.parent.data_types = [
d
for d in (*data_type.parent.data_types, copied_data_type)
if id(d) != data_type_id
]
else: # pragma: no cover
continue
for d in root_type_field.data_type.data_types:
if d.reference is None:
continue
from_, import_ = full_path = relative(
model.module_name, d.full_name
)
if from_ and import_:
alias = scoped_model_resolver.add(full_path, import_)
d.alias = (
alias.name
if d.reference.short_name == import_
else f'{alias.name}.{d.reference.short_name}'
)
imports.append(
[
Import(
from_=from_,
import_=import_,
alias=alias.name,
reference_path=d.reference.path,
)
]
)
original_field = get_most_of_parent(data_type, DataModelFieldBase)
if original_field: # pragma: no cover
# TODO: Improve detection of reference type
imports.append(original_field.imports)
data_type.remove_reference()
root_type_model.reference.children = [
c
for c in root_type_model.reference.children
if getattr(c, 'parent', None)
]
imports.remove_referenced_imports(root_type_model.path)
if not root_type_model.reference.children:
unused_models.append(root_type_model)
def __set_default_enum_member(
self,
models: List[DataModel],
) -> None:
if not self.set_default_enum_member:
return None
for model in models:
for model_field in model.fields:
if not model_field.default:
continue
for data_type in model_field.data_type.all_data_types:
if data_type.reference and isinstance(
data_type.reference.source, Enum
): # pragma: no cover
if isinstance(model_field.default, list):
enum_member: Union[List[Member], Optional[Member]] = [
e
for e in (
data_type.reference.source.find_member(d)
for d in model_field.default
)
if e
]
else:
enum_member = data_type.reference.source.find_member(
model_field.default
)
if not enum_member:
continue
model_field.default = enum_member
if data_type.alias:
if isinstance(enum_member, list):
for enum_member_ in enum_member:
enum_member_.alias = data_type.alias
else:
enum_member.alias = data_type.alias
def __override_required_field(
self,
models: List[DataModel],
) -> None:
for model in models:
if isinstance(model, (Enum, self.data_model_root_type)):
continue
for index, model_field in enumerate(model.fields[:]):
data_type = model_field.data_type
if (
not model_field.original_name
or data_type.data_types
or data_type.reference
or data_type.type
or data_type.literals
or data_type.dict_key
):
continue
original_field = _find_field(
model_field.original_name, _find_base_classes(model)
)
if not original_field: # pragma: no cover
model.fields.remove(model_field)
continue
copied_original_field = original_field.copy()
if original_field.data_type.reference:
data_type = self.data_type_manager.data_type(
reference=original_field.data_type.reference,
)
elif original_field.data_type.data_types:
data_type = original_field.data_type.copy()
data_type.data_types = _copy_data_types(
original_field.data_type.data_types
)
for data_type_ in data_type.data_types:
data_type_.parent = data_type
else:
data_type = original_field.data_type.copy()
data_type.parent = copied_original_field
copied_original_field.data_type = data_type
copied_original_field.parent = model
copied_original_field.required = True
model.fields.insert(index, copied_original_field)
model.fields.remove(model_field)
def __sort_models(
self,
models: List[DataModel],
imports: Imports,
) -> None:
if not self.keep_model_order:
return
models.sort(key=lambda x: x.class_name)
imported = {i for v in imports.values() for i in v}
model_class_name_baseclasses: Dict[DataModel, Tuple[str, Set[str]]] = {}
for model in models:
class_name = model.class_name
model_class_name_baseclasses[model] = (
class_name,
{b.type_hint for b in model.base_classes if b.reference} - {class_name},
)
changed: bool = True
while changed:
changed = False
resolved = imported.copy()
for i in range(len(models) - 1):
model = models[i]
class_name, baseclasses = model_class_name_baseclasses[model]
if not baseclasses - resolved:
resolved.add(class_name)
continue
models[i], models[i + 1] = models[i + 1], model
changed = True
def __set_one_literal_on_default(self, models: List[DataModel]) -> None:
if not self.use_one_literal_as_default:
return None
for model in models:
for model_field in model.fields:
if not model_field.required or len(model_field.data_type.literals) != 1:
continue
model_field.default = model_field.data_type.literals[0]
model_field.required = False
if model_field.nullable is not True: # pragma: no cover
model_field.nullable = False
@classmethod
def __postprocess_result_modules(cls, results):
def process(input_tuple) -> Tuple[str, ...]:
r = []
for item in input_tuple:
p = item.split('.')
if len(p) > 1:
r.extend(p[:-1])
r.append(p[-1])
else:
r.append(item)
r = r[:-2] + [f'{r[-2]}.{r[-1]}']
return tuple(r)
results = {process(k): v for k, v in results.items()}
init_result = [v for k, v in results.items() if k[-1] == '__init__.py'][0]
folders = {t[:-1] if t[-1].endswith('.py') else t for t in results.keys()}
for folder in folders:
for i in range(len(folder)):
subfolder = folder[: i + 1]
init_file = subfolder + ('__init__.py',)
results.update({init_file: init_result})
return results
def __change_imported_model_name(
self,
models: List[DataModel],
imports: Imports,
scoped_model_resolver: ModelResolver,
) -> None:
imported_names = {
imports.alias[from_][i]
if i in imports.alias[from_] and i != imports.alias[from_][i]
else i
for from_, import_ in imports.items()
for i in import_
}
for model in models:
if model.class_name not in imported_names: # pragma: no cover
continue
model.reference.name = scoped_model_resolver.add( # pragma: no cover
path=get_special_path('imported_name', model.path.split('/')),
original_name=model.reference.name,
unique=True,
class_name=True,
).name
def parse(
self,
with_import: Optional[bool] = True,
format_: Optional[bool] = True,
settings_path: Optional[Path] = None,
) -> Union[str, Dict[Tuple[str, ...], Result]]:
self.parse_raw()
if with_import:
if self.target_python_version != PythonVersion.PY_36:
self.imports.append(IMPORT_ANNOTATIONS)
if format_:
code_formatter: Optional[CodeFormatter] = CodeFormatter(
self.target_python_version,
settings_path,
self.wrap_string_literal,
skip_string_normalization=not self.use_double_quotes,
known_third_party=self.known_third_party,
custom_formatters=self.custom_formatter,
custom_formatters_kwargs=self.custom_formatters_kwargs,
)
else:
code_formatter = None
_, sorted_data_models, require_update_action_models = sort_data_models(
self.results
)
results: Dict[Tuple[str, ...], Result] = {}
def module_key(data_model: DataModel) -> Tuple[str, ...]:
return tuple(data_model.module_path)
def sort_key(data_model: DataModel) -> Tuple[int, Tuple[str, ...]]:
return (len(data_model.module_path), tuple(data_model.module_path))
# process in reverse order to correctly establish module levels
grouped_models = groupby(
sorted(sorted_data_models.values(), key=sort_key, reverse=True),
key=module_key,
)
module_models: List[Tuple[Tuple[str, ...], List[DataModel]]] = []
unused_models: List[DataModel] = []
model_to_module_models: Dict[
DataModel, Tuple[Tuple[str, ...], List[DataModel]]
] = {}
module_to_import: Dict[Tuple[str, ...], Imports] = {}
previous_module = () # type: Tuple[str, ...]
for module, models in ((k, [*v]) for k, v in grouped_models): # type: Tuple[str, ...], List[DataModel]
for model in models:
model_to_module_models[model] = module, models
self.__delete_duplicate_models(models)
self.__replace_duplicate_name_in_module(models)
if len(previous_module) - len(module) > 1:
for parts in range(len(previous_module) - 1, len(module), -1):
module_models.append(
(
previous_module[:parts],
[],
)
)
module_models.append(
(
module,
models,
)
)
previous_module = module
class Processed(NamedTuple):
module: Tuple[str, ...]
models: List[DataModel]
init: bool
imports: Imports
scoped_model_resolver: ModelResolver
processed_models: List[Processed] = []
for module, models in module_models:
imports = module_to_import[module] = Imports(self.use_exact_imports)
init = False
if module:
parent = (*module[:-1], '__init__.py')
if parent not in results:
results[parent] = Result(body='')
if (*module, '__init__.py') in results:
module = (*module, '__init__.py')
init = True
else:
module = (*module[:-1], f'{module[-1]}.py')
module = tuple(part.replace('-', '_') for part in module)
else:
module = ('__init__.py',)
scoped_model_resolver = ModelResolver()
self.__override_required_field(models)
self.__replace_unique_list_to_set(models)
self.__change_from_import(models, imports, scoped_model_resolver, init)
self.__extract_inherited_enum(models)
self.__set_reference_default_value_to_field(models)
self.__reuse_model(models, require_update_action_models)
self.__collapse_root_models(
models, unused_models, imports, scoped_model_resolver
)
self.__set_default_enum_member(models)
self.__sort_models(models, imports)
self.__apply_discriminator_type(models, imports)
self.__set_one_literal_on_default(models)
processed_models.append(
Processed(module, models, init, imports, scoped_model_resolver)
)
for processed_model in processed_models:
for model in processed_model.models:
processed_model.imports.append(model.imports)
for unused_model in unused_models:
module, models = model_to_module_models[unused_model]
if unused_model in models: # pragma: no cover
imports = module_to_import[module]
imports.remove(unused_model.imports)
models.remove(unused_model)
for processed_model in processed_models:
# postprocess imports to remove unused imports.
model_code = str('\n'.join([str(m) for m in processed_model.models]))
unused_imports = [
(from_, import_)
for from_, imports_ in processed_model.imports.items()
for import_ in imports_
if import_ not in model_code
]
for from_, import_ in unused_imports:
processed_model.imports.remove(Import(from_=from_, import_=import_))
for module, models, init, imports, scoped_model_resolver in processed_models:
# process after removing unused models
self.__change_imported_model_name(models, imports, scoped_model_resolver)
for module, models, init, imports, scoped_model_resolver in processed_models:
result: List[str] = []
if models:
if with_import:
result += [str(self.imports), str(imports), '\n']
code = dump_templates(models)
result += [code]
if self.dump_resolve_reference_action is not None:
result += [
'\n',
self.dump_resolve_reference_action(
m.reference.short_name
for m in models
if m.path in require_update_action_models
),
]
if not result and not init:
continue
body = '\n'.join(result)
if code_formatter:
body = code_formatter.format_code(body)
results[module] = Result(
body=body, source=models[0].file_path if models else None
)
# retain existing behaviour
if [*results] == [('__init__.py',)]:
return results[('__init__.py',)].body
results = {tuple(i.replace('-', '_') for i in k): v for k, v in results.items()}
results = (
self.__postprocess_result_modules(results)
if self.treat_dots_as_module
else {
tuple(
(
part[: part.rfind('.')].replace('.', '_')
+ part[part.rfind('.') :]
)
for part in k
): v
for k, v in results.items()
}
)
return results
datamodel_code_generator-0.26.4/datamodel_code_generator/parser/graphql.py
from __future__ import annotations
from pathlib import Path
from typing import (
Any,
Callable,
DefaultDict,
Dict,
Iterable,
Iterator,
List,
Mapping,
Optional,
Sequence,
Set,
Tuple,
Type,
Union,
)
from urllib.parse import ParseResult
from datamodel_code_generator import (
DefaultPutDict,
LiteralType,
PythonVersion,
snooper_to_methods,
)
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model import pydantic as pydantic_model
from datamodel_code_generator.model.enum import Enum
from datamodel_code_generator.model.scalar import DataTypeScalar
from datamodel_code_generator.model.union import DataTypeUnion
from datamodel_code_generator.parser.base import (
DataType,
Parser,
Source,
escape_characters,
)
from datamodel_code_generator.reference import ModelType, Reference
from datamodel_code_generator.types import DataTypeManager, StrictTypes, Types
try:
import graphql
except ImportError: # pragma: no cover
raise Exception(
"Please run `$pip install 'datamodel-code-generator[graphql]`' to generate data-model from a GraphQL schema."
)
from datamodel_code_generator.format import DatetimeClassType
graphql_resolver = graphql.type.introspection.TypeResolvers()
def build_graphql_schema(schema_str: str) -> graphql.GraphQLSchema:
"""Build a graphql schema from a string."""
schema = graphql.build_schema(schema_str)
return graphql.lexicographic_sort_schema(schema)
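# Example (sketch) using graphql-core directly:
#   schema = build_graphql_schema('type Query { hello: String }')
#   schema.query_type.name  ->  'Query'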
@snooper_to_methods(max_variable_length=None)
class GraphQLParser(Parser):
# raw graphql schema as `graphql-core` object
raw_obj: graphql.GraphQLSchema
# all processed graphql objects
# mapper from an object name (unique) to an object
all_graphql_objects: Dict[str, graphql.GraphQLNamedType]
# a reference for each object
# mapper from an object name to his reference
references: Dict[str, Reference] = {}
# mapper from graphql type to all objects with this type
# `graphql.type.introspection.TypeKind` -- an enum with all supported types
# `graphql.GraphQLNamedType` -- base type for each graphql object
# see `graphql-core` for more details
support_graphql_types: Dict[
graphql.type.introspection.TypeKind, List[graphql.GraphQLNamedType]
]
# graphql types order for render
# may be as a parameter in the future
parse_order: List[graphql.type.introspection.TypeKind] = [
graphql.type.introspection.TypeKind.SCALAR,
graphql.type.introspection.TypeKind.ENUM,
graphql.type.introspection.TypeKind.INTERFACE,
graphql.type.introspection.TypeKind.OBJECT,
graphql.type.introspection.TypeKind.INPUT_OBJECT,
graphql.type.introspection.TypeKind.UNION,
]
def __init__(
self,
source: Union[str, Path, ParseResult],
*,
data_model_type: Type[DataModel] = pydantic_model.BaseModel,
data_model_root_type: Type[DataModel] = pydantic_model.CustomRootType,
data_model_scalar_type: Type[DataModel] = DataTypeScalar,
data_model_union_type: Type[DataModel] = DataTypeUnion,
data_type_manager_type: Type[DataTypeManager] = pydantic_model.DataTypeManager,
data_model_field_type: Type[DataModelFieldBase] = pydantic_model.DataModelField,
base_class: Optional[str] = None,
additional_imports: Optional[List[str]] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
target_python_version: PythonVersion = PythonVersion.PY_38,
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]] = None,
validation: bool = False,
field_constraints: bool = False,
snake_case_field: bool = False,
strip_default_none: bool = False,
aliases: Optional[Mapping[str, str]] = None,
allow_population_by_field_name: bool = False,
apply_default_values_for_required_fields: bool = False,
allow_extra_fields: bool = False,
force_optional_for_required_fields: bool = False,
class_name: Optional[str] = None,
use_standard_collections: bool = False,
base_path: Optional[Path] = None,
use_schema_description: bool = False,
use_field_description: bool = False,
use_default_kwarg: bool = False,
reuse_model: bool = False,
encoding: str = 'utf-8',
enum_field_as_literal: Optional[LiteralType] = None,
set_default_enum_member: bool = False,
use_subclass_enum: bool = False,
strict_nullable: bool = False,
use_generic_container_types: bool = False,
enable_faux_immutability: bool = False,
remote_text_cache: Optional[DefaultPutDict[str, str]] = None,
disable_appending_item_suffix: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
empty_enum_field_name: Optional[str] = None,
custom_class_name_generator: Optional[Callable[[str], str]] = None,
field_extra_keys: Optional[Set[str]] = None,
field_include_all_keys: bool = False,
field_extra_keys_without_x_prefix: Optional[Set[str]] = None,
wrap_string_literal: Optional[bool] = None,
use_title_as_name: bool = False,
use_operation_id_as_name: bool = False,
use_unique_items_as_set: bool = False,
http_headers: Optional[Sequence[Tuple[str, str]]] = None,
http_ignore_tls: bool = False,
use_annotated: bool = False,
use_non_positive_negative_number_constrained_types: bool = False,
original_field_name_delimiter: Optional[str] = None,
use_double_quotes: bool = False,
use_union_operator: bool = False,
allow_responses_without_content: bool = False,
collapse_root_models: bool = False,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
keep_model_order: bool = False,
use_one_literal_as_default: bool = False,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
use_pendulum: bool = False,
http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
treat_dots_as_module: bool = False,
use_exact_imports: bool = False,
default_field_extras: Optional[Dict[str, Any]] = None,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
keyword_only: bool = False,
no_alias: bool = False,
) -> None:
super().__init__(
source=source,
data_model_type=data_model_type,
data_model_root_type=data_model_root_type,
data_type_manager_type=data_type_manager_type,
data_model_field_type=data_model_field_type,
base_class=base_class,
additional_imports=additional_imports,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
target_python_version=target_python_version,
dump_resolve_reference_action=dump_resolve_reference_action,
validation=validation,
field_constraints=field_constraints,
snake_case_field=snake_case_field,
strip_default_none=strip_default_none,
aliases=aliases,
allow_population_by_field_name=allow_population_by_field_name,
allow_extra_fields=allow_extra_fields,
apply_default_values_for_required_fields=apply_default_values_for_required_fields,
force_optional_for_required_fields=force_optional_for_required_fields,
class_name=class_name,
use_standard_collections=use_standard_collections,
base_path=base_path,
use_schema_description=use_schema_description,
use_field_description=use_field_description,
use_default_kwarg=use_default_kwarg,
reuse_model=reuse_model,
encoding=encoding,
enum_field_as_literal=enum_field_as_literal,
use_one_literal_as_default=use_one_literal_as_default,
set_default_enum_member=set_default_enum_member,
use_subclass_enum=use_subclass_enum,
strict_nullable=strict_nullable,
use_generic_container_types=use_generic_container_types,
enable_faux_immutability=enable_faux_immutability,
remote_text_cache=remote_text_cache,
disable_appending_item_suffix=disable_appending_item_suffix,
strict_types=strict_types,
empty_enum_field_name=empty_enum_field_name,
custom_class_name_generator=custom_class_name_generator,
field_extra_keys=field_extra_keys,
field_include_all_keys=field_include_all_keys,
field_extra_keys_without_x_prefix=field_extra_keys_without_x_prefix,
wrap_string_literal=wrap_string_literal,
use_title_as_name=use_title_as_name,
use_operation_id_as_name=use_operation_id_as_name,
use_unique_items_as_set=use_unique_items_as_set,
http_headers=http_headers,
http_ignore_tls=http_ignore_tls,
use_annotated=use_annotated,
use_non_positive_negative_number_constrained_types=use_non_positive_negative_number_constrained_types,
original_field_name_delimiter=original_field_name_delimiter,
use_double_quotes=use_double_quotes,
use_union_operator=use_union_operator,
allow_responses_without_content=allow_responses_without_content,
collapse_root_models=collapse_root_models,
special_field_name_prefix=special_field_name_prefix,
remove_special_field_name_prefix=remove_special_field_name_prefix,
capitalise_enum_members=capitalise_enum_members,
keep_model_order=keep_model_order,
known_third_party=known_third_party,
custom_formatters=custom_formatters,
custom_formatters_kwargs=custom_formatters_kwargs,
use_pendulum=use_pendulum,
http_query_parameters=http_query_parameters,
treat_dots_as_module=treat_dots_as_module,
use_exact_imports=use_exact_imports,
default_field_extras=default_field_extras,
target_datetime_class=target_datetime_class,
keyword_only=keyword_only,
no_alias=no_alias,
)
self.data_model_scalar_type = data_model_scalar_type
self.data_model_union_type = data_model_union_type
self.use_standard_collections = use_standard_collections
self.use_union_operator = use_union_operator
def _get_context_source_path_parts(self) -> Iterator[Tuple[Source, List[str]]]:
# TODO (denisart): Temporarily this method duplicates
# the method `datamodel_code_generator.parser.jsonschema.JsonSchemaParser._get_context_source_path_parts`.
if isinstance(self.source, list) or ( # pragma: no cover
isinstance(self.source, Path) and self.source.is_dir()
): # pragma: no cover
self.current_source_path = Path()
self.model_resolver.after_load_files = {
self.base_path.joinpath(s.path).resolve().as_posix()
for s in self.iter_source
}
for source in self.iter_source:
if isinstance(self.source, ParseResult): # pragma: no cover
path_parts = self.get_url_path_parts(self.source)
else:
path_parts = list(source.path.parts)
if self.current_source_path is not None: # pragma: no cover
self.current_source_path = source.path
with self.model_resolver.current_base_path_context(
source.path.parent
), self.model_resolver.current_root_context(path_parts):
yield source, path_parts
def _resolve_types(self, paths: List[str], schema: graphql.GraphQLSchema) -> None:
for type_name, type_ in schema.type_map.items():
if type_name.startswith('__'):
continue
if type_name in ['Query', 'Mutation']:
continue
resolved_type = graphql_resolver.kind(type_, None)
if resolved_type in self.support_graphql_types: # pragma: no cover
self.all_graphql_objects[type_.name] = type_
# TODO: need a special method for each graph type
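# NOTE: str(*paths) below only works for a single-element paths list; more parts would raise TypeError.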
self.references[type_.name] = Reference(
path=f'{str(*paths)}/{resolved_type.value}/{type_.name}',
name=type_.name,
original_name=type_.name,
)
self.support_graphql_types[resolved_type].append(type_)
def _typename_field(self, name: str) -> DataModelFieldBase:
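# Inject a literal `typename__` field (aliased to `__typename`) so generated models expose the GraphQL type name.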
return self.data_model_field_type(
name='typename__',
data_type=DataType(
literals=[name],
use_union_operator=self.use_union_operator,
use_standard_collections=self.use_standard_collections,
),
default=name,
use_annotated=self.use_annotated,
required=False,
alias='__typename',
use_one_literal_as_default=True,
has_default=True,
)
def _get_default(
self,
field: Union[graphql.GraphQLField, graphql.GraphQLInputField],
final_data_type: DataType,
required: bool,
) -> Any:
if isinstance(field, graphql.GraphQLInputField): # pragma: no cover
if field.default_value == graphql.pyutils.Undefined: # pragma: no cover
return None
return field.default_value
# Non-input fields carry no schema default; optional or not, fall back to None.
return None
def parse_scalar(self, scalar_graphql_object: graphql.GraphQLScalarType) -> None:
self.results.append(
self.data_model_scalar_type(
reference=self.references[scalar_graphql_object.name],
fields=[],
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
description=scalar_graphql_object.description,
)
)
def parse_enum(self, enum_object: graphql.GraphQLEnumType) -> None:
enum_fields: List[DataModelFieldBase] = []
exclude_field_names: Set[str] = set()
for value_name, value in enum_object.values.items():
default = (
f"'{value_name.translate(escape_characters)}'"
if isinstance(value_name, str)
else value_name
)
field_name = self.model_resolver.get_valid_field_name(
value_name, excludes=exclude_field_names, model_type=ModelType.ENUM
)
exclude_field_names.add(field_name)
enum_fields.append(
self.data_model_field_type(
name=field_name,
data_type=self.data_type_manager.get_data_type(
Types.string,
),
default=default,
required=True,
strip_default_none=self.strip_default_none,
has_default=True,
use_field_description=value.description is not None,
original_name=None,
)
)
enum = Enum(
reference=self.references[enum_object.name],
fields=enum_fields,
path=self.current_source_path,
description=enum_object.description,
custom_template_dir=self.custom_template_dir,
)
self.results.append(enum)
def parse_field(
self,
field_name: str,
alias: str,
field: Union[graphql.GraphQLField, graphql.GraphQLInputField],
) -> DataModelFieldBase:
final_data_type = DataType(
is_optional=True,
use_union_operator=self.use_union_operator,
use_standard_collections=self.use_standard_collections,
)
data_type = final_data_type
obj = field.type
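# Unwrap GraphQL List/NonNull wrappers, mirroring each list level as a nested DataType;
# NonNull marks the current level as non-optional.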
while graphql.is_list_type(obj) or graphql.is_non_null_type(obj):
if graphql.is_list_type(obj):
data_type.is_list = True
new_data_type = DataType(
is_optional=True,
use_union_operator=self.use_union_operator,
use_standard_collections=self.use_standard_collections,
)
data_type.data_types = [new_data_type]
data_type = new_data_type
elif graphql.is_non_null_type(obj): # pragma: no cover
data_type.is_optional = False
obj = obj.of_type
data_type.type = obj.name
required = (not self.force_optional_for_required_fields) and (
not final_data_type.is_optional
)
default = self._get_default(field, final_data_type, required)
extras = (
{}
if self.default_field_extras is None
else self.default_field_extras.copy()
)
if field.description is not None: # pragma: no cover
extras['description'] = field.description
return self.data_model_field_type(
name=field_name,
default=default,
data_type=final_data_type,
required=required,
extras=extras,
alias=alias,
strip_default_none=self.strip_default_none,
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
use_default_kwarg=self.use_default_kwarg,
original_name=field_name,
has_default=default is not None,
)
def parse_object_like(
self,
obj: Union[
graphql.GraphQLInterfaceType,
graphql.GraphQLObjectType,
graphql.GraphQLInputObjectType,
],
) -> None:
fields = []
exclude_field_names: Set[str] = set()
for field_name, field in obj.fields.items():
field_name_, alias = self.model_resolver.get_valid_field_name_and_alias(
field_name, excludes=exclude_field_names
)
exclude_field_names.add(field_name_)
data_model_field_type = self.parse_field(field_name_, alias, field)
fields.append(data_model_field_type)
fields.append(self._typename_field(obj.name))
base_classes = []
if hasattr(obj, 'interfaces'): # pragma: no cover
base_classes = [self.references[i.name] for i in obj.interfaces]
data_model_type = self.data_model_type(
reference=self.references[obj.name],
fields=fields,
base_classes=base_classes,
custom_base_class=self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=obj.description,
keyword_only=self.keyword_only,
)
self.results.append(data_model_type)
def parse_interface(
self, interface_graphql_object: graphql.GraphQLInterfaceType
) -> None:
self.parse_object_like(interface_graphql_object)
def parse_object(self, graphql_object: graphql.GraphQLObjectType) -> None:
self.parse_object_like(graphql_object)
def parse_input_object(
self, input_graphql_object: graphql.GraphQLInputObjectType
) -> None:
self.parse_object_like(input_graphql_object) # pragma: no cover
def parse_union(self, union_object: graphql.GraphQLUnionType) -> None:
fields = []
for type_ in union_object.types:
fields.append(
self.data_model_field_type(name=type_.name, data_type=DataType())
)
data_model_type = self.data_model_union_type(
reference=self.references[union_object.name],
fields=fields,
custom_base_class=self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=union_object.description,
)
self.results.append(data_model_type)
def parse_raw(self) -> None:
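# Build a schema from each source, bucket its named types by GraphQL kind via _resolve_types,
# then emit models following self.parse_order.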
self.all_graphql_objects = {}
self.references: Dict[str, Reference] = {}
self.support_graphql_types = {
graphql.type.introspection.TypeKind.SCALAR: [],
graphql.type.introspection.TypeKind.ENUM: [],
graphql.type.introspection.TypeKind.UNION: [],
graphql.type.introspection.TypeKind.INTERFACE: [],
graphql.type.introspection.TypeKind.OBJECT: [],
graphql.type.introspection.TypeKind.INPUT_OBJECT: [],
}
# may become a parameter in the future
_mapper_from_graphql_type_to_parser_method = {
graphql.type.introspection.TypeKind.SCALAR: self.parse_scalar,
graphql.type.introspection.TypeKind.ENUM: self.parse_enum,
graphql.type.introspection.TypeKind.INTERFACE: self.parse_interface,
graphql.type.introspection.TypeKind.OBJECT: self.parse_object,
graphql.type.introspection.TypeKind.INPUT_OBJECT: self.parse_input_object,
graphql.type.introspection.TypeKind.UNION: self.parse_union,
}
for source, path_parts in self._get_context_source_path_parts():
schema: graphql.GraphQLSchema = build_graphql_schema(source.text)
self.raw_obj = schema
self._resolve_types(path_parts, schema)
for next_type in self.parse_order:
for obj in self.support_graphql_types[next_type]:
parser_ = _mapper_from_graphql_type_to_parser_method[next_type]
parser_(obj) # type: ignore
datamodel_code_generator-0.26.4/datamodel_code_generator/parser/jsonschema.py
from __future__ import annotations
import enum as _enum
from collections import defaultdict
from contextlib import contextmanager
from functools import lru_cache
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
DefaultDict,
Dict,
Generator,
Iterable,
Iterator,
List,
Mapping,
Optional,
Sequence,
Set,
Tuple,
Type,
Union,
)
from urllib.parse import ParseResult
from warnings import warn
from pydantic import (
Field,
)
from datamodel_code_generator import (
InvalidClassNameError,
load_yaml,
load_yaml_from_path,
snooper_to_methods,
)
from datamodel_code_generator.format import PythonVersion
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model import pydantic as pydantic_model
from datamodel_code_generator.model.base import UNDEFINED, get_module_name
from datamodel_code_generator.model.enum import Enum
from datamodel_code_generator.parser import DefaultPutDict, LiteralType
from datamodel_code_generator.parser.base import (
SPECIAL_PATH_FORMAT,
Parser,
Source,
escape_characters,
get_special_path,
title_to_class_name,
)
from datamodel_code_generator.reference import ModelType, Reference, is_url
from datamodel_code_generator.types import (
DataType,
DataTypeManager,
EmptyDataType,
StrictTypes,
Types,
UnionIntFloat,
)
from datamodel_code_generator.util import (
PYDANTIC_V2,
BaseModel,
cached_property,
field_validator,
model_validator,
)
if PYDANTIC_V2:
from pydantic import ConfigDict
from datamodel_code_generator.format import DatetimeClassType
def get_model_by_path(
schema: Union[Dict[str, Any], List[Any]], keys: Union[List[str], List[int]]
) -> Dict[Any, Any]:
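# Follow JSON-pointer-style keys into a decoded document; numeric keys index into lists,
# and a pointer that resolves to an array (rather than an object) is unsupported.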
model: Union[Dict[Any, Any], List[Any]]
if not keys:
model = schema
elif len(keys) == 1:
if isinstance(schema, dict):
model = schema.get(keys[0], {}) # type: ignore
else: # pragma: no cover
model = schema[int(keys[0])]
elif isinstance(schema, dict):
model = get_model_by_path(schema[keys[0]], keys[1:]) # type: ignore
else:
model = get_model_by_path(schema[int(keys[0])], keys[1:])
if isinstance(model, dict):
return model
raise NotImplementedError( # pragma: no cover
f'Does not support json pointer to array. schema={schema}, key={keys}'
)
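# The mapping below takes (JSON Schema type, format) pairs to internal Types;
# each type's 'default' entry is the fallback when no format is given.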
json_schema_data_formats: Dict[str, Dict[str, Types]] = {
'integer': {
'int32': Types.int32,
'int64': Types.int64,
'default': Types.integer,
'date-time': Types.date_time,
'unix-time': Types.int64,
},
'number': {
'float': Types.float,
'double': Types.double,
'decimal': Types.decimal,
'date-time': Types.date_time,
'time': Types.time,
'default': Types.number,
},
'string': {
'default': Types.string,
'byte': Types.byte, # base64 encoded string
'binary': Types.binary,
'date': Types.date,
'date-time': Types.date_time,
'duration': Types.timedelta,
'time': Types.time,
'password': Types.password,
'path': Types.path,
'email': Types.email,
'idn-email': Types.email,
'uuid': Types.uuid,
'uuid1': Types.uuid1,
'uuid2': Types.uuid2,
'uuid3': Types.uuid3,
'uuid4': Types.uuid4,
'uuid5': Types.uuid5,
'uri': Types.uri,
'uri-reference': Types.string,
'hostname': Types.hostname,
'ipv4': Types.ipv4,
'ipv4-network': Types.ipv4_network,
'ipv6': Types.ipv6,
'ipv6-network': Types.ipv6_network,
'decimal': Types.decimal,
'integer': Types.integer,
},
'boolean': {'default': Types.boolean},
'object': {'default': Types.object},
'null': {'default': Types.null},
'array': {'default': Types.array},
}
class JSONReference(_enum.Enum):
LOCAL = 'LOCAL'
REMOTE = 'REMOTE'
URL = 'URL'
class Discriminator(BaseModel):
propertyName: str
mapping: Optional[Dict[str, str]] = None
class JsonSchemaObject(BaseModel):
if not TYPE_CHECKING:
if PYDANTIC_V2:
@classmethod
def get_fields(cls) -> Dict[str, Any]:
return cls.model_fields
else:
@classmethod
def get_fields(cls) -> Dict[str, Any]:
return cls.__fields__
@classmethod
def model_rebuild(cls) -> None:
cls.update_forward_refs()
__constraint_fields__: Set[str] = {
'exclusiveMinimum',
'minimum',
'exclusiveMaximum',
'maximum',
'multipleOf',
'minItems',
'maxItems',
'minLength',
'maxLength',
'pattern',
'uniqueItems',
}
__extra_key__: str = SPECIAL_PATH_FORMAT.format('extras')
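# JSON Schema draft-04 expressed exclusive bounds as booleans next to maximum/minimum;
# fold that form into the draft-06+ numeric form.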
@model_validator(mode='before')
def validate_exclusive_maximum_and_exclusive_minimum(cls, values: Any) -> Any:
if not isinstance(values, dict):
return values
exclusive_maximum: Union[float, bool, None] = values.get('exclusiveMaximum')
exclusive_minimum: Union[float, bool, None] = values.get('exclusiveMinimum')
if exclusive_maximum is True:
values['exclusiveMaximum'] = values['maximum']
del values['maximum']
elif exclusive_maximum is False:
del values['exclusiveMaximum']
if exclusive_minimum is True:
values['exclusiveMinimum'] = values['minimum']
del values['minimum']
elif exclusive_minimum is False:
del values['exclusiveMinimum']
return values
@field_validator('ref')
def validate_ref(cls, value: Any) -> Any:
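# Normalize $ref fragments: 'doc#/' becomes 'doc#', refs already using '#/' (or beginning
# or ending with '#') pass through, and a bare 'doc#path' becomes 'doc#/path'.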
if isinstance(value, str) and '#' in value:
if value.endswith('#/'):
return value[:-1]
elif '#/' in value or value[0] == '#' or value[-1] == '#':
return value
return value.replace('#', '#/')
return value
items: Union[List[JsonSchemaObject], JsonSchemaObject, bool, None] = None
uniqueItems: Optional[bool] = None
type: Union[str, List[str], None] = None
format: Optional[str] = None
pattern: Optional[str] = None
minLength: Optional[int] = None
maxLength: Optional[int] = None
minimum: Optional[UnionIntFloat] = None
maximum: Optional[UnionIntFloat] = None
minItems: Optional[int] = None
maxItems: Optional[int] = None
multipleOf: Optional[float] = None
exclusiveMaximum: Union[float, bool, None] = None
exclusiveMinimum: Union[float, bool, None] = None
additionalProperties: Union[JsonSchemaObject, bool, None] = None
patternProperties: Optional[Dict[str, JsonSchemaObject]] = None
oneOf: List[JsonSchemaObject] = []
anyOf: List[JsonSchemaObject] = []
allOf: List[JsonSchemaObject] = []
enum: List[Any] = []
writeOnly: Optional[bool] = None
readOnly: Optional[bool] = None
properties: Optional[Dict[str, Union[JsonSchemaObject, bool]]] = None
required: List[str] = []
ref: Optional[str] = Field(default=None, alias='$ref')
nullable: Optional[bool] = False
x_enum_varnames: List[str] = Field(default=[], alias='x-enum-varnames')
description: Optional[str] = None
title: Optional[str] = None
example: Any = None
examples: Any = None
default: Any = None
id: Optional[str] = Field(default=None, alias='$id')
custom_type_path: Optional[str] = Field(default=None, alias='customTypePath')
custom_base_path: Optional[str] = Field(default=None, alias='customBasePath')
extras: Dict[str, Any] = Field(alias=__extra_key__, default_factory=dict)
discriminator: Union[Discriminator, str, None] = None
if PYDANTIC_V2:
model_config = ConfigDict(
arbitrary_types_allowed=True,
ignored_types=(cached_property,),
)
else:
class Config:
arbitrary_types_allowed = True
keep_untouched = (cached_property,)
smart_casts = True
if not TYPE_CHECKING:
def __init__(self, **data: Any) -> None:
super().__init__(**data)
self.extras = {k: v for k, v in data.items() if k not in EXCLUDE_FIELD_KEYS}
if 'const' in data.get(self.__extra_key__, {}):
self.extras['const'] = data[self.__extra_key__]['const']
@cached_property
def is_object(self) -> bool:
return (
self.properties is not None
or self.type == 'object'
and not self.allOf
and not self.oneOf
and not self.anyOf
and not self.ref
)
@cached_property
def is_array(self) -> bool:
return self.items is not None or self.type == 'array'
@cached_property
def ref_object_name(self) -> str: # pragma: no cover
return self.ref.rsplit('/', 1)[-1] # type: ignore
@field_validator('items', mode='before')
def validate_items(cls, values: Any) -> Any:
# a falsy value (e.g. an empty dict) is normalized to None
return values or None
@cached_property
def has_default(self) -> bool:
return 'default' in self.__fields_set__ or 'default_factory' in self.extras
@cached_property
def has_constraint(self) -> bool:
return bool(self.__constraint_fields__ & self.__fields_set__)
@cached_property
def ref_type(self) -> Optional[JSONReference]:
if self.ref:
return get_ref_type(self.ref)
return None # pragma: no cover
@cached_property
def type_has_null(self) -> bool:
return isinstance(self.type, list) and 'null' in self.type
@lru_cache()
def get_ref_type(ref: str) -> JSONReference:
if ref[0] == '#':
return JSONReference.LOCAL
elif is_url(ref):
return JSONReference.URL
return JSONReference.REMOTE
def _get_type(type_: str, format__: Optional[str] = None) -> Types:
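# e.g. _get_type('string', 'uuid') -> Types.uuid; an unknown type falls back to Types.any,
# and an unknown format warns and uses the type's default.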
if type_ not in json_schema_data_formats:
return Types.any
data_formats: Optional[Types] = json_schema_data_formats[type_].get(
'default' if format__ is None else format__
)
if data_formats is not None:
return data_formats
warn(f'format of {format__!r} not understood for {type_!r} - using default')
return json_schema_data_formats[type_]['default']
JsonSchemaObject.model_rebuild()
DEFAULT_FIELD_KEYS: Set[str] = {
'example',
'examples',
'description',
'discriminator',
'title',
'const',
'default_factory',
}
EXCLUDE_FIELD_KEYS_IN_JSON_SCHEMA: Set[str] = {
'readOnly',
'writeOnly',
}
EXCLUDE_FIELD_KEYS = (
set(JsonSchemaObject.get_fields())
- DEFAULT_FIELD_KEYS
- EXCLUDE_FIELD_KEYS_IN_JSON_SCHEMA
) | {
'$id',
'$ref',
JsonSchemaObject.__extra_key__,
}
@snooper_to_methods(max_variable_length=None)
class JsonSchemaParser(Parser):
SCHEMA_PATHS: ClassVar[List[str]] = ['#/definitions', '#/$defs']
SCHEMA_OBJECT_TYPE: ClassVar[Type[JsonSchemaObject]] = JsonSchemaObject
def __init__(
self,
source: Union[str, Path, List[Path], ParseResult],
*,
data_model_type: Type[DataModel] = pydantic_model.BaseModel,
data_model_root_type: Type[DataModel] = pydantic_model.CustomRootType,
data_type_manager_type: Type[DataTypeManager] = pydantic_model.DataTypeManager,
data_model_field_type: Type[DataModelFieldBase] = pydantic_model.DataModelField,
base_class: Optional[str] = None,
additional_imports: Optional[List[str]] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
target_python_version: PythonVersion = PythonVersion.PY_38,
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]] = None,
validation: bool = False,
field_constraints: bool = False,
snake_case_field: bool = False,
strip_default_none: bool = False,
aliases: Optional[Mapping[str, str]] = None,
allow_population_by_field_name: bool = False,
apply_default_values_for_required_fields: bool = False,
allow_extra_fields: bool = False,
force_optional_for_required_fields: bool = False,
class_name: Optional[str] = None,
use_standard_collections: bool = False,
base_path: Optional[Path] = None,
use_schema_description: bool = False,
use_field_description: bool = False,
use_default_kwarg: bool = False,
reuse_model: bool = False,
encoding: str = 'utf-8',
enum_field_as_literal: Optional[LiteralType] = None,
use_one_literal_as_default: bool = False,
set_default_enum_member: bool = False,
use_subclass_enum: bool = False,
strict_nullable: bool = False,
use_generic_container_types: bool = False,
enable_faux_immutability: bool = False,
remote_text_cache: Optional[DefaultPutDict[str, str]] = None,
disable_appending_item_suffix: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
empty_enum_field_name: Optional[str] = None,
custom_class_name_generator: Optional[Callable[[str], str]] = None,
field_extra_keys: Optional[Set[str]] = None,
field_include_all_keys: bool = False,
field_extra_keys_without_x_prefix: Optional[Set[str]] = None,
wrap_string_literal: Optional[bool] = None,
use_title_as_name: bool = False,
use_operation_id_as_name: bool = False,
use_unique_items_as_set: bool = False,
http_headers: Optional[Sequence[Tuple[str, str]]] = None,
http_ignore_tls: bool = False,
use_annotated: bool = False,
use_non_positive_negative_number_constrained_types: bool = False,
original_field_name_delimiter: Optional[str] = None,
use_double_quotes: bool = False,
use_union_operator: bool = False,
allow_responses_without_content: bool = False,
collapse_root_models: bool = False,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
keep_model_order: bool = False,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
use_pendulum: bool = False,
http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
treat_dots_as_module: bool = False,
use_exact_imports: bool = False,
default_field_extras: Optional[Dict[str, Any]] = None,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
keyword_only: bool = False,
no_alias: bool = False,
) -> None:
super().__init__(
source=source,
data_model_type=data_model_type,
data_model_root_type=data_model_root_type,
data_type_manager_type=data_type_manager_type,
data_model_field_type=data_model_field_type,
base_class=base_class,
additional_imports=additional_imports,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
target_python_version=target_python_version,
dump_resolve_reference_action=dump_resolve_reference_action,
validation=validation,
field_constraints=field_constraints,
snake_case_field=snake_case_field,
strip_default_none=strip_default_none,
aliases=aliases,
allow_population_by_field_name=allow_population_by_field_name,
allow_extra_fields=allow_extra_fields,
apply_default_values_for_required_fields=apply_default_values_for_required_fields,
force_optional_for_required_fields=force_optional_for_required_fields,
class_name=class_name,
use_standard_collections=use_standard_collections,
base_path=base_path,
use_schema_description=use_schema_description,
use_field_description=use_field_description,
use_default_kwarg=use_default_kwarg,
reuse_model=reuse_model,
encoding=encoding,
enum_field_as_literal=enum_field_as_literal,
use_one_literal_as_default=use_one_literal_as_default,
set_default_enum_member=set_default_enum_member,
use_subclass_enum=use_subclass_enum,
strict_nullable=strict_nullable,
use_generic_container_types=use_generic_container_types,
enable_faux_immutability=enable_faux_immutability,
remote_text_cache=remote_text_cache,
disable_appending_item_suffix=disable_appending_item_suffix,
strict_types=strict_types,
empty_enum_field_name=empty_enum_field_name,
custom_class_name_generator=custom_class_name_generator,
field_extra_keys=field_extra_keys,
field_include_all_keys=field_include_all_keys,
field_extra_keys_without_x_prefix=field_extra_keys_without_x_prefix,
wrap_string_literal=wrap_string_literal,
use_title_as_name=use_title_as_name,
use_operation_id_as_name=use_operation_id_as_name,
use_unique_items_as_set=use_unique_items_as_set,
http_headers=http_headers,
http_ignore_tls=http_ignore_tls,
use_annotated=use_annotated,
use_non_positive_negative_number_constrained_types=use_non_positive_negative_number_constrained_types,
original_field_name_delimiter=original_field_name_delimiter,
use_double_quotes=use_double_quotes,
use_union_operator=use_union_operator,
allow_responses_without_content=allow_responses_without_content,
collapse_root_models=collapse_root_models,
special_field_name_prefix=special_field_name_prefix,
remove_special_field_name_prefix=remove_special_field_name_prefix,
capitalise_enum_members=capitalise_enum_members,
keep_model_order=keep_model_order,
known_third_party=known_third_party,
custom_formatters=custom_formatters,
custom_formatters_kwargs=custom_formatters_kwargs,
use_pendulum=use_pendulum,
http_query_parameters=http_query_parameters,
treat_dots_as_module=treat_dots_as_module,
use_exact_imports=use_exact_imports,
default_field_extras=default_field_extras,
target_datetime_class=target_datetime_class,
keyword_only=keyword_only,
no_alias=no_alias,
)
self.remote_object_cache: DefaultPutDict[str, Dict[str, Any]] = DefaultPutDict()
self.raw_obj: Dict[Any, Any] = {}
self._root_id: Optional[str] = None
self._root_id_base_path: Optional[str] = None
self.reserved_refs: DefaultDict[Tuple[str], Set[str]] = defaultdict(set)
self.field_keys: Set[str] = {
*DEFAULT_FIELD_KEYS,
*self.field_extra_keys,
*self.field_extra_keys_without_x_prefix,
}
if self.data_model_field_type.can_have_extra_keys:
self.get_field_extra_key: Callable[[str], str] = (
lambda key: self.model_resolver.get_valid_field_name_and_alias(key)[0]
)
else:
self.get_field_extra_key = lambda key: key
def get_field_extras(self, obj: JsonSchemaObject) -> Dict[str, Any]:
if self.field_include_all_keys:
extras = {
self.get_field_extra_key(
k[len('x-'):] if k in self.field_extra_keys_without_x_prefix and k.startswith('x-') else k
): v
for k, v in obj.extras.items()
}
else:
extras = {
self.get_field_extra_key(
k[len('x-'):] if k in self.field_extra_keys_without_x_prefix and k.startswith('x-') else k
): v
for k, v in obj.extras.items()
if k in self.field_keys
}
if self.default_field_extras:
extras.update(self.default_field_extras)
return extras
@cached_property
def schema_paths(self) -> List[Tuple[str, List[str]]]:
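# e.g. '#/definitions' -> ('#/definitions', ['definitions'])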
return [(s, s.lstrip('#/').split('/')) for s in self.SCHEMA_PATHS]
@property
def root_id(self) -> Optional[str]:
return self.model_resolver.root_id
@root_id.setter
def root_id(self, value: Optional[str]) -> None:
self.model_resolver.set_root_id(value)
def should_parse_enum_as_literal(self, obj: JsonSchemaObject) -> bool:
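# LiteralType.All: every enum becomes a Literal; LiteralType.One: only single-member enums do.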
return self.enum_field_as_literal == LiteralType.All or (
self.enum_field_as_literal == LiteralType.One and len(obj.enum) == 1
)
def is_constraints_field(self, obj: JsonSchemaObject) -> bool:
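# Arrays always keep their constraints; other schemas only when field_constraints is enabled
# and the schema is a plain type (no ref, combined keywords, object, or enum).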
return obj.is_array or (
self.field_constraints
and not (
obj.ref
or obj.anyOf
or obj.oneOf
or obj.allOf
or obj.is_object
or obj.enum
)
)
def get_object_field(
self,
*,
field_name: Optional[str],
field: JsonSchemaObject,
required: bool,
field_type: DataType,
alias: Optional[str],
original_field_name: Optional[str],
) -> DataModelFieldBase:
return self.data_model_field_type(
name=field_name,
default=field.default,
data_type=field_type,
required=required,
alias=alias,
constraints=field.dict() if self.is_constraints_field(field) else None,
nullable=field.nullable
if self.strict_nullable and (field.has_default or required)
else None,
strip_default_none=self.strip_default_none,
extras=self.get_field_extras(field),
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
use_default_kwarg=self.use_default_kwarg,
original_name=original_field_name,
has_default=field.has_default,
type_has_null=field.type_has_null,
)
def get_data_type(self, obj: JsonSchemaObject) -> DataType:
if obj.type is None:
if 'const' in obj.extras:
return self.data_type_manager.get_data_type_from_value(
obj.extras['const']
)
return self.data_type_manager.get_data_type(
Types.any,
)
def _get_data_type(type_: str, format__: str) -> DataType:
return self.data_type_manager.get_data_type(
_get_type(type_, format__),
**obj.dict() if not self.field_constraints else {},
)
if isinstance(obj.type, list):
return self.data_type(
data_types=[
_get_data_type(t, obj.format or 'default')
for t in obj.type
if t != 'null'
],
is_optional='null' in obj.type,
)
return _get_data_type(obj.type, obj.format or 'default')
def get_ref_data_type(self, ref: str) -> DataType:
reference = self.model_resolver.add_ref(ref)
return self.data_type(reference=reference)
def set_additional_properties(self, name: str, obj: JsonSchemaObject) -> None:
if isinstance(obj.additionalProperties, bool):
self.extra_template_data[name]['additionalProperties'] = (
obj.additionalProperties
)
def set_title(self, name: str, obj: JsonSchemaObject) -> None:
if obj.title:
self.extra_template_data[name]['title'] = obj.title
def _deep_merge(
self, dict1: Dict[Any, Any], dict2: Dict[Any, Any]
) -> Dict[Any, Any]:
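# Recursive merge: nested dicts merge key-by-key, lists concatenate, and scalar values from dict2 win.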
result = dict1.copy()
for key, value in dict2.items():
if key in result:
if isinstance(result[key], dict) and isinstance(value, dict):
result[key] = self._deep_merge(result[key], value)
continue
elif isinstance(result[key], list) and isinstance(value, list):
result[key] = result[key] + value
continue
result[key] = value
return result
def parse_combined_schema(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
target_attribute_name: str,
) -> List[DataType]:
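# Each non-$ref subschema is re-parsed with the parent's own keywords deep-merged in;
# $ref members are instead wrapped below in a '<keyword>Common' model that inherits from the referenced class.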
base_object = obj.dict(
exclude={target_attribute_name}, exclude_unset=True, by_alias=True
)
combined_schemas: List[JsonSchemaObject] = []
refs = []
for index, target_attribute in enumerate(
getattr(obj, target_attribute_name, [])
):
if target_attribute.ref:
combined_schemas.append(target_attribute)
refs.append(index)
# TODO: support partial ref
# {
# "type": "integer",
# "oneOf": [
# { "minimum": 5 },
# { "$ref": "#/definitions/positive" }
# ],
# "definitions": {
# "positive": {
# "minimum": 0,
# "exclusiveMinimum": true
# }
# }
# }
else:
combined_schemas.append(
self.SCHEMA_OBJECT_TYPE.parse_obj(
self._deep_merge(
base_object,
target_attribute.dict(exclude_unset=True, by_alias=True),
)
)
)
parsed_schemas = self.parse_list_item(
name,
combined_schemas,
path,
obj,
singular_name=False,
)
common_path_keyword = f'{target_attribute_name}Common'
return [
self._parse_object_common_part(
name,
obj,
[*get_special_path(common_path_keyword, path), str(i)],
ignore_duplicate_model=True,
fields=[],
base_classes=[d.reference],
required=[],
)
if i in refs and d.reference
else d
for i, d in enumerate(parsed_schemas)
]
def parse_any_of(
self, name: str, obj: JsonSchemaObject, path: List[str]
) -> List[DataType]:
return self.parse_combined_schema(name, obj, path, 'anyOf')
def parse_one_of(
self, name: str, obj: JsonSchemaObject, path: List[str]
) -> List[DataType]:
return self.parse_combined_schema(name, obj, path, 'oneOf')
def _parse_object_common_part(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
ignore_duplicate_model: bool,
fields: List[DataModelFieldBase],
base_classes: List[Reference],
required: List[str],
) -> DataType:
if obj.properties:
fields.extend(
self.parse_object_fields(obj, path, get_module_name(name, None))
)
# no extra fields and a single base class: reuse the base class instead of emitting a duplicate model
if ignore_duplicate_model and not fields and len(base_classes) == 1:
with self.model_resolver.current_base_path_context(
self.model_resolver._base_path
):
self.model_resolver.delete(path)
return self.data_type(reference=base_classes[0])
if required:
for field in fields:
if self.force_optional_for_required_fields or ( # pragma: no cover
self.apply_default_values_for_required_fields and field.has_default
):
continue # pragma: no cover
if (field.original_name or field.name) in required:
field.required = True
if obj.required:
field_name_to_field = {f.original_name or f.name: f for f in fields}
for required_ in obj.required:
if required_ in field_name_to_field:
field = field_name_to_field[required_]
if self.force_optional_for_required_fields or (
self.apply_default_values_for_required_fields
and field.has_default
):
continue
field.required = True
else:
fields.append(
self.data_model_field_type(
required=True, original_name=required_, data_type=DataType()
)
)
if self.use_title_as_name and obj.title: # pragma: no cover
name = obj.title
reference = self.model_resolver.add(path, name, class_name=True, loaded=True)
self.set_additional_properties(reference.name, obj)
data_model_type = self.data_model_type(
reference=reference,
fields=fields,
base_classes=base_classes,
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=obj.description if self.use_schema_description else None,
keyword_only=self.keyword_only,
)
self.results.append(data_model_type)
return self.data_type(reference=reference)
def _parse_all_of_item(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
fields: List[DataModelFieldBase],
base_classes: List[Reference],
required: List[str],
union_models: List[Reference],
) -> None:
for all_of_item in obj.allOf:
if all_of_item.ref: # $ref
base_classes.append(self.model_resolver.add_ref(all_of_item.ref))
else:
module_name = get_module_name(name, None)
object_fields = self.parse_object_fields(
all_of_item,
path,
module_name,
)
if object_fields:
fields.extend(object_fields)
else:
if all_of_item.required:
required.extend(all_of_item.required)
self._parse_all_of_item(
name,
all_of_item,
path,
fields,
base_classes,
required,
union_models,
)
if all_of_item.anyOf:
self.model_resolver.add(path, name, class_name=True, loaded=True)
union_models.extend(
d.reference
for d in self.parse_any_of(name, all_of_item, path)
if d.reference
)
if all_of_item.oneOf:
self.model_resolver.add(path, name, class_name=True, loaded=True)
union_models.extend(
d.reference
for d in self.parse_one_of(name, all_of_item, path)
if d.reference
)
def parse_all_of(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
ignore_duplicate_model: bool = False,
) -> DataType:
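# allOf handling: a lone local $ref onto an enum collapses to a plain reference; otherwise fields and
# base classes are gathered recursively, and nested anyOf/oneOf members become union models, each
# combined with the shared allOf part.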
if len(obj.allOf) == 1 and not obj.properties:
single_obj = obj.allOf[0]
if single_obj.ref and single_obj.ref_type == JSONReference.LOCAL:
if get_model_by_path(self.raw_obj, single_obj.ref[2:].split('/')).get(
'enum'
):
return self.get_ref_data_type(single_obj.ref)
fields: List[DataModelFieldBase] = []
base_classes: List[Reference] = []
required: List[str] = []
union_models: List[Reference] = []
self._parse_all_of_item(
name, obj, path, fields, base_classes, required, union_models
)
if not union_models:
return self._parse_object_common_part(
name, obj, path, ignore_duplicate_model, fields, base_classes, required
)
reference = self.model_resolver.add(path, name, class_name=True, loaded=True)
all_of_data_type = self._parse_object_common_part(
name,
obj,
get_special_path('allOf', path),
ignore_duplicate_model,
fields,
base_classes,
required,
)
data_type = self.data_type(
data_types=[
self._parse_object_common_part(
name,
obj,
get_special_path(f'union_model-{index}', path),
ignore_duplicate_model,
[],
[union_model, all_of_data_type.reference], # type: ignore
[],
)
for index, union_model in enumerate(union_models)
]
)
field = self.get_object_field(
field_name=None,
field=obj,
required=True,
field_type=data_type,
alias=None,
original_field_name=None,
)
data_model_root = self.data_model_root_type(
reference=reference,
fields=[field],
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=obj.description if self.use_schema_description else None,
nullable=obj.type_has_null,
)
self.results.append(data_model_root)
return self.data_type(reference=reference)
def parse_object_fields(
self, obj: JsonSchemaObject, path: List[str], module_name: Optional[str] = None
) -> List[DataModelFieldBase]:
properties: Dict[str, Union[JsonSchemaObject, bool]] = (
{} if obj.properties is None else obj.properties
)
requires: Set[str] = set() if obj.required is None else {*obj.required}
fields: List[DataModelFieldBase] = []
exclude_field_names: Set[str] = set()
for original_field_name, field in properties.items():
field_name, alias = self.model_resolver.get_valid_field_name_and_alias(
original_field_name, exclude_field_names
)
modular_name = f'{module_name}.{field_name}' if module_name else field_name
exclude_field_names.add(field_name)
if isinstance(field, bool):
fields.append(
self.data_model_field_type(
name=field_name,
data_type=self.data_type_manager.get_data_type(
Types.any,
),
required=False
if self.force_optional_for_required_fields
else original_field_name in requires,
alias=alias,
strip_default_none=self.strip_default_none,
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
original_name=original_field_name,
)
)
continue
field_type = self.parse_item(modular_name, field, [*path, field_name])
if self.force_optional_for_required_fields or (
self.apply_default_values_for_required_fields and field.has_default
):
required: bool = False
else:
required = original_field_name in requires
fields.append(
self.get_object_field(
field_name=field_name,
field=field,
required=required,
field_type=field_type,
alias=alias,
original_field_name=original_field_name,
)
)
return fields
def parse_object(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
singular_name: bool = False,
unique: bool = True,
) -> DataType:
if not unique: # pragma: no cover
warn(
f'{self.__class__.__name__}.parse_object() ignores the `unique` argument. '
f'An object name must be unique. '
f'This argument will be removed in a future version.'
)
if self.use_title_as_name and obj.title:
name = obj.title
reference = self.model_resolver.add(
path,
name,
class_name=True,
singular_name=singular_name,
loaded=True,
)
class_name = reference.name
self.set_title(class_name, obj)
fields = self.parse_object_fields(obj, path, get_module_name(class_name, None))
if fields or not isinstance(obj.additionalProperties, JsonSchemaObject):
data_model_type_class = self.data_model_type
else:
fields.append(
self.get_object_field(
field_name=None,
field=obj.additionalProperties,
required=False,
original_field_name=None,
field_type=self.data_type(
data_types=[
self.parse_item(
# TODO: Improve naming for nested ClassName
name,
obj.additionalProperties,
[*path, 'additionalProperties'],
)
],
is_dict=True,
),
alias=None,
)
)
data_model_type_class = self.data_model_root_type
self.set_additional_properties(class_name, obj)
data_model_type = data_model_type_class(
reference=reference,
fields=fields,
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=obj.description if self.use_schema_description else None,
nullable=obj.type_has_null,
keyword_only=self.keyword_only,
)
self.results.append(data_model_type)
return self.data_type(reference=reference)
def parse_pattern_properties(
self,
name: str,
pattern_properties: Dict[str, JsonSchemaObject],
path: List[str],
) -> DataType:
return self.data_type(
data_types=[
self.data_type(
data_types=[
self.parse_item(
name,
kv[1],
get_special_path(f'patternProperties/{i}', path),
)
],
is_dict=True,
dict_key=self.data_type_manager.get_data_type(
Types.string,
pattern=kv[0] if not self.field_constraints else None,
),
)
for i, kv in enumerate(pattern_properties.items())
],
)
def parse_item(
self,
name: str,
item: JsonSchemaObject,
path: List[str],
singular_name: bool = False,
parent: Optional[JsonSchemaObject] = None,
) -> DataType:
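# Dispatch on the shape of the schema: constrained array items under a constrained parent are hoisted
# into their own root models, then $ref, custom types, arrays, combined keywords, objects and
# patternProperties, and enums, falling back to a plain data type.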
if self.use_title_as_name and item.title:
name = item.title
singular_name = False
if (
parent
and not item.enum
and item.has_constraint
and (parent.has_constraint or self.field_constraints)
):
root_type_path = get_special_path('array', path)
return self.parse_root_type(
self.model_resolver.add(
root_type_path,
name,
class_name=True,
singular_name=singular_name,
).name,
item,
root_type_path,
)
elif item.ref:
return self.get_ref_data_type(item.ref)
elif item.custom_type_path:
return self.data_type_manager.get_data_type_from_full_path(
item.custom_type_path, is_custom_type=True
)
elif item.is_array:
return self.parse_array_fields(
name, item, get_special_path('array', path)
).data_type
elif (
item.discriminator
and parent
and parent.is_array
and (item.oneOf or item.anyOf)
):
return self.parse_root_type(name, item, path)
elif item.anyOf:
return self.data_type(
data_types=self.parse_any_of(
name, item, get_special_path('anyOf', path)
)
)
elif item.oneOf:
return self.data_type(
data_types=self.parse_one_of(
name, item, get_special_path('oneOf', path)
)
)
elif item.allOf:
all_of_path = get_special_path('allOf', path)
all_of_path = [self.model_resolver.resolve_ref(all_of_path)]
return self.parse_all_of(
self.model_resolver.add(
all_of_path, name, singular_name=singular_name, class_name=True
).name,
item,
all_of_path,
ignore_duplicate_model=True,
)
elif item.is_object or item.patternProperties:
object_path = get_special_path('object', path)
if item.properties:
return self.parse_object(
name, item, object_path, singular_name=singular_name
)
elif item.patternProperties:
# supports only a single-key dict.
return self.parse_pattern_properties(
name, item.patternProperties, object_path
)
elif isinstance(item.additionalProperties, JsonSchemaObject):
return self.data_type(
data_types=[
self.parse_item(name, item.additionalProperties, object_path)
],
is_dict=True,
)
return self.data_type_manager.get_data_type(
Types.object,
)
elif item.enum:
if self.should_parse_enum_as_literal(item):
return self.parse_enum_as_literal(item)
return self.parse_enum(
name, item, get_special_path('enum', path), singular_name=singular_name
)
return self.get_data_type(item)
def parse_list_item(
self,
name: str,
target_items: List[JsonSchemaObject],
path: List[str],
parent: JsonSchemaObject,
singular_name: bool = True,
) -> List[DataType]:
return [
self.parse_item(
name,
item,
[*path, str(index)],
singular_name=singular_name,
parent=parent,
)
for index, item in enumerate(target_items)
]
def parse_array_fields(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
singular_name: bool = True,
) -> DataModelFieldBase:
if self.force_optional_for_required_fields:
required: bool = False
nullable: Optional[bool] = None
else:
required = not (
obj.has_default and self.apply_default_values_for_required_fields
)
if self.strict_nullable:
nullable = obj.nullable if obj.has_default or required else True
else:
required = not obj.nullable and required
nullable = None
if isinstance(obj.items, JsonSchemaObject):
items: List[JsonSchemaObject] = [obj.items]
elif isinstance(obj.items, list):
items = obj.items
else:
items = []
data_types: List[DataType] = [
self.data_type(
data_types=self.parse_list_item(
name,
items,
path,
obj,
singular_name=singular_name,
),
is_list=True,
)
]
# TODO: decide special path word for a combined data model.
if obj.allOf:
data_types.append(
self.parse_all_of(name, obj, get_special_path('allOf', path))
)
elif obj.is_object:
data_types.append(
self.parse_object(name, obj, get_special_path('object', path))
)
if obj.enum:
data_types.append(
self.parse_enum(name, obj, get_special_path('enum', path))
)
return self.data_model_field_type(
data_type=self.data_type(data_types=data_types),
default=obj.default,
required=required,
constraints=obj.dict(),
nullable=nullable,
strip_default_none=self.strip_default_none,
extras=self.get_field_extras(obj),
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
original_name=None,
has_default=obj.has_default,
)
def parse_array(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
original_name: Optional[str] = None,
) -> DataType:
if self.use_title_as_name and obj.title:
name = obj.title
reference = self.model_resolver.add(path, name, loaded=True, class_name=True)
field = self.parse_array_fields(original_name or name, obj, [*path, name])
if reference in [
d.reference for d in field.data_type.all_data_types if d.reference
]:
# self-reference
field = self.data_model_field_type(
data_type=self.data_type(
data_types=[
self.data_type(
data_types=field.data_type.data_types[1:], is_list=True
),
*field.data_type.data_types[1:],
]
),
default=field.default,
required=field.required,
constraints=field.constraints,
nullable=field.nullable,
strip_default_none=field.strip_default_none,
extras=field.extras,
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
original_name=None,
has_default=field.has_default,
)
data_model_root = self.data_model_root_type(
reference=reference,
fields=[field],
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
description=obj.description if self.use_schema_description else None,
nullable=obj.type_has_null,
)
self.results.append(data_model_root)
return self.data_type(reference=reference)
def parse_root_type(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
) -> DataType:
reference: Optional[Reference] = None
if obj.ref:
data_type: DataType = self.get_ref_data_type(obj.ref)
elif obj.custom_type_path:
data_type = self.data_type_manager.get_data_type_from_full_path(
obj.custom_type_path, is_custom_type=True
) # pragma: no cover
elif obj.is_array:
data_type = self.parse_array_fields(
name, obj, get_special_path('array', path)
).data_type # pragma: no cover
elif obj.anyOf or obj.oneOf:
reference = self.model_resolver.add(
path, name, loaded=True, class_name=True
)
if obj.anyOf:
data_types: List[DataType] = self.parse_any_of(
name, obj, get_special_path('anyOf', path)
)
else:
data_types = self.parse_one_of(
name, obj, get_special_path('oneOf', path)
)
if len(data_types) > 1: # pragma: no cover
data_type = self.data_type(data_types=data_types)
elif not data_types: # pragma: no cover
return EmptyDataType()
else: # pragma: no cover
data_type = data_types[0]
elif obj.patternProperties:
data_type = self.parse_pattern_properties(name, obj.patternProperties, path)
elif obj.enum:
if self.should_parse_enum_as_literal(obj):
data_type = self.parse_enum_as_literal(obj)
else: # pragma: no cover
data_type = self.parse_enum(name, obj, path)
elif obj.type:
data_type = self.get_data_type(obj)
else:
data_type = self.data_type_manager.get_data_type(
Types.any,
)
if self.force_optional_for_required_fields:
required: bool = False
else:
required = not obj.nullable and not (
obj.has_default and self.apply_default_values_for_required_fields
)
if self.use_title_as_name and obj.title:
name = obj.title
if not reference:
reference = self.model_resolver.add(
path, name, loaded=True, class_name=True
)
self.set_title(name, obj)
self.set_additional_properties(name, obj)
data_model_root_type = self.data_model_root_type(
reference=reference,
fields=[
self.data_model_field_type(
data_type=data_type,
default=obj.default,
required=required,
constraints=obj.dict() if self.field_constraints else {},
nullable=obj.nullable if self.strict_nullable else None,
strip_default_none=self.strip_default_none,
extras=self.get_field_extras(obj),
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
original_name=None,
has_default=obj.has_default,
)
],
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
nullable=obj.type_has_null,
)
self.results.append(data_model_root_type)
return self.data_type(reference=reference)
def parse_enum_as_literal(self, obj: JsonSchemaObject) -> DataType:
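# None members are dropped from the literal list; nullability is left to the caller.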
return self.data_type(literals=[i for i in obj.enum if i is not None])
def parse_enum(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
singular_name: bool = False,
unique: bool = True,
) -> DataType:
if not unique: # pragma: no cover
warn(
f'{self.__class__.__name__}.parse_enum() ignores the `unique` argument. '
f'An object name must be unique. '
f'This argument will be removed in a future version.'
)
enum_fields: List[DataModelFieldBase] = []
if None in obj.enum and obj.type == 'string':
# `nullable` is valid only in OpenAPI
nullable: bool = True
enum_items = [e for e in obj.enum if e is not None]
else:
enum_items = obj.enum
nullable = False
exclude_field_names: Set[str] = set()
for i, enum_part in enumerate(enum_items):
if obj.type == 'string' or isinstance(enum_part, str):
default = (
f"'{enum_part.translate(escape_characters)}'"
if isinstance(enum_part, str)
else enum_part
)
if obj.x_enum_varnames:
field_name = obj.x_enum_varnames[i]
else:
field_name = str(enum_part)
else:
default = enum_part
if obj.x_enum_varnames:
field_name = obj.x_enum_varnames[i]
else:
prefix = (
obj.type
if isinstance(obj.type, str)
else type(enum_part).__name__
)
field_name = f'{prefix}_{enum_part}'
field_name = self.model_resolver.get_valid_field_name(
field_name, excludes=exclude_field_names, model_type=ModelType.ENUM
)
exclude_field_names.add(field_name)
enum_fields.append(
self.data_model_field_type(
name=field_name,
default=default,
data_type=self.data_type_manager.get_data_type(
Types.any,
),
required=True,
strip_default_none=self.strip_default_none,
has_default=obj.has_default,
use_field_description=self.use_field_description,
original_name=None,
)
)
def create_enum(reference_: Reference) -> DataType:
enum = Enum(
reference=reference_,
fields=enum_fields,
path=self.current_source_path,
description=obj.description if self.use_schema_description else None,
custom_template_dir=self.custom_template_dir,
type_=_get_type(obj.type, obj.format)
if self.use_subclass_enum and isinstance(obj.type, str)
else None,
default=obj.default if obj.has_default else UNDEFINED,
)
self.results.append(enum)
return self.data_type(reference=reference_)
if self.use_title_as_name and obj.title:
name = obj.title
reference = self.model_resolver.add(
path,
name,
class_name=True,
singular_name=singular_name,
singular_name_suffix='Enum',
loaded=True,
)
if not nullable:
return create_enum(reference)
enum_reference = self.model_resolver.add(
[*path, 'Enum'],
f'{reference.name}Enum',
class_name=True,
singular_name=singular_name,
singular_name_suffix='Enum',
loaded=True,
)
data_model_root_type = self.data_model_root_type(
reference=reference,
fields=[
self.data_model_field_type(
data_type=create_enum(enum_reference),
default=obj.default,
required=False,
nullable=True,
strip_default_none=self.strip_default_none,
extras=self.get_field_extras(obj),
use_annotated=self.use_annotated,
has_default=obj.has_default,
use_field_description=self.use_field_description,
original_name=None,
)
],
custom_base_class=obj.custom_base_path or self.base_class,
custom_template_dir=self.custom_template_dir,
extra_template_data=self.extra_template_data,
path=self.current_source_path,
default=obj.default if obj.has_default else UNDEFINED,
nullable=obj.type_has_null,
)
self.results.append(data_model_root_type)
return self.data_type(reference=reference)
def _get_ref_body(self, resolved_ref: str) -> Dict[Any, Any]:
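# URLs are fetched over HTTP (with caching); anything else is read from disk relative to base_path.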
if is_url(resolved_ref):
return self._get_ref_body_from_url(resolved_ref)
return self._get_ref_body_from_remote(resolved_ref)
def _get_ref_body_from_url(self, ref: str) -> Dict[Any, Any]:
# URL Reference – $ref: 'http://path/to/your/resource' – uses the whole document located on a different server.
return self.remote_object_cache.get_or_put(
ref, default_factory=lambda key: load_yaml(self._get_text_from_url(key))
)
def _get_ref_body_from_remote(self, resolved_ref: str) -> Dict[Any, Any]:
# Remote Reference – $ref: 'document.json' – uses the whole document located on the same server and in
# the same location. TODO: treat edge case
full_path = self.base_path / resolved_ref
return self.remote_object_cache.get_or_put(
str(full_path),
default_factory=lambda _: load_yaml_from_path(full_path, self.encoding),
)
def resolve_ref(self, object_ref: str) -> Reference:
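# Already-loaded references return immediately. Local pointers and refs into files that load later
# are queued in self.reserved_refs for a second pass; other remote/URL documents are fetched and parsed here.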
reference = self.model_resolver.add_ref(object_ref)
if reference.loaded:
return reference
# https://swagger.io/docs/specification/using-ref/
ref = self.model_resolver.resolve_ref(object_ref)
if get_ref_type(object_ref) == JSONReference.LOCAL:
# Local Reference – $ref: '#/definitions/myElement'
self.reserved_refs[tuple(self.model_resolver.current_root)].add(ref) # type: ignore
return reference
elif self.model_resolver.is_after_load(ref):
self.reserved_refs[tuple(ref.split('#')[0].split('/'))].add(ref) # type: ignore
return reference
if is_url(ref):
relative_path, object_path = ref.split('#')
relative_paths = [relative_path]
base_path = None
else:
if self.model_resolver.is_external_root_ref(ref):
relative_path, object_path = ref[:-1], ''
else:
relative_path, object_path = ref.split('#')
relative_paths = relative_path.split('/')
base_path = Path(*relative_paths).parent
with self.model_resolver.current_base_path_context(
base_path
), self.model_resolver.base_url_context(relative_path):
self._parse_file(
self._get_ref_body(relative_path),
self.model_resolver.add_ref(ref, resolved=True).name,
relative_paths,
object_path.split('/') if object_path else None,
)
reference.loaded = True
return reference
def parse_ref(self, obj: JsonSchemaObject, path: List[str]) -> None:
if obj.ref:
self.resolve_ref(obj.ref)
if obj.items:
if isinstance(obj.items, JsonSchemaObject):
self.parse_ref(obj.items, path)
else:
if isinstance(obj.items, list):
for item in obj.items:
self.parse_ref(item, path)
if isinstance(obj.additionalProperties, JsonSchemaObject):
self.parse_ref(obj.additionalProperties, path)
if obj.patternProperties:
for value in obj.patternProperties.values():
self.parse_ref(value, path)
for item in obj.anyOf:
self.parse_ref(item, path)
for item in obj.allOf:
self.parse_ref(item, path)
for item in obj.oneOf:
self.parse_ref(item, path)
if obj.properties:
for property_value in obj.properties.values():
if isinstance(property_value, JsonSchemaObject):
self.parse_ref(property_value, path)
def parse_id(self, obj: JsonSchemaObject, path: List[str]) -> None:
if obj.id:
self.model_resolver.add_id(obj.id, path)
if obj.items:
if isinstance(obj.items, JsonSchemaObject):
self.parse_id(obj.items, path)
else:
if isinstance(obj.items, list):
for item in obj.items:
self.parse_id(item, path)
if isinstance(obj.additionalProperties, JsonSchemaObject):
self.parse_id(obj.additionalProperties, path)
if obj.patternProperties:
for value in obj.patternProperties.values():
self.parse_id(value, path)
for item in obj.anyOf:
self.parse_id(item, path)
for item in obj.allOf:
self.parse_id(item, path)
if obj.properties:
for property_value in obj.properties.values():
if isinstance(property_value, JsonSchemaObject):
self.parse_id(property_value, path)
@contextmanager
def root_id_context(self, root_raw: Dict[str, Any]) -> Generator[None, None, None]:
root_id: Optional[str] = root_raw.get('$id')
previous_root_id: Optional[str] = self.root_id
self.root_id = root_id if root_id else None
yield
self.root_id = previous_root_id
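# Note: per JSON Schema, a document-level '$id' (e.g. the hypothetical
# 'https://example.com/schemas/root.json') becomes the base URI against which
# relative references inside that document are resolved while this context is active.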
def parse_raw_obj(
self,
name: str,
raw: Dict[str, Any],
path: List[str],
) -> None:
self.parse_obj(name, self.SCHEMA_OBJECT_TYPE.parse_obj(raw), path)
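# parse_obj below dispatches on the schema shape in a fixed order: array, allOf,
# oneOf/anyOf, properties, patternProperties, plain object, enum, and finally a
# root (type-alias) model; nested $refs are always parsed afterwards.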
def parse_obj(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
) -> None:
if obj.is_array:
self.parse_array(name, obj, path)
elif obj.allOf:
self.parse_all_of(name, obj, path)
elif obj.oneOf or obj.anyOf:
data_type = self.parse_root_type(name, obj, path)
if isinstance(data_type, EmptyDataType) and obj.properties:
self.parse_object(name, obj, path) # pragma: no cover
elif obj.properties:
self.parse_object(name, obj, path)
elif obj.patternProperties:
self.parse_root_type(name, obj, path)
elif obj.type == 'object':
self.parse_object(name, obj, path)
elif obj.enum and not self.should_parse_enum_as_literal(obj):
self.parse_enum(name, obj, path)
else:
self.parse_root_type(name, obj, path)
self.parse_ref(obj, path)
def _get_context_source_path_parts(self) -> Iterator[Tuple[Source, List[str]]]:
if isinstance(self.source, list) or (
isinstance(self.source, Path) and self.source.is_dir()
):
self.current_source_path = Path()
self.model_resolver.after_load_files = {
self.base_path.joinpath(s.path).resolve().as_posix()
for s in self.iter_source
}
for source in self.iter_source:
if isinstance(self.source, ParseResult):
path_parts = self.get_url_path_parts(self.source)
else:
path_parts = list(source.path.parts)
if self.current_source_path is not None:
self.current_source_path = source.path
with self.model_resolver.current_base_path_context(
source.path.parent
), self.model_resolver.current_root_context(path_parts):
yield source, path_parts
def parse_raw(self) -> None:
for source, path_parts in self._get_context_source_path_parts():
self.raw_obj = load_yaml(source.text)
if self.raw_obj is None: # pragma: no cover
warn(f'{source.path} is empty. Skipping this file')
continue
if self.custom_class_name_generator:
obj_name = self.raw_obj.get('title', 'Model')
else:
if self.class_name:
obj_name = self.class_name
else:
# backward compatibility: fall back to the schema title
obj_name = self.raw_obj.get('title', 'Model')
if not self.model_resolver.validate_name(obj_name):
obj_name = title_to_class_name(obj_name)
if not self.model_resolver.validate_name(obj_name):
raise InvalidClassNameError(obj_name)
self._parse_file(self.raw_obj, obj_name, path_parts)
self._resolve_unparsed_json_pointer()
def _resolve_unparsed_json_pointer(self) -> None:
model_count: int = len(self.results)
for source in self.iter_source:
path_parts = list(source.path.parts)
reserved_refs = self.reserved_refs.get(tuple(path_parts)) # type: ignore
if not reserved_refs:
continue
if self.current_source_path is not None:
self.current_source_path = source.path
with self.model_resolver.current_base_path_context(
source.path.parent
), self.model_resolver.current_root_context(path_parts):
for reserved_ref in sorted(reserved_refs):
if self.model_resolver.add_ref(reserved_ref, resolved=True).loaded:
continue
# for root model
self.raw_obj = load_yaml(source.text)
self.parse_json_pointer(self.raw_obj, reserved_ref, path_parts)
if model_count != len(self.results):
# New models have been generated. Try to resolve the JSON pointers again.
self._resolve_unparsed_json_pointer()
def parse_json_pointer(
self, raw: Dict[str, Any], ref: str, path_parts: List[str]
) -> None:
path = ref.split('#', 1)[-1]
if path[0] == '/': # pragma: no cover
path = path[1:]
object_paths = path.split('/')
models = get_model_by_path(raw, object_paths)
model_name = object_paths[-1]
self.parse_raw_obj(
model_name, models, [*path_parts, f'#/{object_paths[0]}', *object_paths[1:]]
)
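# Illustrative walk-through (hypothetical values): for ref 'schema.json#/definitions/Pet',
# object_paths becomes ['definitions', 'Pet'], the model is looked up with
# get_model_by_path(raw, ['definitions', 'Pet']), and the model name is 'Pet'.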
def _parse_file(
self,
raw: Dict[str, Any],
obj_name: str,
path_parts: List[str],
object_paths: Optional[List[str]] = None,
) -> None:
object_paths = [o for o in object_paths or [] if o]
if object_paths:
path = [*path_parts, f'#/{object_paths[0]}', *object_paths[1:]]
else:
path = path_parts
with self.model_resolver.current_root_context(path_parts):
obj_name = self.model_resolver.add(
path, obj_name, unique=False, class_name=True
).name
with self.root_id_context(raw):
# Some JSON Schema documents include a 'self' attribute that holds version details
raw.pop('self', None)
# parse $id before parsing $ref
root_obj = self.SCHEMA_OBJECT_TYPE.parse_obj(raw)
self.parse_id(root_obj, path_parts)
definitions: Optional[Dict[Any, Any]] = None
for schema_path, split_schema_path in self.schema_paths:
try:
definitions = get_model_by_path(raw, split_schema_path)
if definitions:
break
except KeyError:
continue
if definitions is None:
definitions = {}
for key, model in definitions.items():
obj = self.SCHEMA_OBJECT_TYPE.parse_obj(model)
self.parse_id(obj, [*path_parts, schema_path, key])
if object_paths:
models = get_model_by_path(raw, object_paths)
model_name = object_paths[-1]
self.parse_obj(
model_name, self.SCHEMA_OBJECT_TYPE.parse_obj(models), path
)
else:
self.parse_obj(obj_name, root_obj, path_parts or ['#'])
for key, model in definitions.items():
path = [*path_parts, schema_path, key]
reference = self.model_resolver.get(path)
if not reference or not reference.loaded:
self.parse_raw_obj(key, model, path)
key = tuple(path_parts)
reserved_refs = set(self.reserved_refs.get(key) or [])
while reserved_refs:
for reserved_path in sorted(reserved_refs):
reference = self.model_resolver.get(reserved_path)
if not reference or reference.loaded:
continue
object_paths = reserved_path.split('#/', 1)[-1].split('/')
path = reserved_path.split('/')
models = get_model_by_path(raw, object_paths)
model_name = object_paths[-1]
self.parse_obj(
model_name, self.SCHEMA_OBJECT_TYPE.parse_obj(models), path
)
previous_reserved_refs = reserved_refs
reserved_refs = set(self.reserved_refs.get(key) or [])
if previous_reserved_refs == reserved_refs:
break
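# A minimal usage sketch of the parser above (illustrative; 'schema.json' is a
# hypothetical path):
#   parser = JsonSchemaParser(Path('schema.json'))
#   parser.parse_raw()   # parser.results then holds the generated DataModel objects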
datamodel_code_generator-0.26.4/datamodel_code_generator/parser/openapi.py
from __future__ import annotations
import re
from collections import defaultdict
from enum import Enum
from pathlib import Path
from typing import (
Any,
Callable,
ClassVar,
DefaultDict,
Dict,
Iterable,
List,
Mapping,
Optional,
Pattern,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
)
from urllib.parse import ParseResult
from warnings import warn
from pydantic import Field
from datamodel_code_generator import (
DefaultPutDict,
Error,
LiteralType,
OpenAPIScope,
PythonVersion,
load_yaml,
snooper_to_methods,
)
from datamodel_code_generator.format import DatetimeClassType
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model import pydantic as pydantic_model
from datamodel_code_generator.parser.base import get_special_path
from datamodel_code_generator.parser.jsonschema import (
JsonSchemaObject,
JsonSchemaParser,
get_model_by_path,
)
from datamodel_code_generator.reference import snake_to_upper_camel
from datamodel_code_generator.types import (
DataType,
DataTypeManager,
EmptyDataType,
StrictTypes,
)
from datamodel_code_generator.util import BaseModel
RE_APPLICATION_JSON_PATTERN: Pattern[str] = re.compile(r'^application/.*json$')
OPERATION_NAMES: List[str] = [
'get',
'put',
'post',
'delete',
'patch',
'head',
'options',
'trace',
]
class ParameterLocation(Enum):
query = 'query'
header = 'header'
path = 'path'
cookie = 'cookie'
BaseModelT = TypeVar('BaseModelT', bound=BaseModel)
class ReferenceObject(BaseModel):
ref: str = Field(..., alias='$ref')
class ExampleObject(BaseModel):
summary: Optional[str] = None
description: Optional[str] = None
value: Any = None
externalValue: Optional[str] = None
class MediaObject(BaseModel):
schema_: Union[ReferenceObject, JsonSchemaObject, None] = Field(
None, alias='schema'
)
example: Any = None
examples: Union[str, ReferenceObject, ExampleObject, None] = None
class ParameterObject(BaseModel):
name: Optional[str] = None
in_: Optional[ParameterLocation] = Field(None, alias='in')
description: Optional[str] = None
required: bool = False
deprecated: bool = False
schema_: Optional[JsonSchemaObject] = Field(None, alias='schema')
example: Any = None
examples: Union[str, ReferenceObject, ExampleObject, None] = None
content: Dict[str, MediaObject] = {}
class HeaderObject(BaseModel):
description: Optional[str] = None
required: bool = False
deprecated: bool = False
schema_: Optional[JsonSchemaObject] = Field(None, alias='schema')
example: Any = None
examples: Union[str, ReferenceObject, ExampleObject, None] = None
content: Dict[str, MediaObject] = {}
class RequestBodyObject(BaseModel):
description: Optional[str] = None
content: Dict[str, MediaObject] = {}
required: bool = False
class ResponseObject(BaseModel):
description: Optional[str] = None
headers: Dict[str, ParameterObject] = {}
content: Dict[Union[str, int], MediaObject] = {}
class Operation(BaseModel):
tags: List[str] = []
summary: Optional[str] = None
description: Optional[str] = None
operationId: Optional[str] = None
parameters: List[Union[ReferenceObject, ParameterObject]] = []
requestBody: Union[ReferenceObject, RequestBodyObject, None] = None
responses: Dict[Union[str, int], Union[ReferenceObject, ResponseObject]] = {}
deprecated: bool = False
class ComponentsObject(BaseModel):
schemas: Dict[str, Union[ReferenceObject, JsonSchemaObject]] = {}
responses: Dict[str, Union[ReferenceObject, ResponseObject]] = {}
examples: Dict[str, Union[ReferenceObject, ExampleObject]] = {}
requestBodies: Dict[str, Union[ReferenceObject, RequestBodyObject]] = {}
headers: Dict[str, Union[ReferenceObject, HeaderObject]] = {}
@snooper_to_methods(max_variable_length=None)
class OpenAPIParser(JsonSchemaParser):
SCHEMA_PATHS: ClassVar[List[str]] = ['#/components/schemas']
def __init__(
self,
source: Union[str, Path, List[Path], ParseResult],
*,
data_model_type: Type[DataModel] = pydantic_model.BaseModel,
data_model_root_type: Type[DataModel] = pydantic_model.CustomRootType,
data_type_manager_type: Type[DataTypeManager] = pydantic_model.DataTypeManager,
data_model_field_type: Type[DataModelFieldBase] = pydantic_model.DataModelField,
base_class: Optional[str] = None,
additional_imports: Optional[List[str]] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
target_python_version: PythonVersion = PythonVersion.PY_38,
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]] = None,
validation: bool = False,
field_constraints: bool = False,
snake_case_field: bool = False,
strip_default_none: bool = False,
aliases: Optional[Mapping[str, str]] = None,
allow_population_by_field_name: bool = False,
allow_extra_fields: bool = False,
apply_default_values_for_required_fields: bool = False,
force_optional_for_required_fields: bool = False,
class_name: Optional[str] = None,
use_standard_collections: bool = False,
base_path: Optional[Path] = None,
use_schema_description: bool = False,
use_field_description: bool = False,
use_default_kwarg: bool = False,
reuse_model: bool = False,
encoding: str = 'utf-8',
enum_field_as_literal: Optional[LiteralType] = None,
use_one_literal_as_default: bool = False,
set_default_enum_member: bool = False,
use_subclass_enum: bool = False,
strict_nullable: bool = False,
use_generic_container_types: bool = False,
enable_faux_immutability: bool = False,
remote_text_cache: Optional[DefaultPutDict[str, str]] = None,
disable_appending_item_suffix: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
empty_enum_field_name: Optional[str] = None,
custom_class_name_generator: Optional[Callable[[str], str]] = None,
field_extra_keys: Optional[Set[str]] = None,
field_include_all_keys: bool = False,
field_extra_keys_without_x_prefix: Optional[Set[str]] = None,
openapi_scopes: Optional[List[OpenAPIScope]] = None,
wrap_string_literal: Optional[bool] = False,
use_title_as_name: bool = False,
use_operation_id_as_name: bool = False,
use_unique_items_as_set: bool = False,
http_headers: Optional[Sequence[Tuple[str, str]]] = None,
http_ignore_tls: bool = False,
use_annotated: bool = False,
use_non_positive_negative_number_constrained_types: bool = False,
original_field_name_delimiter: Optional[str] = None,
use_double_quotes: bool = False,
use_union_operator: bool = False,
allow_responses_without_content: bool = False,
collapse_root_models: bool = False,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
keep_model_order: bool = False,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
use_pendulum: bool = False,
http_query_parameters: Optional[Sequence[Tuple[str, str]]] = None,
treat_dots_as_module: bool = False,
use_exact_imports: bool = False,
default_field_extras: Optional[Dict[str, Any]] = None,
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
keyword_only: bool = False,
no_alias: bool = False,
):
super().__init__(
source=source,
data_model_type=data_model_type,
data_model_root_type=data_model_root_type,
data_type_manager_type=data_type_manager_type,
data_model_field_type=data_model_field_type,
base_class=base_class,
additional_imports=additional_imports,
custom_template_dir=custom_template_dir,
extra_template_data=extra_template_data,
target_python_version=target_python_version,
dump_resolve_reference_action=dump_resolve_reference_action,
validation=validation,
field_constraints=field_constraints,
snake_case_field=snake_case_field,
strip_default_none=strip_default_none,
aliases=aliases,
allow_population_by_field_name=allow_population_by_field_name,
allow_extra_fields=allow_extra_fields,
apply_default_values_for_required_fields=apply_default_values_for_required_fields,
force_optional_for_required_fields=force_optional_for_required_fields,
class_name=class_name,
use_standard_collections=use_standard_collections,
base_path=base_path,
use_schema_description=use_schema_description,
use_field_description=use_field_description,
use_default_kwarg=use_default_kwarg,
reuse_model=reuse_model,
encoding=encoding,
enum_field_as_literal=enum_field_as_literal,
use_one_literal_as_default=use_one_literal_as_default,
set_default_enum_member=set_default_enum_member,
use_subclass_enum=use_subclass_enum,
strict_nullable=strict_nullable,
use_generic_container_types=use_generic_container_types,
enable_faux_immutability=enable_faux_immutability,
remote_text_cache=remote_text_cache,
disable_appending_item_suffix=disable_appending_item_suffix,
strict_types=strict_types,
empty_enum_field_name=empty_enum_field_name,
custom_class_name_generator=custom_class_name_generator,
field_extra_keys=field_extra_keys,
field_include_all_keys=field_include_all_keys,
field_extra_keys_without_x_prefix=field_extra_keys_without_x_prefix,
wrap_string_literal=wrap_string_literal,
use_title_as_name=use_title_as_name,
use_operation_id_as_name=use_operation_id_as_name,
use_unique_items_as_set=use_unique_items_as_set,
http_headers=http_headers,
http_ignore_tls=http_ignore_tls,
use_annotated=use_annotated,
use_non_positive_negative_number_constrained_types=use_non_positive_negative_number_constrained_types,
original_field_name_delimiter=original_field_name_delimiter,
use_double_quotes=use_double_quotes,
use_union_operator=use_union_operator,
allow_responses_without_content=allow_responses_without_content,
collapse_root_models=collapse_root_models,
special_field_name_prefix=special_field_name_prefix,
remove_special_field_name_prefix=remove_special_field_name_prefix,
capitalise_enum_members=capitalise_enum_members,
keep_model_order=keep_model_order,
known_third_party=known_third_party,
custom_formatters=custom_formatters,
custom_formatters_kwargs=custom_formatters_kwargs,
use_pendulum=use_pendulum,
http_query_parameters=http_query_parameters,
treat_dots_as_module=treat_dots_as_module,
use_exact_imports=use_exact_imports,
default_field_extras=default_field_extras,
target_datetime_class=target_datetime_class,
keyword_only=keyword_only,
no_alias=no_alias,
)
self.open_api_scopes: List[OpenAPIScope] = openapi_scopes or [
OpenAPIScope.Schemas
]
def get_ref_model(self, ref: str) -> Dict[str, Any]:
ref_file, ref_path = self.model_resolver.resolve_ref(ref).split('#', 1)
if ref_file:
ref_body = self._get_ref_body(ref_file)
else: # pragma: no cover
ref_body = self.raw_obj
return get_model_by_path(ref_body, ref_path.split('/')[1:])
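# Illustrative example: a ref resolving to 'definitions.json#/components/schemas/Pet'
# loads 'definitions.json' and returns ref_body['components']['schemas']['Pet'].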
def get_data_type(self, obj: JsonSchemaObject) -> DataType:
# OpenAPI 3.0 doesn't allow `null` in the `type` field or a list of types
# https://swagger.io/docs/specification/data-models/data-types/#null
# OpenAPI 3.1 does allow `null` in the `type` field, which is equivalent to
# a `nullable` flag on the property itself
if obj.nullable and self.strict_nullable and isinstance(obj.type, str):
obj.type = [obj.type, 'null']
return super().get_data_type(obj)
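# Illustrative example: with strict_nullable enabled, a schema such as
# {'type': 'string', 'nullable': True} is widened to type ['string', 'null']
# before delegating to the JSON Schema data-type resolution.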
def resolve_object(
self, obj: Union[ReferenceObject, BaseModelT], object_type: Type[BaseModelT]
) -> BaseModelT:
if isinstance(obj, ReferenceObject):
ref_obj = self.get_ref_model(obj.ref)
return object_type.parse_obj(ref_obj)
return obj
def parse_schema(
self,
name: str,
obj: JsonSchemaObject,
path: List[str],
) -> DataType:
if obj.is_array:
data_type = self.parse_array(name, obj, [*path, name])
elif obj.allOf: # pragma: no cover
data_type = self.parse_all_of(name, obj, path)
elif obj.oneOf or obj.anyOf: # pragma: no cover
data_type = self.parse_root_type(name, obj, path)
if isinstance(data_type, EmptyDataType) and obj.properties:
self.parse_object(name, obj, path)
elif obj.is_object:
data_type = self.parse_object(name, obj, path)
elif obj.enum: # pragma: no cover
data_type = self.parse_enum(name, obj, path)
elif obj.ref: # pragma: no cover
data_type = self.get_ref_data_type(obj.ref)
else:
data_type = self.get_data_type(obj)
self.parse_ref(obj, path)
return data_type
def parse_request_body(
self,
name: str,
request_body: RequestBodyObject,
path: List[str],
) -> None:
for (
media_type,
media_obj,
) in request_body.content.items(): # type: str, MediaObject
if isinstance(media_obj.schema_, JsonSchemaObject):
self.parse_schema(name, media_obj.schema_, [*path, media_type])
def parse_responses(
self,
name: str,
responses: Dict[Union[str, int], Union[ReferenceObject, ResponseObject]],
path: List[str],
) -> Dict[Union[str, int], Dict[str, DataType]]:
data_types: DefaultDict[Union[str, int], Dict[str, DataType]] = defaultdict(
dict
)
for status_code, detail in responses.items():
if isinstance(detail, ReferenceObject):
if not detail.ref: # pragma: no cover
continue
ref_model = self.get_ref_model(detail.ref)
content = {
k: MediaObject.parse_obj(v)
for k, v in ref_model.get('content', {}).items()
}
else:
content = detail.content
if self.allow_responses_without_content and not content:
data_types[status_code]['application/json'] = DataType(type='None')
for content_type, obj in content.items():
object_schema = obj.schema_
if not object_schema: # pragma: no cover
continue
if isinstance(object_schema, JsonSchemaObject):
data_types[status_code][content_type] = self.parse_schema(
name, object_schema, [*path, str(status_code), content_type]
)
else:
data_types[status_code][content_type] = self.get_ref_data_type(
object_schema.ref
)
return data_types
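# The returned mapping is keyed by status code and then content type, e.g.
# (illustrative): {'200': {'application/json': DataType(...)}}.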
@classmethod
def parse_tags(
cls,
name: str,
tags: List[str],
path: List[str],
) -> List[str]:
return tags
@classmethod
def _get_model_name(cls, path_name: str, method: str, suffix: str) -> str:
camel_path_name = snake_to_upper_camel(path_name.replace('/', '_'))
return f'{camel_path_name}{method.capitalize()}{suffix}'
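# Illustrative example: _get_model_name('pets', 'get', suffix='Response')
# returns 'PetsGetResponse'.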
def parse_all_parameters(
self,
name: str,
parameters: List[Union[ReferenceObject, ParameterObject]],
path: List[str],
) -> None:
fields: List[DataModelFieldBase] = []
exclude_field_names: Set[str] = set()
reference = self.model_resolver.add(path, name, class_name=True, unique=True)
for parameter in parameters:
parameter = self.resolve_object(parameter, ParameterObject)
parameter_name = parameter.name
if not parameter_name or parameter.in_ != ParameterLocation.query:
continue
field_name, alias = self.model_resolver.get_valid_field_name_and_alias(
field_name=parameter_name, excludes=exclude_field_names
)
if parameter.schema_:
fields.append(
self.get_object_field(
field_name=field_name,
field=parameter.schema_,
field_type=self.parse_item(
field_name, parameter.schema_, [*path, name, parameter_name]
),
original_field_name=parameter_name,
required=parameter.required,
alias=alias,
)
)
else:
data_types: List[DataType] = []
object_schema: Optional[JsonSchemaObject] = None
for (
media_type,
media_obj,
) in parameter.content.items():
if not media_obj.schema_:
continue
object_schema = self.resolve_object(
media_obj.schema_, JsonSchemaObject
)
data_types.append(
self.parse_item(
field_name,
object_schema,
[*path, name, parameter_name, media_type],
)
)
if not data_types:
continue
if len(data_types) == 1:
data_type = data_types[0]
else:
data_type = self.data_type(data_types=data_types)
# multiple data types are parsed as a field without constraints
object_schema = None
fields.append(
self.data_model_field_type(
name=field_name,
default=object_schema.default if object_schema else None,
data_type=data_type,
required=parameter.required,
alias=alias,
constraints=object_schema.dict()
if object_schema and self.is_constraints_field(object_schema)
else None,
nullable=object_schema.nullable
if object_schema
and self.strict_nullable
and (object_schema.has_default or parameter.required)
else None,
strip_default_none=self.strip_default_none,
extras=self.get_field_extras(object_schema)
if object_schema
else {},
use_annotated=self.use_annotated,
use_field_description=self.use_field_description,
use_default_kwarg=self.use_default_kwarg,
original_name=parameter_name,
has_default=object_schema.has_default
if object_schema
else False,
type_has_null=object_schema.type_has_null
if object_schema
else None,
)
)
if OpenAPIScope.Parameters in self.open_api_scopes and fields:
self.results.append(
self.data_model_type(
fields=fields,
reference=reference,
custom_base_class=self.base_class,
custom_template_dir=self.custom_template_dir,
keyword_only=self.keyword_only,
)
)
def parse_operation(
self,
raw_operation: Dict[str, Any],
path: List[str],
) -> None:
operation = Operation.parse_obj(raw_operation)
path_name, method = path[-2:]
if self.use_operation_id_as_name:
if not operation.operationId:
raise Error(
f'All operations must have an operationId when --use_operation_id_as_name is set. '
f'The following path was missing an operationId: {path_name}'
)
path_name = operation.operationId
method = ''
self.parse_all_parameters(
self._get_model_name(path_name, method, suffix='ParametersQuery'),
operation.parameters,
[*path, 'parameters'],
)
if operation.requestBody:
if isinstance(operation.requestBody, ReferenceObject):
ref_model = self.get_ref_model(operation.requestBody.ref)
request_body = RequestBodyObject.parse_obj(ref_model)
else:
request_body = operation.requestBody
self.parse_request_body(
name=self._get_model_name(path_name, method, suffix='Request'),
request_body=request_body,
path=[*path, 'requestBody'],
)
self.parse_responses(
name=self._get_model_name(path_name, method, suffix='Response'),
responses=operation.responses,
path=[*path, 'responses'],
)
if OpenAPIScope.Tags in self.open_api_scopes:
self.parse_tags(
name=self._get_model_name(path_name, method, suffix='Tags'),
tags=operation.tags,
path=[*path, 'tags'],
)
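# Each operation is expanded using the model-name suffixes 'ParametersQuery',
# 'Request', and 'Response' above, e.g. the hypothetical 'PetsGetResponse'.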
def parse_raw(self) -> None:
for source, path_parts in self._get_context_source_path_parts():
if self.validation:
warn(
'Deprecated: the `--validation` option is deprecated and will be removed in a future '
'release. Please use another tool to validate OpenAPI.\n'
)
try:
from prance import BaseParser
BaseParser(
spec_string=source.text,
backend='openapi-spec-validator',
encoding=self.encoding,
)
except ImportError: # pragma: no cover
warn(
'Warning: Validation was skipped for OpenAPI; `prance` or `openapi-spec-validator` is not '
'installed.\n'
'To use the --validation option after datamodel-code-generator 0.24.0, please run `$ pip install '
"'datamodel-code-generator[validation]'`.\n"
)
specification: Dict[str, Any] = load_yaml(source.text)
self.raw_obj = specification
schemas: Dict[Any, Any] = specification.get('components', {}).get(
'schemas', {}
)
security: Optional[List[Dict[str, List[str]]]] = specification.get(
'security'
)
if OpenAPIScope.Schemas in self.open_api_scopes:
for (
obj_name,
raw_obj,
) in schemas.items(): # type: str, Dict[Any, Any]
self.parse_raw_obj(
obj_name,
raw_obj,
[*path_parts, '#/components', 'schemas', obj_name],
)
if OpenAPIScope.Paths in self.open_api_scopes:
paths: Dict[str, Dict[str, Any]] = specification.get('paths', {})
parameters: List[Dict[str, Any]] = [
self._get_ref_body(p['$ref']) if '$ref' in p else p
for p in paths.get('parameters', [])
if isinstance(p, dict)
]
paths_path = [*path_parts, '#/paths']
for path_name, methods in paths.items():
# Resolve path items if applicable
if '$ref' in methods:
methods = self.get_ref_model(methods['$ref'])
paths_parameters = parameters[:]
if 'parameters' in methods:
paths_parameters.extend(methods['parameters'])
relative_path_name = path_name[1:]
if relative_path_name:
path = [*paths_path, relative_path_name]
else: # pragma: no cover
path = get_special_path('root', paths_path)
for operation_name, raw_operation in methods.items():
if operation_name not in OPERATION_NAMES:
continue
if paths_parameters:
if 'parameters' in raw_operation: # pragma: no cover
raw_operation['parameters'].extend(paths_parameters)
else:
raw_operation['parameters'] = paths_parameters
if security is not None and 'security' not in raw_operation:
raw_operation['security'] = security
self.parse_operation(
raw_operation,
[*path, operation_name],
)
self._resolve_unparsed_json_pointer()
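# A minimal usage sketch (illustrative; 'openapi.yaml' is a hypothetical path):
#   parser = OpenAPIParser(
#       Path('openapi.yaml'),
#       openapi_scopes=[OpenAPIScope.Schemas, OpenAPIScope.Paths],
#   )
#   parser.parse_raw()   # parser.results then holds the generated DataModel objects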
datamodel_code_generator-0.26.4/datamodel_code_generator/py.typed
datamodel_code_generator-0.26.4/datamodel_code_generator/pydantic_patch.py
import sys
import pydantic.typing
def patched_evaluate_forwardref(
forward_ref, globalns, localns=None
): # pragma: no cover
try:
return forward_ref._evaluate(
globalns, localns or None, set()
) # pragma: no cover
except TypeError:
# Python 3.12 made `recursive_guard` keyword-only, so retry passing it by keyword
return forward_ref._evaluate(
globalns, localns or None, set(), recursive_guard=set()
)
# Patch only on Python 3.12 and later
if sys.version_info >= (3, 12):
pydantic.typing.evaluate_forwardref = patched_evaluate_forwardref
datamodel_code_generator-0.26.4/datamodel_code_generator/reference.py
import re
from collections import defaultdict
from contextlib import contextmanager
from enum import Enum, auto
from functools import lru_cache
from itertools import zip_longest
from keyword import iskeyword
from pathlib import Path, PurePath
from typing import (
TYPE_CHECKING,
AbstractSet,
Any,
Callable,
ClassVar,
DefaultDict,
Dict,
Generator,
List,
Mapping,
NamedTuple,
Optional,
Pattern,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
)
from urllib.parse import ParseResult, urlparse
import inflect
import pydantic
from packaging import version
from pydantic import BaseModel
from datamodel_code_generator.util import (
PYDANTIC_V2,
ConfigDict,
cached_property,
model_validator,
)
if TYPE_CHECKING:
from pydantic.typing import DictStrAny
class _BaseModel(BaseModel):
_exclude_fields: ClassVar[Set[str]] = set()
_pass_fields: ClassVar[Set[str]] = set()
if not TYPE_CHECKING:
def __init__(self, **values: Any) -> None:
super().__init__(**values)
for pass_field_name in self._pass_fields:
if pass_field_name in values:
setattr(self, pass_field_name, values[pass_field_name])
if not TYPE_CHECKING:
if PYDANTIC_V2:
def dict(
self,
*,
include: Union[
AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], None
] = None,
exclude: Union[
AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], None
] = None,
by_alias: bool = False,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
) -> 'DictStrAny':
return self.model_dump(
include=include,
exclude=set(exclude or ()) | self._exclude_fields,
by_alias=by_alias,
exclude_unset=exclude_unset,
exclude_defaults=exclude_defaults,
exclude_none=exclude_none,
)
else:
def dict(
self,
*,
include: Union[
AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], None
] = None,
exclude: Union[
AbstractSet[Union[int, str]], Mapping[Union[int, str], Any], None
] = None,
by_alias: bool = False,
skip_defaults: Optional[bool] = None,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
) -> 'DictStrAny':
return super().dict(
include=include,
exclude=set(exclude or ()) | self._exclude_fields,
by_alias=by_alias,
skip_defaults=skip_defaults,
exclude_unset=exclude_unset,
exclude_defaults=exclude_defaults,
exclude_none=exclude_none,
)
class Reference(_BaseModel):
path: str
original_name: str = ''
name: str
duplicate_name: Optional[str] = None
loaded: bool = True
source: Optional[Any] = None
children: List[Any] = []
_exclude_fields: ClassVar[Set[str]] = {'children'}
@model_validator(mode='before')
def validate_original_name(cls, values: Any) -> Any:
"""
If `original_name` is empty, `name` is assigned to it.
"""
if not isinstance(values, dict): # pragma: no cover
return values
original_name = values.get('original_name')
if original_name:
return values
values['original_name'] = values.get('name', original_name)
return values
if PYDANTIC_V2:
# TODO[pydantic]: The following keys were removed: `copy_on_model_validation`.
# Check https://docs.pydantic.dev/dev-v2/migration/#changes-to-config for more information.
model_config = ConfigDict(
arbitrary_types_allowed=True,
ignored_types=(cached_property,),
revalidate_instances='never',
)
else:
class Config:
arbitrary_types_allowed = True
keep_untouched = (cached_property,)
copy_on_model_validation = (
False
if version.parse(pydantic.VERSION) < version.parse('1.9.2')
else 'none'
)
@property
def short_name(self) -> str:
return self.name.rsplit('.', 1)[-1]
SINGULAR_NAME_SUFFIX: str = 'Item'
ID_PATTERN: Pattern[str] = re.compile(r'^#[^/].*')
T = TypeVar('T')
@contextmanager
def context_variable(
setter: Callable[[T], None], current_value: T, new_value: T
) -> Generator[None, None, None]:
previous_value: T = current_value
setter(new_value)
try:
yield
finally:
setter(previous_value)
_UNDER_SCORE_1: Pattern[str] = re.compile(r'([^_])([A-Z][a-z]+)')
_UNDER_SCORE_2: Pattern[str] = re.compile('([a-z0-9])([A-Z])')
@lru_cache()
def camel_to_snake(string: str) -> str:
subbed = _UNDER_SCORE_1.sub(r'\1_\2', string)
return _UNDER_SCORE_2.sub(r'\1_\2', subbed).lower()
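# Illustrative examples:
#   camel_to_snake('HTTPResponseCode')  # -> 'http_response_code'
#   camel_to_snake('already_snake')     # -> 'already_snake'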
class FieldNameResolver:
def __init__(
self,
aliases: Optional[Mapping[str, str]] = None,
snake_case_field: bool = False,
empty_field_name: Optional[str] = None,
original_delimiter: Optional[str] = None,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
no_alias: bool = False,
):
self.aliases: Mapping[str, str] = {} if aliases is None else {**aliases}
self.empty_field_name: str = empty_field_name or '_'
self.snake_case_field = snake_case_field
self.original_delimiter: Optional[str] = original_delimiter
self.special_field_name_prefix: Optional[str] = (
'field' if special_field_name_prefix is None else special_field_name_prefix
)
self.remove_special_field_name_prefix: bool = remove_special_field_name_prefix
self.capitalise_enum_members: bool = capitalise_enum_members
self.no_alias = no_alias
@classmethod
def _validate_field_name(cls, field_name: str) -> bool:
return True
def get_valid_name(
self,
name: str,
excludes: Optional[Set[str]] = None,
ignore_snake_case_field: bool = False,
upper_camel: bool = False,
) -> str:
if not name:
name = self.empty_field_name
if name[0] == '#':
name = name[1:] or self.empty_field_name
if (
self.snake_case_field
and not ignore_snake_case_field
and self.original_delimiter is not None
):
name = snake_to_upper_camel(name, delimiter=self.original_delimiter)
name = re.sub(r'[¹²³⁴⁵⁶⁷⁸⁹]|\W', '_', name)
if name[0].isnumeric():
name = f'{self.special_field_name_prefix}_{name}'
# We should avoid having a field begin with an underscore, as it
# causes pydantic to consider it as private
while name.startswith('_'):
if self.remove_special_field_name_prefix:
name = name[1:]
else:
name = f'{self.special_field_name_prefix}{name}'
break
if (
self.capitalise_enum_members
or self.snake_case_field
and not ignore_snake_case_field
):
name = camel_to_snake(name)
count = 1
if iskeyword(name) or not self._validate_field_name(name):
name += '_'
if upper_camel:
new_name = snake_to_upper_camel(name)
elif self.capitalise_enum_members:
new_name = name.upper()
else:
new_name = name
while (
not (new_name.isidentifier() or not self._validate_field_name(new_name))
or iskeyword(new_name)
or (excludes and new_name in excludes)
):
new_name = f'{name}{count}' if upper_camel else f'{name}_{count}'
count += 1
return new_name
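# Illustrative examples with the default resolver settings:
#   get_valid_name('1st-item')  # -> 'field_1st_item' (non-word chars replaced, numeric start prefixed)
#   get_valid_name('class')     # -> 'class_' (Python keywords get a trailing underscore)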
def get_valid_field_name_and_alias(
self, field_name: str, excludes: Optional[Set[str]] = None
) -> Tuple[str, Optional[str]]:
if field_name in self.aliases:
return self.aliases[field_name], field_name
valid_name = self.get_valid_name(field_name, excludes=excludes)
return (
valid_name,
None if self.no_alias or field_name == valid_name else field_name,
)
class PydanticFieldNameResolver(FieldNameResolver):
@classmethod
def _validate_field_name(cls, field_name: str) -> bool:
# TODO: Support Pydantic V2
return not hasattr(BaseModel, field_name)
class EnumFieldNameResolver(FieldNameResolver):
def get_valid_name(
self,
name: str,
excludes: Optional[Set[str]] = None,
ignore_snake_case_field: bool = False,
upper_camel: bool = False,
) -> str:
return super().get_valid_name(
name='mro_' if name == 'mro' else name,
excludes={'mro'} | (excludes or set()),
ignore_snake_case_field=ignore_snake_case_field,
upper_camel=upper_camel,
)
class ModelType(Enum):
PYDANTIC = auto()
ENUM = auto()
CLASS = auto()
DEFAULT_FIELD_NAME_RESOLVERS: Dict[ModelType, Type[FieldNameResolver]] = {
ModelType.ENUM: EnumFieldNameResolver,
ModelType.PYDANTIC: PydanticFieldNameResolver,
ModelType.CLASS: FieldNameResolver,
}
class ClassName(NamedTuple):
name: str
duplicate_name: Optional[str]
def get_relative_path(base_path: PurePath, target_path: PurePath) -> PurePath:
if base_path == target_path:
return Path('.')
if not target_path.is_absolute():
return target_path
parent_count: int = 0
children: List[str] = []
for base_part, target_part in zip_longest(base_path.parts, target_path.parts):
if base_part == target_part and not parent_count:
continue
if base_part or not target_part:
parent_count += 1
if target_part:
children.append(target_part)
return Path(*['..' for _ in range(parent_count)], *children)
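# Illustrative example:
#   get_relative_path(Path('/a/b'), Path('/a/c/d'))  # -> Path('../c/d')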
class ModelResolver:
def __init__(
self,
exclude_names: Optional[Set[str]] = None,
duplicate_name_suffix: Optional[str] = None,
base_url: Optional[str] = None,
singular_name_suffix: Optional[str] = None,
aliases: Optional[Mapping[str, str]] = None,
snake_case_field: bool = False,
empty_field_name: Optional[str] = None,
custom_class_name_generator: Optional[Callable[[str], str]] = None,
base_path: Optional[Path] = None,
field_name_resolver_classes: Optional[
Dict[ModelType, Type[FieldNameResolver]]
] = None,
original_field_name_delimiter: Optional[str] = None,
special_field_name_prefix: Optional[str] = None,
remove_special_field_name_prefix: bool = False,
capitalise_enum_members: bool = False,
no_alias: bool = False,
) -> None:
self.references: Dict[str, Reference] = {}
self._current_root: Sequence[str] = []
self._root_id: Optional[str] = None
self._root_id_base_path: Optional[str] = None
self.ids: DefaultDict[str, Dict[str, str]] = defaultdict(dict)
self.after_load_files: Set[str] = set()
self.exclude_names: Set[str] = exclude_names or set()
self.duplicate_name_suffix: Optional[str] = duplicate_name_suffix
self._base_url: Optional[str] = base_url
self.singular_name_suffix: str = (
singular_name_suffix
if isinstance(singular_name_suffix, str)
else SINGULAR_NAME_SUFFIX
)
merged_field_name_resolver_classes = DEFAULT_FIELD_NAME_RESOLVERS.copy()
if field_name_resolver_classes: # pragma: no cover
merged_field_name_resolver_classes.update(field_name_resolver_classes)
self.field_name_resolvers: Dict[ModelType, FieldNameResolver] = {
k: v(
aliases=aliases,
snake_case_field=snake_case_field,
empty_field_name=empty_field_name,
original_delimiter=original_field_name_delimiter,
special_field_name_prefix=special_field_name_prefix,
remove_special_field_name_prefix=remove_special_field_name_prefix,
capitalise_enum_members=capitalise_enum_members
if k == ModelType.ENUM
else False,
no_alias=no_alias,
)
for k, v in merged_field_name_resolver_classes.items()
}
self.class_name_generator = (
custom_class_name_generator or self.default_class_name_generator
)
self._base_path: Path = base_path or Path.cwd()
self._current_base_path: Optional[Path] = self._base_path
@property
def current_base_path(self) -> Optional[Path]:
return self._current_base_path
def set_current_base_path(self, base_path: Optional[Path]) -> None:
self._current_base_path = base_path
@property
def base_url(self) -> Optional[str]:
return self._base_url
def set_base_url(self, base_url: Optional[str]) -> None:
self._base_url = base_url
@contextmanager
def current_base_path_context(
self, base_path: Optional[Path]
) -> Generator[None, None, None]:
if base_path:
base_path = (self._base_path / base_path).resolve()
with context_variable(
self.set_current_base_path, self.current_base_path, base_path
):
yield
@contextmanager
def base_url_context(self, base_url: str) -> Generator[None, None, None]:
if self._base_url:
with context_variable(self.set_base_url, self.base_url, base_url):
yield
else:
yield
@property
def current_root(self) -> Sequence[str]:
return self._current_root
def set_current_root(self, current_root: Sequence[str]) -> None:
self._current_root = current_root
@contextmanager
def current_root_context(
self, current_root: Sequence[str]
) -> Generator[None, None, None]:
with context_variable(self.set_current_root, self.current_root, current_root):
yield
@property
def root_id(self) -> Optional[str]:
return self._root_id
@property
def root_id_base_path(self) -> Optional[str]:
return self._root_id_base_path
def set_root_id(self, root_id: Optional[str]) -> None:
if root_id and '/' in root_id:
self._root_id_base_path = root_id.rsplit('/', 1)[0]
else:
self._root_id_base_path = None
self._root_id = root_id
def add_id(self, id_: str, path: Sequence[str]) -> None:
self.ids['/'.join(self.current_root)][id_] = self.resolve_ref(path)
def resolve_ref(self, path: Union[Sequence[str], str]) -> str:
if isinstance(path, str):
joined_path = path
else:
joined_path = self.join_path(path)
if joined_path == '#':
return f"{'/'.join(self.current_root)}#"
if (
self.current_base_path
and not self.base_url
and joined_path[0] != '#'
and not is_url(joined_path)
):
# resolve local file path
file_path, *object_part = joined_path.split('#', 1)
resolved_file_path = Path(self.current_base_path, file_path).resolve()
joined_path = get_relative_path(
self._base_path, resolved_file_path
).as_posix()
if object_part:
joined_path += f'#{object_part[0]}'
if ID_PATTERN.match(joined_path):
ref: str = self.ids['/'.join(self.current_root)][joined_path]
else:
if '#' not in joined_path:
joined_path += '#'
elif joined_path[0] == '#':
joined_path = f'{"/".join(self.current_root)}{joined_path}'
delimiter = joined_path.index('#')
file_path = joined_path[:delimiter]
ref = f'{joined_path[:delimiter]}#{joined_path[delimiter + 1:]}'
if self.root_id_base_path and not (
is_url(joined_path) or Path(self._base_path, file_path).is_file()
):
ref = f'{self.root_id_base_path}/{ref}'
if self.base_url:
from .http import join_url
joined_url = join_url(self.base_url, ref)
if '#' in joined_url:
return joined_url
return f'{joined_url}#'
if is_url(ref):
file_part, path_part = ref.split('#', 1)
if file_part == self.root_id:
return f'{"/".join(self.current_root)}#{path_part}'
target_url: ParseResult = urlparse(file_part)
if not (self.root_id and self.current_base_path):
return ref
root_id_url: ParseResult = urlparse(self.root_id)
if (target_url.scheme, target_url.netloc) == (
root_id_url.scheme,
root_id_url.netloc,
): # pragma: no cover
target_url_path = Path(target_url.path)
relative_target_base = get_relative_path(
Path(root_id_url.path).parent, target_url_path.parent
)
target_path = (
self.current_base_path / relative_target_base / target_url_path.name
)
if target_path.exists():
return f'{target_path.resolve().relative_to(self._base_path)}#{path_part}'
return ref
def is_after_load(self, ref: str) -> bool:
if is_url(ref) or not self.current_base_path:
return False
file_part, *_ = ref.split('#', 1)
absolute_path = Path(self._base_path, file_part).resolve().as_posix()
# both external-ref shapes share the same after-load lookup
if self.is_external_root_ref(ref) or self.is_external_ref(ref):
return absolute_path in self.after_load_files
return False # pragma: no cover
@staticmethod
def is_external_ref(ref: str) -> bool:
return '#' in ref and ref[0] != '#'
@staticmethod
def is_external_root_ref(ref: str) -> bool:
return ref[-1] == '#'
@staticmethod
def join_path(path: Sequence[str]) -> str:
joined_path = '/'.join(p for p in path if p).replace('/#', '#')
if '#' not in joined_path:
joined_path += '#'
return joined_path
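# Illustrative examples:
#   join_path(['foo.yaml', '#', 'definitions', 'Pet'])  # -> 'foo.yaml#/definitions/Pet'
#   join_path(['foo.yaml'])                             # -> 'foo.yaml#'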
def add_ref(self, ref: str, resolved: bool = False) -> Reference:
if not resolved:
path = self.resolve_ref(ref)
else:
path = ref
reference = self.references.get(path)
if reference:
return reference
split_ref = ref.rsplit('/', 1)
if len(split_ref) == 1:
original_name = Path(
split_ref[0].rstrip('#')
if self.is_external_root_ref(path)
else split_ref[0]
).stem
else:
original_name = (
Path(split_ref[1].rstrip('#')).stem
if self.is_external_root_ref(path)
else split_ref[1]
)
name = self.get_class_name(original_name, unique=False).name
reference = Reference(
path=path,
original_name=original_name,
name=name,
loaded=False,
)
self.references[path] = reference
return reference
def add(
self,
path: Sequence[str],
original_name: str,
*,
class_name: bool = False,
singular_name: bool = False,
unique: bool = True,
singular_name_suffix: Optional[str] = None,
loaded: bool = False,
) -> Reference:
joined_path = self.join_path(path)
reference: Optional[Reference] = self.references.get(joined_path)
if reference:
if loaded and not reference.loaded:
reference.loaded = True
if (
not original_name
or original_name == reference.original_name
or original_name == reference.name
):
return reference
name = original_name
duplicate_name: Optional[str] = None
if class_name:
name, duplicate_name = self.get_class_name(
name=name,
unique=unique,
reserved_name=reference.name if reference else None,
singular_name=singular_name,
singular_name_suffix=singular_name_suffix,
)
else:
# TODO: create a validator for module names
name = self.get_valid_field_name(name, model_type=ModelType.CLASS)
if singular_name: # pragma: no cover
name = get_singular_name(
name, singular_name_suffix or self.singular_name_suffix
)
elif unique: # pragma: no cover
unique_name = self._get_unique_name(name)
if unique_name != name:
duplicate_name = name
name = unique_name
if reference:
reference.original_name = original_name
reference.name = name
reference.loaded = loaded
reference.duplicate_name = duplicate_name
else:
reference = Reference(
path=joined_path,
original_name=original_name,
name=name,
loaded=loaded,
duplicate_name=duplicate_name,
)
self.references[joined_path] = reference
return reference
def get(self, path: Union[Sequence[str], str]) -> Optional[Reference]:
return self.references.get(self.resolve_ref(path))
def delete(self, path: Union[Sequence[str], str]) -> None:
if self.resolve_ref(path) in self.references:
del self.references[self.resolve_ref(path)]
def default_class_name_generator(self, name: str) -> str:
# TODO: create a validator for class names
return self.field_name_resolvers[ModelType.CLASS].get_valid_name(
name, ignore_snake_case_field=True, upper_camel=True
)
def get_class_name(
self,
name: str,
unique: bool = True,
reserved_name: Optional[str] = None,
singular_name: bool = False,
singular_name_suffix: Optional[str] = None,
) -> ClassName:
if '.' in name:
split_name = name.split('.')
prefix = '.'.join(
# TODO: create a validator for class names
self.field_name_resolvers[ModelType.CLASS].get_valid_name(
n, ignore_snake_case_field=True
)
for n in split_name[:-1]
)
prefix += '.'
class_name = split_name[-1]
else:
prefix = ''
class_name = name
class_name = self.class_name_generator(class_name)
if singular_name:
class_name = get_singular_name(
class_name, singular_name_suffix or self.singular_name_suffix
)
duplicate_name: Optional[str] = None
if unique:
if reserved_name == class_name:
return ClassName(name=class_name, duplicate_name=duplicate_name)
unique_name = self._get_unique_name(class_name, camel=True)
if unique_name != class_name:
duplicate_name = class_name
class_name = unique_name
return ClassName(name=f'{prefix}{class_name}', duplicate_name=duplicate_name)
def _get_unique_name(self, name: str, camel: bool = False) -> str:
unique_name: str = name
count: int = 1
reference_names = {
r.name for r in self.references.values()
} | self.exclude_names
while unique_name in reference_names:
if self.duplicate_name_suffix:
name_parts: List[Union[str, int]] = [
name,
self.duplicate_name_suffix,
count - 1,
]
else:
name_parts = [name, count]
delimiter = '' if camel else '_'
unique_name = delimiter.join(str(p) for p in name_parts if p)
count += 1
return unique_name
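# Illustrative examples: if 'Pet' is already taken, the fallbacks are 'Pet_1'
# (or 'Pet1' with camel=True); with duplicate_name_suffix='Duplicate' the first
# fallback becomes 'Pet_Duplicate', then 'Pet_Duplicate_1', and so on.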
@classmethod
def validate_name(cls, name: str) -> bool:
return name.isidentifier() and not iskeyword(name)
def get_valid_field_name(
self,
name: str,
excludes: Optional[Set[str]] = None,
model_type: ModelType = ModelType.PYDANTIC,
) -> str:
return self.field_name_resolvers[model_type].get_valid_name(name, excludes)
def get_valid_field_name_and_alias(
self,
field_name: str,
excludes: Optional[Set[str]] = None,
model_type: ModelType = ModelType.PYDANTIC,
) -> Tuple[str, Optional[str]]:
return self.field_name_resolvers[model_type].get_valid_field_name_and_alias(
field_name, excludes
)
@lru_cache()
def get_singular_name(name: str, suffix: str = SINGULAR_NAME_SUFFIX) -> str:
singular_name = inflect_engine.singular_noun(name)
if singular_name is False:
singular_name = f'{name}{suffix}'
return singular_name
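# Illustrative examples:
#   get_singular_name('Pets')  # -> 'Pet'
#   get_singular_name('Pet')   # -> 'PetItem' (already singular, so the suffix is appended)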
@lru_cache()
def snake_to_upper_camel(word: str, delimiter: str = '_') -> str:
prefix = ''
if word.startswith(delimiter):
prefix = '_'
word = word[1:]
return prefix + ''.join(x[0].upper() + x[1:] for x in word.split(delimiter) if x)
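# Illustrative examples:
#   snake_to_upper_camel('user_name')  # -> 'UserName'
#   snake_to_upper_camel('_private')   # -> '_Private' (a leading delimiter is preserved)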
def is_url(ref: str) -> bool:
return ref.startswith(('https://', 'http://'))
inflect_engine = inflect.engine()
datamodel_code_generator-0.26.4/datamodel_code_generator/types.py
import re
from abc import ABC, abstractmethod
from enum import Enum, auto
from functools import lru_cache
from itertools import chain
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Dict,
FrozenSet,
Iterable,
Iterator,
List,
Optional,
Pattern,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
)
import pydantic
from packaging import version
from pydantic import StrictBool, StrictInt, StrictStr, create_model
from datamodel_code_generator.format import DatetimeClassType, PythonVersion
from datamodel_code_generator.imports import (
IMPORT_ABC_MAPPING,
IMPORT_ABC_SEQUENCE,
IMPORT_ABC_SET,
IMPORT_DICT,
IMPORT_FROZEN_SET,
IMPORT_LIST,
IMPORT_LITERAL,
IMPORT_LITERAL_BACKPORT,
IMPORT_MAPPING,
IMPORT_OPTIONAL,
IMPORT_SEQUENCE,
IMPORT_SET,
IMPORT_UNION,
Import,
)
from datamodel_code_generator.reference import Reference, _BaseModel
from datamodel_code_generator.util import (
PYDANTIC_V2,
ConfigDict,
Protocol,
runtime_checkable,
)
if PYDANTIC_V2:
from pydantic import GetCoreSchemaHandler
from pydantic_core import core_schema
T = TypeVar('T')
OPTIONAL = 'Optional'
OPTIONAL_PREFIX = f'{OPTIONAL}['
UNION = 'Union'
UNION_PREFIX = f'{UNION}['
UNION_DELIMITER = ', '
UNION_PATTERN: Pattern[str] = re.compile(r'\s*,\s*')
UNION_OPERATOR_DELIMITER = ' | '
UNION_OPERATOR_PATTERN: Pattern[str] = re.compile(r'\s*\|\s*')
NONE = 'None'
ANY = 'Any'
LITERAL = 'Literal'
SEQUENCE = 'Sequence'
FROZEN_SET = 'FrozenSet'
MAPPING = 'Mapping'
DICT = 'Dict'
SET = 'Set'
LIST = 'List'
STANDARD_DICT = 'dict'
STANDARD_LIST = 'list'
STANDARD_SET = 'set'
STR = 'str'
NOT_REQUIRED = 'NotRequired'
NOT_REQUIRED_PREFIX = f'{NOT_REQUIRED}['
class StrictTypes(Enum):
str = 'str'
bytes = 'bytes'
int = 'int'
float = 'float'
bool = 'bool'
class UnionIntFloat:
def __init__(self, value: Union[int, float]) -> None:
self.value: Union[int, float] = value
def __int__(self) -> int:
return int(self.value)
def __float__(self) -> float:
return float(self.value)
def __str__(self) -> str:
return str(self.value)
@classmethod
def __get_validators__(cls) -> Iterator[Callable[[Any], Any]]:
yield cls.validate
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: 'GetCoreSchemaHandler'
) -> 'core_schema.CoreSchema':
from_int_schema = core_schema.chain_schema(
[
core_schema.union_schema(
[core_schema.int_schema(), core_schema.float_schema()]
),
core_schema.no_info_plain_validator_function(cls.validate),
]
)
return core_schema.json_or_python_schema(
json_schema=from_int_schema,
python_schema=core_schema.union_schema(
[
# check if it's an instance first before doing any further work
core_schema.is_instance_schema(UnionIntFloat),
from_int_schema,
]
),
serialization=core_schema.plain_serializer_function_ser_schema(
lambda instance: instance.value
),
)
@classmethod
def validate(cls, v: Any) -> 'UnionIntFloat':
if isinstance(v, UnionIntFloat):
return v
elif not isinstance(v, (int, float)): # pragma: no cover
try:
int(v)
return cls(v)
except (TypeError, ValueError):
pass
try:
float(v)
return cls(v)
except (TypeError, ValueError):
pass
raise TypeError(f'{v} is not int or float')
return cls(v)
def chain_as_tuple(*iterables: Iterable[T]) -> Tuple[T, ...]:
return tuple(chain(*iterables))
@lru_cache()
def _remove_none_from_type(
type_: str, split_pattern: Pattern[str], delimiter: str
) -> List[str]:
types: List[str] = []
split_type: str = ''
inner_count: int = 0
for part in re.split(split_pattern, type_):
if part == NONE:
continue
inner_count += part.count('[') - part.count(']')
if split_type:
split_type += delimiter
if inner_count == 0:
if split_type:
types.append(f'{split_type}{part}')
else:
types.append(part)
split_type = ''
continue
else:
split_type += part
return types
def _remove_none_from_union(type_: str, use_union_operator: bool) -> str:
if use_union_operator:
if not re.match(r'^\w+ | ', type_):
return type_
return UNION_OPERATOR_DELIMITER.join(
_remove_none_from_type(
type_, UNION_OPERATOR_PATTERN, UNION_OPERATOR_DELIMITER
)
)
if not type_.startswith(UNION_PREFIX):
return type_
inner_types = _remove_none_from_type(
type_[len(UNION_PREFIX) :][:-1], UNION_PATTERN, UNION_DELIMITER
)
if len(inner_types) == 1:
return inner_types[0]
return f'{UNION_PREFIX}{UNION_DELIMITER.join(inner_types)}]'
@lru_cache()
def get_optional_type(type_: str, use_union_operator: bool) -> str:
type_ = _remove_none_from_union(type_, use_union_operator)
if not type_ or type_ == NONE:
return NONE
if use_union_operator:
return f'{type_} | {NONE}'
return f'{OPTIONAL_PREFIX}{type_}]'
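# Illustrative examples:
#   get_optional_type('str', False)               # -> 'Optional[str]'
#   get_optional_type('Union[str, None]', False)  # -> 'Optional[str]'
#   get_optional_type('str', True)                # -> 'str | None'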
@runtime_checkable
class Modular(Protocol):
@property
def module_name(self) -> str:
raise NotImplementedError
@runtime_checkable
class Nullable(Protocol):
@property
def nullable(self) -> bool:
raise NotImplementedError
class DataType(_BaseModel):
if PYDANTIC_V2:
# TODO[pydantic]: The following keys were removed: `copy_on_model_validation`.
# Check https://docs.pydantic.dev/dev-v2/migration/#changes-to-config for more information.
model_config = ConfigDict(
extra='forbid',
revalidate_instances='never',
)
else:
if not TYPE_CHECKING:
@classmethod
def model_rebuild(cls) -> None:
cls.update_forward_refs()
class Config:
extra = 'forbid'
copy_on_model_validation = (
False
if version.parse(pydantic.VERSION) < version.parse('1.9.2')
else 'none'
)
type: Optional[str] = None
reference: Optional[Reference] = None
data_types: List['DataType'] = []
is_func: bool = False
kwargs: Optional[Dict[str, Any]] = None
import_: Optional[Import] = None
python_version: PythonVersion = PythonVersion.PY_38
is_optional: bool = False
is_dict: bool = False
is_list: bool = False
is_set: bool = False
is_custom_type: bool = False
literals: List[Union[StrictBool, StrictInt, StrictStr]] = []
use_standard_collections: bool = False
use_generic_container: bool = False
use_union_operator: bool = False
alias: Optional[str] = None
parent: Optional[Any] = None
children: List[Any] = []
strict: bool = False
dict_key: Optional['DataType'] = None
_exclude_fields: ClassVar[Set[str]] = {'parent', 'children'}
_pass_fields: ClassVar[Set[str]] = {'parent', 'children', 'data_types', 'reference'}
@classmethod
def from_import(
cls: Type['DataTypeT'],
import_: Import,
*,
is_optional: bool = False,
is_dict: bool = False,
is_list: bool = False,
is_set: bool = False,
is_custom_type: bool = False,
strict: bool = False,
kwargs: Optional[Dict[str, Any]] = None,
) -> 'DataTypeT':
return cls(
type=import_.import_,
import_=import_,
is_optional=is_optional,
is_dict=is_dict,
is_list=is_list,
is_set=is_set,
is_func=bool(kwargs),
is_custom_type=is_custom_type,
strict=strict,
kwargs=kwargs,
)
@property
def unresolved_types(self) -> FrozenSet[str]:
return frozenset(
{
t.reference.path
for data_types in self.data_types
for t in data_types.all_data_types
if t.reference
}
| ({self.reference.path} if self.reference else set())
)
def replace_reference(self, reference: Optional[Reference]) -> None:
if not self.reference: # pragma: no cover
raise Exception(
f"`{self.__class__.__name__}.replace_reference()` can't be called"
f' when `reference` field is empty.'
)
self_id = id(self)
self.reference.children = [
c for c in self.reference.children if id(c) != self_id
]
self.reference = reference
if reference:
reference.children.append(self)
def remove_reference(self) -> None:
self.replace_reference(None)
@property
def module_name(self) -> Optional[str]:
if self.reference and isinstance(self.reference.source, Modular):
return self.reference.source.module_name
return None # pragma: no cover
@property
def full_name(self) -> str:
module_name = self.module_name
if module_name:
return f'{module_name}.{self.reference.short_name}' # type: ignore
return self.reference.short_name # type: ignore
@property
def all_data_types(self) -> Iterator['DataType']:
for data_type in self.data_types:
yield from data_type.all_data_types
yield self
@property
def all_imports(self) -> Iterator[Import]:
for data_type in self.data_types:
yield from data_type.all_imports
yield from self.imports
@property
def imports(self) -> Iterator[Import]:
if self.import_:
yield self.import_
imports: Tuple[Tuple[bool, Import], ...] = (
(self.is_optional and not self.use_union_operator, IMPORT_OPTIONAL),
(len(self.data_types) > 1 and not self.use_union_operator, IMPORT_UNION),
)
if any(self.literals):
import_literal = (
IMPORT_LITERAL
if self.python_version.has_literal_type
else IMPORT_LITERAL_BACKPORT
)
imports = (
*imports,
(any(self.literals), import_literal),
)
if self.use_generic_container:
if self.use_standard_collections:
imports = (
*imports,
(self.is_list, IMPORT_ABC_SEQUENCE),
(self.is_set, IMPORT_ABC_SET),
(self.is_dict, IMPORT_ABC_MAPPING),
)
else:
imports = (
*imports,
(self.is_list, IMPORT_SEQUENCE),
(self.is_set, IMPORT_FROZEN_SET),
(self.is_dict, IMPORT_MAPPING),
)
elif not self.use_standard_collections:
imports = (
*imports,
(self.is_list, IMPORT_LIST),
(self.is_set, IMPORT_SET),
(self.is_dict, IMPORT_DICT),
)
for field, import_ in imports:
if field and import_ != self.import_:
yield import_
if self.dict_key:
yield from self.dict_key.imports
def __init__(self, **values: Any) -> None:
if not TYPE_CHECKING:
super().__init__(**values)
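        # Collapse `Optional[Any]` union members: when another concrete type
        # is present, mark the whole type optional and drop the Any member.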
for type_ in self.data_types:
if type_.type == ANY and type_.is_optional:
if any(t for t in self.data_types if t.type != ANY): # pragma: no cover
self.is_optional = True
self.data_types = [
t
for t in self.data_types
if not (t.type == ANY and t.is_optional)
]
break # pragma: no cover
for data_type in self.data_types:
if data_type.reference or data_type.data_types:
data_type.parent = self
if self.reference:
self.reference.children.append(self)
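    # Build the final annotation string: fold unions (deduplicating members
    # and hoisting None into Optional), wrap container types, quote forward
    # references on Python 3.6, and render functional types with kwargs.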
@property
def type_hint(self) -> str:
type_: Optional[str] = self.alias or self.type
if not type_:
if self.is_union:
data_types: List[str] = []
for data_type in self.data_types:
data_type_type = data_type.type_hint
if data_type_type in data_types: # pragma: no cover
continue
if NONE == data_type_type:
self.is_optional = True
continue
non_optional_data_type_type = _remove_none_from_union(
data_type_type, self.use_union_operator
)
if non_optional_data_type_type != data_type_type:
self.is_optional = True
data_types.append(non_optional_data_type_type)
if len(data_types) == 1:
type_ = data_types[0]
else:
if self.use_union_operator:
type_ = UNION_OPERATOR_DELIMITER.join(data_types)
else:
type_ = f'{UNION_PREFIX}{UNION_DELIMITER.join(data_types)}]'
elif len(self.data_types) == 1:
type_ = self.data_types[0].type_hint
elif self.literals:
type_ = f"{LITERAL}[{', '.join(repr(literal) for literal in self.literals)}]"
else:
if self.reference:
type_ = self.reference.short_name
else:
# TODO support strict Any
# type_ = 'Any'
type_ = ''
if self.reference:
source = self.reference.source
if isinstance(source, Nullable) and source.nullable:
self.is_optional = True
if self.reference and self.python_version == PythonVersion.PY_36:
type_ = f"'{type_}'"
if self.is_list:
if self.use_generic_container:
list_ = SEQUENCE
elif self.use_standard_collections:
list_ = STANDARD_LIST
else:
list_ = LIST
type_ = f'{list_}[{type_}]' if type_ else list_
elif self.is_set:
if self.use_generic_container:
set_ = FROZEN_SET
elif self.use_standard_collections:
set_ = STANDARD_SET
else:
set_ = SET
type_ = f'{set_}[{type_}]' if type_ else set_
elif self.is_dict:
if self.use_generic_container:
dict_ = MAPPING
elif self.use_standard_collections:
dict_ = STANDARD_DICT
else:
dict_ = DICT
if self.dict_key or type_:
key = self.dict_key.type_hint if self.dict_key else STR
type_ = f'{dict_}[{key}, {type_ or ANY}]'
else: # pragma: no cover
type_ = dict_
if self.is_optional and type_ != ANY:
return get_optional_type(type_, self.use_union_operator)
elif self.is_func:
if self.kwargs:
kwargs: str = ', '.join(f'{k}={v}' for k, v in self.kwargs.items())
return f'{type_}({kwargs})'
return f'{type_}()'
return type_
@property
def is_union(self) -> bool:
return len(self.data_types) > 1
DataType.model_rebuild()
DataTypeT = TypeVar('DataTypeT', bound=DataType)
class EmptyDataType(DataType):
pass
class Types(Enum):
integer = auto()
int32 = auto()
int64 = auto()
number = auto()
float = auto()
double = auto()
decimal = auto()
time = auto()
string = auto()
byte = auto()
binary = auto()
date = auto()
date_time = auto()
timedelta = auto()
password = auto()
path = auto()
email = auto()
uuid = auto()
uuid1 = auto()
uuid2 = auto()
uuid3 = auto()
uuid4 = auto()
uuid5 = auto()
uri = auto()
hostname = auto()
ipv4 = auto()
ipv4_network = auto()
ipv6 = auto()
ipv6_network = auto()
boolean = auto()
object = auto()
null = auto()
array = auto()
any = auto()
class DataTypeManager(ABC):
def __init__(
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
target_datetime_class: Optional[DatetimeClassType] = None,
) -> None:
self.python_version = python_version
self.use_standard_collections: bool = use_standard_collections
self.use_generic_container_types: bool = use_generic_container_types
self.strict_types: Sequence[StrictTypes] = strict_types or ()
self.use_non_positive_negative_number_constrained_types: bool = (
use_non_positive_negative_number_constrained_types
)
self.use_union_operator: bool = use_union_operator
self.use_pendulum: bool = use_pendulum
        self.target_datetime_class: Optional[DatetimeClassType] = target_datetime_class
if (
use_generic_container_types and python_version == PythonVersion.PY_36
): # pragma: no cover
raise Exception(
                'use_generic_container_types cannot be used with target_python_version 3.6.\n'
                ' That version will not be supported in a future release'
)
if TYPE_CHECKING:
self.data_type: Type[DataType]
else:
self.data_type: Type[DataType] = create_model(
'ContextDataType',
python_version=(PythonVersion, python_version),
use_standard_collections=(bool, use_standard_collections),
use_generic_container=(bool, use_generic_container_types),
use_union_operator=(bool, use_union_operator),
__base__=DataType,
)
@abstractmethod
def get_data_type(self, types: Types, **kwargs: Any) -> DataType:
raise NotImplementedError
def get_data_type_from_full_path(
self, full_path: str, is_custom_type: bool
) -> DataType:
return self.data_type.from_import(
Import.from_full_path(full_path), is_custom_type=is_custom_type
)
def get_data_type_from_value(self, value: Any) -> DataType:
type_: Optional[Types] = None
if isinstance(value, str):
type_ = Types.string
elif isinstance(value, bool):
type_ = Types.boolean
elif isinstance(value, int):
type_ = Types.integer
elif isinstance(value, float):
type_ = Types.float
elif isinstance(value, dict):
return self.data_type.from_import(IMPORT_DICT)
elif isinstance(value, list):
return self.data_type.from_import(IMPORT_LIST)
else:
type_ = Types.any
return self.get_data_type(type_)
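# --- Illustrative sketch (not part of the original module) -----------------
# A minimal concrete DataTypeManager showing how `get_data_type` plugs into
# `DataType.type_hint`. The subclass and its mapping are hypothetical.
if __name__ == '__main__':

    class _StrOnlyDataTypeManager(DataTypeManager):
        def get_data_type(self, types: Types, **kwargs: Any) -> DataType:
            # Map every schema type to plain `str` for demonstration only.
            return self.data_type(type='str')

    _manager = _StrOnlyDataTypeManager()
    print(_manager.get_data_type_from_value('a').type_hint)  # -> str
    print(_manager.get_data_type_from_value([1]).type_hint)  # -> List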
datamodel_code_generator-0.26.4/datamodel_code_generator/util.py
from __future__ import annotations
import copy
from functools import cached_property # noqa: F401
from pathlib import Path
from typing import ( # noqa: F401
TYPE_CHECKING,
Any,
Callable,
Dict,
Protocol,
TypeVar,
runtime_checkable,
)
import pydantic
from packaging import version
from pydantic import BaseModel as _BaseModel
PYDANTIC_VERSION = version.parse(
pydantic.VERSION if isinstance(pydantic.VERSION, str) else str(pydantic.VERSION)
)
PYDANTIC_V2: bool = PYDANTIC_VERSION >= version.parse('2.0b3')
if TYPE_CHECKING:
from typing import Literal
from yaml import SafeLoader
def load_toml(path: Path) -> Dict[str, Any]: ...
else:
try:
from yaml import CSafeLoader as SafeLoader
except ImportError: # pragma: no cover
from yaml import SafeLoader
try:
import tomllib
def load_toml(path: Path) -> Dict[str, Any]:
with path.open('rb') as f:
return tomllib.load(f)
except ImportError:
import toml
def load_toml(path: Path) -> Dict[str, Any]:
return toml.load(path)
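# Patch a copy of SafeLoader so YAML timestamps are constructed as plain
# strings; schema values such as `2020-01-01` then survive as text.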
SafeLoaderTemp = copy.deepcopy(SafeLoader)
SafeLoaderTemp.yaml_constructors = copy.deepcopy(SafeLoader.yaml_constructors)
SafeLoaderTemp.add_constructor(
'tag:yaml.org,2002:timestamp',
SafeLoaderTemp.yaml_constructors['tag:yaml.org,2002:str'],
)
SafeLoader = SafeLoaderTemp
Model = TypeVar('Model', bound=_BaseModel)
def model_validator(
mode: Literal['before', 'after'] = 'after',
) -> Callable[[Callable[[Model, Any], Any]], Callable[[Model, Any], Any]]:
def inner(method: Callable[[Model, Any], Any]) -> Callable[[Model, Any], Any]:
if PYDANTIC_V2:
from pydantic import model_validator as model_validator_v2
return model_validator_v2(mode=mode)(method) # type: ignore
else:
from pydantic import root_validator
return root_validator(method, pre=mode == 'before') # type: ignore
return inner
def field_validator(
field_name: str,
*fields: str,
mode: Literal['before', 'after'] = 'after',
) -> Callable[[Callable[[Model, Any], Any]], Callable[[Model, Any], Any]]:
def inner(method: Callable[[Model, Any], Any]) -> Callable[[Model, Any], Any]:
if PYDANTIC_V2:
from pydantic import field_validator as field_validator_v2
return field_validator_v2(field_name, *fields, mode=mode)(method) # type: ignore
else:
from pydantic import validator
return validator(field_name, *fields, pre=mode == 'before')(method) # type: ignore
return inner
if PYDANTIC_V2:
from pydantic import ConfigDict as ConfigDict
else:
ConfigDict = dict # type: ignore
class BaseModel(_BaseModel):
if PYDANTIC_V2:
model_config = ConfigDict(strict=False)
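# --- Illustrative sketch (not part of the original module) -----------------
# The shims above let one validator body target both pydantic v1 and v2.
# The model below is hypothetical.
if __name__ == '__main__':

    class _Point(BaseModel):
        x: int
        y: int

        @field_validator('x', 'y', mode='before')
        def _coerce(cls, value: Any) -> Any:
            # Accept numeric strings on either pydantic major version.
            return int(value) if isinstance(value, str) else value

    print(_Point(x='1', y=2))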
datamodel_code_generator-0.26.4/datamodel_code_generator/version.py
version: str = '0.26.4'
datamodel_code_generator-0.26.4/pyproject.toml
[tool.poetry]
name = "datamodel-code-generator"
version = "0.26.4"
description = "Datamodel Code Generator"
authors = ["Koudai Aono <koxudaxi@gmail.com>"]
readme = "README.md"
license = "MIT"
homepage = "https://github.com/koxudaxi/datamodel-code-generator"
repository = "https://github.com/koxudaxi/datamodel-code-generator"
classifiers = [
"Development Status :: 4 - Beta",
"Natural Language :: English",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython"]
[build-system]
requires = ["poetry-core>=1.0.0", "poetry-dynamic-versioning"]
build-backend = "poetry.core.masonry.api"
[tool.poetry-dynamic-versioning]
enable = false
vcs = "git"
# language=RegExp
pattern = '^(?P<base>\d+\.\d+\.\d+)(-?((?P<stage>[a-zA-Z]+)\.?(?P<revision>\d+)?))?$'
[tool.poetry-dynamic-versioning.substitution]
files = ["*/version.py"]
patterns = ["(^version: str = ')[^']*(')"]
[tool.poetry.scripts]
datamodel-codegen = "datamodel_code_generator.__main__:main"
[tool.poetry.dependencies]
python = "^3.8"
pydantic = [
{extras = ["email"], version = ">=1.5.1,<3.0,!=2.4.0", python = "<3.10"},
{extras = ["email"], version = ">=1.9.0,<3.0,!=2.4.0", python = "~3.10"},
{extras = ["email"], version = ">=1.10.0,<3.0,!=2.4.0", python = "^3.11"},
{extras = ["email"], version = ">=1.10.0,!=2.0.0,!=2.0.1,<3.0,!=2.4.0", python = "^3.12"}
]
argcomplete = ">=1.10,<4.0"
jinja2 = ">=2.10.1,<4.0"
inflect = ">=4.1.0,<6.0"
black = ">=19.10b0"
isort = ">=4.3.21,<6.0"
genson = ">=1.2.1,<2.0"
packaging = "*"
prance = { version = ">=0.18.2", optional = true }
openapi-spec-validator = { version = ">=0.2.8,<0.7.0", optional = true }
toml = { version = ">=0.10.0,<1.0.0", python = "<3.11" }
PySnooper = { version = ">=0.4.1,<2.0.0", optional = true }
httpx = { version = "*", optional = true }
pyyaml = ">=6.0.1"
graphql-core = {version = "^3.2.3", optional = true}
[tool.poetry.group.dev.dependencies]
pytest = ">6.1"
pytest-benchmark = "*"
pytest-cov = ">=2.12.1"
pytest-mock = "*"
mypy = ">=1.4.1,<1.5.0"
black = ">=23.3,<25.0"
freezegun = "*"
types-Jinja2 = "*"
types-PyYAML = "*"
types-toml = "*"
types-setuptools = ">=67.6.0.5,<70.0.0.0"
pydantic = "*"
httpx = ">=0.24.1"
PySnooper = "*"
ruff = ">=0.0.290,<0.7.5"
ruff-lsp = ">=0.0.39,<0.0.60"
pre-commit = "*"
pytest-xdist = "^3.3.1"
prance = "*"
openapi-spec-validator = "*"
pytest-codspeed = "^2.2.0"
[tool.poetry.extras]
http = ["httpx"]
graphql = ["graphql-core"]
debug = ["PySnooper"]
validation = ["prance", "openapi-spec-validator"]
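# Note (illustrative, not part of the original file): the extras above
# combine at install time, e.g.
#   pip install 'datamodel-code-generator[http,graphql]'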
[tool.ruff]
line-length = 88
extend-select = ['Q', 'RUF100', 'C4', 'UP', 'I']
flake8-quotes = {inline-quotes = 'single', multiline-quotes = 'double'}
target-version = 'py37'
ignore = ['E501', 'UP006', 'UP007', 'Q000', 'Q003' ]
extend-exclude = ['tests/data']
[tool.ruff.format]
quote-style = "single"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
[tool.mypy]
plugins = "pydantic.mypy"
ignore_missing_imports = true
follow_imports = "silent"
strict_optional = true
warn_redundant_casts = true
warn_unused_ignores = true
disallow_any_generics = true
check_untyped_defs = true
no_implicit_reexport = true
disallow_untyped_defs = true
[tool.pydantic-mypy]
init_forbid_extra = true
init_typed = true
warn_required_dynamic_aliases = false
warn_untyped_fields = true
[tool.pytest.ini_options]
filterwarnings = "ignore::DeprecationWarning:distutils"
norecursedirs = "tests/data/*"
[tool.coverage.run]
source = ["datamodel_code_generator"]
branch = true
omit = ["scripts/*"]
[tool.coverage.report]
ignore_errors = true
exclude_lines = [
"if self.debug:",
"pragma: no cover",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"if TYPE_CHECKING:",
"if not TYPE_CHECKING:"]
omit = ["tests/*"]
[tool.pydantic-pycharm-plugin]
ignore-init-method-arguments = true
[tool.pydantic-pycharm-plugin.parsable-types]
# str field may parse int and float
str = ["int", "float"]
[tool.codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = '.git,*.lock,tests'
# check-hidden = true
# ignore-regex = ''
# ignore-words-list = ''
datamodel_code_generator-0.26.4/PKG-INFO
Metadata-Version: 2.1
Name: datamodel-code-generator
Version: 0.26.4
Summary: Datamodel Code Generator
Home-page: https://github.com/koxudaxi/datamodel-code-generator
License: MIT
Author: Koudai Aono
Author-email: koxudaxi@gmail.com
Requires-Python: >=3.8,<4.0
Classifier: Development Status :: 4 - Beta
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Provides-Extra: debug
Provides-Extra: graphql
Provides-Extra: http
Provides-Extra: validation
Requires-Dist: PySnooper (>=0.4.1,<2.0.0) ; extra == "debug"
Requires-Dist: argcomplete (>=1.10,<4.0)
Requires-Dist: black (>=19.10b0)
Requires-Dist: genson (>=1.2.1,<2.0)
Requires-Dist: graphql-core (>=3.2.3,<4.0.0) ; extra == "graphql"
Requires-Dist: httpx ; extra == "http"
Requires-Dist: inflect (>=4.1.0,<6.0)
Requires-Dist: isort (>=4.3.21,<6.0)
Requires-Dist: jinja2 (>=2.10.1,<4.0)
Requires-Dist: openapi-spec-validator (>=0.2.8,<0.7.0) ; extra == "validation"
Requires-Dist: packaging
Requires-Dist: prance (>=0.18.2) ; extra == "validation"
Requires-Dist: pydantic[email] (>=1.10.0,!=2.0.0,!=2.0.1,<3.0,!=2.4.0) ; python_version >= "3.12" and python_version < "4.0"
Requires-Dist: pydantic[email] (>=1.10.0,<3.0,!=2.4.0) ; python_version >= "3.11" and python_version < "4.0"
Requires-Dist: pydantic[email] (>=1.5.1,<3.0,!=2.4.0) ; python_version < "3.10"
Requires-Dist: pydantic[email] (>=1.9.0,<3.0,!=2.4.0) ; python_version >= "3.10" and python_version < "3.11"
Requires-Dist: pyyaml (>=6.0.1)
Requires-Dist: toml (>=0.10.0,<1.0.0) ; python_version < "3.11"
Project-URL: Repository, https://github.com/koxudaxi/datamodel-code-generator
Description-Content-Type: text/markdown
# datamodel-code-generator
This code generator creates [pydantic v1 and v2](https://docs.pydantic.dev/) models, [dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html), [typing.TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict)
and [msgspec.Struct](https://github.com/jcrist/msgspec) from an OpenAPI file and other sources.
[](https://pypi.python.org/pypi/datamodel-code-generator)
[](https://anaconda.org/conda-forge/datamodel-code-generator)
[](https://pepy.tech/project/datamodel-code-generator)
[](https://pypi.python.org/pypi/datamodel-code-generator)
[](https://codecov.io/gh/koxudaxi/datamodel-code-generator)

[](https://github.com/astral-sh/ruff)
[](https://pydantic.dev)
[](https://pydantic.dev)
## Help
See [documentation](https://koxudaxi.github.io/datamodel-code-generator) for more details.
## Quick Installation
To install `datamodel-code-generator`:
```bash
$ pip install datamodel-code-generator
```
## Simple Usage
You can generate models from a local file.
```bash
$ datamodel-codegen --input api.yaml --output model.py
```
api.yaml
```yaml
openapi: "3.0.0"
info:
version: 1.0.0
title: Swagger Petstore
license:
name: MIT
servers:
- url: http://petstore.swagger.io/v1
paths:
/pets:
get:
summary: List all pets
operationId: listPets
tags:
- pets
parameters:
- name: limit
in: query
description: How many items to return at one time (max 100)
required: false
schema:
type: integer
format: int32
responses:
'200':
description: A paged array of pets
headers:
x-next:
description: A link to the next page of responses
schema:
type: string
content:
application/json:
schema:
$ref: "#/components/schemas/Pets"
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
post:
summary: Create a pet
operationId: createPets
tags:
- pets
responses:
'201':
description: Null response
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
/pets/{petId}:
get:
summary: Info for a specific pet
operationId: showPetById
tags:
- pets
parameters:
- name: petId
in: path
required: true
description: The id of the pet to retrieve
schema:
type: string
responses:
'200':
description: Expected response to a valid request
content:
application/json:
schema:
$ref: "#/components/schemas/Pets"
default:
description: unexpected error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-amazon-apigateway-integration:
uri:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
passthroughBehavior: when_no_templates
httpMethod: POST
type: aws_proxy
components:
schemas:
Pet:
required:
- id
- name
properties:
id:
type: integer
format: int64
name:
type: string
tag:
type: string
Pets:
type: array
items:
$ref: "#/components/schemas/Pet"
Error:
required:
- code
- message
properties:
code:
type: integer
format: int32
message:
type: string
apis:
type: array
items:
type: object
properties:
apiKey:
type: string
description: To be used as a dataset parameter value
apiVersionNumber:
type: string
description: To be used as a version parameter value
apiUrl:
type: string
format: uri
description: "The URL describing the dataset's fields"
apiDocumentationUrl:
type: string
format: uri
description: A URL to the API console for each API
```
model.py
```python
# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2020-06-02T05:28:24+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class Error(BaseModel):
code: int
message: str
class Api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset's fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class Apis(BaseModel):
__root__: List[Api]
```
## Supported input types
- OpenAPI 3 (YAML/JSON, [OpenAPI Data Type](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#data-types));
- JSON Schema ([JSON Schema Core](http://json-schema.org/draft/2019-09/json-schema-core.html)/[JSON Schema Validation](http://json-schema.org/draft/2019-09/json-schema-validation.html));
- JSON/YAML/CSV Data (it will be converted to JSON Schema);
- Python dictionary (it will be converted to JSON Schema);
- GraphQL schema ([GraphQL Schemas and Types](https://graphql.org/learn/schema/));
## Supported output types
- [pydantic](https://docs.pydantic.dev/1.10/).BaseModel;
- [pydantic_v2](https://docs.pydantic.dev/2.0/).BaseModel;
- [dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html);
- [typing.TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict);
- [msgspec.Struct](https://github.com/jcrist/msgspec);
- Custom type from your [jinja2](https://jinja.palletsprojects.com/en/3.1.x/) template;
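The output type is selected with the `--output-model-type` option (documented in the command options below); for example, to emit `typing.TypedDict` instead of the default pydantic v1 models (a sketch; file names are placeholders):
```bash
$ datamodel-codegen --input api.yaml --output model.py \
    --output-model-type typing.TypedDict
```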
## Sponsors
## Projects that use datamodel-code-generator
These OSS projects use datamodel-code-generator to generate many models.
See the following linked projects for real-world examples and inspiration.
- [airbytehq/airbyte](https://github.com/airbytehq/airbyte)
- *[Generate Python, Java/Kotlin, and Typescript protocol models](https://github.com/airbytehq/airbyte-protocol/tree/main/protocol-models/bin)*
- [apache/iceberg](https://github.com/apache/iceberg)
- *[Generate Python code](https://github.com/apache/iceberg/blob/d2e1094ee0cc6239d43f63ba5114272f59d605d2/open-api/README.md?plain=1#L39)*
*[`make generate`](https://github.com/apache/iceberg/blob/d2e1094ee0cc6239d43f63ba5114272f59d605d2/open-api/Makefile#L24-L34)*
- [argoproj-labs/hera](https://github.com/argoproj-labs/hera)
- *[`Makefile`](https://github.com/argoproj-labs/hera/blob/c8cbf0c7a676de57469ca3d6aeacde7a5e84f8b7/Makefile#L53-L62)*
- [awslabs/aws-lambda-powertools-python](https://github.com/awslabs/aws-lambda-powertools-python)
- *Recommended for [advanced-use-cases](https://awslabs.github.io/aws-lambda-powertools-python/2.6.0/utilities/parser/#advanced-use-cases) in the official documentation*
- [DataDog/integrations-core](https://github.com/DataDog/integrations-core)
- *[Config models](https://github.com/DataDog/integrations-core/blob/master/docs/developer/meta/config-models.md)*
- [hashintel/hash](https://github.com/hashintel/hash)
- *[`codegen.sh`](https://github.com/hashintel/hash/blob/9762b1a1937e14f6b387677e4c7fe4a5f3d4a1e1/libs/%40local/hash-graph-client/python/scripts/codegen.sh#L21-L39)*
- [IBM/compliance-trestle](https://github.com/IBM/compliance-trestle)
- *[Building the models from the OSCAL schemas.](https://github.com/IBM/compliance-trestle/blob/develop/docs/contributing/website.md#building-the-models-from-the-oscal-schemas)*
- [Netflix/consoleme](https://github.com/Netflix/consoleme)
- *[How do I generate models from the Swagger specification?](https://github.com/Netflix/consoleme/blob/master/docs/gitbook/faq.md#how-do-i-generate-models-from-the-swagger-specification)*
- [Nike-Inc/brickflow](https://github.com/Nike-Inc/brickflow)
- *[Code generation tools](https://github.com/Nike-Inc/brickflow/blob/e3245bf638588867b831820a6675ada76b2010bf/tools/README.md?plain=1#L8): [`./tools/gen-bundle.sh`](https://github.com/Nike-Inc/brickflow/blob/e3245bf638588867b831820a6675ada76b2010bf/tools/gen-bundle.sh#L15-L22)*
- [open-metadata/OpenMetadata](https://github.com/open-metadata/OpenMetadata)
- *[Makefile](https://github.com/open-metadata/OpenMetadata/blob/main/Makefile)*
- [PostHog/posthog](https://github.com/PostHog/posthog)
- *[Generate models via `npm run`](https://github.com/PostHog/posthog/blob/e1a55b9cb38d01225224bebf8f0c1e28faa22399/package.json#L41)*
- [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer)
- *[generate-types.sh](https://github.com/SeldonIO/MLServer/blob/master/hack/generate-types.sh)*
## Installation
To install `datamodel-code-generator`:
```bash
$ pip install datamodel-code-generator
```
### `http` extra option
If you want to resolve `$ref` for remote files, you should specify the `http` extra option.
```bash
$ pip install 'datamodel-code-generator[http]'
```
### `graphql` extra option
If you want to generate data models from a GraphQL schema, you should specify the `graphql` extra option.
```bash
$ pip install 'datamodel-code-generator[graphql]'
```
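For example, to generate models from a local GraphQL schema (a sketch; `schema.graphql` is a placeholder file name):
```bash
$ datamodel-codegen --input schema.graphql --input-file-type graphql \
    --output models.py
```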
### Docker Image
The Docker image is available on [Docker Hub](https://hub.docker.com/r/koxudaxi/datamodel-code-generator)
```bash
$ docker pull koxudaxi/datamodel-code-generator
```
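A usage sketch, assuming the image's entrypoint is the `datamodel-codegen` CLI and the current directory is mounted so the container can read the input and write the output:
```bash
$ docker run --rm -v "$(pwd)":/work koxudaxi/datamodel-code-generator \
    --input /work/api.yaml --output /work/model.py
```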
## Advanced Uses
You can generate models from a URL.
```bash
$ datamodel-codegen --url https://<INPUT FILE URL> --output model.py
```
This method requires the [`http` extra option](#http-extra-option).
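If the remote spec requires authentication, the HTTP options from the command reference below can be added (a sketch; the header value is the placeholder from the built-in help):
```bash
$ datamodel-codegen --url https://<INPUT FILE URL> \
    --http-headers "Authorization: Basic dXNlcjpwYXNz" --output model.py
```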
## All Command Options
The `datamodel-codegen` command:
```bash
usage:
datamodel-codegen [options]
Generate Python data models from schema definitions or structured data
Options:
--additional-imports ADDITIONAL_IMPORTS
Custom imports for output (delimited list input). For example
"datetime.date,datetime.datetime"
--custom-formatters CUSTOM_FORMATTERS
List of modules with custom formatter (delimited list input).
--http-headers HTTP_HEADER [HTTP_HEADER ...]
Set headers in HTTP requests to the remote host. (example:
"Authorization: Basic dXNlcjpwYXNz")
--http-ignore-tls Disable verification of the remote host's TLS certificate
--http-query-parameters HTTP_QUERY_PARAMETERS [HTTP_QUERY_PARAMETERS ...]
Set query parameters in HTTP requests to the remote host. (example:
"ref=branch")
--input INPUT Input file/directory (default: stdin)
--input-file-type {auto,openapi,jsonschema,json,yaml,dict,csv,graphql}
Input file type (default: auto)
--output OUTPUT Output file (default: stdout)
--output-model-type {pydantic.BaseModel,pydantic_v2.BaseModel,dataclasses.dataclass,typing.TypedDict,msgspec.Struct}
Output model type (default: pydantic.BaseModel)
--url URL Input file URL. `--input` is ignored when `--url` is used
Typing customization:
--base-class BASE_CLASS
Base Class (default: pydantic.BaseModel)
--enum-field-as-literal {all,one}
Parse enum field as literal. all: all enum field types are Literal.
one: field type is Literal when an enum has only one possible value
--field-constraints Use field constraints and not con* annotations
--set-default-enum-member
Set enum members as default values for enum field
--strict-types {str,bytes,int,float,bool} [{str,bytes,int,float,bool} ...]
Use strict types
--use-annotated Use typing.Annotated for Field(). Also, `--field-constraints` option
will be enabled.
--use-generic-container-types
Use generic container types for type hinting (typing.Sequence,
typing.Mapping). If `--use-standard-collections` option is set, then
import from collections.abc instead of typing
--use-non-positive-negative-number-constrained-types
Use the Non{Positive,Negative}{Float,Int} types instead of the
corresponding con* constrained types.
--use-one-literal-as-default
Use one literal as default value for one literal field
--use-standard-collections
Use standard collections for type hinting (list, dict)
--use-subclass-enum Define Enum class as subclass with field type when enum has type
(int, float, bytes, str)
--use-union-operator Use | operator for Union type (PEP 604).
--use-unique-items-as-set
define field type as `set` when the field attribute has
`uniqueItems`
Field customization:
--capitalise-enum-members, --capitalize-enum-members
Capitalize field names on enum
--empty-enum-field-name EMPTY_ENUM_FIELD_NAME
Set field name when enum value is empty (default: `_`)
--field-extra-keys FIELD_EXTRA_KEYS [FIELD_EXTRA_KEYS ...]
Add extra keys to field parameters
--field-extra-keys-without-x-prefix FIELD_EXTRA_KEYS_WITHOUT_X_PREFIX [FIELD_EXTRA_KEYS_WITHOUT_X_PREFIX ...]
Add extra keys with `x-` prefix to field parameters. The extra keys
are stripped of the `x-` prefix.
--field-include-all-keys
Add all keys to field parameters
--force-optional Force optional for required fields
--no-alias Do not add a field alias. E.g., if --snake-case-field is used along
with a base class, which has an alias_generator
--original-field-name-delimiter ORIGINAL_FIELD_NAME_DELIMITER
Set delimiter to convert to snake case. This option only can be used
with --snake-case-field (default: `_` )
--remove-special-field-name-prefix
Remove field name prefix if it has a special meaning e.g.
underscores
--snake-case-field Change camel-case field name to snake-case
--special-field-name-prefix SPECIAL_FIELD_NAME_PREFIX
Set field name prefix when first character can't be used as Python
field name (default: `field`)
--strip-default-none Strip default None on fields
--union-mode {smart,left_to_right}
Union mode for only pydantic v2 field
--use-default Use default value even if a field is required
--use-default-kwarg Use `default=` instead of a positional argument for Fields that have
default values.
--use-field-description
Use schema description to populate field docstring
Model customization:
--allow-extra-fields Allow extra fields to be passed; if this flag is not set, extra fields
are forbidden.
--allow-population-by-field-name
Allow population by field name
--class-name CLASS_NAME
Set class name of root model
--collapse-root-models
Models generated with a root-type field will be merged into the
models using that root-type model
--disable-appending-item-suffix
Disable appending `Item` suffix to model name in an array
--disable-timestamp Disable timestamp on file headers
--enable-faux-immutability
Enable faux immutability
--enable-version-header
Enable package version on file headers
--keep-model-order Keep generated models' order
--keyword-only Define models as keyword only (for example
dataclass(kw_only=True)).
--output-datetime-class {datetime,AwareDatetime,NaiveDatetime}
Choose Datetime class between AwareDatetime, NaiveDatetime or
datetime. Each output model has its default mapping (for example
pydantic: datetime, dataclass: str, ...)
--reuse-model Reuse models on the field when a module has the model with the same
content
--target-python-version {3.6,3.7,3.8,3.9,3.10,3.11,3.12}
target python version (default: 3.8)
--treat-dot-as-module
treat dotted module names as modules
--use-exact-imports import exact types instead of modules, for example: "from .foo
import Bar" instead of "from . import foo" with "foo.Bar"
--use-pendulum use pendulum instead of datetime
--use-schema-description
Use schema description to populate class docstring
--use-title-as-name use titles as class names of models
Template customization:
--aliases ALIASES Alias mapping file
--custom-file-header CUSTOM_FILE_HEADER
Custom file header
--custom-file-header-path CUSTOM_FILE_HEADER_PATH
Custom file header file path
--custom-formatters-kwargs CUSTOM_FORMATTERS_KWARGS
A file with kwargs for custom formatters.
--custom-template-dir CUSTOM_TEMPLATE_DIR
Custom template directory
--encoding ENCODING The encoding of input and output (default: utf-8)
--extra-template-data EXTRA_TEMPLATE_DATA
Extra template data
--use-double-quotes Model generated with double quotes. Single quotes or your black
config skip_string_normalization value will be used without this
option.
--wrap-string-literal
Wrap string literal by using black `experimental-string-processing`
option (require black 20.8b0 or later)
OpenAPI-only options:
--openapi-scopes {schemas,paths,tags,parameters} [{schemas,paths,tags,parameters} ...]
Scopes of OpenAPI model generation (default: schemas)
--strict-nullable Treat default field as a non-nullable field (Only OpenAPI)
--use-operation-id-as-name
use operation id of OpenAPI as class names of models
--validation Deprecated: Enable validation (Only OpenAPI). This option is
deprecated. It will be removed in future releases
General options:
--debug show debug message (require "debug". `$ pip install 'datamodel-code-
generator[debug]'`)
--disable-warnings disable warnings
--no-color disable colorized output
--version show version
-h, --help show this help message and exit
```
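As a combined sketch, several of the options above can be mixed in one run, e.g. generating pydantic v2 models with snake-case fields and PEP 604 unions for Python 3.11 (file names are placeholders):
```bash
$ datamodel-codegen --input api.yaml --output model.py \
    --output-model-type pydantic_v2.BaseModel \
    --snake-case-field --use-union-operator --target-python-version 3.11
```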
## Related projects
### fastapi-code-generator
This code generator creates a [FastAPI](https://github.com/tiangolo/fastapi) app from an OpenAPI file.
[https://github.com/koxudaxi/fastapi-code-generator](https://github.com/koxudaxi/fastapi-code-generator)
### pydantic-pycharm-plugin
[A JetBrains PyCharm plugin](https://plugins.jetbrains.com/plugin/12861-pydantic) for [`pydantic`](https://github.com/samuelcolvin/pydantic).
[https://github.com/koxudaxi/pydantic-pycharm-plugin](https://github.com/koxudaxi/pydantic-pycharm-plugin)
## PyPi
[https://pypi.org/project/datamodel-code-generator](https://pypi.org/project/datamodel-code-generator)
## Contributing
See `docs/development-contributing.md` for how to get started!
## License
datamodel-code-generator is released under the MIT License. http://www.opensource.org/licenses/mit-license