Model Config¶
- class ModelConfig(model_json, base_dir=None, labels=None, colors=None)¶
The model configuration parameters.
- Parameters
    - model_json (dict) – The parsed alwaysai.model.json.
    - labels (Optional[List[str]]) – The label list for the model.
    - colors (Optional[ndarray]) – The color list for the model.
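For orientation, the model_json argument is just the parsed contents of the model's alwaysai.model.json file. A minimal sketch of loading and inspecting such a dict — the key names and values below are illustrative assumptions, not the documented schema:

```python
import json

# A hypothetical alwaysai.model.json payload. The key names ("id",
# "mean", "scalefactor", "size") are assumptions for illustration
# only; consult your model's actual alwaysai.model.json.
model_json_text = """
{
    "id": "alwaysai/mobilenet_ssd",
    "mean": [127.5, 127.5, 127.5],
    "scalefactor": 0.00784,
    "size": [300, 300]
}
"""

# The ModelConfig constructor expects the parsed dict, not the raw text.
model_json = json.loads(model_json_text)
print(model_json["id"])    # the model ID
print(model_json["size"])  # the expected input size
```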
- classmethod from_model_id(model_id)¶
Load the model configuration for the given model ID.
- Return type
ModelConfig
- property config: dict¶
The config loaded from the model JSON file
- Return type
dict
- property model_parameters: dict¶
The model parameters in the config
- Return type
dict
- property id: str¶
The model ID
- Return type
str
- property label_file: Optional[str]¶
Path to the label file
- Return type
Optional[str]
- property colors_file: Optional[str]¶
Path to the colors file
- Return type
Optional[str]
- property model_file: str¶
Path to the model weights file
- Return type
str
- property config_file: Optional[str]¶
Relative path to the model framework config file
- Return type
Optional[str]
- property mean: Tuple[float, float, float]¶
The RGB/BGR mean values for the model
- Return type
Tuple[float, float, float]
- property scalefactor: float¶
The scale factor for the model input
- Return type
float
- property size: Tuple[int, int]¶
The input image size of the model
- Return type
Tuple[int, int]
- property purpose: SupportedPurposes¶
The purpose of the model
- Return type
SupportedPurposes
- property framework_type: str¶
The framework type of the model
- Return type
str
- property crop: bool¶
Whether to crop the image prior to inference
- Return type
bool
- property colors_dtype: str¶
The data type of the color values
- Return type
str
- property labels: Optional[List[str]]¶
The labels of the model
- Return type
Optional[List[str]]
- property colors: Optional[ndarray]¶
The colors for each label of the model.
Each array element is a 3-element array of 8-bit integers representing red, green, and blue values
- Return type
Optional[ndarray]
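As a sketch of how the labels and colors properties pair up, the stand-in arrays below play the roles of config.labels and config.colors (the specific labels and color values are assumptions for illustration):

```python
import numpy as np

# Stand-ins for config.labels (Optional[List[str]]) and
# config.colors (Optional[ndarray] of 8-bit RGB triples);
# the actual values come from the model's label and colors files.
labels = ["person", "car", "dog"]
colors = np.array([[255, 0, 0],
                   [0, 255, 0],
                   [0, 0, 255]], dtype=np.uint8)

def color_for(label):
    """Look up the RGB triple for a label by its index in the label list."""
    return colors[labels.index(label)]

print(color_for("car"))  # the RGB triple for "car"
```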
- property swaprb: bool¶
Whether to swap the red and blue channels of the image prior to inference
- Return type
bool
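Taken together, mean, scalefactor, size, swaprb, and crop describe a conventional preprocessing pipeline. A rough numpy sketch of how such values are typically applied — the concrete numbers are assumptions, and a real pipeline would do the resize/crop step with an image library:

```python
import numpy as np

# Assumed preprocessing parameters, standing in for the ModelConfig
# properties of the same names.
mean = (127.5, 127.5, 127.5)   # config.mean
scalefactor = 1.0 / 127.5      # config.scalefactor
size = (4, 4)                  # config.size (tiny for illustration)
swaprb = True                  # config.swaprb

# A dummy BGR image already resized to `size` (resize/crop omitted here).
image = np.full((size[1], size[0], 3), 255, dtype=np.uint8)

blob = image.astype(np.float32)
if swaprb:
    blob = blob[..., ::-1]     # swap the red and blue channels
blob = (blob - np.array(mean, dtype=np.float32)) * scalefactor

print(blob.shape)  # (4, 4, 3)
print(blob.max())  # all-255 input maps to 1.0 after mean/scale
```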
- property architecture: Optional[str]¶
The architecture of the model
- Return type
Optional[str]
- property softmax: bool¶
Whether to perform softmax after the inference
- Return type
bool
- property device: Optional[SupportedDevices]¶
The device the model was built for
- Return type
Optional[SupportedDevices]
- property output_layer_names: Optional[List[str]]¶
The output layer names of the model
- Return type
Optional[List[str]]
- property hailo_quantize_input: Optional[bool]¶
Whether to quantize the input of a Hailo model
- Return type
Optional[bool]
- property hailo_quantize_output: Optional[bool]¶
Whether to quantize the output of a Hailo model
- Return type
Optional[bool]
- property hailo_input_format: Optional[str]¶
The input format for a Hailo model
- Return type
Optional[str]
- property hailo_output_format: Optional[str]¶
The output format of a Hailo model
- Return type
Optional[str]
- property dnn_support: bool¶
Whether DNN Engine supports the model
- Return type
bool
- property dnn_cuda_support: bool¶
Whether DNN CUDA Engine supports the model
- Return type
bool
- property tensor_rt_support: bool¶
Whether TensorRT Engine supports the model
- Return type
bool
- property hailo_support: bool¶
Whether Hailo RT Engine supports the model
- Return type
bool
- property qaic_support: bool¶
Whether QAIC RT Engine supports the model
- Return type
bool
- property onnx_rt_support: bool¶
Whether ONNX RT Engine supports the model
- Return type
bool
- property pytorch_support: bool¶
Whether PyTorch Engine supports the model
- Return type
bool
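The *_support flags can be used to choose an inference engine at runtime. A hypothetical selection helper — the preference order below is an assumption for illustration, not alwaysAI's actual policy:

```python
def pick_engine(config_flags):
    """Return the first supported engine from an assumed preference order.

    `config_flags` stands in for the ModelConfig *_support properties,
    here as a plain dict of booleans.
    """
    # Assumed order: hardware accelerators first, generic engines last.
    preference = [
        ("tensor_rt", "TensorRT"),
        ("hailo", "Hailo RT"),
        ("qaic", "QAIC RT"),
        ("dnn_cuda", "DNN CUDA"),
        ("onnx_rt", "ONNX RT"),
        ("pytorch", "PyTorch"),
        ("dnn", "DNN"),
    ]
    for key, name in preference:
        if config_flags.get(f"{key}_support"):
            return name
    raise RuntimeError("no supported engine for this model")

flags = {"dnn_support": True, "onnx_rt_support": True}
print(pick_engine(flags))  # ONNX RT
```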
- property batch_size: int¶
The inference batch size of the model
- Return type
int
- property hub_repo: Optional[str]¶
The Torch Hub repository
- Return type
Optional[str]
- property hub_model: Optional[str]¶
The Torch Hub model name
- Return type
Optional[str]
- property hub_pretrained: Optional[bool]¶
Whether to load a pretrained Torch Hub model
- Return type
Optional[bool]
- property hub_force_reload: Optional[bool]¶
Whether to force reload the Torch Hub model
- Return type
Optional[bool]
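The four hub_* properties map naturally onto a torch.hub.load call (a real PyTorch API, where the repository and model name are positional and force_reload is a keyword argument). A sketch of assembling that call without importing torch — treating pretrained as a keyword forwarded to the model entrypoint is an assumption that depends on the specific hub model:

```python
def hub_load_args(hub_repo, hub_model, hub_pretrained=None, hub_force_reload=None):
    """Assemble positional args and kwargs for a torch.hub.load call.

    The parameters stand in for ModelConfig.hub_repo, .hub_model,
    .hub_pretrained, and .hub_force_reload; None values (the Optional
    defaults) are simply omitted from the call.
    """
    args = (hub_repo, hub_model)
    kwargs = {}
    if hub_pretrained is not None:
        kwargs["pretrained"] = hub_pretrained   # forwarded to the model entrypoint
    if hub_force_reload is not None:
        kwargs["force_reload"] = hub_force_reload
    return args, kwargs

args, kwargs = hub_load_args("pytorch/vision", "resnet18", hub_pretrained=True)
print(args)    # ('pytorch/vision', 'resnet18')
print(kwargs)  # {'pretrained': True}
# torch.hub.load(*args, **kwargs) would then fetch and build the model.
```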