Working with Models

A key aspect of the alwaysAI workflow is the flexible way models can be used in your computer vision applications. Any model within the alwaysAI system can be used in any alwaysAI application and added to any Project. This includes any model contained in the public alwaysAI model catalog, as well as any model trained using the alwaysAI model retraining toolkit. Use models side-by-side to test out functionality, swap models within a core service, or use models from different core services in the same application. This page covers the core concepts of alwaysAI models, as well as all the ways you can leverage alwaysAI models when building computer vision applications.

Core Concepts

Types of Models

alwaysAI enables you to work with models built for image classification, object detection, pose estimation (key points), and semantic segmentation. If you are new to computer vision, you can read more about these types of models in our documentation on core computer vision services.

Datasets

The models in the public alwaysAI model catalog are based on one of three datasets: ImageNet (image classification), COCO (object detection), and Pascal VOC (object detection). If you want to train your own model, you can read about dataset generation and collection guidelines in our documentation.

ImageNet Dataset

Models based on the ImageNet dataset can classify an image into 1,000 separate categories and are trained on a dataset consisting of more than 1.2 million images. A full list of ImageNet object categories can be found here.

COCO Dataset

The Common Objects in Context (COCO) dataset is a large-scale object detection dataset consisting of 330,000 images. Models in the catalog are capable of identifying between 90 and 100 unique object categories, depending on their training. More information on the COCO dataset can be found here.

Pascal VOC

Pascal Visual Object Classes (VOC) is an object detection dataset consisting of 11,530 images and covering 20 unique object classes (person, bird, cat, cow, dog, horse, sheep, aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, tv/monitor). More information on the VOC dataset can be found here.

Model Accuracy and Inference Times

Where available, accuracy information for a model is shown as mean Average Precision (mAP). Two mAP values are given, based on how often an object is correctly predicted as the first prediction (top-1) or within the top five predictions (top-5) returned by the inference engine. For full information on mAP and how it is calculated, see this Medium article.

Inference times for models measure how long it takes for the inference engine to process an image and return predictions. Inference times are given in seconds.
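To make the top-1 and top-5 distinction concrete, here is a small illustrative Python helper (not part of edgeIQ; the labels and ranking are made up) that checks whether the correct label appears within the first k ranked predictions:

def top_k_correct(ranked_labels, true_label, k):
    # ranked_labels is assumed sorted from most to least confident
    return true_label in ranked_labels[:k]

ranked = ["tabby cat", "tiger cat", "lynx", "fox", "dog"]
print(top_k_correct(ranked, "tabby cat", 1))  # top-1 hit: True
print(top_k_correct(ranked, "lynx", 1))       # top-1 miss: False
print(top_k_correct(ranked, "lynx", 5))       # top-5 hit: True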

Models within alwaysAI

There are many options for models in alwaysAI: in addition to the large number of models available in the alwaysAI public model catalog, you can upload your own pre-trained custom model, or train a custom model using the alwaysAI model retraining toolkit. Once inside the alwaysAI system, any model can be used virtually interchangeably using the alwaysAI API, edgeIQ.

The alwaysAI Public Model Catalog

The alwaysAI model catalog provides a set of pre-trained machine learning models that, when combined with the alwaysAI edgeIQ APIs, allow developers to quickly prototype a computer vision application without the need to first create and train a custom model.

Uploading Models

In order to use your own custom model that you have trained or obtained outside of alwaysAI, you need to upload the model to your private model catalog.

Note: We currently do not support the h5 format.

Place Your Model in a Folder

To prepare your model for upload, place the required files into a single directory. The specific files required vary depending on your model's framework and architecture. A non-exhaustive example list is shown in the table below: generally you will need a model file, usually a label file and a configuration file, and, for a segmentation model, a colors file. The file extensions may also vary based on the method you used to train, so the extensions shown in the table are a guideline. The important aspects are that you specify:

  • A model file, which contains the weights

  • A config file, which contains the model structure

  • A label file, which is a text file containing labels, one on each line

  • A colors file, which is a text file containing RGB or BGR values, one per line, in the following format: <x>,<y>,<z> (example label and colors files are shown after this list).
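For example, the label and colors files for a hypothetical three-class segmentation model might look like this (class names and color values are purely illustrative):

labels.txt:

background
person
car

colors.txt:

0,0,0
255,0,0
0,0,255

Each line of the colors file gives the color used for the class on the corresponding line of the label file.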

| Framework  | Model File   | Configuration File | Label File      | Colors File |
|------------|--------------|--------------------|-----------------|-------------|
| Caffe      | .caffemodel  | .prototxt          | .txt            | .txt        |
| darknet    | .weights     | .cfg               | .names          | None        |
| dldt       | .bin         | .xml               | .txt            | None        |
| enet       | .net         | None               | .txt            | .txt        |
| myriad     | .blob        | None               | .txt            | None        |
| onnx       | .onnx        | None               | .txt            | .txt        |
| tensor-rt  | .trt or .bin | None               | .txt or .names  | None        |
| tensorflow | .pb          | .pbtxt             | .txt            | None        |
| hailo      | .hef         | None               | .txt or .names  | None        |

Generate Model JSON File

In your terminal, navigate to the directory that contains your model files. From within this directory, run the aai model configure command. The command takes several flags, some required and some optional; these are described in the next section. The result of the command is a .json file for your model that will allow you to upload it to your private model catalog.

Model Configuration Flags

| Flag | Required For | Value(s) | Description |
|------|--------------|----------|-------------|
| --framework | All | Allowed values: tensorflow, caffe, dldt, enet, darknet, onnx, tensor-rt, myriad, hailo | Framework of the model |
| --model_file | | String | Path to the model binary file |
| --mean | | Number | The average pixel intensity in the red, green, and blue channels of the training dataset |
| --scalefactor | | Number | Factor to scale pixel intensities by |
| --size | | [Number, Number] | The input size of the neural network |
| --purpose | All | Allowed values: Classification, ObjectDetection, PoseEstimation, SemanticSegmentation | Computer vision purpose of the model |
| --crop | | Boolean | Crop before resize |
| --config_file | | String | Path to the model structure |
| --label_file | | String | Path to the file containing labels for each class index |
| --colors_file | | String | Path to the file containing colors to be used by each class index |
| --swaprb | | Boolean | Swap red and blue channels after image blob generation |
| --softmax | | Boolean | Apply softmax to the output of the neural network |
| --output_layer_names | | String | List of output layers provided in advance |
| --device | tensor-rt | Allowed values: nano, xavier-nx, agx-xavier | Define the device on which the model is intended to be used |
| --architecture | tensor-rt, hailo | Allowed values for tensor-rt: yolo, mobilenet_ssd; allowed values for hailo: yolov3, mobilenet_ssd | Define the architecture type intended to be used |
| --quantize_input | | Boolean | Quantize the input |
| --quantize_output | | Boolean | Quantize the output |
| --input_format | | String | Input format |
| --output_format | | String | Output format |

Example Model Configuration Commands

YOLO Object Detection Model

$ aai model configure --framework darknet --purpose ObjectDetection --model_file yolo.weights --config_file yolo.cfg --label_file yolo.names

Tensor-RT Object Detection Model

$ aai model configure --framework tensor-rt --model_file newModel.bin --label_file newModel.names --purpose ObjectDetection --architecture yolo --device nano

Hailo Object Detection Model

$ aai model configure --framework hailo --model_file newHailoModel.bin --label_file newHailoModel.names --purpose ObjectDetection --architecture yolov3 --quantize_input false --quantize_output false --input_format auto --output_format none

Model JSON

Once you have run the aai model configure command, a file named alwaysai.model.json will be generated in the current directory. It will contain all the fields you need, but not all of them will have values. Any values you entered as flags will be pre-filled, and most other required fields will be filled with default values. The one exception is the “id” field.

After generating alwaysai.model.json, you must open it and fill out the “id” field. The id of your model should follow the pattern username/modelname. You can find your username by going to the Profile page from the account menu, accessed by clicking the arrow next to your email address in the top right corner of the page. When entering your id, also confirm the values of the fields that were generated by the aai model configure command.

Due to the wide range of model architectures, frameworks, and configurations, it is difficult to state simply what is required and what is not. However, if you use aai model configure, ensure all the files associated with your model are included, and then update the id, your configuration will most likely be correct. The other fields that may need to be changed for your model are size, scalefactor, and swaprb.

| Field | Required For | Default Value | Description |
|-------|--------------|---------------|-------------|
| accuracy | | | Model accuracy |
| dataset | | | Dataset the model is trained on |
| id | All | Must be entered | <username>/<modelname> |
| inference_time | | 0 | Inference time |
| license | | | License for your model |
| mean_average_precision_top_1 | | 0 | mAP |
| mean_average_precision_top_5 | | 0 | mAP |
| public | All | False | Whether your model is available to everyone |
| website_url | | | Website for your model |
| framework_type | All | Must be entered | Framework for the model |
| model_file | All | Must be entered | Path to the model binary file |
| mean | All | [0,0,0] | The average intensity of the red, green, and blue channels of the training dataset |
| scalefactor | All | 1 | Factor to scale pixel intensity by |
| size | All | Default for tensorflow: [300, 300]; default for yolo: [320, 320] | The input size of the neural network |
| purpose | All | Must be entered | Computer vision purpose of the model |
| crop | | False | Whether to crop before resizing |
| config_file | Sometimes | | Path to the model structure |
| label_file | Object Detection | | File containing labels for each class index |
| colors_file | Sometimes | | File containing colors for each class index |
| swaprb | | False | Swap red and blue channels after image blob generation |
| softmax | | False | Apply softmax to the output of the neural network |
| output_layer_names | | | List of output layers provided in advance |
| device | tensor-rt | Must be entered | Define the device on which the model is intended to be used |
| architecture | tensor-rt, hailo | Must be entered | Define the architecture type intended to be used |
| quantize_input | hailo | True | Quantize the input |
| quantize_output | hailo | True | Quantize the output |
| input_format | hailo | | Input format |
| output_format | hailo | | Output format |
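As an illustration, a filled-out alwaysai.model.json for the darknet object detection model configured earlier might look like the following. The id and file names are hypothetical, the other values are the documented defaults, and your generated file may contain additional fields:

{
    "id": "yourUsername/yolo",
    "framework_type": "darknet",
    "purpose": "ObjectDetection",
    "model_file": "yolo.weights",
    "config_file": "yolo.cfg",
    "label_file": "yolo.names",
    "mean": [0, 0, 0],
    "scalefactor": 1,
    "size": [320, 320],
    "crop": false,
    "swaprb": false,
    "softmax": false,
    "public": false
}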

Publishing Your Model

Once your JSON file is created and filled out, simply run the following command from the model directory:

$ aai model publish

You will receive a confirmation in the terminal with your model id and version. If you make any changes to your model or its configuration, run the command again; as long as the model has the same id, a new version of the same model will be published.

Training Your Own Model

If you can’t find a suitable model in the alwaysAI model catalog, you can also train your own model using the retraining toolkit and use it in the same manner that you would any other model in the alwaysAI platform.

Using Models

All models in the alwaysAI system can be used in any application, added to any Project, and shared with any alwaysAI Collaborator, in exactly the same way. Both the Dashboard and the CLI enable you to add models to or remove models from a Project, including models that you are training locally and that haven't yet been published to the alwaysAI cloud. The alwaysAI API, edgeIQ, has standardized interfaces for working with each model type in each of the core services.

Adding Models to Your Application

We have detailed documentation on adding models to your application and Project via the Dashboard. Any time you make a change to models for a Project in your Dashboard, you can sync the changes locally using the command:

$ aai app models sync

You can also use the CLI locally to add models to your application, as shown below.

Navigate to your app directory and run the following command in the terminal:

$ aai app models add <username>/<modelName>

In this command, username is your alwaysAI username and modelName is the name you gave the model when you uploaded it. Here's an example:

$ aai app models add alwaysai/MyFirstModel

In addition to adding the model to your Project, you must update the class instantiation in your application source (app.py) to use the new model. If you’re using an ObjectDetection model, your code might look like this:

obj_detect = ObjectDetection("alwaysai/MyFirstModel")
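For context, here is a minimal sketch of loading that model and running inference on a single image, assuming the edgeIQ APIs used in the alwaysAI starter applications (the image path, engine choice, and confidence threshold are illustrative, and alwaysai/MyFirstModel is a hypothetical model id):

import cv2
import edgeiq

# Instantiate the detector with the model id added to the Project
obj_detect = edgeiq.ObjectDetection("alwaysai/MyFirstModel")
# Load the model into an inference engine
obj_detect.load(engine=edgeiq.Engine.DNN)

image = cv2.imread("input.jpg")  # hypothetical input image
results = obj_detect.detect_objects(image, confidence_level=0.5)

# results.duration reports the inference time in seconds
print("Inference time: {:.4f} s".format(results.duration))
for prediction in results.predictions:
    print("{}: {:.2f}%".format(prediction.label, prediction.confidence * 100))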

Note: For convenience, the specific CLI commands and Python code needed to add a given model to an application are provided on the model details page of every model in the alwaysAI public model catalog.

Display Models

At any time, you can check which models have been added to a Project on the Dashboard by looking under the ‘Models’ tab for a specific Project, or by running the following command from within a project directory:

$ aai app models show

Install a Model Locally

To install models locally on your development machine or onto a remote device, use the command:

$ aai app install

This will pull the necessary model files down to your local machine, storing them in the models folder in the local Project directory. Run this command after making any model changes to your application and before running aai app start.

(Optional) Removing Models From Your Application

If you have a previous model in your app, you can delete it from the Project using your Dashboard. Click the three dots next to any model listed under ‘Models’ in a specific Project, and select ‘Delete’ to remove the model from the Project.

You can also remove it from the Project from the CLI using the command:

$ aai app models remove <username>/<modelName>

For instance, to remove the model alwaysai/mobilenet_ssd, you would run:

$ aai app models remove alwaysai/mobilenet_ssd

Related Tutorials