Tools¶
Performance¶
class FPS¶
Monitor the frames per second (FPS) processed by the application for performance tracking.
Typical usage:

    fps = edgeiq.FPS().start()
    while True:
        <main processing loop>
        fps.update()

    # Get the elapsed time and FPS
    fps.stop()
    print("Elapsed seconds: {}".format(fps.get_elapsed_seconds()))
    print("FPS: {}".format(fps.compute_fps()))

compute_fps() may also be called in the main processing loop to compute an instantaneous estimate of the FPS.
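For an in-loop readout, compute_fps() can be combined with update(), as in this minimal sketch (the bounded loop and the processing step are placeholders):

    import edgeiq

    fps = edgeiq.FPS().start()
    for _ in range(100):
        ...  # placeholder processing step
        fps.update()
        print("Instantaneous FPS: {:.1f}".format(fps.compute_fps()))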
start()¶ Start tracking FPS.

stop()¶ Stop tracking FPS.

update()¶ Increment the total number of frames examined during the start and end intervals.
- Raises: RuntimeError

get_elapsed_seconds()¶ Return the total number of seconds between the start and end intervals.
- Returns: float – The elapsed time in seconds between start and end, or since start if stop() has not been called.

compute_fps()¶ Compute the (approximate) frames per second.
- Returns: float – The approximate frames per second.
class TimingProfiler¶
Time segments of processing and generate reports.
To time segments of a function:

    def func():
        prof = edgeiq.TimingProfiler()
        for i in range(10):
            prof.mark_start('start-loop')
            ...
            prof.mark('finish-block-1')
            ...
            prof.mark('finish-block-2')
            ...
            prof.mark_end('end-loop')
            print(json.dumps(prof.durations, indent=2))
        print(json.dumps(prof.generate_report(), indent=2))
property durations¶ The durations of the last completed iteration.
The durations object has the form:

    [
        {
            "segment": "<prev_mark>-><cur_mark>",
            "duration_s": <duration in seconds>
        },
        ...
    ]

- Return type: List[dict]
mark_start(tag)¶ Mark the start of a timing profiling iteration.
This will reset the iteration memory and will complete when mark_end() is called.
- Parameters: tag (str) – The tag to identify this event. Should be the same as other start events you’d like to compare this to.

mark(tag)¶ Mark a timing profiling event.
This must be called after mark_start() and will generate a timing event which compares to the previous event.
- Parameters: tag (str) – The tag to identify this event. Should be the same as other start events you’d like to compare this to.

mark_end(tag)¶ Mark the end of a timing profiling iteration.
This clears the last durations attribute and generates new durations based on the marks for the last iteration.
- Parameters: tag (str) – The tag to identify this event. Should be the same as other end events you’d like to compare this to.
generate_report()¶ Generate a report from the collected timing data.
The report has the form:

    {
        "<segment 0>": {
            "mean": <mean>,
            "std_dev": <standard deviation>,
            "max": <max value>,
            "min": <min value>
        },
        ...
    }

- Return type: Dict[str, Dict[str, float]]
parse_cvat_annotations(path, start_frame=0, end_frame=None, new_id_for_occlusion=False)¶ Parse a CVAT annotations file to edgeIQ predictions.
- Parameters:
  path (str) – The path of CVAT dumped ground truth annotations (.xml)
  start_frame (int) – Starting frame
  end_frame (Optional[int]) – Ending frame (default: None -> parses all available frames)
  new_id_for_occlusion (bool) – Associates a new object_id to predictions if occluded in the previous frame
- Return type: Tuple[Dict[int, ObjectDetectionPrediction], Dict[int, Dict[str, List[int]]]]
- Returns:
  dict – Frame-by-frame data {frame_num: list of ObjectDetectionPrediction}
  dict – Object-by-object data {object: {‘tracked_frames’: list of frames in which the object was tracked, ‘occluded_frames’: list of frames in which the object was occluded}}
- Raises: FileNotFoundError if the file doesn’t exist.
- Raises: ValueError if the start frame is greater than the end frame.
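A usage sketch (the annotations path is a placeholder; the tuple unpacking follows the return type above):

    import edgeiq

    frame_data, object_data = edgeiq.parse_cvat_annotations(
        path='annotations.xml', start_frame=0)
    print('Parsed {} frames'.format(len(frame_data)))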
parse_coco_annotations(path)¶ Parse COCO annotations and convert to ObjectDetectionResults.
- Parameters: path (str) – The path to the COCO annotations file.
- Return type: List[ObjectDetectionResults]
- Returns: Frame-by-frame ObjectDetectionResults.
- Raises: FileNotFoundError if the file doesn’t exist.
parse_mot_annotations(path, labels)¶ Parse MOT annotations and convert to TrackingResults.
An entry in the list will be provided for every frame, even if MOT results do not exist for those frames.
- Parameters:
  path (str) – The path to the MOT annotations file.
  labels (List[str]) – A list of labels for the class IDs.
- Return type: List[TrackingResults[TrackablePrediction[ObjectDetectionPrediction]]]
- Returns: Frame-by-frame TrackingResults.
- Raises: FileNotFoundError if the file doesn’t exist.
class ModelPerformanceAnalyzer(ground_truth_path, start_frame=0, end_frame=None)¶
Get statistics to analyze the performance of models.
Typical usage:

    analyzer = edgeiq.ModelPerformanceAnalyzer(ground_truth_path='annotations.xml')
    model_results = edgeiq.load_analytics_results('logs/analytics.txt')
    analyzer.set_results(model_results)
    analyzer.write_analysis_output(
        output_dir='output', filename_suffix='1', iou_threshold=0.3)

- Parameters:
  ground_truth_path (string) – The path of the CVAT dumped (for videos) ground truth annotations file (.xml)
  start_frame (int) – Starting frame
  end_frame (int) – Ending frame
set_results(results)¶ Set the model detections list to analyze.
- Parameters: results (list of ObjectDetectionResults) – The list of ObjectDetectionResults to use for analysis.
get_detections_per_object(iou_threshold=0.01)¶ Get the percentage of detections per object.
- Parameters: iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
- Returns: dict
get_iou_distribution(iou_threshold=0.01, bins=10)¶ Get the distribution of IOU.
- Parameters:
  iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
  bins (int) – Number of bins to distribute into
- Returns: tuple – (Array of counts, Array of bin floors)

get_overlap_distribution(iou_threshold=0.01, bins=10)¶ Get the distribution of overlap.
- Parameters:
  iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
  bins (int) – Number of bins to distribute into
- Returns: tuple – (Array of counts, Array of bin floors)
get_missed_detections(iou_threshold=0.01)¶ Get missed detections per object.
- Parameters: iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
- Returns: dict – {ground_truth ID: list of missed frames}
get_class_based_stats(iou_threshold=0.01)¶ Get per-class statistics such as the number of true positives, false positives, and false negatives, and the precision and recall for the class.
- Parameters: iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
- Returns: dict – {class_name: {num_gt: , num_detections: , TP: , FP: , FN: , precision: , recall: }}
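A sketch of reading the per-class stats (assumes an analyzer configured as in the class-level example above; the key names follow the documented return form):

    stats = analyzer.get_class_based_stats(iou_threshold=0.3)
    for class_name, s in stats.items():
        print('{}: precision={:.2f} recall={:.2f}'.format(
            class_name, s['precision'], s['recall']))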
write_analysis_output(output_dir='output_data', filename_suffix='', iou_threshold=0.01)¶ Compute all available stats and write the data to CSV files.
  class_stats.csv -> Per-class values for true positives, false positives, false negatives, precision, and recall
  object_stats.csv -> Per-object values for ground truth, detections, and % correct detections
  distribution_stats.csv -> IOU and overlap distributions of true positives
- Parameters:
  filename_suffix (string) – Suffix to be added at the end of the generated filenames
  iou_threshold (float) – Minimum IOU required to associate a model detection with ground truth
class TrackerPerformanceAnalyzer(ground_truth, max_distance)¶
Analyze tracker performance against ground truth annotations.
TrackerPerformanceAnalyzer compares tracker results to ground truth annotations and collects data for two of the main performance flaws of tracking: ID changes and ID swaps.
ID changes occur when the tracker assigns a new tracker ID to an existing object. This can happen for a number of reasons:
- Object became occluded
- Object was lost and found by the tracker (tracker parameters are too tight)
- Object ID was swapped with another object (this is examined more closely in ID swap analysis)
The impact of ID changes on the performance of your app depends on the scenario, but a typical result is logging more unique objects than there actually were. ID changes can often be overcome by higher-layer analysis to associate objects that were occluded or lost.
ID swaps occur when an object is assigned a tracker ID that was previously assigned to another object. This can be very hard to recover from, since it is hard to detect in real-life use cases. The most common causes are:
- Objects occlude each other
- Tracker parameters are too loose
The impact of ID swaps on the performance of your app is that metrics from multiple objects are combined into a single object.
Typical usage:

    gt_res = edgeiq.parse_mot_annotations(path=gt_path, labels=LABELS)
    actual_res = edgeiq.parse_mot_annotations(path=results_path, labels=LABELS)
    tpa = edgeiq.TrackerPerformanceAnalyzer(
        ground_truth=gt_res, max_distance=100)
    for frame_idx in range(start_frame, end_frame - 1):
        frame = load_annotation_frame(frame_idx)
        frame = tpa.markup_image(frame_idx, frame, (255, 100, 0))
        # Get predictions and tracker results
        tpa.update(frame_idx, tracked_objects)
    id_changes, id_swaps = tpa.generate_report()
    id_changes.write_to_file(output_dir)
    id_swaps.write_to_file(output_dir)
- Parameters:
  ground_truth – The ground truth tracking results, e.g. the output of parse_mot_annotations().
  max_distance (int) – The max distance to be used for matching tracked objects with annotations.
update(frame_idx, results)¶ Match a new set of tracker results with the ground truth results from annotations.
- Parameters:
  frame_idx (int) – The frame index to read annotations from.
  results (TrackingResults[TrackablePrediction[ObjectDetectionPrediction]]) – The output of an object tracker.
markup_image(frame_idx, frame, color)¶ Draw the boxes, centers, and matching radius of ground truth predictions on the frame.
- Parameters:
  frame_idx (int) – The frame index to read annotations from.
  frame (ndarray) – The image to draw on.
  color (Tuple[int, int, int]) – The color to draw the center and radius.
- Return type: ndarray
generate_report()¶ Evaluate the tracker data and generate the report.
- Return type: Tuple[IdChangeReport, IdSwapReport]
class IdChangeReport(num_objects_with_id_changes, objects_with_id_changes, total_id_changes, id_change_events_by_frame, id_changes_by_ground_truth_id)¶

num_objects_with_id_changes: int¶

objects_with_id_changes: List[int]¶

total_id_changes: int¶

id_change_events_by_frame: Dict[int, List[edgeiq.performance_analysis.tracker_performance_analysis.IdChangeEvent]]¶

id_changes_by_ground_truth_id: Dict[int, edgeiq.performance_analysis.tracker_performance_analysis.ObjectData]¶

write_to_file(output_dir)¶ Write the report to file.
This function will save ‘id_change_report.json’ to the provided directory.
- Parameters: output_dir (str) – The directory to save the report to.
class IdSwapReport(num_objects_with_id_swaps, id_swaps_by_ground_truth_id, total_object_swaps, id_swap_events_by_frame)¶

num_objects_with_id_swaps: int¶

id_swaps_by_ground_truth_id: Dict[int, List[int]]¶

total_object_swaps: int¶

id_swap_events_by_frame: Dict[int, List[edgeiq.performance_analysis.tracker_performance_analysis.IdSwapEvent]]¶

write_to_file(output_dir)¶ Write the report to file.
This function will save ‘id_swap_report.json’ to the provided directory.
- Parameters: output_dir (str) – The directory to save the report to.
Image Manipulation¶
translate(image, x, y)¶ Translate an image on the X and/or Y axis.
- Parameters:
  image (ndarray) – The image to manipulate.
  x (int) – Translate the image on the X axis by this amount.
  y (int) – Translate the image on the Y axis by this amount.
- Return type: ndarray
- Returns: The translated image.
rotate(image, angle)¶ Rotate an image by a specified angle.
- Parameters:
  image (ndarray) – The image to manipulate.
  angle (int) – The angle to rotate the image by (degrees).
- Return type: ndarray
- Returns: The rotated image.
resize(image, width=None, height=None, keep_scale=True, inter=3)¶ Resize an image to a specified height and width.
When both a width and height are given and keep_scale is True, these are treated as the maximum width and height.
- Parameters:
  image (ndarray) – The image to manipulate.
  width (Optional[int]) – The new width of the image.
  height (Optional[int]) – The new height of the image.
  keep_scale (bool) – Maintain the original scale of the image.
  inter (int) – The interpolation method (one of OpenCV’s InterpolationFlags).
- Return type: ndarray
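A combined sketch of the geometric helpers (a blank canvas stands in for a real frame):

    import numpy as np
    import edgeiq

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in BGR frame
    shifted = edgeiq.translate(image, x=25, y=-10)
    rotated = edgeiq.rotate(shifted, angle=90)
    # width/height act as maximums since keep_scale=True
    fitted = edgeiq.resize(rotated, width=320, height=240, keep_scale=True)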
convert_to_jpg(image, jpg_quality)¶ Convert the given image to a JPEG represented in bytes.
When on a Jetson device, this will use the nvjpg hardware accelerator.
- Parameters:
  image (ndarray) – The image to convert to JPEG.
  jpg_quality (int) – The quality setting for the JPEG conversion, in the range (0, 100).
- Return type: bytes
- Returns: The bytes of the encoded image.
list_images(base_path, contains=None)¶ List all images in the specified path.
Finds images with the following extensions:
- .jpg
- .jpeg
- .png
- .bmp
- .tif
- .tiff
- Parameters:
  base_path (str) – The base path of the folder where the images are located.
  contains (Optional[str]) – Select only filenames that contain this string.
- Return type: List[str]
- Returns: The valid image file paths.
list_files(base_path, valid_exts, contains=None)¶ List all files in the specified path.
- Parameters:
  base_path (str) – The base path of the folder where the files are located.
  valid_exts (List[str]) – The list of valid extensions to filter for.
  contains (Optional[str]) – Select only filenames that contain this string.
- Return type: List[str]
- Returns: The valid file paths.
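A usage sketch for the listing helpers (the directory names are placeholders):

    import edgeiq

    image_paths = edgeiq.list_images('images/', contains='cam1')
    config_paths = edgeiq.list_files('configs/', valid_exts=['.json', '.yaml'])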
safe_hstack(frames, pad=False)¶ Horizontally stack images from left to right. If pad is not specified, images are resized while maintaining the aspect ratio.
- Parameters:
  frames (List[ndarray]) – The array of images to concatenate.
  pad (bool) – Pad the frames; padding is added to the bottom of the image if the heights are not equivalent.
- Returns: The image formed by horizontally stacking the given array of images.

safe_vstack(frames, pad=False)¶ Vertically stack images from top to bottom. If pad is not specified, images are resized while maintaining the aspect ratio.
- Parameters:
  frames (List[ndarray]) – The array of images to concatenate.
  pad (bool) – Pad the frames; padding is added to the right of the image if the widths are not equivalent.
- Returns: The image formed by vertically stacking the given array of images.
pad_to_aspect_ratio(image, a_ratio)¶ Pad an image to a certain aspect ratio.
Padding is added to the bottom and right of the image.
- Parameters:
  image (ndarray) – The image to pad.
  a_ratio (float) – The aspect ratio to pad to.
- Return type: Tuple[ndarray, int, int]
- Returns: The padded image, the padding in x, and the padding in y.
cutout_image(image, box)¶ Cut out the portion of an image outlined by a bounding box.
- Parameters:
  image (ndarray) – The image to cut out from.
  box (BoundingBox) – The bounding box outlining the section of the image to cut out.
- Return type: ndarray
- Returns: The segment of the image outlined by the bounding box. It will be independent from the original image.
blend_images(foreground_image, background_image, alpha)¶ Blend a foreground image with a background image. The foreground and background images must have the same dimensions and the same color format (RGB/BGR).
- Parameters:
  foreground_image (ndarray) – The image to be scaled by alpha in the blend.
  background_image (ndarray) – The image to be scaled by 1 - alpha in the blend.
  alpha (float) – The ratio of foreground to background image in the blend, in the range [0.0, 1.0].
- Return type: ndarray
- Returns: The blended image.
overlay_image(foreground_image, background_image, foreground_mask)¶ Overlay a foreground image on a background image according to the foreground mask.
This function will mask both the foreground and background images, then combine them into the output image.
- Parameters:
  foreground_image (ndarray) – The image to be overlaid on the background.
  background_image (ndarray) – The image for the foreground to be overlaid on.
  foreground_mask (ndarray) – A mask with white indicating foreground and black indicating background. Shades in between will blend the foreground and background accordingly.
- Return type: ndarray
- Returns: The overlaid image.
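A sketch of blending and masked overlay (blank canvases stand in for real images; the 2D mask shape is an assumption):

    import numpy as np
    import edgeiq

    fg = np.full((480, 640, 3), 255, dtype=np.uint8)  # white stand-in foreground
    bg = np.zeros((480, 640, 3), dtype=np.uint8)      # black stand-in background
    faded = edgeiq.blend_images(fg, bg, alpha=0.7)

    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[100:300, 100:300] = 255  # white region keeps the foreground
    combined = edgeiq.overlay_image(fg, bg, mask)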
perform_histogram_equalization(image, color_space='GS', adaptive=False, clip_limit=2.0, tile_grid_size=(8, 8))¶ Perform histogram equalization on the input image and return the equalized image.
Histogram equalization is a basic image processing technique that adjusts the global contrast of an image by updating the image histogram’s pixel intensity distribution. Doing so enables areas of low contrast to obtain higher contrast in the output image. This function includes implementations of both basic and adaptive histogram equalization. Basic histogram equalization spreads pixels to intensity “buckets” that don’t have as many pixels binned to them; mathematically, this means the function applies a linear trend to the image’s cumulative distribution function (CDF). Adaptive histogram equalization divides the input image into an M x N grid and then applies equalization to each cell in the grid, resulting in a higher quality output image.
- Parameters:
  image (ndarray) – The image on which to perform histogram equalization (grayscale or in BGR format).
  color_space (str) – The color space of the image on which to perform histogram equalization. Supported color_space parameters: [“GS”, “YCrCb”, “YUV”, “HSV”, “LAB”]. If color_space = “GS”, the output image will be in grayscale format (2D array). If color_space != “GS”, the output image will be in BGR format (3D array).
  adaptive (bool) – Whether to enable adaptive histogram equalization.
  clip_limit (float) – The clip limit value for adaptive histogram equalization, used only if adaptive = True. The clip_limit value is the threshold for contrast limiting. Typically it is advised to use a value from 2-5; the allowed range is 0-40. Larger values result in more local contrast and more noise, so keep the clip_limit value as low as possible.
  tile_grid_size (Tuple[int, int]) – The number of grid cells to divide the image into for adaptive histogram equalization, used only if adaptive = True.
- Return type: ndarray
- Returns: The image after histogram equalization (grayscale or in BGR format).
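A sketch of adaptive equalization in the LAB color space (the input path is a placeholder):

    import cv2
    import edgeiq

    image = cv2.imread('input.jpg')  # placeholder path, BGR image
    equalized = edgeiq.perform_histogram_equalization(
        image, color_space='LAB', adaptive=True,
        clip_limit=2.0, tile_grid_size=(8, 8))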
perform_gamma_correction(image, gamma_value=0.8, color=False)¶ Perform gamma correction on the input image and return the corrected image.
Gamma correction is done when you want to control a camera sensor’s color and luminance. Gamma correction is also known as the Power Law Transform:

    O = I ^ (1 / G)

where I is the input image, O is the output scaled back to the range [0, 255], and G is the gamma value, which should be greater than 0. Gamma values < 1 shift the image towards the darker end of the spectrum, gamma values > 1 make the image appear lighter, and a gamma value of 1 has no effect.
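As a worked example, with G = 2.0 a mid-gray pixel I = 128 is normalized to 128/255 ≈ 0.502, raised to the power 1/G = 0.5 to give ≈ 0.709, and scaled back to ≈ 181, so the output is brighter, as described above.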
- Parameters:
  image (ndarray) – The image on which to perform gamma correction.
  gamma_value (float) – The gamma value for gamma correction.
  color (bool) – True will apply gamma correction to the BGR image and False to the grayscale image. If color = True, the output image will be in BGR format (3D array). If color = False, the output image will be in grayscale format (2D array).
- Return type: ndarray
- Returns: The image after gamma correction (2D or 3D array).
draw_rounded_rectangle(image, pt1, pt2, color, thickness, corner_radius)¶ Draw a rectangle with rounded corners in place on an image.
- Parameters:
  image (ndarray) – The image to draw the rectangle on.
  pt1 (Tuple[int, int]) – The x, y coordinates of the upper left corner.
  pt2 (Tuple[int, int]) – The x, y coordinates of the lower right corner.
  color (Tuple[int, int, int]) – The color to draw the box, in BGR.
  thickness (int) – The thickness of the edge lines. A value of -1 will fill the rectangle.
  corner_radius (int) – The radius of the corners.
- Return type: None
- Returns: None
draw_text_with_background(image, text, start_x, start_y, font_size, font_thickness, color, background_padding=10, background_corner_radius=0, text_alignment=('left', 'top'), **kwargs)¶ Draw text with a colored background on an image.
- Parameters:
  image (ndarray) – The image to draw on.
  text (str) – The text to write on the image.
  start_x (int) – The x coordinate of the upper left corner.
  start_y (int) – The y coordinate of the upper left corner.
  font_size (float) – The scale factor for the text.
  font_thickness (int) – The thickness of the lines used to draw the text.
  color (Tuple[int, int, int]) – The color for the text background. The text color, black or white, will be selected based on the background color for maximum visibility.
  background_padding (int) – The padding around the text of the background.
  background_corner_radius (int) – The corner radius of the background box.
  text_alignment (Tuple[Literal['left', 'center', 'right'], Literal['top', 'middle', 'bottom']]) – Specifies the alignment of the text in reference to the origin point (x, y). Accepts a tuple of horizontal ('left', 'center', 'right') and vertical ('top', 'middle', 'bottom') alignment literals.
- Return type: Tuple[ndarray, int, int]
- Returns: The marked up image, the box width, and the box height.
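A sketch of the drawing helpers on a blank canvas (the coordinates, label text, and colors are arbitrary):

    import numpy as np
    import edgeiq

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # blank BGR canvas
    edgeiq.draw_rounded_rectangle(
        frame, pt1=(50, 60), pt2=(250, 160),
        color=(0, 200, 0), thickness=2, corner_radius=10)
    frame, box_w, box_h = edgeiq.draw_text_with_background(
        frame, 'person: 0.92', start_x=50, start_y=40,
        font_size=0.6, font_thickness=1, color=(0, 200, 0))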
Results Serialization¶
to_json_serializable(results_input)¶ Takes in core Computer Vision service results, such as ObjectDetectionResults, ClassificationResults, HumanPoseResult, or InstanceSegmentationResults, or results returned by calling the update() method on any of the tracking classes, such as CentroidTracker, and returns them in a JSON-serializable format.
Typical usage:

    ...
    results = obj_detect.detect_objects(frame, confidence_level=.5)
    serialized_results = edgeiq.to_json_serializable(results)

- Parameters: results_input (a core Computer Vision service result object) – The object to serialize.
- Returns: A JSON-serializable object.
HW Discovery¶
find_usb_device(id_vendor, id_product)¶ Check if a USB device is connected.
- Parameters:
  id_vendor (int) – The vendor ID.
  id_product (int) – The product ID.
- Return type: bool
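A usage sketch (the vendor/product IDs shown are placeholders; substitute the IDs reported by lsusb for your device):

    import edgeiq

    # Placeholder IDs, e.g. from `lsusb` output
    if edgeiq.find_usb_device(id_vendor=0x03e7, id_product=0x2485):
        print('USB accelerator found')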
find_pcie_device(id_vendor, id_product)¶ Check if a PCIe device is connected.
- Parameters:
  id_vendor (str) – The vendor ID.
  id_product (str) – The product ID.
- Raises: RuntimeError if the pciutils library is not found.
- Return type: bool
find_hailo()¶ Check if a Hailo-8 PCIe chip is connected.
- Return type: bool

find_nvidia_gpu()¶ Determine if running on an NVIDIA device.
- Return type: bool

is_jetson()¶ Determine if running on an NVIDIA Jetson device.
- Return type: bool

is_jetson_xavier_nx()¶ Determine if running on an NVIDIA Jetson Xavier NX.
- Return type: bool

is_jetson_agx_xavier()¶ Determine if running on an NVIDIA Jetson AGX Xavier.
- Return type: bool

is_jetson_agx_orin()¶ Determine if running on an NVIDIA Jetson AGX Orin.
- Return type: bool

is_jetson_orin_nx()¶ Determine if running on an NVIDIA Jetson Orin NX.
- Return type: bool

is_opencv_cuda_available()¶ Check if OpenCV is built with CUDA support and a CUDA device is available.
- Return type: bool
- Returns: True if OpenCV is built with CUDA and at least one CUDA device is available.

get_gpu_archs()¶ Find the GPU compute architectures.
- Returns: List of GPU compute architectures
Analytics¶
class CustomEvent(results)¶ Store the attributes of this result.
- Parameters: results (Any) – The custom event data parsed from the analytics file.

property results¶ The custom event data parsed from the analytics file.
- Return type: Any

property tag¶ The tag for the custom event.
- Return type: Optional[Any]
load_analytics_results(filepath)¶ Load results from a file published by the alwaysAI Analytics Service.
Typical usage:

    deserialized_results = edgeiq.load_analytics_results('logs/analytics.txt')
    left_camera_results = [
        result for result in deserialized_results if 'left' in result.tag]
    right_camera_results = [
        result for result in deserialized_results if 'right' in result.tag]

- Parameters: filepath (str) – The full path to the file to load.
- Return type: list
- Returns: A list of the deserialized results. Each deserialized result will include a tag property.
parse_analytics_packet(packet_str)¶ Takes an analytics packet as input and parses it to extract the relevant information. The analytics packet is expected to be in JSON format. The function returns the extracted result object.
Typical usage:

    packet = '{"type": "TRACKING_RESULT", "results": {...}}'
    result = parse_analytics_packet(packet)
publish_analytics(results, tag=None, **kwargs)¶ Publish data to the alwaysAI Analytics Service.
Example usage:

    try:
        edgeiq.publish_analytics(results, tag='custom_tag')
    except edgeiq.PublishError as e:
        # Retry publish
        ...
    except edgeiq.ConnectionError as e:
        # Save state and exit app to reconnect
        ...

- Parameters:
  results (JSON-serializable object) – The results to publish.
  tag (Optional[Any]) – Additional information to assist in querying and visualizations.
- Raises: ConnectionBlockedError when using a connection to the alwaysAI Device Agent and resources are at capacity.
- Raises: PacketRateError when the publish rate exceeds the current limit.
- Raises: PacketSizeError when the packet size exceeds the current limit. Packet publish size and rate limits will be provided in the error message.
write_object_detection_results_to_analytics_file(output_file_path, results)¶ Write results to an analytics file at a custom path.
This is useful for writing to analytics files in a scripting environment, as opposed to an application environment. This function doesn’t require alwaysai.app.json.
- Parameters:
  output_file_path (str) – The full path of the analytics file to write to. The file need not already exist, but the directory must exist.
  results (ObjectDetectionResults) – The ObjectDetectionResults to write to the analytics file.
- Return type: None

write_tracking_results_to_analytics_file(output_file_path, results)¶ Write tracking results to an analytics file at a custom path.
This is useful for writing to analytics files in a scripting environment, as opposed to an application environment. This function doesn’t require alwaysai.app.json.
- Parameters:
  output_file_path (str) – The full path of the analytics file to write to. The file need not already exist, but the directory must exist.
  results (TrackingResults) – The TrackingResults to write to the analytics file.
- Return type: None
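A sketch of a scripting workflow that writes detection results for later analysis (the model ID and paths are placeholders):

    import cv2
    import edgeiq

    # Placeholder model ID; any object detection model applies
    obj_detect = edgeiq.ObjectDetection('alwaysai/mobilenet_ssd')
    obj_detect.load(engine=edgeiq.Engine.DNN)
    for image_path in edgeiq.list_images('images/'):  # placeholder directory
        frame = cv2.imread(image_path)
        results = obj_detect.detect_objects(frame, confidence_level=0.5)
        # The logs/ directory must already exist
        edgeiq.write_object_detection_results_to_analytics_file(
            'logs/analytics.txt', results)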