Tools
Performance
- class FPS
Monitor the frames per second (FPS) processed by the application for performance tracking.
Typical usage:
fps = edgeiq.FPS().start()
while True:
    # <main processing loop>
    fps.update()

# Get the elapsed time and FPS
fps.stop()
print("Elapsed seconds: {}".format(fps.get_elapsed_seconds()))
print("FPS: {}".format(fps.compute_fps()))
compute_fps() may also be called in the main processing loop to compute an instantaneous estimate of the FPS, as shown in the sketch following this class.
- start()
Start tracking FPS.
- stop()
Stop tracking FPS.
- update()
Increment the total number of frames examined between the start and end intervals.
- Raises:
RuntimeError
- get_elapsed_seconds()
Return the total number of seconds between the start and end intervals.
- Returns:
float – The elapsed time in seconds between start and end, or since start if stop() has not been called.
- compute_fps()
Compute the (approximate) frames per second.
- Returns:
float – the approximate frames per second.
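Since compute_fps() may be called mid-loop, a live readout can be added to the processing loop. A minimal sketch, assuming a hypothetical get_frame() frame source:

import edgeiq

fps = edgeiq.FPS().start()
for _ in range(100):
    frame = get_frame()  # hypothetical frame source
    # ... main processing ...
    fps.update()
    # Instantaneous estimate; does not stop the tracker
    print("Current FPS: {:.2f}".format(fps.compute_fps()))
fps.stop()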
- class TimingProfiler
Time segments of processing and generate reports.
To time segments of a function:
def func():
    prof = edgeiq.TimingProfiler()
    for i in range(10):
        prof.mark_start('start-loop')
        ...
        prof.mark('finish-block-1')
        ...
        prof.mark('finish-block-2')
        ...
        prof.mark_end('end-loop')
    print(json.dumps(prof.durations, indent=2))
    print(json.dumps(prof.generate_report(), indent=2))
- property durations: List[dict]
The durations of the last completed iteration.
The durations object has the form:
[ { "segment": "<prev_mark>-><cur_mark>", "duration_s": <duration in seconds> }, ... ]
- mark_start(tag)
Mark the start of a timing profiling iteration.
This will reset the iteration memory and will complete when mark_end() is called.
- Parameters:
tag (str) – The tag to identify this event. Should be the same as other start events you’d like to compare this to.
- mark(tag)
Mark a timing profiling event.
This must be called after mark_start() and will generate a timing event which compares to the previous event.
- Parameters:
tag (str) – The tag to identify this event. Should be the same as other events you’d like to compare this to.
- parse_cvat_annotations(path, start_frame=0, end_frame=None, new_id_for_occlusion=False)
Parse a CVAT annotations file into edgeIQ predictions.
- Parameters:
- Return type:
  Tuple[Dict[int, List[ObjectDetectionPrediction]], Dict[int, Dict[str, List[int]]]]
- Returns:
  dict – Frame-by-frame data: {frame_num: list of ObjectDetectionPrediction}
- Returns:
  dict – Object-by-object data: {object: {'tracked_frames': list of frames in which the object was tracked, 'occluded_frames': list of frames in which the object was occluded}}
- Raises:
  FileNotFoundError – if the file doesn’t exist.
- Raises:
  ValueError – if start_frame is greater than end_frame.
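A minimal usage sketch, assuming 'annotations.xml' is a CVAT annotation dump on disk; the key names follow the return structure documented above:

import edgeiq

frame_data, object_data = edgeiq.parse_cvat_annotations(path='annotations.xml')

# Frame-by-frame: predictions keyed by frame number
for frame_num, predictions in frame_data.items():
    print("frame {}: {} annotated objects".format(frame_num, len(predictions)))

# Object-by-object: tracked and occluded frame lists per object
for obj_id, frames in object_data.items():
    print("object {}: tracked in {} frames, occluded in {}".format(
        obj_id, len(frames['tracked_frames']), len(frames['occluded_frames'])))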
- parse_coco_annotations(path, start_frame=None, end_frame=None)
Parse COCO annotations and convert to ObjectDetectionResults.
If annotations do not start at the first frame or go to the last frame, set start_frame and end_frame to the desired frame indices.
- Parameters:
- Return type:
- Returns:
  Frame-by-frame ObjectDetectionResults.
- Raises:
  FileNotFoundError – if the file doesn’t exist.
- parse_mot_annotations(path, labels, start_frame=None, end_frame=None)
Parse MOT annotations and convert to TrackingResults.
An entry in the list will be provided for every frame, even if MOT results do not exist for those frames.
If annotations do not start at the first frame or go to the last frame, set start_frame and end_frame to the desired frame indices.
- Parameters:
- Return type:
  List[TrackingResults[TrackablePrediction[ObjectDetectionPrediction]]]
- Returns:
  Frame-by-frame TrackingResults.
- Raises:
  FileNotFoundError – if the file doesn’t exist.
- class ModelPerformanceAnalyzer(ground_truth_path, start_frame=0, end_frame=None)
Get statistics to analyze the performance of models.
Typical usage:
analyzer = edgeiq.ModelPerformanceAnalyzer(ground_truth_path='annotations.xml')
model_results = edgeiq.load_analytics_results('logs/analytics.txt')
analyzer.set_results(model_results)
analyzer.write_analysis_output(output_dir='output', filename_suffix='1', iou_threshold=0.3)
- Parameters:
- set_results(results)
Set the list of model detections to analyze.
- Parameters:
results (list of ObjectDetectionResults) – The list of ObjectDetectionResults to use for analysis.
- get_detections_per_object(iou_threshold=0.01)
Get the percentage of correct detections per object.
- Parameters:
iou_threshold (float) – Minimum IOU required to associate model detection with ground truth
- Returns:
dict
- get_iou_distribution(iou_threshold=0.01, bins=10)
Get distribution of IOU
- get_overlap_distribution(iou_threshold=0.01, bins=10)
Get distribution of Overlap
- get_missed_detections(iou_threshold=0.01)
Get missed detections per object.
- Parameters:
iou_threshold (float) – Minimum IOU required to associate model detection with ground truth
- Returns:
dict – {ground_truth ID : list of missed frames}
- get_class_based_stats(iou_threshold=0.01)
Get per-class statistics: the number of True Positives, False Positives, and False Negatives, along with precision and recall for the class.
- Parameters:
iou_threshold (float) – Minimum IOU required to associate model detection with ground truth
- Returns:
dict – {class_name: {'num_gt': ..., 'num_detections': ..., 'TP': ..., 'FP': ..., 'FN': ..., 'precision': ..., 'recall': ...}}
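A minimal sketch printing per-class precision and recall, reusing the typical-usage setup above (the file paths are assumptions; the key names follow the dict layout documented here):

import edgeiq

analyzer = edgeiq.ModelPerformanceAnalyzer(ground_truth_path='annotations.xml')
analyzer.set_results(edgeiq.load_analytics_results('logs/analytics.txt'))

stats = analyzer.get_class_based_stats(iou_threshold=0.3)
for class_name, s in stats.items():
    print("{}: precision={:.2f} recall={:.2f} (TP={} FP={} FN={})".format(
        class_name, s['precision'], s['recall'], s['TP'], s['FP'], s['FN']))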
- write_analysis_output(output_dir='output_data', filename_suffix='', iou_threshold=0.01)
Compute all available stats and write data to csv files.
class_stats.csv – per-class values for True Positives, False Positives, False Negatives, precision, and recall
object_stats.csv – per-object values for ground truth, detections, and % correct detections
distribution_stats.csv – IOU and Overlap distributions of True Positives
- Parameters:
filename_suffix (string) – Suffix to be added at the end of the generated filenames
iou_threshold (float) – Minimum IOU required to associate model detection with ground truth
- class TrackerPerformanceAnalyzer(ground_truth, max_distance)
Analyze tracker performance against ground truth annotations.
TrackerPerformanceAnalyzer compares tracker results to ground truth annotations and collects data for two of the main performance flaws of tracking: ID changes and ID swaps.
ID changes occur when the tracker assigns a new tracker ID to an existing object. This can happen for a number of reasons:
Object became occluded
Object was lost and found by tracker (tracker parameters are too tight)
Object ID was swapped with another object (this is examined more closely in ID swap analysis)
The impact of ID changes on the performance of your app depends on the scenario, but a typical result is logging more unique objects than there actually were. ID changes can often be overcome by higher-layer analysis to associate objects that were occluded or lost.
ID swaps occur when an object is assigned a tracker ID that was previously assigned to another object. This can be very hard to recover from, since it is hard to detect in real-life use cases. The most common causes are:
Objects occlude each other
Tracker parameters are too loose
The impact of ID swaps on your app's performance is that metrics from multiple objects are combined into a single object.
Typical usage:
gt_res = edgeiq.parse_mot_annotations(path=gt_path, labels=LABELS)
actual_res = edgeiq.parse_mot_annotations(path=results_path, labels=LABELS)
perf_analyzer = edgeiq.TrackerPerformanceAnalyzer(
    ground_truth=gt_res, max_distance=100)
for frame_idx in range(start_frame, end_frame - 1):
    frame = load_annotation_frame(frame_idx)
    frame = perf_analyzer.markup_image(frame_idx, frame, (255, 100, 0))
    # Get predictions and tracker results
    perf_analyzer.update(frame_idx, tracked_objects)
id_changes, id_swaps = perf_analyzer.generate_report()
id_changes.write_to_file(output_dir)
id_swaps.write_to_file(output_dir)
- Parameters:
  ground_truth – The ground truth tracking results to compare against, for example the output of parse_mot_annotations().
  max_distance (int) – The max distance to be used for matching tracked objects with annotations.
- update(frame_idx, results)
Match a new set of tracker results with the ground truth results from annotations.
- Parameters:
  frame_idx (int) – The frame index to read annotations from.
  results (TrackingResults[TrackablePrediction[ObjectDetectionPrediction]]) – The output of an Object Tracker.
- markup_image(frame_idx, frame, color)
Draw boxes, centers, and matching radius of ground truth predictions on the frame.
- generate_report()
Evaluate the tracker data and generate the report.
- Return type:
  Tuple[IdChangeReport, IdSwapReport]
- class IdChangeReport(num_objects_with_id_changes, objects_with_id_changes, total_id_changes, id_change_events_by_frame, id_changes_by_ground_truth_id)
- class IdSwapReport(num_objects_with_id_swaps, id_swaps_by_ground_truth_id, total_object_swaps, id_swap_events_by_frame)
Image Manipulation
- translate(image, x, y)
Translate an image on the X and/or Y axis.
- rotate(image, angle)
Rotate an image by specified angle.
- resize(image, width=None, height=None, keep_scale=True, inter=3)
Resize an image to specified height and width.
When both a width and height are given and keep_scale is True, these are treated as the maximum width and height.
- Parameters:
- Return type:
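A minimal sketch of the scale-preserving behavior ('input.jpg' is an assumed input file):

import cv2
import edgeiq

image = cv2.imread('input.jpg')

# Fit within a 640x480 box; with keep_scale=True, width and height
# act as maximums and aspect ratio is preserved
fitted = edgeiq.resize(image, width=640, height=480, keep_scale=True)

# Force exact output dimensions, ignoring aspect ratio
exact = edgeiq.resize(image, width=640, height=480, keep_scale=False)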
- convert_to_jpg(image, jpg_quality)
Convert the given image to JPEG represented in bytes.
When on a Jetson device, this will use the nvjpg hardware accelerator.
- list_images(base_path, contains=None)
List all images in specified path.
Finds images with the following extensions:
.jpg
.jpeg
.png
.bmp
.tif
.tiff
- list_files(base_path, valid_exts, contains=None)
List all files in specified path.
- safe_hstack(frames, pad=False)
Horizontally stack images from left to right. If pad is False, images are resized while maintaining aspect ratio; if True, they are padded instead.
- safe_vstack(frames, pad=False)
Vertically stack images from top to bottom. If pad is False, images are resized while maintaining aspect ratio; if True, they are padded instead.
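A minimal sketch tiling four frames of mixed sizes into a 2x2 grid; the dummy numpy arrays stand in for camera frames:

import numpy as np
import edgeiq

frame_a = np.zeros((240, 320, 3), dtype=np.uint8)
frame_b = np.zeros((480, 640, 3), dtype=np.uint8)
frame_c = np.zeros((360, 480, 3), dtype=np.uint8)
frame_d = np.zeros((240, 320, 3), dtype=np.uint8)

top = edgeiq.safe_hstack([frame_a, frame_b])               # resized to reconcile
bottom = edgeiq.safe_hstack([frame_c, frame_d], pad=True)  # padded instead
grid = edgeiq.safe_vstack([top, bottom])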
- pad_to_aspect_ratio(image, a_ratio)
Pad an image to a certain aspect ratio.
Padding is added to the bottom and right of the image.
- cutout_image(image, box)
Cut out the portion of an image outlined by a bounding box.
- Parameters:
image (ndarray) – The image to cut out from.
box (BoundingBox) – The bounding box outlining the section of the image to cut out.
- Return type:
- Returns:
The segment of the image outlined by the bounding box. Will be independent from the original image.
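A minimal sketch cropping each detection out of a frame; obj_detect is assumed to be an already-loaded ObjectDetection instance and frame an input image:

import edgeiq

results = obj_detect.detect_objects(frame, confidence_level=0.5)
for prediction in results.predictions:
    crop = edgeiq.cutout_image(frame, prediction.box)
    # crop is independent of frame, so it can be modified safely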
- blend_images(foreground_image, background_image, alpha)
Blend a foreground image with a background image. The foreground and background images must have the same dimensions and the same color format (RGB/BGR).
- Parameters:
- Return type:
- Returns:
numpy array – The blended image.
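A minimal sketch of a simple cross-fade; the input files are assumptions and must share dimensions and color format:

import cv2
import edgeiq

foreground = cv2.imread('foreground.jpg')
background = cv2.imread('background.jpg')

# alpha is assumed here to weight the foreground, as in standard
# alpha blending
blended = edgeiq.blend_images(foreground, background, alpha=0.7)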
- overlay_image(foreground_image, background_image, foreground_mask)
Overlay a foreground image with a background image according to the foreground mask.
This function will mask both the foreground and background images, then combine them into the output image.
- Parameters:
foreground_image (ndarray) – The image to be overlaid on the background.
background_image (ndarray) – The image for the foreground to be overlaid on.
foreground_mask (ndarray) – A mask with white indicating foreground and black indicating background. Shades in between will blend the foreground and background accordingly.
- Return type:
- Returns:
The overlaid image.
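A minimal sketch compositing one image onto another through a mask; the input files are assumptions:

import cv2
import numpy as np
import edgeiq

foreground = cv2.imread('logo.jpg')   # same dimensions assumed
background = cv2.imread('frame.jpg')

# White regions of the mask show the foreground; black shows background
mask = np.zeros(background.shape[:2], dtype=np.uint8)
cv2.circle(mask, (100, 100), 50, 255, -1)

composited = edgeiq.overlay_image(foreground, background, mask)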
- perform_histogram_equalization(image, color_space='GS', adaptive=False, clip_limit=2.0, tile_grid_size=(8, 8))
Performs Histogram Equalization on the input image and returns the equalized image.
Histogram equalization is a basic image processing technique that adjusts the global contrast of an image by updating the image histogram’s pixel intensity distribution. Doing so enables areas of low contrast to obtain higher contrast in the output image. This function includes implementations of both basic and adaptive histogram equalization. The basic histogram equalization will spread pixels to intensity “buckets” that don’t have as many pixels binned to them. Mathematically, what this means is that the function is applying a linear trend to the image’s cumulative distribution function (CDF). The adaptive histogram equalization function divides an input image into an M x N grid, and then applies equalization to each cell in the grid, resulting in a higher quality output image.
- Parameters:
image (ndarray) – The image on which to perform the Histogram Equalization operation (grayscale or BGR format).
color_space (str) – The color space of the image on which to perform Histogram Equalization. Supported color_space parameters: ["GS", "YCrCb", "YUV", "HSV", "LAB"]. If color_space = "GS", the output image will be in grayscale format (2D array). If color_space != "GS", the output image will be in BGR format (3D array).
adaptive (bool) – Whether to enable adaptive Histogram Equalization.
clip_limit (float) – The clip limit value for Adaptive Histogram Equalization, used only if adaptive = True. The clip_limit value is the threshold for contrast limiting. Typically it is advised to use a value in the range 2-5; the allowed range is 0-40. Larger values result in more local contrast and more noise, so keep the clip_limit value as low as possible.
tile_grid_size (Tuple[int, int]) – The number of grid cells to divide the image into for Adaptive Histogram Equalization, used only if adaptive = True.
- Return type:
- Returns:
The image after Histogram Equalization (grayscale or BGR format).
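A minimal sketch contrasting basic and adaptive equalization on an assumed input image:

import cv2
import edgeiq

image = cv2.imread('lowcontrast.jpg')

# Basic global equalization, grayscale output
gray_eq = edgeiq.perform_histogram_equalization(image, color_space='GS')

# Adaptive equalization on a color image; keep clip_limit low to
# limit added noise
color_eq = edgeiq.perform_histogram_equalization(
    image, color_space='LAB', adaptive=True,
    clip_limit=2.0, tile_grid_size=(8, 8))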
- perform_gamma_correction(image, gamma_value=0.8, color=False)
Performs gamma correction operation on the input image and returns the corrected image.
Gamma correction is done when you want to control a camera sensor’s color and luminance. Gamma correction is also known as the Power Law Transform:

O = I ^ (1 / G)

where I is the input image, O is the output image scaled back to the range [0, 255], and G is the gamma value, which should be greater than 0. Gamma values < 1 shift the image towards the darker end of the spectrum, gamma values > 1 make the image appear lighter, and a gamma value of 1 has no effect.
- Parameters:
image (ndarray) – The image on which to perform the Gamma Correction operation.
gamma_value (float) – The gamma value for Gamma Correction.
color (bool) – True performs gamma correction on a BGR image, False on a grayscale image. If color = True, the output image will be in BGR format (3D array). If color = False, the output image will be in grayscale format (2D array).
- Return type:
- Returns:
The image after Gamma Correction (2D or 3D array).
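A minimal sketch showing both directions of the correction on an assumed input image:

import cv2
import edgeiq

image = cv2.imread('frame.jpg')

darker = edgeiq.perform_gamma_correction(image, gamma_value=0.8, color=True)
lighter = edgeiq.perform_gamma_correction(image, gamma_value=1.5, color=True)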
- draw_rounded_rectangle(image, pt1, pt2, color, thickness, corner_radius)
Draw a rectangle with rounded corners in place on an image.
- Parameters:
image (ndarray) – The image to draw the rectangle on.
pt1 (Tuple[int, int]) – The x,y coordinates of the upper left corner.
pt2 (Tuple[int, int]) – The x,y coordinates of the lower right corner.
color (Tuple[int, int, int]) – The color to draw the box, in BGR.
thickness (int) – The thickness of the edge lines. A value of -1 will fill the rectangle.
corner_radius (int) – The radius of the corners.
- Return type:
- Returns:
None
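A minimal sketch drawing a filled rounded box in place on a blank frame:

import numpy as np
import edgeiq

frame = np.zeros((480, 640, 3), dtype=np.uint8)

# thickness=-1 fills the rectangle; corners get a 15 px radius
edgeiq.draw_rounded_rectangle(
    frame, pt1=(50, 50), pt2=(300, 200),
    color=(0, 255, 0), thickness=-1, corner_radius=15)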
- draw_text_with_background(image, text, start_x, start_y, font_size, font_thickness, color, background_padding=10, background_corner_radius=0, text_alignment=('left', 'top'), **kwargs)
Draw text with a colored background on an image.
- Parameters:
image (ndarray) – The image to draw on.
text (str) – The text to write on the image.
start_x (int) – The x coordinate of the upper left corner.
start_y (int) – The y coordinate of the upper left corner.
font_size (float) – The scale factor for the text.
font_thickness (int) – The thickness of the lines used to draw the text.
color (Tuple[int, int, int]) – The color for the text background. The text color, black or white, will be selected based on the background color for maximum visibility.
background_padding (int) – The padding around the text of the background.
background_corner_radius (int) – The corner radius of the background box.
text_alignment (Tuple[Literal['left', 'center', 'right'], Literal['top', 'middle', 'bottom']]) – Specifies the alignment of the text in reference to the origin point (x, y). Accepts a tuple of horizontal ('left', 'center', 'right') and vertical ('top', 'middle', 'bottom') alignment literals.
- Return type:
- Returns:
The marked up image, the box width, and the box height.
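A minimal sketch labeling a blank frame; per the return description above, the call yields the marked-up image plus the background box dimensions:

import numpy as np
import edgeiq

frame = np.zeros((480, 640, 3), dtype=np.uint8)

frame, box_w, box_h = edgeiq.draw_text_with_background(
    frame, 'person: 0.92', start_x=50, start_y=50,
    font_size=0.6, font_thickness=1, color=(0, 128, 255),
    background_padding=10, background_corner_radius=5,
    text_alignment=('left', 'top'))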
Results Serialization
- to_json_serializable(results_input)
Takes in core Computer Vision service results, such as ObjectDetectionResults, ClassificationResults, HumanPoseResult, and InstanceSegmentationResults, or results returned by calling the update() method on any of the tracking classes, such as CentroidTracker, and returns them in a JSON-serializable format.
Typical usage:
...
results = obj_detect.detect_objects(frame, confidence_level=.5)
serialized_results = edgeiq.to_json_serializable(results)
- Parameters:
results_input (A core Computer Vision service result object.) – The object to serialize.
- Returns:
A JSON-serializable object.
HW Discovery
- find_usb_device(id_vendor, id_product)
Check if a USB device is connected.
- find_pcie_device(id_vendor, id_product)
Check if a PCIe device is connected.
- Parameters:
  id_vendor (str) – The vendor ID.
  id_product (str) – The product ID.
- Raises:
  RuntimeError – if the pciutils library is not found.
- Return type:
- is_opencv_cuda_available()
Check if OpenCV is built with CUDA support and a CUDA device is available.
- Return type:
- Returns:
True if OpenCV is built with CUDA and at least one CUDA device is available.
- get_gpu_archs()
Find the GPU compute architectures.
- Returns:
List of GPU compute architectures
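A minimal sketch gating an accelerated code path on these checks; the PCIe vendor and product IDs are placeholders, not real device IDs:

import edgeiq

# Prefer a CUDA path when OpenCV was built with CUDA support
if edgeiq.is_opencv_cuda_available():
    print("CUDA available; GPU archs: {}".format(edgeiq.get_gpu_archs()))

# Placeholder IDs shown; substitute your device's actual IDs
if edgeiq.find_pcie_device(id_vendor='1234', id_product='5678'):
    print("PCIe accelerator detected")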