edgeIQ Release Notes¶
Release 2.6.0¶
Released September 9, 2024
New Features¶
Add support for the PyTorch runtime engine edgeiq.Engine.PYTORCH (see the sketch after this list).
Publish object tracking results with edgeiq.TrackerAlgorithm.publish_analytics.
Analyze the performance of the object tracker with edgeiq.TrackerPerformanceAnalyzer.
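For example, a minimal sketch of selecting the new engine when loading a detector (the model ID shown here is illustrative, not a specific alwaysAI model):
import edgeiq

# Hypothetical model ID for illustration; use a PyTorch model from your project.
obj_detect = edgeiq.ObjectDetection("alwaysai/your_pytorch_model")
obj_detect.load(engine=edgeiq.Engine.PYTORCH)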
Enhancements¶
Enable multistage matching to match unmatched tracked objects.
Add GPU support for the ONNX_RT engine.
Add get_zone_for_prediction to identify the zone associated with a prediction.
Enable publishing analytics on tracker objects.
Add support to load analytics events from an analytics file.
Add support to convert COCO-formatted annotations to an edgeiq.object_detection.ObjectDetectionResults object.
Calculate the Intersection over Union (IoU) of two bounding boxes with edgeiq.BoundingBox.compute_iou (see the sketch below).
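A minimal sketch of the IoU helper, assuming BoundingBox is constructed from start_x/start_y/end_x/end_y pixel coordinates:
import edgeiq

# Two overlapping boxes in pixel coordinates (illustrative values).
box_a = edgeiq.BoundingBox(start_x=0, start_y=0, end_x=100, end_y=100)
box_b = edgeiq.BoundingBox(start_x=50, start_y=50, end_x=150, end_y=150)
iou = box_a.compute_iou(box_b)  # fraction of overlap in [0.0, 1.0]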
Bug Fixes¶
Resolve YOLOX loading error log.
Release 2.5.0¶
Released July 1, 2024
Enhancements¶
Support for new configuration structure on host machines and devices.
edgeIQ remains backwards compatible with the legacy configuration structure. Please migrate to this release in order to support future CLI and Device Agent releases.
Upgrade the SocketIO version to the latest for edgeiq.Streamer.
Ensure a clean application exit when the pass-through connection is lost. When the connection is lost, edgeiq.publish_analytics() will raise edgeiq.ConnectionError; to reconnect, the application must restart and the pass-through container must be up (see the sketch below).
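A minimal sketch of handling the lost connection, assuming a dictionary payload and an optional tag keyword (both illustrative):
import sys

import edgeiq

try:
    edgeiq.publish_analytics({"people_count": 3}, tag="lobby-camera")
except edgeiq.ConnectionError:
    # The pass-through connection was lost; exit so the application can be
    # restarted once the pass-through container is back up.
    sys.exit(1)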
Bug Fixes¶
Resolve an issue where application may not fully exit upon stop when connected to pass-through connection.
Release 2.4.0¶
Released June 6, 2024
New Features¶
Add support for NVIDIA JetPack 5.1.1.
Add support for YOLOv4 ONNX model, which enables runtime conversion to TensorRT.
Enable object tracking with an analytics file with edgeiq.TrackerAnalytics.
Bug Fixes¶
Resolve an issue where non-integer zone coordinates would cause a crash.
Additional Notes¶
Support for NVIDIA JetPack 4.6.x has been removed. Please upgrade to JetPack version 5.1 or 5.1.1.
Release 2.3.1¶
Released April 25, 2024
New Features¶
Enable object detection with an analytics file with edgeiq.ObjectDetectionAnalytics.
Enhancements¶
Upgrade the Python base image from Python 3.7 to Python 3.10. This may break existing Python virtual environments; please run aai app install --clean.
Expose image size controls for the Streamer: the image height and width on the webpage can be adjusted.
Release 2.3.0¶
Released February 13, 2024
New Features¶
Add support for the ONNX runtime engine edgeiq.Engine.ONNX_RT.
Add support for runtime DLA conversion for ONNX models.
Add support for three new Object Detection architectures: DETR, CenterNet, and YOLOX.
Add MultiStreamFramework to run multiple applications concurrently in separate processes.
Add support for scaling bounding boxes along the x and y axes about the center.
Enhancements¶
Python wheel support is available for Python 3.9 and Python 3.10.
Reduce zone overlay loading time.
Bug Fixes¶
Fix error when bounding box zone is drawn from bottom right to top left.
Additional Notes¶
Support for arm32 has been removed.
Release 2.2.1¶
Released January 11, 2024
Enhancements¶
NVIDIA Jetson Orin NX TensorRT inference support.
Support for loading Re-ID analytics files.
Update keepalive handling of analytics connection to the alwaysAI Device Agent. This feature requires the Device Agent to be version 1.2.0 or newer.
Bug Fixes¶
Fix the EventVideoWriter interface state, which was a string instead of an enum.
Fix handling of human pose ONNX runtime conversion to TensorRT.
Release 2.2.0¶
Released October 4, 2023
New Features¶
Define pre- and post-processing for custom models. See the YOLOX Reference App to understand how pre- and post-processing was implemented for YOLOX.
Enhancements¶
Add new attributes to TrackablePrediction (see the sketch after this list), including:
hits: the number of detections matched to this trackable.
age: the number of iterations this trackable has existed.
Improve readability of bounding box text with image markup.
Add BaseVideoStream to the public interface to enable building generic and extendable video streams.
Improve analytics packet throttling.
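A minimal sketch of reading the new attributes, assuming tracked_objects is the dictionary of TrackablePrediction objects keyed by track ID that your tracker's update call returns:
for track_id, trackable in tracked_objects.items():
    # hits: detections matched to this trackable; age: iterations it has existed.
    print(f"{track_id}: hits={trackable.hits} age={trackable.age}")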
Bug Fixes¶
Resolve issue with JetPack 4.6 YOLO binaries for Jetson devices.
Resolve an issue where pose instances are not properly created when loading from analytics files.
Release 2.1.1¶
Released July 24, 2023
Bug Fixes¶
Resolve an issue where Windows systems with NVIDIA GPUs installed would cause an exception due to an unsupported environment. This error now only occurs if an invalid engine is loaded on those systems.
Release 2.1.0¶
Released June 27, 2023
New Features¶
Add ReIdentification to enable object re-identification against known objects. See the starter app on the alwaysAI GitHub to properly configure the environment.
Add support for JetPack 5.1, in addition to existing support for JetPack 4.6 and 4.6.1/2/3.
Add support for TensorRT on x86 systems. Supported CUDA versions include 22.12 and 23.02.
Add TestVideoStream to enable validating application operation without connecting to any cameras or video files.
Add safe hstack and vstack to easily combine images of different resolutions.
Enhancements¶
Improve error logs for analytics publish errors.
Use production server for edgeiq.HttpInferenceServer.
Improve environment validation upon edgeIQ initialization.
Add support for YOLO models running on TensorRT engine on NVIDIA AGX Orin.
Add support for human pose to Qualcomm AIC 100.
Bug Fixes¶
Resolve issue with bash configuration in edgeIQ docker images.
Additional Notes¶
Support for MYRIAD has been removed, including EyeCloud, OAK, and the DNN_OPENVINO engine.
Release 2.0.0¶
Released April 5, 2023
New Features¶
Connects to the alwaysAI Device Agent for Remote Deployment functionality. New behavior includes:
Guaranteed cloud connection when enabled: when the cloud connection is enabled and the connection fails, edgeIQ will crash.
Quality-of-service considerations for analytics data: publish_analytics() will throw exceptions when error conditions are hit, including rate limiting, exceeding the packet size limit, and message queue overloads.
Build an HTTP inference server and client with edgeiq.HttpInferenceServer and edgeiq.ObjectDetectionHttpClient.
Enhancements¶
Improved multiprocessing support for Qualcomm AIC 100 inference engine.
Improve zone interface and usability
Improve performance of zone drawing
Enable per-zone color selection
TensorRT support for Image Classification
transparent_overlay_boxes renamed to overlay_transparent_boxes.
Release 1.12.0¶
Released February 13, 2023
New Features¶
Add support for Qualcomm’s cloud AI 100 accelerator card.
Add functionality to validate model configuration.
Enhancements¶
Add support for JetPack 4.6.1.
Add compute_prediction_overlap_with_zone to compute the fraction of the prediction box overlapped by the zone.
Release 1.11.0¶
Released January 11, 2023
New Features¶
Batch support for TensorRT models.
Add TimingProfiler to make timing analysis easier.
Add GStreamerCustomVideoWriter to enable fully customizable GStreamer video writers. The GStreamerUdpVideoWriter was removed by this change, and the documentation describes how to write a GStreamer pipeline for a UDP stream.
Enhancements¶
Enable restarting video streams by calling start() a second time.
Improve type hints for Object Tracking and Base Service results.
Raise exception if analytics packet publish fails.
Bug Fixes¶
Resolved an issue where MJPG Streamer did not close properly.
Release 1.10.0¶
Released August 18, 2022
New Features¶
Publish Instance Segmentation results with edgeiq.InstanceSegmentation.publish_analytics.
Set FPS as a property and override the default FPS of the video file in edgeiq.FileVideoStream.
Enhancements¶
Provide the list of models installed for the application in the edgeIQ API: edgeiq.AppConfig().model_id_list.
Publish wheel files for Python 3.8 in addition to Python 3.7.
Upgrade the NumPy version from 1.18.1 to 1.21.0.
Bug Fixes¶
Resolved an issue with framework checks for EyeCloud.
Release 1.9.0¶
Released June 27, 2022
New Features¶
Enable use of CUDA on amd64.
Perform Instance Segmentation with edgeiq.InstanceSegmentation.
Analyze the real-world performance of your object tracker with edgeiq.TrackerPerformanceAnalyzer.
Enhancements¶
Expose estimated position and velocity in KalmanTracker.
Automatically serialize dictionaries and lists in publish_analytics.
Save the converted TensorRT model after conversion from ONNX.
Remove a tracked object from an object tracker.
Track the history of an object with an object tracker.
Access the full model configuration from a base service, e.g. obj_detect.model_config.
Bug Fixes¶
Resolved an issue with loading custom analytics packets.
Resolved an issue where a Jetson device would appear unsupported if the YOLO plugin wasn't present, even though YOLO was not being used.
Release 1.8.1¶
Released April 21, 2022
Enhancements¶
Validate TensorRT models are run on the correct device.
Make DLIB an optional dependency. This won't affect most users, but will make the manual installation of edgeIQ wheels much quicker since DLIB will no longer need to be built.
Bug Fixes¶
Lock the Flask version to 2.1.0 and Werkzeug, a requirement of Flask, to a version prior to 2.1 for compatibility.
Strip leading and trailing whitespace from model labels.
Release 1.8.0¶
Released March 9, 2022
New Features¶
Upgrade to JetPack 4.6 for NVIDIA Jetson devices. Follow the instructions on the Working with edge devices page to flash your Jetson device with JetPack 4.6.
Support production analytics on host machines (the device that is running the CLI).
Enhancements¶
Support for parsing tags in analytics files loaded by load_analytics_results().
Offload JPEG encoding to the hardware encoder on NVIDIA Jetson devices in the Streamer datapath.
Enable updates to the predictions list of ObjectDetectionResults.
Release 1.7.0¶
Released January 31, 2022
New Features¶
Add functionality for incorporating regions of interest into applications. This includes the ability to ingest JSON configuration files generated with the Zone Creator Tool and the addition of ZoneList and Zone classes, as well as associated utility functions.
Add device support for the Jetson AGX Xavier, and add the is_jetson_agx_xavier() tool to check if running on an AGX Xavier.
Add BarcodeDetection and QRCodeDetection features.
Add the opencv_bbox() utility to BoundingBox for getting the bounding box in OpenCV format (see the sketch below).
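A minimal sketch of the new utility, assuming BoundingBox takes start/end pixel coordinates and opencv_bbox() returns the (x, y, width, height) tuple OpenCV expects:
import edgeiq

box = edgeiq.BoundingBox(start_x=10, start_y=20, end_x=110, end_y=220)
x, y, w, h = box.opencv_bbox()  # e.g. (10, 20, 100, 200)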
Enhancements¶
Upgrade OpenCV to 4.5.4.
Release 1.6.0¶
Released December 6, 2021
Enhancements¶
Improve the RealSense API.
Reduce the memory footprint of import edgeiq. As a result, the realsense, oak, and eyecloud modules must be explicitly imported before usage. For example:
import edgeiq
from edgeiq import realsense
rs = edgeiq.realsense.RealSense()
Bug Fixes¶
Add a check for valid inputs in BoundingBox to prevent a divide-by-zero error.
Fix incorrect logs in RtspVideoWriter.
Release 1.5.0¶
Released November 8, 2021
New Features¶
Add analytics publishing features.
Add RtspVideoWriter to host an RTSP server with a variable number of H264-encoded streams.
Add GStreamerCustomVideoStream to enable custom GStreamer pipelines as input to your application.
Enhancements¶
Add non-blocking frame read to all video streams.
Add min_inertia to Object Tracking to reduce tracking false detections.
Release 1.4.0¶
Released August 23, 2021
New Features¶
Introduce MOG2 background subtraction utility
Add contours utility functions
Add serialization utility function for core CV services
Bug Fixes¶
Fix image rotation utility
Release 1.3.1¶
Released June 8, 2021
Bug Fixes¶
Resolved an issue where FileVideoStream.read would occasionally return None.
Release 1.3.0¶
Released May 10, 2021
New Features¶
Add KalmanTracker to track objects between frames using a Kalman filter-based tracker.
Bug Fixes¶
Resolved an issue where the model ID is not correctly displayed on Windows.
Release 1.2.2¶
Released May 4, 2021
New Features¶
Add an AprilTag detector to detect AprilTags.
Enhancements¶
Upgrade OpenCV to 4.5.1 and OpenVINO to 2021.2.
Upgrade to Python 3.7 across the platform.
Speed up YOLOv3 post-processing on the DNN backend.
Release 1.2.1¶
Released April 6, 2021
Enhancements¶
Improve OAK camera API.
Bug Fixes¶
Resolve an issue with restarting camera video streams.
Release 1.2.0¶
Released March 12, 2021
New Features¶
Add support for OAK-D camera.
Replace the existing CorrelationTracker with one that provides a dictionary of objects organized by ID. The interface has changed, so any code utilizing CorrelationTracker will need to be updated. See additional details in the Object Tracking section.
Enhancements¶
Improve mode selection for JetsonVideoStream with JetsonCameraMode.
Release 1.1.0¶
Released February 24, 2021
New Features¶
Add Oak to capture frames and to execute models on OAK cameras.
Release 1.0.1¶
Released February 19, 2021
Bug Fixes¶
Fix an issue where Object Detection on TensorRT would fail due to an incorrect resize of the input image.
Release 1.0.0¶
Released February 15, 2021
New Features¶
Support for production mode deployment.
Support for YOLO on TensorRT.
Release 0.17.1¶
Released December 16, 2020
Bug Fixes¶
Resolve an issue where the Streamer would crash due to mismatched SocketIO versions.
Release 0.17.0¶
Released December 10, 2020
New Features¶
Add MjpgVideoWriter to host an MJPG video streaming server on the device.
Add EyeCloud to capture frames and to execute models on EyeCloud cameras.
Enhancements¶
Update the RealSense API to improve usability. This is a breaking change, so users should update their applications to match the new API.
Known Issues¶
EyeCloud: delay in the video feed when running on the Raspberry Pi 4.
EyeCloud.get_model_result: the sequence_index of the image that comes with the result objects may be inaccurate. Use the sequence_index attribute of the result object directly.
Release 0.16.1¶
Released October 5, 2020
New Features¶
Add ObjectDetection support for TensorRT.
Bug Fixes¶
Resolve an issue where FileVideoStream fails to exit on crash.
Release 0.16.0¶
Released September 9, 2020
New Features¶
Upgrade JetPack to 4.4 for NVIDIA Jetson devices. The base Docker image works on all Jetson devices and is tagged alwaysai/edgeiq:jetson-0.16.0. The image requires JetPack 4.4 to run. Make sure to reconfigure (aai app configure) your app after you upgrade your Jetson device.
Add support for the NVIDIA Jetson Xavier NX, and add the is_jetson_xavier_nx() tool to check if running on a Xavier NX.
Enhancements¶
Improve performance of PoseEstimation on edge devices.
Release 0.15.1¶
Released August 21, 2020
New Features¶
Add tools for working on Jetson devices: is_jetson(), is_jetson_nano(), and is_jetson_tx2().
Release 0.15.0¶
Released August 6, 2020
New Features¶
Upgrade the OpenCV version to 4.4.0 and the OpenVINO version to 2020.4. OpenVINO 2020.4 has dropped support for NCS1, therefore NCS1 is no longer supported on alwaysAI. The find_ncs1() tool is no longer available.
Update Dockerfile naming to be more specific. This only impacts users who have hardcoded their Docker image FROM line for a specific architecture. Here are the changes:
arm -> armv7hf
arm64 -> aarch64
Enhancements¶
FileVideoStream now restarts when start() is called a second time.
Support for the OPENVINO engine on CPU has been dropped due to uneven support across devices.
Known Issues¶
OpenCV + OpenVINO on armv7hf docker image still at 4.1.2 + 2019_R3.1 due to issues with OpenVINO on 32 bit systems.
Release 0.14.2¶
Released June 10, 2020
New Features¶
Enable RealSense cameras for Mac OSX and Windows.
Release 0.14.1¶
Released June 5, 2020
New Features¶
Add RealSense.compute_distance_between_objects to compute the distance between two detected objects in 3D space.
Bug Fixes¶
Fix TensorRT memory leak.
Release 0.14.0¶
Released May 19, 2020
New Features¶
Pose Estimation on all platforms.
RealSense depth camera support.
Pose Estimation using the TensorRT engine, with optimization for NVIDIA GPUs.
Support OPENVINO engine on Intel CPUs.
Bug Fixes¶
Fix a Windows DLL import error.
Known Issues¶
The Intel Neural Compute Stick is not supported running directly on Mac OSX or Windows.
Release 0.13.0¶
Released April 28, 2020
New Features¶
Add edgeiq.overlay_image to combine foreground and background images based on an input mask (see the sketch after this list).
Support for the NVIDIA Jetson TX2. Docker containers are tagged as alwaysai/edgeiq:tx2-x.x.x.
NVIDIA Jetson Nano containers upgraded to JetPack 4.3 / L4T 32.3.1.
Mac OSX installer for Mojave and Catalina.
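A minimal sketch of overlay_image, assuming same-sized foreground, background, and mask images loaded with OpenCV (file names are illustrative) and that the mask selects which foreground pixels to keep:
import cv2
import edgeiq

foreground = cv2.imread("person.png")
background = cv2.imread("beach.png")
mask = cv2.imread("person_mask.png")

# Combine the two images: masked pixels come from the foreground.
combined = edgeiq.overlay_image(foreground, background, mask)
cv2.imwrite("combined.png", combined)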
Enhancements¶
Added debug logs to EventVideoWriter.
Reduced network load when using Streamer.
Improved the Streamer shutdown process.
Jetson CSI cameras are now supported by the CLI.
Improved exception handling for cameras.
Bug Fixes¶
Fix a bug in VideoWriter where a None object is called and throws an exception.
Fix a bug in FileVideoStream to handle the no-frame exception.
Known Issues¶
The Intel Neural Compute Stick is not supported running directly on Mac OSX.
Release 0.12.0¶
Released March 13, 2020
New Features¶
Add filtering by class to Semantic Segmentation (SemanticSegmentation.build_object_map(self, class_map, class_list)); see the sketch after this list.
Add JetsonVideoStream() for Jetson Nano CSI camera support. Note that the alwaysAI CLI does not support the Jetson Nano CSI camera at the time of this release.
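A minimal sketch of class filtering, assuming segment_image() returns results with a class_map attribute (the model ID, image file, and class names shown are illustrative):
import cv2
import edgeiq

semantic_segmentation = edgeiq.SemanticSegmentation("alwaysai/enet")
semantic_segmentation.load(engine=edgeiq.Engine.DNN)

image = cv2.imread("street.png")
results = semantic_segmentation.segment_image(image)
# Keep only the classes of interest in the object map.
object_map = semantic_segmentation.build_object_map(results.class_map, ["Person", "Car"])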
Enhancements¶
Add edgeiq.Accelerator.NVIDIA_FP16 to run models on an NVIDIA GPU compressed to 16-bit floating point.
Detect when NCS devices are not attached when using edgeiq.Accelerator.MYRIAD.
Known Issues¶
The alwaysai/enet model is currently not supported on edgeiq.Accelerator.NVIDIA_FP16.
Release 0.11.1¶
Released February 13, 2020
Enhancements¶
Install OpenCV C++ headers so that 3rd party libraries can use them.
Usability and performance improvements in Pose Estimation.
Release 0.11.0¶
Released January 29, 2020
New Features¶
Add support for the NVIDIA Jetson Nano. The GPU on the Jetson Nano can be used by starting your Dockerfile with the FROM alwaysai/edgeiq:nano-0.11.0 Docker image, and using edgeiq.engine.DNN_CUDA as the engine and edgeiq.accelerator.NVIDIA as the accelerator.
Enhancements¶
Split off GStreamerVideoStream from WebcamVideoStream to enable more flexibility on GStreamer inputs. The gstreamer parameter is no longer available in WebcamVideoStream.
Add repr to results objects to easily print the contents.
Updated to Python 3.6 in Docker images and Windows native.
Update OpenCV to version 4.1.2 in Linux docker images and Windows native.
Update OpenVINO to version 2019 R3.1 in Linux docker images.
Known Issues¶
The Intel Neural Compute Stick is not supported running directly on Windows.
Release 0.10.0¶
Released January 14, 2020
New Features¶
Add SemanticSegmentation to classify and label each individual pixel in an image. Try out this new feature with one of the following starter apps: semantic_segmentation_cityscape and semantic_segmentation_voc.
Add IPVideoStream to capture data from cameras and videos streaming over the internet.
Enhancements¶
Update FileVideoStream to support simulating real-time video playback.
Release 0.9.0¶
Released December 5, 2019
New Features¶
Add support for CSI cameras with GStreamer. Set the gstreamer flag to True when instantiating a WebcamVideoStream object to use this feature (see the sketch below).
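A minimal sketch, where the cam parameter name is assumed for illustration; the gstreamer flag is the new part:
import edgeiq

with edgeiq.WebcamVideoStream(cam=0, gstreamer=True) as video_stream:
    frame = video_stream.read()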
Enhancements¶
Host Web Streamer libraries locally to enable using the Streamer without an internet connection.
Release 0.8.0¶
Released November 8, 2019
New Features¶
Add PoseEstimation for characterizing human poses in images and video. Try it out with the realtime_pose_estimator starter app. Currently, PoseEstimation is only supported on Linux using MYRIAD accelerators (NCS1/NCS2).
Add the blur_objects() utility to blur a list of detected objects in an image (see the sketch after this list).
Add the transparent_overlay_boxes() utility to overlay colored boxes on detected objects in an image.
Add detect_simple_camera_blockage() to detect camera blockages in some simplistic cases.
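A minimal sketch of the new image utilities, assuming frame is an image and results is the output of an object detector's detect_objects() call (the argument order shown is an assumption):
import edgeiq

# Blur every detected object in the frame.
frame = edgeiq.blur_objects(frame, results.predictions)
# Or overlay semi-transparent colored boxes on the detections instead.
frame = edgeiq.transparent_overlay_boxes(frame, results.predictions)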
Bug Fixes¶
Fix issue with installing edgeIQ autocomplete helper on Raspberry Pi.
Release 0.7.20¶
Released October 17, 2019
Enhancements¶
Improve bash prompt styling when using aai app shell.
Improve error logs for FileVideoStream.
Bug Fixes¶
Resolve pip install failure on Raspberry Pi when using requirements.txt.
Release 0.7.0¶
Released September 17, 2019.
New Features¶
Add EventVideoWriter to enable recording video clips based on event triggers. The class supports pre-roll and post-roll recording to capture video before and after the event occurs. The obsolete EventWriter class was removed.
Add a button on the Streamer to stop a running app. For realtime apps, use the check_exit() function to break out of your main loop (see the sketch after this list).
Add g++ and cmake to the edgeIQ Docker image to enable building wheels from requirements.txt.
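A minimal sketch of the exit check in a realtime loop, assuming a webcam stream and the Streamer's send_data() call for display:
import edgeiq

with edgeiq.WebcamVideoStream(cam=0) as video_stream, edgeiq.Streamer() as streamer:
    while True:
        frame = video_stream.read()
        streamer.send_data(frame, "running")
        if streamer.check_exit():
            # The stop button on the Streamer page was pressed.
            break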
Enhancements¶
Update the FPS interface to better reflect function action:
elapsed() changed to get_elapsed_seconds().
fps() changed to compute_fps().
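A minimal sketch of the renamed calls, assuming the FPS helper is started and stopped around the timed work (the per-iteration update() call is an assumption):
import time

import edgeiq

fps = edgeiq.FPS()
fps.start()
time.sleep(1.0)  # stand-in for per-frame work
fps.update()
fps.stop()
print(fps.get_elapsed_seconds(), fps.compute_fps())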
Release 0.6.3¶
Released September 10, 2019.
Enhancements¶
Pass exceptions that occur in the Streamer up to the top-level app using the send_data() function.
Improve the FPS class to compute an approximate FPS value without calling the fps.stop() function.
Visual improvements to the Streamer, including expanding the output text box width and displaying the current edgeIQ version.
Bug Fixes¶
Resolve issue with displaying a single image on the Streamer in slideshow mode.
Release 0.6.2¶
Released August 27, 2019.
Enhancements¶
Add additional console logs and Streamer text data to Starter Applications.
Bug Fixes¶
Resolve a "divide by 0" error seen when using cutout_image() with ObjectDetection.detect_objects() output (see the sketch below).
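A minimal sketch of the combination that hit the error, assuming frame is an image and obj_detect is a loaded edgeiq.ObjectDetection instance:
import edgeiq

results = obj_detect.detect_objects(frame, confidence_level=0.5)
for prediction in results.predictions:
    # Crop the detected region out of the frame.
    crop = edgeiq.cutout_image(frame, prediction.box)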
Release 0.6.0¶
Released August 13, 2019.
New Features¶
Add Correlation Tracker for tracking objects across frames between detections.
Add Centroid Tracker for tracking unique objects between two detections.
Add MYRIAD support across arm32, arm64, and amd64 architectures.
Add utilities to work with bounding boxes, including computing overlap and intersection between boxes as well as cutting out image segments.
Add edgeIQ abstract package for desktop autocomplete support.
Enhancements¶
Improve Streamer performance in poor network conditions.