Application Analysis¶
This documentation will help you debug your application, analyze its performance, and identify potential areas for improvement.
Debug With the Streamer¶
View Videos, Images, and Text Data¶
Many devices won’t have a graphical connection set up, so the Streamer is
provided to make debugging easier. The Streamer
class can be used
to display real-time and batch-processed video, image, and text data.
For real-time streaming, use the following statement to initialize and start the Streamer:
with edgeiq.Streamer() as streamer:
To update the Streamer with the latest video frame and text data, use the
send_data()
function:
streamer.send_data(frame, text)
The text field takes in a list of strings, and each string will be displayed on its own line in the output section of the Streamer.
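For example, a short sketch of a two-line status display (frame_count and num_objects are placeholder variables you would compute in your own loop):
text = [
    "Frame: {}".format(frame_count),
    "Objects detected: {}".format(num_objects),
]
streamer.send_data(frame, text)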
Once the Streamer starts, it will print a link in the console logs.
[INFO] Streamer started at http://localhost:5000
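Putting these pieces together, a minimal real-time sketch might look like the following. The WebcamVideoStream frame source is one possible setup (an assumption for illustration, not a requirement of the Streamer), and check_exit() is described in the next section:
import time
import edgeiq

with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
        edgeiq.Streamer() as streamer:
    time.sleep(2.0)  # give the camera a moment to warm up
    while True:
        frame = video_stream.read()
        # Each string in the list becomes one line in the output box
        streamer.send_data(frame, ["Streaming from camera 0"])
        if streamer.check_exit():
            break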
To display batch-processed images in a format similar to a slideshow,
set the queue depth large enough to store all processed images and set the
desired delay between frames. Then, once all images have been added to the
queue, call the wait() function to keep the server up until every image has
been displayed:
with edgeiq.Streamer(
        queue_depth=len(image_paths), inter_msg_time=3) as streamer:
    for image_path in image_paths:
        # Load the image from image_path and run your processing here,
        # producing `image` (the frame to display) and `text` (a list of strings)
        ...
        streamer.send_data(image, text)

    streamer.wait()
The Streamer will be automatically cleaned up when the code block exits.
Stop Your App With the Streamer¶
The Streamer can also be used to stop your app using the red stop button
in the output box. For real-time apps, the check_exit() function will return
True if the stop button has been pressed, so it can be used to exit
long-running loops:
while True:
    # Perform processing
    ...

    if streamer.check_exit():
        break
For batch-processing mode, the stop button status is checked in the
wait() function, which will exit at the next wakeup.
Analyze the Performance of Your App¶
Understand the Inference Time¶
While using the edgeIQ APIs, it can be very helpful to understand the timing
of different aspects of your app. One major piece of the overall timing will
be the inference time, or the time it takes the engine and accelerator to run
a forward pass on the network. The inference time is provided in the results
object for Classification (ClassificationResults), ObjectDetection
(ObjectDetectionResults), PoseEstimation (HumanPoseResult), and
SemanticSegmentation (SemanticSegmentationResults).
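For example, a sketch for object detection; the model ID is a placeholder, the frame is assumed to have already been read from a video stream or image file, and the inference time is assumed to be exposed as the duration attribute on the results object:
import edgeiq

obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")  # placeholder model ID
obj_detect.load(engine=edgeiq.Engine.DNN)

# `frame` would come from your video stream or image file
results = obj_detect.detect_objects(frame, confidence_level=0.5)
print("Inference time: {:1.3f} s".format(results.duration))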
If you’re seeing an inference time that is longer than what your app requires, there are three main ways to improve it:
Use an accelerator: An Accelerator can provide major improvements to inference times. For example, for many models the Intel NCS2 has inference times around 100 ms.
Change your computer vision model: Model inference times range from tens of milliseconds to tens of seconds, so your choice of model could have a large impact on your inference time. The alwaysAI Model Catalog provides inference times for popular processors and accelerators.
Use a board with more compute power: If you can’t sacrifice on the accuracy of your model, you may just need a board with more compute power. Take a look at the supported edge devices to see if there’s another board that meets your needs.
Analyze the Frames Per Second¶
For a real-time app, the frames per second will be an important performance
metric, and edgeIQ provides the FPS
tool to measure frames per
second. First, instantiate the FPS
object:
fps = edgeiq.FPS()
Next, start the timer before starting your main processing loop:
fps.start()
For each processed frame, update the FPS counter:
fps.update()
When your main processing loop exits, stop the FPS timer and capture the approximate FPS:
fps.stop()
print(fps.get_elapsed_seconds())
print(fps.compute_fps())
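Taken together, the calls above bracket the main processing loop. A sketch, assuming a video stream and Streamer set up as in the earlier examples:
fps = edgeiq.FPS()
fps.start()

while True:
    frame = video_stream.read()
    # Per-frame processing goes here
    streamer.send_data(frame, ["Running..."])
    fps.update()
    if streamer.check_exit():
        break

fps.stop()
print("Elapsed seconds: {:.2f}".format(fps.get_elapsed_seconds()))
print("Approximate FPS: {:.2f}".format(fps.compute_fps()))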
You can also get an estimate of the instantaneous FPS in your main processing
loop by calling compute_fps() without calling stop(). (Note that this adds
processing to your loop, which may not be desirable when high performance is
crucial.)
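For example, you might log the running estimate periodically inside the loop (the every-50-frames cadence is only an illustration):
frame_count = 0
while True:
    ...
    fps.update()
    frame_count += 1
    if frame_count % 50 == 0:
        # Running estimate; fps.stop() has not been called yet
        print("Current FPS: {:.2f}".format(fps.compute_fps()))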
The frames per second are largely determined by two things: the inference time, described above, and any post-processing you perform on the results. If the FPS closely matches the inverse of the inference time, then post-processing is an insignificant part of the total time per frame. However, if the FPS is much lower than the inverse of the inference time, then post-processing is taking up a significant share of each frame. Check whether any portions of your post-processing can be made more efficient.
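As a quick worked example of that check (the numbers are illustrative):
inference_time = 0.05            # seconds per forward pass, e.g. from the results object
measured_fps = 12.0              # reported by fps.compute_fps()

inference_only_fps = 1.0 / inference_time      # 20 FPS if inference were the only cost
frame_time = 1.0 / measured_fps                # ~0.083 s actually spent per frame
post_processing = frame_time - inference_time  # ~0.033 s per frame outside inference
Here the measured 12 FPS is well below the 20 FPS ceiling implied by inference alone, so roughly 33 ms of every frame is going to post-processing and is worth examining.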