MODEL OF AN ALGORITHM FOR STREAMING LABELING OF WIDE-FORMAT IMAGES
Keywords: labeling, segmentation, connected areas, FPGA, stream processing

Abstract
This article proposes a wide-format image processing algorithm for real-time systems that operate on a high-speed video data stream. Image preprocessing, clustering, segmentation, and labeling are of particular importance for systems that process high-resolution video streams in real time. In addition, implementations of such algorithms must minimize the consumption of computational resources on field-programmable gate arrays (FPGAs), on which streaming image processing algorithms are directly deployed. Minimal resource consumption is achieved by single-pass labeling algorithms, which eliminate the need for image buffering; this is especially important when processing high-resolution wide-format images. However, a single pass of an image through the processing system may create many provisional labels that must later be merged, particularly in high-resolution images, and these extra labels raise the number of memory cells required on the FPGA. The streaming algorithm described in this article labels high-resolution wide-format video images while reducing the likelihood of creating provisional labels that require subsequent merging. Its improvement over the standard single-pass algorithm is the addition of extra elements to the scanning mask, which prevent different labels from being assigned to the same object; this avoids label duplication and excessive memory use at the cost of only a minimal increase in the amount of FPGA memory. The algorithm was simulated for FPGA implementation using the Xilinx System Generator for DSP tool together with the MATLAB Simulink environment for model-based design (MBD). Results are presented on images acquired from a high-speed TELEDYNE DALSA LA-CC-04K05B-00-R line-scan camera via the Integre Technologies LLC FMC-200-A mezzanine card and a Xilinx ZYNQ UltraScale+ MPSoC ZCU106 development board.
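For reference, the baseline technique the abstract builds on can be sketched in software. The following is a minimal single-pass connected-component labeling routine with a causal scanning mask and a union-find equivalence table; it illustrates how provisional labels arise and why they consume memory. The article's extended scanning mask is not reproduced here, since its exact shape is not specified in the abstract, so this sketch shows only the standard mask (W, NW, N, NE) it improves upon. All names are illustrative, not taken from the article.

```python
def label_single_pass(img):
    """Single-pass 8-connectivity labeling of a 2-D list of 0/1 pixels.

    Returns a 2-D list of component labels (0 = background). Provisional
    labels created during the raster scan are merged through a union-find
    table, mimicking the equivalence memory a hardware labeler needs.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]          # union-find table; index 0 is background
    next_label = 1

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            # Standard causal mask: W neighbor plus NW, N, NE from the row above.
            neigh = []
            if x > 0 and labels[y][x - 1]:
                neigh.append(labels[y][x - 1])
            if y > 0:
                for dx in (-1, 0, 1):
                    nx = x + dx
                    if 0 <= nx < w and labels[y - 1][nx]:
                        neigh.append(labels[y - 1][nx])
            if not neigh:
                parent.append(next_label)      # new provisional label
                labels[y][x] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)
                labels[y][x] = m
                for n in neigh:                # record label equivalences
                    union(m, n)

    # Final sweep only flattens the equivalence table; the image itself
    # is still scanned once, which is what keeps the pipeline single-pass.
    return [[find(v) for v in row] for row in labels]
```

A U-shaped object demonstrates the problem the article targets: its two vertical strokes first receive distinct provisional labels, which are merged only when the scan reaches the connecting row; a wider scanning mask can detect such connectivity earlier and avoid allocating the second label at all.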








