Noise is usually introduced during image acquisition and transmission; it degrades image quality and affects subsequent processing. Therefore, preprocessing such as image filtering and image enhancement must be performed on the image. To improve image quality, noise removal usually means filtering the image in a way that suppresses the noise while preserving image detail.
FPGA image processing methods
1. Image enhancement
Two major classes of methods: spatial-domain methods and frequency-domain (transform-domain) methods (more on this later)
2. Image filtering
(1) Smoothing spatial filter
(2) Median filter algorithm
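As a concrete illustration of the median filter, below is a minimal VHDL sketch of the classic 3x3 median network (sort each row, then combine the row results). The entity name median3x3, the 8-bit pixel width and the assumption that the nine window pixels are already supplied by line buffers are illustrative choices, not a specific reference design.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- 3x3 median filter core: the nine pixels of the current window are assumed
-- to be supplied externally (e.g. by two line buffers and shift registers)
entity median3x3 is
  port (
    clk           : in  std_logic;
    p11, p12, p13 : in  unsigned(7 downto 0);  -- top row of the window
    p21, p22, p23 : in  unsigned(7 downto 0);  -- middle row
    p31, p32, p33 : in  unsigned(7 downto 0);  -- bottom row
    median        : out unsigned(7 downto 0)
  );
end entity median3x3;

architecture rtl of median3x3 is
  function min3(a, b, c : unsigned(7 downto 0)) return unsigned is
  begin
    if a <= b then
      if a <= c then return a; else return c; end if;
    else
      if b <= c then return b; else return c; end if;
    end if;
  end function;

  function max3(a, b, c : unsigned(7 downto 0)) return unsigned is
  begin
    if a >= b then
      if a >= c then return a; else return c; end if;
    else
      if b >= c then return b; else return c; end if;
    end if;
  end function;

  function med3(a, b, c : unsigned(7 downto 0)) return unsigned is
  begin
    if (a >= b and a <= c) or (a <= b and a >= c) then
      return a;
    elsif (b >= a and b <= c) or (b <= a and b >= c) then
      return b;
    else
      return c;
    end if;
  end function;
begin
  process(clk)
    variable max_of_mins, med_of_meds, min_of_maxs : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      -- classic 3-stage median network: sort each row, then combine the results
      max_of_mins := max3(min3(p11, p12, p13), min3(p21, p22, p23), min3(p31, p32, p33));
      med_of_meds := med3(med3(p11, p12, p13), med3(p21, p22, p23), med3(p31, p32, p33));
      min_of_maxs := min3(max3(p11, p12, p13), max3(p21, p22, p23), max3(p31, p32, p33));
      median      <= med3(max_of_mins, med_of_meds, min_of_maxs);
    end if;
  end process;
end architecture rtl;

The same window interface can be reused for the smoothing (mean) filter by replacing the median network with an adder tree followed by a divide-by-9 (or a divide-by-8 approximation).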
3. Image edge detection
An edge is the part of an image where the local intensity changes most sharply. Edges mainly occur between objects, between objects and the background, and between regions of different intensity or color. Edge detection is the basis of image analysis tasks such as image segmentation and the extraction of texture and shape features.
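As an example of how such intensity changes can be detected, the following VHDL sketch applies the Sobel operator to a 3x3 window and marks a pixel as an edge when |Gx| + |Gy| exceeds a threshold. The entity name sobel3x3, the 8-bit pixel width and the threshold value are illustrative assumptions; the window pixels are assumed to come from line buffers as in the median filter sketch above.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sobel edge detector for one 3x3 window
entity sobel3x3 is
  generic (
    EDGE_THRESH : natural := 128  -- illustrative threshold on |Gx| + |Gy|
  );
  port (
    clk           : in  std_logic;
    p11, p12, p13 : in  unsigned(7 downto 0);
    p21, p22, p23 : in  unsigned(7 downto 0);
    p31, p32, p33 : in  unsigned(7 downto 0);
    edge          : out std_logic
  );
end entity sobel3x3;

architecture rtl of sobel3x3 is
begin
  process(clk)
    variable gx, gy : integer range -1024 to 1023;
    variable mag    : integer range 0 to 2047;
  begin
    if rising_edge(clk) then
      -- Gx = (right column) - (left column), middle row weighted by 2
      gx := (to_integer(p13) + 2 * to_integer(p23) + to_integer(p33))
          - (to_integer(p11) + 2 * to_integer(p21) + to_integer(p31));
      -- Gy = (bottom row) - (top row), middle column weighted by 2
      gy := (to_integer(p31) + 2 * to_integer(p32) + to_integer(p33))
          - (to_integer(p11) + 2 * to_integer(p12) + to_integer(p13));
      -- approximate gradient magnitude as |Gx| + |Gy|
      mag := abs(gx) + abs(gy);
      if mag > EDGE_THRESH then
        edge <= '1';
      else
        edge <= '0';
      end if;
    end if;
  end process;
end architecture rtl;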
4. Image target extraction algorithm
(1) Adjacent frame difference method
The difference between two adjacent frames is computed; the pixels where the difference is non-zero reveal the position and shape of the target, while the parts whose gray value does not change between the two frames, mainly the background and some small unchanged parts of the target, are removed.
The approximate position of the moving target can be determined from the detected region, but the drawback of this method is that when the displacement of the object is small it is difficult to determine the direction of motion, and holes appear inside the target.
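A minimal VHDL sketch of the per-pixel computation is given below; the entity and signal names are illustrative, 8-bit grayscale pixels are assumed, and the previous-frame pixel is assumed to be read back from an external frame buffer.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- per-pixel absolute difference between the current frame and the previous frame
entity frame_diff is
  port (
    clk      : in  std_logic;
    pix_cur  : in  unsigned(7 downto 0);  -- pixel of frame k
    pix_prev : in  unsigned(7 downto 0);  -- co-located pixel of frame k-1
    diff     : out unsigned(7 downto 0)   -- |pix_cur - pix_prev|
  );
end entity frame_diff;

architecture rtl of frame_diff is
begin
  process(clk)
  begin
    if rising_edge(clk) then
      -- unsigned subtraction in larger-minus-smaller order gives the absolute value
      if pix_cur >= pix_prev then
        diff <= pix_cur - pix_prev;
      else
        diff <= pix_prev - pix_cur;
      end if;
    end if;
  end process;
end architecture rtl;

Pixels where diff is non-zero (in practice, above a small threshold to tolerate noise) form the changed region described above.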
(2) Optical flow method
(3) Background frame difference method
This method selects one image as the background image and takes the difference between each collected image and the background. When the background image is chosen properly, the target object can be segmented quite accurately. The method is fast, easy to implement, and provides complete motion-region information.
Specific schematic diagram:
As shown in the schematic above, the background image and the current image are first differenced to obtain the background-difference image: the brightness component is read from memory to obtain a grayscale image, which is median filtered and then passed to the image detection module, where the corresponding pixels of the two images are subtracted and the absolute value is taken. Histogram statistics are then used to determine the binarization threshold of the image (the threshold is generally set to the average of the G component). Finally, the image is binarized and the contour of the target is extracted.
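A minimal sketch of how such a threshold could be produced in hardware is shown below, under the simplifying assumptions that the threshold is the mean pixel value of one frame and that the frame holds a power-of-two number of pixels (512 x 512 is assumed here), so the division reduces to taking the top bits of the accumulator; a true histogram-based method would instead maintain 256 bin counters. The entity and signal names are illustrative.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- accumulates the pixel values of one frame and outputs their mean as the
-- binarization threshold for the next frame; 512 x 512 = 2**18 pixels assumed
entity frame_mean_thresh is
  port (
    clk       : in  std_logic;
    frame_end : in  std_logic;             -- pulses once at the end of each frame
    pix_valid : in  std_logic;
    pix       : in  unsigned(7 downto 0);
    threshold : out unsigned(7 downto 0)
  );
end entity frame_mean_thresh;

architecture rtl of frame_mean_thresh is
  signal acc : unsigned(25 downto 0) := (others => '0');  -- 8 data bits + 18 count bits
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if frame_end = '1' then
        -- mean = sum / 2**18, i.e. the top 8 bits of the accumulator
        threshold <= acc(25 downto 18);
        acc       <= (others => '0');
      elsif pix_valid = '1' then
        acc <= acc + pix;
      end if;
    end if;
  end process;
end architecture rtl;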
VHDL implementation of the background difference:
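One possible minimal form of such a module is sketched below; the entity and signal names, the 8-bit pixel width and the external threshold port (fed, for example, by the mean calculator above) are assumptions rather than a specific reference design.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- background difference and binarization: |current - background| compared
-- against a threshold, producing a 1-bit mask of the moving target
entity bg_diff is
  port (
    clk       : in  std_logic;
    pix_cur   : in  unsigned(7 downto 0);  -- median-filtered grayscale pixel
    pix_bg    : in  unsigned(7 downto 0);  -- co-located background pixel from SRAM
    threshold : in  unsigned(7 downto 0);  -- binarization threshold
    mask      : out std_logic              -- '1' = foreground / target
  );
end entity bg_diff;

architecture rtl of bg_diff is
begin
  process(clk)
    variable diff : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      -- absolute difference of the two pixels
      if pix_cur >= pix_bg then
        diff := pix_cur - pix_bg;
      else
        diff := pix_bg - pix_cur;
      end if;
      -- binarize against the threshold
      if diff > threshold then
        mask <= '1';
      else
        mask <= '0';
      end if;
    end if;
  end process;
end architecture rtl;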
5. Points to note
(1) The sampling clock of the video input device is usually different from the FPGA's crystal (system) clock, so the two sides form asynchronous clock domains. The collected image data can therefore be written into a FIFO first and then transferred to SRAM.
(2) Metastability can occur between different clock domains: when a signal crosses the boundary between two clock domains, it is driven by one clock and sampled by another. If the signal changes within the setup/hold window of the receiving clock, the sampled value can become unstable.
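For a single-bit control signal, the usual remedy is a two-stage synchronizer in the receiving clock domain; a minimal VHDL sketch is given below (the entity and signal names are illustrative).

library ieee;
use ieee.std_logic_1164.all;

-- two-flip-flop synchronizer: moves a single-bit signal from another clock
-- domain into the clk domain, giving a metastable first stage time to settle
entity sync_2ff is
  port (
    clk      : in  std_logic;  -- receiving clock domain
    async_in : in  std_logic;  -- signal generated in another clock domain
    sync_out : out std_logic
  );
end entity sync_2ff;

architecture rtl of sync_2ff is
  signal meta, stable : std_logic := '0';
begin
  process(clk)
  begin
    if rising_edge(clk) then
      meta   <= async_in;  -- first stage: may go metastable
      stable <= meta;      -- second stage: samples an almost certainly settled value
    end if;
  end process;
  sync_out <= stable;
end architecture rtl;

Multi-bit data such as pixels should instead cross the boundary through an asynchronous FIFO or a handshake, which is exactly why the video data is buffered in a FIFO in this design.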
Image data storage
The data collected from the camera first enters a FIFO buffer; once a full row of data has been written, the SRAM controller reads it into SRAM. Note that the collected video data is interlaced, i.e. the odd field is transmitted first and then the even field. For the convenience of subsequent image processing, the two fields must be combined into one complete frame.
Specific method: first store the odd-field data in SRAM in an interleaved manner, i.e. the first line of the odd field is stored in the first row of SRAM, the second line in the third row of SRAM, and so on, each line leaving one row of address space after the previous one, until all odd-field lines have been written. Then the even field is stored in the same way, with its first line placed in the second row of SRAM, and so on.
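A sketch of the corresponding SRAM write-address calculation is given below; the 640-pixel line width, the zero-based line and pixel counters, and the signal names are illustrative assumptions. With zero-based counting, odd-field line n is written to frame row 2n and even-field line n to frame row 2n + 1.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- SRAM write-address generator that interleaves the two fields of an
-- interlaced frame into consecutive rows of one progressive frame
entity deinterlace_addr is
  generic (
    LINE_WIDTH : natural := 640  -- pixels per line (illustrative)
  );
  port (
    clk        : in  std_logic;
    even_field : in  std_logic;             -- '0' = odd field, '1' = even field
    line_cnt   : in  unsigned(8 downto 0);  -- line number within the field, from 0
    pix_cnt    : in  unsigned(9 downto 0);  -- pixel number within the line, from 0
    wr_addr    : out unsigned(18 downto 0)  -- SRAM write address
  );
end entity deinterlace_addr;

architecture rtl of deinterlace_addr is
begin
  process(clk)
    variable frame_row : natural;
  begin
    if rising_edge(clk) then
      -- odd field  -> frame rows 0, 2, 4, ...
      -- even field -> frame rows 1, 3, 5, ...
      if even_field = '1' then
        frame_row := 2 * to_integer(line_cnt) + 1;
      else
        frame_row := 2 * to_integer(line_cnt);
      end if;
      wr_addr <= to_unsigned(frame_row * LINE_WIDTH + to_integer(pix_cnt), wr_addr'length);
    end if;
  end process;
end architecture rtl;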
An internal controller enables the SRAM controller according to the full and empty status of the asynchronous FIFO: when the FIFO's full flag is asserted and its empty flag is deasserted, the SRAM controller starts to read data from the FIFO.
The purpose of using the FIFO is to avoid metastability: because the acquisition clock differs from the FPGA clock, the interface is an asynchronous sequential circuit, and after passing through the FIFO the data is re-timed to the unified system clock.
An asynchronous FIFO consists of three parts: write-address generation, read-address generation, and a dual-port RAM.
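A structural skeleton of such a FIFO is sketched below. The generics, port names and depth are illustrative; the pointers carry one extra bit and are exchanged between the two clock domains as Gray codes through two-stage synchronizers, following standard practice, and reset logic is omitted for brevity.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- asynchronous FIFO skeleton: dual-port RAM, write pointer in the write clock
-- domain, read pointer in the read clock domain, Gray-coded pointer exchange
entity async_fifo is
  generic (
    AW : natural := 10;  -- 2**AW words, e.g. roughly one video line (illustrative)
    DW : natural := 8
  );
  port (
    -- write side (pixel clock domain)
    wclk  : in  std_logic;
    wr_en : in  std_logic;
    wdata : in  std_logic_vector(DW - 1 downto 0);
    full  : out std_logic;
    -- read side (system clock domain)
    rclk  : in  std_logic;
    rd_en : in  std_logic;
    rdata : out std_logic_vector(DW - 1 downto 0);
    empty : out std_logic
  );
end entity async_fifo;

architecture rtl of async_fifo is
  type ram_t is array (0 to 2**AW - 1) of std_logic_vector(DW - 1 downto 0);
  signal ram : ram_t;

  -- pointers carry one extra bit to distinguish full from empty
  signal wbin, wgray : unsigned(AW downto 0) := (others => '0');
  signal rbin, rgray : unsigned(AW downto 0) := (others => '0');
  -- opposite-domain pointers after the two-stage synchronizers
  signal rgray_w1, rgray_w2 : unsigned(AW downto 0) := (others => '0');
  signal wgray_r1, wgray_r2 : unsigned(AW downto 0) := (others => '0');

  signal full_i, empty_i : std_logic;

  function bin2gray(b : unsigned) return unsigned is
  begin
    return b xor ('0' & b(b'high downto 1));
  end function;
begin
  -- write clock domain: write pointer, RAM write, read-pointer synchronizer
  process(wclk)
  begin
    if rising_edge(wclk) then
      if wr_en = '1' and full_i = '0' then
        ram(to_integer(wbin(AW - 1 downto 0))) <= wdata;
        wbin  <= wbin + 1;
        wgray <= bin2gray(wbin + 1);
      end if;
      rgray_w1 <= rgray;
      rgray_w2 <= rgray_w1;
    end if;
  end process;

  -- full: write Gray pointer equals the synchronized read Gray pointer with
  -- its two most significant bits inverted
  full_i <= '1' when wgray = ((not rgray_w2(AW downto AW - 1)) & rgray_w2(AW - 2 downto 0)) else '0';
  full   <= full_i;

  -- read clock domain: read pointer, RAM read, write-pointer synchronizer
  process(rclk)
  begin
    if rising_edge(rclk) then
      if rd_en = '1' and empty_i = '0' then
        rdata <= ram(to_integer(rbin(AW - 1 downto 0)));
        rbin  <= rbin + 1;
        rgray <= bin2gray(rbin + 1);
      end if;
      wgray_r1 <= wgray;
      wgray_r2 <= wgray_r1;
    end if;
  end process;

  -- empty: read Gray pointer equals the synchronized write Gray pointer
  empty_i <= '1' when rgray = wgray_r2 else '0';
  empty   <= empty_i;
end architecture rtl;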