Abstract
Background subtraction and foreground extraction are the typical methods used to detect moving objects in video sequences. To flexibly represent both the long-term state of a scene and its short-term changes, a new weighted Kernel Density Estimation (KDE) is proposed to build long-term background (LTB) and short-term foreground (STF) models, respectively. A novel mechanism is proposed to support the interaction between the LTB and STF models, comprising weight transmission and background-foreground fusion. In the weight transmission process, the sample weights of one model (either the background or the foreground model) at the current time step are updated under the guidance of the other model's decision at the previous time step. In the background-foreground fusion stage, a unified Bayesian framework is proposed to detect objects, and the detection result at each time step is given by the logarithm of the posterior ratio between the background and foreground models. The proposed interactive approach improves the robustness of moving object detection and prevents deadlock and degeneration in the models. Experimental results demonstrate that the proposed approach outperforms previous methods.
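The following is a minimal sketch of the weighted-KDE and log-posterior-ratio idea summarised above, assuming a Gaussian kernel, per-sample weights, and a fixed background prior. Function names, parameters, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: weighted KDE per model and a log posterior ratio
# decision, as described in the abstract. Kernel choice, bandwidth, prior and
# threshold are assumptions made for this example.
import numpy as np

def weighted_kde(x, samples, weights, bandwidth=0.05):
    """Weighted kernel density estimate of value x (Gaussian kernel assumed)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                  # normalise sample weights
    diffs = (x - np.asarray(samples, dtype=float)) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return float(np.dot(weights, kernels))

def classify_pixel(x, bg_samples, bg_weights, fg_samples, fg_weights,
                   prior_bg=0.7, eps=1e-12, threshold=0.0):
    """Label a pixel by the log posterior ratio between background and foreground."""
    p_bg = weighted_kde(x, bg_samples, bg_weights) * prior_bg
    p_fg = weighted_kde(x, fg_samples, fg_weights) * (1.0 - prior_bg)
    log_ratio = np.log(p_bg + eps) - np.log(p_fg + eps)
    return "background" if log_ratio > threshold else "foreground"

# Toy usage: long-term background samples cluster near 0.4, short-term
# foreground samples near 0.9; a pixel value of 0.85 is labelled foreground.
bg = [0.39, 0.41, 0.40, 0.42]; bg_w = [1.0, 0.9, 1.1, 1.0]
fg = [0.88, 0.91, 0.90];       fg_w = [1.0, 1.0, 0.8]
print(classify_pixel(0.85, bg, bg_w, fg, fg_w))
```

In the paper's interactive scheme, the sample weights passed to each model would themselves be updated using the other model's decision from the previous time step; here they are simply fixed for brevity.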
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 65-81 |
| Number of pages | 17 |
| Journal | Information Sciences |
| Volume | 483 |
| DOIs | |
| Publication status | Published - May 2019 |
| Externally published | Yes |
Keywords
- Background-foreground interaction
- Dynamic scene
- Moving object detection
- Weighted kernel density estimation
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Theoretical Computer Science
- Computer Science Applications
- Information Systems and Management
- Artificial Intelligence