
DOI: 10.14489/vkit.2017.03.pp.034-041

Sidyakin S. V., Vishnyakov B. V., Vizilter Yu. V., Roslov N. I.
CHANGE DETECTION IN THE SEQUENCES OF IMAGES IN COMPLEX SCENES
(pp. 34-41)

Annotation. An approach to change detection in video sequences under varying illumination is proposed, based on mutual comparative filters and background normalization. A definition of mutual comparative filters is given and their properties are described. An efficient algorithm for extracting basic changes is recommended; its main advantage is that it requires no preliminary segmentation of the image into regions of constant brightness. The algorithm does not rely on the pixel-brightness transformation laws that other methods operate with. The considered approach can be combined with almost any background modeling method that is not robust to illumination changes in order to improve its performance.

Keywords: change detection; illumination change; background modeling; morphological filtering; background normalization; video surveillance.
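
The annotation gives only a verbal description of the method, so the Python sketch below shows one possible reading of the background-normalization idea; it is not the authors' formulation. The cross-bilateral-style weighting guided by the background model, as well as the names comparative_filter, change_mask, win, sigma_v and thr, are assumptions introduced purely for illustration; grayscale frames are assumed.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available (used here only for border padding)

def comparative_filter(current, background, win=7, sigma_v=10.0):
    """Reconstruct `current` from the local structure of `background`.

    Hypothetical stand-in for a mutual comparative filter: each output pixel is a
    weighted average of `current` over a win x win window, where neighbours whose
    BACKGROUND value is close to the background value at the centre get larger
    weights. No segmentation into constant-brightness regions is required.
    Inputs are single-channel (grayscale) arrays of the same shape.
    """
    cur = current.astype(np.float32)
    bg = background.astype(np.float32)
    pad = win // 2
    cur_p = cv2.copyMakeBorder(cur, pad, pad, pad, pad, cv2.BORDER_REFLECT)
    bg_p = cv2.copyMakeBorder(bg, pad, pad, pad, pad, cv2.BORDER_REFLECT)
    h, w = cur.shape
    num = np.zeros_like(cur)
    den = np.zeros_like(cur)
    for dy in range(win):              # accumulate all shifted windows
        for dx in range(win):
            cur_s = cur_p[dy:dy + h, dx:dx + w]
            bg_s = bg_p[dy:dy + h, dx:dx + w]
            wgt = np.exp(-((bg_s - bg) ** 2) / (2.0 * sigma_v ** 2))
            num += wgt * cur_s
            den += wgt
    return num / den                   # den > 0: the centre pixel always has weight 1

def change_mask(current, background, win=7, thr=20.0):
    """Background-normalized difference: |filtered current - original current|."""
    filtered = comparative_filter(current, background, win)
    diff = np.abs(filtered - current.astype(np.float32))
    return (diff > thr).astype(np.uint8)   # 1 where a change is declared
```

Under this reading, a smooth illumination change leaves the current frame consistent with the background's local structure, so the reconstruction tracks the frame and the difference stays small; a new object breaks that consistency and produces a large difference.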

 


Abstract. Over the last decade, a large number of methods and approaches for change detection in video have been proposed. The vast majority of them have been developed for moving object detection in video surveillance systems and rely on background subtraction techniques to segment the scene into foreground and background. In practice, each method faces serious and sometimes unsolvable problems: high variation in shooting conditions, illumination changes of different duration, sensor noise, and the presence of dynamic objects in the background, for example swaying branches and bushes. We propose a new approach to change detection in image sequences under difficult conditions. The approach is based on mutual comparative filters and background normalization, i.e. we calculate the difference between the current image filtered by the accumulated background model and the original current image. Since the size of the detected changes depends on the size of the filtering window, an image pyramid followed by intelligent aggregation of the changes found across the pyramid levels is used to avoid fragmenting foreground objects into disconnected zones. Comparative filtering allows us to compare images by their shape without complex scene segmentation into regions of constant brightness; it makes the proposed method robust to changes in illumination conditions and is specific to each scene. Overall, the proposed algorithm is simple and efficient. It does not use predefined pixel-brightness transformation laws, which are often employed by other methods. The proposed approach can be combined with almost any background modeling method that is not robust to changing lighting conditions in order to improve the overall quality.
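
The abstract describes running the detector over an image pyramid and aggregating the changes found at different levels so that foreground objects are not fragmented. The sketch below shows one such aggregation scheme under stated assumptions: the detect callback stands for any single-scale detector (for instance the comparative-filter sketch above), and the pyrDown/resize/logical-OR merge is an illustrative choice, not the paper's aggregation rule.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available

def pyramid_change_mask(current, background, detect, levels=3):
    """Aggregate single-scale change masks over an image pyramid.

    `detect(cur, bg)` is any single-scale detector returning a 0/1 uint8 mask.
    Masks from coarser levels are upsampled to full resolution and merged with
    a logical OR, so that changes found only at coarse scales still contribute
    to the final full-resolution mask.
    """
    h, w = current.shape
    cur, bg = current, background
    merged = np.zeros((h, w), dtype=np.uint8)
    for lvl in range(levels):
        mask = detect(cur, bg)                                    # per-level detection
        mask_full = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
        merged = np.maximum(merged, mask_full)                    # OR across levels
        if lvl + 1 < levels:                                      # move to the next, coarser level
            cur = cv2.pyrDown(cur)
            bg = cv2.pyrDown(bg)
    return merged
```

For example, pyramid_change_mask(frame, bg_model, change_mask) would mark pixels flagged as changed at any of the three scales.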



S. V. Sidyakin, B. V. Vishnyakov, Yu. V. Vizilter, N. I. Roslov (State Research Institute of Aviation Systems, State Scientific Center of the Russian Federation, Moscow, Russia)


This article is available in electronic form (PDF).

The price of a single article is 350 rubles (including 18% VAT). Within a few days of placing an order, an invoice and a receipt for payment at a bank will be sent to the e-mail address you specify.

Once the payment has been credited to the publisher's account, the electronic version of the article will be sent to you by e-mail.

To order the article, copy its DOI:

10.14489/vkit.2017.03.pp.034-041

and fill out the FORM.

By submitting the form you consent to the processing of your personal data.

 

 

 