
Vladan Stojnic

Czech Technical University in Prague


Research field: Computer vision

J. Filipi, Vladan Stojnić, M. Mustra, R. Gillanders, Vedran Jovanovic, Slavica Gajić, G. Turnbull, Z. Babic, N. Kezic et al.

Vladan Stojnić, V. Risojević, M. Mustra, Vedran Jovanovic, J. Filipi, N. Kezic, Z. Babic

Detection of small moving objects is an important research area with applications including monitoring of flying insects, studying their foraging behavior, using insect pollinators to monitor flowering and pollination of crops, surveillance of honeybee colonies, and tracking movement of honeybees. However, due to the lack of distinctive shape and textural details on small objects, direct application of modern object detection methods based on convolutional neural networks (CNNs) shows considerably lower performance. In this paper we propose a method for the detection of small moving objects in videos recorded using unmanned aerial vehicles equipped with standard video cameras. The main steps of the proposed method are video stabilization, background estimation and subtraction, frame segmentation using a CNN, and thresholding of the segmented frame. However, training a CNN requires a large labeled dataset. Manual labeling of small moving objects in videos is very difficult and time-consuming, and such labeled datasets do not exist at the moment. To circumvent this problem, we propose training the CNN on synthetic videos generated by adding small blob-like objects to video sequences with real-world backgrounds. The experimental results on detection of flying honeybees show that, by using a combination of classical computer vision techniques and CNNs, as well as synthetic training sets, the proposed approach overcomes the problems associated with direct application of CNNs to the given problem and achieves an average F1-score of 0.86 in tests on real-world videos.
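
A minimal sketch of the classical part of the pipeline described above (background estimation, subtraction, and thresholding), assuming grayscale frames; the CNN segmentation step is replaced by a hypothetical segment_frame placeholder, not the trained network from the paper.

```python
import cv2
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median as a simple background model."""
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

def segment_frame(diff):
    """Placeholder for the CNN that scores small blob-like objects."""
    return cv2.GaussianBlur(diff, (5, 5), 0)  # stand-in, not a CNN

def detect_small_objects(frames, thresh=30):
    """Background subtraction followed by segmentation and thresholding."""
    background = estimate_background(frames)
    masks = []
    for frame in frames:
        diff = cv2.absdiff(frame, background)          # background subtraction
        score = segment_frame(diff)                    # (CNN) segmentation
        _, mask = cv2.threshold(score, thresh, 255, cv2.THRESH_BINARY)
        masks.append(mask)                             # binary detection mask
    return masks
```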

M. Simić, Vladan Stojnić, V. Starčević, Z. Babic, J. Filipi

Although there are plenty of commercial systems dedicated to measuring environmental parameters, the need for a specific set of parameters of interest (temperature, relative humidity, atmospheric pressure, ultraviolet, infrared, and visible light levels, concentrations of micro-sized particles, magnetic flux density, and wind speed) led us to propose a low-cost multi-sensor platform designed for remote monitoring of environmental conditions in a bee yard. The proposed system is portable, battery powered, and equipped with a unit for transmitting data to a remote server. Initial testing showed that the system is able to process all data on time and to perform reliable acquisition.
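
A minimal sketch of the kind of acquisition-and-transmit loop such a platform could run; the read_sensors values and SERVER_URL endpoint are hypothetical placeholders, not the actual firmware or server API.

```python
import json
import time
import urllib.request

SERVER_URL = "http://example.org/api/measurements"  # hypothetical endpoint

def read_sensors():
    # In a real deployment these would query the attached sensor drivers.
    return {
        "temperature_c": 21.5,       # placeholder value
        "relative_humidity": 48.0,   # placeholder value
        "pressure_hpa": 1013.2,      # placeholder value
        "pm25_ugm3": 7.1,            # placeholder value
    }

def transmit(sample):
    """Send one JSON-encoded measurement to the remote server."""
    data = json.dumps(sample).encode("utf-8")
    req = urllib.request.Request(SERVER_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

def run(period_s=60):
    """Periodically acquire all parameters and push them to the server."""
    while True:
        sample = read_sensors()
        sample["timestamp"] = time.time()
        transmit(sample)
        time.sleep(period_s)
```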

A. Avramović, Davor Sluga, Domen Tabernik, D. Skočaj, Vladan Stojnić, Nejc Ilc

Recent trends in the development of autonomous vehicles focus on real-time processing of vast amounts of data from various sensors. The data can be acquired using multiple cameras, lidars, ultrasonic sensors, and radars to collect useful information about the state of the traffic and the surroundings. Significant computational power is required to process the data fast enough, and this is even more pronounced in vehicles that not only assist the driver but are capable of fully autonomous driving. This article proposes improving the speed and accuracy of traffic sign detection and recognition in high-definition images by focusing on different regions of interest in traffic images. These regions are determined with efficient and parallelized preprocessing of every traffic image, after which a convolutional neural network is applied for detection and recognition in parallel on graphics processing units. We employed different “You Only Look Once” (YOLO) architectures as baseline detectors, due to their speed, straightforward architecture, and high accuracy in general object detection tasks. Several preprocessing procedures are proposed to meet the real-time performance requirement. Our experiments on a large-scale traffic sign dataset show that we can achieve real-time detection in high-definition images with high recognition accuracy.
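
A minimal sketch of the region-of-interest idea: preprocess the high-definition frame to find candidate tiles, then run a detector only on those crops and map the boxes back to frame coordinates. The color-mask ROI heuristic and the detector callable are illustrative assumptions, not the procedures used in the article.

```python
import cv2
import numpy as np

def propose_rois(frame, tile=608, stride=512):
    """Slide a coarse grid over the frame and keep tiles containing
    enough red/blue pixels, a crude proxy for traffic-sign colors."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
    blue = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))
    mask = cv2.bitwise_or(red, blue)
    h, w = mask.shape
    rois = []
    for y in range(0, max(h - tile, 1), stride):
        for x in range(0, max(w - tile, 1), stride):
            if mask[y:y + tile, x:x + tile].mean() > 1.0:
                rois.append((x, y, tile, tile))
    return rois

def detect_in_rois(frame, detector, rois):
    """Run a detector (e.g. a YOLO model returning (x, y, w, h, score, cls)
    tuples per crop) on each ROI and shift boxes to frame coordinates."""
    detections = []
    for (x, y, w, h) in rois:
        crop = frame[y:y + h, x:x + w]
        for (bx, by, bw, bh, score, cls) in detector(crop):
            detections.append((bx + x, by + y, bw, bh, score, cls))
    return detections
```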

M. Simić, R. Gillanders, A. Avramović, Slavica Gajić, Vedran Jovanovic, Vladan Stojnić, V. Risojević, J. Glackin, G. Turnbull et al.

A. Avramović, Vedran Jovanovic, Ratko Pilipović, Vladan Stojnić, V. Risojević, Slavica Gajić, M. Simić, Igor Sevo, M. Mustra et al.

Studying the behavior of social insects using computer vision algorithms is an interesting topic for both the biological and signal processing communities. One of the most interesting aspects in the field is the tracking of honeybees. With computer vision methods, honeybee behavior has mostly been monitored inside and at the entrance of the hive. In this research we propose a method for automatic monitoring of honeybee activity outside the hive. Experiments showed that the activity of honeybees outside the hive can be estimated using ultra-high-definition video captured with a UAV from a distance of 10 meters. Specific spots where honeybees gather can be detected using heat maps that represent the density of their occurrence in the observed time interval.
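
A minimal sketch of the heat-map idea, assuming per-frame honeybee detections are already available as (x, y) pixel centers; the detection method itself is not shown here.

```python
import cv2
import numpy as np

def build_heat_map(detections_per_frame, frame_shape, sigma=15):
    """Accumulate detection centers over the observed time interval and
    smooth the counts into an occurrence-density map."""
    h, w = frame_shape
    counts = np.zeros((h, w), dtype=np.float32)
    for centers in detections_per_frame:
        for (x, y) in centers:
            if 0 <= int(y) < h and 0 <= int(x) < w:
                counts[int(y), int(x)] += 1.0
    heat = cv2.GaussianBlur(counts, (0, 0), sigma)  # spatial smoothing
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1] for visualization
    return heat
```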

Vladan Stojnić, V. Risojević

This paper investigates the importance of different parameters of the split-brain autoencoder for the performance of learned image representations in remote sensing scene classification. We investigate the use of the LAB color space as well as a color space created by applying PCA to RGB pixel values. We show that these two spaces give almost equal results, with a slight advantage for the LAB color space. We also investigate different choices of quantization method for the color targets and of the number of quantization bins. We found that using k-means clustering for quantization works slightly better than uniform quantization. We also show that even with a very small number of bins the results are only slightly worse.
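
A minimal sketch of the two quantization choices compared above, applied to the ab channels of LAB pixels; the bin counts and the use of scikit-learn's KMeans are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def uniform_quantize(ab, n_bins=16, lo=-128.0, hi=127.0):
    """Map each ab value into one of n_bins equally sized bins per axis."""
    edges = np.linspace(lo, hi, n_bins + 1)
    return np.stack(
        [np.clip(np.digitize(ab[..., c], edges) - 1, 0, n_bins - 1)
         for c in range(ab.shape[-1])],
        axis=-1,
    )

def kmeans_quantize(ab, n_bins=16):
    """Cluster ab values; each pixel's target is its cluster index."""
    pixels = ab.reshape(-1, ab.shape[-1])
    km = KMeans(n_clusters=n_bins, n_init=10, random_state=0)
    labels = km.fit_predict(pixels)
    return labels.reshape(ab.shape[:-1]), km.cluster_centers_
```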

Vladan Stojnić, V. Risojević

Self-supervised methods are interesting for remote sensing because few human-labeled datasets are available, while there is a practically unlimited amount of data that can be used for self-supervised learning. In this paper we analyze the use of split-brain autoencoders in the context of remote sensing image classification. We investigate the importance of training set size, choice of color space, and model size for classification accuracy. We show that even with a small amount of unlabeled training images, if we fine-tune the weights learned by the autoencoder, we can achieve nearly state-of-the-art results of 89.27% on the AID dataset.
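
A minimal PyTorch sketch of the fine-tuning idea: reuse encoder weights learned during self-supervised pretraining and train a classification head on the labeled scene dataset. The encoder architecture and checkpoint path are hypothetical placeholders, not the models used in the paper.

```python
import torch
import torch.nn as nn

# Placeholder encoder standing in for the pretrained autoencoder's encoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
# encoder.load_state_dict(torch.load("splitbrain_encoder.pt"))  # hypothetical checkpoint

# Classification head on top of the encoder (AID has 30 scene classes).
model = nn.Sequential(encoder, nn.Linear(128, 30))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One fine-tuning step: all weights, including the encoder, are updated."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```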

A. Avramović, Ratko Pilipović, Vladan Stojnić, Vedran Jovanovic, Igor Sevo, M. Simić, V. Risojević, Z. Babic

...
...
...
