Scientific Instruments Australia
Research Fields: Social engineering, Image processing
High-throughput plant phenotyping using RGB imaging offers a scalable and non-invasive solution for monitoring plant growth and extracting various traits. However, achieving accurate segmentation across experiments remains challenging due to image variability, usually caused by shifts in pot positions. This study introduces a customized image stabilization method to align pots consistently across time-series images of Arabidopsis thaliana, enhancing spatial consistency. A large-scale RGB dataset was collected and prepared, with 4,000 manually annotated images used to train multiple encoder–decoder deep learning models. Various CNN-based encoders were paired with well-known decoders, including U-Net, U²-Net, PANet, and DeepLabv3. Stabilization significantly improved model performance, with the EffNetB1 + U²-Net encoder–decoder combination achieving the highest precision score of 0.95 and an Intersection over Union of 0.96. These results demonstrate the value of spatial consistency and offer a robust, scalable pipeline for automated plant segmentation in indoor phenotyping systems.
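The abstract does not specify how the pot alignment is computed; a minimal sketch of one standard translation-stabilization technique, phase correlation (an assumption here, not necessarily the paper's customized method), looks like this:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (row, col) translation that aligns `img` to `ref`
    via phase correlation: the normalized cross-power spectrum of two
    shifted images is a pure phase ramp, whose inverse FFT peaks at
    the corrective shift."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-9          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

def stabilise(img, shift):
    """Apply the estimated integer shift; np.roll's wrap-around is
    tolerable when pots sit away from the image borders."""
    return np.roll(img, shift, axis=(0, 1))
```

In a time-series setting, each frame would be aligned to the first frame (or a fixed template) before segmentation, so every pot occupies the same pixel region across the sequence.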
Image‐based plant phenotyping has diverse applications, ranging from providing quantitative traits for genetic breeding to enhancing management practices for indoor and outdoor production systems. Misidentification of cell lines or ecotypes/varieties is a major problem across all biological research disciplines. With the 1000 Arabidopsis Genome Project facilitating the use of various ecotypes, it is crucial to verify the identity of ecotypes in discovery‐based genetic screens involving hundreds of ecotypes. To address this issue, an RGB image analysis pipeline was established for the accurate recognition of different Arabidopsis thaliana ecotypes. In the developed pipeline, the most crucial aspects for accurately capturing traits and training deep learning models were identified as follows: (i) assessment of data complexity using spatial‐temporal features of the RGB spectrum and data entropy, the latter defined as the variability within the dataset; (ii) data redefinition in instances of high data complexity; and (iii) data partitioning based on extracted morphological similarity among ecotype replicates. The pipeline includes several supervised deep learning models integrated into an auto‐optimization subsystem. Extensive hyperparameter tuning was performed to identify the best‐performing models for single‐image and image‐sequence ecotype classification. Two external datasets were evaluated to demonstrate the robustness of the pipeline, regardless of how they were collected. A graphical user interface is provided to prepare these images for input into the pipeline in cases of extreme variability. The pipeline can automatically verify ecotypes in large‐scale studies and extract traits for further analysis and correlation, as needed, using datasets from a variety of sources.
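The abstract defines data entropy only as "the variability within the dataset"; one common concrete proxy is the Shannon entropy of an image's intensity histogram. The sketch below illustrates that measure under this assumption (the pipeline's exact definition may differ):

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale histogram.
    A flat, featureless image scores 0; an image whose intensities
    are spread uniformly over all 256 levels scores 8."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0·log(0) is defined as 0
    return float(-(p * np.log2(p)).sum())
```

Averaging this score over a dataset (or tracking it across time points) gives a single number that can trigger the "data redefinition" step when complexity is high.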
Image-based high-throughput plant phenotyping utilises various imaging techniques to automatically and non-invasively understand the growth of different plant species. These innovative imaging infrastructures are implemented to monitor plant development over time in indoor or outdoor environments. However, understanding the relationship between genotype and phenotype interactions under different environments remains challenging. This research study demonstrates superior extraction of leaf morphological features of different Arabidopsis thaliana ecotypes by analysing leaf geometry using a sequence of RGB images. Upon successful extraction of anatomical features, leaf length and area are converted into physical coordinates. Furthermore, treating these leaf features as 1D signals, the Fourier spectrum is analysed, and the most descriptive features are selected using PCA. Finally, leaf shape classification is established by training and testing five distinct ML models. A thorough evaluation of the selected models demonstrates strong performance in classifying two common leaf shapes of Arabidopsis plants.
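The "leaf features as 1D signals" step can be illustrated with hypothetical Fourier shape descriptors: sample a margin measurement (e.g. radius versus angle around the leaf centroid) and keep the magnitudes of the leading harmonics. Magnitudes are invariant to where sampling starts along the contour, which is why they are a common choice; the paper's exact feature set and its PCA stage are not reproduced here.

```python
import numpy as np

def fourier_features(signal, k=8):
    """Return the normalized magnitudes of the first k non-DC Fourier
    coefficients of a 1D leaf-shape signal. Subtracting the mean
    removes the DC term; using magnitudes makes the descriptor
    invariant to circular shifts of the sampling start point."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    return spec[1:k + 1] / (np.linalg.norm(spec) + 1e-12)
```

A signal with three lobes (like a trilobed margin) concentrates its energy in the third harmonic, so descriptors of this kind separate shape classes before any classifier is trained.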
This paper presents a robust exploration of the capabilities of conditional Generative Adversarial Networks (GANs) in harnessing labeled data to produce high-quality labels for unlabeled samples. By leveraging conditional information, our approach guides the network to generate contextually relevant labels for specific time series data, accelerating the labeling process. A comprehensive evaluation of our model's performance, incorporating diverse metrics, visual representations, and histograms, illuminates the effectiveness of conditional GANs for the Assistive Label Generation (ALG) of time series Arabidopsis thaliana images. The Structural Similarity Index (SSIM) highlights an average similarity of 98.89% between the generated and manually labeled images. This innovative methodology holds the promise of significantly reducing labeling efforts.
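The SSIM score reported above is a standard metric; a simplified global variant (one statistics window over the whole image, whereas the standard formulation averages over local windows) can be sketched as:

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global SSIM between two images with dynamic range L.
    Combines luminance, contrast, and structure terms; returns 1.0
    for identical images. Illustrative simplification only: the
    paper most likely uses the usual local-window SSIM."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stability constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Averaging this score over all generated/manual label pairs yields the kind of dataset-level similarity figure (98.89%) quoted in the abstract.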
The precise detection of plant centres is important for growth monitoring, enabling the continuous tracking of plant development to discern the influence of diverse factors. It holds significance for automated systems like robotic harvesting, facilitating machines in locating and engaging with plants. In this paper, we explore the YOLOv4 (You Only Look Once) real-time neural network detector for plant centre detection. Our dataset, comprising over 12,000 images from 151 Arabidopsis thaliana accessions, is used to fine-tune the model. Evaluation of the dataset reveals the model's proficiency in centre detection across various accessions, boasting an mAP of 99.79% at a 50% IoU threshold. The model demonstrates real-time processing capabilities, achieving a frame rate of approximately 50 FPS. This outcome underscores its rapid and efficient analysis of video or image data, showcasing practical utility in time-sensitive applications.
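The 50% IoU threshold behind the reported mAP is worth making concrete: a predicted box counts as a true positive only when its Intersection over Union with a ground-truth box is at least 0.5. A minimal sketch of the box IoU computation (boxes as `(x1, y1, x2, y2)` corner tuples, a common but here assumed convention):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns 0.0 when the boxes do not overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

For example, two unit-offset 2×2 boxes overlap in a 1×1 region, giving IoU = 1/7 ≈ 0.14, which would not count as a detection at the 0.5 threshold.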
Abstract Jennings, J, Štaka, Z, Wundersitz, DW, Sullivan, CJ, Cousins, SD, Čustović, E, and Kingsley, MI. Position-specific running and technical demands during male elite-junior and elite-senior Australian rules football match-play. J Strength Cond Res 37(7): 1449–1455, 2023—The aim of this study was to compare position-specific running and technical demands of elite-junior and elite-senior Australian rules football match-play to better inform practice and assist transition between the levels. Global positioning system and technical involvement data were collated from 12 Victorian U18 male NAB League (n = 553) and 18 Australian Football League (n = 702) teams competing in their respective 2019 seasons. Players were grouped by position as nomadic, fixed, or ruck, and data subsets were used for specific analyses. Relative total distance (p = 0.635, trivial effect), high-speed running (HSR) distance (p = 0.433, trivial effect), acceleration efforts (p = 0.830, trivial effect), deceleration efforts (p = 0.983, trivial effect), and efforts at >150 m·min−1 (p = 0.229, trivial effect) and >200 m·min−1 (p = 0.962, trivial effect) did not differ between elite-junior and elite-senior match-play. Elite juniors covered less total and HSR distance during peak periods (5 seconds–10 minutes) of demand (p ≤ 0.022, small-moderate effects). Within both leagues, nomadic players had the greatest running demands followed by fixed position and then rucks. Relative disposals (p = 0.330, trivial effect) and possessions (p = 0.084, trivial effect) were comparable between the leagues. During peak periods (10 seconds to 2 minutes), elite juniors had less technical involvements than elite seniors (p ≤ 0.001, small effects). Although relative running demands and positional differences were comparable between the leagues, elite juniors perform less running, HSR, and technical involvements during peak periods when compared with elite seniors. 
Therefore, coaching staff in elite-senior clubs should maintain intensity while progressively increasing the volume of training that recently drafted players undertake when they have transitioned from elite-junior leagues.