The aviation industry operates as a complex, dynamic system generating vast volumes of data from aircraft sensors, flight schedules, and external sources. Managing this data is critical for mitigating disruptive and costly events such as mechanical failures and flight delays. This paper presents a comprehensive application of predictive analytics and machine learning to enhance aviation safety and operational efficiency. We address two core challenges: predictive maintenance of aircraft engines and forecasting flight delays. For maintenance, we utilise NASA’s C-MAPSS simulation dataset to develop and compare models, including one-dimensional convolutional neural networks (1D CNNs) and long short-term memory networks (LSTMs), for classifying engine health status and predicting the Remaining Useful Life (RUL), achieving classification accuracy of up to 97%. For operational efficiency, we analyse historical flight data to build regression models for predicting departure delays, identifying key contributing factors such as airline, origin airport, and scheduled time. Our methodology highlights the critical role of Exploratory Data Analysis (EDA), feature selection, and data preprocessing in managing high-volume, heterogeneous data sources. The results demonstrate the significant potential of integrating these predictive models into aviation Business Intelligence (BI) systems to transition from reactive to proactive decision-making. The study concludes by discussing the integration challenges within existing data architectures and the future potential of these approaches for optimising complex, networked transportation systems.
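As an illustrative sketch (not taken from the paper itself), the RUL target and health-status labels used in such C-MAPSS work are typically derived from each engine unit's run-to-failure length; the cap value and threshold below are common conventions, assumed here, not values reported by the authors:

```python
def rul_targets(num_cycles: int, cap: int = 125) -> list[int]:
    """Piece-wise linear RUL labels for one engine unit.

    The true RUL at cycle t is (num_cycles - t); capping it at `cap`
    reflects the common assumption that degradation is not observable
    early in an engine's life.
    """
    return [min(num_cycles - t, cap) for t in range(1, num_cycles + 1)]

def health_label(rul: int, threshold: int = 30) -> int:
    """Binary health status: 1 = near failure, 0 = healthy."""
    return 1 if rul <= threshold else 0

labels = rul_targets(200)  # early cycles capped at 125, final cycle has RUL 0
```

A classifier such as a 1D CNN or LSTM would then be trained on sliding windows of sensor readings against these labels.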
The exponential growth of user-generated video content necessitates efficient summarization systems for improved accessibility, retrieval, and analysis. This study presents and benchmarks a multimodal video summarization framework that classifies segments as informative or non-informative using audio, visual, and fused features. Sixty hours of annotated video across ten diverse categories were analyzed. Audio features were extracted with pyAudioAnalysis, while visual features (colour histograms, optical flow, object detection, facial recognition) were derived using OpenCV. Six supervised classifiers—Naive Bayes, K-Nearest Neighbors, Logistic Regression, Decision Tree, Random Forest, and XGBoost—were evaluated, with hyperparameters optimized via grid search. Temporal coherence was enhanced using median filtering. Random Forest achieved the best performance, with 74% AUC on fused features and a 3% F1-score gain after post-processing. Spectral flux, grayscale histograms, and optical flow emerged as key discriminative features. The best model was deployed as a practical web service using TensorFlow and Flask, integrating informative segment detection with subtitle generation via beam search to ensure coherence and coverage. System-level evaluation demonstrated low latency and efficient resource utilization under load. Overall, the results confirm the strength of multimodal fusion and ensemble learning for video summarization and highlight their potential for real-world applications in surveillance, digital archiving, and online education.
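The temporal-coherence step described above (median filtering of per-segment predictions) can be sketched as follows; the window size of 5 is an assumption for illustration, not the paper's tuned value:

```python
from statistics import median

def median_filter(labels: list[int], window: int = 5) -> list[int]:
    """Smooth per-segment binary predictions with a sliding median.

    Isolated label flips (e.g. a single non-informative segment inside
    an informative run) are removed, improving temporal coherence.
    """
    half = window // 2
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        out.append(int(median(labels[lo:hi])))
    return out

smoothed = median_filter([1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
# the isolated 0 and the isolated 1 are both smoothed away
```

This kind of post-processing is what yields the reported F1-score gain: it trades a little boundary precision for far fewer spurious segment switches.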
Embedded systems, particularly when integrated into the Internet of Things (IoT) landscape, are critical for projects requiring robust, energy-efficient interfaces that collect real-time data from the environment. As these systems grow more complex, dynamic reconfiguration, improved availability, and stability become increasingly important. This paper presents the design of a framework architecture that supports dynamic reconfiguration and “on-the-fly” code execution in IoT-enabled embedded systems, including a virtual machine (VM) capable of hot reloads that keeps the system available even during configuration updates. A “hardware-in-the-loop” workflow manages communication between the embedded components, while low-level coding constraints are hidden behind an additional abstraction layer, for example MicroPython or Lua. The results demonstrate the VM’s ability to handle serialization and deserialization with minimal impact on system performance, even under high workloads: serialization had a median time of 160 microseconds and deserialization a median of 964 microseconds. Both processes were fast and resource-efficient under normal conditions and supported real-time updates, with occasional outliers suggesting room for optimization. The measurements also highlight the advantages of VM-based firmware update methods, which outperform traditional approaches such as Serial and OTA (Over-the-Air, i.e. updating or configuring firmware, software, or devices over a wireless connection) updates by achieving lower latency and greater consistency. Despite these promising results, challenges remain for future work, including the occasional deserialization time outliers and the need to optimize memory management and network protocols. The study also provides a comparative analysis of currently available commercial solutions, highlighting their strengths and weaknesses.
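A minimal sketch of how serialization round-trip latency of a small configuration frame can be measured, in the spirit of the reported microsecond medians; the frame layout and field names are hypothetical, and Python's `struct` stands in for whatever wire format the VM actually uses:

```python
import struct
import time
from statistics import median

# Hypothetical fixed-size config frame: message id (u16), sensor
# period in ms (u32), threshold (f32), flags byte -- little-endian.
FRAME = struct.Struct("<HIfB")

def serialize(msg_id: int, period_ms: int, threshold: float, flags: int) -> bytes:
    return FRAME.pack(msg_id, period_ms, threshold, flags)

def deserialize(buf: bytes) -> tuple:
    return FRAME.unpack(buf)

def median_us(fn, runs: int = 1000) -> float:
    """Median wall-clock time of fn() in microseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e6)
    return median(samples)

frame = serialize(7, 100, 0.75, 0b1)
ser_us = median_us(lambda: serialize(7, 100, 0.75, 0b1))
de_us = median_us(lambda: deserialize(frame))
```

Reporting the median rather than the mean, as the paper does, is the right call here: occasional GC or scheduling outliers would otherwise dominate the figure.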
In the era of exponentially expanding data, particularly driven by social media development, effective data management and query processing have become critical challenges in application development. Graph databases, such as Neo4j, JanusGraph, ArangoDB, and OrientDB, offer significant advantages for applications requiring intensive processing of interconnected data, including social networks and recommendation systems. In this work, we focus on Neo4j as a representative of graph databases and MySQL as a representative of relational SQL databases for clarity and precision in data representation. We begin by introducing fundamental optimization techniques specific to each type of database. Subsequently, we concentrate on an experimental and investigative analysis of query performance on Neo4j and MySQL databases using original datasets and structures. The findings reveal that SQL databases outperform graph databases on simpler queries, whereas graph databases excel at handling complex structures with multiple relationships. Moreover, query composition becomes noticeably harder in scenarios requiring table joins (or node and relationship manipulation in graph databases). We also review related research in this area, which further demonstrates that effectively integrating graph and relational databases can lead to optimal data management solutions, and that using both types of databases may offer combined advantages depending on the application requirements.
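The relational-versus-graph trade-off above can be made concrete with a friends-of-friends query: each extra hop in SQL costs another self-join, while Cypher expresses the traversal as a path pattern. The sketch below is illustrative only (sqlite3 stands in for MySQL, and the schema is hypothetical):

```python
import sqlite3

# In-memory relational model of a tiny social graph.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friend (a INTEGER, b INTEGER);
    INSERT INTO person VALUES (1,'Ana'),(2,'Boris'),(3,'Carla'),(4,'Dino');
    INSERT INTO friend VALUES (1,2),(2,3),(3,4);
""")

# Friends-of-friends in SQL: one self-join per hop.
FOF_SQL = """
    SELECT DISTINCT p2.name
    FROM friend f1
    JOIN friend f2 ON f2.a = f1.b
    JOIN person p2 ON p2.id = f2.b
    WHERE f1.a = ? AND f2.b != ?
"""
fof = [row[0] for row in conn.execute(FOF_SQL, (1, 1))]

# The equivalent Cypher expresses the traversal directly as a pattern:
#   MATCH (p:Person {name:'Ana'})-[:FRIEND]->()-[:FRIEND]->(fof)
#   WHERE fof <> p RETURN DISTINCT fof.name
```

For a three- or four-hop variant the SQL join chain keeps growing, whereas Cypher only changes the path length, which is exactly where the graph database's advantage on relationship-heavy queries shows up.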
Efficient and sustainable electrical grids are crucial for energy management in modern society and industry. Governments recognize this and prioritize energy management in their plans, alongside significant progress made in theory and practice over the years. The complexity of power systems shapes the unique nature of power communication networks, and most research has focused on the dynamic nature of voltage stability, which has led to the need for dynamic models of power systems. Control strategies based on stability assessments have become essential for managing grid stability, diverging from traditional methods and often leveraging advanced computational techniques based on deep learning algorithms and neural networks. In this way, researchers can develop predictive models capable of forecasting voltage stability and detecting potential instability events in real time. Neural networks can also optimize control strategies based on wide-area information and grid response, enabling more effective stability control measures, as well as detecting and classifying disturbances or faults in the grid. This paper explores the use of predictive models to assess smart grid stability, examining their benefits and risks and comparing results to determine the most effective approach.
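At its simplest, stability prediction of this kind is binary classification over grid measurements. The toy sketch below is purely illustrative and not the paper's method: synthetic "node reaction times" replace real grid data, the labelling rule is invented, and a hand-rolled logistic regression stands in for the deep models discussed above:

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-in for grid measurements: four node reaction times;
# the (hypothetical) ground truth calls the grid unstable when their
# sum exceeds a threshold -- a deliberately simple, separable rule.
def make_sample():
    x = [random.uniform(0.5, 10.0) for _ in range(4)]
    y = 1 if sum(x) > 22.0 else 0  # 1 = unstable
    return x, y

data = [make_sample() for _ in range(2000)]

# Logistic regression trained with plain stochastic gradient descent.
w, b, lr = [0.0] * 4, 0.0, 0.01
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x: list) -> bool:
    """True if the grid state is predicted unstable."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

acc = sum(predict(x) == bool(y) for x, y in data) / len(data)
```

A real study would replace both the data and the linear model with measured grid states and a trained neural network, but the prediction interface (state in, stability verdict out) is the same.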
Choosing the appropriate way to connect to a DBMS for a given need is a significant challenge in programmers' daily work, since a large number of factors can influence the choice. Although experience is usually an important element in selecting a suitable connection method, the choice can still vary from system to system and from situation to situation. For this reason, it is necessary to conduct appropriate analysis and research across the various factors that indicate whether a given way of connecting to a DBMS is good or bad. In this research, we analyse the two leading methods of interaction between Java Spring Boot applications and PostgreSQL databases: Spring JDBC and Spring Hibernate. The results of the analysis indicate that there are measurable differences in query execution speed in certain situations, which Java programmers should pay special attention to when choosing between the two technologies to implement more complex functionalities.
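The measurement pattern behind such a comparison can be sketched generically: run each query many times and report the median. The example below is only a stand-in for the paper's setup (Python and sqlite3 instead of Java/Spring and PostgreSQL, hypothetical table and queries), but the harness shape is the same:

```python
import sqlite3
import time
from statistics import median

# sqlite3 replaces PostgreSQL here so the sketch is self-contained;
# the measurement pattern is what matters, not the engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10_000)])

def bench(query: str, params=(), runs: int = 50) -> float:
    """Median wall-clock time of a query in milliseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        conn.execute(query, params).fetchall()
        samples.append((time.perf_counter() - t0) * 1000)
    return median(samples)

point_ms = bench("SELECT name FROM users WHERE id = ?", (42,))
scan_ms = bench("SELECT COUNT(*) FROM users WHERE name LIKE ?", ("user1%",))
```

In the JDBC-versus-Hibernate setting, the two arms of the comparison would be the same logical query issued once through `JdbcTemplate` and once through a Hibernate-mapped entity, with the ORM's mapping and session overhead showing up in the timings.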
Predictive modelling and AI have become a ubiquitous part of many modern industries, offering promising opportunities for more accurate analysis, better decision-making, reduced risk, and improved profitability. One of the most promising applications of these technologies is in the financial sector, where they can support fraud detection, credit risk assessment, creditworthiness evaluation, and payment analysis. By using machine learning algorithms to analyse large datasets, financial institutions can identify patterns and anomalies that may indicate fraudulent activity, allowing them to act in real time and minimize losses. This paper explores the application of predictive models for assessing customer creditworthiness, identifies the benefits and risks involved in this approach, and compares their results to provide insight into which model performs best in the given context.
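The core of a creditworthiness model can be illustrated with a logistic scoring function; the feature names and weights below are entirely hypothetical, chosen for readability rather than fitted to any real credit data:

```python
import math

# Illustrative logistic scorecard; weights are hypothetical and NOT
# fitted to real data -- a trained model would learn them instead.
WEIGHTS = {"debt_to_income": -4.0, "on_time_ratio": 3.0, "years_history": 0.15}
BIAS = -1.0

def creditworthiness_score(applicant: dict) -> float:
    """Estimated probability (0..1) that the applicant is creditworthy."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(applicant: dict, threshold: float = 0.5) -> str:
    return "approve" if creditworthiness_score(applicant) >= threshold else "review"

good = {"debt_to_income": 0.1, "on_time_ratio": 0.98, "years_history": 10}
risky = {"debt_to_income": 0.9, "on_time_ratio": 0.40, "years_history": 1}
```

The models compared in such studies (logistic regression, tree ensembles, neural networks) all ultimately produce a score like this one; the comparison is over how well each learns the mapping from applicant features to that probability.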