Embedded systems, particularly when integrated into the Internet of Things (IoT) landscape, are critical for projects requiring robust, energy-efficient interfaces to collect real-time data from the environment. As these systems become more complex, the need for dynamic reconfiguration, improved availability, and stability becomes increasingly important. This paper presents the design of a framework architecture that supports dynamic reconfiguration and “on-the-fly” code execution in IoT-enabled embedded systems, including a virtual machine (VM) capable of hot reloads, ensuring system availability even during configuration updates. A “hardware-in-the-loop” workflow manages communication between the embedded components, while low-level coding constraints are made accessible through an additional abstraction layer, such as MicroPython or Lua. The study results demonstrate the VM’s ability to handle serialization and deserialization with minimal impact on system performance, even under high workloads, with a median serialization time of 160 microseconds and a median deserialization time of 964 microseconds. Both processes were fast and resource-efficient under normal conditions, supporting real-time updates with occasional outliers that suggest room for optimization, while also highlighting the advantages of VM-based firmware update methods, which outperform traditional approaches such as Serial and OTA (Over-the-Air) updates by achieving lower latency and greater consistency. Despite these promising results, challenges such as occasional deserialization time outliers and the need for optimization in memory management and network protocols remain for future work. This study also provides a comparative analysis of currently available commercial solutions, highlighting their strengths and weaknesses.
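As a minimal sketch of how such round-trip medians could be measured, the following Python snippet times serialization and deserialization of a small sensor payload; the payload shape and the stdlib `json` encoding are assumptions, not the paper’s actual VM wire format.

```python
# Hypothetical benchmark sketch: median serialization/deserialization
# latency, analogous to the VM round-trip measurements reported above.
# Payload shape and encoding (stdlib json) are illustrative assumptions.
import json
import statistics
import time

payload = {"sensor_id": 7, "readings": list(range(64)), "ts": 1700000000}

ser_times, de_times = [], []
for _ in range(1000):
    t0 = time.perf_counter_ns()
    blob = json.dumps(payload).encode()      # serialize
    t1 = time.perf_counter_ns()
    json.loads(blob.decode())                # deserialize
    t2 = time.perf_counter_ns()
    ser_times.append((t1 - t0) / 1000)       # ns -> microseconds
    de_times.append((t2 - t1) / 1000)

print(f"serialization median:   {statistics.median(ser_times):.1f} us")
print(f"deserialization median: {statistics.median(de_times):.1f} us")
```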
Efficient and sustainable electrical grids are crucial for energy management in modern society and industry. Governments recognize this and prioritize energy management in their plans, alongside significant progress made in theory and practice over the years. The complexity of power systems determines the unique nature of power communication networks, and most research has focused on the dynamic nature of voltage stability, which has led to the need for dynamic models of power systems. Control strategies based on stability assessments have become essential for managing grid stability, diverging from traditional methods and often leveraging advanced computational techniques based on deep learning algorithms and neural networks. In this way, researchers can develop predictive models capable of forecasting voltage stability and detecting potential instability events in real time, while neural networks can also optimize control strategies based on wide-area information and grid response, enabling more effective stability control measures, as well as detecting and classifying disturbances or faults in the grid. This paper explores the use of predictive models to assess smart grid stability, examining the benefits and risks and comparing results to determine the most effective approach.
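A minimal sketch of the kind of neural-network stability classifier discussed above, assuming a tabular dataset of grid measurements with a binary stable/unstable label; the synthetic features, label rule, and network architecture are illustrative assumptions, not the paper’s actual models.

```python
# Sketch of a stable/unstable grid classifier on tabular measurements.
# Features, label, and architecture are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))                # e.g. reaction times, power, elasticity
y = (X[:, :4].sum(axis=1) < 0).astype(int)     # synthetic stable/unstable label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```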
Deep learning techniques in computer vision (CV) tasks such as object detection, classification, and tracking can be facilitated using predefined markers on those objects. The choice of markers can potentially affect the performance of the algorithms used for tracking, as the algorithm might swap similar markers more frequently and, therefore, require more training data and training time. Still, the issue of marker selection has not been explored in the literature and seems to be glossed over throughout the process of designing CV solutions. This research considered the effects of symbol selection for 2D-printed markers on a neural network’s performance. The study assessed over 250 ALT code symbols readily available on most consumer PCs and provides a go-to selection for effectively tracking n objects. To this end, a neural network was trained to classify all the symbols and their augmentations, after which the confusion matrix was analysed to extract the symbols that the network distinguished best. The results showed that symbols selected in this way performed better than both a random selection and a selection of common symbols. Furthermore, the methodology presented in this paper can easily be applied to a different set of symbols and different neural network architectures.
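The core selection step can be sketched as follows: given a trained classifier’s predictions over all candidate symbols, build the confusion matrix and greedily pick the n symbols with the least mutual confusion. This is a hedged illustration of the idea, not the paper’s exact procedure; `y_true` and `y_pred` are assumed to come from a trained network evaluated on held-out augmented symbols.

```python
# Greedy symbol selection from a confusion matrix: repeatedly pick the
# candidate least confused with the symbols already chosen.
import numpy as np
from sklearn.metrics import confusion_matrix

def least_confused_symbols(y_true, y_pred, labels, n):
    cm = confusion_matrix(y_true, y_pred, labels=range(len(labels)))
    mutual = cm + cm.T                       # symmetric confusion counts
    np.fill_diagonal(mutual, 0)              # ignore correct classifications
    chosen, remaining = [], list(range(len(labels)))
    while len(chosen) < n and remaining:
        # score against the chosen set, or overall when nothing is chosen yet
        cols = chosen if chosen else remaining
        best = min(remaining, key=lambda i: mutual[i, cols].sum())
        chosen.append(best)
        remaining.remove(best)
    return [labels[i] for i in chosen]
```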
The number of loan requests is rapidly growing worldwide, representing a multi-billion-dollar business in the credit approval industry. Large data volumes extracted from banking transactions that represent customers’ behavior are available, but processing loan applications is a complex and time-consuming task for banking institutions. In 2022, over 20 million Americans had open loans, totaling USD 178 billion in debt, although over 20% of loan applications were rejected. Numerous statistical methods have been deployed to estimate loan risks, opening the question of whether machine learning techniques can better predict the potential risks. To study the machine learning paradigm in this sector, a mental health dataset and a loan approval dataset presenting survey results from 1991 individuals are used as inputs to experiment with the credit risk prediction ability of the chosen machine learning algorithms. Providing a comprehensive comparative analysis, this paper shows how the chosen machine learning algorithms can distinguish between normal and risky loan customers who might never pay their debts back. The results from the tested algorithms show that XGBoost achieves the highest accuracy, 84%, on the first dataset, surpassing gradient boost (83%) and KNN (83%). On the second dataset, random forest achieved the highest accuracy, 85%, followed by decision tree and KNN with 83%. Alongside accuracy, the precision, recall, and overall performance of the algorithms were tested, and a confusion matrix analysis was performed, producing numerical results that emphasized the superior performance of XGBoost and random forest in the classification tasks on the first dataset, and of XGBoost and decision tree on the second dataset. Researchers and practitioners can rely on these findings to inform their model selection process and enhance the accuracy and precision of their classification models.
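A compact sketch of this kind of comparison, using the same classifier families on a synthetic tabular dataset; the generated features stand in for the actual survey and loan attributes, which are not reproduced here, and `xgboost`/`scikit-learn` are assumed to be available.

```python
# Illustrative accuracy comparison of the classifiers discussed above
# on synthetic tabular data standing in for the credit datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1991, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "Random forest": RandomForestClassifier(random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:13s} accuracy: {model.score(X_te, y_te):.3f}")
```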
Air pollution is a major problem in developing countries and around the world, causing lung diseases such as asthma, chronic bronchitis, emphysema, and chronic obstructive pulmonary disease. Therefore, innovative methods and systems for predicting air pollution are needed to reduce such risks. Some Internet of Things (IoT) technologies have been developed to assess and monitor various air quality parameters. In the context of the IoT, artificial intelligence is one of the main segments of smart cities that enables collecting a large amount of data to make recommendations, predict future events, and help make decisions. Big data, as part of artificial intelligence, greatly contributes to making further decisions, determining the necessary resources, and identifying critical places thanks to the large amount of data it collects. This paper proposes a solution, with the integration of the IoT, to predict pollution for any given day, and aims to show how sensor-derived data in smart air pollution monitoring solutions can be used for intelligent pollution management. A dataset is created through an ETL process in a Jupyter notebook by collecting data from an air pollution sensor that sends the data to the server via a .NET 6 REST API endpoint and stores it in a SQL Server database, together with additional weather data collected from a REST API for that part of the day. Linear regression algorithms are used to make predictions. By detecting the largest sources of air pollution, artificial intelligence solutions can proactively reduce pollution and thus improve health conditions and reduce health costs.
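A minimal sketch of the prediction step, assuming the ETL stage has already produced a table of weather features alongside a pollution target; the file name and column names below are hypothetical placeholders.

```python
# Linear regression on the ETL output: weather features predicting a
# pollution measure. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("air_quality_dataset.csv")   # assumed output of the ETL process
features = ["temperature", "humidity", "wind_speed", "pressure"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["pm25"], test_size=0.2, random_state=0
)
model = LinearRegression().fit(X_tr, y_tr)
print(f"R^2 on held-out days: {model.score(X_te, y_te):.3f}")
```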
The design process of an IoT (Internet of Things) network requires adequate knowledge of the various communication technologies that make the connection of IoT modules possible. Many important factors such as scalability, bandwidth, data rate (speed), coverage, power consumption, and security support need to be considered to answer the needs of an IoT application with regard to the implemented radio communication technology. This paper studies three major LPWAN (Low-Power Wide-Area Network) technologies that are currently leading the market of radio communication technologies. Focusing on Sigfox, LoRaWAN (Long-Range Wide-Area Network), and NB-IoT (Narrow-Band Internet of Things), this work presents the respective pros and cons of the mentioned technologies and gives a clear view of the recent trends and effective choices of radio communication technologies for major smart IoT applications.
With the global transition to IPv6 (Internet Protocol version 6), IP (Internet Protocol) validation efficiency and IPv6 support from the aspect of network programming are gaining importance. As global computer networks grow in the era of the IoT (Internet of Things), IP address validation is an inevitable process for assuring strong network privacy and security. The complexity of IP validation has increased due to the rather drastic change in the memory architecture needed for storing IPv6 addresses. Low-level programming languages like C/C++ are a great choice for handling memory spaces and working with simple devices connected in an IoT network. This paper analyzes some user-defined and open-source implementations of IP validation code in the Boost.Asio and POCO C++ networking libraries, as well as the IP security support provided for general networking purposes and the IoT. Considering several code samples, the paper concludes whether these C++ implementations answer the needs for flexibility and security of the upcoming era of IPv6-addressed computers.
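While the paper evaluates C++ libraries, the underlying validation task generalizes; as a hedged stand-in, the sketch below illustrates the same concept with Python’s standard `ipaddress` module, which accepts both IPv4 and IPv6 textual forms and rejects malformed input.

```python
# Conceptual IP validation: classify a string as IPv4, IPv6, or invalid.
# Stands in for the C++ (Boost.Asio / POCO) implementations the paper studies.
import ipaddress

def classify_ip(text: str) -> str:
    try:
        addr = ipaddress.ip_address(text)
    except ValueError:
        return "invalid"
    return f"IPv{addr.version}"

for candidate in ["192.168.0.1", "2001:db8::8a2e:370:7334", "999.1.1.1"]:
    print(candidate, "->", classify_ip(candidate))
```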
With the emerging Internet of Things (IoT) technologies, the smart city paradigm has become a reality. Wireless low-power wide-area communication technologies (LPWAN) are widely used for device connection in smart homes, smart lighting, smart metering, and so on. This work suggests a new approach to a smart parking solution using the benefits of narrowband Internet of Things (NB-IoT) technology, an LPWAN technology dedicated to sensor communication within 5G mobile networks. This paper proposes the integration of NB-IoT into the core IoT platform, enabling sensor data to travel directly to the IoT radio stations for processing, after which it is forwarded to the user application programming interface (API). Showcasing the results of our research and experiments, this work demonstrates the ability of NB-IoT technology to support geolocation and navigation services, as well as payment and reservation services for vehicle parking, making smart parking solutions smarter.
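A hypothetical sketch of the sensor-to-API path: a parking sensor’s occupancy reading, with geolocation for the navigation services mentioned above, posted to a platform endpoint. The URL, payload schema, and field names are all assumptions for illustration only.

```python
# Illustrative parking-event payload forwarded to a placeholder platform API.
import json
import urllib.request

event = {
    "sensor_id": "P-042",                    # hypothetical sensor identifier
    "occupied": True,
    "location": {"lat": 44.7866, "lon": 20.4489},
    "ts": "2024-01-01T08:30:00Z",
}
req = urllib.request.Request(
    "https://iot-platform.example/api/parking/events",   # placeholder URL
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```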
Distributed Ledger Technologies are one of the pillars of future technologies, projected to have a great impact on many aspects of our lives, including social, economic, legal, security, and many others. Bitcoin is still the most popular blockchain currency, but the opportunities to use Distributed Ledger Technologies are much wider, extending beyond the financial applications that remain the best known and most popular. Besides blockchains, there are also other architectures of Distributed Ledger Technologies. This paper observes and analyses hashgraphs, a very strong alternative to blockchains that promises to outperform them, as well as tangles. The basis of their architecture and functionality is explained, and directions and prognoses for further development are given. The main contribution of the paper is a comparison of hashgraph technology to its competing architectures, i.e., blockchains and tangles, considering different segments and different properties that define the quality of Distributed Ledgers.
Connected IoT devices, as well as the smartwatch market, are becoming more popular every year. The main mode of communication in the IoT is the easy-to-use MQTT protocol, suitable for devices with limited resources and battery power. Tizen is used on platforms such as mobile devices, smartwatches, TVs, and even Linux kernel-based IoT devices. In this paper, we explain how the MQTT protocol and the Tizen operating system and its architecture work, and suggest one possible implementation of the MQTT protocol for smartwatches based on the Tizen operating system. We list the types of Tizen applications, develop a native application, and suggest possible future upgrades and applications in the IoT.
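The paper targets a Tizen native application; as an assumption-laden stand-in, the sketch below shows the same MQTT publish/subscribe pattern using the `paho-mqtt` Python client (1.x callback style). The broker address and topic are placeholders.

```python
# MQTT publish/subscribe sketch (paho-mqtt 1.x callback API); broker and
# topic names are placeholders, not the paper's Tizen implementation.
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.org", "watch/heartrate"

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe(TOPIC)                  # receive our own published messages

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.publish(TOPIC, "72 bpm")              # lightweight sensor reading
client.loop_forever()
```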