In the digital era of e-commerce, effective content management is crucial for engaging and retaining online consumers. Traditional manual approaches to content creation often fall short in terms of speed, scalability, and adaptability. With over 26.5 million e-commerce stores worldwide, staying competitive requires leveraging all available tools. This research paper investigates the efficiency and effectiveness of AI-driven content generation compared to traditional methods, examining AI technologies for creating titles, subtitles, and SEO-optimized content against human content writers. In the study, five authors and an AI tool generated content for five products, and the time taken for content creation was measured and compared; the content writers worked without the aid of any tools, relying solely on the provided specifications. A group of 15 participants then evaluated the professional quality and clickability of the generated content, and Python was used to analyze the potential time savings for generating 100 titles and to assess the overall quality improvement. The results provide empirical evidence of the benefits of AI in content creation for e-commerce: AI significantly reduces the time required, with AI-generated titles produced 84.17% faster and AI-generated subtitles 77.31% faster than those created by the content writers. Additionally, 81.33% of participants preferred the AI-generated titles, while 88% favoured the AI-generated subtitles. These results underscore the potential of AI to enhance efficiency and effectiveness in e-commerce content management.
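As a rough illustration of the time-savings analysis described above, the following Python sketch extrapolates per-title creation times to a batch of 100 titles; the per-title timings are illustrative placeholders, not the paper's measured data.

```python
# Extrapolating per-title creation time to a batch of 100 titles.
# The two timings below are assumed values for illustration only.
ai_title_time = 0.5      # minutes per AI-generated title (assumed)
writer_title_time = 3.2  # minutes per writer-created title (assumed)

n_titles = 100
ai_total = ai_title_time * n_titles
writer_total = writer_title_time * n_titles
saving_pct = (writer_total - ai_total) / writer_total * 100

print(f"AI: {ai_total:.0f} min, writers: {writer_total:.0f} min, "
      f"time saved: {saving_pct:.2f}%")
```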
This paper focuses on the analysis of spam messages and on processing them with machine learning models. The research identifies the most important characteristics of spam messages, in the form of the most common patterns they use, which may assist in detecting such messages and in preventing the losses they can cause.
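A minimal sketch of the kind of pattern-based spam classification pipeline the abstract alludes to is shown below; the toy messages and the choice of a multinomial naive Bayes model are assumptions for illustration, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = spam, 0 = legitimate ("ham").
messages = [
    "WIN a FREE prize now, click here",
    "Lunch at noon tomorrow?",
    "URGENT: claim your reward today",
    "Can you send the report by Friday?",
]
labels = [1, 0, 1, 0]

# Bag-of-words features capture the frequent word patterns of spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["Free reward, click now"]))  # expected: [1]
```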
This study delves into the intersection of music and machine learning, examining the performance of five algorithms (Logistic Regression, Random Forest, Decision Tree, Support Vector Machine, and K-Nearest Neighbours) in sentiment analysis for music. The goal is to systematically evaluate their effectiveness in decoding and classifying the emotional content of musical compositions. The selected algorithms represent diverse computational approaches, contributing to the overarching objective of understanding the intricate emotional landscape of music. A crucial aspect of this comparative analysis is assessing the accuracy of the models both before and after applying feature selection techniques, a step that proves critical in enhancing their predictive capabilities. The observed accuracy ranges from 57% to 67%, revealing subtle yet noteworthy performance variations among the chosen algorithms.
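The comparison could be sketched in Python along the following lines; the synthetic dataset stands in for the music features, and SelectKBest with the ANOVA F-test is an assumed feature-selection method, since the abstract does not name one.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the music sentiment dataset.
X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=10, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "K-Nearest Neighbours": KNeighborsClassifier(),
}

# Cross-validated accuracy with feature selection (keep the 10 best).
for name, clf in models.items():
    pipe = make_pipeline(SelectKBest(f_classif, k=10), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%}")
```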
Changes in social conduct and in the dynamics of city living imposed during the COVID-19 pandemic triggered an increase in the demand for, availability of, and accessibility of open public spaces. This has raised questions about the relationship between open public spaces and disease transmission, as well as how planning and design strategies might be used to improve resilience in the face of future pandemics. Within this framework, this study focuses on object detection and human movement prediction in open public spaces, using the city of Sarajevo as a case study. Video recordings of parks and squares in the morning, afternoon, and evening are used to detect humans and predict their movements. The frame-differencing method proved best for detecting objects and their motion. Linear regression is applied to a dataset collected with the gate method, a space syntax observation technique. The best R² values, 0.97 and 0.61, are achieved for weekdays, for both parks and squares. The authors attribute this to the dynamics of space use and the frequency of space occupancy, which can be related to the physical conditions and activity content of the selected locations. The results of the study provide insight into the analysis and prediction of the direction and density of pedestrian movement, which could support decision-making directed towards more efficient and health-oriented urban planning.
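A minimal sketch of the frame-differencing step is given below, using OpenCV; the video filename, threshold, and minimum contour area are illustrative assumptions rather than the study's actual parameters.

```python
import cv2

cap = cv2.VideoCapture("park_morning.mp4")  # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that changed between consecutive frames indicate motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to plausibly be a person.
    moving = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"moving objects in frame: {len(moving)}")
    prev_gray = gray

cap.release()
```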
This paper demonstrates the application of business intelligence to decision-making in digital advertising through a case study. The data used for the analysis were collected during the test phase of an advertising platform. The study analyzes multiple types of traffic, broken down by country, browser, household income, and day of the week. Besides tabular reports, the paper shows how to visualize those results using Python libraries to make them more visually appealing. Furthermore, logistic regression was used to build models that detect relationships between the numbers of impressions and clicks. Finally, the authors propose multiple combinations of data that could be used to create different reports, leading to smarter decision-making and greater cost-effectiveness.
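A minimal sketch of the impressions-versus-clicks modelling is shown below, on synthetic data; the column names, value ranges, and click-probability curve are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({"impressions": rng.integers(1, 500, size=1000)})
# Synthetic ground truth: more impressions -> higher click probability.
p = 1 / (1 + np.exp(-(df["impressions"] - 250) / 80))
df["clicked"] = (rng.random(1000) < p).astype(int)

model = LogisticRegression()
model.fit(df[["impressions"]], df["clicked"])
new = pd.DataFrame({"impressions": [300]})
print("P(click | 300 impressions) =",
      round(model.predict_proba(new)[0, 1], 3))
```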
Security is one of the most pressing topics in the online world, and lists of security threats are constantly updated. One of those threats is phishing websites. In this work, we address the problem of phishing website classification. Three classifiers were used: K-Nearest Neighbor, Decision Tree, and Random Forest, together with feature selection methods from Weka. The achieved accuracy was 100%, and the number of features was reduced to seven. Moreover, reducing the number of features also reduced the time needed to build the models: the time for Random Forest dropped from the initial 2.88s and 3.05s, for the percentage split and 10-fold cross-validation respectively, to 0.02s and 0.16s.
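The paper's experiments were run in Weka; the scikit-learn sketch below mirrors the same select-seven-features-then-classify idea on synthetic data, so the dataset, the mutual-information selector, and the timings it prints are assumptions, not the reported results.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the phishing-website dataset.
X, y = make_classification(n_samples=2000, n_features=30,
                           n_informative=7, random_state=0)

for k in (30, 7):  # all features vs. the reduced set of seven
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=k),
                         RandomForestClassifier(random_state=0))
    start = time.perf_counter()
    acc = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"k={k}: accuracy {acc:.2%}, "
          f"time {time.perf_counter() - start:.2f}s")
```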
Due to the rapid advancement of online social networks in recent years, the prevalence of fake news has increased significantly. Fake news is deliberately created to deceive users by imitating real news, making it challenging to identify early on, so accompanying information, such as the publisher, needs to be explored to improve its detection. This study analyzes and investigates various traditional machine learning models to determine the most effective one. The goal is to develop a supervised machine learning algorithm that can classify news articles as either true or fake, utilizing tools like Python's scikit-learn and NLP for text analysis. The proposed approach involves feature extraction and vectorization, accomplished with the scikit-learn library, which offers helpful tools such as CountVectorizer and TfidfVectorizer. The experiment implemented well-known algorithms, Logistic Regression, neural networks, and SVM, and compared their performance to determine the most suitable one. Each of the three algorithms performed well, but SVM demonstrated superior outcomes across nearly all categories.
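A minimal sketch of the vectorize-then-classify approach is given below; the toy headlines are invented for illustration, and LinearSVC stands in for the SVM the paper evaluates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus: invented headlines, labelled true/fake for illustration.
articles = [
    "Government announces new budget for public schools",
    "Scientists confirm chocolate cures all known diseases",
    "Local council approves road maintenance plan",
    "Celebrity reveals moon landing was filmed in a mall",
]
labels = ["true", "fake", "true", "fake"]

# TF-IDF vectorization followed by a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(articles, labels)
print(model.predict(["New law promises free energy from thin air"]))
```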