Collocational competence, the ability to use grammatical and lexical collocations accurately, is a crucial aspect of language proficiency, closely linked to natural and fluent language use. Despite its importance, non-native speakers often struggle with collocations, particularly in productive tasks such as writing. This study examines the frequency, types, and errors of collocations among B2-level English language students at the University of Zenica, as defined by the Common European Framework of Reference (2001). A corpus of 150 student essays (76,319 words) was compiled. Collocations were extracted, classified, and analysed based on Benson et al. (2010). The results indicate that lexical collocations (3.3%) were more frequent than grammatical collocations (2.68%), confirming the first hypothesis. However, grammatical collocations exhibited a higher error rate (6.53%) compared to lexical collocations (5.15%), supporting the second hypothesis. Error analysis revealed that negative L1 transfer was the main cause of grammatical collocation errors, while synonymy and analogy contributed significantly to lexical errors. The findings also indicated that students tend to rely on familiar collocations, showing limited experimentation with less common structures. The study has pedagogical implications, suggesting that contrastive analysis, exposure to authentic materials, and creative writing activities could enhance students’ collocational competence. Addressing L1 interference and verb-preposition collocations through targeted instruction could further improve accuracy. These insights contribute to a deeper understanding of collocational competence in EFL learning, offering practical strategies for improving teaching methods and student writing skills.
Purpose – Money laundering is one of the most widespread phenomena in the financial world, seriously threatening the integrity of the financial system and posing a significant risk to a country’s economic development, as well as to its geopolitical and infrastructural progress. In recent years, Bosnia and Herzegovina (B&H) has frequently appeared in studies, articles, and media publications as a country where this phenomenon is increasingly widespread, to the point that it is now referred to as a “paradise” for money laundering. This research focuses on Bosnia and Herzegovina’s financial and business sectors, analyzing their role in the money laundering process and attempting to shed light on some of the most common methods associated with this phenomenon in Bosnia and Herzegovina. Methodology/Research Approach – The research was conducted using both qualitative and quantitative methods. A detailed analysis of secondary sources of information was carried out, along with the collection of primary data on the topic and a review of previously published works and relevant literature. Limitations/Implications – The topic of this research is relatively unexplored and receives little attention in the existing literature, which makes gathering the necessary data challenging. The unavailability of key information may limit the depth of the analysis and the accuracy of its conclusions. Given the limited data sources, the research draws on information available from approximately the last 10 years, which may affect the scope and validity of the findings. Practical Implications – This research contributes to a better understanding of the money laundering phenomenon, with a particular focus on the role of the business and financial sectors in Bosnia and Herzegovina.
The research results can help in developing more effective strategies to combat money laundering, thereby reducing the harmful economic and social consequences that this phenomenon brings. Practical recommendations may include improvements in legal provisions and the strengthening of oversight and control in the business and financial sectors. Originality – This research provides an original perspective on money laundering in the context of Bosnia and Herzegovina’s business and financial sectors and encourages further discussion and deeper investigation. Previous studies can mostly be characterized as reviews, whereas this paper brings together all relevant macroeconomic variables and variables of interest, offering deeper insight into a previously unexplored area.
Scientific applications increasingly demand real-time surrogate models that can capture the behavior of strongly coupled multiphysics systems driven by multiple input functions, such as in thermo-mechanical and electro-thermal processes. While neural operator frameworks, such as Deep Operator Networks (DeepONets), have shown considerable success in single-physics settings, their extension to multiphysics problems remains poorly understood. In particular, the challenge of learning nonlinear interactions between tightly coupled physical fields has received little systematic attention. This study addresses a foundational question: should the architectural design of a neural operator reflect the strength of physical coupling it aims to model? To answer this, we present the first comprehensive, architecture-aware evaluation of DeepONet variants across three regimes: single-physics, weakly coupled, and strongly coupled multiphysics systems. We consider a reaction-diffusion equation with dual spatial inputs, a nonlinear thermo-electrical problem with bidirectional coupling through temperature-dependent conductivity, and a viscoplastic thermo-mechanical model of steel solidification governed by transient phase-driven interactions. Two operator-learning frameworks, the classical DeepONet and its sequential GRU-based extension, S-DeepONet, are benchmarked using both single-branch and multi-branch (MIONet-style) architectures. Our results demonstrate that architectural alignment with physical coupling is crucial: single-branch networks significantly outperform multi-branch counterparts in strongly coupled settings, whereas multi-branch encodings offer advantages for decoupled or single-physics problems. Once trained, these surrogates achieve full-field predictions up to 1.8e4 times faster than high-fidelity finite-element solvers, without compromising solution accuracy.
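The branch-trunk construction contrasted above can be made concrete with a toy sketch. The NumPy snippet below is an illustrative mock-up with random untrained weights and invented layer sizes, not the benchmarked models: a single-branch DeepONet contracts branch and trunk codes with a dot product, while a MIONet-style multi-branch variant encodes each input function separately and merges the codes elementwise before the contraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP parameters (illustrative stand-in for trained nets)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

p = 16                     # latent dimension shared by branch and trunk
branch = mlp([50, 32, p])  # encodes an input function sampled at 50 sensors
trunk  = mlp([1, 32, p])   # encodes a spatial query point

u = rng.standard_normal(50)          # one input function (sensor values)
y = np.linspace(0, 1, 100)[:, None]  # 100 spatial query points

# Classical (single-branch) DeepONet: dot product of branch and trunk codes.
G_u = forward(trunk, y) @ forward(branch, u)   # shape (100,)

# MIONet-style multi-branch: one branch per input function, merged by an
# elementwise product before contracting with the trunk.
branch2 = mlp([50, 32, p])
u2 = rng.standard_normal(50)
code = forward(branch, u) * forward(branch2, u2)
G_uu = forward(trunk, y) @ code                # shape (100,)
```

In the single-branch case, multiple input functions would be concatenated into one branch input; the multi-branch variant instead factorizes them, which is the architectural choice the study evaluates against coupling strength.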
Partial differential equations (PDEs) are fundamental to modeling complex and nonlinear physical phenomena, but their numerical solution often requires significant computational resources, particularly when a large number of forward full-solution evaluations are necessary, such as in design, optimization, sensitivity analysis, and uncertainty quantification. Recent progress in operator learning has enabled surrogate models that efficiently predict full PDE solution fields; however, these models often struggle with accuracy and robustness when faced with highly nonlinear responses driven by sequential input functions. To address these challenges, we propose the Sequential Neural Operator Transformer (S-NOT), an architecture that combines gated recurrent units (GRUs) with the self-attention mechanism of transformers to address time-dependent, nonlinear PDEs. Unlike S-DeepONet (S-DON), which uses a dot product to merge encoded outputs from the branch and trunk sub-networks, S-NOT leverages attention to better capture intricate dependencies between sequential inputs and spatial query points. We benchmark S-NOT on three challenging datasets from real-world applications with highly nonlinear plastic and thermo-viscoplastic material responses: multiphysics steel solidification, a 3D lug specimen, and a dogbone specimen under temporal and path-dependent loadings. The results show that S-NOT consistently achieves higher prediction accuracy than S-DON, even for data outliers, demonstrating its accuracy and robustness for drastically accelerating computational frameworks in scientific and engineering applications.
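The abstract’s key architectural contrast, a dot-product merge (S-DON) versus an attention merge (S-NOT), can be sketched in miniature. The NumPy toy below is our illustrative reading, not the authors’ implementation: a hand-rolled GRU encodes a sequential load history into per-step hidden states, and spatial query points then attend over those states with scaled dot-product attention, so each query point aggregates the whole history rather than a single pooled code. All sizes and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update (biases omitted for brevity)."""
    z = 1 / (1 + np.exp(-(x @ Wz + h @ Uz)))   # update gate
    r = 1 / (1 + np.exp(-(x @ Wr + h @ Ur)))   # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

d = 8   # hidden size
W = [rng.standard_normal((1, d)) * 0.5 for _ in range(3)]
U = [rng.standard_normal((d, d)) * 0.5 for _ in range(3)]

# Encode a sequential input (e.g. a load history) into per-step hidden states.
load = np.sin(np.linspace(0, 3, 20))[:, None]   # 20 time steps, 1 feature
h, states = np.zeros(d), []
for x_t in load:
    h = gru_step(h, x_t, W[0], U[0], W[1], U[1], W[2], U[2])
    states.append(h)
K = V = np.stack(states)                         # (20, d) keys and values

# Spatial query points play the role of attention queries.
Wq = rng.standard_normal((1, d)) * 0.5
Q = np.linspace(0, 1, 50)[:, None] @ Wq          # (50, d)

# Scaled dot-product attention: each query point aggregates the load history
# with its own learned weighting, instead of a single global dot product.
scores = Q @ K.T / np.sqrt(d)                    # (50, 20)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
field = (A @ V) @ rng.standard_normal(d)         # (50,) predicted field values
```

The dot-product merge of S-DON corresponds to collapsing `K` to its final state before the contraction; the attention merge keeps the full temporal resolution visible to every spatial point.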
The future circular $e^+ e^-$ collider (FCC-ee) stands out as the next flagship project in particle physics, dedicated to uncovering the microscopic origin of the Higgs boson. In this context, we assess indirect probes of the Minimal Supersymmetric Standard Model (MSSM), a well-established benchmark hypothesis, exploring the complementarity between Higgs measurements and electroweak precision tests at the $Z$-pole. We study three key sectors: the heavy Higgs doublet, scalar top partners, and light gauginos and higgsinos, focusing on the parameter space favored by naturalness. Remarkably, the Tera-$Z$ program consistently offers significantly greater indirect sensitivity than the Mega-$h$ run. While promising, these prospects hinge on reducing SM uncertainties. Accordingly, we highlight key precision observables for targeted theoretical work.
This article focuses on the allocation of subnational aid from Central European donors and Serbia to Bosnia & Hercegovina between 2005 and 2020. Spatial and statistical analyses revealed different patterns of aid distribution among municipalities in Bosnia & Hercegovina. Two of the seven donors studied—Croatia and Serbia—showed a clear bias in favour of their ethnic minorities in Bosnia & Hercegovina. For other Central European donors there was a general tendency to provide less aid to municipalities with more Croats. The relationship between variables approximating recipients’ needs and Central European aid was weak or insignificant.
Analysis of conversions between compressional and shear waves is a workhorse method for constraining crustal and lithospheric structure on Earth; yet such converted waves have not been unequivocally identified in seismic data from the largest events on the Moon, owing to the highly scattered waveforms of shallow seismic events. We reanalyze the polarization attributes of waveforms recorded by the Apollo seismic network to identify signals with rectilinear particle motion below 1 Hz, arising from conversions across the crust‐mantle boundary. Delay times of these converted waves are inverted to estimate crustal thickness and wavespeeds beneath the seismometers. Combined with gravimetric modeling, these new crustal thickness tie‐points yield an updated lunar crustal model with an average thickness of 29–47 km. Unlike previous models, ours includes explicit uncertainty estimates, offering critical context for future lunar missions and geophysical studies; it predicts a crustal thickness of 15–36 km at the Schrödinger basin and 29–52 km at the Artemis III candidate sites.
Brewer’s spent grain (BSG), the most abundant by-product of breweries, is mainly discarded or used as animal feed. To increase brewing sustainability, however, biotechnological utilization of BSG is a far preferable solution. This study examined the fermentation of BSG, composed of old wheat bread and barley malt, by the metabolic activity of Saccharomyces cerevisiae on both hydrolyzed and non-hydrolyzed media. Enzymatic hydrolysis with Viscozyme® W FG for 6 h was selected as the most effective treatment and was used in the subsequent research step to prepare the hydrolyzed BSG-based medium. Both media supported nearly uniform yeast growth (about 8 log10 CFU/g of S. cerevisiae cells) in an acidic environment (pH about 5), but fermentation of hydrolyzed BSG resulted in 20% higher sugar consumption and 10% higher total titratable acidity. These findings underscore the potential of enzymatic pretreatment to improve fermentation performance. The adaptability of S. cerevisiae and the fermentability of both substrates suggest promising potential for scalable BSG valorization strategies in circular food systems.
We introduce ManifoldMind, a probabilistic geometric recommender system for exploratory reasoning over semantic hierarchies in hyperbolic space. Unlike prior methods with fixed curvature and rigid embeddings, ManifoldMind represents users, items, and tags as adaptive-curvature probabilistic spheres, enabling personalised uncertainty modeling and geometry-aware semantic exploration. A curvature-aware semantic kernel supports soft, multi-hop inference, allowing the model to explore diverse conceptual paths instead of overfitting to shallow or direct interactions. Experiments on four public benchmarks show superior NDCG, calibration, and diversity compared to strong baselines. ManifoldMind produces explicit reasoning traces, enabling transparent, trustworthy, and exploration-driven recommendations in sparse or abstract domains.
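For readers unfamiliar with curved embeddings, the standard closed-form geodesic distance on a Poincaré ball of curvature -c illustrates why an adaptive curvature parameter matters. This is a generic textbook formula, not ManifoldMind’s actual kernel: a larger c bends the space more strongly, so points near the boundary grow far apart, which is the property hierarchical embeddings exploit.

```python
import numpy as np

def poincare_dist(x, y, c=1.0):
    """Geodesic distance in the Poincare ball of curvature -c (standard closed form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num = 2 * c * np.sum((x - y) ** 2)
    den = (1 - c * np.sum(x ** 2)) * (1 - c * np.sum(y ** 2))
    return np.arccosh(1 + num / den) / np.sqrt(c)

origin = np.zeros(2)
near = np.array([0.1, 0.0])   # near the center: almost Euclidean behavior
far = np.array([0.9, 0.0])    # near the unit-ball boundary: distances blow up

print(poincare_dist(origin, near))  # ≈ 0.20, close to the Euclidean 0.1 scale
print(poincare_dist(origin, far))   # ≈ 2.94, far larger than the Euclidean 0.9
```

Tuning `c` per entity, as the abstract describes, effectively gives each user or item its own trade-off between flat (Euclidean-like) and strongly hierarchical geometry.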
Quality is an integral part of our daily life. People constantly insist on quality in certain areas of life, which indicates that quality can be found in every segment in which a person works. The main objective of this study is to examine client/user satisfaction with the services of spa centers. The basic research methods used are synthesis, analysis, induction and deduction, and comparative and statistical methods. Primary data were collected through an online survey containing a standardized scale (SERVQUAL). The correlation analysis addresses the general objective: the Pearson coefficient is -0.158, indicating a very weak negative correlation between the two variables. It is therefore concluded that sociodemographic factors have little influence on respondents' attitudes about the quality of spa-resort services. However, the Pearson coefficient indicates a high degree of correlation (81%) between respondents' satisfaction with service quality in spa resorts and other factors that affect it: the first contact at the spa, the reason for visiting, the distance from home to the spa, travel time, and the manner in which therapy is introduced. Key words: quality, safety of services, spa resorts, Bosnia and Herzegovina
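For reference, the Pearson coefficient quoted above is simply covariance normalised by the product of the two standard deviations. The sketch below recomputes it on invented Likert-style data, not the study’s survey responses, to show how a weak association is read off the value.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance over the product of standard deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical 1-5 responses (not the study's data): age group vs satisfaction.
age_group    = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
satisfaction = [4, 5, 3, 4, 5, 3, 4, 4, 3, 4]

r = pearson_r(age_group, satisfaction)
# |r| < 0.3 is conventionally read as a weak association, as with the
# study's reported r = -0.158.
```

The study’s conclusion follows the same reading: an r of -0.158 is weak, so sociodemographic variables explain little of the variation in reported satisfaction.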
The paper aims to identify and analyze effective strategies for managing autistic behavior and learning barriers. A qualitative analysis of the relevant scientific and professional literature published in the last decade was carried out, and after screening, 41 papers were included in the thematic analysis. Strategies are divided into six categories: Behavioral interventions and behavior management, Education of children and youth with ASD and the empowerment of educators, Teaching social skills, Sensory integration therapies, Digital and assistive technologies, and Transition support. All included strategies are evidence-based practices (EBPs). The literature review confirms that there is no universal approach to working with children and youth with ASD. Still, successful intervention is based on applying a combination of strategies adapted to the individual needs of students, the educational environment, and developmental goals. Despite the multitude of strategies at a given setting's disposal, effective implementation of EBPs is often thwarted by system-, school-, and individual-level factors such as limited resources, insufficient training, and inconsistency across environments. By addressing these challenges comprehensively, through inclusive pedagogy, adaptive technology, and collaborative support systems, we can bridge the research-practice gap and provide rich, enabling learning experiences for students with autism spectrum disorders. Key words: autism, learning strategies, behavior management, learning barriers