
Irdin Pekarić


Jonas Ave, Irdin Pekaric, M. Frohner, Giovanni Apruzzese

Toxicity and harassment are widespread in the video-gaming context. Especially in competitive online multiplayer scenarios, gamers oftentimes send harmful messages to other players (teammates or opponents) whose consequences span from mild annoyance to withdrawal and depression. Abundant prior work tackled these problems, e.g., pointing out the negative effects of toxic interactions. However, few works proposed countermeasures specifically developed and tested on textual messages sent during a match -- i.e., when the "harassment" actually occurs. We posit that such a scarcity stems from the lack of high-quality datasets that can be used to devise "automated" detectors based on natural-language processing (NLP) and machine learning (ML), and which can -- ideally -- mitigate the harm of toxic comments during a gaming session. This work provides a foundation for addressing the problem of toxicity and harassment in video games. First, through a systematic literature review (n=1,039), we provide evidence that only a few works proposed ML/NLP-based detectors of toxicity/harassment during live matches. Then, we partner up with 8 expert League of Legends (LoL) players and create a fine-grained labelled dataset, L2DTnH, containing 1.4k toxic and 13.8k non-toxic messages exchanged during LoL matches. We use L2DTnH to develop a detector that we then empirically show outperforms general-purpose and state-of-the-art toxicity detectors reliant on NLP. To further demonstrate the practicality of our resources, we test our detector on game-related data beyond that included in L2DTnH, and we develop a Web-browser extension that flags toxic content in webpages -- without querying third-party servers owned by AI companies. We publicly release all of our resources. Our contributions pave the way for more applied research devoted to fighting the spread of toxicity and harassment in video games.
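The abstract does not specify the detector's architecture, so the following is only a minimal sketch of how an NLP-based toxicity classifier could be trained on a dataset like L2DTnH. The file name l2dtnh.csv, the column names, and the TF-IDF plus logistic-regression pipeline are all assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch of an in-match toxicity detector trained on labeled chat
# messages. Assumptions (not from the paper): a CSV "l2dtnh.csv" with
# columns "message" and "label" (1 = toxic, 0 = non-toxic).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("l2dtnh.csv")  # hypothetical export of the dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["message"], df["label"],
    test_size=0.2, stratify=df["label"], random_state=0,
)

# Character n-grams are robust to the leetspeak/obfuscation common in game
# chat; class_weight compensates for the 1.4k-vs-13.8k label imbalance.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```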

Emir Karaosman, Advije Rizvani, Irdin Pekaric

Financial institutions face increasing cyber risk while operating under strict regulatory oversight. To manage this risk, they rely heavily on Cyber Threat Intelligence (CTI) to inform detection, response, and strategic security decisions. Artificial intelligence (AI) is widely suggested as a means to strengthen CTI. However, evidence of trustworthy production use in finance remains limited. Adoption depends not only on predictive performance, but also on governance, integration into security workflows, and analyst trust. Thus, we examine how AI is used for CTI in practice within financial institutions and what barriers prevent trustworthy deployment. We report a mixed-methods, user-centric study combining a CTI-finance-focused systematic literature review, semi-structured interviews, and an exploratory survey. Our review screened 330 publications (2019-2025) and retained 12 finance-relevant studies for analysis; we further conducted six interviews and collected 14 survey responses from banks and consultancies. Across research and practice, we identify four recurrent socio-technical failure modes that hinder trustworthy AI-driven CTI: (i) shadow use of public AI tools outside institutional controls, (ii) license-first enablement without operational integration, (iii) attacker-perception gaps that limit adversarial threat modeling, and (iv) missing security for the AI models themselves, including limited monitoring, robustness evaluation, and audit-ready evidence. Survey results provide additional insights: 71.4% of respondents expect AI to become central within five years, 57.1% report infrequent current use due to interpretability and assurance concerns, and 28.6% report direct encounters with adversarial risks. Based on these findings, we derive three security-oriented operational safeguards for AI-enabled CTI deployments.

Irdin Pekaric, Raffaela Groner, Alexander Raschke, Thomas Witte, Jubril Gbolahan Adigun, Michael Felderer, Matthias Tichy

In the rapidly evolving landscape of software engineering, the demand for robust and secure systems has become increasingly critical. This is especially true for self-adaptive systems due to their complexity and the dynamic environments in which they operate. To address this issue, we designed and developed the SAFT-GT toolchain, which tackles the multifaceted challenges associated with ensuring both safety and security. This paper provides a comprehensive description of the toolchain's architecture and functionalities, including the Attack-Fault Tree generation and model combination approaches. We emphasize the toolchain's ability to integrate seamlessly with existing systems, allowing for enhanced safety and security analyses without requiring extensive modifications or domain knowledge. Our proposed approach can address evolving security threats, including both known vulnerabilities and emerging attack vectors that could compromise the system. As a use case for the toolchain, we integrate it into the feedback loop of self-adaptive systems. Finally, to validate the practical applicability of the toolchain, we conducted an extensive user study involving domain experts, whose insights and feedback underscore the toolchain's relevance and usability in real-world scenarios. Our findings demonstrate the toolchain's effectiveness in real-world applications while highlighting areas for future improvements. The toolchain and associated resources are available in an open-source repository to promote reproducibility and encourage further research in this field.
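As a rough illustration of what an attack-fault tree combines (SAFT-GT's actual models are richer and generated automatically), here is a minimal sketch in which ordinary fault-tree gates accept both component faults and attacker actions as leaves. All node names and the drone scenario are hypothetical.

```python
# Illustrative attack-fault tree (AFT): fault-tree gates whose leaves may be
# either component faults or attacker actions. Node names and the evaluation
# rule below are hypothetical, not SAFT-GT's actual model.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "AND", "OR", or "LEAF"
    kind: str = "fault"           # for leaves: "fault" or "attack"
    children: list["Node"] = field(default_factory=list)

    def occurs(self, active: set[str]) -> bool:
        """Does this (sub)event occur, given the set of active leaf names?"""
        if self.gate == "LEAF":
            return self.name in active
        results = (c.occurs(active) for c in self.children)
        return all(results) if self.gate == "AND" else any(results)

# Top event: a drone crashes if its sensor is lost (hardware fault OR GPS
# spoofing attack) while the fallback controller is also faulty.
tree = Node("crash", "AND", children=[
    Node("sensor_lost", "OR", children=[
        Node("sensor_hw_fault", kind="fault"),
        Node("gps_spoofing", kind="attack"),
    ]),
    Node("fallback_fault", kind="fault"),
])

print(tree.occurs({"gps_spoofing", "fallback_fault"}))  # True: attack + fault combine
```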

M. Pfister, Giovanni Apruzzese, Irdin Pekaric

Many cyberattacks succeed because they exploit flaws at the human level. To address this problem, organizations rely on security awareness programs, which aim to make employees more resilient against social engineering. While some works have, implicitly or explicitly, suggested that such programs should account for contextual relevance, the common praxis in research is to adopt a "general" viewpoint. For instance, instead of focusing on department-specific issues, prior user studies sought to provide organization-wide conclusions by treating all participants equally. Such a protocol may lead to overlooking vulnerabilities that affect only specific subsets of an organization, and which can be (or are) exploited by real-world attackers. In this paper, we tackle such an oversight. First, through a systematic literature review encompassing over 1k papers, we provide factual evidence that prior literature poorly accounted for department-specific needs. Then, building on this (worrying) finding, we carry out a multi-company and mixed-methods study focusing on two pivotal departments of modern organizations: human resources (HR) and accounting. We explore three dimensions: what specific threats are faced by these departments; what topics should be covered in the security-awareness campaigns delivered to these departments; and which delivery methods would maximize the effectiveness of such campaigns for these departments. We begin by interviewing 16 employees of a multinational enterprise, and then use these results as a scaffold to design a structured survey through which we collect the responses of over 90 HR/accounting members of 9 organizations of varying size. We find that HR and accounting departments face distinct threats: HR is targeted through job applications containing malware and executive impersonation, while accounting is exposed to invoice fraud, credential theft, and ransomware. Current training is often viewed as too generic, with employees preferring shorter, scenario-based formats like videos and simulations. These preferences contradict the common industry practice of lengthy, annual sessions. Based on these insights, we propose practical recommendations for designing awareness programs tailored to departmental needs and workflows.

P. Zech, Irdin Pekaric

The long-term sustainability of research software is a critical challenge, as it usually suffers from poor maintainability, lack of adaptability, and eventual obsolescence. This paper proposes a novel approach to addressing this issue by leveraging the concept of fitness functions from evolutionary architecture. Fitness functions are automated, continuously evaluated metrics designed to ensure that software systems meet desired non-functional, architectural qualities over time. We define a set of fitness functions tailored to the unique requirements of research software, focusing on findability, accessibility, interoperability, and reusability (FAIR). These fitness functions act as proactive safeguards, promoting practices such as modular design, comprehensive documentation, version control, and compatibility with evolving technological ecosystems. By integrating these metrics into the development life cycle, we aim to foster a culture of sustainability within the research community. Case studies and experimental results demonstrate the potential of this approach to enhance the long-term FAIRness of research software, bridging the gap between ephemeral project-based development and enduring scientific impact.
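To make the idea concrete, below is a minimal sketch of one such fitness function that could run in a CI pipeline. The checked artifacts and the 0.75 threshold are illustrative choices, not the paper's actual metric definitions.

```python
# Minimal sketch of a FAIR-oriented fitness function runnable in CI.
# The mapping of files to FAIR principles is an illustrative assumption.
from pathlib import Path

def fair_fitness(repo: Path) -> float:
    """Score in [0, 1]: fraction of sustainability artifacts present."""
    checks = {
        "findable":      (repo / "CITATION.cff").exists(),      # citable metadata
        "accessible":    (repo / "LICENSE").exists(),           # explicit license
        "interoperable": (repo / "requirements.txt").exists()
                         or (repo / "pyproject.toml").exists(), # declared deps
        "reusable":      (repo / "README.md").exists()
                         and (repo / "tests").is_dir(),         # docs + tests
    }
    return sum(checks.values()) / len(checks)

if __name__ == "__main__":
    score = fair_fitness(Path("."))
    # Fail the CI job when fitness drops below an (illustrative) threshold.
    raise SystemExit(0 if score >= 0.75 else f"FAIR fitness too low: {score:.2f}")
```

Run continuously (e.g., on every commit), such a check turns FAIR compliance from a one-off publication requirement into a guarded property of the code base, which is the core of the fitness-function idea.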

Irdin Pekaric, Giovanni Apruzzese

Every day, new discoveries are made by researchers from all across the globe and fields. HICSS is a flagship venue to present and discuss such scientific advances. Yet, the activities carried out for any given research can hardly be fully contained in a single document of a few pages -- the "paper." Indeed, any given study entails data, artifacts, or other material that is crucial to truly appreciate the contributions claimed in the corresponding paper. External repositories (e.g., GitHub) are a convenient tool to store all such resources so that future work can freely observe and build upon them -- thereby improving transparency and promoting reproducibility of research as a whole. In this work, we scrutinize the extent to which papers recently accepted to HICSS leverage such repositories to provide supplementary material. To this end, we collect all 5,579 papers included in HICSS proceedings from 2017 to 2024. Then, we identify those entailing either human subject research (850) or technical implementations (737), or both (147). Finally, we review their text, examining how many include a link to an external repository -- and inspect its contents. Overall, out of 2,028 papers, only 3% have a functional and publicly available repository that is usable by downstream research. We release all our tools.
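The abstract does not detail the review tooling, but a plausible automated first pass for the link-inspection step might look like the sketch below; the repository host list, the URL regex, and the file paper.txt are assumptions, and the study's actual content review would still require manual inspection.

```python
# Sketch of a link-checking pass: extract candidate repository URLs from a
# paper's extracted text and test whether each still resolves. The host list
# and regex are simplifications, not the study's actual procedure.
import re
import urllib.request

REPO_HOSTS = ("github.com", "gitlab.com", "bitbucket.org", "zenodo.org", "osf.io")
URL_RE = re.compile(r"https?://[^\s\)\]>\"']+")

def repo_links(paper_text: str) -> list[str]:
    urls = URL_RE.findall(paper_text)
    return [u.rstrip(".,;") for u in urls if any(h in u for h in REPO_HOSTS)]

def is_alive(url: str, timeout: float = 10.0) -> bool:
    try:
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "repo-check/0.1"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

text = open("paper.txt").read()  # hypothetical extracted full text of one paper
for url in repo_links(text):
    print(url, "OK" if is_alive(url) else "DEAD")
```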

Irdin Pekaric, P. Zech, Thomas Mattson

Large Language Models (LLMs) are transforming human decision-making by acting as cognitive collaborators. Yet, this promise comes with a paradox: while LLMs can improve accuracy, they may also erode independent reasoning, promote over-reliance, and homogenize decisions. In this paper, we investigate how LLMs shape human judgment in security-critical contexts. Through two exploratory focus groups (unaided and LLM-supported), we assess decision accuracy, behavioral resilience, and reliance dynamics. Our findings reveal that while LLMs enhance accuracy and consistency in routine decisions, they can inadvertently reduce cognitive diversity and amplify automation bias, especially among users with lower resilience. In contrast, high-resilience individuals leverage LLMs more effectively, suggesting that cognitive traits mediate AI benefit.

Saskia Laura Schröer, Noé Canevascini, Irdin Pekaric, Philine Widmer, Pavel Laskov

Cyber threats have become increasingly prevalent and sophisticated. Prior work has extracted actionable cyber threat intelligence (CTI), such as indicators of compromise, tactics, techniques, and procedures (TTPs), or threat feeds from various sources: open source data (e.g., social networks), internal intelligence (e.g., log data), and “first-hand” communications from cybercriminals (e.g., underground forums, chats, darknet websites). However, “first-hand” data sources remain underutilized because it is difficult to access or scrape their data. In this work, we analyze (i) 6.6 million posts, (ii) 3.4 million messages, and (iii) 120,000 darknet websites. We combine NLP tools to address several challenges in analyzing such data. First, even on dedicated platforms, only some content is CTI-relevant, requiring effective filtering. Second, “first-hand” data can be CTI-relevant from a technical or strategic viewpoint. We demonstrate how to organize content along this distinction. Third, we describe the topics discussed and how “first-hand” data sources differ from each other. According to our filtering, 20% of our sample is CTI-relevant. Most of the CTI-relevant data focuses on strategic rather than technical discussions. Credit card-related crime is the most prevalent topic on darknet websites. On underground forums and chat channels, account and subscription selling is discussed most. Topic diversity is higher on underground forums and chat channels than on darknet websites. Our analyses suggest that different platforms may be used for activities with varying complexity and risks for criminals.
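The abstract describes a pipeline that first filters posts for CTI relevance and then organizes the relevant subset by topic. The sketch below mimics those two stages with a crude keyword filter and an LDA topic model; the seed keywords, file name, and topic count are illustrative stand-ins, not the paper's actual NLP tooling.

```python
# Minimal two-stage sketch: (1) filter posts for CTI relevance,
# (2) topic-model the relevant subset. All specifics are illustrative.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = pd.read_csv("forum_posts.csv")["text"]  # hypothetical scraped posts

# Stage 1: crude keyword-seeded relevance filter (the paper would use a
# trained classifier rather than a keyword match).
CTI_SEEDS = ("exploit", "cve", "ransomware", "stealer", "botnet",
             "credential", "dump", "carding", "0day")
relevant = posts[posts.str.lower().str.contains("|".join(CTI_SEEDS), na=False)]

# Stage 2: LDA over the relevant subset to surface discussion topics.
vec = CountVectorizer(max_df=0.5, min_df=5, stop_words="english")
X = vec.fit_transform(relevant)
lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = ", ".join(terms[j] for j in comp.argsort()[-8:][::-1])
    print(f"topic {i}: {top}")
```

Inspecting the top terms per topic is one way to separate technical content (e.g., exploit trade) from strategic content (e.g., account selling), the distinction the study draws across platforms.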

...
...
...
