Procedural modeling methods are used to automatically generate virtual scenes. A large number of top-down methods exist for generating partial content for specific purposes. However, little research has been done on enabling the generation of content in the presence of manually modeled elements, from the bottom-up direction, or without significant assistance from the user. No existing approach provides a platform that can combine the results of different methods, which leaves them isolated. This paper presents an integration approach that generates complete virtual space organizations by combining top-down and bottom-up procedural generation of content, with support for the placement of manually modeled content. The integration is made possible by using shape conversion to match the input and output shape types of different methods. The proposed approach was evaluated on a 2D polygon dataset using four different scenarios, validating that it works as intended. Additional testing was performed through a case study of organizing 3D virtual space around the manually modeled virtual heritage site Tašlihan, demonstrating all capabilities of the integration approach and the different outputs depending on the level of user interaction and the desired results.
Texas Instruments development kits are widely used in practical and scientific experiments due to their small size, processing power, available BoosterPacks, and compatibility with different environments. The most popular integrated development environments for programming these development kits are Energia and Code Composer Studio. Unfortunately, no existing studies compare the benefits, drawbacks, and performance of these environments. In contrast, the performance of the FreeRTOS environment is well explored, making it a suitable baseline for embedded systems execution. In this paper, we experimentally evaluated the performance of the Texas Instruments MSP-EXP432P401R when using Energia, Code Composer Studio, and FreeRTOS for program execution. Three different sorting algorithms (bubble sort, radix sort, merge sort) and three different search algorithms (binary search, random search, linear search) were used for this purpose. The results show that the sorting algorithms perform best in Energia, with a maximum of 400 elements. On the other hand, the search algorithms in FreeRTOS far outperform the other environments, with a maximum of 255,000 elements (whereas this maximum was 10,000 elements for the other environments). Code Composer Studio resulted in the largest processing time, which indicates that the low-level register editing performed in this environment leads to significant performance issues.
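As a host-side illustration of the benchmarking methodology described in this abstract, the following Python sketch times one sorting and one search algorithm for a given input size. It is not code for the MSP-EXP432P401R itself; the timing API, input sizes, and data generation are illustrative assumptions.

```python
# Host-side sketch of the benchmark methodology: measure per-algorithm
# execution time for a given input size. The actual experiments ran on
# the MSP-EXP432P401R board under Energia, Code Composer Studio, and FreeRTOS.
import random
import time

def bubble_sort(a):
    # Classic O(n^2) bubble sort, one of the three sorting algorithms tested.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def binary_search(a, key):
    # O(log n) search over a sorted list, one of the three search algorithms.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def measure(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

for n in (100, 400, 10_000):  # 400 is the sorting limit reported for Energia
    data = random.sample(range(n * 10), n)
    t_sort = measure(bubble_sort, list(data))
    t_search = measure(binary_search, sorted(data), data[0])
    print(f"n={n:>6}: bubble sort {t_sort:.6f}s, binary search {t_search:.6f}s")
```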
Ultrasound images are used in various branches of medicine to detect diseases. Obtaining this data is complex due to procedures and legal restrictions, which leads to scarce datasets. Different data augmentation techniques can be employed to improve classification performance. This paper shows that augmenting an ultrasound breast cancer image dataset using generative adversarial networks (GANs) increased classification accuracy compared to both the original dataset and a dataset augmented using standard techniques.
Software development is implemented in several key phases, one of which is software testing. Software testing consists of selecting techniques for finding software defects and bugs in the code being written. There are several ways and approaches to this purpose, with the goal of selecting the most adequate method in terms of cost, complexity, and efficiency. In this paper, we take a deeper dive into mutation testing techniques. Mutation testing techniques are fault-based and focus more on test structures than on the input data, which is usually considered the starting point of testing. The basic concept of mutation testing consists of a few steps, which are covered in this paper, together with metrics that measure how effective the tests really are. Using a few code examples, we show why code coverage, which is commonly used as a measure during testing, is sometimes not the most reliable indicator and does not give a full picture of the quality of the written tests.
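To make the coverage argument concrete, the following minimal Python sketch (the function names are hypothetical, not taken from the paper) shows a test that reaches full statement coverage yet fails to kill a simple relational-operator mutant:

```python
# Minimal sketch of why full statement coverage can miss defects that
# mutation testing exposes. Function names are illustrative only.

def is_adult(age):
    return age >= 18  # original code

def is_adult_mutant(age):
    return age > 18   # mutant: ">=" replaced by ">"

def test_is_adult():
    # This test executes every line of is_adult (100% statement coverage)...
    assert is_adult(30) is True
    assert is_adult(10) is False

# ...yet the same inputs cannot distinguish the mutant from the original,
# so the mutant "survives" and the mutation score reveals the weak test:
assert is_adult_mutant(30) is True and is_adult_mutant(10) is False

# Only a boundary-value test kills the mutant:
assert is_adult(18) is True
assert is_adult_mutant(18) is False  # mutant detected (killed)
```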
The applications presented in this conference paper focus on the development of a mobile and a web application serving as a planner for tracking persons with Down syndrome. These innovative technological solutions contribute to the development of independence and functionality for persons with Down syndrome while emphasizing the importance of inclusivity in society. In addition to organizing activities, the mobile and web applications provide support and facilitate daily tasks. The web application allows parents, guardians, and teachers to add new activities to the planner and track their progress. The mobile application, in turn, enables persons with Down syndrome to record their activities within the application, taking into account their specific challenges and customizing the user interface to their needs.
Procedural modeling is used to generate virtual content in organized layouts of exterior and interior elements. There is a large number of existing layout generation methods, and newer approaches propose the generation of multiple layout types within the same generation session. This introduces additional constraints when manually created layout elements need to be combined with the automatically generated content. Existing approaches are either designed to work with existing elements for a single layout type, or require a high amount of manual work for adding existing elements within multiple layouts. This paper presents a method that enables the application of existing subdivision methods on multiple layout types by inserting existing content into the generation result. This method can generate test cases by creating variations of partially generated layouts for procedural modeling methods that can work with existing content.
Professional football players often need legal help in managing disputes with football clubs. The Professional Football Players Syndicate of Bosnia and Herzegovina is an organization founded for this purpose. Due to the increasing need for legal help and a large number of cases, its legal associates need systematic data management. This work presents the first information system intended entirely for use by sports law professionals. It contains a desktop application in which legal disputes are shown in the form of an organized dispute table. Real-time information about football players is acquired using the TransferMarkt web API. The system has been successfully used for two years, resulting in 103 documented cases involving 87 players and 31 clubs. As a result, 69.90% of disputes were archived and 43.69% of disputes resulted in agreements, indicating that the productivity of the legal associates and the mediator role of the Syndicate were improved.
Digital credentials represent crucial elements of digital identity on the Internet. Credentials should have specific properties that allow them to achieve privacy-preserving capabilities. One of these properties is selective disclosure, which allows users to disclose only the claims or attributes they must. This paper presents BLS-MT-ZKP, a novel approach to selective disclosure that combines existing cryptographic primitives: Boneh-Lynn-Shacham (BLS) signatures, Merkle hash trees (MT), and a zero-knowledge proof (ZKP) method called Bulletproofs. By combining these methods, we achieve selective disclosure of claims while conforming to selective disclosure requirements. New requirements are defined based on the definition of selective disclosure and the privacy spectrum. Besides selective disclosure, specific use cases for equating digital credentials with paper credentials are achieved. The proposed approach was compared to existing solutions, and its security, threat, performance, and limitation analyses were performed. For validation, a proof of concept was implemented, and the execution time was measured to demonstrate the practicality and efficiency of the approach.
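The following Python sketch illustrates only the Merkle-tree building block of such a scheme: the issuer signs just the root, and the holder later reveals a single claim together with its inclusion path. The BLS signature and Bulletproofs ZKP layers are omitted, and SHA-256 plus the claim encoding are illustrative assumptions rather than details from the paper.

```python
# Sketch of Merkle-tree selective disclosure: sign the root, reveal one leaf.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Returns all levels, from the leaf hashes up to the single root.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:            # duplicate the last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def proof_for(levels, index):
    # Sibling hashes along the path from leaf `index` to the root.
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, node_is_right)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

claims = [b"name=Alice", b"dob=1990-01-01", b"nationality=BA", b"id=1234"]
levels = build_tree(claims)
root = levels[-1][0]                  # in the full scheme, BLS-signed by the issuer
path = proof_for(levels, 2)           # disclose only "nationality=BA"
assert verify(claims[2], path, root)
```

A verifier recomputes the root from the disclosed claim and its path, so the undisclosed claims stay hidden behind their hashes.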
Digital credentials represent a cornerstone of digital identity on the Internet. To achieve privacy, certain functionalities should be implemented in credentials. One of these is selective disclosure, which allows users to disclose only the claims or attributes they want. This paper presents a novel approach to selective disclosure that combines Merkle hash trees and Boneh-Lynn-Shacham (BLS) signatures. By combining these approaches, we achieve selective disclosure of claims in a single credential, as well as the creation of a verifiable presentation containing selectively disclosed claims from multiple credentials signed by different parties. Besides selective disclosure, this approach enables issuing credentials signed by multiple issuers.
The visual layout has an enormous influence on human perception and is the subject of many studies, including research on web page similarity comparison. Structure-based approaches use the possibility of direct access to HTML content, whereas visual methods are widely used due to their ability to analyze image screenshots of entire web pages. The solution described in this paper focuses on extracting web page layouts in the forms required by both of the above-mentioned approaches.
In this paper, we introduce and provide insight into two innovative applications designed to enhance the lives of persons with Down syndrome, focusing on the seamless integration between the two. The first is a mobile application that helps users manage their daily routines by monitoring and predicting activity durations, considering their unique challenges. The second is a web application for parents, teachers, and other adults to streamline activity scheduling, progress tracking, and reminders.
Web page layout presentation failures can negatively affect the usability of a web application as well as the end-to-end user experience. The need for automated methods of visual inspection becomes obvious in complex web applications. However, visual testing still relies heavily on manual inspection because the currently available tools are not yet advanced enough. This paper compares the performance results of three visual testing tools: Galen, AyeSpy, and Percy, and focuses on opportunities for their enhancement.
Cause-effect graphs are a commonly used black-box testing method, and many different algorithms for converting system requirements into cause-effect graph specifications and deriving test case suites have been proposed. However, in order to test the efficiency of black-box testing algorithms on a variety of cause-effect graphs containing different numbers of nodes, logical relations, and dependency constraints, a dataset containing a collection of cause-effect graph specifications created by the authors of existing papers is necessary. This paper presents CEGSet, the first collection of existing cause-effect graph specifications. The dataset contains a total of 65 graphs collected from the available relevant literature. The specifications were created using the ETF-RI-CEG graphical software tool and can be used by future authors of papers focusing on the cause-effect graphing technique. The collected graphs can be re-imported into the tool and used for the desired purposes. Where possible, the collection also includes the system requirements, in natural-language form, from which the cause-effect graphs were derived. This will encourage future work on automating the process of converting system requirements into cause-effect graph specifications.
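As a minimal illustration of what a cause-effect graph specification expresses (the causes, relation, and constraint below are hypothetical examples, not the ETF-RI-CEG format), consider two causes linked to one effect by a logical relation, with a dependency constraint pruning infeasible cause combinations:

```python
# Tiny cause-effect graph and exhaustive test-case derivation.
from itertools import product

# Causes:     C1 = "payment by card", C2 = "payment in cash"
# Constraint: C1 and C2 are mutually exclusive (an "E" constraint)
# Effect:     E1 = "order accepted", defined by the relation E1 = C1 OR C2
def exclusive(c1: bool, c2: bool) -> bool:
    return not (c1 and c2)

def effect(c1: bool, c2: bool) -> bool:
    return c1 or c2

# Derive a test case for every feasible combination of cause values.
for c1, c2 in product([False, True], repeat=2):
    if not exclusive(c1, c2):
        continue  # combination forbidden by the dependency constraint
    print(f"C1={c1!s:<5} C2={c2!s:<5} -> E1={effect(c1, c2)}")
```

Real specifications in CEGSet contain many more nodes, where such constraints substantially reduce the number of feasible test cases.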