A common way of exposing functionality in contemporary systems is to provide a Web API that follows the REST architectural style. The current industry standard for describing REST APIs is the OpenAPI Specification. Test generation and fuzzing methods targeting OpenAPI-described REST APIs have been a very active research area in recent years. An open research challenge is to aid users in better understanding their API, in addition to finding faults and covering the code. In this paper, we address this challenge by proposing a set of behavioural properties, common to REST APIs, which are used to generate examples of the behaviours that these APIs exhibit. These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases. Our evaluation shows that our approach can generate examples that practitioners deem relevant for understanding the system and as a source of test generation. In addition, we show that basing test generation on behavioural properties yields tests that are less dependent on the state of the system, while achieving code coverage similar to state-of-the-art REST API fuzzing methods within a given time limit.
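The idea of a behavioural property that doubles as an example generator can be sketched in code. The following is a minimal, hedged illustration, not the paper's actual properties or tooling: the in-memory `FakeApi`, the property names, and the payloads are all invented for demonstration. Each passing property check yields a concrete, replayable example of API behaviour.

```python
class FakeApi:
    """Illustrative in-memory stand-in for a REST API: POST creates, GET reads."""
    def __init__(self):
        self._store = {}
        self._next_id = 0

    def post(self, payload):
        self._next_id += 1
        self._store[self._next_id] = payload
        return 201, self._next_id

    def get(self, resource_id):
        if resource_id in self._store:
            return 200, self._store[resource_id]
        return 404, None

def prop_created_resource_is_readable(api, payload):
    """Behavioural property: a resource returned by POST can be fetched
    with GET, and the body round-trips unchanged."""
    status, rid = api.post(payload)
    assert status == 201
    get_status, body = api.get(rid)
    return get_status == 200 and body == payload

def prop_get_is_idempotent(api, resource_id):
    """Behavioural property: repeating a GET does not change the outcome."""
    return api.get(resource_id) == api.get(resource_id)

# Each successful check is both a test and a small example of behaviour.
api = FakeApi()
examples = []
for payload in ({"name": "a"}, {"name": "b"}):
    assert prop_created_resource_is_readable(api, payload)
    examples.append(("created-then-read", payload))
assert prop_get_is_idempotent(api, 1)
assert prop_get_is_idempotent(api, 999)  # holds for missing resources too
print(f"{len(examples)} example behaviours generated")
```

Because the properties refer only to relationships between calls (create-then-read, repeat-and-compare) rather than to specific stored data, checks of this shape are largely independent of the system's prior state, which is the effect the abstract describes.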
Understanding the behaviour of a system’s API can be hard. Giving users access to relevant examples of how an API behaves has been shown to make this easier for them. In addition, such examples can be used to verify expected behaviour or identify unwanted behaviours. Methods for automatically generating examples have existed for a long time. However, state-of-the-art methods rely on either white-box information, such as source code, or on formal specifications of the system behaviour. But what if you do not have access to either? This may be the case, for example, when interacting with a third-party API. In this paper, we present an approach to automatically generate relevant examples of behaviours of an API, without requiring either source code or a formal specification of behaviour. Evaluation on an industry-grade REST API shows that our method can produce small and relevant examples that can help engineers to understand the system under exploration.
Test automation has been an acknowledged software engineering best practice for years. However, the topic involves more than the repeated execution of test cases that often comes first to mind. Simply running test cases using a unit testing framework is no longer enough for test automation to keep up with the ever-shorter release cycles driven by continuous deployment and technological innovations such as microservices and DevOps pipelines. Now test automation needs to rise to the next level by going beyond mere test execution.
During testing of parallel systems that communicate asynchronously, test flakiness is sometimes avoided by explicitly inserting delays in test code. The choice of delay approach can be a trade-off between short-term gain and long-term robustness. In this work, we present an approach for the automatic detection and classification of delay insertions, with the goal of identifying those that could be made more robust. The approach has been implemented using an open-source compiler tooling framework and validated on test code from the telecom industry.
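The detection step can be illustrated with a small sketch. The paper targets telecom test code via a compiler tooling framework; purely for illustration, the sketch below scans Python test code with the standard-library `ast` module instead, flagging fixed `time.sleep` delays as candidates for a more robust wait-and-poll construct. The sample test function and its helpers are invented.

```python
import ast

FIXED_DELAY_CALLS = {("time", "sleep")}  # calls treated as fragile fixed delays

def find_fixed_delays(source: str):
    """Return (line, delay_expression) for each fixed-delay call in test code."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            key = (node.func.value.id, node.func.attr)
            if key in FIXED_DELAY_CALLS:
                delay = ast.unparse(node.args[0]) if node.args else ""
                hits.append((node.lineno, delay))
    return hits

test_code = """
import time

def test_message_delivery():
    send_message("ping")
    time.sleep(2)          # fixed delay: fragile under load
    assert inbox_contains("ping")
"""
for line, delay in find_fixed_delays(test_code):
    print(f"line {line}: fixed delay of {delay}s, consider polling with timeout")
```

A classification pass could then distinguish, for example, delays that wait for asynchronous replies (replaceable by polling on a condition with a timeout) from delays that pace load generation (often legitimate), which is the kind of robustness judgement the abstract refers to.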
Many organizations developing software-intensive systems face challenges with high product complexity and large numbers of variants. In order to effectively maintain and develop these product variants, Product Line Engineering methods are often considered, while Model-based Systems Engineering practices are commonly utilized to tackle product complexity. In this paper, we report on an industrial case study concerning the ongoing adoption of Product Line Engineering in the Model-based Systems Engineering environment at Volvo Construction Equipment (Volvo CE) in Sweden. In the study, we identify and define a Product Line Engineering process that is aligned with Model-based Systems Engineering activities at the engines control department of Volvo CE. Furthermore, we discuss the implications of the migration from the current development process to a Model-based Product Line Engineering-oriented process. This process, and its implications, are derived by conducting and analyzing interviews with Volvo CE employees, inspecting artifacts and documents, and by means of participant observation. Based on the results of a first system model iteration, we were able to document how Model-based Systems Engineering and variability modeling will affect development activities, work products and stakeholders of the work products.
To keep internet-based services available despite inevitable local internet and power outages, their data must be replicated to one or more other sites. For most systems using the store-and-forward architecture, data loss can also be prevented by using end-to-end acknowledgements. So far, we have not found any sufficiently good solutions for replication of data in store-and-forward systems without acknowledgements and with geographically separated system nodes. We therefore designed a new replication protocol that exploits the absence of a global order between messages and accepts a slightly higher risk of duplicated deliveries than existing protocols. We tested a proof-of-concept implementation of the protocol for throughput and latency in a controlled experiment using 7 nodes in 4 geographically separated areas, and observed the throughput increasing superlinearly with the number of nodes, up to almost 3500 messages per second. It is also, to the best of our knowledge, the first replication protocol whose bandwidth usage scales with the number of nodes allowed to fail rather than with the total number of nodes in the system.
RESTful APIs are an increasingly common way to expose the functionality of software systems, and it is therefore of high interest to find methods to automatically test and verify such APIs. To lower the barrier to industry adoption, such methods need to be straightforward to use and require low effort. This paper introduces a method to explore the behaviour of a RESTful API, using automatic property-based tests produced from the OpenAPI documents that describe the REST API under test. We describe how this method creates artifacts that can be leveraged both as property-based test generators and as a source of validation for results (i.e., as test oracles). Experimental results, on both industrial and open-source services, indicate that this approach is a low-effort way of finding real faults. Furthermore, it supports building additional knowledge about the system under test by automatically exposing misalignments between specification and implementation. Since the tests are generated from the OpenAPI document, the method automatically evolves test cases as the REST API evolves.
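How an OpenAPI document can act as both generator and oracle can be sketched as follows. This is a hedged, self-contained illustration: the hand-written `spec` fragment, the `fake_service` stand-in, and the conformance property are invented for the sketch and are not the paper's implementation, which works on real OpenAPI documents and live services.

```python
import random

# Hand-written fragment of what an OpenAPI document declares for one
# operation (illustrative; real documents are JSON/YAML files).
spec = {
    "path": "/users/{id}",
    "method": "get",
    "parameters": [
        {"name": "id",
         "schema": {"type": "integer", "minimum": 1, "maximum": 1000}},
    ],
    "responses": {200, 404},  # status codes the document declares
}

def generate_value(schema, rng):
    """Use the parameter schema as a test-input generator (integers only here)."""
    assert schema["type"] == "integer"
    return rng.randint(schema["minimum"], schema["maximum"])

def fake_service(path, params):
    """Stand-in for the service under test."""
    return 200 if params["id"] % 2 == 0 else 404

def check_operation(spec, call, trials=100, seed=0):
    """Property: every observed status code is one the document declares.
    An undeclared status exposes a spec/implementation mismatch, so the
    document also serves as the test oracle."""
    rng = random.Random(seed)
    for _ in range(trials):
        params = {p["name"]: generate_value(p["schema"], rng)
                  for p in spec["parameters"]}
        status = call(spec["path"], params)
        if status not in spec["responses"]:
            return False, params  # counterexample for the report
    return True, None

ok, counterexample = check_operation(spec, fake_service)
print("conforms:", ok)
```

Because both the generator and the oracle are derived from the document, regenerating them after the document changes is what lets the tests evolve with the API, as the abstract notes.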
We are pleased to welcome you to the NEXTA 2019 workshop session, part of the joint workshop day bringing together the workshops NEXTA, INTUITESTBEDS, VVIoT, and TAIC PART, which is organized with the 12th IEEE International Conference on Software Testing, Verification and Validation (ICST 2019) in Xi’an, China. This will be a forum to bring together researchers and practitioners, enabling them to exchange ideas, address fundamental challenges in software test automation, testing through the GUI, testing event-driven software, and V&V of IoT, and to address software problems faced by the industry.
NEXTA is a new workshop on test automation that provides a meeting point for academic researchers and industry practitioners. While test automation is already an established practice in industry, the concept needs to evolve beyond its current state to support the ever-faster release cycles of tomorrow's software engineering. Topics with implications for both research and practice include test case generation, automated test result analysis, test suite assessment and maintenance, and infrastructure for the future of test automation. The first instance of NEXTA was co-located with the 11th IEEE Conference on Software Testing, Verification and Validation (ICST 2018) in Västerås, Sweden, on April 9, 2018. NEXTA 2018 offered an interactive setting with a keynote and paper presentations, stimulated by two novel awards to incentivize interaction and dissemination: a Best Questions Award and a Most Viral Tweet Award. The workshop attracted 15 paper submissions and about 50 participants. Based on the positive feedback, we plan to organize the workshop again next year.
Product Line Engineering is an approach to reuse assets of complex systems by taking advantage of commonalities between product families. Reuse within complex systems usually means reuse of artifacts from different engineering domains such as mechanical, electronics and software engineering. Model-based systems engineering is becoming a standard for systems engineering and collaboration within different domains. This paper presents an exploratory case study on initial efforts of adopting Product Line Engineering practices within the model-based systems engineering process at Volvo Construction Equipment (Volvo CE), Sweden. We have used SysML to create overloaded models of the engine systems at Volvo CE. The variability within the engine systems was captured by using the Orthogonal Variability Modeling language. The case study has shown us that overloaded SysML models tend to become complex even on small scale systems, which in turn makes scalability of the approach a major challenge. For successful reuse and to, possibly, tackle scalability, it is necessary to have a database of reusable assets from which product variants can be derived.
Businesses often use mobile text messages (SMS) as a cost-effective and universal way of communicating concise information to their customers. Today, these messages are usually sent via SMS brokers, which forward them to the next stakeholder, typically one of the various mobile operators, until the messages eventually reach the intended recipients. Infoflex Connect AB delivers an SMS gateway application to the brokers, with the main responsibility of reliable message delivery within set quality thresholds. However, the protocols used for SMS communication are not designed for reliability, and thus messages may be lost. In this position paper, we derive requirements for a new protocol for routing messages through the SMS gateway application running on a set of broker nodes, in order to increase reliability. The requirements cover important aspects of the needed communication protocol, such as event ordering, message handling, and system membership. The specification of these requirements lays the foundation for the forthcoming design, implementation, and evaluation of such a protocol.
Mobile text messages (SMS) are sometimes used for authentication, which requires short and reliable delivery times. The observed round-trip times when sending an SMS message provide valuable information on the quality of the connection. In this industry paper, we propose a method for detecting round-trip time anomalies, where the exact distribution is unknown, the variance spans several orders of magnitude, and there are many short spikes that should be ignored. In particular, we show that an adaptation of Double Seasonal Exponential Smoothing, used to reduce content-dependent variations, followed by the Remedian to find short-term and long-term medians, successfully identifies larger groups of outliers. As training data for our method, we use log files from a live SMS gateway. In order to verify the effectiveness of our approach, we utilize simulated data. Our contributions are a description of how to isolate content-dependent variations, and the sequence of steps needed to find significant anomalies in big data.
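The Remedian component mentioned above can be sketched compactly. This is an illustrative implementation of the Remedian of Rousseeuw and Bassett (a running median approximation using a small hierarchy of fixed-size buffers), not the paper's code; the buffer size, the simplified read-out of partially filled levels, and the synthetic spiky data are all choices made for the sketch.

```python
import random
import statistics

class Remedian:
    """Approximate running median in small memory: each level holds up to
    `base` values; a full level is collapsed to its median and pushed up."""
    def __init__(self, base=11):
        self.base = base
        self.levels = [[]]

    def add(self, x):
        self._insert(0, x)

    def _insert(self, level, x):
        if level == len(self.levels):
            self.levels.append([])
        buf = self.levels[level]
        buf.append(x)
        if len(buf) == self.base:
            # Collapse: the median of this buffer moves one level up.
            self._insert(level + 1, statistics.median(buf))
            buf.clear()

    def value(self):
        # Simplification for the sketch: read the highest non-empty level.
        # (The full Remedian takes a weighted median over partial buffers.)
        for buf in reversed(self.levels):
            if buf:
                return statistics.median(buf)
        raise ValueError("no data")

# Synthetic round-trip times: mostly ~100 ms, with large periodic spikes.
rng = random.Random(1)
r = Remedian()
for i in range(1000):
    r.add(10000.0 if i % 50 == 0 else rng.gauss(100, 5))
print(round(r.value(), 1))  # robust estimate near 100, spikes ignored
```

Because each level stores at most `base` values, the memory footprint stays logarithmic in the stream length, which is why nested short-term and long-term medians over high-volume gateway logs remain cheap to maintain.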
Courses related to software testing education at the university level in most cases have a learning outcome requiring students to understand and apply a set of test design techniques upon co ...
The 1st IEEE Workshop on the Next Level of Test Automation (NEXTA 2018) - From the Program Chairs