F. Nizamic, R. Groenboom, A. Lazovik
© 2012.

Proceedings of the 17th Dutch Testing Day: Testing Evolvability

abstraction refinement: initially, the algorithm starts with a very coarse abstraction A, which is subsequently refined if it turns out that M∥A is not behavior-deterministic. Non-determinism arises naturally when we apply abstraction: the behavior of a SUT may be fully deterministic, yet due to the mapper (which, for instance, abstracts from the precise values of certain input parameters) the system appears to behave non-deterministically from the perspective of the learner. Since we use LearnLib as our basic learning tool, the abstraction of the SUT may not exhibit any non-determinism: if it does, LearnLib crashes and we have to refine the abstraction. This is exactly what was done repeatedly during the manual construction of the abstraction mappings in the case studies of [3]. We formalized this procedure and described the construction of the mapper in terms of a counterexample-guided abstraction refinement (CEGAR) procedure, similar to the approach developed by Clarke et al. [4] in the context of model checking.

Using Tomte, we have succeeded in fully automatically learning models of several realistic software components, including the biometric passport, the SIP protocol, and the various components of the Alternating Bit Protocol. The Tomte tool and all the models that we used in our experiments are available via http://www.italia.cs.ru.nl/. Table 1 gives an overview of the systems we learned, with the number of input refinement steps, the total numbers of learning and testing queries, the number of states of the learned abstract model, and the time needed for learning and testing (in seconds). For example, the learned SIP model is an extended finite state machine with 29 states, 3741 transitions, and 17 state variables of various types (booleans, enumerated types, (long) integers, character strings, ...). This corresponds to a state machine with an astronomical number of states and transitions, thus far fully out of reach of automata learning techniques.
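The refinement loop described above can be sketched as follows. This is a toy illustration, not the actual Tomte or LearnLib code: the system under test, the predicate-based abstraction, and the refinement heuristic (splitting at the conflicting concrete value) are all invented for the example.

```python
# Toy sketch of a CEGAR loop for learning with an abstraction mapper.
# All names and the refinement heuristic are invented for illustration;
# this is not the actual Tomte/LearnLib API.

class NonDeterminism(Exception):
    """Two concrete inputs in the same abstract class produced
    different outputs, i.e. the abstraction is too coarse."""
    def __init__(self, witness):
        self.witness = witness  # concrete input exposing the conflict

def sut(x):
    # Toy system under test: the output depends on the sign of x.
    return "OK" if x > 0 else "ERR"

def make_abstraction(splits):
    # Abstract a concrete input by which threshold predicates it
    # satisfies; with no splits, all inputs share one abstract class.
    def alpha(x):
        return tuple(x >= s for s in splits)
    return alpha

def try_learn(alpha, samples):
    # Stand-in for a LearnLib run: build an output table over abstract
    # inputs and "crash" on observed non-determinism.
    observed = {}
    for x in samples:
        a = alpha(x)
        if a in observed and observed[a] != sut(x):
            raise NonDeterminism(x)
        observed[a] = sut(x)
    return observed  # the learned (abstract) model

def learn_with_cegar(samples):
    splits = []  # start with the coarsest possible abstraction
    while True:
        try:
            return try_learn(make_abstraction(splits), samples), splits
        except NonDeterminism as e:
            # Refine: split the abstract class at the conflicting value.
            splits.append(e.witness)
```

On the inputs [-2, -1, 1, 2], a single refinement step (a split at the conflicting value 1) already makes the abstracted behavior deterministic, mirroring the small numbers of input refinements reported in Table 1.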
We have checked that all inferred models are bisimilar to their SUTs. For this purpose we combined each learned model with its abstraction and used the CADP tool set, http://www.inrialpes.fr/vasy/cadp/, for equivalence checking.

System under test                  Input refinements  Learning/Testing queries  States  Learning/Testing time
Alternating Bit Protocol Sender    1                  193/3001                  7       1.3s/104.9s
Alternating Bit Protocol Receiver  2                  145/3002                  4       0.9s/134.5s
Alternating Bit Protocol Channel   0                  31/3000                   2       0.3s/107.5s
Biometric Passport                 3                  2199/3582                 5       7.7s/94.5s
Session Initiation Protocol        3                  1755/3402                 13      8.3s/35.9s
Login                              3                  639/3063                  5       2.0s/56.8s
Farmer-Wolf-Goat-Cabbage Puzzle    4                  699/3467                  10      4.4s/121.8s
Palindrome/Repdigit Checker        11                 3461/3293                 1       10.3s/256.4s

Table 1. Learning statistics

Currently, Tomte can only learn SUTs that remember the first and last occurrence of a parameter. We expect that it will be relatively easy to lift this restriction. We also expect that our CEGAR-based approach can be further extended to systems that may apply simple or
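For deterministic Mealy machines, bisimilarity coincides with trace equivalence, so the check reduces to a joint exploration of reachable state pairs. The sketch below is our own toy stand-in for the CADP equivalence check; the machine encoding, the hard-coded initial state "q0", and the function name are invented for illustration.

```python
# Toy equivalence check for deterministic Mealy machines, standing in
# for the CADP equivalence check mentioned above. Machines are encoded
# as dicts mapping (state, input) -> (output, next_state); both are
# assumed complete over `inputs` and to start in state "q0".
from collections import deque

def equivalent(m1, m2, inputs):
    """Return True iff m1 and m2 produce identical outputs on every
    input sequence from their initial states."""
    seen = set()
    queue = deque([("q0", "q0")])
    while queue:
        s1, s2 = queue.popleft()
        if (s1, s2) in seen:
            continue
        seen.add((s1, s2))
        for i in inputs:
            o1, n1 = m1[(s1, i)]
            o2, n2 = m2[(s2, i)]
            if o1 != o2:
                return False  # a distinguishing trace exists
            queue.append((n1, n2))
    return True
```

Because only reachable state pairs are explored, the check runs in time proportional to the product of the two state spaces, which is why it is applied to the learned abstract models rather than to the concrete (astronomically large) state machines.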
