Abstractions On Test Design Techniques

Transcription

Proceedings of the 2014 Federated Conference on Computer Science and Information Systems (FedCSIS), Warsaw, 2014, pp. 1575–1584. DOI: 10.15439/2014F316. ACSIS, Vol. 2.

Abstractions on Test Design Techniques
Marc-Florian Wendland
Systems Quality Center, Fraunhofer Institute FOKUS, Berlin, Germany

Abstract—Automated test design is an approach to test design in which automata are utilized for generating test artifacts such as test cases and test data from a formal test basis, most often called a test model. A test generator operates on such a test model to meet a certain test coverage goal. In the plethora of approaches, tools and standards for model-based test design, the test design techniques to be applied and the test coverage goals to be met are not part of the test model, which may easily lead to difficulties regarding the comprehensibility and repeatability of the test design process. This paper analyzes current approaches to and languages for automated model-based test design and shows that they lack important information about the applied test design techniques. Based on this analysis, we propose to introduce another layer of abstraction for expressing test design techniques in a tool-independent, yet generic way.

Keywords—Model-based testing (MBT), test generation, automated test design, test design techniques, UML Testing Profile (UTP)

I. INTRODUCTION

The degree of automation in industrial software testing has risen considerably over the last two decades. In the 1990s, efforts were undertaken to increase the degree of automation for test execution, resulting in today's accepted technologies like keyword-driven testing [3]. Standards have been built upon this principle, like TTCN-3 (http://www.ttcn-3.org) or the newly developed ISO 29119 [9] standards family. With the widespread acceptance of UML in the late 1990s and the advent of UML 2 in the early 2000s, the idea of automating parts of the test design activities as well was pursued in research and industry. The outcome of these efforts is what is known today as model-based testing (MBT) and test generation.
Both the automation of test execution and the automation of test design rely on the abstraction of irrelevant details. Of course, when it comes down to actual test execution, the abstracted details need to be provided, but this is commonly accepted as pertinent and indispensable. In keyword-driven testing approaches, the so-called adaptation layer is in charge of making logical test cases executable [21].

The UML Testing Profile (UTP) [12] is a modeling language for MBT approaches based on the UML. It is the first industry-driven, standardized modeling language for MBT. It was adopted by the Object Management Group (OMG) as far back as 2003 and is currently under major revision. In addition, the European Telecommunications Standards Institute (ETSI) has funded efforts to develop its own modeling language for MBT, called the Test Description Language (TDL). Thus, two important technical standardization bodies offer languages to build MBT methodologies upon.

Interestingly, none of the above-mentioned standards provides concepts to specify the test design techniques that shall be applied for test generation. This seems inconsistent, since one of the most communicated benefits of MBT is the automated generation of test artifacts and the increased systematics, comprehensibility and repeatability of the test design process [20]. To date, there is no generally accepted approach in the literature for how test design techniques for model-based test generation should best be specified. In fact, almost every test generator provides its own proprietary configuration for specifying the test coverage goal.
This leads to several issues regarding the comprehensibility and repeatability of the automated test design activities. Moreover, the exchangeability of test generators operating on models, even models of the same modeling language, becomes risky, since it bears a great potential for the loss of relevant knowledge.

This paper addresses the abstraction from technical, tool-dependent representations of test design techniques by providing an extensible language framework for specifying tool-independent test design techniques that can be shared across multiple test generators. This step is a consequent evolution of the automation-through-abstraction principle already applied in keyword-driven testing and test generation. The contributions of this work are:
- A thorough analysis of current approaches to model-based test generation.
- The development of a conceptual model of test design based on the ISO 29119 standard. The conceptual model builds the foundation on which the abstractions of test design techniques rely.

978-83-60810-58-3/$25.00 © 2014, IEEE

- The refinement of the conceptual model with an approach to test generation that is driven by the notions of directives and strategies.
- The provision of an extensible, yet flexible UML-profile-based implementation of the refined conceptual model as an extension to the UTP.

The remainder of this paper is structured as follows: Section 2 describes the problems of today's approaches to automated test design from the viewpoint of comprehensibility and repeatability. Section 3 elaborates the conceptual model of test design and its refinement towards test design directives and test design strategies. Section 4 discusses the extension of the UTP with test design directives and test design strategies. Section 5 demonstrates the feasibility of our approach by applying it to two non-commercial test generators, i.e., Spec Explorer and Graphwalker. Section 6 presents the work related to ours. Finally, Section 7 concludes our work and highlights future work in that context.

II. PROBLEM STATEMENT

Most of today's model-based test generators are able to work on UML or derivatives. In this paper, we employ the SpecExplorer and the Graphwalker (http://www.graphwalker.org) generation engines, which do not operate directly on UML, but on closely related concepts. The SpecExplorer input is actually based on a textual representation of an Abstract State Machine (ASM) [7], which is called Spec#. In contrast, the input for Graphwalker is GraphML, an XML format for describing graph structures. Both test generators can operate on graph structures, although the input format is different. These input formats can be derived from UML behaviors, though. In recent years, we have in particular integrated these two test generators with UTP, which allows us to generate the required input format from the very same UTP model for both generators ([23], [22]).
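To make the Graphwalker input format concrete, the following sketch builds a minimal GraphML document for a two-state machine using only Python's standard library. The node and edge identifiers (`Off`, `On`, `pushOnButton`, `pushOffButton`) are illustrative choices of ours, not prescribed by GraphML or Graphwalker.

```python
import xml.etree.ElementTree as ET

# Default namespace of the GraphML format.
GRAPHML_NS = "http://graphml.graphdrawing.org/xmlns"

def build_onoff_graphml() -> str:
    """Build a minimal GraphML document describing a two-state FSM."""
    root = ET.Element("graphml", xmlns=GRAPHML_NS)
    graph = ET.SubElement(root, "graph", id="OnOff", edgedefault="directed")
    # States of the system under test become nodes of the graph.
    ET.SubElement(graph, "node", id="Off")
    ET.SubElement(graph, "node", id="On")
    # Triggering events become directed edges between the states.
    ET.SubElement(graph, "edge", id="pushOnButton", source="Off", target="On")
    ET.SubElement(graph, "edge", id="pushOffButton", source="On", target="Off")
    return ET.tostring(root, encoding="unicode")

xml_text = build_onoff_graphml()
print(xml_text)
```

A graph like this is exactly the kind of structure a traversal-based engine can walk; richer models would add labels and guard/action data as GraphML `data` elements.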
Our overall vision is to integrate a wide variety of test generators with UTP to counteract the proliferation of proprietary, technically incompatible modeling languages. Hence, the following problem statement was identified in the context of MBT with UTP, as is the technical solution presented in this paper. The conceptual solution, however, is not bound to any particular modeling language.

A. Test Design Techniques in MBT

If we consider the commonly understood advantages of MBT – such as efficient solutions for test design, an increased degree of automation, prevention of loss of knowledge by using (semi-)formal models, more systematic and, most importantly, repeatable test case derivation and self-explanatory test specifications ([20], [6]) – MBT comes along with an indispensable change of paradigms for testing activities. The most central artifact in an MBT approach should be the model itself, so test processes have to move from a document-centric to a model-centric paradigm. A model that describes test-relevant information from a tester's point of view is called a test model. A test model is a "model that specifies various testing aspects, such as test objectives, test plans, test architecture, test cases, test data etc." [12]. The UTP, in combination with UML, provides a test engineer with numerous possibilities for building test models, since it offers the expressiveness of UML and amends it with test-specific artifacts. Thus, UTP is deemed suitable to support the change to the model-centric paradigm where test models are the single source of truth. Even though not as a dedicated concept, UTP allows for modeling the inputs for test generation just by relying on the underlying UML concepts. Inputs to test generation are referred to as test models in the ISO 29119 terminology as well.
To avoid confusion with the much broader understanding of a test model given by the UTP specification, we henceforth refer to inputs to test generation as test design models. ISO 29119 defines test design as all activities in a test process that actually derive test cases, test data and test configurations from test conditions. This derivation may be carried out automatically or manually. Automated test design is commonly known as test generation.

Even though UTP is an expressive language, it does not offer concepts to specify the test design techniques that shall be applied for deriving test artifacts. If we consider the before-mentioned benefits of MBT, first and foremost test generation, it is most surprising that one of the most important pieces of information for automating the test design activities is missing from the test model: the information about which test design techniques shall be carried out on the test design model by the test generator. In other words, the information why a certain test artifact has been generated is not part of the test model. As a matter of fact, today's test generators define their own proprietary representation of test design techniques that resides within the tool. This, however, complicates the application of an entirely model-centric approach to testing, for it does not allow integrating all the required information into the test model. This can have severe implications, since it may easily happen that the applied test generator shall be replaced, for whatever reason, while the defined test design techniques shall be retained. If this happened and access to the previous test generator were no longer given, the information where a certain test artifact originated from in the first place would be lost.

Figure 1 (a) illustrates this problem in a three-layered approach. The domain layer encodes the test model (more specifically, the test design model), which is completely decoupled from a certain test generator and simply focuses on the specification of a system under test.
The engine layer, in contrast, is the most fundamental component of a test generator. It is, conversely, completely decoupled from the domain layer and simply operates on its inputs, without taking into account where that input comes from. Capabilities, complexity and the underlying principle of the engine layer vary among test generators, from powerful symbolic execution (as in SpecExplorer) to a simple traversal engine that operates on already explored graph structures such as finite state machines (as in Graphwalker). Regardless of how powerful or sophisticated the generation engine actually is, it is necessary to transform the information encoded in the domain layer into a format understood by the generation layer. A mediator, an adaptation layer, is required. An adaptation layer is a tool-specific component that serves two purposes. First, it transforms (γ(i)) the input i (i.e., a test design model contained in the test model) into a format understood by the test generation engine. Secondly, it offers some kind of interface to the test analyst in order to configure the generation engine. We call this the configuration (c) of a test generator. The configuration contains the information of which test design technique shall be applied to the input i. For example, SpecExplorer is configured with a proprietary language called coord. If the user wants to ensure a certain traversal order of events, he/she has to specify a regular expression over events. This is called a scenario in coord terminology.

TABLE I. INVESTIGATION OF TEST GENERATORS

Generator     | Input                          | Output              | Configuration          | Paradigm            | License
------------- | ------------------------------ | ------------------- | ---------------------- | ------------------- | -----------
Spec Explorer | Spec# (C#)                     | NUnit Test Cases    | Coord Language         | 1-way (not centric) | Free
MBTsuite      | UML Activity                   | Proprietary         | Settings in the tool   | 1-way (not centric) | Commercial
Conformiq     | UML-approximated State Machine | Proprietary         | Settings in the tool   | 1-way (not centric) | Commercial
Graphwalker   | GraphML                        | Sequence of Strings | Command line parameter | 1-way (not centric) | Open Source
Tedeso        | UML-approximated Activity      | Proprietary         | Settings in the tool   | 1-way (not centric) | Commercial
CertifyIT     | UML and OCL                    | Proprietary         | Settings in the tool   | 1-way (not centric) | Commercial
The semantics of the coord scenario matches the standardized specification-based test design technique scenario testing in ISO 29119. This information ought to be part of the test model, for it contains important test-relevant information for comprehending the automated test design activities. This holds true for other test design techniques and further test generators as well. TABLE I lists a few of the commercially relevant or prominent open source test generators that fit into our view of MBT. Interestingly, all of the investigated generators offer proprietary means to configure the generation engine. Furthermore, none of these generators really follows the model-centric paradigm, since they simply employ the test design models for the purpose of test generation. There is commonly no feedback of the generated test cases into the test model in order to abide by the single-source-of-truth principle as proposed and described by Wendland ([23], [24]). This is a situation we strive to improve with our work.

B. Abstractions of Test Design Techniques

We propose to abstract from a tool-specific representation to a tool-independent representation of test design techniques. Fig. 1 b) illustrates this abstraction. The configurations (shaded grey) are extracted from the tool-specific layer and abstracted (α(i)) to test design techniques that are part of the test model itself. The adaptation layer is still required to transform the input i (i.e., the test design model plus tool-independent test design techniques) into the input format γ(i) for the generation engine. As such, it is possible to share test design techniques across multiple test generators that provide an adapter for the utilized test design technique.
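The division of labor between the tool-independent technique and the tool-specific transformation γ(i) can be sketched as follows. The class names, the technique catalogue and the mapping onto Graphwalker-style generator/stop-condition strings are simplified illustrations of ours, not an actual tool API.

```python
from dataclasses import dataclass, field

@dataclass
class TestDesignTechnique:
    """Tool-independent technique description, kept inside the test model."""
    name: str
    parameters: dict = field(default_factory=dict)

class GraphwalkerAdapter:
    """Adaptation layer sketch: gamma(i) maps tool-independent techniques
    onto Graphwalker-style generator/stop-condition strings (simplified)."""
    _MAPPING = {
        "all-transition-coverage": "random(edge_coverage(100))",
        "all-state-coverage": "random(vertex_coverage(100))",
    }

    def transform(self, model_file, techniques):
        # Translate each shared technique into the tool's own configuration.
        options = " ".join(self._MAPPING[t.name] for t in techniques)
        return f'graphwalker offline --model {model_file} "{options}"'

technique = TestDesignTechnique("all-transition-coverage")
command = GraphwalkerAdapter().transform("onoff.graphml", [technique])
print(command)
```

A second adapter (e.g. one emitting a coord scenario for SpecExplorer) could consume the very same `TestDesignTechnique` objects, which is precisely the point of lifting the configuration out of the tool.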
With test design techniques removed from the realm of a specific test case generator and becoming part of the test model, a more holistic approach to test design is provided, which improves comprehensibility. Such an approach is in line with the idea of abstraction for test generation as it is done for MBT [18], but for the specification of the test design techniques instead of the test design model. This abstraction of test design techniques has not yet been discussed in the literature.

III. A CONCEPTUAL MODEL OF TEST DESIGN

The conceptual field of test design techniques is actually well known. Several academic and industrial publications deal with the application and formalization of test design techniques for different test design models. A good overview is given by Utting [21] and ISO 29119-4 [9]. Based on the concepts and terminology provided in ISO 29119, a conceptual model of test design can be deduced (see Fig. 2), which will be explained in great detail in the subsequent sections.

Fig. 1 Abstraction of test design techniques

A. The Principles of Test Design

The derivation of test artifacts is usually done by applying a test design technique, or ad hoc if no systematic approach is applied. A test design technique is a method or a process, often supported by dedicated tooling, that derives a set of test coverage items from an appropriate test design model. The test design model is obtained from the identified test conditions. According to ISO 29119, a test condition is a "testable aspect of a component or system, such as a function, transaction, feature, quality attribute, or structural element identified as a basis for testing." A test analyst utilizes the information accompanying the test conditions to construct the test design model in whatever representation. This gave rise to Robert Binder's famous quote that testing is always model-based [1]. A test design model refers to a specification of the expected behavior of the system under test that is represented either as a mental model, an informal model, a semi-formal model, or a formal model.

Fig. 2 A conceptual model of test design

As always with models [19], the test design model must be appropriate for the test design technique to be applied. An inappropriate model might not be able to produce an optimal result. The details of a test design model can usually be derived from the test conditions of the test basis. There is a correlation between the test design technique and the test design model, however, since both are determined or influenced by the test conditions.
For example, if the test condition indicates that the system under test might assume different states while operating, the test design model may result in a finite state machine (FSM) or similar. Consequently, a test design technique (like state-based test design) is most likely to be applied to this test design model. A test design technique tries to fulfill a certain test coverage goal (the term used by ISO 29119 is Suspension Criteria, which is actually not that commonly understood). A test coverage goal determines the number and kind of test coverage items that have to be derived from a test design model and represented as test cases. The actual derivation activity might be carried out manually or in an automated manner.

A test coverage item is an "attribute or combination of attributes to be exercised by a test case that is derived from one or more test conditions by using a test design technique" [9]. The term test coverage item has been newly introduced by ISO 29119; thus, it is expected not to be fully understood at first sight. A test coverage item is usually obtained from the test condition and made explicit (in the sense that it can be used for coverage analysis etc.) through a test design technique. The following example discusses the subtle differences between test condition, test design model and test coverage item.

Let us assume there is a functional requirement that says the following: "If the On-Button is pushed and the system is off, the system shall be energized."

The bold words indicate the system under test, the italic words potential states the system under test shall assume, and the underlined word an action that triggers a state change. According to ISO 29119, all the identifiable states (and the transitions and the events) encoded in the functional requirement represent the test conditions for that system under test. A state machine built according to the test conditions would represent the test design model.
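The requirement above can be made executable as a small sketch: from the two-state machine, transition pairs (1-switch coverage items) are derived, and coverage is computed against all derivable items rather than only the derived ones. State and event names follow the requirement; the helper functions are our own illustrations.

```python
from itertools import product

# FSM for the On-Button requirement: (source state, event, target state).
transitions = [
    ("Off", "pushOnButton", "On"),
    ("On", "pushOffButton", "Off"),
]

def transition_pairs(trans):
    """Derive 1-switch (transition-pair) coverage items: all pairs of
    transitions where the first one ends in the state the second leaves."""
    return [(a, b) for a, b in product(trans, trans) if a[2] == b[0]]

items = transition_pairs(transitions)
print(len(items))  # -> 2 coverage items for this FSM

def coverage(required, covered):
    """Coverage relative to ALL derivable items, not only the derived ones."""
    return 100.0 * len(set(covered) & set(required)) / len(required)

# A test suite exercising only one of the two pairs yields 50%, not 100%.
print(coverage(items, [items[0]]))  # -> 50.0
```

The `coverage` helper anticipates the calculation issue discussed next: measuring against the derived items alone would trivially report 100%.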
As the test design technique, a structural coverage criterion like transition coverage or similar could be chosen. The test coverage goal would represent a measurable statement about what shall be covered after the test design technique has operated on the test design model. This might be one of Chow's N-switch coverage criteria, like full 1-switch coverage (or transition-pair coverage). The test coverage items would eventually be represented by all transition pairs that have been derived by the test generator, and which are finally covered by test cases.

However, there are certain inaccuracies in ISO 29119's test design concepts, which are subsequently classified into three issues.

1) Test Coverage Calculation

At first, the term test coverage, defined by ISO 29119 as the "degree, expressed as a percentage, to which specified coverage items have been exercised by a test case or test cases", does not take the actual number of potentially available test coverage items into account. According to the given definition of test coverage, the coverage would always be 100%, since it is calculated on the actually derived test coverage items. What is missing is a calculation over all test coverage items that should be derived. Otherwise, it would be possible to state that 100% test coverage has been achieved even though just 10% of all test conditions to be covered were actually covered. This is particularly relevant for model-based approaches to test design, for the test coverage items are usually not explicitly stored for further test case derivation, but rather automatically transformed into test cases by the test generator on the fly. This means that in today's model-based test generators, the test cases always cover 100% of the derived test coverage items. This is only consequent, since the test coverage items were derived according to a specific test coverage goal; thus, the test design technique only selected those test coverage items (out of all potentially reachable test coverage items) that are required to fulfill the test coverage goal. Ending up in a situation where the eventually derived test cases did not cover 100% of the produced test coverage items would violate the whole idea of specifying test coverage goals.

2) Output of Test Design Techniques

Test design techniques do not only derive test cases, but also test data or test configurations. The test design process deals with the derivation of all aspects that are relevant for finally executing test cases. The test configuration or test interface (i.e., the identification of the system under test, its interfaces and the communication channels among the test environment and the system under test) is a crucial part of each test case when it comes down to execution. The same, of course, holds true for test data. In this paper we do not deal with the generation of test configurations; test data generation, however, is covered.

3) Test Design Techniques Are Not Monolithic

The concept of a test design technique, as defined and described by ISO 29119, needs to be further differentiated. In relevant, well-established standards for industrial software testing (such as ISO 29119, IEEE 829 and the ISTQB syllabi), a test design technique is described as a monolithic concept. This is not the case, because the actual test derivation process consists of a number of techniques that represent distinguished courses of action to achieve test coverage. These courses of action operate in combination with each other to derive the desired test coverage items. These techniques contribute their semantics to the overall test design activity for a given test design model. Examples of well-known strategies are structural coverage criteria or equivalence partitioning, but also less obvious and rather implicit strategies like the naming of test cases or the physical structure or representation format of test cases. For example, the so-called state-transition test design technique might be based on an extended finite state machine (EFSM). Hence, the sole application of structural coverage criteria (like all-transition coverage etc.) might not suffice to produce executable test cases, for an EFSM may also deal with data inputs, guard conditions etc. By adding data-related techniques (such as equivalence partitioning) to structural coverage criteria, it is possible to explore and unfold the EFSM into an FSM that ultimately represents the available test coverage items for finally deriving test cases. This discussion gives rise to the fact that the conceptual model of ISO 29119 regarding test design techniques shall be further differentiated, to allow combining several test design techniques with each other in a systematic manner.
B. Towards Strategies and Directives

When further differentiating the concept of a test design technique, a wider search beyond the field of software testing seems appropriate. Based on what was discussed earlier, test design techniques are required to be groupable for different test design processes, so that they are reusable. Test design techniques are consequently decomposed into a construct that groups different techniques and the techniques themselves. This conceptual structure is similar to the Business Motivation Model (BMM) [14] concepts of directives and strategies (see Fig. 3). We adapt the terms directives and strategies for the scope of test design. The BMM provides a fine-grained conceptual model for analyzing the visions, reasons and influencers of a business (or endeavor) in order to deduce its overall motivation. The BMM is enunciated in the Semantics of Business Vocabulary and Business Rules (SBVR) [15], a standard adopted by the OMG to formalize a vocabulary for semantically documenting an organization's business facts, plans and rules.

Fig. 3 Relations of Strategy, Directive and Goal

Notwithstanding that the motivation for BMM directives lies outside of MBT in the first place, the BMM contains concepts and notions that can be beneficial to the realization of test design directives and test design strategies. According to the BMM, a directive is a means to achieve a certain goal. A goal is a statement about a state or condition of the endeavor to be brought about or sustained through appropriate means. Therefore, a directive (as a specialized means) utilizes (a set of) strategies that are governed by the directive to achieve the goal. A strategy channels efforts towards the achievement of that goal. This means that the same strategy can be utilized by different directives in order to achieve different goals; hence, strategies are reusable across directives. The notions of strategy, directive and goal can be mapped to test design. The BMM goal maps almost inherently to the ISO 29119 concept of a test coverage goal. As with a goal, a test coverage goal imposes a condition on the test design activity that needs to be achieved in order to deem the test design activity completed. Since BMM strategies are the actual actions that need to be carried out in a controlled manner, the notion of a BMM strategy stands for a single test design technique, such as equivalence partitioning, all-transition coverage or similar. The directive, however, does not have a direct counterpart in the ISO 29119 conceptual model of test design. From a logical point of view, it is part of the test design technique concept, even though not explicitly enunciated. We are going to leverage the notions of strategies and directives for the area of model-based test generation in order to refine the ISO 29119 conceptual model with test design strategies and directives that replace the monolithic test design technique.

C. Refined Conceptual Model of Test Design

This section mitigates the conceptual imprecisions of the ISO 29119 conceptual model of test design by further differentiating the test design technique into test design directives and test design strategies. These notions are adopted from the BMM. Fig. 4 shows the redefined test design conceptual model, in which the monolithic test design technique is split up into test design strategy and test design directive.

A test design strategy describes a single, yet combinable (thus, not isolated) technique to derive test coverage items from a certain test design model, either in an automated manner (i.e., by using a test generator) or manually (i.e., performed by a test analyst). A test design strategy represents the logic of a certain test design technique (such as structural coverage criteria or equivalence partitioning) in a tool- and methodology-independent way and is understood as logical instructions for the entity that finally carries out the test derivation activity. Test design strategies are decoupled from the test design model, since the semantics of a test design strategy can be applied to various test design models. This gives rise to the fact that test design strategies can be reused across different test design models. This fits with the more general notion of a strategy that can be utilized by several means. The intrinsic semantics of a test design strategy, however, always needs to be interpreted and applied within the context of a test design model. According to, and slightly adapted from, the BMM, this context is identified by a test design directive.

Fig. 4 Redefined conceptual model on test design

A test design directive governs an arbitrary number of test design strategies that a certain test derivation entity has to obey within the context of a test design model. A test design directive is in charge of achieving the test coverage goal. Therefore, it assembles appropriately deemed test design strategies to eventually fulfill the test coverage goal. The assembled test design strategies, in turn, channel the efforts of their intrinsic semantics towards the achievement of the test coverage goal.
The test coverage items that are produced by test design strategies are always fully covered; thus, they are reduced to a purely transient concept. According to Fig. 1 b), the test design directive is passed to the tool-specific adaptation layer, since it is the test design directive that has access to all required information. First, it specifies the test design models out of which test artifacts shall be generated. Next, it governs the test design strategies that shall operate on the test design models. In the next sections, we show an implementation of the conceptual model as a UML profile.

IV. A LANGUAGE FRAMEWORK FOR TEST DESIGN

The implementation of the refined ISO 29119 conceptual model of test design was, from the very first idea, incepted as an extension of the UTP. This mitigates one of the most obvious deficiencies of the UTP and allows the creation of fully comprehensible test models. The extension is kept most flexible, so that new test design directives and test design strategies can be easily incorporated.

A. Realization as UML Profile

Since the test design framework has to be kept minimalistic and left open for multiple modeling and testing methodologies, it is important to find a means of not being too intrusive while defining the framework. Fortunately, a UML profile
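The refined conceptual model can be sketched in plain code before turning to the UML profile: a directive binds reusable strategies to a test design model and a coverage goal. All class and attribute names here are our own illustrations, not UTP stereotypes.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TestDesignStrategy:
    """A single, combinable technique, e.g. equivalence partitioning."""
    name: str

@dataclass
class TestDesignDirective:
    """Governs strategies over a test design model to meet a coverage goal."""
    test_design_model: str
    coverage_goal: str
    strategies: list = field(default_factory=list)

# Strategies are reusable across directives (and thus across design models).
partitioning = TestDesignStrategy("equivalence-partitioning")
transition_cov = TestDesignStrategy("all-transition-coverage")

directive = TestDesignDirective(
    test_design_model="OnOffStateMachine",
    coverage_goal="full 1-switch coverage",
    strategies=[partitioning, transition_cov],
)
print([s.name for s in directive.strategies])
```

A second directive targeting another test design model could reuse `partitioning` unchanged, mirroring the BMM insight that strategies are reusable across directives.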
