Agile Testing - LogiGear

Transcription

SHOWCASING THOUGHT LEADERSHIP AND ADVANCES IN SOFTWARE TESTING

LogiGear Magazine: Agile Testing

Principles for Agile Test Automation (Emily Bache)
Is Your Cloud Project Ready to be Agile? (David Taber)
Quantify the Impact of Agile App Development (Larry Maccherone)
Technical Debt: A Nightmare for Testers (Michael Hackett)

July 2013, Volume VII, Issue 3

Letter from the Editor

In our continuing effort to be the best source of information for keeping testers and test teams current, we have another issue exploring testing in Agile development. As Agile evolves, systemic problems arise and common rough situations become apparent. We want to provide solutions.

For anyone who has worked on Agile projects, especially if you have worked at more than one company or for a few clients, you know "testing in Agile" can be an adventure. Remember, there are no "rules" or best practices for Agile testing; there are better practices. Every team and Scrum implementation is unique. This is still evolving.

The varieties of Agile implementations, most commonly Scrum, have a nontraditional concept of testing. Yet most organizations still want someone to do the tasks associated with traditional testing, such as validation, regression testing, bug hunting, exploratory testing, scenario testing, data-driven testing, etc. These words have different connotations in Scrum and Agile.

This month we are tackling more Agile topics with a specific focus on how these practices impact testing. As Agile matures and comes of age, we are learning more, adjusting our practices, modifying our strategies and, hopefully, communicating better; we are being Agile.

In this issue, Emily Bache highlights principles to help you plan your testing strategy; I warn teams about the implications of technical debt; David Taber explains that, if your team is set up to handle it, Agile can greatly benefit your projects in the cloud; Larry Maccherone looks at why Agile is becoming a vital strategy for small and large businesses; and John Turner reviews Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory.

As always, we hope this information is helping you solve problems and release higher quality products. September's issue is on Mobile Testing.

Happy Summer!

Michael Hackett
Senior Vice President, LogiGear Corporation
Editor in Chief

Editor in Chief: Michael Hackett
Managing Editor: Brian Letwin
Deputy Editor: Joe Luthy

Worldwide Offices

United States Headquarters: 2015 Pioneer Ct., Suite B, San Mateo, CA 94403. Tel 01 650 572 1400, Fax 01 650 572 2822.
Viet Nam Headquarters: 1A Phan Xich Long, Ward 2, Phu Nhuan District, Ho Chi Minh City. Tel 84 8 3995 4072, Fax 84 8 3995 4076.
Viet Nam, Da Nang: 7th Floor, Dana Book Building, 76-78 Bach Dang, Hai Chau District. Tel 84 511 3655 33, Fax 84 511 3655.

Copyright 2013 LogiGear Corporation. All rights reserved. Reproduction without permission is prohibited. Submission guidelines are located at WWW.LOGIGEARMAGAZINE.COM.

In this Issue

IN THE NEWS

PRINCIPLES FOR AGILE TEST AUTOMATION
Emily Bache
Principles to help you plan your testing strategy and tools for functional automated testing, to design more maintainable, useful test cases.

TECHNICAL DEBT: A NIGHTMARE FOR TESTERS
Michael Hackett, LogiGear Corporation
Learning more about Scrum processes, or whatever lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt.

IS YOUR CLOUD PROJECT READY TO BE AGILE?
David Taber, CIO, SalesLogistix
Agile can greatly benefit your projects in the cloud, provided that your team is set up to handle it.

QUANTIFY THE IMPACT OF AGILE APP DEVELOPMENT
Larry Maccherone, Rally Software
As the world demands more software, development teams, from scrappy startups to big corporations, are meeting the challenge with Agile.

AGILE TESTING GLOSSARY
Some of the terms used when discussing Agile testing.

BOOK REVIEW
John Turner
A review of Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory.

VIETNAM'S NATIONAL COSTUME: THE ÁO DÀI
Brian Letwin, LogiGear Corporation
Today's áo dàis have come a long way from their imperial ancestors. But even as the country continues its march towards modernization, the dress is a core element of Vietnam's illustrious history.

In the News

Three New TestArchitect™ Products

LogiGear has expanded its TestArchitect product line with the introduction of three new TestArchitect Editions: Professional, Mobile Plus and Enterprise.

The Professional Edition is an economical automation solution for Windows-based applications. Mobile Plus offers Windows-based application plus mobile testing, and Enterprise includes all Windows, web, cloud and mobile testing capability. Mobile Plus and Enterprise support iOS and Android phones and tablets, with both web and hybrid app testing capability.

The Enterprise Edition includes name refactoring to reduce test case maintenance by making it possible to automatically update test case suites whenever a test entity name is changed.

Most Software Development Heads Fail to Meet Deadlines

Almost three-quarters (71 percent) of UK software development heads say conventional approaches to software development and testing mean new customer-facing applications are delayed.

CA Technologies questioned 301 in-house software development managers in enterprises across the UK, France and Germany. More than half (56 percent) of UK developers reported that their IT department's reputation had been tarnished because of issues relating to "out-dated" application development and testing methods.

While 59 percent of UK respondents cited quality and time-to-market on integration testing as major challenges, the figure was lower (48 percent) across all three countries. In the UK, 41 percent had issues with performance testing, compared to 32 percent overall.

Win a Free Ticket to EuroSTAR 2013

To celebrate the launch of the 21st EuroSTAR Conference Program, they are giving members of the EuroSTAR Software Testing Community the opportunity to win one of three FREE places at this year's conference. All you have to do to be in with a chance of winning is tell them why you want to attend EuroSTAR Conference 2013 this November.

Read more here: urg-in-60-seconds

Blogger of the Month

Principles for Agile Test Automation

Principles to help you plan your testing strategy and tools for functional automated testing, to design more maintainable, useful test cases.

By Emily Bache

I feel like I've spent most of my career learning how to write good automated tests in an Agile environment. When I downloaded JUnit in the year 2000, it didn't take long before I was hooked: unit tests for everything in sight. That gratifying green bar is near-instant feedback that everything is going as expected, my code does what I intended, and I can continue developing from a firm foundation.

Later, starting in about 2002, I began writing larger granularity tests, for whole subsystems; functional tests, if you like. The feedback that my code does what I intended, and that it has working functionality, has given me confidence time and again to release updated versions to end-users.

I was not the first to discover that developers design automated functional tests for two main purposes. Initially we design them to help clarify our understanding of what to build. In fact, at that point they're not really tests; we usually call them scenarios, or examples. Later, the main purpose of the tests becomes to detect regression errors, although we continue to use them to document what the system does.

When you're designing a functional test suite, you're trying to support both aims, and sometimes you have to make tradeoffs between them. You're also trying to keep the cost of writing and maintaining the tests as low as possible, and as with most software, it's the maintenance cost that dominates. Over the years, I've begun to think in terms of four principles that help me to design functional test suites that make good tradeoffs and identify when a particular test case is fit for purpose.

Readability

When you look at the test case, you can read it through and understand what the test is for. You can see what the expected behavior is, and what aspects of it are covered by the test. When the test fails, you can quickly see what is broken.

If your test case is not readable, it will not be useful, neither for understanding what the system does, nor for identifying regression errors. When it fails, you will have to dig through other sources outside of the test case to find out what is wrong. You may not understand what is wrong, and you will rewrite the test to check for something else, or simply delete it.
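To make the Readability principle concrete, here is a minimal JUnit sketch of the kind of test being described. The ShoppingCart class, its methods and the discount rule are hypothetical, invented for this illustration; the point is the descriptive name, the visible expected behavior, and an assertion message that says what broke.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShoppingCartTest {

        // The test name states the behavior under test, so a failure
        // report alone tells you which rule is broken.
        @Test
        public void totalIncludesQuantityDiscountForBulkOrders() {
            // Arrange: ShoppingCart is a hypothetical domain class
            ShoppingCart cart = new ShoppingCart();
            cart.add("widget", 10.00, 100);   // unit price, quantity

            // Act
            double total = cart.total();

            // Assert: the expected behavior is visible in the test itself
            // (assumed rule: a 10% discount on orders of 100 or more)
            assertEquals("bulk orders should get the 10% discount",
                         900.00, total, 0.001);
        }
    }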
Robustness

When a test fails, it means there is a regression error (functionality is broken), or the system has changed and the tests no longer document the correct behavior. You need to take action to correct the system or update the test, and this is as it should be. If, however, the test has failed for no good reason, you have a problem: a fragile test.

There are many causes of fragile tests; for example, tests that are not isolated from one another, duplication between test cases, and dependencies on random or threaded code. If you run a test by itself and it passes, but it fails in a suite together with other tests, then you have an isolation problem. If you have one broken feature and it causes a large number of test failures, you have duplication between test cases. If you have a test that fails in one test run, then passes in the next when nothing has changed, you have a flickering test.

If your tests often fail for no good reason, you will start to ignore them. Quite likely there will be real failures hiding amongst all the false ones, and the danger is you will not see them.
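A small JUnit sketch of the isolation problem just described, again with hypothetical names (Inventory and its methods are invented for this example). With a shared fixture the tests pass or fail depending on run order; building a fresh fixture for each test removes the coupling.

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InventoryTest {

        // A shared static fixture like this couples the tests together:
        //   static Inventory inventory = new Inventory();
        // Results would then depend on the order the tests run in.

        // Instead, each test gets a fresh instance, so it passes or
        // fails on its own merits regardless of run order.
        private Inventory inventory;   // hypothetical class for this sketch

        @Before
        public void freshFixture() {
            inventory = new Inventory();
            inventory.stock("widget", 5);
        }

        @Test
        public void reservingReducesAvailableStock() {
            inventory.reserve("widget", 2);
            assertEquals(3, inventory.available("widget"));
        }

        @Test
        public void availableStockStartsAtStockedLevel() {
            // Would fail after the test above if the fixture were shared
            assertEquals(5, inventory.available("widget"));
        }
    }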

Speed

As an Agile developer you run your test suite frequently: (a) every time you build the system, (b) before you check in changes, and (c) after check-in, in an automated Continuous Integration environment. I recommend time limits of 2 minutes for (a), 10 minutes for (b), and 60 minutes for (c). This fast feedback gives you the best chance of actually being willing to run the tests, and of finding defects when they're cheapest to fix, soon after insertion.

If your test suite is slow, it will not be used. When you're feeling stressed, you'll skip running the tests, and problem code will enter the system. In the worst case, the test suite will never become green. You'll fix the one or two problems in a given run and kick off a new test run, but in the meantime you'll continue developing and making other changes. The diagnose-and-fix loop gets longer and the tests become less likely to ever all pass at the same time.
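One common way to meet these time limits is to tag the slower tests and exclude them from the per-build run, while the Continuous Integration run executes everything. Here is a minimal sketch using JUnit 4's Categories mechanism; the marker interface, test and suite names are invented for the illustration.

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;
    import static org.junit.Assert.assertEquals;

    public class SpeedSuites {

        // Marker interface used to tag slow, end-to-end style tests.
        public interface SlowTests {}

        public static class CheckoutTest {

            @Test
            public void fastUnitLevelCheck() {
                assertEquals(2, 1 + 1);   // stands in for a quick unit-level assertion
            }

            @Category(SlowTests.class)
            @Test
            public void slowEndToEndCheck() throws InterruptedException {
                Thread.sleep(100);        // stands in for slow end-to-end work
                assertEquals(2, 1 + 1);
            }
        }

        // Goal (a), the per-build run: everything except tests tagged slow.
        @RunWith(Categories.class)
        @Categories.ExcludeCategory(SlowTests.class)
        @Suite.SuiteClasses(CheckoutTest.class)
        public static class FastBuildSuite {}

        // Goal (c), the full Continuous Integration run: all tests.
        @RunWith(Suite.class)
        @Suite.SuiteClasses(CheckoutTest.class)
        public static class FullCiSuite {}
    }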
Updatability

When the needs of the users change, and the system is updated, your tests also need to be updated in tandem. It should be straightforward to identify which tests are affected by a given change, and quick to update them all.

If your tests are not easy to update, they will likely get left behind as the system moves on. Faced with a small change that causes thousands of failures and hours of work to update them all, you'll likely delete most of the tests.

Following these four principles implies Maintainability

Taken all together, I think how well your tests adhere to these principles will determine how maintainable they are, or in other words, how much they will cost. That cost needs to be in proportion to the benefits you get: helping you understand what the system does, and regression protection.

As your test suite grows, it becomes ever more challenging to adhere to all the principles. Readability suffers when there are so many test cases you can't see the forest for the trees. The more details of your system that you cover with tests, the more likely you are to have Robustness problems: tests that fail when these details change. Speed obviously also suffers; the time to run the test suite usually scales linearly with the number of test cases. Updatability doesn't necessarily get worse as the number of test cases increases, but it will if you don't adhere to good design principles in your test code, or lack tools for bulk update of test data, for example.

I think the principles are largely the same whether you're writing skinny little unit tests or fatter functional tests that touch more of the codebase. My experience tells me that it's a lot easier to be successful with unit tests. As the testing thickness increases, the feedback cycle gets slower, and your mistakes are amplified. That's why I concentrate on teaching these principles through unit testing exercises. Once you understand what you're aiming for, you can transfer your skills to functional tests.

How can you use these principles?

I find it useful to remember these principles when designing test cases. I may need to make tradeoffs between them, and it helps just to step back and assess how I'm doing on each principle from time to time as I develop. If I'm reviewing someone else's test cases, I can point to code and say which principles it's not following, and give them concrete advice about how to make improvements. We can have a discussion, for example, about whether to add more test cases in order to improve regression protection, and how to do that without reducing overall readability.

I also find these principles useful when I'm trying to diagnose why a test suite is not being useful to a development team, especially if things have got so bad they have stopped maintaining it. I can often identify which principle(s) the team has missed, and advise how to refactor the test suite to compensate. For example, if the problem is lack of Speed, you have some options and tradeoffs to make:

- Replace some of the thicker, slower end-to-end tests with lots of skinny, fast unit tests (may reduce regression protection).
- Invest in hardware and run tests in parallel (costs money).
- Use more fakes to replace slow parts of the system (may reduce regression protection).
- Identify key test cases for essential functionality and remove the other test cases (sacrifices regression protection to gain Speed).
- Use a profiler to optimize the tests for speed, the same as you would production code (may affect Readability).

Strategic Decisions

These principles also help me when I'm discussing automated testing strategy, and choosing testing tools. Some tools have better support for updating test cases and test data. Some allow very Readable test cases. It's worth noting that automated tests in Agile are quite different from those in a traditional process, since they are run continually throughout the process, not just at the end. I've found many traditional automation tools don't provide enough Speed and Robustness to support Agile development.

I hope you will find these principles help you to reason about your strategy and tools for functional automated testing, and to design more maintainable, useful test cases.

About Emily

Emily Bache is an independent consultant specializing in automated testing and Agile methods. With over 15 years of experience working as a software developer in organizations as diverse as multinational corporations and small startups, she has learnt to value the technical practices that underpin truly Agile teams. Emily is the author of "The Coding Dojo Handbook: a practical guide to creating a space where good programmers can become great programmers" and speaks regularly at international conferences such as Agile Testing Days and XP2013.

Cover Story

Technical Debt: A Nightmare for Testers

Learning more about Scrum processes, or whichever Agile lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt.

By Michael Hackett, LogiGear Corporation

The sprint is almost over; the burndown chart has not budged. The test team sits around waiting. They hear about all kinds of issues, obstacles and impediments at the daily stand-up, but there is no code to test. The demo and sprint review are closing in. Then, at Wednesday's stand-up, the heroes arrive and tell everyone: "All the stories are done. Everything is in the new build. Test team, get to work! You have one day to test everything for this sprint. We will have an internal demo of everything tomorrow afternoon and a demo to the PO on Friday morning. Get busy!"

Sound familiar? Your team has just gone over the cliff into certain technical debt.

As organizations build more experience being Agile, some trends have emerged. Technical debt is one of these trends, and that is not a good thing. Technical debt is a big topic and getting larger by the day. Much is even written just about what it is! There are definitions of debt far from the original one, and some are completely wrong.

Companies and teams struggle with technical debt concerning its governance, management, documentation, communication, sizing and estimating, as well as tracking and prioritizing. Dealing with technical debt is difficult and new for most teams. There are dire predictions and warnings, and sadly, they are real. Some products, projects and teams have imploded from the weight of debt.

Like most concepts in Agile, technical debt can be used as a broad-brush classification, but here I will explore technical debt from just the testing perspective, focusing on testers and their part in technical debt.

What is technical debt?

Technical debt has a large and growing definition. Before going any further, let's look at the original definition. First coined by Ward Cunningham, the financial metaphor referred only to refactoring.

Code refactoring is a "disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior," undertaken in order to improve some of the nonfunctional attributes of the software. (Wikipedia)

Now people talk and write about technical debt using all sorts of financial jargon, like good debt, bad debt, interest, principal, mortgages and futures, while losing track of the real problem. Resist this. Stay basic. It is key for any organization to have a good, agreed-upon working definition of debt.

Technical debt happens when the team decides to "fix it later." Anything we put off or postpone is considered debt, and it will come due, with an interest payment. This is not to be confused with bugs that need to be fixed. Bugs are almost always associated with the function of the system, not testing tasks. Bugs are communicated, handled and managed differently. Technical debt is, as Johanna Rothman says, "what you owe the product," such as missing unit tests and out-of-date database schemas; it's not about bugs!

Think of the difference between technical debt and bugs as similar to the old discussion of "issues vs. bugs."
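To ground the refactoring definition above, here is a minimal before-and-after sketch in Java; the class names and the tax rule are invented for illustration. The external behavior, the calculated totals, is identical; only the internal structure improves. Deferring this kind of cleanup is exactly the postponement the debt metaphor describes.

    // Before: the tax rule is duplicated and hidden in a magic number,
    // a typical small debt the team has decided to "fix later."
    class PricingBefore {
        double priceWithTax(double price)    { return price + price * 0.0825; }
        double invoiceTotal(double subtotal) { return subtotal + subtotal * 0.0825; }
    }

    // After: identical external behavior, restructured internals.
    // Callers see the same results; the tax rule now lives in one place.
    class PricingAfter {
        static final double SALES_TAX_RATE = 0.0825;

        private double applyTax(double amount) {
            return amount * (1 + SALES_TAX_RATE);
        }

        double priceWithTax(double price)    { return applyTax(price); }
        double invoiceTotal(double subtotal) { return applyTax(subtotal); }
    }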

You know you have debt when you start hearing things like:

- "Don't we have documentation on the file layouts?"
- "Don't touch that code. The last time we did, it took weeks to fix."
- "The server is down. Where are the backups?"
- "I thought we had a test for that!"
- "If I change X it is going to break Y... I think."
- "Where is the email about that bug?"
- "We can't upgrade. No one understands the code."

(Andy Lester, Get Out of Technical Debt Now!)

Or when you start seeing things like:

- "Black box" components.
- Overly long classes, functions, control structures (cyclomatic complexity).
- Third-party code that's fallen far behind its public stable release.
- Clashing programming or software architectural styles within a single application.
- Multiple or obscure configuration file languages.
- Hardwired reliance on a specific platform or product (e.g., MySQL, Solaris, Apache httpd).

(Matt Holford, Can Technical Debt Be Quantified? The Limits and Promise of the Metaphor)

The problem

From reading the lists above, it's easy to see how products, processes and practices can get unnecessarily complicated and become slow, buggy and difficult to execute and manage. What follows is that the teams working on these systems spend more time dealing with systematic issues than developing new functionality, which slows down the delivery of customer value. By the way, decreasing velocity is often one of the first signs a team is dealing with too much technical debt.

Now let's look at the common causes and symptoms of technical debt, so you can recognize when you are getting into a debt situation. This list has been gathered from a variety of sources to provide a solid and broad understanding:

- Lack of test coverage.
- Muddy or overly rigid content type definitions.
- Hardcoded values.
- JIT (just-in-time) architecture or design.
- Misused APIs.
- Redundant code.
- Inappropriate or misunderstood design patterns.
- Brittle, missing or non-escalating error handling.
- Unscalable software architectural design.
- Foundational commitment to an abandoned platform.
- Missing or inaccurate comments and documentation.

Technical debt happens, and sometimes it is understandable. Software development happens over time. It's not a nice, linear process. Very often things are not clear until the team attempts to actually build something. Problems and solutions unfold along with the project's clarity, and we all know that not everything can be planned for. Let's look at some reasons why this occurs:

- User stories are too big.
- Low estimating skill or consistently unrealistic estimates.
- The team is too pressured to "get it done!"
- The team did not fully understand the user story, or it lacked acceptance criteria to better describe what was to be built.
- No use of spikes to better understand what is to be developed.
- A weak ScrumMaster or an overbearing Product Owner.
- Unexpected things happened.
- Very short timeframes for sprints make teams rush and focus only on what must be done to get a release, at the exclusion of "good things to do."

Special concerns for Testers

1 - Team attitudes about Testing

There are situations where debt builds from how the team handles testing, specifically for testers. Some teams are still under intense pressure to deliver on a fixed date.

Regardless of the state of testing, or findings from testing, or test coverage, there is pressure on testers to "say it works."

- Some Agile basics from XP (eXtreme Programming) need to be understood here. Working at a sustainable pace and respecting a team's velocity are important.
- When there is old-style management ("chickens" dictating what has to be done to "pigs"), teams invariably have to cut corners, and testing almost always gets crunched.
- Sometimes teams get into debt trouble with testing because testers were not included in user story estimation. The testing takes longer than expected; the team cuts corners and builds debt. And there are always bugs! That is not the issue. It is the pressure to defer, minimize, or ignore that builds debt.

Many of the original Scrum teams I worked with struggled with having cross-functional teams. Now that Scrum has been around for a few years, I see fewer companies attempting to have cross-functional teams.

When the Scrum Guide explains cross-functional teams, the description promotes iterative design, refactoring, collaboration, cooperation, and communication, but shuns handoff. All these things will reduce gaps and provide early, expanded testing communication and information, allowing fuller understanding; all this will reduce technical debt. Yet the way Scrum has generally evolved promotes handoff and less collaboration and communication, which will increase technical debt.

For integrated teams, this means sitting together, discussing, talking and refactoring. It means asking questions and driving the development by developing tests (TDD); it is absolutely iterative and full of refactoring. Anti-Agile is when developers work in isolation and hand off completed code to testers to validate and call done.

Handoff, otherwise known as AgileFalls, is a dirty word in Agile.

I was asked to help a company and found out, within the first half hour, that they had a programmer sprint, then a tester sprint. I said, "That sounds like waterfall." They totally misunderstood Scrum teams.

2 - The Cliff: a special ScrumBut situation

Testers still get time crunched. Back in the traditional software development days, test teams very often lost schedule time they had planned for. This continues as a common practice in the Agile world. The following graphs allow you to visualize this situation.

The Crunch

Hans Buwalda has often used these diagrams to describe problematic software development projects. In the planning stage, each phase or team gets its allotted time. When it comes to testing reality, requirements are defined late or added late, the design was late, or the code was late, and testers get crunched on time so the team won't slip the schedule.

The Cliff

A theoretical burndown chart has the same idea. Ideally, user stories and user story points get moved from "In Development" to "In Testing" at a somewhat steady pace, and are delivered over time. The troubling phenomenon common to so many teams these days is the cliff: testers wait and wait and, as the final days of the sprint approach, the bulk of user stories get dumped on them, with the expectation of full validation and testing as the sprint demo and review come up.

There is no way a test team can do an effective job at this point. Most teams in this situation, under pressure from product owners, customers, or whomever, make up quick and dirty rules:

- The story is "done but not tested." (ScrumBut)
- Break the story into two stories, the coding and the testing; the coding is done. (ScrumBut and AgileFalls)
- Say it's done, and if the PO finds a bug during the demo, write a new user story on that bug. (ScrumBut)
- Test it in the next sprint while waiting for new functionality. (AgileFalls)
- And many more creative and flawed ways to "count the story points for velocity," or say it's done and build more technical debt.

There is so much wrong with these solutions, so much ScrumBut and AgileFalls combined, that these situations need their own article on how to recognize and remediate them. We will discuss solutions to these problems later in the article, but for now, know that these situations are not normal, they are not Scrum, they are not good, and they need to be resolved in sprint retrospectives.

3 - Specific to automation

While many Agile teams take their development practices from XP (eXtreme Programming), such as TDD (test-driven development), CI (continuous integration), pair programming, sustainable pace, small releases and the planning game, there are several foundation practices that allow teams to achieve these goals: specifically, unit test automation, user story acceptance criteria automation, high-volume regression test automation and automated smoke tests (quick build acceptance tests for the continuous integration process).

Many test teams struggle with the need for speed in automating tests in Agile development. To create and automate tests quickly, some teams use unstructured record-and-playback methods, resulting in "throw-away automation." Throw-away automation is "quick and dirty," typically suitable only for the current sprint and created with no intention of maintenance. Struggling teams will be resigned to throw-away automation, or do 100% manual testing during a sprint and automate what they can, if they can, one, two, three or more sprints after the production code is written.

Automation suites that lose relevance with new functional releases, without enough time for maintenance, upgrades, infrastructure, or intelligent automation design, are a drain on resources. From my experience, test automation is rarely accounted for when product teams quantify technical debt. This is changing, and it needs to change more.

To remedy these problems, many teams are conducting test automation framework development in sprints independent of the production code. Since automation is software, its development can be treated as a separate development project supporting the production code. Using this approach, automation code should have code reviews, coding standards and its own testing; otherwise technical debt will accrue in the form of high maintenance costs.

I've always been amazed at the lack of coding standards, design and bug-finding work applied to automation code that is intended to verify production code created with rigorous processes. I hope I'm not the only one who sees the shortsightedness in this.

TEST AUTOMATION TECHNICAL DEBT REFERENCES

- "Software test automation in practice: empirical observations," Advances in Software Engineering, vol. 2010, 2010.
- "Empirical Observations on Software Testing Automation," in 2009 International Conference on Software Testing, Verification and Validation. IEEE, Apr. 2009, pp. 201–209.
- "Establishment of automated regression testing at ABB: industrial experience report on 'avoiding the pitfalls'," in Automated Software Engineering, 2004. Proceedings of the 19th International Conference. IEEE, Sep. 2004, pp. 112–121.
- "Analysis of Problems in Testing Practices," in 16th Asia-Pacific Software Engineering Conference. Los Alamitos, CA, USA: IEEE Computer Society, 2009, pp. 309–315.
- "Observations and lessons learned from automated testing," in Proceedings of the 27th International Conference on Software Engineering. ACM, 2005, pp. 571–579.

4 - "Done but not done done"

It could be said here that any team using the phrase "done but not done done" is building debt just by saying it! There is a mess building up around the Definition of Done, the DoD. The Scrum Guide stresses that a team needs to have a clear Definition of Done, but it's becoming obvious over time that teams don't always have one.
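To close the automation discussion above with something concrete, here is a minimal sketch of a structured CI smoke test using Selenium WebDriver and JUnit, the alternative to throw-away record-and-playback. The application URL and element IDs are hypothetical. Because the UI details live in one small class, a page change means one edit rather than re-recording every script, which is precisely the maintenance cost described above.

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertTrue;

    public class LoginSmokeTest {

        // All UI details live here. When the login page changes,
        // only this class needs maintenance, not every test.
        static class LoginPage {
            private final WebDriver driver;

            LoginPage(WebDriver driver) {
                this.driver = driver;
                driver.get("https://example.test/login");  // hypothetical URL
            }

            void loginAs(String user, String password) {
                driver.findElement(By.id("username")).sendKeys(user);
                driver.findElement(By.id("password")).sendKeys(password);
                driver.findElement(By.id("login-button")).click();
            }

            boolean dashboardIsShown() {
                return !driver.findElements(By.id("dashboard")).isEmpty();
            }
        }

        // A quick build acceptance check, suitable for every CI run.
        @Test
        public void userCanLogInAndReachDashboard() {
            WebDriver driver = new FirefoxDriver();
            try {
                LoginPage page = new LoginPage(driver);
                page.loginAs("smoke-user", "smoke-password");
                assertTrue("dashboard should appear after login",
                           page.dashboardIsShown());
            } finally {
                driver.quit();
            }
        }
    }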
