Wednesday, 9 July 2008

Design for testability - developer testing context

Hello,

Some more thoughts on design for testability in the developer testing context (see previous post).

Test automation plays an important role in the developer testing context. The advent of the xUnit testing concept, NUnit, and the integration of unit testing into Visual Studio has brought developer testing more into focus.
A test case is a description that tells us how we can see whether the system has done something the way we expected it to.
Before executing a test, we must put the system into a state in which our test can actually operate. For example, before testing the deletion of an order in our order system, the system must contain an order that we know of, such as an order record in a database.
The act of deleting an order requires that we can tell the system which explicit order we want to delete. So not only do we need control of the initial situation, but we also need control over the input and over the place where the order is physically kept (i.e. our test database with an order record with ID ORD1).
After executing the functionality, we must be able to verify that the system has actually done its piece of work. We must “see” the result of its processing and compare this outcome with our initial expectations. A discrepancy can mean either that the behaviour of the system was not correct, or that the test case was not correct in terms of the initial situation, the actions we took, or the expected results.

An automated test (case) in xUnit terms can be viewed as follows (freely adapted from drawings found at http://xunitpatterns.com/ (Gerard Meszaros)).

Automated means that the important steps in the execution of a test are performed “automatically”. In other words, exercising a test case on a unit (a method) programmatically involves:

  • supplying the initial situation (setup)
  • doing the actions (execute)
  • comparing the actual results with expected values (assert)
  • cleaning up (teardown) so the next test case can proceed from a clean situation.
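As a sketch of these four phases, here is a minimal xUnit-style test in Python's unittest framework (itself an xUnit implementation), applied to the order-deletion scenario described above. The OrderRepository class and its methods are illustrative stand-ins, not part of any real order system.

```python
import unittest

# Hypothetical in-memory stand-in for the order system's storage.
class OrderRepository:
    def __init__(self):
        self._orders = {}

    def add(self, order_id, data):
        self._orders[order_id] = data

    def delete(self, order_id):
        del self._orders[order_id]

    def contains(self, order_id):
        return order_id in self._orders


class DeleteOrderTest(unittest.TestCase):
    def setUp(self):
        # Setup: put the system in a known initial state
        # (an order record with ID ORD1 that we know of).
        self.repository = OrderRepository()
        self.repository.add("ORD1", {"customer": "ACME"})

    def test_delete_removes_order(self):
        # Execute: the action under test.
        self.repository.delete("ORD1")
        # Assert: compare the actual outcome with our expectation.
        self.assertFalse(self.repository.contains("ORD1"))

    def tearDown(self):
        # Teardown: leave a clean situation for the next test case.
        self.repository = None
```

The framework calls setUp before and tearDown after every test method, so each test case starts from the same controlled initial situation.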

So how can we improve testability in a developer context? Which aspects of the code influence the ease of performing developer tests and facilitate the localisation of defects?

  • Control: in order to make automation possible, we need to be in control of the initial situation as well as of all the input that a method needs, so we can steer the processing of the unit in such a way that it will produce the outcome we expect.
  • Observation: in order to compare actual and expected results, we must be able to observe the outcome of the processing of a unit.
  • Complexity: network integration, database integration, message queuing, security, the registry, Windows services, COM+, … all make it difficult to set up a test environment.
    Large methods with complex conditional flow necessitate many test cases in order to get the desired test coverage.
    Inheritance: an abstract base class cannot be instantiated on its own.
    A large inheritance tree: many test cases are needed to see whether the combination of base and specialized code works as expected.
  • Isolation: the ability to isolate certain dependencies and replace them with test doubles improves testability, because we control what the test doubles will do. That way we can concentrate on the logic in the CUT. The DUT is replaced by a test double, and any calls from the CUT to the DUT (now the test double) result in “controlled answers or behaviour”. Hence our test case for the CUT will be easier to set up.
  • Separation of concerns: a class with a clear responsibility improves testability in terms of knowing what to test in the first place, and leaves us with smaller things to test.
  • Heterogeneity: sticking to a single language or language framework improves testability.
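The isolation point above can be sketched as follows. All class names here are illustrative assumptions, not from any real system: the CUT (OrderCalculator) depends on a DUT (PriceService) that in production would talk to a database or the network. Injecting a hand-rolled test double in place of the DUT gives controlled answers, so the test exercises only the CUT's own logic.

```python
import unittest

# Hypothetical dependency (the DUT): in production this would perform
# slow, hard-to-control work such as a database or network call.
class PriceService:
    def price_of(self, product_id):
        raise NotImplementedError("talks to the network in production")


# The CUT: its summing logic is what we actually want to verify.
class OrderCalculator:
    def __init__(self, price_service):
        self.price_service = price_service  # dependency is injected

    def total(self, product_ids):
        return sum(self.price_service.price_of(p) for p in product_ids)


# Test double: replaces the DUT and returns controlled answers
# instead of performing real calls.
class StubPriceService(PriceService):
    def __init__(self, prices):
        self.prices = prices

    def price_of(self, product_id):
        return self.prices[product_id]


class OrderCalculatorTest(unittest.TestCase):
    def test_total_sums_prices(self):
        stub = StubPriceService({"A": 10, "B": 5})
        calculator = OrderCalculator(stub)  # DUT replaced by the double
        self.assertEqual(calculator.total(["A", "B", "A"]), 25)
```

Because the dependency is passed in through the constructor rather than created inside the CUT, the test can swap it without touching any production code.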

If you have remarks or other thoughts, don't hesitate to drop a comment.

Best regards,

Alexander
