Thursday, 17 July 2008

Design for Testability - Dependencies

Hello,


This post is a continuation of previous posts about design-for-testability (here and here).

Control and observation are important characteristics in test automation. If our code under test makes an effort to facilitate them, we improve the testability of our code. Improved testability should make it easier to find errors in our code. The more errors we find early in the development cycle, the cheaper they are to correct. With early testing and defect correction, we get a good feel for the quality of the software. We can make educated decisions on when to promote the code to QA and/or production. When our customers can use our software to do their job, they create value for the company they work for ... (I will stop rambling here). But you see, testability (and in a broader sense quality assurance) should be baked into the software. The entire organization can benefit from it.

Let’s start with the “control” aspect of our code under test (CUT), and specifically with controlling the dependencies of our CUT on other parts of the program or the environment it runs in.

It is in the nature of an object-oriented language to build up a program from lots of different “units”, where some “units”, or rather classes, work closely together to make something happen. Object-oriented principles like single responsibility, information hiding, inheritance, etc. are common in this world. So it is quite natural that classes have dependencies on other classes. The term coupling is often used to describe the relationships between classes (inheritance, composition, etc.). Coupling is inevitable: if there were no coupling, we would end up with monolithic chunks of code; actually, they would be separate programs. Such large classes would be too complex, and the cohesion of their operations and member data would be very low.

In the realm of testability we sometimes want to be able to test a class in a controlled way, maybe even in total isolation. In order to do that, we must be able to control its dependencies. The fewer the dependencies, the easier that will be. Hence “managed” coupling can increase testability.

Programs run on machines and also make use of their environment: the file system, message queue systems, database systems, web services, etc. These resource dependencies are usually expressed in programs via string tokens, for example a folder name, a database connection string, a queue name, a web service URL, etc. A good practice, of course, is to externalize the values for these resources, for example in a configuration file. This practice is also beneficial for testability. Hard-coding resource strings in your code is bad programming practice anyway, because the resource strings often depend on where the program will actually run: development, production or ... testing. So controlling the resource tokens outside the program gives us flexibility when setting up tests.
.NET has excellent support for easily consuming these externalized resource identifiers in your programs. In the Visual Studio test framework you can let the tests pick up these configuration files so that your code under test uses the configured elements just as if it were running in its proper host (Windows app, web service, etc.). In other words, you copy the configuration info that the code under test needs from the base app.config or web.config into your test project and modify it as needed.
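As a minimal sketch, assuming a repository class and a connection string named "OrdersDb" (both names are my own illustrative choices, not from a real project):

    using System.Configuration; // requires a reference to System.Configuration.dll

    public class OrderRepository
    {
        private readonly string _connectionString;

        public OrderRepository()
        {
            // The resource token lives in app.config / web.config, e.g.:
            //   <connectionStrings>
            //     <add name="OrdersDb" connectionString="Data Source=.;Initial Catalog=Orders;Integrated Security=True" />
            //   </connectionStrings>
            // "OrdersDb" is an illustrative name.
            _connectionString =
                ConfigurationManager.ConnectionStrings["OrdersDb"].ConnectionString;
        }
    }

In the test project you copy that connectionStrings entry into the test project’s own app.config and point it at a test database; the code under test itself does not change.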

Let’s focus again on class dependencies. Dependencies are inevitable, but we can make them more manageable for testing purposes (or other reasons). In order to continue our discussion, let’s introduce some terms. If a class A depends on another class B to fulfil its functionality, A is called a client class and B is called a server class. During unit testing we want to test a class in isolation; therefore we want to substitute its server classes with stubs or mock objects. Hard-wired dependencies on server classes hinder this. A dependency on a server class is hard-wired if the server class cannot be substituted by some other class (e.g. a subclass of the server class) without changing the source code of the client class. Similar problems arise during integration testing if a server class should be stubbed because it is not test-ready. So how can we manage class dependencies in a way that benefits testability?
One of the problem areas lies in the construction of the dependencies: if the client class creates its server objects itself, we don’t control them. So our solution strategy should revolve around gaining control over the creation of the dependencies. Several techniques can help us come up with a solution:

  • Interface-based programming: Separating the interface from the implementation is a core principle. Our client class uses an abstraction of a server class (an interface), not a particular implementation of it (an object). So during testing we can create another implementation of the server class that implements the correct interface. This “test double” (Gerard Meszaros, xUnit Test Patterns) version of our server class can, for example, always return a fixed set of values, or simply ignore the fact that we asked it to do something. Interfaces are first-class citizens in .NET, so you should have no problems with this. But this technique is just a first step toward the solution: we still need to hook up an implementation of the server class into the client class at test time (the first sketch after this list shows this).
  • Service locator: We delegate the creation of objects to a specialized object that, for example, fabricates objects based on a string identifier (see the locator sketch after this list).
  • Provider model: A variation on interface-based programming: an abstract class with a concrete implementation hooked up at runtime via the configuration file. You can hook up a test-specific implementation of the provider in order to control the tests.
  • Manual dependency injection: We make another class responsible for establishing the link between the client and the server. We add a parameter of the server type to a method of the client class (a constructor, a method that requires access to a server instance, or a dedicated setup method), which allows other objects to set the link. During testing the parameter(s) can be used to establish a link to a “test double” instead, as the first sketch below illustrates.
  • Automatic dependency injection: A DI container framework takes care of the details of which objects are interconnected, so you can build each one independently. There is no need to pass the dependencies along in constructors or methods, or to assign properties with dependent objects yourself.
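To make the interface-based programming and manual dependency injection techniques concrete, here is a minimal sketch. All names (IMailSender, OrderService, MailSenderSpy) are illustrative examples of mine, not taken from a real project:

    public interface IMailSender
    {
        void Send(string to, string subject, string body);
    }

    // "Production" server class.
    public class SmtpMailSender : IMailSender
    {
        public void Send(string to, string subject, string body)
        {
            // ... talk to the real SMTP server here ...
        }
    }

    // Client class: it depends only on the abstraction and lets the caller
    // decide which implementation gets wired in (constructor injection).
    public class OrderService
    {
        private readonly IMailSender _mailSender;

        public OrderService(IMailSender mailSender)
        {
            _mailSender = mailSender;
        }

        public void ConfirmOrder(string customerEmail)
        {
            // ... business logic ...
            _mailSender.Send(customerEmail, "Order confirmed", "Thank you!");
        }
    }

    // Test double (a "spy" in Meszaros' terminology): it records the call
    // instead of sending real mail, so the test can observe what happened.
    public class MailSenderSpy : IMailSender
    {
        public int SendCount;
        public string LastRecipient;

        public void Send(string to, string subject, string body)
        {
            SendCount++;
            LastRecipient = to;
        }
    }

A unit test now simply news up a MailSenderSpy, passes it to the OrderService constructor, calls ConfirmOrder and asserts on SendCount and LastRecipient; no real mail server is involved.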
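The service locator idea could look like this hand-rolled sketch (illustrative only; a real locator would also handle missing registrations, lifetimes, etc.):

    using System;
    using System.Collections.Generic;

    public static class ServiceLocator
    {
        private static readonly Dictionary<string, Func<object>> Factories =
            new Dictionary<string, Func<object>>();

        // Application bootstrap code (or a test) registers which concrete
        // type a string identifier maps to.
        public static void Register(string key, Func<object> factory)
        {
            Factories[key] = factory;
        }

        public static T Resolve<T>(string key)
        {
            return (T)Factories[key]();
        }
    }

The client class asks the locator instead of newing up the server class itself, for example ServiceLocator.Resolve<IMailSender>("mailSender"), and a test just registers a test double under the same key before exercising the client.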

Still, there are some concerns (or trade-offs) you should take into account:

  • Information hiding: Why can’t we leave it up to A to know its own dependencies? Exposing the dependencies in a constructor can be viewed as a violation of encapsulation, because now a client of class A has to create instances of class B before calling the constructor of class A.
  • The ability to substitute a server class is not always important for the “production” implementation. So is testability a good enough reason for implementing this possibility?
    We could add a special constructor that accepts the dependencies, while the default constructor uses the hard-wired implementations (see the sketch after this list).
    However, if there are dependencies on many server classes, this approach results in too many parameters.
  • A DI framework to the rescue ... but someone still has to know the dependencies.
    That someone is an assembler object, or the DI framework itself, instructed via declarations or via configuration files.
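Here is a sketch of that “special constructor” compromise, reusing the illustrative IMailSender/SmtpMailSender types from the earlier sketch:

    public class InvoiceService
    {
        private readonly IMailSender _mailSender;

        // Production code path: convenient, hard-wired default implementation.
        public InvoiceService() : this(new SmtpMailSender())
        {
        }

        // Test code path: the caller supplies the dependency (a test double).
        public InvoiceService(IMailSender mailSender)
        {
            _mailSender = mailSender;
        }
    }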

There are still other things besides managed coupling you can work on to improve testability, and hence the quality of your software: cohesion, encapsulation, ... But that's for another post.

Maybe you have other techniques to improve testability? If you would like to share them, don't hesitate to drop a comment.

Thanks in advance.

Best regards,

Alexander
