Thursday, 20 December 2007

MSDN session Application Lifecycle Management with Visual Studio Team System (19/12/2007)

Hello,

Yesterday I attended the MSDN evening session about Application Lifecycle Management (ALM) with Visual Studio Team System (VSTS).

It was very interesting and very well presented by Yves Goeleven (Compuware Belgium). He was very knowledgeable about the subject and VSTS in particular. His presentation was a good mix of slides and demos, so you got a good feel for what was actually meant.

Still I have some "concerns" about the subject. Concerns are formulated as "I wish I knew how to ...."

  • Position VSTS/TFS as an ALM tool or more as an SDLC tool? Yes, there are hooks (extensibility)....
  • Capture the requirements from the customer in VSTS? And manage them in VSTS? Yes, a work item will be the "materialization" of a requirement. But is this still the language of the customer?
  • To model use case diagrams, UML class diagrams, user stories
  • To integrate with operations (deployment, monitoring, incidents, capacity management)
I looked up the terms ALM and SDLC. Application Lifecycle Management (ALM) is the administration and control of an application from inception to its demise. It embraces requirements management, system design, software development, testing, configuration management and operating the information system. The following picture tries to capture that idea.


Key is that the life of an application does not start at the beginning of development. It starts somewhere in the “business”. The end of an application is not when it is delivered or even installed. It must be kept up and running to serve the business. It is also called a life cycle because in effect an “application” is never finished. It is not something like a tangible product that rolls out of a factory. It’s an ongoing process where new insights and requirements shape the life of an application, hence application life cycle management. As the business changes, information requirements change, and the cycle continues. Sometimes this results in the demise of an existing system and the birth of a new application.


The Software Development Life Cycle (SDLC), on the other hand, has everything to do with developing the information system. It gets kicked off by the business and hands the application over to operations. It is also a life cycle because, within one ALM cycle, it is the rare exception that everything is settled from the beginning. So there can be several iterations before an application gets delivered to the business. And of course a change request or new requirements can trigger a new ALM cycle.

In order to achieve business objectives within an organization via information systems, user functional and non-functional requirements must be defined in a consistent manner, prioritized and monitored. The administration and control of the information needs of users (requirements management) is therefore the primary motor of the ALM and de facto also of the SDLC. All the “knowledge” that is necessary to transform these requirements into an information system is produced during the SDLC and put into a repository (or a collection of repositories). This will form the test basis for the test cases that will be used to verify that the information system does what it needs to do.
The SDLC will see to it that the requirements are turned into architecture and design specifications, which will eventually lead to code doing what the business expects it to do. Herein lies the objective of testing. Testing is actually conducted throughout the SDLC. Test preparation can even start as soon as the requirements are written down (the testable-requirements concept). During development it takes the form of unit and unit integration testing or system testing (functional, security, performance, etc.). Before the application gets rolled out, the business itself can test in the form of a user (functional) acceptance test. Operations too can conduct acceptance tests that verify the operational requirements.

The input for establishing these test cases is the expectations of the business. Ideally these are written down in the form of requirements, detailed functional specifications and design documents. So there is a strong link between all the knowledge of what the system must do and all the information on how the system is going to be tested.


An integrated system (or an integrated set of tools) should bring some added value in managing this link, because the objective of testing is to report whether the system does what it needs to do. So traceability is a key feature of such a tool. Other benefits are accountability (who did what when) and visibility (what did we do so far).


So if we define testing as a kind of “fit-for-purpose” activity, testing is another gatekeeper in the life cycle. With the results of testing, the stakeholders (customer/project lead) can decide based on facts whether there will be a release or not (ALM cycle) or, within the SDLC cycle, whether something has to be corrected before release.


Monitoring the information system in production and the actual usage of the application will further keep the ALM cycle running by incident reporting (functional and non-functional).

So where does VSTS/TFS fit in this story?


I tried to use the following picture to get some (visual) answers.



I can see several aspects being covered by VSTS, while others (at this time) seem not to be (immediately) clear to me. Maybe this will change over time as I learn more about ALM and VSTS/TFS in particular? I would be glad to hear your insights on the subject.
Best regards,
Alexander

Wednesday, 5 December 2007

Trigger TimedSubscription programmatically

Hello,

Subscriptions are a way to kick-start Reporting Services into delivering a report to somewhere, instead of you asking for the report through a URL access request (portal or custom app). This is also called push versus pull.

You can configure a timed event to trigger those kinds of subscriptions. But on some occasions you want some "custom" event to trigger a subscription. You cannot configure something like that through the configuration tools (Report Manager or SQL Server Management Studio).

Lukasz Pawlowski (http://blogs.msdn.com/lukaszp/archive/2005/10/07/478391.aspx) explained a way to tell Reporting Services to kick off a subscription with the SOAP API of Reporting Services. There is a method called FireEvent that can trigger the subscription. Lukasz suggests creating a dummy schedule that will never run because it's scheduled in the past. The report subscription you want to launch uses this dummy schedule. In a program you call the FireEvent method on the Reporting Services web service .... and your report is delivered as you specified in your subscription.

I will show some screenshots how I did it, following Lukasz's tips.
So this is the report I want to be delivered on a file share in PDF.


First I make a dummy shared schedule. But one that will never fire a time event.




Now I create a new subscription on the report I showed before. It will use the shared schedule as Time Event generator.



In order to launch this subscription I must simulate a time event. The SOAP API of Reporting Services just gives me that.


This is a code snippet in VB.NET 2005 of how you could just do that.
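In outline it looks like this. A minimal sketch, assuming a Visual Studio web reference named ReportService2005 pointing to the ReportService2005.asmx endpoint; the subscription GUID below is a placeholder for your own.

Dim rs As New ReportService2005.ReportingService2005()
' Run under an account that holds the "generate events" privilege (see below).
rs.Credentials = System.Net.CredentialCache.DefaultCredentials

' "TimedSubscription" is the event type for schedule-driven subscriptions;
' the event data is the SubscriptionID (a GUID) of the subscription to fire.
Dim subscriptionId As String = "your-subscription-guid-here"
rs.FireEvent("TimedSubscription", subscriptionId)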


It is important that you use the ID (i.e. the GUID) to specify the subscription. You can also look it up in the Subscriptions table of the reportserver database.


But before you go off and launch this code, one thing is left to be configured (besides the folder share, of course). The account under which this program executes will need the "generate events" privilege at the Reporting Services level. Because this is a single-machine demo setup (November 07 CTP of Rosario), the user tfssetup is also a system administrator on RS, so I just checked the checkbox for generating events on the RS system administrator role.


The code snippet you just saw will actually write an entry in the reportserver Event table. The RS service will pick it up and act upon it. In our example it will write a PDF file into the file share folder (on the local disk).



Have fun.
Best regards,
Alexander




Tuesday, 4 December 2007

Navigation button Visual Studio 2005

Hello,

When you open a freshly installed Visual Studio 2005, you might miss certain things. For example the navigate-backward and navigate-forward buttons. Very handy when you navigate through your code.


You can easily add those buttons, but it is nevertheless a bit strange that they are not included by default (at least not on my machine). In VS 2008 (at least in the Rosario November edition) they are present and even a little bit smarter too. You can choose where you want to go back to.


Open the "tools" menu and go to the "customize" menu-item.In the "view" category you will find the two missing buttons. When you select them , you can drag-and-drop them on your toolbar.

In Visual Studio 2008 you get the navigation buttons by default and a little bit more!







Best regards,




Alexander

Friday, 16 November 2007

Refactor!™ for Visual Basic® .NET 2008 and 2005

Hello,

In VS 2005 you have refactoring support in C#, but there was none out-of-the-box for VB.NET. In VS 2008 it is still the same (at least in the Beta edition) for VB.NET. You still have to resort to third-party products. Something odd to me, because I would think that they have had plenty of time to see how their C# colleagues did it. I guess they give the Visual Studio ecosystem some crumbs from the table ...
DevExpress offers a FREE refactoring product for Visual Studio. There was already a free version for VS 2005. Now they have updated their refactoring utility for VS 2008. The good news is that you can also use it in VS 2005. Moreover, by registering, you are entitled to download and use a set of bonus refactorings for Visual Basic .NET including:
  • Create Method Contract
  • String Composition to String.Format
  • Rename Local
  • Safe Rename
Check out their web-site (http://www.devexpress.com/Products/NET/IDETools/VBRefactor/) for a complete overview of the refactors the add-on offers.
The latest incarnation is called Refactor! Free VB.NET with Bonus Refactorings v2.5.8, released 11 October 2007
  • Standard Version for Visual Studio 2005, 2008 (size 20,197,328 bytes)
  • Bonus Refactorings for Visual Studio 2005, 2008 (size 222,550 bytes) Only after registration.
An example of a refactoring from the bonus pack: String Composition to String.Format. Very handy!
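Roughly, the refactoring turns string concatenation into a format call; a before/after sketch with invented variable names:

' Before: string composition through concatenation
Dim message As String = "Dear " & customerName & ", your order " & orderNr & " has shipped."

' After the String Composition to String.Format refactoring
Dim message2 As String = String.Format("Dear {0}, your order {1} has shipped.", customerName, orderNr)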
Here are some screenshots taken while executing the String.Format refactoring.




Regards,

Alexander

Monday, 22 October 2007

RDB Transactions [HYC00]

Today I got an error message when I tried to connect to an RDB data server through the ADO.NET ODBC bridge, wrapped in a System.Transactions scope. I'm using the Oracle RDB ODBC driver 3.02.

So the following piece of code gives an error



Using tx As New TransactionScope
    Using con As New Odbc.OdbcConnection
        con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
        con.Open()
        ' do your work here
        tx.Complete()
    End Using
End Using


The following error is thrown: System.Data.Odbc.OdbcException: ERROR [HYC00] [Oracle][ODBC]Optional feature not implemented. This means that the driver does not support the version of ODBC behavior that the application requested.


But when I slightly change the code, it works fine:



Using con As New Odbc.OdbcConnection
    con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
    con.Open()
    Using tx As New TransactionScope
        ' do your work here
        tx.Complete()
    End Using
End Using


Using the OdbcTransaction object also works fine:


Dim tx As Odbc.OdbcTransaction = Nothing
Dim con As System.Data.Odbc.OdbcConnection = Nothing
Try
    con = New System.Data.Odbc.OdbcConnection
    con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
    con.Open()
    tx = con.BeginTransaction
    ' do your work here
    tx.Commit()
Catch ex As Exception
    If tx IsNot Nothing Then tx.Rollback()
    Throw
Finally
    If tx IsNot Nothing Then tx.Dispose()
    If con IsNot Nothing Then con.Dispose()
End Try


So, moral of the story ... somewhere System.Transactions uses OdbcTransactions?

Any info is welcome!


Regards,

Alexander

Friday, 19 October 2007

VISUG-devmatch 18/10/2007 - Datasets vs. OO

During this evening event, the Belgian Visual Studio User Group (http://www.visug.be/) invited two speakers to present their views regarding dataset-oriented development and a more object-oriented way of working respectively.



Kurt Claeys (Ordina) was the advocate for the dataset approach while Yves Goeleven (Compuware) "preached" the Domain-Driven Design (DDD) way. Both did a good job!



Benefits (What did I learn)


  • Use the dataset approach in applications that need to be built fast.
  • Use the dataset approach if you are more at ease with the relational model than with OO models.
  • Use a dataset to persist information on a PDA. You can use the dataset XML features for that (see the sketch after this list).
  • Check out the new dataset features in VS 2008. You can separate the generated TableAdapters from the dataset type into different projects. So you can make an assembly with dataset types that are shared across several layers and/or tiers. There is also a "manager" that handles multi-table updates.
  • Microsoft continues to invest in datasets (VS 2008, LINQ to DataSets).
  • But Microsoft also tries to keep up with the DDD camp (Entity Framework).
  • There is also stuff in between: LINQ to SQL.
  • Domain-Driven Design (DDD) is object-oriented design with emphasis on the problem domain. The domain model is the basis. I should catch up on that (http://www.domaindrivendesign.com/).
  • DDD is difficult. Expect a big learning curve.
  • Don't expose the domain layer directly to the user-interface layer. You should consider other objects to transport the information of an entity (keep DDD for the principal domain objects representing something valuable for the business: customer, order, etc.).
  • Use an ORM to help you bridge the object-relational mismatch.
  • You need involvement from all stakeholders in DDD to practice the Ubiquitous Language principle.
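As a small sketch of the dataset XML point above (table layout and file name invented), persisting a dataset to XML and reading it back is built in:

' Build a throw-away dataset; on a real PDA this would hold your working data. (Imports System.Data)
Dim ds As New DataSet("Orders")
Dim orders As DataTable = ds.Tables.Add("Order")
orders.Columns.Add("Id", GetType(Integer))
orders.Columns.Add("Customer", GetType(String))
orders.Rows.Add(1, "Contoso")

' WriteXml/ReadXml come with the DataSet; writing the schema along makes the file self-describing.
ds.WriteXml("orders.xml", XmlWriteMode.WriteSchema)

Dim reloaded As New DataSet()
reloaded.ReadXml("orders.xml")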



Concerns (What did I miss)

  • Examples of DDD applications (or parts of one) to see the differences side by side. Kurt prepared a lot of example applications to support his viewpoints.

Best regards,

Alexander

Wednesday, 17 October 2007

Datasets versus custom objects

Here is a list of benefits and concerns for each approach that I compiled from various Internet sources and books. If you see other points or disagree with (some) things, don't hesitate to drop a line.

Datasets (.NET2.0)

Benefits



  • Disconnected data-container out-of-the-box
  • Typed datasets enhance ease-of-programming compared to "plain" datasets
  • TableAdapters are automatically generated
  • Visual designer in VS.NET to create typed datasets manually or via the Server Explorer
  • The Add New Data Source wizard creates typed datasets from a database schema.
  • Full two-way data-binding capabilities
  • Built-in support to receive notifications when something changes inside.
  • Search/sorting capabilities out-of-the-box (with some help from the DataView class)
  • Work together with data adapters to retrieve and persist data.
  • You can define relationships between tables (referential constraints)
  • You can define constraints on columns (uniqueness)
  • Can hold many kinds of data types (.NET Framework)
  • The DataSet is serializable out-of-the-box (also binary)
  • Has integrated XML capabilities
  • Has built-in support for optimistic concurrency
  • You can add custom logic in typed dataset partial classes
  • Using annotations, you can change the names of contained objects to more meaningful names without changing the underlying schema, making code easier for clients to use.
  • Handles NULL values out-of-the-box
  • Lots of third-party controls support datasets
  • Multiple versions of a column value can exist and are available out-of-the-box (row state, GetChanges, Merge; see the sketch after this list)
  • With SqlDataAdapters you can load multiple tables into a dataset at once
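A small sketch of the row-versioning point above (table and values invented):

Dim ds As New DataSet()
Dim customers As DataTable = ds.Tables.Add("Customer")
customers.Columns.Add("Name", GetType(String))
customers.Rows.Add("Contoso")
ds.AcceptChanges() ' current values now count as "original"

customers.Rows(0)("Name") = "Fabrikam"
Console.WriteLine(customers.Rows(0).RowState)                         ' Modified
Console.WriteLine(customers.Rows(0)("Name", DataRowVersion.Original)) ' Contoso
Console.WriteLine(customers.Rows(0)("Name", DataRowVersion.Current))  ' Fabrikam

' GetChanges extracts just the delta, e.g. to hand to a data adapter for persisting.
Dim delta As DataSet = ds.GetChanges(DataRowState.Modified)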

Concerns



  • Everything is represented as the .NET Object type, so there is a lot of casting/boxing going on
  • Datasets used in web services are not ideal from an interoperability standpoint
  • The relational API may be too restrictive (Tables and Relations properties). Need for special methods to represent/manipulate business entities.
  • The DataSet is tied directly to the database model. Abstractions are more difficult; you must adhere to thinking in tables and related concepts.
  • Inheritance: your typed dataset must inherit from DataSet, which precludes the use of any other base classes.

Custom Collections

Benefits



  • Provide the means to expose data in easy-to-access APIs without forcing every data model to fit the relational model. You can still make a one-to-one mapping with the database, but you can more easily use OO techniques to model your problem domain.
  • Advanced relationships like inheritance are possible.
  • You can add any behaviour that is needed. Custom entities can contain methods to encapsulate (simple) business rules. Custom entity classes can perform simple validation tests in their property accessors to detect invalid business entity data (see the sketch after this list).
  • A custom class can be marked as serializable
  • Code can be easier to understand/maintain
  • LINQ to SQL and the Entity Framework are features of future versions of .NET, so MS recognizes the benefits of this way of working (or is it under market pressure :)
  • Using custom classes makes for easier unit testing
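A minimal sketch of the validation point above (class and rule invented):

<Serializable()> _
Public Class Customer
    Private _name As String

    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            ' A simple business rule guarded in the property accessor.
            If String.IsNullOrEmpty(value) Then
                Throw New ArgumentException("A customer must have a name.")
            End If
            _name = value
        End Set
    End Property
End Class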


Concerns



  • Programming effort can be bigger than in an all-dataset scenario
  • You must implement certain interfaces in order to provide effective containment and data-binding capabilities
  • Mapping custom collections and entities to a database can also be a complicated process that can require a significant amount of code (need for a code generation/ORM tool)
  • To support optimistic concurrency, time stamp columns must be defined in the database and included as part of the instance data.
  • Support for multiple versions of data state within an entity must be coded.
  • No searching and sorting of data out-of-the-box. You must define your own mechanism to support searching and sorting of entities.

Sources & recommended reading

Monday, 15 October 2007

SSIS and RDB

Hi,

Facing some problems with the connectivity towards RDB, we went looking for solutions.

Problems

  • Schema information did not appear in the SSIS wizards/tools. For example, character columns were represented as nvarchar(0).
  • Precision problems with numeric columns

Alternatives

After scanning the Internet for solutions, we came up with the following alternatives:
  • Buy an OLE DB driver for RDB (Attunity)
  • Emulate RDB as an Oracle database and use the OLE DB driver for Oracle
  • Use the new ODBC driver for RDB (3.0) from Oracle

Solution

Fortunately the third option worked for us (and it's free!).

Check out
http://www.oracle.com/technology/software/products/rdbodbc/index.html

Regards,

Alexander

Friday, 12 October 2007

SAI conference : KBC .NET factory

KBC .NET software factory

Yesterday (11/10/2007) I attended a conference organized by SAI (www.sai.be) about Agile development and a .NET software factory as used at the KBC group (www.kbc.be) , a banking and insurance group.

The presenters (Peter Bauwens, Jan Laureys, Kim Verlot) did a very fine job explaining and sharing their experiences with setting up a software delivery & maintenance center for .NET applications with the KBC .NET software factory.

I'll try to sum up some points in a benefits & concerns style, meaning the things I've learned and the things I wish I understood better.

Benefits

  • Software factories (in the technology sense of the word) alone don't cut it. There are also a process part and an organization (people) part that are equally, if not more, important than the technology at hand.
  • Pragmatism is key. They were not afraid to mix more "classical" engineering approaches into their agile methodology. The key phrase of Jan was: being agile in an agile way. For example, Scrum was expanded with an envisioning period and a kind of requirements-gathering period. After the Scrum sprints, a phase for stabilization and transition was included.
  • Visual Studio Team Foundation Server was a big enabler in this way of working
  • Only allow a fixed application architecture with some variability (WinForms, ASP.NET, batch)
  • Always question your factory: it is never finished. Incorporate the feedback of the developers.
  • Only work with people who believe in this way of working, because the technology is not the biggest hurdle (you have a lot of training possibilities); the process asks for a particular kind of mentality.

Concerns

  • Having the authority to impose certain things (process, technology) requires management involvement from day one. I wish I knew how to make a business case to invest in a software factory and an SDC & Maintenance center.
  • While having a very flat project structure (project administrator, lead developer, developers) makes communication easy, I nevertheless found the "burden" and responsibility they put on the LDEV very big. He combines, in various degrees, technology, process and domain knowledge. While writing user stories and talking (and thinking) with the "business", he was also responsible for keeping the process on track and preferably also knew the technology. I did not hear anything about a system analyst, functional analyst or business analyst being part of the team writing the use cases. I wish I knew how I could manage that.
  • Project assignments go through a process of determining their criticality before being assigned to the development areas within the KBC group (low-level SLA applications and high-level SLA applications). Until now the delivery centers worked on LOLA apps. I wish I knew how I could organise a delivery center for HILA apps with .NET as the major technology platform.
  • The technology pillar changes at a relatively fast pace. When they started in 2005/2006, some technology from Microsoft wasn't mature enough (Microsoft software factories). I wish I knew if it would be a safe bet now to go along that path to build our software factory/delivery center.

Once again, congratulations to the presenters.

Best regards,

Alexander Nowak

Sunday, 7 October 2007

Unit tests (Part 2)

Working definition

Unit testing is a procedure used to validate that “units” of source code are working properly. More technically, one should consider a unit to be the smallest testable part of an application. In an object-oriented program, the smallest unit is a class or, more interestingly, its operations.

Unit testing is done by the developers, not by end-users or a separate QA team. A unit test is a piece of code that calls the unit under test in a particular way in order to find errors during development. Each unit test verifies that the unit returns the predicted answer.

Automation is key to the success of unit testing because it gives the developers a way to create structured unit tests that can be executed easily and repeatably and can be verified through assertions in the test code.
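For example, a minimal NUnit test in VB.NET could look like this (the Calculator class is invented for the sake of the example):

Imports NUnit.Framework

<TestFixture()> _
Public Class CalculatorTests

    <Test()> _
    Public Sub Add_TwoPlusThree_ReturnsFive()
        Dim calc As New Calculator()
        ' The assertion encodes the predicted answer; the test runner reports any mismatch.
        Assert.AreEqual(5, calc.Add(2, 3))
    End Sub
End Class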

Benefits

The goal of unit testing is to show that the individual parts of the code are working correctly. In doing unit testing you gain several benefits:


  • Raises the QA mindset level of the developer
  • Raises confidence in the quality of the code
  • Tests are run repeatedly
  • Early-warning detection system for programming and/or design errors or flaws
  • Living documentation on the API of the “units”
  • Aid for regression testing
  • Code refactoring is more stimulated


When writing unit tests you should think like a tester, not just a developer. The time you take to design your unit tests will help reduce the time spent resolving defects later. Focus on the details of your objects: How is data transferred between them? Who consumes them? How easily can you break the object? What happens if I "do this"?

When a developer is programming some functionality, he usually writes, in some way or another, some testing code to see whether his implementation does what it is supposed to do and doesn’t produce errors. The developer then moves on to program the next piece of functionality. Again he assures himself that it runs as expected. Maybe the developer moves on to other functionality without coming back to test the previous piece of code until release date. With automated unit testing, the developer has a method so the tests can be run repeatedly as the code is developed.

When you have a suite of unit tests that effectively tests your code, you can be confident that your code will have a low likelihood of errors. The confidence in the quality of the code increases.

The act of writing tests often uncovers design or implementation problems. The unit tests serve as the first users of your system and will frequently identify design issues or functionality that is lacking.

Once a unit test is written, it serves as a form of documentation for the use of the target system. Other developers can look to unit tests to see example calls into various classes and members.

Perhaps one of the most important benefits is that a well-written test suite provides the original developer with the freedom to pass the system off to other developers for maintenance and further enhancement. Should those developers introduce a bug in the original functionality, there is a strong likelihood that those unit tests will detect that failure and help diagnose the issue. Meanwhile, the original developer can focus on current tasks.

It takes the typical developer time and practice to become comfortable with unit testing. Once a developer has been saved enough time by unit tests, he or she will latch on to them as an indispensable part of the development process.

Unit testing does require more explicit coding, but this cost will be recovered, and typically exceeded, when you spend much less time debugging your application. In addition, some of this cost is typically already hidden in the form of test console- or Windows-based applications. Unlike these informal testing applications, which are frequently discarded after initial verification, unit tests become a permanent part of the project, run each time a change is made to help ensure that the system still functions as expected. Tests are stored in source control very near to the code they verify and are maintained along with the code under test, making it easier to keep them synchronized.

Unit tests are an essential element of regression testing. Regression testing involves retesting a piece of software after new features have been added to make sure that new bugs are not introduced. Regression testing also provides an essential quality check when you introduce bug fixes in your product.

It is difficult to overstate the importance of comprehensive unit test suites. They enable a developer to hand off a system to other developers with confidence that any changes they make should not introduce undetected side effects. However, because unit testing only provides one view of a system's behavior, no amount of unit testing should ever replace integration, acceptance, and load testing.

Along the same line, refactorings (changing code without changing the intent) to improve the design in terms of maintainability, performance, etc. are more stimulated if you have a battery of unit tests. You’re more confident when you refactor some code. You can easily check whether the code change was done correctly.

Automated unit testing should help reduce the amount of time you spend in the debugger. However, if testing results and code coverage do not help in providing the reason why your test is failing, don't be afraid to debug your unit tests. In Visual Studio 2005 Team System, developers can debug their unit testing assemblies using the Debug Selected Tests option in Test Manager. You can also debug NUnit tests.


Concerns

Unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors (except when you consider a component or subsystem as a unit), performance problems or other system-wide issues. In addition, it may not be easy to anticipate all special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.

It is unrealistic to test all possible input combinations for any non-trivial piece of software. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors. Writing unit tests for every single class to test every aspect of every function is a tall order. Even with a large number of unit tests, is complete testing possible (or desirable)?

What if there are bugs in the unit tests? Where there is code, there will be bugs. You’ll be getting false positives.

Not all code can be unit tested (at least not easily). It could, of course, be argued that in this case, the code should be simplified until it can be unit tested.
Passing a unit test only means that the test itself did not catch any problem: all the assertions were correct. This doesn’t mean that there are no bugs. The success depends on the quality of the unit test as well.


Automation

Unit testing in spirit has existed since the beginning of programming. Every environment, development shop or even programmer had/has its own way of conducting these kinds of tests, for example creating a form with 20-odd buttons calling various parts of your code base.
This “form with buttons where each button exercises a piece of the code to be tested” approach works, but you need a manual operation for each single test: clicking the button. The main problem is that you need the discipline to click every button after every change. Secondly, it is very easy to drag a button onto a form and hook up an event handler, leaving the name of the button at its default. If you revisit the form after some time, chances are you don’t remember the meaning behind the button. It will be even more difficult for someone else to interpret the tests behind all the buttons. Thirdly, to know whether a test succeeded or not, you had to make assertions in your code or make the assertion visually, and you also needed a way to give feedback to the developer (e.g. the infamous MsgBox).

Third-party vendors of course offer solutions to the testing problems, but they are usually very expensive and introduce their own proprietary scripting language to conduct tests.

The xUnit framework was introduced as a core concept of eXtreme Programming in 1998. It introduced an efficient mechanism to help developers add structured, efficient, automated unit testing to their normal development activities. This toolkit lets you develop tests in the same language and IDE that you are using to develop the application. The xUnit framework defines the concept of a "test runner" as the application responsible for (a) executing unit tests, and (b) reporting on the test results. Automated unit tests are based on "assertions," which can be defined as "the truth, or what you believe to be the truth." From a logic standpoint, consider this statement: "when I do {x}, I expect {y} as a result."

The atomic parts of a unit test fixture are the test methods that make assertions about the behavior of some (or all) public members of the unit you're testing. A test suite is a library assembly that contains one or more such test fixtures, along with optional setup and teardown methods. You normally run this assembly from a GUI app, but these unit tests can also run from a command line and be activated from other applications (build process).
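In VB.NET with NUnit such a fixture could look like this (the repository class is invented); the setup and teardown methods run around every test in the fixture:

Imports NUnit.Framework

<TestFixture()> _
Public Class OrderRepositoryTests
    Private _repository As OrderRepository

    <SetUp()> _
    Public Sub Init()
        ' Executed before each test method.
        _repository = New OrderRepository()
    End Sub

    <TearDown()> _
    Public Sub Cleanup()
        ' Executed after each test method.
        _repository = Nothing
    End Sub

    <Test()> _
    Public Sub NewRepository_ContainsNoOrders()
        Assert.AreEqual(0, _repository.Count)
    End Sub
End Class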

With the help of this testing automation framework your unit tests are:

  • Written in the same language as the tested unit.
  • Structured.
  • Self-documenting.
  • Automatic and repeatable.
  • Designed to test positive and negative actions.

Open questions

  • Testing should become an integral part of the programming effort, not something that comes afterwards. But how can you educate your team? How can you convince management?
  • How to ensure that having a lot of unit tests does in fact make tracking bugs easier?
  • How to make sure your tests are easy to maintain?
  • How to write readable unit tests?

References and recommended reading

Friday, 5 October 2007

Testing types classification

There are different types of tests you can conduct. These test types have different objectives and characteristics. There are several classification criteria to organize these types of tests.




  • Who uses the information of the test results? (development team, end-user, operations, audit firms, etc.)

  • Phase in the development cycle (requirements gathering, analysis, design, construction, etc.)

  • Phase in the life cycle of an application (new versus maintenance)

  • Who establishes and/or executes the tests? (development team, test organization, end-user)

  • Dynamic tests versus static tests

  • Functional tests versus non-functional tests (performance, load, security, etc.)

Sometimes a classification name means one thing to one person and another thing to another. I tried to compile a list of test types from several Internet resources and books. I don't give any formal definitions but try to specify each test type through its characteristics:



  • Objective

  • How are the tests executed?

  • Who executes the tests?

  • Who is the main interested party in the test results?

  • Where are the tests executed?

  • When are the tests executed?



Static testing



  • To find errors before they become bugs, early in the software development cycle.
  • Code is not being executed.
  • Usually takes the form of code reviews (self-review, peer review, walk-throughs, inspections, audits) BUT can also be conducted at the level of requirements, analysis and design.
  • Usage of checklists for verification.
  • Code analysis preferably executed through tools.
  • Executor: individual developer, development team, business analyst / customer in case of requirements testing, business analyst in case of analysis reviews, architect/senior designer in case of design reviews
  • Primary beneficiary is the project team itself
  • Can be executed on the private development environment
  • Can be executed in a separate environment (test environment, build environment in a Continuous Integration scenario)
  • Code reviews usually in the construction phase; audits tend to come later in the cycle.
  • Reviews of requirements, analysis and design are executed early in the project


Unit testing



  • A.k.a. program testing, developer testing
  • The main objective is to eliminate coding errors
  • Testing the smallest unit in the programming environment (class, function, module)
  • Typically the work of one programmer
  • In an OO environment, usage of mock libraries/stubs to isolate units
  • Executed by the developer
  • Feedback immediately used by the developer.
  • Test results are not necessarily logged somewhere.
  • Executed on the private development environment, so tested in isolation from other developers.
  • But can be executed in a separate environment (build environment in a Continuous Integration scenario)
  • Executed during the construction phase


Unit Integration testing



  • A.k.a. component testing, module testing
  • To check if units can work together
  • It is possible for units to function perfectly in isolation but to fail when integrated
  • Typically the work of one programmer
  • Executed by the developer
  • Test results immediately used by the developer
  • Test results are not necessarily logged somewhere.
  • Executed on the private development environment, so tested in isolation from other developers.
  • Can be executed in a separate environment (build environment in a Continuous Integration scenario)
  • Executed during the construction phase


System testing



  • Compares the system or program to the original objectives
  • Verification against a written set of measurable objectives
  • Focuses on defects that arise at this highest level of integration.
  • The execution of the software in its final configuration, including integration with other software and hardware systems
  • Includes many types of testing: usually a strong distinction between functional and non-functional requirements like usability, security, localization, availability, capacity, performance, backup and recovery, portability
  • Who tests depends on the subtype, but usually a separate tester role within the project team
  • Test results are used by the project team or operations (performance, volume -> capacity)
  • Executed on a separate test environment
  • Generally executed towards the end of construction, when the main functionality is working.


Functional testing



  • Finding discrepancies between the program and its external specification
  • Test results used primarily by the project team
  • Focuses on validating the features of an entire function (use case)
  • Main question: is it performing as our customers would expect?
  • Black-box approach
  • Single-user test; it does not attempt to emulate simultaneous users
  • Looks at the software through the eyes of the targeted end-user.
  • Preferably a separate tester (non-biased)
  • Separate test environment
  • Generally more towards the end of construction



Beta testing


  • Check how the software affects the business when real customers use it
  • "Test results" used by the customer and the project team
  • Usually not a completely finished product
  • Sometimes used in parallel with the previous application on the end-user environment.
  • Usually not structured testing: no real test cases established; the application is used in everyday scenarios. Although some end-users use implicit error-guessing techniques to discover defects.
  • Executed by selected end-users
  • In a separate test environment or the real production environment
  • Can be conducted after some construction (iterations) but normally before formal acceptance tests.


Acceptance Testing



  • Executed by the customer or someone appointed by the customer (not the delivering party)
  • To determine whether the software meets the customer requirements and whether the user accepts (read: pays for) the application.
  • Who defines the depth of the acceptance testing?
  • Who creates the test cases?
  • Who actually executes the tests?
  • What are the pass/fail criteria for the acceptance test?
  • When and how is payment arranged?
  • Test results are primarily used by the customer but are handed over to the project team. It could be that the application is accepted under certain conditions, a.k.a. known bugs.
  • Separate test environment or production environment
  • Executed after complete testing by the delivering party



Performance testing



  • Searches for bottlenecks that limit the software’s response time and throughput
  • Results primarily used by the development team and operations
  • To identify the degradation in software that is caused by functional bottlenecks and to measure the speed at which the software can execute
  • A kind of system test
  • Mimic real processing patterns, mimic production-like situations
  • To identify performance strengths and weaknesses by gathering precise measurements
  • Doesn’t intentionally look for functional defects
  • Executed by a separate test role
  • Executed on a separate test environment
  • Generally towards the end of construction, unless performance-critical components are identified beforehand (calculations, external connectivity through WAN/Internet, etc.)


Usability testing



  • Check whether the human interface complies with the standards at hand and the UI is easy to work with
  • A kind of system test
  • Test results serve the development team and the end user
  • Components generally checked include screen layout, screen colours, output formats, input fields, program flow, spellings, ergonomics, navigation, and so on
  • Preferably a separate tester (non-biased) conducts these tests (a speciality)
  • Executed on a separate test environment
  • Generally towards the end of construction


System Integration testing



  • To check if several sub-systems can work together as specified
  • A kind of system test
  • Development team
  • Larger-scale integration than unit integration
  • Generally combines the work of several programmers.
  • A separate test role within the project team (delivering party)
  • Test results used by the project team
  • Executed on a separate test environment
  • During construction, when workable sub-systems are ready to be tested (i.e. basic functionality works in the happy path)



Security testing



  • Tries to compromise the security mechanisms of an application or system.
  • A kind of system test
  • Need for a separate test environment to mimic the production environment
  • Separate test role
  • Executed at the end of construction, unless security is primordial in the application



Volume testing



  • Other terms or specialisations: scalability testing, load testing, capacity testing
  • To determine whether the application can handle the volume of data specified in its objectives (current + future). So what happens to the system when a certain volume is processed over a longer period of time (resource leaks, etc.)?
  • Test results are important for the project team and operations
  • Volume testing is not the same as stress testing (which tries to break the application)
  • Separate test role
  • Executed on a separate test environment
  • Generally towards the end of construction, unless load/scalability is a critical success factor


Stress testing



  • To find the limits of the software regarding peak data volumes or maximum numbers of concurrent users
  • Interested parties are the project team and operations
  • Creating chaos is the key premise.
  • Go beyond potential processing patterns
  • Try to demonstrate that an application does not meet certain criteria, such as response time and throughput rates, under certain workloads or configurations
  • A kind of system test
  • Doesn’t intentionally look for functional defects
  • Need for a separate test environment to mimic the production environment
  • Special test software used to simulate load/concurrent users
  • Separate test role
  • End of construction

Availability testing



  • To assure that our application can continue functioning when certain types of faults occur.
  • Tests in function of fail-over technology.
  • Doesn’t intentionally look for functional defects
  • Need for a separate test environment to mimic the production environment
  • Special test software used to simulate certain faults (network outage, etc.)
  • Separate test role
  • End of construction



Deployment testing



  • A.k.a. installation testing
  • To verify installation procedures
  • Important for the release manager and operations (start-up / business continuity)
  • Checking automated installation programs
  • Configuration metadata
  • Installation manual review
  • Conducted by a separate test role
  • Separate environment by default (clean-machine principle)
  • During construction, at stage promotion for example


Regression testing



  • To avoid software going into regression in light of changes or bug fixes
  • Regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the way that was previously planned. Typically, regression bugs occur as an unintended consequence of program changes.
  • To make sure a fix/change correctly resolves the original problem reported by the customer.
  • To ensure that the fix/change does not break something else.
  • On different levels: unit, integration, system, functional
  • Executor depends on the level of the test
  • Primary interested party depends on the level of the test
  • Executed during construction or maintenance

Several Other Specialised tests


  • Testing for operational monitoring: do faults we simulate find their way into the monitoring environment of our application?
  • Testing for portability: does our product work on the targeted operating systems?
  • Testing for interoperability: for example, does our software work with different targeted database systems?
  • Testing the localized versions.
  • Testing for recoverability after certain system failures.
  • etc.

Thursday, 4 October 2007

Test Specification techniques

Context

I assembled some information about techniques you can use to specify test cases that find out whether the code contains errors, while assuring that the test cases cover all the code and possible conditions.

Generally the techniques are divided into two groups: black-box and white-box testing techniques, where black-box techniques treat the unit only from an API standpoint and white-box techniques require full understanding of the code.

It is important to note that some formal techniques can take a lot of effort, so I suggest a more pragmatic approach based on a combination of techniques.

There are different kinds of testing but for the discussion of this text we will use the following taxonomy.

  • Unit testing is a process of testing the individual subprograms (classes), subroutines or procedures (methods) in a program. Most of the time these “units” are tightly coupled with other modules. Sometimes the latter are stubbed out to assure that we only test the unit under test and not the secondary units. Unit integration testing concerns the inter-operation between the different units a developer has made.
  • Component testing or subsystem testing (unit integration) verifies a particular portion of the application, for example the data access layer code. This, for example, requires the integration of a database server.
  • A system test is a higher-level test that verifies non-functional characteristics at the entry point of a system, for example performance, stress, volume, etc.
  • Functional tests exercise the system much like a client would use the application: things like the order of input, user error messages, etc.

Some of the techniques discussed are more fit for a particular type of testing than others. I will concentrate on the techniques you could use in developer testing (unit and unit integration testing).

Test psychology


Is testing something we do to show that a piece of program is working as expected, or that it contains no errors, or to find (all) errors? Or everything mentioned?

This might seem like subtle differences in semantics to some readers, but it is actually quite important because it gives you a different starting point when specifying test cases. Testing should have something destructive in nature. You’re going to write test cases to break the code; at least that should be your mindset while testing.

Testing has the purpose of increasing the reliability of your code. So actually when a test case finds an error, the test case should mentally be regarded as a success, because it found an error and we can fix it before it is put through acceptance testing or, worse, into production. Of course the meaning of success is double-faced, because once we fix the code and execute the test case again we should not find the error anymore. The test case then passes successfully, meaning this time the correct result is produced.
So saying “my piece of code is working as expected” is not the ideal mindset, because you can more easily write test cases that will demonstrate just that. That being said, these “positive attitude“ test cases should nevertheless be included in your collection of test cases to assure your code coverage objective.

Proving that your program contains no errors is very difficult, because the psychological burden to achieve this is too big for a programmer. It is better to uncover as many errors as possible. By increasing the number of test cases you have, you’re more likely to find more errors. The more errors you find, and of course fix, the more confident you will be in your code. This will reduce the stress you’re feeling.



Typical Software defects and their causes



This is a huge area, ranging from wrong error messages to an inability to free unused memory or missing database fields because of incomplete specifications. I will narrow the list down to some typical coding errors and their sources.

  • Error against numeric boundaries
  • Wrong number of loops
  • Wrong values for constants
  • Wrong operation order
  • Overflows
  • Incorrect operator used in comparison
  • Precision loss due to rounding or truncation
  • Unhandled case in logic
  • Failure to initialize a loop control variable
  • Failure to (re)initialize
  • Assuming one event always finishes before another
  • Required resource not available
  • Doesn't free unused memory
  • Device unavailable
  • Unexpected end of file
  • Wrong data types used

Test strategies

So how should we go about finding all errors in a program? We could try it by using every possible input condition as a test case. This would quickly lead to huge numbers of test cases with usually negligible added value. So what is our test-case design strategy?

One important testing strategy is black-box testing. Your goal is to be completely unconcerned about the internal behaviour and structure of the program. Instead, concentrate on finding circumstances in which the program does not behave according to its specifications. In this approach, test cases are derived solely from the specifications (i.e., without taking advantage of knowledge of the internal structure of the program).

Another testing strategy, white-box testing, forces you to examine the internal structure of the program. This strategy derives test data from an examination of the program’s logic.
The following paragraphs sum up several techniques you can use to find appropriate test cases. This is not a cook-book-like description; it merely tries to serve as a reminder when you are faced with coming up with test cases.

Neither is it a mutually exclusive set of techniques. As a matter of fact, the recommended procedure for, for example, unit testing is to develop test cases using the black-box methods and then develop supplementary test cases as necessary with white-box methods.


Sources for test case specification

Depending on the type of test, some sources are more appropriate than others.


Equivalence classes

Exhaustive input testing of a program is impossible. Hence, in testing a program, you are limited to trying a small subset of all possible inputs. Of course, then, you want to select the right subset, the subset with the highest probability of finding the most errors. Instead of randomly selecting a subset, you can structure your effort with a technique called equivalence partitioning. The equivalence classes are identified by taking each input condition (a sentence or phrase in the specification, or a parameter of a method) and partitioning it into two or more groups. An equivalence class consists of a set of data that is treated the same by the module or that should produce the same result. Two types of equivalence classes can be identified: valid equivalence classes represent valid inputs to the program, and invalid equivalence classes represent all other possible states of the condition (i.e., erroneous input values). These equivalence classes will form the basis for your test cases and test case data.

Here is an example.
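Suppose (an invented case) a method accepts an order quantity that, according to its specification, must lie between 1 and 99. That single input condition yields one valid equivalence class (1 <= qty <= 99) and two invalid ones (qty < 1 and qty > 99). One test case per class, say qty = 50, qty = 0 and qty = 100, covers all three.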

Boundary value analysis

Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes. Boundary value testing focuses on the boundaries simply because that is where so many defects hide. For example, using > (greater than) in a condition instead of >= (greater than or equal to). "Below" and "above" are relative terms and depend on the data value's units. So rather than selecting any element in an equivalence class as being representative, boundary-value analysis requires that one or more elements be selected such that each edge of the equivalence class is the subject of a test. Another point of attention is that rather than just focusing on the input conditions (input space), test cases are also derived by considering the result space (output equivalence classes). For example, when determining the percentage of discount for a particular order, you could try to come up with a test case that would possibly generate a discount greater than the foreseen maximum discount. These boundary values will be the test case data values.

Here is an example.
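Sticking to the invented quantity field with valid range 1 to 99: boundary-value analysis picks the values on and around the edges, so qty = 0, 1, 2 and qty = 98, 99, 100, rather than an arbitrary qty = 50. On the output side, if the specification caps the discount at 50%, try to construct an order that would push the computed discount above 50%.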

Decision table testing

In equivalence class and boundary value testing we considered the testing of individual variables that took on values within specified ranges. Sometimes the value of one variable constrains the acceptable values of another. In this case, certain defects cannot be discovered by testing the variables individually. This technique explores the combinations of input circumstances to come up with test cases. The testing of input combinations is not a simple task because the number of combinations can grow quickly. If you have no systematic way of selecting a subset of the combinations of input conditions, you’ll probably select an arbitrary subset, which could lead to an ineffective test.

Decision tables represent actions that have to be taken based on a set of conditions. These are actually the rules. Each rule defines a unique combination of conditions that results in the execution ("firing") of the actions associated with that rule (if this and that are true, then do this). If the system under test has complex business rules, presenting the system behaviour in this complete and compact form enables you to create test cases directly from the decision table. Create at least one test case for each rule. If the rule's conditions are binary, a single test for each combination is probably sufficient. On the other hand, if a condition is a range of values, consider testing at both the low and high end of the range. In this way we merge the ideas of boundary value testing with decision table testing.



Consider the following pseudo-code.
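The snippet below is a hypothetical reconstruction: the discount percentages are invented, but the quantity breakpoints (10 and 15) and kilometre breakpoints (50 and 100) match the boundary values used further down.

If qty < 10 Then
    If km < 50 Then
        discount = 0
    ElseIf km < 100 Then
        discount = 2
    Else
        discount = 4
    End If
ElseIf qty < 15 Then
    If km < 50 Then
        discount = 5
    ElseIf km < 100 Then
        discount = 7
    Else
        discount = 9
    End If
Else
    If km < 50 Then
        discount = 10
    ElseIf km < 100 Then
        discount = 12
    Else
        discount = 15
    End If
End If

That gives nine rules (three quantity ranges times three distance ranges); testing each range at its low and high end yields the eighteen data sets mentioned below.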

This exercise shows us that we need at least 18 different test case data sets to test all the logic.
You can merge this with explicit boundary checks:
Qty = 0, Qty = 9, Qty = 10, Qty = 11, Qty = 14, Qty = 15, Qty = 16
Km = 0, Km = 49, Km = 50, Km = 51, Km = 99, Km = 100, Km = 101
Of course combinations between them are also possible. You could even try a test with a negative quantity to see whether the code can handle that situation.

This all depends on whether you can look into the code to see how the decision tree is actually implemented. Some situations will result in the same outcome, so you could decide to collapse the table. But be careful with collapsed tables: while this is an excellent idea to make the table more comprehensible, it is dangerous from a testing standpoint. It is always possible that the table was collapsed incorrectly or the code was written improperly. Try to use, if possible, the un-collapsed table as the basis for your test case design.


Control flow testing

Modules of code are converted to graphs, the paths through the graphs are analyzed, and test cases are created from that analysis. This technique requires a lot of preparation. Maybe useful for safety-critical applications.


Depth of test specification


Some factors that come into play in determining the appropriate amount of testing effort (test cases, time, depth, etc.):
  • Cost of failure
  • Testability of the design
  • Life cycle of the application: new or maintenance

Programming language

Sometimes the data types available in your programming language can help you reduce the number of test cases. Suppose you have a method in VB.NET that accepts an integer value that, according to the specs, must be between 0 and 100. In the case of the boundary values we can actually reduce the number of test cases, because in VB.NET we have the possibility to use an unsigned integer data type (UInteger). It simply cannot hold negative numbers, so consequently we don’t have to specify a test that passes a negative number to the method. Enumerations can also help.
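A sketch of the idea (the method name is invented):

' The UInteger parameter type itself rules out negative input,
' so no test case with a negative number is needed; only the upper bound is left to check.
Public Sub SetPercentage(ByVal percentage As UInteger)
    If percentage > 100UI Then
        Throw New ArgumentOutOfRangeException("percentage", "Must be between 0 and 100.")
    End If
    ' ...
End Sub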

Design-by-contract

In the design-by-contract approach, modules (called "methods" in the object-oriented paradigm, but "module" is a more generic term) are defined in terms of pre-conditions and post-conditions. Post-conditions define what a module promises to do (compute a value, open a file, print a report, update a database record, change the state of the system, etc.). Pre-conditions define what that module requires so that it can meet its post-conditions. Moreover, the code must use a mechanism to assert that these pre- and post-conditions are met. Usually these assertions are placed inside the method. In this case the module is designed to accept any input. If the normal pre-conditions are met, the module will achieve its normal post-conditions. If the normal pre-conditions are not met, the module will notify the caller by returning an error code or throwing an exception (depending on the programming language used). This notification is actually another one of the module's post-conditions.

Based on this approach we could define defensive testing: an approach that tests under both normal and abnormal pre-conditions. On the other hand, if we trust the assertion mechanism, we could go ahead and create test cases only for the situations in which the pre-conditions are met.
There are programming environments where the specification of pre- and post-conditions is integrated directly in the language or its execution environment (Eiffel is the classic example).
Should we create test cases for invalid input?
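A minimal sketch of the idea in VB.NET (the method is hypothetical): the pre-conditions are asserted at the top of the method, and a violation is reported to the caller through an exception, which is itself one of the module's post-conditions.

' Post-condition: returns the average of the values.
' Pre-conditions: the array is not Nothing and not empty.
Public Function Average(ByVal values() As Double) As Double
    ' Assert the pre-conditions; a violation is reported to the
    ' caller by throwing an exception (another post-condition).
    If values Is Nothing Then
        Throw New ArgumentNullException("values")
    End If
    If values.Length = 0 Then
        Throw New ArgumentException("values must not be empty", "values")
    End If

    Dim sum As Double = 0
    For Each v As Double In values
        sum += v
    Next
    Return sum / values.Length
End Function

With this style, "invalid input" test cases become tests that expect the documented exception; defensive testing exercises the Nothing and empty cases as well as the normal one, while trusting the assertions means testing only the normal case.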

Difficult test conditions

Often modules have code that is executed only in exceptional circumstances: low memory, a full disk, unreadable files, lost connections, database locking, high network load, etc. You may find it difficult or even impossible to simulate these circumstances, and thus code that deals with these problems will remain untested. Still, you would like to know how your code, and more specifically your error-handling code, reacts to these situations. It is recommended to write such test cases down anyway and remark that the situation has not been tested.

There are tools on the market that can help you simulate some of these conditions and check how your program reacts, for example DevPartner Fault Simulator 2.0. Using DevPartner Fault Simulator you could observe how the application handles the failures generated by a simulated network failure, instead of physically disconnecting a cable from the network.
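If no such tool is available, a common workaround is to hide the fragile resource behind an interface and substitute an implementation that fails on demand. A minimal hand-rolled sketch (all names hypothetical):

' The production code depends on this abstraction rather than on the
' file system directly, so a failing implementation can be swapped in.
Public Interface IReportStore
    Sub Save(ByVal report As String)
End Interface

' Test stub that simulates a full disk, so the error-handling code
' around Save can be exercised without physically filling a disk.
Public Class FullDiskReportStoreStub
    Implements IReportStore

    Public Sub Save(ByVal report As String) Implements IReportStore.Save
        Throw New System.IO.IOException("Simulated failure: disk full")
    End Sub
End Class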

Code coverage analysis

While the previous sections described techniques to specify test cases before or during programming, a code coverage analysis tool investigates your code during the execution of the current set of test cases (assuming you did some upfront test case design) and reports whether all code regions have been covered. It can point you to code paths that haven't been exercised by the current set of test cases and gives you the opportunity to specify new test cases that will execute those parts of your code. In short, it can help you eliminate gaps in your test suite.

It is important to note that 100% code coverage does not mean that all errors have been removed from your piece of code. It merely states that all code paths have been gone through with your current test suite. You can still miss errors simply because you've used a wrong comparison operator (> instead of >=). So the previous techniques remain valuable; code coverage gives you additional information that you can use. It is just one of the methods to maximize your ability to find the errors in your code.

Microsoft Research is currently working on a tool, Pex, that augments code coverage analysis with the creation of missing test cases. Pex performs a systematic program analysis: it records detailed execution traces of existing test cases, learns the program behaviour from those traces, and uses a constraint solver to produce new test cases with different behaviour. The result is a test suite with maximal code coverage.
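A tiny sketch (with a hypothetical method) of why 100% coverage is not enough: the test below executes every line of IsSenior, yet the off-by-one bug (> instead of >=) goes unnoticed because neither input sits on the boundary.

' Spec: a person aged 65 or older gets the senior discount.
' Bug: ">" should be ">=", so exactly 65 is wrongly rejected.
Public Function IsSenior(ByVal age As Integer) As Boolean
    Return age > 65
End Function

' These two asserts already give 100% line coverage of IsSenior,
' but only a boundary-value test with age = 65 would expose the bug.
Public Sub TestIsSenior()
    System.Diagnostics.Debug.Assert(IsSenior(70) = True)
    System.Diagnostics.Debug.Assert(IsSenior(30) = False)
End Sub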



Concluding remarks

Writing effective test cases is hard work, and it is not as easy as it seems to maximize error detection.
First of all, specifying test cases requires a "destructive" mindset: you should find pleasure in finding errors. This is the right attitude for finding as many errors as possible during your programming effort. It implies accepting that we write imperfect code, which is not always easy for some people, but it is nevertheless a good starting point for specifying test case data.

I summed up several techniques that can help you find the right set of test cases to detect as many errors as possible. The most important advice is to use the different methods in combination, because only when you look at the problem from different angles can you reasonably find the test cases that will filter out the bulk of the error conditions within the restrictions of budget and resources.

As you gradually become more familiar with the techniques, you will not only come up with effective test cases but also start thinking about testing while you're coding, which will make you consider how to avoid error conditions in your code. Mind you, you will still have to write out the test case even if you know your code can handle the situation.

Looking for test cases may also give you more insight into the specifications that led up to the code you're investigating, making it clearer what you're actually programming and how it fits into the whole picture. You may detect things that were omitted from the specification, either by accident or because the writer felt they were obvious.

Continuing this line of thinking, test-driven development puts testing completely in the foreground: you actually write tests before writing any code. In that respect, testing becomes part of your (micro) design process. But that's for another post.


Wednesday, 3 October 2007

Continuous Integration

Early feedback for quality assurance

Continuous Integration (CI) is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. This approach can lead to reduced integration problems and allows a team to develop cohesive software more rapidly.

Build automation

Just building your application is relatively easy using an IDE: decide whether you want a debug or a release build, and then select Build Solution or Rebuild Solution from the Build menu in, for example, Visual Studio .NET (VS.NET). But the manual approach is not sufficient for a project of any reasonable size. If you're working with other developers, for example, you need to remember to get the latest pieces from your source code control system before building. You might also have unit tests to run, documentation to update, and so on. The problem with attempting to put all the pieces together by hand is that people tend to forget things. This is where an automated build tool such as NAnt or MSBuild can be helpful.
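As a minimal sketch of what such an automated build script could look like with MSBuild (the solution and test assembly names are made up):

<!-- build.proj: run with "msbuild build.proj" from the command line -->
<Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Compile the whole solution in release configuration -->
  <Target Name="Build">
    <MSBuild Projects="MySolution.sln" Properties="Configuration=Release" />
  </Target>

  <!-- Run the unit tests after a successful build -->
  <Target Name="Test" DependsOnTargets="Build">
    <Exec Command="nunit-console.exe MyProject.Tests.dll" />
  </Target>
</Project>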

Continuous integration

If daily builds are good, then even more frequent builds are better. After all, the sooner you spot a problem in a build, the sooner you can get to work fixing it. The logical extension of this line of reasoning is to rebuild your application every time anyone checks a change into your source code control system. This is where continuous integration (CI) steps in.

Continuous integration, in that perspective, is a technique to ensure the quality of a software product during development, where all the assets necessary to build the application are under version control.

To integrate CI in your development process you make use of a CI application. This application monitors your source code control repository for check-ins. When it detects an updated file, it gets the current code, builds the project, and looks for failures. With instant feedback via e-mail or a web page, it can help you be sure that no one has accidentally checked in untested code.
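CruiseControl.NET is one such CI application. Its monitoring and build steps are declared in a configuration file; a minimal sketch, with all names, URLs and paths hypothetical:

<cruisecontrol>
  <project name="MyProject">
    <!-- Poll the source control repository every 60 seconds -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://server/svn/MyProject/trunk</trunkUrl>
      <workingDirectory>C:\builds\MyProject</workingDirectory>
    </sourcecontrol>
    <!-- Build the solution on every detected change -->
    <tasks>
      <msbuild>
        <projectFile>MySolution.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
    <!-- Publish the results so the team gets feedback -->
    <publishers>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>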

Introducing CI is not only a technology implementation; it also requires soft skills. CI demands a certain discipline from the developers and the project/team leaders before it yields its benefits. Developers may not check in untested code, and all team members must interpret the reports generated by the CI application and act on them when required (build failure, code complexity, etc.).

The CI process consists of the following steps:

  • SourceControl system monitoring
  • Building
  • Unit testing
  • Coverage Analysis
  • Static Code Analysis
  • Documentation Generation

These steps (some are optional) are executed by the CI build server every time a modification is detected in the source code. Each of these steps produces data, which is stored in a repository. A project portal is used to present these data items in various reports.

All artifacts needed to build a working software product should be put into a source control system like Visual SourceSafe. The CI process that runs on the build server continuously scans the source control system for changes and starts the build process as soon as a change is detected. The build server gets the latest version of the code base and builds the software in one or more configurations. If the build fails, the build process is aborted and the build is considered broken. A broken build is a serious situation within CI and should be remedied as soon as possible.

If the build succeeds, the other CI steps will be performed and eventually the build output will be copied to a drop location.

Unit testing and Code coverage analysis


The CI environment runs a set of specified unit tests on the build server each time new functionality is integrated. During these unit tests, code coverage is measured and registered by the build server. Code coverage provides developers and testers with information about what code is touched when a test is run. The results of the unit tests and the coverage metrics are published on the project portal.


Static Code Analysis and Code metrics

Static code analysis checks code for conformance to design guidelines and/or naming guidelines. The CI environment can perform static code analysis on the build server every time new functionality has been checked in to source control. A software metric is a measure of some property of a piece of software or its specifications. Tools can gather code metrics from the source code; these metrics can be used to track aspects of your code that influence software quality attributes such as maintainability and readability.


Documentation Generation

The CI environment can automatically generate documentation during the build process. This is usually technical documentation that describes the internal structure of the application, such as classes and data structures. Examples of such tools are NDoc and Doxygen.
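Such tools typically work from the XML documentation comments that the compiler can extract from the source; a small sketch in VB.NET (the method shown is just an illustration):

''' <summary>
''' Calculates the discount percentage for an order.
''' </summary>
''' <param name="qty">Number of items ordered.</param>
''' <param name="km">Delivery distance in kilometres.</param>
''' <returns>The discount percentage.</returns>
Public Function CalculateDiscount(ByVal qty As Integer, ByVal km As Integer) As Integer
    ' ...
End Function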

CI output

The CI environment will publish all the outcomes of the different workflows on a portal and can inform interested parties about the build outcome.

Conclusion

So the main purpose of CI is the early detection of integration issues; furthermore, it is a quality-improvement measure.

  • CI can help you measure cyclomatic complexity, code duplication, dependencies and coding standards so that developers can proactively refactor code before a defect is introduced.
  • If a defect is introduced into a code base, CI can provide feedback soon after, when defects are less complex and less expensive to fix.
  • CI provides quick feedback, via regression tests, on software that was previously working and is adversely affected by a new change.
But you need to understand that CI is more than a technology. It requires discipline from the developers and involvement of project management. Running CI without doing something with the information will not improve the quality of your software application: somebody has to watch the builds and care when they are broken, and the developers must fix any broken CI build.


Software factory

Reuse of people, process and technology

A software factory is an approach to building applications with more or less the same structural characteristics, through a more or less fixed combination of skilled people in the various software development disciplines, processes and technology. So you could say that a software factory is a form of reuse: not only of code, but also of processes and people.

With a high degree of "industrialization", a software factory seeks maximum efficiency and productivity through:
  • Process standardization for project management and software development
  • Reuse of frameworks
  • Automation & tools
  • Skilled professionals
With these strategies it aims to reduce development cycles ("time to market") and the total cost of the project while keeping a high degree of quality throughout the complete cycle.

So if you have a lot of applications that share the same structural characteristics, such as the development language, code organisation and usage of third-party libraries, you can think about creating such a generic software creation platform. It will be reused for all the applications concerned. Moreover, you will be encouraged to let your future applications share as many characteristics as possible: the more standardized, the more benefit you'll gain from reuse.

Usually software factories are organized around a broad technology platform like .NET, Java, or SAP.


Factory metaphor limitations


As with all metaphors, you must not push it too far or you'll end up creating false expectations. Creating software is not the same as making cars or radios, where predefined parts are assembled while moving from one point in the factory to another until the product is completed, checked and shipped. Software is not a mass-production item; it is not even tangible. Moreover, software is never finished: it is adjusted, expanded and reshaped throughout its lifetime. Even during "assembly" the shape of the "product" is not set in stone.

Microsoft definition of Software factories


Microsoft also uses the term software factories, but not to describe the assembly-line idea. Microsoft uses it to name several development accelerator packages that free the developer from creating everything from scratch. The packages are materialized in the form of Visual Studio wizards, templates, project structures, components and frameworks, but also best practices in the form of white papers.

The developer working with such a software factory in Visual Studio sees the structure of the project, so that he knows exactly which things to complete, which deliverables he has to work on, and which parts are fixed because they represent the architecture of the application.

So this definition does not emphasize people and processes that much, but rather automation (technology).

Making unit tests

What approach should you take in setting up unit tests?

  • A more pragmatic approach where tests are API-usage oriented: how you expect other parts of your application to interact with your unit of code.
  • Or should you strive for a formal way of working, where you systematically check each method for all boundary conditions and error paths?
  • Or should you use both?
  • Which approach under which conditions?
  • Should you take a test-first approach or a test-after approach? Or should you write your tests as you go?

Take a look at this sample method:

' Method returns True if the person is over age 18, otherwise False
Public Function AgeCheck(ByVal dob As Date) As Boolean
    ' Some processing code here...
End Function


The premise of this code is fairly straightforward: it accepts a date of birth and validates the age. At first glance, you may think you simply need to run one test with a date of birth more than 18 years ago and one with a date of birth less than 18 years ago. That works great for 80% of the dates someone is likely to provide, but it only scratches the surface of what could be tested here. The following are some of the tests you may also consider running for the AgeCheck method:

• Check a date from exactly 18 years ago to the day.
• Check a date from further back in time than 18 years.
• Check a date from fewer than 18 years ago.
• Check for the results based on the very minimum date that can be entered.
• Check for the results based on the very maximum date that can be entered.
• Check for an invalid date format being passed to the method.
• Check two year dates (such as 05/05/49) to determine what century the two-year format is being treated as.
• Check the time (if that matters) of the date.

As you can see, based on just a simple method declaration, you can determine at least 5 and maybe as many as 10 valid unit tests (not all of them may be required).
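As a sketch, the first three bullets could translate into NUnit tests along these lines (assuming AgeCheck is reachable from the test code, e.g. declared in a module; whether "exactly 18 today" should return True is precisely the kind of question the specification must answer):

Imports NUnit.Framework

<TestFixture()> _
Public Class AgeCheckTests

    <Test()> _
    Public Sub ExactlyEighteenYearsAgoToday()
        ' Boundary case: the specification must say whether this is True.
        Assert.IsTrue(AgeCheck(Date.Today.AddYears(-18)))
    End Sub

    <Test()> _
    Public Sub MoreThanEighteenYearsAgo()
        Assert.IsTrue(AgeCheck(New Date(1950, 6, 15)))
    End Sub

    <Test()> _
    Public Sub FewerThanEighteenYearsAgo()
        Assert.IsFalse(AgeCheck(Date.Today.AddYears(-1)))
    End Sub
End Class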


The following areas cover the majority of the situations you will run into when performing unit tests. Any approach should take into account tests that cover these areas of potential failure. Of course, this is by no means a comprehensive set of rules for unit testing; there will undoubtedly be others related to specific applications.

Boundary values

Test the minimum and maximum values allowed for a given parameter. An example is dealing with strings: what happens if you are passed an empty string? Is it valid? On the other hand, what if someone passes a string of 100,000 characters and you try to convert the length of the string to a Short?

Equality

Test a value that you are looking for. In the case of the age check example used here, what if the date passed also includes the time? An equality check would always fail in that case, while in every other case the results would come out correctly.

Format

Never trust data passed to your application. This is particularly applicable to boundary methods, where malicious hackers may be looking to crack your system. But poorly formatted data from any source can cause problems if you do not check it.

Culture

Various tests need to be performed to ensure that culture information is taken into account. In general, you can skip these tests if the application is written for a single culture only (although even that is not a good indicator, because you can write a test for a U.S. English system and have it installed by someone in the U.K.). Otherwise, strings, dates, currency and other values that can be localized should always be checked. For example, if you are expecting a date in the U.S. format but get it in the European format, some values will work and some will not: the date 02/01/2000 is February 1, 2000 in the U.S. format and January 2, 2000 in the European format. While a test with this date would pass (most of the time), changing the U.S. date to 01/23/2000 would make it fail in the European format.
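A small sketch of the 02/01/2000 example: the same string parses to two different dates depending on the culture, which is why culture-sensitive tests should pin the expected format down explicitly.

Imports System.Globalization

Public Sub ShowCultureDifference()
    Dim s As String = "02/01/2000"
    ' en-US is month/day/year: this yields February 1, 2000
    Dim usDate As Date = Date.Parse(s, New CultureInfo("en-US"))
    ' en-GB is day/month/year: the same string yields January 2, 2000
    Dim ukDate As Date = Date.Parse(s, New CultureInfo("en-GB"))
    Console.WriteLine(usDate.ToString("MMMM d, yyyy", CultureInfo.InvariantCulture))
    Console.WriteLine(ukDate.ToString("MMMM d, yyyy", CultureInfo.InvariantCulture))
End Sub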

Exception paths

Make sure that the application handles exceptions, both expected and unexpected, in the correct manner and in a secure fashion. Frequently, developers test the normal path and forget to test the exception path. VSTS and NUnit allow you to annotate test methods with an expected exception. While testing the exception paths you expect that an exception will be thrown, and in that case the test succeeds.
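In NUnit that annotation looks like the sketch below (CustomerRepository is a hypothetical class under test): the test passes only if the expected exception is actually thrown.

<Test(), ExpectedException(GetType(ArgumentNullException))> _
Public Sub Save_WithNothing_ThrowsArgumentNullException()
    ' Exercising the exception path: passing Nothing must raise
    ' ArgumentNullException, so the test succeeds exactly when it does.
    Dim repo As New CustomerRepository()
    repo.Save(Nothing)
End Sub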

Coverage

Not everything needs to, or can, be tested. In most cases you want to test the most used paths of the application for the most common scenarios. An example of wasted time would be writing tests for simple property assignments. While that may be needed in specific situations, you cannot possibly write every test for every scenario; it would take too long even with automated tools!

Test both expected behavior (normal workflows) and error conditions (exceptions and invalid operations). This often means that you will have multiple unit tests for the same method, but remember that developers will always find ways to call your objects that you did not intend. Expect the unexpected, code defensively, and test to ensure that your code reacts appropriately. Don't write only "clean" tests; make sure your code responds to any and all scenarios, including when the unexpected happens.

In order to measure the effectiveness of your tests, you need to know which parts of your code are exercised by the unit tests. Code coverage is the concept of observing which lines of code are executed during the unit test runs. You can easily identify regions of your code that are not being executed and create tests to verify that code. This will improve your ability to find problems before the users of your system do.

But that's for another post.