Monday, 22 October 2007

RDB Transactions [HYC00]

Today I got an error message when I tried to connect to an RDB data server through the ADO.NET ODBC bridge, inside a System.Transactions scope. I'm using the Oracle RDB ODBC driver, version 3.02.

So the following piece of code gives an error:



Using tx As New TransactionScope
    Using con As New Odbc.OdbcConnection
        con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
        con.Open()
        ' do your work here
        tx.Complete()
    End Using
End Using


The following error is thrown: System.Data.Odbc.OdbcException: ERROR [HYC00] [Oracle][ODBC]Optional feature not implemented. In other words, the driver does not support an ODBC feature that the application requested.


But when I change the code slightly, it works fine:



Using con As New Odbc.OdbcConnection
    con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
    con.Open()
    Using tx As New TransactionScope
        ' do your work here
        tx.Complete()
    End Using
End Using


Using an OdbcTransaction object also works fine:


Dim tx As Odbc.OdbcTransaction = Nothing
Dim con As System.Data.Odbc.OdbcConnection = Nothing
Try
    con = New System.Data.Odbc.OdbcConnection
    con.ConnectionString = "Driver={Oracle RDB Driver};server= ......"
    con.Open()
    tx = con.BeginTransaction()
    ' do your work here
    tx.Commit()
Catch ex As Exception
    If tx IsNot Nothing Then tx.Rollback()
    Throw
Finally
    If tx IsNot Nothing Then tx.Dispose()
    If con IsNot Nothing Then con.Dispose()
End Try


So, moral of the story ... does System.Transactions somewhere use OdbcTransaction under the covers?
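My working hypothesis (an assumption on my part, not something I verified in the driver or the ADO.NET sources): when Open() is called while an ambient System.Transactions transaction exists, the ODBC bridge asks the driver to enlist the connection in a distributed transaction, and the RDB driver answers that enlistment request with HYC00. Opening the connection before creating the scope means there is no ambient transaction at Open() time. The sketch below only shows where the ambient transaction is visible; Transaction.Current is the standard System.Transactions API.

Imports System.Transactions

Module AmbientTransactionDemo
    Sub Main()
        ' No scope active: there is no ambient transaction, so an Open()
        ' here has nothing to enlist in.
        Console.WriteLine(Transaction.Current Is Nothing) ' prints True

        Using tx As New TransactionScope
            ' Inside the scope an ambient transaction exists. Opening an
            ' OdbcConnection at this point is (presumably) what triggers the
            ' enlistment request the RDB driver rejects with HYC00.
            Console.WriteLine(Transaction.Current IsNot Nothing) ' prints True
            tx.Complete()
        End Using
    End Sub
End Module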

Any info is welcome!


Regards,

Alexander

Friday, 19 October 2007

VISUG-devmatch 18/10/2007 - Datasets vs. OO

During this evening event, the Belgian Visual Studio User Group (http://www.visug.be/) invited two speakers to present their views on dataset-oriented development and a more object-oriented way of working, respectively.



Kurt Claeys (Ordina) was the advocate for the dataset approach, while Yves Goeleven (Compuware) "preached" the Domain-Driven Design (DDD) way. Both did a good job!



Benefits (What did I learn)


  • Use the dataset approach in applications that need to be built fast.
  • Use the dataset approach if you are more at ease with the relational model than with OO models.
  • Use a dataset to persist information on a PDA. You can use the dataset's XML features for that.
  • Check out the new dataset features in VS2008. You can separate the generated TableAdapters and the dataset types into different projects, so you can make an assembly with dataset types that is shared across several layers and/or tiers. There is also a "manager" that handles multi-table updates.
  • Microsoft continues to invest in datasets (VS2008, LINQ to DataSets).
  • But Microsoft also tries to keep up with the DDD camp (Entity Framework).
  • There is also stuff in between: LINQ to SQL.
  • Domain-Driven Design (DDD) is object-oriented design with emphasis on the problem domain. The domain model is the basis. I should catch up on that (http://www.domaindrivendesign.com/).
  • DDD is difficult. Expect a big learning curve.
  • Don't expose the domain layer directly to the user-interface layer. You should consider other objects to transport the information of an entity (keep DDD for the principal domain objects representing something valuable for the business: customer, order, etc.).
  • Use an ORM to help you bridge the object-relational mismatch.
  • You need involvement from all stakeholders to do DDD, in order to practice the Ubiquitous Language principle.



Concerns (What did I miss)

  • Examples of DDD applications (or parts of them) to see the differences side by side. Kurt prepared a lot of example applications to support his viewpoints.

Best regards,

Alexander

Wednesday, 17 October 2007

Datasets versus custom objects

Here is a list of benefits and concerns for each approach, compiled from various Internet sources and books. If you see other points or disagree with some of them, don't hesitate to drop a line.

Datasets (.NET2.0)

Benefits



  • Disconnected data container out of the box
  • Typed datasets enhance ease of programming compared to "plain" datasets
  • TableAdapters are generated automatically
  • Visual designer in VS.NET to create typed datasets manually or via the Server Explorer
  • The Add New Data Source wizard creates typed datasets from a database schema.
  • Full two-way data-binding capabilities
  • Built-in support for notifications when something changes inside.
  • Search/sort capabilities out of the box (with some help from the DataView class)
  • Works together with data adapters to retrieve and persist data.
  • You can define relationships between tables (referential constraints).
  • You can define constraints on columns (uniqueness).
  • Can hold many kinds of data types (.NET Framework)
  • The DataSet is serializable out of the box (binary too).
  • Integrated XML capabilities
  • Built-in support for optimistic concurrency
  • You can add custom logic in typed dataset partial classes (see the sketch after this list).
  • Using annotations, you can change the names of contained objects to more meaningful names without changing the underlying schema, making code easier for clients to use.
  • Handles NULL values out of the box
  • Lots of third-party controls support datasets
  • Multiple versions of a column value can exist and are available out of the box (row state, GetChanges, Merge)
  • With SqlDataAdapters you can load multiple tables into a dataset at once
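To illustrate that partial-class extension point, here is a minimal sketch. The name CustomersDataSet is hypothetical; in a real project the other half of this class is generated by the dataset designer (the Inherits clause is shown here only so the sketch stands alone; the generated half normally supplies it).

Imports System.Data

Partial Public Class CustomersDataSet
    Inherits DataSet ' normally declared by the designer-generated half

    ' Custom logic lives beside the generated code and survives regeneration.
    Public Function HasPendingChanges() As Boolean
        Return Me.HasChanges(DataRowState.Added Or DataRowState.Modified Or DataRowState.Deleted)
    End Function
End Class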

Concerns



  • Everything is represented as the .NET Object type, so there is a lot of casting/boxing going on.
  • Datasets used in web services are not ideal from an interoperability standpoint.
  • The relational API may be too restrictive (Tables and Relations properties); you need special methods to represent/manipulate business entities.
  • The DataSet is tied directly to the database model. Abstractions are more difficult; you must adhere to thinking in tables and related concepts.
  • Inheritance: your typed dataset must inherit from DataSet, which precludes the use of any other base class.

Custom Collections

Benefits



  • Provide the means to expose data in easy-to-access APIs without forcing every data model to fit the relational model. You can still make a one-to-one mapping with the database, but you can more easily use OO techniques to model your problem domain.
  • Advanced relationships like inheritance are possible.
  • You can add any behaviour that is needed. Custom entities can contain methods that encapsulate (simple) business rules, and custom entity classes can perform simple validation tests in their property accessors to detect invalid business entity data (see the sketch after this list).
  • A custom class can be marked as serializable.
  • Code can be easier to understand/maintain.
  • LINQ to SQL and the Entity Framework are upcoming features in future versions of .NET, so MS recognizes the benefits of this way of working (or is it under market pressure? :)
  • Using custom classes makes for easier unit testing.
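To make the validation point concrete, here is a minimal sketch of such an entity. The Customer class and its rule are illustrative names of my own, not something from the sources.

<Serializable()> _
Public Class Customer
    Private _name As String = ""

    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            ' A simple business rule checked in the property accessor:
            ' invalid entity data is detected the moment it is assigned.
            If value Is Nothing OrElse value.Trim().Length = 0 Then
                Throw New ArgumentException("A customer must have a name.")
            End If
            _name = value
        End Set
    End Property
End Class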


Concerns



  • The programming effort can be bigger than in an all-dataset scenario.
  • You must implement interfaces in order to provide effective containment and data-binding capabilities.
  • Mapping custom collections and entities to a database can also be a complicated process that can require a significant amount of code (hence the need for a code-generation/ORM tool).
  • To support optimistic concurrency, timestamp columns must be defined in the database and included as part of the instance data.
  • Support for multiple versions of data state within an entity must be coded by hand.
  • No searching or sorting of data out of the box; you must define your own mechanism to support searching and sorting of entities (as sketched below).
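For the searching/sorting concern, the .NET 2.0 generic collections at least give you hooks to plug your own mechanism into. A sketch, reusing the hypothetical Customer entity from above:

Imports System.Collections.Generic

Public Class CustomerComparisons
    ' A Comparison(Of Customer) target for List(Of T).Sort.
    Public Shared Function ByName(ByVal x As Customer, ByVal y As Customer) As Integer
        Return String.Compare(x.Name, y.Name)
    End Function
End Class

' Usage:
'   Dim customers As New List(Of Customer)
'   customers.Sort(AddressOf CustomerComparisons.ByName)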


Monday, 15 October 2007

SSIS and RDB

Hi,

Facing some problems with connectivity towards RDB, we went looking for solutions.

Problems

  • Schema information did not appear in the SSIS wizards/tools. For example, character columns were represented as nvarchar(0).
  • Precision problems with numeric columns

Alternatives

After scanning the Internet for solutions, we came up with the following alternatives:
  • Buy an OLE DB driver for RDB (Attunity)
  • Emulate RDB as an Oracle database and use the OLE DB driver for Oracle
  • Use the new ODBC driver for RDB (3.0) from Oracle

Solution

Fortunately the third option worked for us (it's free!).

Check out
http://www.oracle.com/technology/software/products/rdbodbc/index.html

Regards,

Alexander

Friday, 12 October 2007

SAI conference : KBC .NET factory

KBC .NET software factory

Yesterday (11/10/2007) I attended a conference organized by SAI (www.sai.be) about agile development and a .NET software factory as used at the KBC group (www.kbc.be), a banking and insurance group.

The presenters (Peter Bauwens, Jan Laureys, Kim Verlot) did a very fine job explaining and sharing their experiences with setting up a software delivery & maintenance center for .NET applications with the KBC .NET software factory.

I'll try to sum up some points in a benefits & concerns style, meaning the things I've learned and the things I wish I understood better.

Benefits

  • Software factories (in the technology sense of the word) alone don't cut it. There are also a process part and an organization (people) part that are equally, if not more, important than the technology at hand.
  • Pragmatism is key. They were not afraid to mix more "classical" engineering approaches into their agile methodology. Jan's key phrase was: being agile in an agile way. For example, Scrum was extended with an envisioning period and a kind of requirements-gathering period, and after the Scrum sprints a phase for stabilization and transition was included.
  • Visual Studio Team Foundation Server was a big enabler for this way of working.
  • Only allow a fixed application architecture with some variability (WinForms, ASP.NET, batch).
  • Always question your factory: it is never finished. Incorporate the feedback of the developers.
  • Only work with people who believe in this way of working, because the technology is not the biggest hurdle (there are plenty of training possibilities); it's the process that demands a particular kind of mentality.

Concerns

  • Having the authority to impose certain things (process, technology) requires management involvement from day one. I wish I knew how to make a business case for investing in a software factory and a software delivery & maintenance center.
  • While having a very flat project structure (project administrator, lead developer, developers) makes communication easy, I nevertheless found the "burden" and responsibility they put on the lead developer very big. He combines, in various degrees, technology, process and domain knowledge. While writing user stories and talking (and thinking) with the "business", he was also responsible for keeping the process on track and preferably knew the technology as well. I did not hear anything about a system analyst, functional analyst or business analyst being part of the team writing the use cases. I wish I knew how I could manage that.
  • Project assignments go through a process of determining their criticality before being assigned to the development areas within the KBC group (low-level-SLA and high-level-SLA applications). Until now the delivery centers have worked on LOLA apps. I wish I knew how I could organise a delivery center for HILA apps with .NET as the major technology platform.
  • The technology pillar changes at a relatively fast pace. When they started in 2005/2006, some technology from Microsoft wasn't mature enough (Microsoft software factories). I wish I knew whether it would be a safe bet now to go down that path to build our own software factory / delivery center.

Once again, congratulations to the presenters.

Best regards,

Alexander Nowak

Sunday, 7 October 2007

Unit tests (Part 2)

Working definition

Unit testing is a procedure used to validate that “units” of source code are working properly. More technically, a unit is the smallest testable part of an application. In an object-oriented program, the smallest unit is a class or, more interestingly, its operations.

Unit testing is done by the developers, not by end-users or a separate QA team. A unit test is a piece of code that calls the unit under test in a particular way in order to find errors during development. Each unit test verifies that the unit returns the predicted answer.

Automation is key to the success of unit testing because it gives developers a way to create structured unit tests that can be executed easily and repeatedly, and whose outcome can be verified through assertions in the test code.
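As a minimal example of what such a structured, automated test looks like, here is a sketch in NUnit 2.x syntax (the Calculator class is a made-up unit under test):

Imports NUnit.Framework

Public Class Calculator
    Public Function Add(ByVal a As Integer, ByVal b As Integer) As Integer
        Return a + b
    End Function
End Class

<TestFixture()> _
Public Class CalculatorTests
    <Test()> _
    Public Sub Add_TwoPlusTwo_ReturnsFour()
        ' Call the unit in a particular way and assert the predicted answer.
        Dim calc As New Calculator
        Assert.AreEqual(4, calc.Add(2, 2))
    End Sub
End Class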

Benefits

The goal of unit testing is to show that the individual parts of the code are working correctly. In doing unit testing you gain several benefits:


  • Raises the QA mindset of the developer
  • Raises confidence in the quality of the code
  • Tests can be run repeatedly
  • Early-warning system for programming and/or design errors or flaws
  • Living documentation of the API of the “units”
  • An aid for regression testing
  • Code refactoring is encouraged


When writing unit tests you should think like a tester, not just a developer. The time you take to design your unit tests will help reduce the time spent resolving defects later. Focus on the details of your objects: How is data transferred between them? Who consumes them? How easily can you break the object? What happens if I "do this"?

When a developer is programming some functionality, he usually writes, in some way or another, some testing code to see whether his implementation does what it is supposed to do and doesn't produce errors. The developer then moves on to program the next piece of functionality, and again assures himself that it runs as expected. Maybe the developer moves on to other functionality without coming back to test previous pieces of code until the release date. With automated unit testing, the developer has a method by which the tests can be run repeatedly as the code is developed.

When you have a suite of unit tests that effectively tests your code, you can be confident that your code has a low likelihood of errors, and confidence in the quality of the code increases.

The act of writing tests often uncovers design or implementation problems. The unit tests serve as the first users of your system and will frequently identify design issues or functionality that is lacking.

Once a unit test is written, it serves as a form of documentation for the use of the target system. Other developers can look to unit tests to see example calls into various classes and members.

Perhaps one of the most important benefits is that a well-written test suite provides the original developer with the freedom to pass the system off to other developers for maintenance and further enhancement. Should those developers introduce a bug in the original functionality, there is a strong likelihood that those unit tests will detect that failure and help diagnose the issue. Meanwhile, the original developer can focus on current tasks.

It takes the typical developer time and practice to become comfortable with unit testing. Once a developer has been saved enough time by unit tests, he or she will latch on to them as an indispensable part of the development process.

Unit testing does require more explicit coding, but this cost will be recovered, and typically exceeded, when you spend much less time debugging your application. In addition, some of this cost is typically already hidden in the form of test console- or Windows-based applications. Unlike these informal testing applications, which are frequently discarded after initial verification, unit tests become a permanent part of the project, run each time a change is made to help ensure that the system still functions as expected. Tests are stored in source control very near to the code they verify and are maintained along with the code under test, making it easier to keep them synchronized.

Unit tests are an essential element of regression testing. Regression testing involves retesting a piece of software after new features have been added to make sure that new bugs are not introduced. Regression testing also provides an essential quality check when you introduce bug fixes in your product.

It is difficult to overstate the importance of comprehensive unit test suites. They enable a developer to hand off a system to other developers with confidence that any changes they make should not introduce undetected side effects. However, because unit testing only provides one view of a system's behavior, no amount of unit testing should ever replace integration, acceptance, and load testing.

Along the same lines, refactoring code (changing code without changing its intent) to improve the design in terms of maintainability, performance, etc. is also encouraged by having a battery of unit tests: you're more confident when you refactor some code because you can easily check that the change was done correctly.

Automated unit testing should help reduce the amount of time you spend in the debugger. However, if test results and code coverage do not reveal why your test is failing, don't be afraid to debug your unit tests. In Visual Studio 2005 Team System, developers can debug their unit-testing assemblies using the Debug Selected tests option in Test Manager. You can also debug NUnit tests.


Concerns

Unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors (except when you consider a component or subsystem as a unit), performance problems or any other system-wide issues. In addition, it may not be easy to anticipate all the special cases of input the unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.

It is unrealistic to test all possible input combinations for any non-trivial piece of software. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors. Writing unit tests for every single class to test every aspect of every function is a tall order. Even with a large number of unit tests, is complete testing possible (or desirable)?

What if there are bugs in the unit tests? Where there is code, there will be bugs, and you'll be getting false positives.

Not all code can be unit tested (at least not easily). It could, of course, be argued that in this case the code should be simplified until it can be unit tested.

Passing a unit test only means that the test itself did not catch any problem: all the assertions held. This doesn't mean that there are no bugs; the success depends on the quality of the unit test as well.


Automation

Unit testing in spirit has existed since the beginning of programming. Every environment, development shop or even individual programmer had (or has) their own way of conducting these kinds of tests, for example creating a form with 20-odd buttons calling various parts of the code base.
This "form with buttons where each button exercises a piece of the code to be tested" approach works, but you need a manual operation for each single test: clicking the button. The main problem is that you need the discipline to click every button after every change. Secondly, it is very easy to drag a button onto a form, hook up an event handler and leave the name of the button at its default; if you revisit the form after some time, chances are you won't remember the meaning behind the button, and it will be even more difficult for someone else to interpret the tests behind all the buttons. Thirdly, to know whether a test succeeded, you had to make assertions in your code or check visually, and you also needed a way to give feedback to the developer (e.g. the infamous MsgBox).

Third-party vendors of course offer solutions to these testing problems, but they are usually very expensive and introduce their own proprietary scripting languages to conduct tests.

The xUnit framework was introduced as a core concept of eXtreme Programming in 1998. It introduced an efficient mechanism to help developers add structured, efficient, automated unit testing to their normal development activities. This toolkit lets developers write tests in the same language and IDE they use to develop the application. The xUnit framework defines the concept of a "test runner": the application responsible for (a) executing unit tests and (b) reporting on the test results. Automated unit tests are based on "assertions", which can be defined as "the truth, or what you believe to be the truth". From a logic standpoint, consider the statement: "when I do {x}, I expect {y} as a result".

The atomic parts of a unit test fixture are the test methods that make assertions about the behavior of some (or all) public members of the unit you're testing. A test suite is a library assembly that contains one or more such test fixtures, along with optional setup and teardown methods. You normally run this assembly from a GUI app, but these unit tests can also run from a command line and be activated from other applications (e.g. the build process).
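In NUnit terms, for example, a fixture with optional setup and teardown methods looks like the sketch below (the repository name and contents are placeholders):

Imports System.Collections.Generic
Imports NUnit.Framework

<TestFixture()> _
Public Class OrderRepositoryFixture
    Private _orders As List(Of String)

    <SetUp()> _
    Public Sub BeforeEachTest()
        ' Runs before every test method: each test gets a fresh fixture.
        _orders = New List(Of String)
    End Sub

    <TearDown()> _
    Public Sub AfterEachTest()
        ' Runs after every test method: release what SetUp created.
        _orders = Nothing
    End Sub

    <Test()> _
    Public Sub NewRepository_IsEmpty()
        Assert.AreEqual(0, _orders.Count)
    End Sub
End Class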

With the help of such a testing automation framework, your unit tests are:

  • Written in the same language as the unit under test.
  • Structured.
  • Self-documenting.
  • Automatic and repeatable.
  • Designed to test positive and negative actions.

Open questions

  • Testing should become an integral part of the programming effort, not something that comes afterwards. But how can you educate your team? How can you convince management?
  • How do you ensure that having a lot of unit tests does in fact make tracking bugs easier?
  • How do you make sure your tests are easy to maintain?
  • How do you write readable unit tests?


Friday, 5 October 2007

Testing types classification

There are different types of tests you can conduct. These test types have different objectives and characteristics, and there are several classification criteria to organize them.




  • Who uses the information from the test results? (development team, end-user, operations, audit firms, etc.)

  • Phase in development cycles (Requirements gathering, Analysis, Design , Construction , etc)

  • Phase in the life cycle of an application (new versus maintenance)

  • Who establishes and/or executes the test? (Development team, Test organization, end-user)

  • Dynamic tests versus static tests

  • Functional tests versus non-functional tests (performance, load, security, etc.)

Sometimes a classification name means one thing to one person and something else to another. I tried to compile a list of test types from several Internet resources and books. I don't give formal definitions but instead try to characterize each test type along these lines:



  • Objective

  • How are the tests executed?

  • Who executes the tests?

  • Who is the main interested party in the test results?

  • Where are the tests executed?

  • When are the tests executed?



Static testing



  • To find errors before they become bugs, early in the software development cycle.
  • Code is not being executed.
  • Usually takes the form of code reviews (self-review, peer review, walk-throughs, inspections, audits) BUT can also be conducted at the level of requirements, analysis and design.
  • Usage of checklists for verification.
  • Code analysis preferably executed through tools.
  • Executor: individual developer or development team; business analyst / customer in case of requirements testing; business analyst in case of analysis reviews; architect / senior designer in case of design reviews.
  • The primary beneficiary is the project team itself.
  • Can be executed on the private development environment.
  • Can be executed in a separate environment (test environment, or build environment in a Continuous Integration scenario).
  • Code reviews usually happen in the construction phase; audits tend to come later in the cycle.
  • Reviews of requirements, analysis and design are executed early in the project.


Unit testing



  • A.k.a. program testing, developer testing
  • The main objective is to eliminate coding errors
  • Tests the smallest unit in the programming environment (class, function, module)
  • Typically the work of one programmer
  • In an OO environment, mock libraries/stubs are used to isolate the unit under test (see the sketch after this list)
  • Executed by the developer
  • Feedback is immediately used by the developer.
  • Test results are not necessarily logged anywhere.
  • Executed on the private development environment, so tested in isolation from other developers.
  • But can be executed in a separate environment (build environment in a Continuous Integration scenario)
  • Executed during the construction phase
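A sketch of what isolating a unit with a hand-written stub can look like (the exchange-rate names are invented for the example):

' The unit under test depends on an interface, not on the real service.
Public Interface IExchangeRateService
    Function GetRate(ByVal currencyCode As String) As Decimal
End Interface

' Hand-written stub: returns a canned answer, so a test that uses it
' touches no network and no database.
Public Class StubExchangeRateService
    Implements IExchangeRateService

    Public Function GetRate(ByVal currencyCode As String) As Decimal _
        Implements IExchangeRateService.GetRate
        Return 1.25D
    End Function
End Class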


Unit Integration testing



  • A.k.a. component testing, module testing
  • To check whether units can work together
  • It is possible for units to function perfectly in isolation but fail when integrated
  • Typically the work of one programmer
  • Executed by the developer
  • Test results are immediately used by the developer
  • Test results are not necessarily logged anywhere.
  • Executed on the private development environment, so tested in isolation from other developers.
  • Can be executed in a separate environment (build environment in a Continuous Integration scenario)
  • Executed during the construction phase


System testing



  • Compares the system or program to its original objectives
  • Verification against a written set of measurable objectives
  • Focuses on defects that arise at this highest level of integration.
  • The execution of the software in its final configuration, including integration with other software and hardware systems
  • Includes many types of testing, usually with a strong distinction between functional and non-functional requirements such as usability, security, localization, availability, capacity, performance, backup and recovery, and portability
  • Who tests depends on the subtype, but it is usually a separate tester role within the project team
  • Test results are used by the project team or operations (performance, volume -> capacity)
  • Executed on a separate test environment
  • Generally executed towards the end of construction, when the main functionality is working.


Functional testing



  • Finding discrepancies between the program and its external specification
  • Test results used primarily by the project team
  • Focuses on validating the features of an entire function (use case)
  • Main question: is it performing as our customers would expect?
  • Black-box approach: the external specification, not the internal structure, drives the tests
  • Single-user testing; it does not attempt to emulate simultaneous users
  • Looks at the software through the eyes of the targeted end-user.
  • Preferably a separate (non-biased) tester
  • Separate test environment
  • Generally more towards the end of construction



Beta testing


  • Checks how the software affects the business when real customers use it
  • "Test results" used by the customer and the project team
  • Usually not a completely finished product
  • Sometimes run in parallel with the previous application on the end-user environment.
  • Usually not structured testing: no real test cases are established and the application is used in everyday scenarios, although some end-users apply implicit error-guessing techniques to discover defects.
  • Executed by selected end-users
  • In a separate test environment or the real production environment
  • Can be conducted after some construction (iterations), but normally before the formal acceptance tests.


Acceptance Testing



  • Executed by the customer or someone appointed by the customer (not the delivering party)
  • Determines whether the software meets the customer's requirements and whether the user accepts (read: pays for) the application.
  • Who defines the depth of the acceptance testing?
  • Who creates the test cases?
  • Who actually executes the tests?
  • What are the pass/fail criteria for the acceptance test?
  • When and how is payment arranged?
  • Test results are primarily used by the customer but are handed over to the project team. The application may be accepted under certain conditions, a.k.a. known bugs.
  • Separate test environment or production environment
  • Executed after complete testing by the delivering party



Performance testing



  • Searches for bottlenecks that limit the software’s response time and throughput
  • Results primarily used by development team and Operations
  • To identify the degradation in software that is caused by functional bottlenecks and to measure the speed at which the software can execute
  • Kind of system test
  • Mimics real processing patterns and production-like situations
  • To identify performance strengths and weaknesses by gathering precise measurements
  • Don’t intentionally look for functional defects
  • Executed by separate test role
  • Executed on the separate test environment
  • Generally toward the end of construction unless performance-critical components are identified beforehand (calculations, external connectivity through WAN/Internet, etc)


Usability testing



  • Checks whether the human interface complies with the standards at hand and the UI is easy to work with
  • Kind of system test
  • Test results serve the development team and the end-user
  • Components generally checked include screen layout, screen colours, output formats, input fields, program flow, spelling, ergonomics, navigation, and so on
  • Preferably a separate (non-biased) tester conducts these tests (it is a speciality)
  • Executed on a separate test environment
  • Generally toward the end of construction


System Integration testing



  • To check whether several subsystems can work together as specified
  • Kind of system test
  • Within the development team
  • Larger-scale integration than unit integration
  • Generally combines the work of several programmers.
  • Separate test role within the project team (delivering party)
  • Test results used by the project team
  • Executed on a separate test environment
  • During construction, when workable subsystems are ready to be tested (i.e. basic functionality works on the happy path)



Security testing



  • Tries to compromise the security mechanisms of an application or system.
  • Kind of system test
  • Need for separate test environment to mimic production environment
  • Separate test role
  • Executed at the end of construction unless security is paramount in the application



Volume testing



  • Other terms or specialisations: scalability testing, load testing, capacity testing
  • To determine whether the application can handle the volume of data specified in its objectives (current + future): what happens to the system when a certain volume is processed over a longer period of time (resource leaks, etc.)?
  • Test results are important for the project team and operations
  • Volume testing is not the same as stress testing (which tries to break the application)
  • Separate test role
  • Executed on a separate test environment
  • Generally towards the end of construction, unless load/scalability is a critical success factor


Stress testing



  • To find the limits of the software regarding peak data volumes or the maximum number of concurrent users
  • Interested parties: project team and operations
  • Creating chaos is the key premise.
  • Goes beyond potential processing patterns
  • Tries to demonstrate that an application does not meet certain criteria, such as response time and throughput rates, under certain workloads or configurations
  • Kind of system test
  • Doesn't (intentionally) look for functional defects
  • Needs a separate test environment that mimics the production environment
  • Special test software is used to simulate load / concurrent users
  • Separate test role
  • End of construction

Availability testing



  • To assure that the application can continue functioning when certain types of faults occur.
  • Tests in function of fail-over technology.
  • Doesn't (intentionally) look for functional defects
  • Needs a separate test environment that mimics the production environment
  • Special test software is used to simulate certain faults (network outage, etc.)
  • Separate test role
  • End of construction



Deployment testing



  • A.k.a. installation testing
  • To verify installation procedures
  • Important for the release manager and operations (start-up / business continuity)
  • Checking automated installation programs
  • Checking configuration metadata
  • Installation manual review
  • Conducted by a separate test role
  • Separate environment by default (clean-machine principle)
  • During construction, at stage promotion for example


Regression testing



  • To avoid software going into regression in light of changes or bug fixes
  • Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was previously planned. Typically, regression bugs occur as an unintended consequence of program changes.
  • To make sure a fix/change correctly resolves the original problem reported by the customer.
  • To ensure that the fix/change does not break something else.
  • On different levels: unit, integration, system, functional
  • The executor depends on the level of the test
  • The primary interested party depends on the level of the test
  • Executed during construction or maintenance

Several other specialised tests


  • Testing for operational monitoring: do the faults we simulate find their way into the monitoring environment of our application?
  • Testing for portability: does our product work on the targeted operating systems?
  • Testing for interoperability: for example, does our software work with the different targeted database systems?
  • Testing the localized versions.
  • Testing for recoverability after certain system failures.
  • etc.