There are different types of tests you can conduct, each with its own objectives and characteristics. Several classification criteria can be used to organize these test types:
- Who uses the information from the test results? (development team, end-user, operations, audit firms, etc.)
- Phase in the development cycle (requirements gathering, analysis, design, construction, etc.)
- Phase in the life cycle of an application (new development versus maintenance)
- Who establishes and/or executes the tests? (development team, test organization, end-user)
- Dynamic tests versus static tests
- Functional tests versus non-functional tests (performance, load, security, etc.)
Sometimes the name of a test type means one thing to one person and something else to another. I have tried to compile a list of test types from several Internet resources and books. I don't give any formal definitions but instead characterize each test type by:
- Objective
- How are the tests executed?
- Who executes the tests?
- Who is the main interested party in the test results?
- Where are the tests executed?
- When are the tests executed?
Static testing
- To find errors early in the software development cycle, before they become bugs.
- Code is not being executed.
- Usually takes the form of code reviews (self-review, peer review, walkthroughs, inspections, audits) but can also be conducted at the level of requirements, analysis and design.
- Usage of checklists for verification.
- Code analysis preferably executed through tools.
- Executor: individual developer or development team; business analyst/customer in case of requirements testing; business analyst in case of analysis reviews; architect/senior designer in case of design reviews
- Primary beneficiary is the project team itself
- Can be executed on the private development environment
- Can be executed in a separate environment (test environment, build environment in a Continuous Integration scenario)
- Code review usually in construction phase; Audits tend to be later in the cycle.
- Reviews on requirements, analysis and design are executed early in the project
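To make the "code analysis through tools" point concrete, here is a toy static check in Python (the rule itself, flagging functions without docstrings, is just an invented example): the source is parsed with the standard `ast` module and inspected, but never executed, which is exactly what makes the check static.

```python
import ast

def missing_docstrings(source: str) -> list:
    """Return names of functions in `source` that lack a docstring.

    The source code is only parsed, never executed -- that is what
    makes this a *static* check.
    """
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            offenders.append(node.name)
    return offenders

code = '''
def documented():
    "I have a docstring."

def undocumented():
    pass
'''
print(missing_docstrings(code))  # -> ['undocumented']
```

Real static-analysis tools work on the same principle, just with far larger rule sets.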
Unit testing
- A.k.a. Program testing, Developer testing
- The main objective is to eliminate coding errors
- Testing the smallest unit in the programming environment (class, function, module)
- Typically the work of one programmer
- In an OO environment, usage of mock libraries/stubs to isolate units
- Executed by the developer
- Feedback immediately used by the developer.
- Test results are not necessarily logged anywhere.
- Executed on the private development environment, so tested in isolation from other developers.
- But can be executed in a separate environment (build environment in a Continuous Integration scenario)
- Executed during construction phase
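A minimal sketch of the mock/stub idea mentioned above (the `Basket` and price-service names are invented for illustration): the unit under test is exercised in isolation by replacing its collaborator with a mock, so no other code needs to exist or work.

```python
from unittest.mock import Mock

class Basket:
    """Unit under test: computes a total via a collaborating price service."""
    def __init__(self, price_service):
        self.price_service = price_service
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(self.price_service.price_of(i) for i in self.items)

def test_total_sums_item_prices():
    # Stub out the collaborator so the unit is tested in isolation.
    service = Mock()
    service.price_of.return_value = 5
    basket = Basket(service)
    basket.add("apple")
    basket.add("pear")
    assert basket.total() == 10

test_total_sums_item_prices()
```

The feedback loop is immediate: the developer runs this on the private development environment and fixes failures on the spot.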
Unit Integration testing
- A.k.a. Component testing, Module testing
- To check if units can work together
- It is possible for units to function perfectly in isolation but to fail when integrated
- Typically the work of one programmer
- Executed by the Developer
- Test results immediately used by the developer
- Test results are not necessarily logged anywhere.
- Executed on the private development environment, so tested in isolation from other developers.
- Can be executed in a separate environment (build environment in a Continuous Integration scenario)
- Executed during construction phase
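Continuing the (invented) basket example from unit testing, a unit integration test wires two real units together instead of mocking one of them; this is where mismatches between units that each pass their own unit tests show up.

```python
class PriceList:
    """Unit A: looks up prices."""
    def __init__(self, prices):
        self.prices = prices

    def price_of(self, item):
        return self.prices[item]

class Basket:
    """Unit B: totals items via a price source."""
    def __init__(self, price_service):
        self.price_service = price_service
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(self.price_service.price_of(i) for i in self.items)

def test_basket_and_pricelist_work_together():
    # No mocks here: both real units are wired together.
    basket = Basket(PriceList({"apple": 2, "pear": 3}))
    basket.add("apple")
    basket.add("pear")
    assert basket.total() == 5

test_basket_and_pricelist_work_together()
```

If, say, `PriceList` returned prices in cents while `Basket` assumed euros, each unit test would still pass but this test would fail, which is precisely the class of defect unit integration testing targets.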
System testing
- Compares the system or program to the original objectives
- Verification with a written set of measurable objectives
- Focuses on defects that arise at this highest level of integration.
- The execution of the software in its final configuration, including integration with other software and hardware systems
- Includes many types of testing: usually strong distinction between functional and non-functional requirements like usability, security, localization, availability, capacity, performance, backup and recovery, portability
- Who does the testing depends on the subtype, but usually there is a separate tester role within the project team
- Test results are used by the project team or Operations (performance, volume -> capacity)
- Executed on the separate test environment
- Generally executed towards the end of construction when main functionality is working.
Functional testing
- Finding discrepancies between the program and its external specification
- Test result used primarily by the project team
- Focuses on validating the features of an entire function (use case)
- Main question: Is it performing as our customers would expect?
- Black-box approach
- Single-user test. It does not attempt to emulate simultaneous users
- Look at the software through the eyes of targeted end-user.
- Preferably a separate tester (non-biased)
- Separate test environment
- Generally more towards the end of construction
Beta testing
- Check how the software affects the business when real customers use it
- "Test results" used by customer and project team
- Usually not a completely finished product
- Sometimes used in parallel with previous application on the end-user environment.
- Usually not structured testing: no real test cases are established and the application is used in everyday scenarios, although some end-users apply implicit error-guessing techniques to discover defects.
- Executed by selected end-users
- In separate test environment or real production environment
- Can be conducted after some construction iterations, but normally before formal acceptance tests.
Acceptance Testing
- Executed by the customer or someone appointed by the customer (not the delivering party)
- To determine whether the software meets customer requirements and whether the user accepts (read: pays for) the application. Key questions:
- Who defines the depth of the acceptance testing?
- Who creates the test cases?
- Who actually executes the tests?
- What are the pass/fail criteria for the acceptance test?
- When and how is payment arranged?
- Test results are primarily used by the customer but are handed over to the project team. The application may be accepted under some conditions, i.e. with known bugs.
- Separate test environment or Production environment
- Executed after the delivering party has completed its own testing
Performance testing
- Searches for bottlenecks that limit the software’s response time and throughput
- Results primarily used by development team and Operations
- To identify the degradation in software that is caused by functional bottlenecks and to measure the speed at which the software can execute
- Kind of system test
- Mimic real processing patterns, mimic production-like situations
- To identify performance strengths and weaknesses by gathering precise measurements
- Don’t intentionally look for functional defects
- Executed by separate test role
- Executed on the separate test environment
- Generally toward the end of construction unless performance-critical components are identified beforehand (calculations, external connectivity through WAN/Internet, etc)
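As a crude sketch of "gathering precise measurements", the standard `timeit` module can time an operation and check it against a measurable objective (the `handle_request` function and the 50 ms budget are invented for illustration):

```python
import timeit

def handle_request(payload):
    # Stand-in for the operation whose speed we want to measure.
    return sorted(payload)

# Average the cost over 1,000 calls to get a stable measurement.
data = list(range(1000, 0, -1))
seconds = timeit.timeit(lambda: handle_request(data), number=1000)
per_call_ms = (seconds / 1000) * 1000  # average seconds per call, in ms
print(f"average: {per_call_ms:.3f} ms per call")

# A performance test passes or fails against a measurable objective,
# here a hypothetical budget of 50 ms per call.
assert per_call_ms < 50, "response-time budget exceeded"
```

Real performance tests run on a separate, production-like environment; numbers measured on a developer machine are only indicative.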
Usability testing
- Check whether the human interface complies with the standards at hand and the UI is easy to work with
- Kind of system test
- Test results serve the development team and end user
- Components generally checked include screen layout, screen colours, output formats, input fields, program flow, spelling, ergonomics, navigation, and so on
- Preferably a separate tester (non-biased) conducts these tests (a speciality)
- Executed on the separate test environment
- Generally toward the end of construction
System Integration testing
- To check if several sub-systems can work together as specified
- Kind of system test
- Larger-scale integration than unit integration: generally combines the work of several programmers across the development team.
- Separate test role within the project team (delivering party)
- Test result used by project team
- Executed on the separate test environment
- During construction when workable sub-systems are ready to be tested (i.e. basic functionality works in happy path)
Security testing
- Tries to compromise the security mechanisms of an application or system.
- Kind of system test
- Need for separate test environment to mimic production environment
- Separate test role
- Executed at the end of construction, unless security is primordial in the application
Volume testing
- Other terms or specialisations: scalability testing, load testing, capacity testing
- To determine whether the application can handle the volume of data specified in its objectives (current + future), i.e. what happens to the system when a certain volume is processed over a longer period of time (resource leaks, etc.)
- Test results important for project team and Operations
- Volume testing is not the same as stress testing (which tries to break the application)
- Separate test role
- Executed on the separate test environment
- Generally towards end of construction unless load/scalability is a critical success factor
Stress testing
- To find the limits of the software regarding peak data volumes or maximum number of concurrent users
- Interested parties are : Project team and Operations
- Creating chaos is the key premise.
- Go beyond potential processing patterns
- Try to demonstrate that an application does not meet certain criteria such as response time and throughput rates, under certain workloads or configurations
- Kind of system test
- Don’t look for functional defects (intentionally)
- Need for separate test environment to mimic production environment
- Special test software used to simulate load/concurrent users
- Separate test role
- End of construction
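A toy illustration of simulating concurrent users (the `handle_request` stand-in, the user counts and the 5-second cutoff are all invented): many requests are fired in parallel and only the ones that complete in time count as successes, which is the pass/fail shape of a stress run.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Stand-in for the operation being stressed.
    return sum(range(10_000))

def stress(concurrent_users, requests_per_user):
    """Fire many requests in parallel and count how many succeed."""
    ok = 0
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request, u)
                   for u in range(concurrent_users)
                   for _ in range(requests_per_user)]
        for f in futures:
            try:
                f.result(timeout=5)  # slower than 5 s counts as a failure
                ok += 1
            except Exception:
                pass
    return ok

# 50 simulated users, 10 requests each: deliberately beyond
# the normal processing pattern.
succeeded = stress(concurrent_users=50, requests_per_user=10)
print(f"{succeeded}/{50 * 10} requests succeeded")
```

Dedicated load-generation tools do the same thing at much larger scale, against the real application over the network.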
Availability testing
- To assure that the application can continue functioning when certain types of faults occur.
- Tests the fail-over technology in place.
- Don’t look for functional defects (intentionally)
- Need for separate test environment to mimic production environment
- Special test software used to simulate certain faults (network outage, etc.)
- Separate test role
- End of construction
Deployment testing
- A.k.a. Installation testing
- To verify installation procedures
- Important for Release manager and Operations (start up/ business continuity)
- Checking automated installation programs
- Checking configuration metadata
- Installation manual review
- Conducted by separate test role
- Separate environment by default (clean machine principle)
- During construction, at stage promotion for example
Regression testing
- To avoid software regressing as a result of changes or bug fixes
- Regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the way that was planned. Typically, regression bugs are an unintended consequence of program changes.
- To make sure a fix/change correctly resolves the original problem reported by the customer.
- To ensure that the fix/change does not break something else.
- On different levels: unit, integration, system, functional
- Executor depends on level of test
- Primary interested party depends on level of test
- Executed during Construction or Maintenance
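At the unit level, a regression test is often just an ordinary test pinned to a fixed bug (the `discount` function and the bug it guards against are invented for illustration): one test confirms the reported problem stays fixed, another confirms the fix did not break the original behaviour.

```python
def discount(price, percent):
    """Apply a percentage discount, never going below zero.

    A hypothetical earlier version returned a negative price for
    percent > 100; the max(..., 0) clamp is the fix.
    """
    return max(price * (1 - percent / 100), 0)

def test_regression_discount_over_100_percent():
    # Pins the (hypothetical) bug report: discounts above 100%
    # used to yield negative prices.
    assert discount(80, 120) == 0

def test_discount_still_works_for_normal_input():
    # And the fix must not break something else.
    assert discount(100, 25) == 75

test_regression_discount_over_100_percent()
test_discount_still_works_for_normal_input()
```

Kept in the suite permanently, such tests make the "does not break something else" check automatic on every future change.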
Several Other Specialised tests
- Testing for operational monitoring: do the faults we simulate find their way into the monitoring environment of our application?
- Testing for portability: does our product work on the targeted operating systems?
- Testing for interoperability: for example, does our software work with the different targeted database systems?
- Testing the localized versions.
- Testing for recoverability after certain system failures.
- etc.