Introduction
Basic Steps of Structured Testing
1. Write Test Management Plan
- Analysis: Project Plan
- Estimate: needed time, money, resources, test (prioritization) strategy
2. Select what has to be tested
- Analysis: What is to be delivered (requirements)
- Design: Tested for cohesion (design)
3. Decide when, how and to what extent the testing needs to be done
- How: What techniques?
- When: Which phase(s) of the project?
- Extent: Decide whether or not to automate the tests (regression testing)
4. Develop test scenarios/cases
- Develop Test Scenarios
- Develop the Test Cases
5. Execute the tests
- Select Test Data
- Execute the Tests
- Report the Defects found and follow up / re-test the Defects
6. Write Test Report(s)
- Analyze the results
- Document the results in a report
V-Model
Project Phases
| Identify | Plan | Design | Code | Test | Deliver | Close |
|---|---|---|---|---|---|---|
| Decision to Start a project | Project plans are written | Requirements, UML Design | Programming | Testing | Release, Customer tests | Get paid |
| | Testing starts here: write the Test Management Plan (TMP) | | | | Testing normally ends here | |
Waterfall Model
The phases cascade strictly one after another: Customer Requirements → System Design → Coding → Testing → Release (mapped onto the Plan, Design, Code, Test and Deliver project phases).
V-Model
| Level | Specification (Static Testing) | Corresponding Test (Dynamic Testing) |
|---|---|---|
| Customer Level | Customer Requirements | Customer Acceptance Test |
| System Level | System Design and Requirements | System Test |
| Integration Level | Component Designs and Requirements | Integration Testing |
| Component Level | Coding | Component Testing |

The left leg of the V (specifications) is covered by Static Testing; the right leg (tests) by Dynamic Testing.
Testing
- Static Testing: You don't need a computer to perform the tests.
- Dynamic Testing: You need a computer to perform the tests.
Static Testing
SMART
- S = Specific
- Not generic
- Not open to misinterpretation when read by others
- State only one requirement per statement
- M = Measurable
- It must be possible to verify the completion
- What are the acceptance criteria?
- A = Achievable
- The requirement must be physically achievable under the existing circumstances
- Must be realistic
- R = Relevant
- Is the requirement a requirement?
- Duplicate requirements
- T = Traceable
- All requirements must have a unique ID
- A requirement must always be traced back to a customer requirement
- It prevents scope creep
- It prevents bad surprises at hand-over / Customer Acceptance Test (CAT)
Test Management Plan
- General sections
- Purpose
- What is the purpose of the document?
- Why is the document written?
- For whom is the document intended?
- Scope
- What is included and what is excluded?
- What information can you find in the document?
- What information will be excluded from the document?
- Test Management Plan specific sections
- Test Objects/Target
- What will be tested? The entire system? Only a part? If so, which part?
- Test Resources
- What is needed to perform the tests? Human resources, computers, test tools, etc.
- Test Level Strategy (V-Model)
- Which tests will be planned? Unit Testing? Integration Test? System Test? Customer Acceptance Tests?
- Test Prioritization Strategy
- Which Test Prioritization Strategy will be used?
- Test Reporting
- Are you going to write Test Reports?
- How to communicate and trace bugs?
Test Prioritization
Strategies
- Customer Requirements-Based Prioritization
- Tests are ranked on the basis of several factors/formulas (see the sketch after this list):
- Customer-Assigned Priority (CP)
- Implementation/Requirements Complexity (RC)
- Requirements Volatility (RV)
- Tests are then executed in the ranked order
- Coverage-Based Prioritization
Types of Coverage:
- Requirements Coverage
- Initial Requirements Coverage
- Total Requirements Coverage
- Additional Requirements Coverage
- Statement Coverage
- Code Coverage
- Cost-Effectiveness-Based Prioritization
- Tests are prioritized on the basis of the costs:
- to automate the test?
- to analyse the results?
- to setup the test environment?
- to execute the test?
- etc.
- This strategy is often applied in combination with other strategies
- History-Based Prioritization
- Tests are prioritized on the basis of test execution history
- Risk-Based Prioritization
- What the impact/damage of a failure of the requirement would be
- The probability that a failure of the requirement/function will occur
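A minimal sketch of how the Customer Requirements-Based strategy could rank tests by combining CP, RC and RV into one score. The 1-5 rating scale, the weights and the test names are assumptions chosen only for illustration:

```python
def priority_score(cp, rc, rv, weights=(0.5, 0.3, 0.2)):
    """Combine Customer-Assigned Priority (CP), Requirements Complexity (RC)
    and Requirements Volatility (RV) into a single ranking score."""
    w_cp, w_rc, w_rv = weights
    return w_cp * cp + w_rc * rc + w_rv * rv

# Hypothetical tests with factor ratings on a 1-5 scale (5 = high).
tests = [
    ("TC-001 login",          {"cp": 5, "rc": 2, "rv": 1}),
    ("TC-002 report export",  {"cp": 2, "rc": 4, "rv": 3}),
    ("TC-003 password reset", {"cp": 4, "rc": 3, "rv": 2}),
]

# Tests are then executed (here: just printed) in the ranked order.
for name, factors in sorted(tests, key=lambda t: priority_score(**t[1]), reverse=True):
    print(f"{priority_score(**factors):.2f}  {name}")
```

Other strategies (coverage-, cost-, history- or risk-based) would only change how the score is computed; the ranking and execution step stays the same.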
Test Preparation
Test Case
Test Cases are tests that are designed to test the requirements.
Test Scenarios
Test Scenarios are designed to test functions
- Positive Test = Testing the normal flow through the application or code
- Negative Test = Testing an error (alternative / exceptional) situation.
Test Run
Test Runs are sets of Test Scenarios that are executed in a sequence without interrupting system operation (shutdown, startup).
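To make the positive/negative distinction concrete, here is a minimal sketch using Python's built-in unittest framework; the `withdraw` function and its rules are hypothetical, invented only for the example:

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical function under test: withdraw an amount from a balance."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_positive_normal_flow(self):
        # Positive test: the normal flow through the code.
        self.assertEqual(withdraw(100, 30), 70)

    def test_negative_error_situation(self):
        # Negative test: an error / exceptional situation.
        with self.assertRaises(ValueError):
            withdraw(100, 150)

if __name__ == "__main__":
    unittest.main()
```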
Static Testing Overview
Static Testing Methods:
- Review Requirements
- Static Code Analysis
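Static code analysis inspects the source without executing it. A minimal sketch using Python's standard `ast` module; the rule it checks (flagging calls to `eval`) is just an illustrative assumption:

```python
import ast

SOURCE = """
def load(config_text):
    return eval(config_text)   # questionable: executes arbitrary input
"""

# Parse the source into a syntax tree; nothing is executed.
tree = ast.parse(SOURCE)

# Walk the tree and flag every call to eval().
for node in ast.walk(tree):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"):
        print(f"line {node.lineno}: call to eval() found")
```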
Other Activities during Verification
- Writing test (Management) Plans
- Defining Test Scenarios and Test Cases
- Defining Test Runs
Dynamic Testing
Three types of test methods:
- Black Box Testing
- No programming knowledge required
- Based on requirements and / or specifications (testing functionality)
- White Box Testing
- Write Test Code (use of testing frameworks)
- Programming knowledge is required
- Based on detailed designs / requirements (testing on code)
- Gray Box Testing
- Identify the valid / invalid input ranges
- Test boundaries of ranges (min, max, just inside / outside boundaries)
- Methods can be used for Black Box testing and White Box testing
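A minimal sketch of the gray-box idea of testing range boundaries; `validate_age` and its valid range of 0-120 are hypothetical, chosen only to illustrate min, max and just-inside/just-outside values:

```python
def validate_age(age):
    """Hypothetical function under test: valid range is 0..120 inclusive."""
    return 0 <= age <= 120

# Boundary value analysis: min, max, just inside and just outside the range.
boundary_cases = {
    -1:  False,   # just outside the lower boundary
    0:   True,    # lower boundary
    1:   True,    # just inside the lower boundary
    119: True,    # just inside the upper boundary
    120: True,    # upper boundary
    121: False,   # just outside the upper boundary
}

for value, expected in boundary_cases.items():
    assert validate_age(value) == expected, f"boundary case {value} failed"
print("all boundary cases passed")
```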
Error Categories
| Error Category | Caused In | Caused By | Discovered During | Code Compiles | Application Runs | Discovery Difficulty | Note |
|---|---|---|---|---|---|---|---|
| Syntax Errors | Code | Programmer | Compiling (at the latest) | No | No | Easy | |
| Compilation Errors | Code, Compiler | Programmer, compiler software | Compiling | No | No | Easy to difficult | |
| Semantic Errors | Requirements, designs, code | Requirements engineer, designer / architect, programmer | After application startup | Yes | Yes | Easy to difficult | Result in wrong system behavior |
| Run-Time Errors | Requirements, designs, code | Requirements engineer, designer / architect, programmer | After application startup | Yes | Yes | Easy to difficult | Result in system (function) failure (e.g. crash) |
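To make the last two rows concrete, a small illustrative sketch (the `average` functions are invented for the example): both versions compile and start, but one silently produces wrong behavior while the other crashes.

```python
def average_wrong(values):
    # Semantic error: compiles and runs, but the result is wrong
    # (off-by-one in the divisor) -> wrong system behavior.
    return sum(values) / (len(values) + 1)

def average(values):
    return sum(values) / len(values)

print(average_wrong([2, 4, 6]))  # prints 3.0, expected 4.0 -> semantic error

# Run-time error: the application starts, then fails (crashes) when the
# exceptional input is hit.
print(average([]))               # raises ZeroDivisionError
```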
Non Functional Testing
- Load Testing
- Measuring the system behavior under increasing system load (see the sketch after this list)
- Use of Record & Playback Tools, Test Script to simulate the load
- Performance Testing
- Measuring the processing speed (CPU) and response time for particular use cases, usually as a function of increasing load
- Use of Record & Playback Tools, Test Script
- Stress Testing
- Observation of the system behavior when it is overloaded
- Use of Record & Playback Tools, Test Script
- Volume Testing
- Observation of the system behavior dependent on the amount of the data
- Define and use specific sets of Test Data
- Reliability Testing
- Run-Time operational testing to measure the mean time between failure or failure rate
- Let the system run in a defined state for a defined amount of time
- Robustness Testing
- Measuring the system's response to operating errors, wrong programming, hardware failure, etc., as well as examination of exception handling and recovery.
- Trigger exceptional situations to see how the system will respond.
- Compatibility and Conversion Testing
- Examination of compatibility to given systems, import / export of data etc.
- Define and use specific sets of Test Data / Test Environment(s)
- Back-to-Back Testing
- Testing of different configurations of the system
- Define and use different Test Environments
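A minimal sketch of a load/performance measurement under increasing load, using only the standard library; `handle_request` is a stand-in for the real system under test, and the load levels are arbitrary assumptions:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for the system under test (e.g. an HTTP call)."""
    time.sleep(0.01)          # simulated processing time
    return "ok"

# Load testing: measure behavior for increasing system loads.
for load in (10, 50, 100):    # number of concurrent requests per step
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        list(pool.map(handle_request, range(load)))
    elapsed = time.perf_counter() - start
    print(f"{load:4d} requests -> {elapsed:.3f} s ({load / elapsed:.1f} req/s)")
```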
Test Techniques
| White Box | Gray Box | Black Box |
|---|---|---|
| Unit Test Code | Boundary Value Analysis | Path Testing |
| Loop Testing | Equivalence Partitioning | Cause-Effect Graphing |
| Control Structure Testing | | Comparison Testing |
| | | Fuzz Testing |
| | | Monkey Testing |
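As one example of the techniques in the table, a minimal fuzz-testing sketch: feed many random inputs to the unit under test and only check that it never crashes unexpectedly. The `parse_quantity` function is hypothetical:

```python
import random
import string

def parse_quantity(text):
    """Hypothetical unit under test: parse a quantity like '12 kg'."""
    number, _, unit = text.partition(" ")
    return int(number), unit

# Fuzz testing: throw random strings at the function and record crashes.
random.seed(0)
alphabet = string.ascii_letters + string.digits + " .-"
crashes = 0
for _ in range(1000):
    fuzz_input = "".join(random.choices(alphabet, k=random.randint(0, 12)))
    try:
        parse_quantity(fuzz_input)
    except Exception:
        crashes += 1
print(f"{crashes} of 1000 random inputs raised an exception")
```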
Environment Tools
Emulators
An Emulator tries to duplicate the inner workings of the device/system.
Simulators
A simulator tries to duplicate the behavior of the device/system.
Simulation Types:
- Live Simulations (real system timing)
- Virtual Simulations (controlled timing)
- Constructive Simulations (based on a sequence of events)
Stubs/Test Stubs
A stub is an implementation of (one) specified behavior (partial simulation).
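A minimal sketch of a hand-written test stub: the real dependency (here a hypothetical exchange-rate service) is replaced by a stub that implements just the one behavior the test needs, so the code under test can be exercised without the real system.

```python
class ExchangeRateServiceStub:
    """Stub: implements only the one behavior needed by the test
    (a fixed EUR->USD rate) instead of calling the real service."""
    def rate(self, source, target):
        assert (source, target) == ("EUR", "USD")
        return 1.10

def convert(amount, source, target, service):
    """Code under test: converts an amount using the rate service."""
    return amount * service.rate(source, target)

# The test runs against the stub, so no network or real system is needed.
result = convert(200, "EUR", "USD", ExchangeRateServiceStub())
assert abs(result - 220.0) < 1e-9
print("conversion test passed:", result)
```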
Security Testing
- Confidentiality (illegal access to data)
- Integrity (corrupting data)
- Authentication (illegal access to system parts)
- Authorization (providing illegal access to the system/data)
- Availability (DoS attacks)
- Non-Repudiation (MitM attacks)