📋 Software Testing Life Cycle (STLC) Phases
Requirement Analysis
↓
Test Planning
↓
Test Case Development
↓
Test Environment Setup
↓
Test Execution
↓
Test Cycle Closure
Phase 1: Requirement Analysis
Objective: Understand what needs to be tested
Activities:
- Review functional & non-functional requirements
- Identify testable requirements
- Determine test priorities
- Identify testing types required
- Prepare Requirement Traceability Matrix (RTM)
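For illustration, a minimal RTM maps each requirement to the test cases that cover it, so gaps are visible at a glance (the IDs and rows below are hypothetical):

| Req ID | Requirement | Test Case ID(s) | Coverage Status |
|---|---|---|---|
| REQ-001 | User can log in with valid credentials | TC-001, TC-002 | Covered |
| REQ-002 | Password reset via email | TC-010 | Not yet covered |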
Phase 2: Test Planning
Objective: Define the testing approach and resources
Activities:
- Estimate testing effort
- Select testing tools
- Define test strategy
- Allocate resources
- Identify entry/exit criteria
- Risk assessment
Phase 3: Test Case Development
Objective: Create test cases and test data
Activities:
- Write detailed test cases
- Create test scripts (for automation)
- Prepare test data
- Review & baseline test cases
- Update RTM
Phase 4: Test Environment Setup
Objective: Prepare the testing environment
Activities:
- Setup test environment (hardware/software)
- Configure test data
- Perform smoke testing (see the sketch after this list)
- Verify environment readiness
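As a sketch of the smoke-testing step, a minimal readiness check can hit a health endpoint before any functional runs begin. The URL and endpoint below are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnvironmentSmokeCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical health endpoint of the application under test
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://qa.example.com/health"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Fail fast if the environment is not ready for test execution
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Environment not ready: HTTP " + response.statusCode());
        }
        System.out.println("Smoke check passed; environment is ready.");
    }
}
```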
Phase 5: Test Execution
Objective: Execute tests and log defects
Activities:
- Execute test cases
- Document test results
- Log defects
- Retest fixed defects
- Regression testing
- Update test status
Phase 6: Test Cycle Closure
Objective: Complete testing and document learnings
Activities:
- Verify all defects closed/deferred
- Prepare test summary report
- Collect metrics
- Conduct retrospectives
- Archive test artifacts
🎯 Test Strategies
1. Proactive Strategy
- Risk-Based Testing: Prioritize testing based on risk assessment (scoring sketch after this list)
- Requirements-Based Testing: Test cases derived from requirements
- Early Test Design: Test cases created during requirements phase
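As a rough sketch of risk-based prioritization, one common scheme scores each area as likelihood × impact and tests the highest scores first. The scale and items below are made up:

```java
import java.util.Comparator;
import java.util.List;

public class RiskBasedPrioritization {
    // Risk score = likelihood (1-5) × impact (1-5); higher score is tested first
    record TestItem(String name, int likelihood, int impact) {
        int risk() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        List<TestItem> items = List.of(
                new TestItem("Payment processing", 4, 5),
                new TestItem("Profile page layout", 2, 1),
                new TestItem("Login/authentication", 3, 5));
        items.stream()
                .sorted(Comparator.comparingInt(TestItem::risk).reversed())
                .forEach(i -> System.out.println(i.risk() + "  " + i.name()));
    }
}
```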
2. Reactive Strategy
- Exploratory Testing: Simultaneous learning, test design & execution
- Session-Based Testing: Time-boxed exploratory sessions
- Error Guessing: Anticipate likely defects based on the tester's experience
3. Methodical Strategy
- Checklist-Based: Predefined checklists
- Standards-Based: Following industry standards (ISO, IEEE)
- Quality Characteristics-Based: Testing based on quality models
4. Analytical Strategy
- Risk Analysis: Focus on high-risk areas
- Coverage Analysis: Ensure adequate code/requirement coverage
- Cause-Effect Analysis: Identify causes of defects
5. Model-Based Strategy
- State Transition Testing: Based on state diagrams
- Use Case Testing: Based on use case models
- Decision Table Testing: Based on business rules
🔍 Test Types
| Type | Purpose | When to Use | Tools |
|---|---|---|---|
| Functional Testing | Verify features work as expected | Every release | Selenium, Playwright, TestNG |
| Non-Functional Testing | Test performance, security, usability | Before major releases | JMeter, LoadRunner, OWASP ZAP |
| Regression Testing | Ensure new changes don't break existing features | After every code change | Selenium, CI/CD pipelines |
| Smoke Testing | Verify critical functionalities work | After build deployment | Quick automated scripts |
| Sanity Testing | Quick check on specific functionality | After minor fixes | Manual or automated |
| Integration Testing | Test interaction between components | After unit testing | JUnit, TestNG, REST Assured |
| System Testing | End-to-end testing of complete system | Before UAT | Selenium, Playwright |
| Acceptance Testing | Verify system meets business requirements | Before production | Cucumber, Selenium |
| Performance Testing | Check speed, scalability, stability | Load testing phase | JMeter, Gatling, K6 |
| Security Testing | Identify vulnerabilities | Security audit phase | OWASP ZAP, Burp Suite |
| Usability Testing | Evaluate user-friendliness | UX review phase | Manual testing, User feedback |
| Compatibility Testing | Check across browsers/devices/OS | Cross-platform releases | BrowserStack, Sauce Labs |
Detailed Test Types
Functional Testing
- Unit Testing: Individual components
- Integration Testing: Component interactions
- System Testing: Complete system
- UAT: Business perspective validation
Non-Functional Testing
- Performance: Load, Stress, Spike, Endurance testing
- Security: Vulnerability, Penetration, Risk assessment
- Usability: User experience evaluation
- Compatibility: Browser, OS, Device compatibility
- Reliability: System uptime and recovery
- Scalability: Handling increased load
📊 Test Levels
┌─────────────────────────┐
│   Acceptance Testing    │ ← Business Users
├─────────────────────────┤
│     System Testing      │ ← QA Team
├─────────────────────────┤
│   Integration Testing   │ ← Dev + QA
├─────────────────────────┤
│      Unit Testing       │ ← Developers
└─────────────────────────┘
Level 1: Unit Testing
| Aspect | Details |
|---|---|
| Focus | Individual units/components |
| Who | Developers |
| When | During development |
| Tools | JUnit, TestNG, Mockito |
| Coverage | Classes, methods, functions |
Level 2: Integration Testing
| Aspect | Details |
|---|---|
| Focus | Component interactions, APIs, data flow |
| Who | Developers + QA |
| When | After unit testing |
| Tools | REST Assured, Postman, SoapUI |
| Approaches | Big Bang, Top-Down, Bottom-Up, Sandwich |
Level 3: System Testing
| Aspect | Details |
|---|---|
| Focus | Complete integrated system |
| Who | QA Team |
| When | After integration testing |
| Tools | Selenium, Playwright, Cypress |
| Coverage | Functional & non-functional requirements |
Level 4: Acceptance Testing
| Aspect | Details |
|---|---|
| Focus | Business requirements validation |
| Who | Business users, Stakeholders |
| When | Before production release |
| Types | UAT, Alpha, Beta, Contract testing |
| Tools | Cucumber, SpecFlow, FitNesse |
🛠️ Test Design Techniques
Black Box Testing Techniques
1. Equivalence Partitioning
2. Boundary Value Analysis (worked sketch after this list)
3. Decision Table Testing
| Conditions / Actions | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
|---|---|---|---|---|
| Valid Username | Y | Y | N | N |
| Valid Password | Y | N | Y | N |
| Action | Login | Error | Error | Error |
4. State Transition Testing
5. Use Case Testing
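To make Boundary Value Analysis concrete: for a field that accepts ages 18–60, the interesting values sit at and just beyond each edge. A minimal JUnit 5 sketch, with a hypothetical validation rule inlined so it runs as-is:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeValidatorBoundaryTest {
    // Hypothetical rule under test: valid ages are 18..60 inclusive
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    @ParameterizedTest
    @CsvSource({
            "17, false",  // just below lower boundary
            "18, true",   // lower boundary
            "19, true",   // just above lower boundary
            "59, true",   // just below upper boundary
            "60, true",   // upper boundary
            "61, false"   // just above upper boundary
    })
    void boundaryValues(int age, boolean expected) {
        assertEquals(expected, isValidAge(age));
    }
}
```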
White Box Testing Techniques
1. Statement Coverage
2. Branch Coverage (contrasted with statement coverage after this list)
3. Path Coverage
4. Condition Coverage
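A tiny example of why branch coverage is stronger than statement coverage (the function below is made up): a single test with a non-null discount executes every statement, but only a second test with a null discount exercises the false branch.

```java
public class DiscountCalculator {
    // Returns the price after applying an optional discount percentage
    public static double apply(double price, Double discountPercent) {
        double result = price;
        if (discountPercent != null) {          // branch: true / false
            result = price * (1 - discountPercent / 100);
        }
        return result;
    }
    // apply(100.0, 10.0) alone reaches 100% statement coverage,
    // but 100% branch coverage also requires apply(100.0, null)
}
```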
🤖 Test Automation Strategy
Test Automation Pyramid
        ┌───────────────────┐
        │     UI Tests      │ ← 10% (Manual + Automated)
    ┌───┴───────────────────┴───┐
    │     Integration Tests     │ ← 30% (Automated)
┌───┴───────────────────────────┴───┐
│            Unit Tests             │ ← 60% (Automated)
└───────────────────────────────────┘
When to Automate
- Regression test suites
- Smoke/Sanity tests
- Repetitive tests
- Data-driven tests
- Performance/Load tests
- API tests
- Critical path scenarios
When Not to Automate
- One-time tests
- Unstable features
- Complex visual validation
- Usability testing
- Exploratory testing
- Tests requiring frequent updates
Automation Framework Types
| Framework | Description | Pros | Cons |
|---|---|---|---|
| Linear | Record and playback | Easy to create | Not maintainable |
| Modular | Divide application into modules | Reusable modules | Requires planning |
| Data-Driven | Separate test data from scripts (sketch below this table) | Easy data variation | Data dependency |
| Keyword-Driven | Keywords represent actions | Non-technical friendly | Initial effort high |
| Hybrid | Combination of above | Flexible, powerful | Complex setup |
| BDD | Behavior-driven (Cucumber) | Business readable | Learning curve |
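As a sketch of the Data-Driven row above, JUnit 5's `@ParameterizedTest` can pull inputs from an external CSV file, so data varies without touching the script. The file path and inline login check are hypothetical stand-ins:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class LoginDataDrivenTest {
    // Test data lives in src/test/resources/login-data.csv (hypothetical),
    // one "username,password,expectedResult" row per case
    @ParameterizedTest
    @CsvFileSource(resources = "/login-data.csv", numLinesToSkip = 1)
    void login(String username, String password, boolean expected) {
        // Stand-in for a real login call against the application under test
        boolean actual = username != null && !username.isBlank()
                && password != null && password.length() >= 8;
        assertTrue(actual == expected);
    }
}
```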
CI/CD Integration
Automated tests deliver the most value when wired into the delivery pipeline:
- Run smoke tests on every commit/build to fail fast
- Run the full regression suite on a schedule (e.g., nightly) or before release
- Gate merges and deployments on test results
- Publish test reports and notify the team on failures
Test Automation Tools Comparison
| Tool | Type | Language | Best For |
|---|---|---|---|
| Selenium | UI | Java, Python, C#, JS | Cross-browser testing |
| Playwright | UI | Java, Python, JS, C# | Modern web apps, auto-wait |
| Cypress | UI | JavaScript | Frontend developers |
| REST Assured | API | Java | REST API testing |
| Postman | API | JavaScript | API manual + automation |
| JMeter | Performance | Java | Load/Stress testing |
| Cucumber | BDD | Java, Ruby, JS | Behavior-driven testing |
✨ Testing Best Practices
Test Case Design
- Write clear, concise test cases
- One test case = one objective
- Include prerequisites and test data
- Make test cases independent
- Use descriptive names
- Maintain traceability to requirements
- Review test cases before execution
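Applied to automated checks, the same principles look like this: one objective per test, a descriptive name, prerequisites set up explicitly, and no dependence on other tests. The class under test is a hypothetical stub so the sketch compiles:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ShoppingCartTest {
    private ShoppingCart cart;   // hypothetical class under test

    @BeforeEach
    void prerequisites() {
        // Each test starts from a known, independent state
        cart = new ShoppingCart();
    }

    @Test
    void addingAnItemIncreasesCartSizeToOne() {   // one objective, descriptive name
        cart.add("SKU-123", 1);
        assertEquals(1, cart.size());
    }

    // Minimal stand-in so the sketch is self-contained
    static class ShoppingCart {
        private int items;
        void add(String sku, int qty) { items += qty; }
        int size() { return items; }
    }
}
```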
Defect Management
- Report defects immediately
- Include steps to reproduce
- Attach screenshots/logs
- Assign severity and priority
- Verify fixes before closing
- Track defect metrics
Test Data Management
- Separate test data from test scripts
- Use realistic data
- Protect sensitive data (masking)
- Version control test data
- Create data setup/teardown scripts (sketch after this list)
- Use data generation tools
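For the setup/teardown point, a common pattern is to create fresh data before each test and remove it afterwards so runs stay repeatable. The helper below is a hypothetical stub:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class OrderFlowTest {
    private String testUserId;

    @BeforeEach
    void createTestData() {
        // Hypothetical helper that seeds an isolated user for this test only
        testUserId = TestDataHelper.createUser("qa-user@example.com");
    }

    @AfterEach
    void cleanUpTestData() {
        // Teardown keeps the environment clean for the next run
        TestDataHelper.deleteUser(testUserId);
    }

    @Test
    void placedOrderAppearsInOrderHistory() {
        // ... exercise the order flow with testUserId ...
    }

    // Stub so the sketch is self-contained
    static class TestDataHelper {
        static String createUser(String email) { return "user-" + email.hashCode(); }
        static void deleteUser(String id) { /* no-op in this sketch */ }
    }
}
```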
Test Environment
- Mirror production as closely as possible
- Maintain separate environments (Dev, QA, Staging, Prod)
- Document environment configuration
- Implement access controls
- Regular environment refreshes
- Monitor environment health
Test Metrics to Track
| Metric | Formula | Purpose |
|---|---|---|
| Test Coverage | (Requirements tested / Total requirements) × 100 | Measure completeness |
| Defect Density | Defects / Size (KLOC) | Code quality indicator |
| Defect Removal Efficiency | (Defects found before release / Total defects, incl. post-release) × 100 | Testing effectiveness |
| Test Execution Rate | Tests Executed / Time | Team productivity |
| Pass Rate | (Passed / Executed) × 100 | Build stability |
| Automation ROI | (Manual Cost - Automation Cost) / Automation Cost | Automation value |
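A quick worked example with made-up numbers shows how the formulas above combine:

```java
public class TestMetricsExample {
    public static void main(String[] args) {
        // Hypothetical numbers for one test cycle
        int executed = 180, passed = 153, totalPlanned = 200;
        int defectsBeforeRelease = 45, defectsAfterRelease = 5;

        double coverage = 100.0 * executed / totalPlanned;   // 180/200 = 90.0%
        double passRate = 100.0 * passed / executed;         // 153/180 = 85.0%
        double dre = 100.0 * defectsBeforeRelease
                / (defectsBeforeRelease + defectsAfterRelease); // 45/50 = 90.0%

        System.out.printf("Coverage %.1f%%, Pass rate %.1f%%, DRE %.1f%%%n",
                coverage, passRate, dre);
    }
}
```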
Shift-Left Testing
- Test planning during requirements phase
- Static testing (reviews, inspections)
- Unit testing by developers
- TDD (Test-Driven Development) (red/green/refactor sketch after this list)
- BDD (Behavior-Driven Development)
- Early defect detection → Lower costs
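To illustrate the TDD point: red (write a failing test first), green (write the minimal code to pass), refactor (clean up while staying green). A compressed sketch; the formatter is a generic example, not from any specific codebase:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceFormatterTddTest {
    // RED: this test is written first, before PriceFormatter exists, and fails
    @Test
    void formatsCentsAsDollarsAndCents() {
        assertEquals("$12.34", PriceFormatter.format(1234));
    }

    // GREEN: the minimal implementation that makes the test pass
    static class PriceFormatter {
        static String format(int cents) {
            return String.format("$%d.%02d", cents / 100, cents % 100);
        }
    }
    // REFACTOR: improve naming/structure while keeping the test green
}
```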
Shift-Right Testing
- Monitoring and logging
- A/B testing
- Canary releases
- Feature flags (see the sketch after this list)
- Real user monitoring (RUM)
- Chaos engineering
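As a sketch of the feature-flag idea: new behavior ships dark and is enabled for a slice of production users, so it can be observed and rolled back without a redeploy. The in-memory flag store below is hypothetical; real systems use tools such as LaunchDarkly or Unleash:

```java
import java.util.Set;

public class CheckoutService {
    // Hypothetical in-memory flag store standing in for a real flag service
    static final Set<String> ENABLED_FLAGS = Set.of("new-checkout-flow");

    static boolean isEnabled(String flag, String userId) {
        // Canary-style rollout: flag is on, and only ~10% of users by hash bucket
        return ENABLED_FLAGS.contains(flag) && Math.floorMod(userId.hashCode(), 10) == 0;
    }

    public static void main(String[] args) {
        String userId = "user-42";
        if (isEnabled("new-checkout-flow", userId)) {
            System.out.println("Serving new checkout flow (canary)");
        } else {
            System.out.println("Serving stable checkout flow");
        }
    }
}
```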