Estimating and Controlling Testing

Reliably defining time, effort, and resources required for testing
1-Day Intensive Seminar Workshop

Unreliable estimates are a major reason managers allocate inadequate time and resources to testing. Historically, estimating has been so weak in IT that some people simply assume accurate estimates of IT activities are impossible. Test estimates are especially prone to problems because they often depend on other, equally unreliable estimates. In fact, accurate estimating is possible, and it yields huge direct and indirect benefits. This interactive seminar describes the key principles of effective estimating and shows how to apply them to the unique aspects of testing. Rather than treating estimating as a static up-front exercise, the course presents dynamic techniques that effective estimators use throughout the project to control progress and to refine both their estimates and their estimating skills. Exercises reinforce learning by letting participants apply practical techniques to realistic examples.

Participants will learn:

  * Appropriate uses and limitations of rapid top-down estimates.
  * Work breakdown structure bottom-up estimating techniques and issues.
  * Test planning and design methods for determining the number and nature of tests to be run.
  * Identifying additional testing effort required besides test execution.
  * Controlling testing activities and refining estimates throughout the project.

WHO SHOULD ATTEND: This course has been designed for QA and testing specialists, managers, analysts, designers, programmers, auditors, and users who need estimates of testing time and resources.



Rapid Top-Down Estimates

  • “Knowing” estimating is impossible
  • How effective estimators differ
  • Top-down estimating advantages
  • Tester-to-developer ratio-based estimates
  • Percentage of total budget for testing
  • Lines-of-code and function-point sizing
  • Calibrating effort to size
  • Historical testing effort as basis
  • General and testing-specific issues
  • Impacts rule-of-thumb methods cause
  • Appropriateness and caveats for use
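
Rule-of-thumb top-down estimates like the ratio- and budget-percentage-based methods above can be sketched in a few lines. This is a minimal illustration; the function names, the 1:4 tester-to-developer ratio, and the 25% budget fraction are hypothetical inputs, not recommended values.

```python
# Illustrative rapid top-down ("rule of thumb") test-effort estimates.
# All ratios and fractions below are hypothetical example inputs.

def effort_from_ratio(developer_months: float, testers_per_developer: float) -> float:
    """Estimate test effort from a tester-to-developer staffing ratio."""
    return developer_months * testers_per_developer

def effort_from_budget(total_project_months: float, testing_fraction: float) -> float:
    """Estimate test effort as a fixed percentage of the total project budget."""
    return total_project_months * testing_fraction

# e.g. 60 developer-months of work, one tester per four developers
print(effort_from_ratio(60, 0.25))    # 15.0 tester-months
# e.g. a 100 person-month project budgeting 25% for testing
print(effort_from_budget(100, 0.25))  # 25.0 person-months
```

Such rules of thumb are quick but, as the outline notes, carry caveats: they assume the historical ratio or percentage actually applies to the project at hand.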


Determining the Number and Nature of Tests

  • What is a test case?
  • How many tests do we need?
  • Historical defect statistics, precision
  • Risk-based testing
  • IEEE Standard for test plans
  • Master test planning
  • Detailed test plans
  • Test design specifications
  • Test case specifications
  • Exploratory and ad hoc testing
  • Test automation factors and issues
  • Unique factors for each type of special test


Work Breakdown Structure Bottom-Up Estimating

  • Need to identify test activities regardless of the estimating approach
  • Major reason estimates are inaccurate
  • Work-breakdown structure technique
  • Level-by-level increase in precision
  • Near- and far-term detail differences
  • Naming tasks to create success
  • Identifying effort by resource and skill level
  • Implicit vs. explicit duration
  • Work packet roll-up
  • Addressing contingencies and oversights
  • Relating to top-down estimates, fudge factor
  • Issues with task-oriented estimates
  • Addressing extrinsic task dependencies
  • Testing vs. debugging, when code arrives
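
The level-by-level roll-up and contingency ideas above can be sketched as follows. The task names, hours, and 15% contingency allowance are hypothetical examples, not figures from the course.

```python
# Sketch of bottom-up estimating: roll work-packet hours up a
# work breakdown structure, then add an explicit contingency
# allowance rather than a hidden fudge factor.
# Task names and hour figures are hypothetical.

wbs = {
    "Test planning": {"Master test plan": 40, "Detailed test plans": 60},
    "Test design": {"Design specifications": 80, "Case specifications": 120},
    "Test execution": {"Run tests": 200, "Log and retest defects": 100},
}

def roll_up(tasks: dict) -> float:
    """Sum leaf-level work-packet hours, level by level."""
    return sum(
        roll_up(v) if isinstance(v, dict) else v
        for v in tasks.values()
    )

def with_contingency(hours: float, contingency: float = 0.15) -> float:
    """Add an explicit allowance for oversights and contingencies."""
    return hours * (1 + contingency)

total = roll_up(wbs)  # 600 hours at the work-packet level
print(with_contingency(total))  # total including the 15% allowance
```

Making the contingency an explicit, named quantity keeps it visible when the bottom-up total is reconciled against a top-down estimate.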


Controlling Testing and Refining Estimates

  • Recording actuals against estimates
  • Projecting defects remaining
  • Statistical sampling
  • Seeding and mutation
  • Linking two independent measures
  • Identifying systematic estimating errors
  • Monitoring and projecting arrival rate
  • Identifying and attacking bug colonies
  • Reporting testing status, business value
  • Monitoring defect detection efficiency
  • Applying inspections, higher-yield methods
  • Refining estimates and estimating skills
  • Tracing through production
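
The seeding idea above, one way of projecting defects remaining, can be sketched as a capture-recapture style calculation: if testing found a known fraction of deliberately planted defects, assume it found roughly the same fraction of the real ones. The numbers below are hypothetical.

```python
# Sketch of projecting defects remaining via defect seeding.
# Inputs are hypothetical example counts.

def estimate_total_defects(seeded: int, seeded_found: int, real_found: int) -> float:
    """Scale real defects found by the detection rate observed on seeded defects."""
    if seeded_found == 0:
        raise ValueError("No seeded defects found; detection rate unknown.")
    return real_found * seeded / seeded_found

# 50 defects seeded; testing found 40 of them plus 120 real defects.
total = estimate_total_defects(seeded=50, seeded_found=40, real_found=120)
remaining = total - 120
print(total, remaining)  # 150.0 30.0
```

The same arithmetic underlies linking two independent measures, such as two testers' overlapping defect finds, to project the undetected remainder.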
