How Testscope Works — From Workbench to Defensible QA Plans

Testscope brings 7 estimation methods into a single Estimator Workbench. Pick a preset or sample, choose your method (Monte Carlo, Planning Poker, T-Shirt, WBS, PERT, Bottom-Up, Function Point), set inputs & risk, then review a shared results panel with P50/P80/P90, phases, timeline, and cost you can export or share.

Overview

The Estimator Workbench lets you switch between methods while keeping a single source of truth for outputs. Monte Carlo adds P50/P80/P90 and a distribution; other methods normalize to hours and provide P80/P90 planning guidance. Either way, leaders see the same phase breakdown, timeline, and cost.

1 Open Workbench
Pick a preset (Web, Mobile, API, etc.) or load a sample to start with realistic defaults.
2 Choose Method
Monte Carlo for P50–P90, or Poker / T-Shirt / WBS / PERT / Bottom-Up / Function Point for alternative sizing.
3 Set Inputs & Risk
Adjust method-specific inputs. Toggle Performance/Security/A11y/Compliance. Optionally enable Risk Simulation.
4 Review Results
Shared panel shows effort, phases, timeline, cost, and a plain-English rationale. Export or share.

Switch methods anytime—outputs stay comparable so you can pick the most defensible plan.

The Steps

1) Preset or Sample

Start with presets (Web, Mobile, Desktop, Backend/API, eCommerce, Enterprise, Regulated, FinTech, Gaming, Firmware, IoT, AI/ML) or load a sample to demo quickly.

2) Pick a Method

Use Monte Carlo for P50/P80/P90. Compare with Poker, T-Shirt, WBS, PERT, Bottom-Up, or Function Point.

3) Provide Inputs

Method-specific fields update automatically. Common inputs include features, complexity, platforms, integrations, team size, and rate.

4) Model Risk & NFR

Enable Risk Simulation (Beta-PERT, Triangular, Log-normal). Toggle Performance, Security, Accessibility, Compliance.

5) Run & Compare

Run the method; the shared results panel updates. Switch methods to compare, keeping outputs aligned.

6) Share & Decide

Copy summary/email, Print/PDF, CSV/JSON, or a Share Link so stakeholders can review quickly.

Methods at a Glance

Monte Carlo (risk)

  • Inputs: features, complexity, platforms, integrations, risk sliders
  • Outputs: P50/P80/P90, histogram, phases
  • Distributions: Beta-PERT (default), Triangular, Log-normal; 10,000+ trials
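
To make the simulation step concrete, here is a minimal sketch in TypeScript. It samples the Triangular option because it has a simple closed-form inverse CDF; the task list, its O/M/P hours, and every function name are illustrative assumptions, not Testscope internals.

  // Draw one value from Triangular(min, mode, max) via the inverse CDF.
  function sampleTriangular(min: number, mode: number, max: number): number {
    const u = Math.random();
    const cut = (mode - min) / (max - min);
    return u < cut
      ? min + Math.sqrt(u * (max - min) * (mode - min))
      : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
  }

  // Each trial sums one sampled effort per task; totals are sorted for percentiles.
  function simulate(tasks: [number, number, number][], trials = 10_000): number[] {
    const totals: number[] = [];
    for (let i = 0; i < trials; i++) {
      totals.push(tasks.reduce((sum, [o, m, p]) => sum + sampleTriangular(o, m, p), 0));
    }
    return totals.sort((a, b) => a - b);
  }

  // Read a percentile straight off the sorted totals.
  const pct = (sorted: number[], q: number) => sorted[Math.floor(q * (sorted.length - 1))];

  const totals = simulate([[40, 60, 110], [20, 30, 55], [15, 25, 60]]); // hypothetical O/M/P hours
  console.log({ p50: pct(totals, 0.5), p80: pct(totals, 0.8), p90: pct(totals, 0.9) });

Read this way, P80 is simply the effort that 80% of trials finished within, which is why it is the usual commit level.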

Other methods (normalized)

  • Planning Poker: votes, velocity, hours/point → hours & iterations
  • T-Shirt: size→points map, hours/point → hours
  • WBS: tasks + buffer% → hours
  • PERT: O/M/P → E, σ, P80/P90
  • Bottom-Up: items (hours×rate), contingency → hours & cost
  • Function Point: EI/EO/EQ/ILF/EIF + weights & h/FP → hours

Non-MC methods are converted to hours and given P80/P90 planning guidance so outputs stay comparable; the PERT and Function Point conversions are sketched below.
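
Two of those conversions are simple enough to sketch directly. The formulas below are the classical textbook forms; the P80/P90 z-scores (≈0.84 and ≈1.28) and the average IFPUG-style weights are standard values assumed for illustration, not read from Testscope.

  // Three-point (PERT) estimate: E = (O + 4M + P) / 6, σ = (P − O) / 6.
  function pert(o: number, m: number, p: number) {
    const e = (o + 4 * m + p) / 6;  // expected effort (hours)
    const sigma = (p - o) / 6;      // standard deviation
    // Normal-approximation planning levels.
    return { e, sigma, p80: e + 0.84 * sigma, p90: e + 1.28 * sigma };
  }

  // Function Point Analysis: weighted counts times an hours-per-FP rate.
  type FpCounts = { ei: number; eo: number; eq: number; ilf: number; eif: number };
  function functionPointHours(c: FpCounts, hoursPerFp: number): number {
    const w = { ei: 4, eo: 5, eq: 4, ilf: 10, eif: 7 }; // average weights; real ones vary by complexity
    const fp = c.ei * w.ei + c.eo * w.eo + c.eq * w.eq + c.ilf * w.ilf + c.eif * w.eif;
    return fp * hoursPerFp;
  }

  console.log(pert(80, 120, 220)); // { e: 130, sigma: ≈23.3, p80: ≈149.6, p90: ≈159.9 }
  console.log(functionPointHours({ ei: 10, eo: 6, eq: 4, ilf: 3, eif: 2 }, 8)); // 130 FP × 8 = 1040 h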

Inputs (Shared & Method-Specific)

Shared inputs

  • Features / user stories
  • Avg. complexity (1–5)
  • Platforms/environments
  • System integrations
  • Team size
  • Productive hrs/day
  • Blended QA rate
  • Non-functional toggles: Perf, Security, A11y, Compliance

Method-specific

  • Monte Carlo: distribution type, trials, risk sliders
  • Poker/T-Shirt: votes or size→points, hours/point, velocity
  • WBS/Bottom-Up: line items, buffer/contingency
  • PERT: O/M/P (three-point)
  • FPA: counts & weights, hours per FP
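
Taken together, these fields suggest a single scenario object per estimate. Below is a minimal sketch of one possible shape in TypeScript; every name is hypothetical, not Testscope's actual schema.

  interface Scenario {
    // Shared inputs
    features: number;                 // features / user stories
    avgComplexity: 1 | 2 | 3 | 4 | 5;
    platforms: number;
    integrations: number;
    teamSize: number;
    productiveHoursPerDay: number;
    blendedRate: number;              // currency per hour
    nfr: { performance: boolean; security: boolean; a11y: boolean; compliance: boolean };
    // Method-specific inputs: exactly one variant is active at a time.
    method:
      | { kind: "monteCarlo"; distribution: "betaPert" | "triangular" | "logNormal"; trials: number }
      | { kind: "poker" | "tshirt"; points: number; hoursPerPoint: number; velocity?: number }
      | { kind: "wbs" | "bottomUp"; items: { hours: number }[]; bufferPct: number }
      | { kind: "pert"; o: number; m: number; p: number }
      | { kind: "fpa"; counts: Record<"ei" | "eo" | "eq" | "ilf" | "eif", number>; hoursPerFp: number };
  }

A discriminated union like this keeps one source of truth while letting each method carry only its own fields.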

The Engine (Under the Hood)

Scope → Effort

  • Base effort from features × complexity × platforms × integrations (see the sketch after this list)
  • Automation target & non-functional scope adjustments
  • Phase weights by preset (Planning/Design/Execution/Automation/Reporting)
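
A minimal sketch of that roll-up, with every multiplier and phase weight invented for illustration (Testscope's calibrated values are not published here):

  // Hypothetical scope-to-effort roll-up.
  function baseEffortHours(features: number, complexity: number, platforms: number, integrations: number): number {
    const perFeature = 8 * complexity;                  // assumed hours per feature
    const platformFactor = 1 + 0.25 * (platforms - 1);  // assumed cross-platform overhead
    const integrationFactor = 1 + 0.15 * integrations;  // assumed per-integration overhead
    return features * perFeature * platformFactor * integrationFactor;
  }

  // Preset-specific phase weights (must sum to 1) split the total.
  const phaseWeights = { planning: 0.10, design: 0.15, execution: 0.40, automation: 0.25, reporting: 0.10 };
  const total = baseEffortHours(20, 3, 2, 4); // 960 h with the factors above
  const byPhase = Object.fromEntries(
    Object.entries(phaseWeights).map(([phase, w]) => [phase, Math.round(total * w)]),
  );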

Risk & Normalization

  • Monte Carlo produces P50/P80/P90 with a confidence band
  • Other methods normalize to hours with P80/P90 guidance (+10% / +20%), as sketched below
  • All methods feed the same results panel for apples-to-apples comparison
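
The normalization itself is one line of arithmetic; the result shape below is an assumption about the shared panel, not its real interface.

  // Turn a non-Monte-Carlo method's hours into the shared P50/P80/P90 shape;
  // Monte Carlo fills the same fields from its own percentiles instead.
  function normalize(hours: number) {
    return { p50: hours, p80: hours * 1.10, p90: hours * 1.20 };
  }

  normalize(252); // → { p50: 252, p80: 277.2, p90: 302.4 }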

What You Get

P50 / P80 / P90

Clear definitions and ranges from Monte Carlo; for other methods, the P80/P90 planning guidance keeps expectations aligned.

Timeline & Cost

P80 days/weeks from your team capacity, plus QA cost at your blended rate.
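
That conversion is plain arithmetic; the capacity and rate below are placeholders.

  const p80Hours = 277.2;                            // hypothetical P80 effort
  const teamSize = 3, hoursPerDay = 6, rate = 55;    // assumed capacity & blended rate
  const days = p80Hours / (teamSize * hoursPerDay);  // ≈ 15.4 working days
  const weeks = days / 5;                            // ≈ 3.1 weeks
  const cost = p80Hours * rate;                      // ≈ 15,246 at the blended rate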

Phase Transparency

Hours & % by phase with charts, plus a plain-English rationale and next steps.

Shared results panel • Exec-ready summary • Distribution & phases

Save, Export & Share

Keep your work

  • Local save/load for scenarios
  • Share Link encodes your inputs for quick review

Export & communicate

  • Copy summary & copy as email
  • Print/PDF, CSV, JSON

Trial includes full Monte Carlo and all 7 methods. Pro adds unlimited exports, project history, and advanced distribution/config options.

Examples

Sprint-sized feature pack (Poker)

Team votes 42 pts @ 6 h/pt → ~252 h. At velocity 40 pts/iter, ≈ 2 iterations. Use P80/P90 guidance for planning (the arithmetic is sketched below).

Team sessions → fast sizing
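
A quick check of the arithmetic in this example:

  const points = 42, hoursPerPoint = 6, velocity = 40; // numbers from the example
  const hours = points * hoursPerPoint;                // 252 h
  const iterations = Math.ceil(points / velocity);     // 42 / 40 rounds up to 2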

Mobile + payments (Monte Carlo)

20 features, complexity 4, 5 platforms, 4 integrations, and moderate change volatility → a wider band; commit to P80 and keep a P90 buffer.

Risk made explicit

Switch methods to triangulate: validate Poker/T-Shirt with Monte Carlo before committing.

FAQ

What’s the difference between methods?

Monte Carlo provides statistical ranges (P50/P80/P90). Other methods convert to hours and use consistent P80/P90 guidance so outputs align in the shared panel.

Do I have to use Monte Carlo?

No. Use Poker, T-Shirt, WBS, PERT, Bottom-Up, or Function Point. Monte Carlo is great for risk-aware commitments; others are helpful for quick sizing or budget views.

How are phases determined?

Phase weights vary by preset (e.g., API may allocate more to Automation). You can see the split and rationale in the results panel.

What’s included in Trial vs Pro?

Trial: all 7 methods and full Monte Carlo. Pro: unlimited exports (CSV/JSON/PDF), project history & local save/load, and advanced distributions/configuration.

How do Share Links work?

Your inputs are encoded into the URL so others can load the exact scenario; no account is required, and the scenario loads locally in the viewer's browser.
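
One plausible scheme, sketched under the assumption of base64-encoded JSON in a query parameter; the URL and function names are illustrative, not Testscope's documented format.

  // Encode a scenario object into a shareable URL (illustrative only;
  // btoa assumes Latin-1 input, so a real app would handle Unicode too).
  function toShareLink(scenario: object, base = "https://example.com/estimator"): string {
    return `${base}?s=${encodeURIComponent(btoa(JSON.stringify(scenario)))}`;
  }

  // Decode on load; searchParams.get already percent-decodes the value.
  function fromShareLink(url: string): object {
    const s = new URL(url).searchParams.get("s");
    return s ? JSON.parse(atob(s)) : {};
  }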

© Testscope. All rights reserved.