Quality Assurance (QA) is an essential part of the Software Development Life Cycle, intended to verify that all requirements are satisfied and to ensure an enhanced customer experience. QA improves quality and reduces costs by increasing efficiency and reducing defects. As software products and services become more complex, they require more comprehensive testing to identify defects and pursue the necessary fixes before public release. The QA process needs to be planned and monitored to be successful, and the most effective way to track the efficacy of QA activities is to use well-thought-out metrics.

Metrics are numbers that convey important information about the process in question. They show how the process is actually functioning and provide a baseline for suggested improvements. They can drive strategy and direction, and they supply measurable data and trends for project-team and executive decision making. Metrics help ensure project deliverables, quantify risk factors, and drive process improvements, which enables customer satisfaction through high-quality products. Metrics also provide guidance for resource and budget estimation.

There are two commonly used software execution models, Agile and Waterfall, and metrics can vary slightly depending on the model. I am sharing my thoughts on the Agile model here. Metrics can also differ based on the QA team's needs; every team is different and has its own way of doing things. Metrics are role agnostic and focus on trends rather than absolute values. QA teams use them to assess their own performance as well as for release planning. Core Agile metrics should not be used to compare different teams or to single out underperforming teams or team members. A QA leader generally tracks these metrics in the QA dashboard every sprint:

  • Features/Tasks/Sub-tasks
  • Bugs created/verified
  • Automation progress/failure fixes
  • Pre- and post-production deployment/verification tasks

It is very important to identify the relationships between story -> task -> subtask as well as defects. Tracking created and verified defects shows the trend of how well the feature was covered by the test cases and acceptance criteria. It is also critical that the QA team verifies pre- and post-deployment, which makes it easy to identify what was changed by new features and how much regression they might cause.
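
As an illustration, the story -> task -> subtask hierarchy and the created/verified defect roll-up can be modeled in a few lines. This is a minimal sketch in Python; the class and field names are hypothetical, not tied to any particular issue tracker:

    # Minimal model of the story -> task -> subtask hierarchy with linked
    # defects; names are illustrative assumptions, not a real tracker schema.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Defect:
        key: str
        verified: bool = False

    @dataclass
    class SubTask:
        key: str
        defects: List[Defect] = field(default_factory=list)

    @dataclass
    class Task:
        key: str
        subtasks: List[SubTask] = field(default_factory=list)

    @dataclass
    class Story:
        key: str
        tasks: List[Task] = field(default_factory=list)

    def defect_trend(story: Story) -> Tuple[int, int]:
        """Roll up (created, verified) defect counts across the whole story."""
        defects = [d for t in story.tasks for s in t.subtasks for d in s.defects]
        return len(defects), sum(d.verified for d in defects)

    story = Story("STORY-1", [Task("TASK-1", [SubTask("SUB-1",
        [Defect("BUG-1", verified=True), Defect("BUG-2")])])])
    print(defect_trend(story))   # (2, 1)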

Another very important aspect to look into is the burndown chart, a graphical representation of work remaining versus time. At any point in the sprint it gives a sense of progress toward the particular release. One can gauge the efficiency of sprint planning by looking at Committed vs Completed, which portrays the percentage of committed story points the team actually completed in the sprint. Team velocity is defined as the story points of work completed in a given sprint.
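
Both calculations are simple enough to show directly. This is a minimal sketch using hypothetical sprint numbers purely for illustration:

    committed_points = 40    # story points the team committed to this sprint
    completed_points = 34    # story points actually completed

    # Percentage of committed story points completed in the sprint.
    completion_rate = 100.0 * completed_points / committed_points
    print(f"Committed vs Completed: {completion_rate:.0f}%")   # 85%

    # Velocity is the story points completed in a sprint; an average over
    # recent sprints gives a baseline for planning the next one.
    recent_velocities = [34, 38, 31]
    print(f"Average velocity: {sum(recent_velocities) / len(recent_velocities):.1f}")  # 34.3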

QA does not own or create quality; it ensures quality. QA keeps monitoring and finding edge cases to identify anything that does not follow the requirements or fails to meet end-user expectations. Quality is everyone's responsibility, from requirements gathering through development to the end product, and QA engineers make sure the expectations are met.

Here are a few important QA process metrics that help ensure the acceptance criteria and customer expectations are met:

  • Test Coverage – test cases
  • Number of automated test cases
    • Percentage of automated test cases vs total automation candidates
  • Defects Found by Automation
  • Bug/Issue Metrics – MTTR
  • Performance Metrics – API/UI performance

Test coverage is the single most important metric for any QA team; it shows whether every feature is compliant with its acceptance criteria. Every team member should also understand the priority and severity of the test cases, since not every test case needs to run in every build. Classifying test cases (Golden Sanity, Smoke, and Regression, to name a few) is very important so that the team knows, for example, which Golden Sanity cases can run on every build. Test cases for brand-new features should not go straight into the regression suite.

Not all test cases need to be automated, so identifying automatable test cases is very important. The QA team should not burn cycles automating low-ROI test cases; these may be more efficient to run manually. Defects found by automation is also a good metric: it indicates that the QA team is automating the right test cases. Tracking MTTR (Mean Time to Resolve) for customer-found and internally found defects validates that the team is prioritizing defects as expected. Performance metrics are another very important measure for seeing the trend of how well the product's API and UI perform at scale.
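
Several of these metrics reduce to simple ratios and averages. The sketch below uses hypothetical counts and timestamps; in practice the inputs would come from the team's test and defect trackers:

    from datetime import datetime

    # Test coverage: acceptance criteria covered by at least one test case.
    criteria_total, criteria_covered = 120, 110
    print(f"Test coverage: {100.0 * criteria_covered / criteria_total:.0f}%")   # 92%

    # Automation progress: automated cases vs total automation candidates
    # (not vs all test cases, since low-ROI cases stay manual).
    candidates, automated = 300, 240
    print(f"Automation: {100.0 * automated / candidates:.0f}%")                 # 80%

    # MTTR: mean time from defect creation to resolution.
    resolved = [
        (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 3, 9, 0)),    # 48 h
        (datetime(2023, 5, 2, 12, 0), datetime(2023, 5, 2, 18, 0)),  # 6 h
    ]
    hours = [(done - opened).total_seconds() / 3600 for opened, done in resolved]
    print(f"MTTR: {sum(hours) / len(hours):.1f} hours")                         # 27.0 hours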

Defect tracking needs to be classified so that QA can differentiate between functional/feature bugs and regression bugs; regression bugs should be covered by both automation and manual execution. The environment in which a defect is found is also very important. The goal is always to find defects in staging or pre-production environments rather than in the field or in production. The staging environment usually holds only test data, so it is sometimes not possible to find defects at scale there; pre-production or PoC environments are very useful to cover those scenarios. Even so, there will be production defects found by Sales Engineers or actual customers. In that case, QA should make sure a root cause analysis (RCA) is done and new test cases are created from the field issues. Everyone can learn from the missed test cases and from the corner cases discovered in the field.
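
A minimal classification sketch, with illustrative defect records and category names (not a real tracker schema), might look like this:

    from collections import Counter

    defects = [
        {"key": "BUG-101", "type": "functional", "found_in": "staging"},
        {"key": "BUG-102", "type": "regression", "found_in": "pre-production"},
        {"key": "BUG-103", "type": "regression", "found_in": "production"},
    ]

    # Summaries by defect type and by the environment where it was found.
    print("By type:", dict(Counter(d["type"] for d in defects)))
    print("By environment:", dict(Counter(d["found_in"] for d in defects)))

    # Production escapes each deserve an RCA and a new test case.
    escapes = [d["key"] for d in defects if d["found_in"] == "production"]
    print("Needs RCA + new test case:", escapes)   # ['BUG-103']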

Metrics help the QA team ensure smooth execution. Individual team members' performance should not be measured by these metrics, and because two different QA teams may define metrics differently, teams should not be compared against each other. The purpose of metrics is to improve the QA process and reduce the risk of defects, which ultimately ensures superior customer satisfaction.

 
