Software Testing - Metrics
Software testing has become more and more complex. Many reasons are cited for this complexity; one of the main ones is that software applications today have a wide variety of technology needs and user interfaces. The applications are all-pervasive: today, users touch and feel application functionality from many points - mobile devices, voice-activated kiosks, and so on.
Unlike in the past, when the CRT, keyboard, and mouse were the only interfaces programmers needed to tackle, application development teams today face an ever-increasing number of touch points, or interfaces, to an application. Consequently, software testing has seen a significant increase in complexity, while users expect the testing cycles to shrink tremendously. Complexity and available time now pull in opposite directions, and the challenges faced by the testing team have gone up by an order of magnitude.
Metrics therefore provide major assistance to both software testers and application development teams in managing these expectations. Historically, application development teams have used one tool or another - Function Point Analysis (FPA), for example - to estimate the size of a given system. The resulting effort estimates are then moderated, or increased, using complexity factors or overloading parameters.
Based on these, an estimate is arrived at and a plan is produced. Using this plan, the project teams obtain budgets, establish timelines, and begin execution. Historically, however, 80% of such projects have failed to meet expectations, being delivered over budget and beyond the timelines predicted earlier.
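The estimation process described above can be sketched in a few lines. This is an illustrative model only, not a standard formula: the productivity rate (hours per function point) and the complexity factor values are assumptions chosen for the example.

```python
def estimate_effort(function_points, hours_per_fp, complexity_factors):
    """Return effort in person-hours: base effort scaled by each factor.

    Each complexity factor acts as an overloading parameter,
    e.g. 1.2 means "increase the estimate by 20%".
    """
    effort = function_points * hours_per_fp
    for name, factor in complexity_factors.items():
        effort *= factor
    return effort


# Hypothetical project: 200 function points at an assumed rate of
# 8 person-hours per function point, moderated by two assumed factors.
effort = estimate_effort(
    function_points=200,
    hours_per_fp=8,
    complexity_factors={
        "new_technology": 1.2,
        "distributed_team": 1.1,
    },
)
print(round(effort))  # 200 * 8 * 1.2 * 1.1 = 2112 person-hours
```

The plan, budget, and timeline are then derived from this single number, which is why any error in the base size or in the chosen factors propagates directly into the schedule.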
Metrics were then introduced as a means to bring accuracy to such estimates and project schedules. Metrics used in this manner are nothing but a collection of past experience, categorized and tabulated in an orderly manner. The senior members of a development or testing team analyze the raw collected data and group it in a pre-determined order for future use.
While estimating the testing effort for applications, such metrics are still not completely usable, because the lay of the land keeps changing: the characteristics of the application being estimated differ completely from those of the applications from which the data was collected. How do teams overcome such challenges? The same way teams in the past have done - by using overload or weighted parameters. What happens then?
Subjectivity creeps in. The estimates are no longer the product of an objective process, so the real numbers, once the project is executed, begin to differ from the estimated model. Is there a solution to this problem? So far, nothing has been found that entirely addresses this anomaly.
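The subjectivity described above is easy to demonstrate with numbers. In this sketch, two hypothetical estimators start from the same base effort and the same three overload parameters, but each picks slightly different weights; every value here is assumed for illustration.

```python
def weighted_estimate(base_hours, weights):
    """Apply a list of multiplicative overload weights to a base effort."""
    total = base_hours
    for w in weights:
        total *= w
    return total


base = 1000  # assumed base effort in person-hours, same for both estimators

# Same three parameters, subjectively different weights:
estimator_a = weighted_estimate(base, [1.10, 1.15, 1.05])  # mildly loaded
estimator_b = weighted_estimate(base, [1.25, 1.20, 1.15])  # heavily loaded

print(round(estimator_a))  # 1328 person-hours
print(round(estimator_b))  # 1725 person-hours
```

The same project and the same metrics database yield totals that differ by roughly 400 hours, purely because of subjective weight choices; this is the gap that opens up between the estimated model and the real numbers.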
Consider what happened when software systems were used in a controlled environment. The estimates in those days were used as a guideline, or a base number. The only variable then was the user, who was often unable to clearly articulate what he or she expected from a software application or system.
Today, this problem has diminished to a large extent; however, many new dimensions have been added that complicate any software development or testing effort - one example is the variety of user interfaces that exist today.
In the past, the primary point of interface to many applications was a human being. Today, it does not have to be. With Web 2.0, many transactions happen behind the scenes, with no human being involved. It should then be easy to predict the behavior, because, after all, it is only two programs connecting with each other. However, recent experience does not corroborate this statement.
With so many uncontrolled variables, we still have to live in a real world where people want hard answers for any software development or testing effort. Therefore, we should continue perfecting the metrics database. This, hopefully, will take us to the level of engineering in software development and testing.