Software Testing - When, and How Early, Should It Be Done?
If you are familiar with the dilemma every project manager faces about when to spend money testing a particular piece of code, you know what this blog is about. The root cause of this problem is historical.
Traditionally, testing was conducted only after programmers released their code into the quality assurance cycle, and this contributes to the problem. By that point, however, the code is expected to have been through basic unit testing, and in certain instances the developer should also have done white-box testing, meaning he or she must have covered all of the program's logic paths with test data.
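To make the white-box idea concrete, here is a minimal sketch. The function and the fee figures are hypothetical, invented purely for illustration; the point is that the tests deliberately exercise every logic path, not just the "happy" one:

```python
def shipping_fee(weight_kg, express):
    """Compute a shipping fee; the result depends on which branches are taken."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Branch 1: flat fee for light packages, weight-based fee otherwise.
    fee = 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)
    # Branch 2: express shipping multiplies the fee.
    if express:
        fee *= 1.5
    return fee

# White-box tests: one test per logic path through the function.
assert shipping_fee(0.5, express=False) == 5.0   # light package, standard
assert shipping_fee(3.0, express=False) == 9.0   # heavy package: 5 + 2 * 2
assert shipping_fee(0.5, express=True) == 7.5    # express multiplier path
try:
    shipping_fee(-1, express=False)              # error path is also a path
except ValueError:
    pass
```

A developer who stops after checking only the first assertion has done unit testing of a sort, but not white-box testing in the sense described above.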
Let us look at the next contributor to the root cause of delayed testing. Historically, systems were developed for a particular audience or set of end users. The development folks were able to predict how the application or system would behave, and testing was to some extent sidelined. The testing team therefore did not create much of an uproar when things failed or did not work as expected. Moreover, testing teams were generally part of the development team and reported to the same manager.
Priorities thus inevitably got shuffled. Especially when a milestone was looming, the importance of testing was diluted and dispersed. This resulted in poor-quality software entering the market or the end users' domain. The IT department's image is more often than not tainted from various perspectives, not the least of which is the quality of the systems delivered.
Therefore, all along, while theory suggests that testing should begin early in the SDLC, few teams have been able to practice it.
In recent years, the complexity of user profiles has changed tremendously, and hence the testing requirements have become more elaborate. With the advent of web-based access, you can no longer predict the profile or the interests of the user accessing a system that is available on the web.
Coupled with this phenomenon, time to market has shrunk, and IT practitioners face even greater challenges in answering the question of when to begin testing. Consider what happened to one popular social networking site in the past month or so: it was attacked by over 10,000 computers, resulting in a denial of service.
How do we anticipate such scenarios? When do we begin testing for them?
To which part of the application or software development lifecycle should such a failure be attributed? Some would argue that the design or architecture of the site is where you need to probe such questions; that is, begin testing activities early in development. Others may say we cannot know the characteristics of such attacks until the software is up and running.
Such arguments come from experts across the field and confuse the site's development team. Some may claim that incidents of this kind are exceptions and that, therefore, no budget should be allocated to testing these types of access.
Could they have provided a mechanism to prevent such a large-scale attack? Possibly, if someone had examined various scenarios before the site went into production. Or someone could have guarded against this type of incident after the site became so popular by running an extra battery of tests.
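As one hedged sketch of what such an "extra battery of tests" might look like, the snippet below simulates a flood of clients against a simple per-client rate limiter. The limiter design, the thresholds, and the traffic numbers are all assumptions made for illustration, not a description of any real site's defenses:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""
    def __init__(self, limit=100, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client_id -> timestamps of recent hits

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject this request
        q.append(now)
        return True

# Stress test: 10,000 simulated clients each send a burst of 150 requests
# at the same instant, mimicking a coordinated flood.
limiter = RateLimiter(limit=100, window=1.0)
blocked = 0
for client in range(10_000):
    for _ in range(150):
        if not limiter.allow(client, now=0.0):
            blocked += 1

# Each client gets 100 requests through and 50 rejected.
assert blocked == 10_000 * 50
```

A test like this would never prove a site attack-proof, but running it before production at least forces the team to decide, explicitly, what should happen when traffic far exceeds the predicted profile.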
Imagine the cost of such failures, especially when these incidents occur within the financial industry. Is this a possibility? Certainly; with the talk of Web 2.0 becoming part of the corporate fabric, one must look at all possibilities.
The lesson learned from such incidents is that it is never too late to test an application before it is deployed to production or released to the public. Starting early is ideal. However, we as IT practitioners must be able to adapt to the evolution of systems, and quickly re-map our strategies and tactics.