Janet Gregory has been a long-time icon of mine. When I was asked to write
an article for the Agile Journal regarding agile testing, I was honoured. The
first people I thought of were Janet Gregory and Lisa Crispin, as their book,
Agile Testing, was a huge help to me when I was first learning about this topic.
After contacting Janet about co-authoring the article, we quickly discovered
that we had a very strong difference of opinion. We decided to share it with
you in this article. You may feel strongly one way or another after reading this
article and the comments from the conversation, but please remember we are
all trying to do the same thing: increase the quality of our software and help
others to do the same.
Bob Small and Janet Gregory share their thoughts and experiences relating to
the difference between QA (quality assurance) and testing.
I see a clearly defined difference between quality assurance, which is any activity
or effort aimed at preventing defects from entering the system, and agile testing,
which is any activity aimed at discovering defects in the system.
I really don't like to draw a distinction between the two ideas. What testers
do on an agile team is testing. The prevention part is testing up front - I
test the assumptions, I test the customer's interpretation of the problem, etc.
I don't like the phrase "quality assurance," because I don't think
we can actually ever 'assure' the quality of a product.
Ok, so if you are on a Scrum team and you are in a planning meeting, who is
responsible for ensuring that the stories are testable? I am not saying
you need to actually test the stories then and there, but who raises their hand
and states that a story is untestable? I do not see that as testing; I see
that as quality assurance. Here, you are assuring the quality of the application
through defect prevention, by not allowing the team to take a poor story into
the next iteration.
Over the years the phrase "quality assurance," or QA, has come to mean different
things to different people, which is why I really hesitate to use it.
As a tester I don't need developed software with a UI to start testing. I can
test a requirement and test the proposed solution. I encourage people to test
scenarios if the customer draws a flow diagram. I can ask for, and maybe provide,
examples and edge cases as tests. These types of tests (business-facing tests
that support the team) help everyone get a shared understanding of what
to expect and eliminate many hidden assumptions. Is that quality assurance?
I find it simpler to say that it is testing to support the team in its effort
to reduce the rework of fixing defects later. Once we've developed something, we
can validate it against these examples (or tests) and check to see if it is
acceptable. I believe that is what you are calling testing.
You also asked who is responsible for ensuring the stories are testable. As
a tester, I have a special interest in it. If it is not obvious, I will ask
the question in the planning session. However, it is the whole team's responsibility
to make sure each story meets that quality attribute.
I will agree with you that it is difficult to assure certain levels of quality;
however, we can assure that the application has a certain code coverage level,
for example by requiring a minimum coverage percentage for class files whose
code complexity rating exceeds a certain number. I know I am getting into the
metrics argument; however, I think that code metrics play a huge role in quality
assurance. Hence, that is assuring the quality of an application. Elisabeth
Hendrickson has taught me that defect metrics become pointless when we don't
allow major defects into the application. This too assures the quality of the
system. As a QA Analyst it is my job to report the level of quality of a system
or application. As a tester it is my job to discover defects in the system and
report those to the stakeholders. This is why I see a very strong division:
QA is about the process and QC is about the product.
Software Quality Assurance is primarily concerned with assuring the quality
of the software development process rather than the quality of the product.
Quality Assurance is the set of support activities (including facilitation,
training, measurement, and analysis) needed to provide adequate confidence that
processes are established and continuously improved to produce products that
meet specifications and are fit for use. Quality Control is the process by which
product quality is compared with applicable standards, and the action taken
when nonconformance is detected. Its focus is defect detection and removal.
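The kind of coverage-per-complexity gate Bob describes could be automated in a build. This is only an illustrative sketch, since the article names no tool or thresholds; the function name, the threshold numbers, and the class names are all hypothetical.

```python
# Hypothetical quality gate: require higher test coverage for class files
# with higher cyclomatic complexity, as Bob suggests. All thresholds and
# names below are illustrative assumptions, not from the article.

def coverage_gate(complexity: int, coverage_pct: float) -> bool:
    """Return True if a class file's coverage meets the bar for its complexity."""
    if complexity >= 20:
        required = 90.0   # very complex code needs near-complete coverage
    elif complexity >= 10:
        required = 75.0
    else:
        required = 50.0   # simple code gets a lower bar
    return coverage_pct >= required

# A build could then fail when any class file misses its bar:
results = {
    "OrderProcessor": (22, 93.0),   # complex but well covered: passes
    "DateFormatter": (4, 40.0),     # simple but under-covered: fails
}
failures = [name for name, (cx, cov) in results.items()
            if not coverage_gate(cx, cov)]
```

A team taking this approach would tune the complexity bands and percentages to its own codebase; the point is that the rule is mechanical and can be asserted on every build, which is what makes it a process-level (QA) control rather than a testing activity.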
I agree with Elisabeth completely about the idea of tracking defect metrics.
I am not sure the other metrics you mention provide a lot of value if you are
testing each story as it is built.
You talk about the QA Analyst and tester as two separate roles. Do you have
different people performing these roles? In traditional phased and gated projects,
there tends to be that differentiation - I'm a test designer or analyst so I
will create the tests. You, as the lowly tester get to run them over and over
and over again to make sure the application doesn't break. In agile projects,
we want to blur the roles so that no one can say "that's not my job."
We want everyone to take ownership of quality. I believe when we start naming
specific roles, we pigeonhole ourselves into that role and stop looking at
the big picture. If I think my job is only to report defects, my attitude can
become detrimental to the team's cohesiveness. One of the values of the Agile
Manifesto is "individuals and interactions over processes and tools," meaning,
let's think about the people first. They are the ones who make things happen.
One of my favourite
sayings to testers who are new to agile projects is, "Instead of thinking
your job is to break the software, consider instead, how you can best help the
team to deliver good software."
One of the tasks to help the team might be to provide information on the quality
level of the system (metrics). Another might be to find and report defects (preferably
to the project team first) as a form of feedback. Can you see where I'm going?
Good points Janet, and yes I see where you are going. Allow me to answer your
first question. In our development process we do have different people serving
as a QA Analyst and a Tester. Of course, I would like to qualify that a bit
in that most, if not all, parties involved do some level of testing. The visual
testing of a GUI and the unit testing of a specific method in the code are all
part of testing. As for system-level and user acceptance testing, we have
one person for each of these roles. The UAT is typically done by a BA or a Product
Owner. The system-level testing (positive and negative test scenarios) is done
by a tester. Quality assurance is done by the QA Analyst, and she is responsible
for preventing defects from entering the system.
One of the key success factors for agile testing is the whole-team approach.
This means the whole team is responsible for quality and everyone does their part. I am
ok with specialists performing specific tasks like system level testing or the
actual end users for UAT. That makes sense. However, your last sentence made
me a little uneasy as I firmly disagree that one person is responsible for prevention
of defects. If the whole team doesn't take responsibility, and work together,
there is no way a single person can make that happen.
When you mention pigeonholing a role, do you mean that developers could test
their own code? If so, I would say that, as a developer, I think it would be a
very bad idea to rely solely on that role to execute the test cases, especially
when it is their own code. This is simply because the developer role is too
close to the code to see the bigger picture and come up with sufficient negative
test cases and scenarios.
That is not what I meant, Bob. I was referring to the separation of tester vs.
QA Analyst. I think people grow faster when they are allowed to stretch and
try new things. You get different perspectives. I do think that testers can
learn lots from developers, and vice versa, so we should collaborate to get the
best possible coverage. I don't think developers should be the sole testers of
their own code, except at the unit test level. However, that doesn't mean they
can't help with testing when it is needed.
Let's talk more about the separation of test activities and your point about
feedback. I agree that the development team should get the shortest feedback
loop possible. However, I disagree that you need a blurred role of Test/QA Analyst
to achieve this goal. The QA Analyst should be responsible for guiding the team
to do better testing, to focus on the problem areas, and to work with the BA
and PO for better acceptance criteria. It is not to spend time coming up with
a huge number of test cases that cover the code from end to end. This is a poor
use of time, as the ROI is minimal and only viable with automation.
Testing end to end manually is time-consuming and prone to human error.
It would be a disservice to the team if the QA Analyst spent all of his or her
time working on test cases instead of focusing on test planning, metrics collection
and interpretation, and acceptance criteria definition.
I think here is our fundamental difference of opinions. In agile teams, the
tester collaborates very closely with the customer and developers throughout
the iteration. They are involved at the beginning to help uncover hidden assumptions
and collaborate with the Product Owner to define acceptance tests which guide
development. However, as the iteration begins, they do not hand over the work
to someone else. They continue working on the story expanding tests and automating
them at the API level to give to the developers. Once the coding is done and
all the acceptance tests pass, the tester is then able to perform exploratory
testing on the story. The Product Owner can do final acceptance. I feel that
you are advocating a hand-off, although that may not be what you are intending.
I also think that our definition of the type of testing that is being accomplished
is different, although that might be beyond the scope of this article.
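The acceptance tests Janet describes automating "at the API level" might look something like the following. This is purely a sketch: the article names no application, so the "discount" story, the function name, and the business rule are all invented for illustration.

```python
# Illustrative sketch of an API-level acceptance test, i.e. one that
# exercises application logic directly rather than through the UI.
# The story, rule, and names below are hypothetical assumptions.

def apply_discount(subtotal: float, member: bool) -> float:
    # Stand-in for production code: members get 10% off orders over $100.
    if member and subtotal > 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# Acceptance examples agreed with the Product Owner, written as tests
# that guide development and then run on every build:
def test_member_over_threshold_gets_discount():
    assert apply_discount(150.00, member=True) == 135.00

def test_non_member_pays_full_price():
    assert apply_discount(150.00, member=False) == 150.00

def test_member_at_threshold_pays_full_price():
    assert apply_discount(100.00, member=True) == 100.00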
Because the feedback to the developer is quick, there are few metrics to
gather, as defects can be fixed immediately. Metrics such as the velocity of the
team are the responsibility of the ScrumMaster or Iteration Manager. This means
that the QA Analyst should have little to do in the way of metrics collection.
Janet, I see a huge issue here. Testers can only be directly connected to customers
if those customers are accessible. In a number of companies they do not have
the access you are assuming. The Scrum teams I have worked with rely on the
Product Owner to relay the needs and meaning of the customers, end users, and
purchasers. Also, as you have described above, the person who collaborates with
the Product Owner to help define acceptance tests then continues to be the
one to run the acceptance tests and do ad hoc testing. I think this is problematic.
The scenario I think of is this: if there is only one person to do the testing after
coding has been done, then who works on the acceptance testing and the stories
for the next sprint? That person gets behind and slows down the velocity of
the team as a whole. In my experience, the sprint planning for one sprint happens
during the previous sprint. This way the team has its sprint backlog Just
In Time (JIT), right when the next sprint starts. There seems to be a lot
of different experiences we have been through; that may be the reason we are
seeing the difference. You see testers "testing up front," and I see
the roles of Tester and QA Analyst as separate but just as important.
The Product Owner can do acceptance testing; however, in most cases they are
unwilling or unable to do a good job at this due to time constraints. Product
Owners rely on business analysts, quality assurance analysts, and testers to
represent their interests. I believe you are assuming too much time is available
for exploratory testing. On the teams I have worked on we are only trying to
prove to the product owner and stakeholders that the software that was promised
was delivered to the specified level. If that means that the customer asked
for a website with a button, we don't spend time ad hoc testing all the possible
issues with that button and the website. We simply prove to the customer that
they got what they asked for during requirements gathering. Now if they want
something more, like an enhancement, we ask for an iteration to get it done and
delivered to them. But they have to wait.
We have a terminology issue here. I mean the customer in a generic way, as in
anyone representing the customer. In Scrum, that is the Product Owner. I can
accept that quality assurance and testing activities are different, but do not
see them needing to be separate people. If one person does the pre-planning
as you described, and then carries through the iteration planning and the testing
on the same story, she will have a much better understanding of what the issues
might be, and a clearer picture of the whole. I have seen this work very successfully
in many teams, and the testers do not get behind if the team is working together.
You are commenting on your experience in your context, so I'll not challenge
you on your perceptions. However, here's something for you to think about. Instead
of dividing up the activities between the people, what would happen if each
tester took responsibility for a story and the division was along stories or features,
rather than activities? Your testers would gain skills, and I think you might
have better testing in the long run.
Well in that case I guess we would have to consider the ROI of having two people
on the team, a QA Analyst and a tester, versus one person doing both functions.
I can tell you that in my experience it is faster to have the two people versus
one person. As long as the team is completing testable functionality in a timely
manner, both roles can be done efficiently and effectively. However, should you
find that resource scarce, one person will need to do both functions.
What would prevent you from having two people both performing the same function,
just on different stories? You might find that it actually is more effective.
There is no one way to do agile, and your team will need to figure out what works
for you. I do strongly recommend you start thinking about the whole team taking
responsibility for quality and working together to get the stories "Done."
Janet, I agree that as a team member you will have to find what works best for
you in the situation you are in at your company. That being said, if you are
the sole test engineer on an agile team, then you will need to do both job functions
and will have a lot of work to do to help the team get the stories to "Done."
About the Authors
The co-author of Agile Testing: A Practical Guide for Agile Testers and Teams, Janet Gregory is a consultant who specializes in helping teams build quality systems using agile methods. Based in Calgary, Canada, Janet’s greatest passion is promoting agile quality processes. As tester or coach she has helped introduce agile development practices into companies and has successfully transitioned several traditional test teams into the agile world. Her focus is working with business users and testers to understand their roles in agile projects. Janet teaches courses on agile testing and is a frequent speaker at agile and testing software conferences around the world.
Bob Small, founder of the Quality Consortium of Phoenix (http://qaphoenix.com), has 10 years in the IT industry. Bob has been a developer for a professional senior-care provider. He started as a system tester for the number one domain registrar in the world, continued his career in testing, and advanced into quality assurance at a leading contact center solution provider. Bob has recently started guest lecturing at local universities and colleges, and has won worldwide online testing contests. He continues to learn agile techniques and mentors those around him in testing techniques and methods. He has taught developers and mentored junior QA analysts in testing methodologies and QA responsibilities. His favorite quote is: "Plan your work, work your plan."