This article was prepared for presentation at the Software Systems Best Practice Conference, Anaheim, California. The final version is available on SlideShare.
We can’t imagine software development without testing.
Programmers are human and they’re bound to make mistakes. We use testing as a
tool to ensure that defects are caught before we ship.
Good software engineering practice requires us to isolate
development and testing efforts. This is commonly implemented by assigning each
of these responsibilities to different teams. Along these lines, the idea of
the magical ratio of testers to developers was conceived.
To improve quality, we intuitively increase the ratio of
testers to developers. Increasing the testing effort in this way directly
inflates budgets and time to market. There is also a hidden cost at work: the
interpersonal friction that arises when two teams with different objectives
try to work together.
Development teams that are directly responsible for the
quality of the code they ship show several desirable advantages. By
internalizing quality goals, developers are pushed to adopt a ‘test first’
attitude. This approach lets them get better over time at meeting the
standards required to ship software to customers. By eliminating the
overhead of inter-team coordination, single teams become more agile and
responsive. Turnaround time for bug resolution is also reduced to its minimum.
These are great qualities to achieve.
Software Development Life Cycle
Before we can review the role of the software development
team, we need to revisit the traditional software development waterfall
model and the role the team plays in it. In this model, requirements
analysis, system and technical design, development, testing, and release are
each discrete steps. In practice, management and development teams rarely have
the freedom to implement the model this way. Let’s take a closer look at
the underlying forces that disrupt its simple flow.
Requirements
If we think of any software project as governed by
technical and non-technical requirements, then the key underlying constraints
are time and money.
At the start of the cycle, we describe the functional
requirements, or what the software must deliver. These are created on the basis
of our current knowledge of what our customers want. As market demands change,
functional requirements must follow. In many cases, customers themselves are
not aware of what they want.
Further, written functional requirements are at best an
abstraction of what is really desired. It is left to the development team to fill
in the details and plug the gaps. Thus, even after creating the functional
requirements, we are left with lingering questions. How do we know for sure
that what the development team delivers will meet the requirements? Can all the
requirements be met within the defined budget?
Considering these factors, we allow ourselves to revisit and
change the requirements.
System Design
Since requirements are a prerequisite to the design stage,
many of the challenges we found in the first stage also affect this stage. You
could require that designs eliminate the need to revisit development even if
the requirements were to change. However, this approach risks over-engineering
the software and increasing development complexity, and it results in
significant waste when major changes are made to the original requirements to
keep pace with the market.
Development and Testing
Even against the backdrop of the flawed waterfall model, the
implicit expectation is that the software delivered must be of excellent
quality. In a nutshell, it must meet the original, ever-changing requirements,
work in the common and extreme scenarios that can be imagined, or at least
degrade gracefully in the remaining scenarios that could not feasibly be
accounted for or imagined in the first place.
Delivering this magical software is the joint responsibility
of the development and testing teams. As we’re already aware, joint responsibilities
lead to imperfect outcomes. Thus, optimizing the tradeoff between our
expectations and this joint team’s ability to deliver is of great
importance to delivering great software.
Let's dig a little deeper and see how development and
testing teams work. For the sake of simplicity, we will assume that testing is
given the same importance as development. For instance, requirements are
discussed with both the development and testing teams, and both are able
to begin their work together.
The Problem
Communication Issues
While the development team does low-level technical design and
starts coding, the testing team starts writing the test plan and test cases. At
this point, only black-box testing can be planned, because no actual code
exists yet for any white-box testing approach.
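To make the distinction concrete, here is a minimal sketch in Python; the function `apply_discount` and its discount rules are invented for illustration. The black-box tests follow from the written requirement alone (“coupon SAVE10 gives 10% off”), while the white-box test targets a capping branch that only a reading of the code reveals.

```python
import unittest

def apply_discount(price, coupon):
    """Hypothetical function under test. The spec says coupon 'SAVE10'
    gives 10% off; internally, the discount is also capped at 100."""
    if coupon != "SAVE10":
        return price
    discount = price * 0.10
    if discount > 100:  # internal branch invisible to black-box testers
        discount = 100
    return price - discount

class BlackBoxTests(unittest.TestCase):
    # Written from the requirements alone, before any code exists.
    def test_valid_coupon(self):
        self.assertEqual(apply_discount(200, "SAVE10"), 180)

    def test_invalid_coupon(self):
        self.assertEqual(apply_discount(200, "XYZ"), 200)

class WhiteBoxTests(unittest.TestCase):
    # Only someone reading the implementation knows the cap exists,
    # so only a white-box tester would choose a price above 1000.
    def test_discount_cap(self):
        self.assertEqual(apply_discount(2000, "SAVE10"), 1900)

if __name__ == "__main__":
    unittest.main()
```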
As customer-facing teams learn more about the market and
make changes to the original requirements, these changes need to be
communicated to both teams, a costly and imperfect exercise. Further, as the
development team makes progress and learns more about implementation
limitations, requirements and designs will change to keep to delivery schedules
and budget limits. In many cases, these changes remain implicit within the
development team, and their importance is realized only later.
Thus, the development and testing teams’ understanding of the
software in development begins to diverge. As anyone with experience in testing
will tell you, delays and imperfect communication result in unexpected pauses,
rework, and a great deal of frustration. Features that go ‘missing’ or are
found to be inexplicably ‘broken’ in the build routinely halt testing efforts.
Coordination Issues
The perspectives that the development and testing teams
adopt are also at odds. Testers need to break the software, whereas developers
adopt an implementation perspective. Because the perspectives differ, the
order in which the two teams want to approach the software and its subcomponents
will differ as well. Yet in order to write comprehensive test cases, the testing
team requires implementation details of features that the development team is
simply not yet prepared to provide.
Development schedules also need to be adjusted to even
out the workload of the testing team. As a result, both developers and testers have
to compromise on their natural order of work and agree on a mid-way build
plan. Both teams are now constrained by this plan and must forgo any creative
optimization of it that would suit their individual objectives. Failing to meet
the build plan affects both teams and raises further coordination issues.
Mindset Issues
Dual reward structures and separate accountability
also play a significant role in further dividing the two teams. For instance,
when embarrassing or severe defects are uncovered, the testing team is pushed to
tighten its processes. On the other hand, a successful release invites praise
for the developers.
The testing team essentially insulates developers from how
the software performs with end users. If the two teams work in a staggered
mode, developers simply throw code over the wall to the testers and
move on to their next responsibility. As a result, developers miss the
opportunity to process valuable feedback from the field in the context of the
current release effort. Bug resolution cycles also grow longer, as developers
have to pre-empt their current responsibilities and switch context to fix bugs.
Isolation from the end user also has a subtle effect on
individuals: it allows overall quality to slip. Developers begin to believe
that they merely need to write code that gets past the test team. Since someone
else is responsible for delivering quality, in the back of the developer’s mind
it becomes acceptable to write code that does not handle all cases or does not
deliver the complete functionality. It also becomes acceptable to skip impact
analysis upfront, because if the new code breaks some other functionality, the
testing team will report that too, and it will go through the bug-fix cycle.
Testers, meanwhile, are rewarded for the non-obvious defects they
find before release, which encourages attachment to bug reports. If the gross
number of bug reports accepted for fixing is the testing team’s only measuring
benchmark, it can be demoralizing for testers and the team at large to have
bugs rejected by developers.
The Problem in a Nutshell
In a practical software development project, the company
needs to provide a great deal of coordination and communication
infrastructure. In spite of that, a divide forms between development and
testing team members. Overall, these issues not only add to the cost but also
sour the workplace environment.
The Solution
The simplest solution to eliminate the divide is to merge the
teams. However, in our observation, superficial merging does not work.
Superficial Merging
In one experiment to reduce this divide, the two teams were
merged into a single ‘Engineering Team’ with one reporting lead.
This was a cosmetic change and had a negative impact:
processes stayed the same, and the engineering manager, coming from the
development stream, failed to understand the testers’ issues.
In a different experiment, developers were asked to
alternate their primary development roles with testing responsibilities. This
also failed, for the following reasons:
- Developers were pre-empted in the midst of their testing assignments whenever they needed to address urgent issues from previous releases.
- Some developers refused to be rotated onto the testing team.
- The testing team felt that developers coming in on rotation needed a lot of training in formal testing and were not being helpful.
True Merging
True merging requires merging the objectives and the functions
of both teams. This implies that there are no specialist positions: each
individual must do both development and testing. Merging teams is therefore
difficult to implement as an afterthought. The complete process, from
planning and recruitment through execution and delivery, needs to be aligned
with this ideal, and everybody has to work with this approach.
Company management needs to make way for new processes.
For example, when recruiting, the job profile needs to communicate to
candidates that they are expected to wear both testing and development hats. Context
switches between development and testing must be instantaneous. Interestingly, no
formal testing documentation is needed, eliminating written test cases, test
plans, and test reports. The number and quality of bug reports also go down.
All this may look horrible, but it works in favor of quality. In
the remainder of this document, we shall discuss how at length.
How It Works
As said earlier, there is no dedicated testing team.
Everybody involved in the project knows how the software works and what is
expected of it. The waterfall model itself remains in place; only
development and testing are carried out differently from how two separate
teams would approach them. Individuals are better aware of changing
requirements, and team coordination is better synchronized.
When work begins, individuals are free to work in the sequence
they prefer and to optimize and adjust the schedule as needed. When
estimating time for a feature, members add the additional time necessary to
smoke-test the overall build after integration.
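As a rough sketch of what such a post-integration smoke test might look like, here is a minimal Python script; the endpoints and port are illustrative assumptions, not taken from any particular project.

```python
import sys
import urllib.request

# Hypothetical smoke test: after integrating a feature, the developer
# runs a quick end-to-end check that the build starts and answers its
# most basic requests. The URLs below are placeholders.
CHECKS = [
    "http://localhost:8080/health",
    "http://localhost:8080/login",
]

def smoke_test():
    """Return a list of (url, reason) pairs for failed checks."""
    failures = []
    for url in CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    failures.append((url, resp.status))
        except OSError as exc:  # URLError, timeouts, refused connections
            failures.append((url, exc))
    return failures

if __name__ == "__main__":
    failed = smoke_test()
    for url, reason in failed:
        print(f"SMOKE FAIL: {url}: {reason}")
    sys.exit(1 if failed else 0)
```

A non-zero exit code makes the script easy to wire into whatever build step runs after integration.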
Developers are skilled in basic testing techniques. To do
a better job of testing, a developer performs a thorough impact analysis of
any change he makes. Access to, and understanding of, the code increases
the accuracy of impact analysis and makes it easy to include testability hooks.
Knowledge of the code also helps in running better test cases and choosing
better test data. In this way, trivial bugs are resolved in the development
environment itself.
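One concrete form such a testability hook can take is dependency injection. The sketch below uses a hypothetical `ReportGenerator` class: by letting the clock be passed in, the developer who wrote the code makes it cheap to test deterministically in the development environment.

```python
import unittest
from datetime import datetime

class ReportGenerator:
    """Hypothetical class. Accepting 'now' as a parameter is the
    testability hook: tests can pin the clock instead of depending
    on the real system time."""
    def __init__(self, now=datetime.now):
        self._now = now

    def header(self):
        return f"Daily report for {self._now():%Y-%m-%d}"

class ReportGeneratorTests(unittest.TestCase):
    def test_header_uses_injected_clock(self):
        fixed = lambda: datetime(2012, 9, 1)
        report = ReportGenerator(now=fixed)
        self.assertEqual(report.header(), "Daily report for 2012-09-01")

if __name__ == "__main__":
    unittest.main()
```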
When a developer encounters a bug in code written by
someone else, he discusses the bug with the original author, and together they
come up with a plan for resolution. The bug tracking system is used mostly to
serve reminders, which is why bug reports look poor by conventional standards:
they contain investigation and resolution notes rather than steps to reproduce,
test data, and so on.
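As an illustration, a bug report in this mode might look like the following entry; the module, revision number, and details are invented for the example.

```
Bug #214: Session timeout not honored after password change
Status:   Fixed in r1892
Notes:    Discussed with the session module's original author.
          Root cause: cached session TTL was not invalidated on a
          credential update. Fix: flush the session cache entry on
          password change. Impact analysis: login and SSO paths
          re-tested locally; no other callers of the cache.
```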
In this system, both praise and blame come to the same
person. This makes the developer completely responsible for the product’s
quality and encourages better quality.
Limitations
While the approach described here produces good-quality software,
the formal documentation of test cases, test data, test reports, and so on
becomes unavailable.
There are also cases where this methodology does not directly
apply or needs to be adapted.
For instance, if a large software product is now simply being
maintained, the development and testing workloads are no longer evenly matched:
the testing workload is significantly higher and calls for specialization.
In other cases, such as mission-critical software, the cost
of managing multiple teams plus all the overheads may still be significantly less
than the cost of a defect escaping into the field.
Another case is software shipped embedded on a
device, where no matter how you test otherwise, a test cycle on actual devices is
mandatory.
One more case is WIPS (Wireless Intrusion Prevention
Systems), where so many environmental factors are at play that a
radio-frequency-isolated facility is needed for testing.
Scaling
The approach described in this paper was applied
successfully with small team sizes, or in cases where the batch size of
released features is small. However, the underlying idea is to make
developers write good code in the first place by holding them responsible
for quality and by making them think about potential failure points ahead
of time.
At large scale, if the testing team cannot be completely
eliminated, it should be introduced very late in the process. For instance, if
the testing team comes into the picture only at the customer acceptance test
level, when the software is, as far as the development team is concerned,
already ‘shipped’, then the development team members must by all means raise
their game to deliver high-quality software.