Matthew Steer posted a simple yet intriguing question over at the LinkedIn Agile Testing group:
What sort of developer / tester ratios are people using in their scrum teams?
My reply was that, at some level, it also depends on the skills that the team collectively has. If “devs” are willing and able* to do activities that QA leads and testers generally have skills in, more power to them. In my experience, however, devs are more naturally drawn to other activities, and QA leads/testers should have more advanced skills in customer-level testing and exploratory testing, which many devs mistakenly view as merely ad-hoc testing. Only about half of our teams have QA leads/testers, and I think they’re poorer for it. On an agile team that takes a high view of the “whole team” approach, though, a ratio of anywhere between 4:1 and 8:1 is reasonable, in my opinion.
*Although many developers have skills in the areas I mentioned as generally the province of QA leads (customer testing, exploratory testing), I’ve found that they either don’t want to do those activities or do them with a particular blind spot. That blind spot is perhaps through no fault of their own, as it is born of their unique vantage point as developers. As the team members most responsible for writing the code that actually becomes the software a customer uses, they tend to view the product from that perspective. It’s a rare developer who can shift far enough away from that perspective to see things as a customer might. That doesn’t mean developers can’t understand requirements or the big picture, or even correctly anticipate the details a customer will want; merely that a customer mindset, or a total-quality view of a project, comes more naturally to someone not as engrossed in production-code development. That’s where, even on high-functioning agile teams, a QA lead or tester is indispensable. The skills may, and ideally do, overlap a lot, but the mindset doesn’t nearly as much.
I was reading Implementing Lean Software Development and stumbled upon this passage that rang true for me:
When we walk into a team room, we get an immediate feel for the level of discipline just by looking around. If the room is messy, the team is probably careless, and if the team is careless, you can be sure that the code base is messy. In an organization that goes fast, people know where to find what they need immediately because there is a place for everything and everything is in its place.
Leftover soda cans, papers, lunch remains, and personal effects don’t in and of themselves cause problems. But they are most likely a symptom, a smell as it were, of a lack of discipline. The question I debated with a colleague: can simply cleaning up and organizing a work area infuse the team with discipline?
Nate asked me this morning if I had read or heard about James Shore’s “The Problems With Acceptance Testing” post. I hadn’t, but, as with most things on an agile project, if it’s important, it will be talked about again. Indeed, by this afternoon, a few people in the agile testing community had responded. Here are a few “money quotes” from people I follow:
Clear examples and improved communication are the biggest benefits of the process, but using a tool brings some additional nice benefits as well. A tool gives us an impartial measure of progress. Ian Cooper said during the interview for my new book that “the tool keeps developers honest”, and I can certainly relate to that. With tests that are evaluated by an impartial tool, “done” is really “what everyone agreed on”, not “almost done with just a few things to fill in tomorrow”. I’m not sure whether an on-site review is enough to guard against this completely.
— Gojko Adzic
In my experience, teams that don’t do automated acceptance testing quickly get to a point where adding new features goes slower and slower, just because it takes longer and longer to test all of the functionality. Sometimes they start trying to figure out which functionality they need to retest, and which “couldn’t possibly be broken by this change.” This starts down a very slippery slope.
— George Dinwiddie
Bottom line, I’m concerned about this issue because I like the clarity that results from having concrete tests that are agreed to be “the definition of done”. At the same time, Jim is a smart and experienced person, and we need to pay attention to what he’s finding out there.
— Ron Jeffries
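The debate above is about the practice, not any one tool, but for readers new to it, here is a minimal sketch of what a “concrete test agreed to be the definition of done” can look like when expressed as an automated acceptance test. The `apply_discount` function and the example values are hypothetical stand-ins, not from any of the posts quoted here:

```python
import unittest

# Hypothetical feature under test -- a stand-in for whatever
# behavior the team and customer agreed on.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class DiscountAcceptanceTest(unittest.TestCase):
    """Executable examples agreed on with the customer.

    Each test encodes one agreed example; the story is "done"
    when an impartial tool reports that all of them pass.
    """

    def test_ten_percent_off_a_standard_order(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_the_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)
```

Run with `python -m unittest` and the pass/fail report is the impartial measure Gojko describes: “done” is the examples passing, not “almost done with just a few things to fill in tomorrow.”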
I think ATDD/AT gets bad rap cuz teams don’t know how / don’t try to design tests well.
— Lisa Crispin
As a tester, I feel somehow offended.
— Markus Gärtner
Roughly where I am, though I may be changing my mind.
— Brian Marick