The software-development world needs another acronym like it needs another methodology. Yet, if you’re anything like me — someone who loathes acronyms, by the way — you find that some acronyms are actually useful for remembering models or concepts (for instance, it’s easy to recall all of the elements of Bill Wake’s INVEST when creating stories).
Lately, it seems that teams could use some help remembering some of the core principles of Kanban, so I’ll offer up my own acronym to help myself as much as anyone else (definitions taken from David Anderson):
- Measure and manage Flow: Track work items to see if they are proceeding at a steady, even pace.
- Limit work-in-progress: Set agreed-upon limits to how many work items are in progress at a time.
- Adapt the process: Adapt the process using ideas from Systems Thinking, W.E. Deming, etc.
- Visualize the workflow: Represent the work items and the workflow on a card wall or electronic board.
- Make process policies Explicit: Agree upon and post policies about how work will be handled.
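The "L" lends itself to a concrete illustration. Here's a minimal sketch, in Python, of a board that enforces agreed-upon WIP limits when work is pulled; the column names and limits are made up for the example:

```python
# Minimal sketch of a Kanban board that refuses to pull work past a WIP limit.
# Column names and limits here are illustrative, not prescriptive.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def can_accept(self):
        # A column may accept work only while it is under its limit.
        return len(self.items) < self.wip_limit

class Board:
    def __init__(self, columns):
        self.columns = {c.name: c for c in columns}

    def pull(self, item, target):
        """Pull an item into a column only if its WIP limit allows it."""
        col = self.columns[target]
        if not col.can_accept():
            raise RuntimeError(
                f"WIP limit reached in '{target}' (limit {col.wip_limit})")
        col.items.append(item)

board = Board([Column("In Progress", 2), Column("Review", 1)])
board.pull("Story A", "In Progress")
board.pull("Story B", "In Progress")
# board.pull("Story C", "In Progress")  # would raise: WIP limit reached
```

The point of the hard failure is the same as on a physical card wall: the team has to have a conversation (swarm, or fix the bottleneck) rather than silently start more work.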
The concepts aren’t in any particular order, except to create the acronym word. What, FLAVE isn’t a word? This guy just doesn’t know how to spell it.
Matthew Steer posted a simple yet intriguing question over at the LinkedIn Agile Testing group:
What sort of developer / tester ratios are people using in their scrum teams?
My reply was that, at some level, it also depends on the skills that the team collectively has. If “devs” are willing and able* to do activities that QA leads and testers generally have skills in, more power to them. In my experience, however, devs are more naturally drawn to other activities, and QA leads/testers should have more advanced skills in customer-level testing and exploratory testing, which many devs mistakenly view as merely ad-hoc testing. Only about half of our teams have QA leads/testers, and I think they’re poorer for it. On an agile team that takes a high view of the “whole team” approach, though, a ratio of anywhere between 4:1 and 8:1 is reasonable, in my opinion.
*Although many developers have skills in the things I mentioned are generally the province of QA leads (customer testing, exploratory testing), I’ve found that they either don’t want to do those activities or do them with a particular blind spot. That blind spot is perhaps through no fault of their own, as it is borne of their unique vantage point as developers. As the team members most responsible for writing the code that literally makes the software for a customer, they tend to view the product from that perspective. It’s a rare developer who can shift so far away from that perspective to see things as a customer might. That doesn’t mean developers can’t understand requirements or the big picture, or even correctly anticipate the details that a customer will want; merely that having a customer mindset, or a total quality view of a project, is more natural for someone not as engrossed in the production-code development. That’s where, even in high-functioning agile teams, a QA lead or tester is indispensable. The skills may — and ideally do — overlap a lot, but the mindset doesn’t nearly as much.
I was reading Implementing Lean Software Development and stumbled upon this passage that rang true for me:
When we walk into a team room, we get an immediate feel for the level of discipline just by looking around. If the room is messy, the team is probably careless, and if the team is careless, you can be sure that the code base is messy. In an organization that goes fast, people know where to find what they need immediately because there is a place for everything and everything is in its place.
Leftover soda cans, papers, lunch remains and personal effects don't in and of themselves cause problems. But they are most likely a symptom — a smell, as it were — of a lack of discipline. The question that I debated with a colleague: Can simply cleaning up and organizing a work area infuse the team with discipline?
Nate asked me this morning if I had read or heard about James Shore’s “The Problems With Acceptance Testing” post. I hadn’t, but, as with most things on an agile project, if it’s important, it will be talked about again. Indeed, by this afternoon, a few people in the agile testing community had responded. Here are a few “money quotes” from people I follow:
Clear examples and improved communication are the biggest benefits of the process, but using a tool brings some additional nice benefits as well. A tool gives us an impartial measure of progress. Ian Cooper said during the interview for my new book that “the tool keeps developers honest”, and I can certainly relate to that. With tests that are evaluated by an impartial tool, “done” is really “what everyone agreed on”, not “almost done with just a few things to fill in tomorrow”. I’m not sure whether an on-site review is enough to guard against this completely.
— Gojko Adzic
In my experience, teams that don’t do automated acceptance testing quickly get to a point where adding new features goes slower and slower, just because it takes longer and longer to test all of the functionality. Sometimes they start trying to figure out which functionality they need to retest, and which “couldn’t possibly be broken by this change.” This starts down a very slippery slope.
— George Dinwiddie
Bottom line, I’m concerned about this issue because I like the clarity that results from having concrete tests that are agreed to be “the definition of done”. At the same time, Jim is a smart and experienced person, and we need to pay attention to what he’s finding out there.
— Ron Jeffries
I think ATDD/AT gets bad rap cuz teams don’t know how / don’t try to design tests well.
— Lisa Crispin
As a tester, I feel somehow offended.
— Markus Gärtner
Roughly where I am, though I may be changing my mind.
— Brian Marick
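For readers who haven't seen one, here is a minimal sketch of the kind of concrete, tool-evaluated test the quotes above are debating. The Cart domain is hypothetical, and real teams would typically express the example through a tool like FitNesse or Cucumber rather than raw unittest, but the shape is the same: a Given/When/Then example that an impartial runner marks pass or fail.

```python
# Illustrative acceptance test in the ATDD style under discussion.
# The Cart class is a stand-in for the system under test; in practice
# the example would drive real production code.
import unittest

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CheckoutAcceptanceTest(unittest.TestCase):
    def test_total_reflects_all_added_items(self):
        # Given a cart with two items
        cart = Cart()
        cart.add("book", 10.00)
        cart.add("pen", 2.50)
        # When the customer views the total
        # Then it equals the sum of the item prices
        self.assertEqual(cart.total(), 12.50)
```

Run with `python -m unittest`. The value Gojko and Ron describe lives here: "done" means this example passes under an impartial tool, not "almost done with just a few things to fill in tomorrow."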
Brian Marick wrote a couple of years ago that “Teams that don’t produce potentially shippable software at the end of each iteration are likely in trouble.”
With more and more teams using a kanban approach to developing software, it would seem that producing potentially shippable software on a regular basis would be more common. But is it? Does your team produce potentially shippable software at the end of each iteration? Why or why not? What can we do to make it the case?
Kanban requires a rigorous dedication to building software. If your “agile circumstances” are less than ideal — and really, how often do you have an ideal situation? — such as an unengaged customer, nebulous deliverables or uncertain deadlines, you need to be all the more rigorous. Build in practices that keep the team honest, like a regular demo (even if the customer doesn’t attend). I’ve seen too many teams burn themselves by waiting until the last week of the project to create a CI build server or see if they could cut a release. If the team releases potentially shippable software starting after the first week of the project and continuing regularly, they’ll save themselves a lot of headaches and reduce the risk of a nightmare end of the project. And they’ll focus on giving their customer something of value each week, instead of what amounts to a bunch of work in progress at the end of the project.
At Asynchrony several teams have dedicated QA leads. These team members spend a portion of their time testing, but they also help customers write stories, and they define acceptance tests and take the lead in automating them, among other activities. I know a lot of people in the software world refer to the people who do these activities as “Agile Testers,” but, with no malice intended, I reject the term “tester.” That’s because it fails in two ways: First, QA leads do much more than merely test, and second, it implies that they are the only ones who test, when in fact, everyone on the team should test. Ultimately, the activities that the person does are more important than the title he goes by. But words are still important, and to the extent that people tend to identify activities with roles and roles with titles, I think “QA lead” is more helpful than “Agile Tester.”
In order to think more intentionally about how your career is going, I think it’s useful to think about your ideal job description. Then you can assess where you are and how far you need to go to get there. Perhaps articulating the description can even be helpful for conveying your position to your manager and having a constructive conversation about making it happen (in some cases, it may even be refreshing and welcome news to your manager). Here’s where I see the value that I bring intersecting with activities that I enjoy, in order of frequency:
- Work in agile software-development teams doing QA lead activities. That includes pairing with developers to write automated tests and drive acceptance-test-driven development, overseeing relevant metrics and engaging the team in conversation about them, and working with customers to elicit requirements. (daily)
- Facilitate retrospectives for other teams (weekly)
- Read relevant newsgroups and articles on QA topics (weekly)
- Spend a couple of hours a week blogging on QA topics (weekly)
- Mentor other QA leads in the company (biweekly)
- Be responsible for a monthly article on QA topics for company distribution (monthly)
- Coach/consult with teams in agile transition (quarterly)
- Teach agile QA course to QA leads (quarterly)
- Attend industry conferences, like Agile, STARWest, CITCON, etc. (semi-annually)