Documentation in the Mob
As a follow-up to questions I received on the mob programming Q&A blog, I wanted to address questions about documentation.
Background
We have developed our process over many retrospectives.
In fact, we hold only one set of rules sacred: Treat everyone with Kindness,
Consideration, and Respect. With that said, our process typically leaves no
stone unturned when retrospecting on any one practice, including documentation.
Early on we found that our need for documentation to build an
effective product was minimal to nonexistent. As we all love to say: "Working software over comprehensive
documentation." We have developed our philosophy around that idea, and we
have retrospected frequently on the result. This does not mean that we are not
documenting at all; rather, we document differently.
Q: Are test cases described outside of the actual code?
A: Test cases are conceptualized by the product owner. They
give us a feature and tell us how it would be used, and we help them build
a description of the feature in cucumber/gherkin.
The lines of gherkin get translated into executable UI testing code, which we then
use for acceptance testing. For product owners who are not comfortable
with continuous delivery, we give them a button that deploys the version of
the app they see to production. In the past, before we had our continuous
integration environment, we would just demo the new features and then deploy
once we knew the tests were passing.
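To make the gherkin-to-code step concrete, here is a minimal sketch of how one line of gherkin might map to an executable UI test, assuming cucumber-js with a Playwright-driven browser; the feature text, selectors, URL, and the shared `page` object are illustrative, not taken from our actual projects.

```typescript
// Gherkin the product owner helps us write (illustrative):
//
//   Scenario: Submit a support request
//     Given I am on the support page
//     When I submit a request with the subject "Printer is down"
//     Then I see a confirmation message

import { Given, When, Then } from '@cucumber/cucumber';
import { expect, Page } from '@playwright/test';

// Hypothetical shared browser page, set up elsewhere in cucumber's World/hooks.
declare const page: Page;

Given('I am on the support page', async () => {
  await page.goto('https://example.test/support');
});

When('I submit a request with the subject {string}', async (subject: string) => {
  await page.fill('#subject', subject);
  await page.click('#submit');
});

Then('I see a confirmation message', async () => {
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

Each plain-language gherkin step is bound to a small piece of automation like this, which is what lets the product owner's own wording double as the acceptance test.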
The unit tests and gherkin tests are the
only feature-related documentation we keep. At times we will create Visio
diagrams of an application's components to make maintenance easier; however, we
consider the need for that a smell that could be addressed by a more comprehensive
test suite (a lofty goal). Our product owners may keep a large backlog of what
they want, but we ask them not to show us the whole thing. Instead, they give us the
next most important feature they need to see complete and working. We then get
it working and move to the next feature. This has been very effective for us:
after we have released a project to production, we discover we have
eliminated hundreds of tasks that were considered "nice to have".
If the product owner does have test cases or feature descriptions listed outside the gherkin, we have them feed those to us one at a time as we develop each to working and complete. Eventually we reach an MVP and can begin collecting real user feedback to continue development. So the documentation may exist, but our mobs do not look at it or use it.
Q: Are there any manual test cases and are they documented?
A: About 4 years ago there were manual test cases, and they were documented. As we began to mob program together, we found that our defect rate was beginning to crawl toward zero. We eventually held a retrospective that ended in the team deciding we no longer needed a manual testing team. We merged all the manual testers into the development mobs, and they began to help us write the unit tests and UI tests we use to produce high-quality software.
Today we have no manual test cases for any application that
is fully tested. We do have legacy projects where we add tests as we go; any
new code we work on becomes wrapped in automated tests, gradually reducing the manual
testing we need to do.
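One common way to wrap legacy code like this (my illustration, not necessarily the mob's exact approach) is a characterization test: before changing the code, pin its current behavior so a refactor cannot silently alter it. The `formatInvoiceNumber` function and its output below are hypothetical.

```typescript
import { describe, it, expect } from '@jest/globals';

// Hypothetical legacy function we are about to change; its exact
// output here is invented for the sake of the example.
import { formatInvoiceNumber } from './legacy/invoices';

describe('formatInvoiceNumber (legacy)', () => {
  it('keeps the zero-padded format it produces today', () => {
    // Pin the currently observed behavior before refactoring,
    // so any accidental change shows up as a failing test.
    expect(formatInvoiceNumber(42)).toBe('INV-000042');
  });
});
```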
Q: Does the mob contribute to any training material or user guides?
A: Typically, we work with our product owners to create
training material. We rarely have our developers writing any training material,
though they may help the product owner develop a manual based on their in-depth
knowledge of the user interface and layout.
Q: Do you use BDD only when writing automated UI testing?
A: We use Gherkin only when automating UI tests, because we use
it as a language to communicate with our product owners. For all of our smaller, non-integration-level
tests, we use the Arrange-Act-Assert / Describe-It model.
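As a rough illustration of the Arrange-Act-Assert shape inside a Describe/It block (the `cartTotal` function is a made-up example, and Jest is just one assumed runner):

```typescript
import { describe, it, expect } from '@jest/globals';

// Hypothetical unit under test: a simple cart total calculator.
function cartTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  return subtotal * (1 + taxRate);
}

describe('cartTotal', () => {
  it('applies the tax rate to the sum of the prices', () => {
    // Arrange: set up the inputs.
    const prices = [10, 20];
    const taxRate = 0.1;

    // Act: exercise the unit under test.
    const total = cartTotal(prices, taxRate);

    // Assert: verify the observable result.
    expect(total).toBeCloseTo(33.0);
  });
});
```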
Conclusion
In general, we consider the need for more documentation
outside of the code to be a communication issue. Any document we maintain is one
layer of indirection between the product owner and the functionality of the
code. That indirection could mean a world of misunderstandings in the future,
especially when a new product owner takes over the project or a new team
begins to work on the code. We review this constantly: we hold retrospectives
multiple times a week, and we keep finding that this approach works best for us
while we continue to release high-quality code to our customers frequently.
Thanks for the questions; I am happy to answer any others
you might have. Please put them in the comments, and I will write
another blog if there are enough!
Hi Chris,
In the case when you are building a whole new product, I suppose the first increment of the product to be put in production must be quite big. How do you convince the Product Owner to just give you some small slices one after the other, rather than having a sort of roadmap with a scheduled release date?
If you succeed with that, that must be really great.
Hi Nicolas,
It is really a matter of trust building. Over the last 4 years we have had the opportunity to build trust within the organization, to the point where they are willing to go ahead and work this way. As I said in the post, however, some of our product owners still do keep large backlogs for themselves, but we ask them to re-evaluate each new item as each task is completed, steering the project as we go.
We also hold monthly companywide status meetings (attendance is opt-in for anyone who is interested) to demo what has been done, which helps provide visibility into the decisions of the product owner. At times this causes large discussions that force the product owners to rethink their backlog. When they change their minds like this, however, we do not need to worry, because we only pay attention to one or two features at a time.
As for new products and big releases, we continuously ask the organization and our product owner whether what we have now is enough for an MVP. We try to reach an MVP for alpha, beta, production, and new feature iterations.
I want to be clear, however, that this type of trust took time to build. We have seen successes and failures from the experiments we have run within each project, and we arrived at this method through a continuous improvement process. It was not magic :)
Excellent.
So I guess it implies a really emergent architecture, because you don't look at the big picture up front. Have you had a case where a subsequent story required rethinking one aspect of the architecture that you could have anticipated if you had looked at that story before?
Typically the product owner has an idea of the number of users in the system, so our initial design choices are more around scalability and any special circumstances that are obvious. Otherwise we rely heavily on Practical Refactoring (https://www.youtube.com/watch?v=aWiwDdx_rdo) for all of our work. Because we test everything we work on, we are extremely safe when we decide to change large portions of the application based on new information. We are always ready to steer the architecture if needed; however, because we try to identify good patterns and refine them over a longer period of time, we find that an architecture defined by small incremental steps is better than anything we could have defined up front. Each project starts with an assumed architecture, but not much thought is put into it. Instead we try it, find out where the pain points are, and then work on making those pain points go away.