As an Agile Adoption engagement nears its end, client stakeholders will start wondering about the benefits received from coaching. This may be out of their own curiosity, to justify the ROI to their bosses, or simply to show off their initiatives to peers!
Often this translates into a Coaching Benefits Report. But I doubt a set template would work: different stakeholders have different objectives, and the report may need to be framed accordingly. Some reports may stress the subjective aspects, while others rely on objective ones. Some may be formal in language, while others may have a livelier feel. In any case, some aspects you can touch upon are covered below.
The idea is simple: you got some numbers when you started the engagement with an assessment; you saw how the numbers varied during active coaching; and finally, the numbers have reached a certain level by the time you wrap up. A Before/After comparison typically stresses the starting numbers against the final ones; in an appendix, you could attach a trend graph showing what happened in between.
Beyond this obvious approach to metrics, I want to call out some gotchas: numbers that could confuse you and your stakeholders at first glance!
Sample Case Study 1
During assessment you measure Defect Count within Iteration, i.e., the number of defects found by QA for an iteration's stories within that same iteration. You find it to be 10. After active coaching, you would expect a decrease in defects. It turns out the number has increased to 15!
You and your stakeholders may end up asking: what Agile Coaching have we done that actually led to an increase in defects?!
Answer: Earlier, the team's process and practices were such that a high number of defects were slipping through. These were caught at later stages, unfortunately mostly in production by end users. By improving its processes and practices, the team is now catching defects earlier. With a clearer view of quality, the team is better placed to focus on these problem areas.
Sample Case Study 2
Consider, per iteration, the percentage of stories that have automated tests. During assessment of a team that had already started some Agile practices, let's say the number was 50%. We measure it again to get final numbers before wrapping up, and we see it's 35%!
What happened?! Was the team better off on their own, before Agile Coaching?
No. The last iteration we measured just happened to fall during Christmas time, when most of the team was on leave, and the work was mostly spikes or proofs of concept, done in a separate code repository, in preparation for a new feature that would be backed by a fresh tech stack!
We can see that relying on simple number comparisons can throw us off and lead us to wrong conclusions. I've called this out before and will repeat it: metrics are simply indicators, not guarantees, and should be supplemented with subjective judgement. That said, numbers can still be useful. When results seem surprising, as in the cases above, consider these approaches:
- Use complementary metrics (e.g., Defect Count per Environment in addition to Defect Count within Iteration)
- Use a trend graph instead of a Before/After comparison
- Use an average over multiple recent iterations instead of just the latest numbers
- Present ratios or percentages along with the absolute numbers, not in isolation (e.g., a seemingly low 7% coverage increase meant a substantial ~6000 LOC coverage improvement in a huge legacy code base)
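To make the third approach concrete, here is a minimal Python sketch (the metric values are made up for illustration, echoing Case Study 2) showing how an average over recent iterations dampens a one-off anomaly, like a holiday iteration, that a naive Before/After comparison would misread as a regression:

```python
# Hypothetical per-iteration values of "% of stories with automated tests".
# The final iteration (35) is an outlier caused by a holiday/spike iteration.
automated_test_pct = [50, 55, 58, 62, 60, 35]

def latest(values):
    """A naive Before/After comparison uses only the last data point."""
    return values[-1]

def recent_average(values, window=3):
    """Averaging the last `window` iterations smooths out one-off dips."""
    recent = values[-window:]
    return sum(recent) / len(recent)

print(latest(automated_test_pct))                     # 35 -> looks like a regression
print(round(recent_average(automated_test_pct), 1))   # 52.3 -> closer to the real trend
```

The window size is a judgement call: wide enough to absorb anomalies, narrow enough that old numbers don't mask genuine recent change.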
Surveys & Feedback Forms
To capture improvements in team skills and general shifts in mindset, use surveys with a mix of subjective and objective questions. You may have conducted such surveys during assessment, and you can conduct a similar one towards the end of the engagement. Sometimes people challenge whether certain aspects of team behaviour can be tied to coaching efforts, claiming those changes might have happened without coaching too. While this may be true, collecting specific coaching-related feedback can help evaluate the contribution of coaching to such behavioural changes. Such feedback may be collected through surveys in addition to in-person conversations. It is important to clarify that the feedback is for gauging what worked and what could be improved, and that honest answers will not reflect badly on the Coach in any way.
As they say, a picture is worth a thousand words. Feel free to include pictures of team stand-ups, retrospective stickies, information radiators, etc., in the Benefits document. Compared to dry numbers, pictures make the entire engagement feel more real and worthwhile.
Speaking of making things feel more real, try to weave the team's journey on the Agile path into an easy-to-follow story, covering where we began, what we tried, our successes and failures, and how the changes are being sustained. You may present these on a timeline or simply as a bulleted list of highlights.
You can end a Benefits report with a Next Steps section. This should cover recommendations on what the team, the department, and the organization should focus on next, so that even without formal Agile Coaching, improvements are sustained, learnings are shared, and further improvement happens in creative and innovative ways, not just reactive ones.
Do not forget to share drafts of the Coaching Benefits Report with client stakeholders as you write it, so that you get feedback and can meet the report's objectives while catering to the intended audience.