Here are two datasets, from different clients in the public sector, showing once again how Kanban reduces lead time and, perhaps more significantly, stabilises it, enabling more reliable promises to be made to customers.
I first showed this dataset at lascot12; it comes from a defects process operated by a support team. These are relatively low-severity defects, otherwise they would have attracted far more focus long ago. While the original variation didn't cause serious problems for our customer relationships – there were certainly no SLAs or penalties involved – such long-running outstanding defects were proving annoying. Not annoying enough, though, to attract vast amounts of effort. The light touch of Kanban did the trick.
This is a run chart over the period of Kanban implementation, measuring cycle time (it’s a dump from that team’s AgileZen board).
As you can guess from the fact that some tickets went through same or next day, the actual hands-on working time isn't the issue; it's the waiting time throughout the process. Once you get the visualisation up and running, even without the flow goodness of WiP restriction, slow-running items stick out like sore thumbs and get the attention they deserve.
The second dataset is from a commercial process that operates in nearly every account: commercial response to Requests For Service (aka the RFS process). Here we work on a piece-work basis, with the customer asking us to solution and price each piece. In addition, we need approval to release a price to the customer; that involves a scorecard and a manual assessment of risk, which dictates the level of authority needed to approve the release, since the released price forms a contractually robust offer to the customer.
In this case, we were under an SLA of 10 working days to turn RFSs around into released prices.
This one is interesting, as we do have true ‘Before’ data to compare with.
Before: all over the shop.
To give you a sense of just how much, look at the Spectral Frequency Analysis. Anyone accustomed to process variation would expect to see a nice Bell Curve. Perhaps a wide, flat one, but with some sense of clustering around a mean.
Where's the Bell Curve peak? If you had to draw a mean, it'd be somewhere around 21d, but as far as I can tell the distribution is utterly random. Drawing a 95% SLA around that 'mean' would mean you'd be crazy to sign up for anything less than about 60d, so it's no wonder that less than 30% of this dataset meets the 10d target.
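If you want to derive that kind of 95% SLA figure from your own lead-time data, a minimal sketch looks like this (in Python, using hypothetical lead times for illustration – not the client dataset discussed here):

```python
import math

# Hypothetical lead times in working days -- illustrative only,
# not the actual client data discussed above.
lead_times = [3, 8, 12, 15, 18, 21, 21, 25, 30, 34, 40, 48, 55, 62, 70]

def percentile(data, pct):
    """Nearest-rank percentile: the smallest value such that at
    least pct% of the data points fall at or below it."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

mean = sum(lead_times) / len(lead_times)  # rough central tendency
sla_95 = percentile(lead_times, 95)       # what you could promise with 95% confidence
```

The point of the sketch is that with a wide, random-looking distribution, the 95th percentile sits far above the mean – which is why a 95% SLA set anywhere near the 'mean' would be reckless.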
Compare that to the After picture:
Process stability. And happily, the mean is falling below the 10d SLA – in fact, this shows we're currently hitting 67% SLA compliance, and looking at the tail end of the run chart, if those last 8 datapoints are establishing the trend they seem to be, that number will go up. Right now – even with the Kanban teething period – I'd have fair confidence in achieving 95% SLA compliance at 22d, whereas before we'd probably have been fortunate to achieve 50%.
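The compliance figure itself is simply the fraction of items that land within the SLA. A quick sketch (again with hypothetical numbers, not the real RFS data):

```python
# Hypothetical post-Kanban lead times in working days -- illustrative only.
after = [2, 4, 5, 6, 7, 8, 9, 9, 10, 11, 12, 14, 16, 19, 23]

def sla_compliance(lead_times, sla_days):
    """Percentage of items completed within the SLA."""
    within = sum(1 for t in lead_times if t <= sla_days)
    return 100 * within / len(lead_times)

compliance = sla_compliance(after, 10)  # share of items done in 10 working days or fewer
```

Run this against a rolling window of recent items and you can watch the compliance number respond as the trend in the run chart establishes itself.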
Two very different processes. Two different teams, working entirely independently with different customers. Same outcome.
Originally posted on my Martin Burns: PM PoV site at http://writing.easyweb.co.uk/more-kanban-data