SAFe’s content carries a range of strengths of recommendation, but this is implicit and only weakly articulated. It needs to better define what is mandatory, and what is customisable.
SAFe is a Framework, not a Religion.
One of the criticisms levelled at SAFe by folks early in the Dunning-Kruger learning journey is that it is a monolithic set of required practices, and that while it talks about customising, this very much means customising down, in the face of frowns and discouragement. This is definitely compounded by inexperienced coaches, who behave like high priests faced with scriptural revisionism. Although this kind of behaviour is common in many coaching environments, the critics are ill-equipped to differentiate between their experience and SAFe’s intent.
This is compounded in organisations early in their transformation journey, where a little Novice-level rule compliance may be helpful. And many contexts have enough similarity to support strong reuse of practices. Which enterprise software-intensive context would not benefit from taking a large axe to its backlog? So common a problem is it that this high-level recommendation can often be drafted in the car park before the first customer meeting.
SAFe has always (at least since I first heard Dean talking about it in 2013) been consciously intended as a flexible framework that is responsive to context, and since 4.0’s launch earlier this year, there has been an increasing emphasis on principles with the explicit recognition that you can’t simply take SAFe off the shelf and install it ‘as-is’ without contextual adaptation.
Nevertheless, as the number of SAFe instantiations has increased, we’re seeing increasing numbers of organisations who have taken the permission to contextually adapt, and have done so in profoundly unhelpful ways. Or who have confused empathy with starting constraints with a failure to challenge them, and thoughtful timing and pacing of a change journey with a failure to embark on it.
As a result, we are seeing many profoundly unhelpful variations from SAFe practices: outcomes that fail to produce the business benefits, and environments that remain unchanged command-and-control systems repainted in lovely Agile words. This is essentially a realisation of risks that some critics have long presented as a racing certainty (ironically, sometimes the same critics who complain about the lack of contextual customisability). In these cases, stronger guard rails around core practices would be very helpful in preventing problems.
Strength of Recommendation
Observed Critical Dysfunctions
All of the following have been observed in SAFe instantiations, usually with significant problems following.
- No Quality Practices
SAFe is absolutely clear on this: “Crappy Code Does Not Scale”. Using some kind of quality practices (with XP definitely the preferred option for software teams) used to be described as “the only truly mandatory thing in SAFe”. Yet implementing organisations often try to do without.
- Representatives Only at PI Planning
Sometimes it is very difficult to bring everyone together for PI Planning. Travel budgets for distributed organisations are often constrained, and not all locations enjoy the ease of travel found within the Schengen Area, so sponsoring a hundred visas may not be achievable. Some organisations try to get around this by only ever sending representatives to PI Planning. This breaks the rapid feedback cycles and the team ownership that come from the highly collaborative nature of effective PI Planning.
- No Inspect and Adapt
Attendees at Leading SAFe have been observed marking up The Big Picture in class with their own intended customisations, and crossing out all the I&A ceremonies. Continuous learning and improving is as integral to SAFe as to any Lean or Agile approach. And without it, it’s going to be impossible to recover from this mis-step.
- No Consistent System Demo
Just as I&A provides a feedback loop for process, System Demos provide the feedback for systems, and cultivate the trust in stakeholders that real progress is being made in the shape of working systems.
- Silo-based Teams
While SAFe isn’t as hard-core as LeSS at requiring Feature Teams, it is very clear that teams are Cross-Functional, with the ability to define, implement and test increments of value, and that defining them by layers in the architecture is undesirable. Yet organisations do try to operate their existing siloed teams separately as UI, logic and data teams, with testers and analysts optimised for intra-speciality communication.
It’s clear that such approaches (and more) are very ill advised, carrying significant risk. However, as the last item hints, other elements can very much be customised to fit context, within a defined range of what is known to work repeatably in the overall Team of Teams context.
- Should teams do Scrum or Kanban? Or Scrumban? (Kanban was always acceptable but not well defined as to ‘how’)
- How many iteration cadences should be in a PI? (4 is short but doable; 7 is long)
- How many teams should be on a train? (more than 2, less than 20, depending on size of teams)
- How long should iterations be? (experience is that 3-week iterations and immature teams tend to combine to form waterfall sprints: Design Week, Build Week, Test Week)
In other areas, the best answer to ‘what?’ questions is The Standard Consulting Answer: ‘It Depends’, even if there is a known target pattern (e.g. Feature Teams over Component Teams):
- How should we communicate vision to trains?
- How should we structure our trains?
- How should we split Epics/Features/Stories? (There are a set of known patterns, but which is the best? It Depends)
- How do we increase innovation in our trains?
- How do we improve visual management?
In yet further domains, there is no defined practice beyond some very general advice. All of Change Management in SAFe is reduced to one slide that can be summarised as ‘Kotter! Yay!’, without even taking Kotter’s highly applicable second book into consideration.
On the other hand, there are a very large number of areas where the wider Lean/Agile community of thought has at least patterns worth experimenting with, such as using a repeated pattern of experiments to drive continuous improvement.
From another perspective, SAFe has many patterns that are useful if your context needs them. Biggest and boldest of these is the entire Value Stream layer. If your organisation’s Value Stream is around 100 people or fewer and can be contained within a single Agile Release Train, then YAGNI. On the other hand, if you are a builder of large and complex systems needing multiple ARTs and integrating the work of external Suppliers, then it contains many useful constructs and patterns that certainly help environments with many hundreds (or thousands) of practitioners.
Similarly, organisations in high assurance and regulated environments will probably need some pre-release whole-solution validation and verification effort beyond what’s possible within incremental build. If you’re producing a casual social mobile game experience, again, YAGNI.
Canvas for Customisation
Putting these two ideas together, we have a canvas where we can start understanding where any given SAFe practice sits. At one corner, we have universal, mandatory practices: do this, or you’re not doing SAFe. At the other, we have some highly contextually-bound experiments that need to be Safe-To-Fail (no pun intended) probes, guided by SAFe’s Principles, to help understand what might be useful in this organisational, situational and temporal context. And we recognise that what might be The Right Answer today may not be tomorrow.
Or, if you prefer it in Cynefin terms, we have a range of domains from Obvious to Complex (and, very specifically, domains that have fallen off the Obvious cliff into Chaos and need rescuing urgently).
SAFe should articulate this Principles-informed Customisation Canvas, with three primary areas: Recipes (Do This), Patterns (Explore These Options) and Experiments (Try This).
As practices become more established, their boundaries better explored, and their outcomes more reliably repeatable, they will move towards stronger degrees of recommendation, from ‘Try’ to ‘Explore’ to ‘Do’, and more effective exploitation (or, if you prefer, from Enabling to Governing constraints and ultimately Rigid ones).
- This piece originated from a couple of sessions at the 2016 SAFe Leadership Retreat, exploring SAFe’s customisation. My thanks to fellow participants in these, including Kerry Fullagar, Eric Willeke and Dean Leffingwell.
- It has some inspiration (or possibly hindsight congruence) from LeSS’s Rules, Guides and Experiments structure, and I thank Elad Sofer and Jim Tremlett for introducing it to me.
- There is a conscious link to Cynefin Dynamics. Thanks as always to Liz Keogh, Chris Matts, Marc Burgauer and ultimately Dave Snowden for getting this through my thick skull.