Personal appraisals in an Agile team, hmm…

Early on in any Agile transformation there will be people talking about teams; the 11th principle speaks of self-organising teams. There are lots of recommendations and articles about what the team should look like: how big, what skills, and how to address team dysfunctions. Even I have got in on the act here, but I am not going to go down that path now.

But once you have this team, ready and willing, how do you keep them fresh and motivated; and, importantly for the parent organisation, how do you manage these people? It is trite to say, “they need no management, they are self-organised”. That is to wilfully ignore the practical realities of their context, which will often be within a larger organisation with objectives, hierarchy, annual reviews, performance management, training budgets, bonus pools etc. Whilst it may be noble to preach that such practices are antiquated and should be swept away in the birth of a new utopia, many people will look around them and hunker down, painfully aware that the second coming isn’t coming anytime soon, and when it does arrive, it will knock on their door last…

Given the constraints of reasonably standard HR practices, what can be done to ensure that the performance management practices support your agile transformation, rather than work against it?

What kind of people do you want working for you, and how can you assess people in such a fashion that the right type of people are rewarded and the development areas of each of them are identified? I really like this simple view:

People types

So how do you find your “Team Picks” – does your annual review process really highlight them?

Many companies have an annual review cycle which, with the best will in the world, suffers from both immediacy and confirmation biases. In other words, given all the information now available to me I will award you the grade that I was going to anyway, unless something dramatic has just happened. Consistent behaviour contrary to my preconceived notion, that hasn’t had significant recent impact, will be ignored.

To improve matters I advocate two practices:

  • Regular structured 360 feedback
  • Team based assessment

360 Feedback

Within teams, have each member give feedback on one other team member each iteration, on rotation. By year end there will be a comprehensive assessment of each person, where each contribution comes from a different premise and has focused only on recent events. It turns the feedback collation activity from a “year-end” exam into “ongoing coursework”. Making it a continual activity also means that you can start to consider it in the context of what it takes to sustain your team, ensuring that this process is as sustainable as your automated test approach or your technical documentation.
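To make the rotation concrete, here is a minimal sketch (a hypothetical illustration, not a tool the post prescribes) of a round-robin schedule: in iteration k, member i gives feedback on member (i + k) mod n, so over n − 1 iterations everyone has reviewed every other member exactly once.

```python
def feedback_rotation(team):
    """Round-robin feedback pairs: in iteration k, member i reviews
    member (i + k) % n. After n - 1 iterations, everyone has given
    feedback on every other team member exactly once."""
    n = len(team)
    return [
        [(team[i], team[(i + k) % n]) for i in range(n)]
        for k in range(1, n)
    ]

# Example: a four-person team yields three iterations of four pairs.
for it, pairs in enumerate(feedback_rotation(["Ann", "Ben", "Caz", "Dee"]), 1):
    print(f"Iteration {it}:", ", ".join(f"{g} -> {r}" for g, r in pairs))
```

The names are invented; the point is only that the rotation is mechanical, fair, and needs no ongoing administration.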

Having done this, the only caveat I had to put in place is that each person is also assessed at the end of the year on the quality of the feedback they have given: is it full enough, is it balanced, is it actionable? Without this, the feedback will tend towards either mutually assured destruction or a collective congratulation circle – usually, thankfully, the latter.

Team based assessment

Secondly, you can assess the team collectively, where the only individual measure is an assessment of how much effort each person has invested in supporting the team. How “teamy” are you? How much focus have you placed on “teaminess”, to use two invented words? (This is in addition to the individual feedback assessment mentioned earlier.)

A team assessment means that the team either does well or poorly, they should celebrate success together, or share the learnings, usually a little of both. When your peer does something great, you not only feel good for them but they share that feeling with you, because they know that you had a hand in it somewhere. This collective identity is permissive and contagious, once it starts to catch on, it typically grows deeper. This isn’t to say that every piece of work is shared amongst everyone, that would be onerous, but that everyone is happy with everything going on around them and would be willing to put their name under it.

We are social creatures, and the ways in which we interact with others are often more important to our ultimate success than any innate talent in a specific area. Practices and policies that reward positive collective interactions will benefit both the individual and their ecosystem. A team assessment encourages behaviours such as the most talented actively supporting the development of the least able, and will help each individual ride out their own personal highs and lows. When you increase the sample size using an average you flatten out the peaks and troughs of success and struggle, and if you can get each team member to feel in line with the team, you avoid any deep personal lows. Naturally you’ll also dampen the individual highs, which might not be ideal for the person involved but can cause confidence or jealousy issues for their peers. In really mature Agile teams it is more than success that is shared, it becomes delivery; at that point there is no individual high, because no individual acted or delivered without the contribution of their peers.
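The flattening effect of an averaged team score can be shown with a toy calculation (the scores below are invented purely for illustration):

```python
from statistics import mean

# Hypothetical per-iteration scores (1-10) for four team members.
scores = {
    "Ann": [9, 4, 8, 5],  # big individual highs and lows
    "Ben": [6, 7, 5, 6],
    "Caz": [3, 8, 7, 4],
    "Dee": [7, 5, 6, 9],
}

# The team score each iteration is the mean of the individual scores.
team = [mean(iteration) for iteration in zip(*scores.values())]

for name, s in scores.items():
    print(f"{name}: swing {max(s) - min(s)}")       # individual peak-to-trough
print(f"Team: swing {max(team) - min(team):.2f}")   # much flatter
```

The individual swings here are 2 to 5 points, while the team swing is half a point: the average dampens both the deep lows and the jealousy-inducing highs.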


This type of approach really hammers home the kind of cultural ethos you are trying to promote and helps to show that you are placing as much emphasis on how work is conducted as on the delivery result.


About me:

I am Phil Thompson, an Agile Consultant. I have worked at many places, in many industries and with many people; mostly in Europe, mostly in the UK, mostly in London. My opinions are my own, shaped and challenged by the people and companies I have been fortunate to work with over the past fifteen years.

You can reach me at @philagiledesign or LinkedIn


How company culture changes with product maturity

I recently attended a seminar with Kent Beck, a giant of the software delivery world and the father of eXtreme Programming. The topic was the need to adapt our delivery approach based on where the organisation sits within a growth pattern of “Explore”, then “Expand” and finally “Extract”. This progression relates to financial models: investment versus return, against risk.

3X diagram (image credit: @antonyMarcano)

The progression of the product connects to another standard of product theory, BCG’s Growth Share matrix: a portfolio planning model used to divide products along two axes, market growth and market share.

BCG Matrix

Credit @BCG

The diagram shows the progression from “Question Mark” to either “Star” or “Dog”, with the “Stars” finally becoming the “Cash Cows” (the Dogs having been killed off along the way). I believe the flow of “Question” to “Star” to “Cash Cow” is somewhat analogous to “Explore”, “Expand” and “Extract”, and in both cases is a reflection of the product lifecycle. What Kent Beck has explored is the consequence for organisational psychology of having products at the various phases.

In both models what is happening is a change of risk appetite, reflective of the changing balance between investment and return, and the consequence of failure.

I have merged the two ideas to make:

  • Exploring Questions: where we chase an unlikely but very high return on low investment, for a growing product with a small market share
  • Expanding Stars: where we see a reasonable likelihood of a substantial return on significant investment, for a product with a rapidly growing market share
  • Extracting Cows (an unfortunate mental model): where we see a very likely but small return on a large investment, for a product that is a stable market leader
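As a sketch of the merged model (the 0.5 cut-offs are arbitrary illustrations of mine, not part of either BCG’s matrix or Kent Beck’s 3X), the quadrants could be expressed as:

```python
def classify(growth: float, share: float) -> str:
    """Map normalised market growth and market share (0-1) onto the
    merged BCG / 3X quadrants. The 0.5 thresholds are illustrative."""
    if growth >= 0.5:
        return "Exploring Question" if share < 0.5 else "Expanding Star"
    return "Dog" if share < 0.5 else "Extracting Cow"

print(classify(0.9, 0.1))  # high growth, tiny share -> Exploring Question
print(classify(0.8, 0.7))  # high growth, big share  -> Expanding Star
print(classify(0.1, 0.9))  # low growth, big share   -> Extracting Cow
```

The interesting argument is not the classification itself but that each quadrant implies a different risk appetite, and hence a different delivery approach.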

Given this framework, product development on Exploring Questions is typically undertaken in such small groups that the relationships between team members can be sustained without any process framework, and the association between business and tech, between demand and supply, is so tight and immediate that even the most light-touch Agile frameworks appear cumbersome. The best practices here are those that weed out the Dogs quickly: User Centred Design and Experience, fail-fast experiments etc.

Most of the practices that we see in an Agile delivery I would describe as suited to the Expanding Star world: rapid gains with constant feedback to ensure we are on track. The key here is sustainable delivery, staying ahead of the pack. It is said that success is always a surprise, because if it wasn’t, someone else would be doing it; in this phase product development is trying to increase revenue and market share to ensure the idea, the surprise, remains theirs.

I see the Extracting Cow phase as markedly different; the risk profile and consequences of failure here are much more akin to traditional engineering disciplines. You can’t decide to add a swimming pool to the 30th floor of a building after completing the first 28 storeys. Product delivery in this phase is much more likely to adopt frameworks and governance that avoid failure rather than aim for rapid gain, and we say hello again to our old friend Waterfall.

To use a different analogy, to see the changing behaviour against the consequence of failure consider the following epic conflicts:

  • In the novel “Lord of the Rings” you have a plucky few taking on an uncountable, near unstoppable foe, with a slim chance of success but the massive return of saving the world. They are gung-ho in their exploratory approach, making it up as they go along. The consequence of inaction is fatal. They have everything to play for and, in the scope of Middle-earth, not a lot to lose.
  • The Roman Empire between 150 BC and AD 50 had one of the finest fighting forces on the planet, certainly superior to anything nearby. The gains from conquest were substantial and the likelihood of victory high, which led to an enormous expansion. They expanded because if they didn’t, someone else would; they didn’t risk losing what they had, they risked losing the opportunity for more. They were systematic, organised and effective. Their only downfall was the sustainability of holding onto the new territory: grow too far, too fast, and the cost to operate without sound investment can exceed the returns, and you fall.
  • In the 1983 film “War Games”, a computer responsible for nuclear war simulation eventually accepts that the only viable course of action is to take no action. The only winning move is not to play. The consequence of failure is too great.

For an Agile Coach it is important to understand how this product evolution impacts our ability to work with organisations; how the mindset shaped by the product lifecycle stage transcends the product and comes to dominate the parent organisation. Typically small, fast companies grow on the back of a successful product and become large, risk-averse corporations. It is as if each company has one product and slowly changes its organisational psychology to match that single product’s evolution. Facebook changing its motto from “Move fast and break things” to “Move fast with stable infrastructure” is a clear example. If a company were structured along product lines then it would be reasonable to expect the area owning an Extracting Cow to be slow and reserved, whilst an area associated with an Exploring Question would be fast and dynamic, but that isn’t the case.

What happens is that the organisation “matures” with its first product, loses that innovative capability and is trapped either trying to innovate within a highly constrained system or not innovating at all – innovation becomes a timid extension of a secure product. In an attempt to reduce risk, a mature company typically chooses to split out certain components to specialists: silos are created, technical component teams propagate, swathes of middle management appear to ensure that nothing goes wrong – and stagnation sets in.


Many organisations in this state, paralysed and inert but still sustaining huge revenue, consider an Agile transformation as a possible route out. Many coaches work in this environment, and most find culture the biggest barrier: most Agile practices are not suited to this risk-averse mindset, and many Agile coaches have cut their teeth on Exploring Questions or Expanding Stars and are lost in the face of the corporate governance of a gargantuan Extracting Cow. The frustration has prompted so-called “Scaling Frameworks” to appear, which ape Agile values while remaining compliant with the NOT SAFE TO FAIL, siloed environment.

But should the main focus of delivery even be happening there anyway? Back to BCG’s model: Cash Cows should be milked, extracting profits and investing as little as possible. The cash that is derived should be used to fund the next set of Questions and to propel those Stars. Are the mature organisations doing this? Aren’t they all too often ploughing the cash from their Cows back into their Cows, costing a lot for low return? They hold their market share through economies of scale, and the value returned on effort spent is often so low that they deliberately choose not to measure it, so as not to own up to their wastefulness.

I believe large organisations with mature products need a strategy that focuses on introducing exciting new products funded by revenue from their established ones. The investment should be high in the growth areas and cut to a minimum on those products that have reached the Extracting Cow level. Whilst it may be prudent to have a risk-averse approach to the operation and maintenance of the Extracting Cow, that mindset, those organisational dynamics, MUST NOT be replicated in the other product development areas.

Extracting Cows die, eventually. There are plenty of case studies of huge products that have ceased to exist; some have taken their parent companies with them, some haven’t. BCG’s work has shown that for companies to survive the death of an Extracting Cow they must innovate and diversify their product portfolio, using the revenue of their Extracting Cow to fund their next set of Exploring Questions, a few of which will succeed. Invest in tomorrow, rather than hope that today will never end. Kent Beck’s work helps us understand that the organisational approach across the portfolio needs to be variable. Combining the two, I believe it is not sufficient to diversify your portfolio; you must diversify your approach, your governance, maybe even your culture to match. For your Questions to become Stars you need an Explore-based mindset and approach.

Organisations need to reinvent themselves, not just their product offering, in order to survive; the mindset and governance honed to operate and maintain a mature product is an impediment to discovering its replacement.




It is a matter of opinion


Pick a contentious topic – Brexit, Trump, climate change, religious extremism, or even the merits of Justin Bieber – and the chances are you have an opinion. In the IT industry there are many contentious issues: Agile, Scrum, emergent architecture; all emotive topics that are filling up the comments sections of blogs all over the internet. But what makes your opinion right, and what makes others believe it?

For your opinion to be “right”, your hypothesis should logically fit the situation, and that really means YOUR understanding of YOUR situation. Our context, environment and relationships shape our understanding of a situation; two people in vastly different situations can correctly hold widely differing opinions on the same topic, because to each of them their hypothesis holds true in their environment. “Behaviour is a product of practices in a context” (Dan North).

This approach supports the neat little adage that opinion is a matter of perspective. We should go one further: opinion is a PRODUCT of perspective. When we wish to debate between opinions there is no merit in focusing on the opinion itself; that is to try to deny the rational logic of localised cause and effect. Instead our attention should turn to the perspective, to that person’s context in which the opinion has been formed. “Understanding context is important before you draw any conclusions” (Dave Snowden).

Someone’s opinion that “everyone that voted for Brexit is a racist” could be formed if they have heard a number of people state they are for Brexit using racist terminology and, critically, have had no contact with other Brexit voters that did not express such ideologies. Opinions are underpinned by data; here the data is the number of non-racists encountered that support Brexit. W. Edwards Deming alluded to this in his much-quoted line, “Without data you’re just another person with an opinion”, which Jim Barksdale, former CEO of Netscape, neatly expanded with, “If all we have are opinions, let’s go with mine”. The point is that you can’t argue with the opinion; the argument has to be taken up with the data that underpins it. There are millions of people that believe what Donald Trump says about the Washington establishment, because they have no contrary data.
Critically, to form the most educated opinion, the best opinion, we should seek out data from as many disparate contexts as possible; we need to consciously put ourselves in other people’s shoes, something we don’t usually enjoy and hence rarely do.

Now here is the really challenging point for all of us that hold an opinion: if we agree that opinion is a product of perspective, and perspective is a function of our context, then we accept that our opinions will change if we change our context.

If I change your context, you will change your opinion…

Stating that your opinion won’t change on a topic is to state that you will ignore new data that would logically draw a different conclusion. To say you will never change your opinion on something is to be a zealot. To hold a position based on faith, dogma or belief is to deny rational logic. This is quite a problem today, where changing opinions is seen as a function of weakness, not learning. An “expert” that changes their opinion will have their “expert” status questioned or derided, even though it is their expertise that has enabled them to learn more on a topic and be open to drawing new logical conclusions from their now wider data set.

This can be seen in the Dunning–Kruger effect: as people’s experience grows, so does their exposure to differing contexts, and there comes a point where they have started to change their opinions and, aware that they could change further, are less likely to be forthright about them. Do not trust the Agile coach that prescribes operational process changes without seeing the teams at work; it betrays a narrow contextual exposure.

Our society is becoming increasingly polarised; many have used the term ‘post-factual’. This is a reference to widespread opinions that are unaffected by new data. Our opinion strengthens when we receive new information that aligns with our existing data and weakens in the face of contrary data, although we have a tendency to favour the status quo. We all suffer from confirmation bias, where we disproportionately listen to information that supports our ideas, but it can be overwhelmed; we also have varying levels of trust for data sources. So for opinions to be unchanging, the information people receive must sufficiently support their existing context to suppress any contrary data, especially from untrusted sources. Of course, which data is trusted is an opinion itself…

The problem is that we seek out the company of like-minded people, and increasingly our human relationships are sustained through social media, which is underpinned by algorithms that prioritise interactions between demographically similar people, because that generates more content and revenue. This means we are increasingly exposed to a defined subset of data: separate groups of people consistently reinforcing their opinions with separate datasets. This is referred to as being in an echo chamber.

Echo chambers are dangerous because they are comfortable. It is reassuring to be surrounded by people that agree with you, and you become blind to alternative contexts. Echo chambers are where innovation dies for lack of disruptive challenge. They are invisible ideological prisons.

We need to constantly challenge our opinions; be open to the fact that they will change, and be open and honest when they do. Seek out different perspectives, speak to people who don’t agree with you and try to understand the situations they find themselves in. Be measured when giving your opinion and support it with data to help others understand your perspective. Strongly asserting unsubstantiated opinions rarely achieves anything: either you are speaking to someone who already agrees with you, or to someone that can’t understand you.

What’s your opinion?


Agile is consuming itself

The biggest threats to wholesale Agile adoption within our business society don’t come from a counter-proposal; they come from within. The failings of previous approaches are well known and well documented, and in fact have been since their inception, but everyone muddled through for lack of an alternative. There isn’t going to be a resurgence in support for the “good old days”; too many people can prove they weren’t that good. Nor do I imagine a new way, a utopian enlightenment, dawning upon us, from which point all programme delivery becomes risk- and issue-free; there just aren’t sufficient unexplored paradigms in our approach.

If the agile movement is to die, to collapse, it will do so inwards, on itself and from within. It will suffer the fate of Robespierre, the French revolutionary who rose to power through a fervent belief in equality and support for those that had been excluded and repressed under royal tyranny. His passion and success made him increasingly blind to the consequence of his unyielding beliefs and the presence of those that coveted his position. Eventually those that would usurp him turned the populace to revile the fanatical dogma that had wrought so much terror in the name of social progress, and he met the same end that he had brought about for the late king, a short drop from Madame Guillotine.

I suggest the dangers lie in three areas: the ignorant, the exploitative and the manipulative. In all cases the issue is misinterpretation of sound, decent values, either innocently or more malevolently.

The first case is ignorance. This is a hard truth I have had to admit to myself, and I am reassured to read postings from other thought leaders I admire who have humbled themselves in a similar fashion – see here – which has given me the confidence to come clean. Years ago I probably was this person, the well-intentioned but ignorant zealot; armed with too little understanding or experience of Agile values and human politics, and too much theory and process definition. I was that guy howling into the wilderness, standing on Dunning–Kruger’s Mount Stupid. You may relate to these kinds of transformation attempts: process- and terminology-centric, backed by dogma and rhetoric, applied through contextless retrospective coherence. Trying to change behaviours and practices through process is like trying to turn the quiet, shy girl at the back of the class into the lead cheerleader by tossing her a costume and a couple of pom-poms.

The second case is where the revenue that talented individuals can generate by supporting an Agile transformation becomes a motivator in itself, for them and for others. When the Agile philosophy becomes a commercial opportunity, predictable but none-too-pleasant behaviours start to emerge: pyramid-style certification schemes, and attempts to commoditise processes and supporting tooling for the purpose of revenue rather than stakeholder value. The worst excesses can be seen in those offerings that do little more than relabel existing, familiar enterprise operations with new “Agiley” terminology and a supporting licence fee. This undermines the Agile principles by dragging them down to something much closer to the status quo for the purpose of profit.

The last case is the most dangerous: those that speak in our name to further their own agendas. The butt of many a Dilbert joke – “Welcome to Agile – stop documenting anything and now you can work faster”. This is the wrecking ball of Agile, or more usually Scrum, wielded by paranoid, power-hungry, non-technical managers who feel they now have a weapon to use against their intractable, awkward IT colleagues. Teams have been made to work longer and harder, with less control, fewer standards and more interference, all in the name of Scrum. New developers have been born into this environment and are left believing that this is normal, while more experienced developers resent the dumbing down of their industry and rage against the framework because they are powerless against their management. There are hundreds of comments on blog boards from people decrying Scrum through valid complaints about business practices that bear no resemblance to Scrum.

Now imagine all three together: well-intentioned but ignorant Scrum Masters being manipulated by untrusting and overly ambitious management to deliver the impossible, at the expense of the developer workforce, cheered on by a process, tooling and certification industry laughing all the way to the bank. The end result will be a profitable industry of failing projects and people, in a slightly different way to twenty years ago, and critically no real improvement in the enterprise project success rate.

So what is to be done? As consultants working on Agile transformation, are we like a few conservationists, trying to save what is left with the grim knowledge that it won’t be enough against the rampant consumerism, selfishness and apathy of humankind?

We have to continue; to give up would be a dereliction of duty, and most of us have skin in the game ourselves now – we are part of the problem even as we try to point the finger elsewhere. Firstly, we should point out misrepresentation of Agile wherever we see it. We need to stop preaching and learn a little humility; those that teach Agile theory and concepts should end each class with this statement: “you now know a lot less than you think you do and are now capable of a lot more damage than you can imagine”. For those working in environments that are Agile in name only, call this out; transformation to Agile may be beyond your means, but at least stop calling it Agile, so as not to further tarnish what was once a noble ideology. We need to focus on delivering value, on return for our clients, not for ourselves. Be honest and ethical about the contracts we take and the companies we work for.

I like the proposition that someone attributed to McKinsey (I don’t know if correctly): that if we focus on delivering value to our clients rather than to ourselves, the money will flow anyway.



Use Service Design as a tool to challenge

Some would argue that Service Design has been around for ages; those people that were designing and developing great products years ago were doing exactly this, they just didn’t take the opportunity to name it, package it and market it. Service Design, as an industry, could be dismissed as the latest reincarnation of common sense – yet if it really was so simple and obvious, why weren’t we all doing it? Oh, how we titter at the common masses for their foolishness, uncomfortable in the knowledge that we were just lucky.

Modern Service Design principles and practices are at their most effortless when there is a prevailing wind supporting those activities and their timeline, and there is a very clear vision that focuses on outcomes. Service Design is a structured approach to ensure that users are able to achieve what they need, from their initial desire to the final outcome. Within IT projects it starts with upstream investigations to ensure that what is delivered will fit neatly into the fuller user experience and then manifests more as a user-centric culture from that point on, constantly focusing on the differential between what the user has and what they need. It involves activities such as identifying the users, understanding why they want something, what they are currently doing and how they would naturally approach their need.

My experience with Service Design is less about creating great products and more about identifying and exposing poorly-thought-through projects. If you follow a Service Design approach it is very hard to accept a long list of requirements without confidence that they will deliver something appropriately sized and in the users’ best interests.

It is more common in a supplier–client relationship than in an in-house build to feel the need (and have the opportunity) to challenge the prescribed solution on the table. “We, the business, have decided we need this widget – please build the widget…” This request now usually elicits a slow “Okaaaaaaaaaaaaaaaaay” from me.

The trap in front of you is to ask the obvious question: “tell me about this widget”. The right question is, “tell me why this widget will solve your problem”. It could be that the response is a full, well-researched and documented study on user behaviours and needs, with a few supporting usability studies on prototypes, which come neatly packaged with a user researcher to join your team. I say “it could be…”, but really, that isn’t my experience. Careful questioning usually exposes weak assumptions, and by pushing a Service Design strategy you can bring everyone onto a common path, avoiding too much conflict or loss of face.

Projects that proceed without a good foundation on Service Design (or common sense as it was called before it got a name) typically end in one of three situations:

  • Successful with substantial changes during delivery
  • Successful but over-engineered and expensive – and usually late
  • Abandonment

An immediate focus on the widget proposal on the table will typically take you down one of these paths. I’ve been there; don’t go there.

Start with the problem


I have often witnessed teams getting themselves into trouble by focusing more on activity than value. Many people, especially those lost in the middle of a formal hierarchy, are appeased by people doing stuff; it almost doesn’t matter what the team are doing as long as they are busy. It is back to that old attendance-over-performance metric.

Teams being busy and working hard is only a problem if what they are working on cannot be traced directly to the problem. It is normally the case that their activity can be traced to a request to do something, but tracing a level deeper, to the underlying problem, is where issues arise. To make things harder, the consequence of the discrepancy between problem and activity isn’t seen until late, when users expect something to change and the new stuff brings the usual change management but fails to solve the original problem.

There is still too much focus on WHAT over WHY. I support the idea that when writing user stories we should start with the “So that…” to force the point – but that assumes the stories even have that line at the end.

I suggest starting each sprint planning or backlog refinement or coaching engagement or frankly anything, with this simple mantra:

What is the problem we are trying to solve, how are we measuring it, and by that measure what is our definition of success?

The first answer to the “WHY” question often elicits a rephrasing of the deliverable: “I want a widget so that I have a widget.” Yes, OK, but WHY do you want the widget? What does the widget enable? What does it give you that you don’t currently have? Who actually benefits from this widget? Why is the budget holder going to pay the team for the widget? You really have to get to the heart of the problem, and this can be a difficult conversation – because, and this is the scary part, the people responsible for delivering the project haven’t fully understood the problem. Highlighting this after the project has started can be uncomfortable, as it could be seen to reflect poorly on the project leaders. It can be the case that projects continue blindly just to save face.

The way in which backlogs are usually described is a cascade of big deliverables to solve problems, called “Epics”, that are broken down into associated stories. Many teams have lots of stories but have lost the association to the parent Epic – to the purpose. This leaves lots of activity, lots of well-meaning work, but a massive risk that the work lacks direction and will not deliver the value that the effort deserves.

Slow down, be problem centric, not solution centric. This will help align to principle 10 of the manifesto, “Simplicity – the art of maximising the amount of work not done – is essential”. If you just focus on solving the problem rather than delivering the widget – you might find you can deliver a smaller widget!

We need a strategy for this…

Been here before?

“I hear your issue and I think you are right, we need a strategy for this…” (feel free to roll your eyes at this point).

This is typically said in response to an expression of a problem – rather than a request for a strategy; and what is a strategy anyway?


Strategy should be a set of principles, applied continuously, that support decision making and ensure alignment to your objective. It is not (or should not be) a fixed plan that implies excessive Big Design Up-Front. However, I suggest our opening line isn’t referring to either of these; in this case it means something much less. I will rephrase…

“I hear your point and I think you are right, I don’t know what to do. I don’t want to make a decision, because it might be wrong; but I don’t want you to think that I don’t know what to do, and I need you to remain thinking I am important.”


Basically, the word strategy is overused, often thrown in as an opportunity to procrastinate without losing authority.

So how can we help prevent this response from being given (or even from giving it ourselves)?

Firstly, take a stance that doesn’t suggest solutions can be plucked out of thin air and then put through an expensive development process that won’t return against its own risk for months or years. Once the expectation of having to implement a solution with unknown consequences is lifted, it is possible to retain authority whilst investigating, rather than acting.

Next understand the metrics by which the problem, and hence its resolution, can be measured. If you can’t define the metric, then you probably don’t really have your finger on the real problem.

Now suggest an idea that could move the metric in the right direction, and ask those involved what we could do to test whether implementing the idea will have the desired impact on the metric – a safe-to-fail experiment – mindful that most experiments DO fail.

Then do that test. This is a proactive decision to DO something, ACTUALLY DECIDE TO DO SOMETHING. Later assess the findings and then you can decide if the idea is worth progressing with. Now the expensive decision to invest in something is a lot less risky and the deep desire to procrastinate to avoid making a mistake is reduced.


Now you can call this User Research, Lean, MVP, Agile or whatever. I have avoided doing so because I don’t want to elicit an emotive response against poor implementations of these things that lead organisations to state, “We don’t do that here”. This is a call against those situations where enormous time and money are wasted under the word “Strategy”, because it is an excuse to justify doing nothing in the hope that the problem will just evaporate!

The Agile Developer – “they may take our projects, but they will never take our freedom!”

One of the underlying principles of Agile and consequently areas of activity in an Agile transformation programme is Empowerment. I am a big driver of empowerment in my explanations and preferred implementations of Agile delivery – and it takes time.


There are two challenges – firstly, encouraging people who have been micromanaged to step up and take decisions; secondly, encouraging people who until recently have been making those decisions to back off and, at most, facilitate the team discussions that make them.

Enabling those things to occur is a complex and difficult challenge but that isn’t the thrust of this post.

Assuming that this has occurred, we now have a team of empowered professionals who are shaping their world and the delivery of their product. They are, in some sense, free. I have heard Agile referred to as “developer emancipation”. The teams have gained their freedom, and the natural passion this evokes in delivery can be likened to the soaring of a bird or the breaching of a whale – an expression of joy within their environment.


But consider a caged bird – which sang happily in the cage, ignorant of what lay beyond, or of the feeling of the wind beneath its wings. If you set the bird free and encourage it to fly and fend for itself … it will never return to the cage; and if you do capture it and return it to the cage, will it sing as sweetly?


There is a risk in a non-committed Agile adoption that if you develop a genuine Agile culture in a team and then opt (for reasons of holistic organisational cohesion) to roll back to a waterfall model, you are forcing your now “free bird” back into the cage. It is one thing to have never had responsibility or freedom, but to have had it, and then had it taken away, can crush the spirit.

Organisations considering an Agile adoption need to be cognisant of the risks they are undertaking. The change involved is not simple or pain-free; rolling back may suit those that never embraced the values, but for those that did, rolling back will be even more destructive than the initial adoption. I would warn that those who have tasted freedom will not accept confinement, and if the organisation cannot sustain it, or compensate for it, then they may fly outside to freedom.

How to measure your Agile delivery?

I am often asked what to measure in Agile delivery. The common measure appears to be velocity, which I concede is useful to track and is also readily available (assuming Scrum) but, as is well discussed in articles and blogs, it can be dangerous to measure this publicly, as it morphs into something that is judged. It can end up being held as a target for the team, and then Goodhart’s law kicks in: what was useful information is now a manipulated, artificial construct designed to give the desired answer. (Using it internally to help project delivery forecasts is very sensible – just don’t assess the team against it.)

So given that, what would be a sensible answer? I would advise digging a little deeper into the question rather than giving a snap answer. Information is only as good as the decision made off the back of it (and by extension information provision when no decisions will be taken is waste). Why do you want to measure the Agile delivery? And there are a few honest answers to this:

  • I want to see how much value the team are delivering
  • I want to ensure the team aren’t slacking off
  • I want to understand how the team are performing / struggling

Each of these really justifies a different approach.

For Value you need a discussion as to where the organisation perceives value – and this usually causes some uncomfortable moments when comparing existing process to fundamental Agile values: Working Software over Comprehensive Documentation, and working software as the primary measure of progress. The real measures should be associated with the benefits the organisation reaps as a consequence of the software, but that can be a little unfair on the team (as they are not responsible for the requirements) – though critical for the organisation as a whole. Considering just the software delivery, Cycle Time is probably the best measure: how long it takes from idea to delivered software. Measuring, and by extension managing, this will also encourage the organisation to break its deliverables into smaller units. The benefits of the delivered work should still be tracked, to avoid a situation where a highly efficient software delivery outfit rapidly and consistently delivers a stream of valueless changes.

The “I want to avoid slacking off” answer is a difficult one, and is probably always just under the surface; even if they say value, they may really mean, “I need something to beat the team with”. Despite what they say, what we really have here is fear of loss of control: someone who may be accountable but has little influence over delivery. This suggests a structural issue, with people put in management positions abstracted from delivery, and it also suggests a culture where trust is lacking. The Product Managers or Product Owners on the engagement should be close enough to the teams to have an opinion on current activities and levels of engagement. They will have a first-hand understanding of the reasons behind delivery ups and downs (reflected in velocity), associated with complications or setbacks identified in development (perfectly normal and expected in a complex industry). I usually suggest these people provide sprint activity information to the programme, which can decrease over time as the management structure adapts.

The third request is a little refreshing and implies a maturity often lacking. It says: we care about value, we trust the team, and we want to know when to try to help and support them. To understand what to measure here requires a little “Genba” – real-world observation of how the team copes with adversity. Systems are inefficient when operating at 100% capacity: any change to an input variable that worsens the situation will cause failure, so the more dynamic the system, the lower the optimum operating capacity. The difference between the optimum and the maximum could be considered contingency – if you want to support a team, measure the use of that contingency. This will give an indication of when, and by how much, the team is struggling, and therefore when to act. Signals could include overtime, compromises to the Definition of Done, items added mid-sprint, etc.
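
A minimal sketch of what tracking contingency usage might look like. The signal names and weights below are illustrative assumptions, not a standard metric – the value is in the trend from sprint to sprint, not the absolute number:

```python
# Illustrative stress signals and weights (assumptions, tune to your team).
SIGNALS = {
    "overtime_hours": 1.0,          # hours worked beyond the normal week
    "items_added_mid_sprint": 2.0,  # unplanned scope absorbed by the team
    "dod_compromises": 5.0,         # Definition of Done quietly relaxed
}

def contingency_score(sprint: dict) -> float:
    """Weighted sum of stress signals recorded for one sprint."""
    return sum(weight * sprint.get(name, 0) for name, weight in SIGNALS.items())

sprints = [
    {"overtime_hours": 0, "items_added_mid_sprint": 1, "dod_compromises": 0},
    {"overtime_hours": 6, "items_added_mid_sprint": 2, "dod_compromises": 0},
    {"overtime_hours": 14, "items_added_mid_sprint": 3, "dod_compromises": 2},
]

for i, s in enumerate(sprints, start=1):
    print(f"sprint {i}: contingency score {contingency_score(s):.0f}")
```

A steadily climbing score is the early warning that the team is burning contingency – the moment to act, well before velocity ever dips.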

As a metaphor, consider a racing yacht: if you just record the speed, you may miss the unsustainable efforts the team are making to achieve it. Instead, note how many people are hanging off the side of the boat – and how long they have been there. They can’t hold on forever; and if they are made to, the yacht will speed along until they drop off, at which point it won’t slow down, it will capsize.

An Estimate is a guess in a suit, you can do better than that

New project, new team, new opportunities, a fresh start for everyone, a room full of hope and optimism. Then someone senior comes in and asks for an estimate for the full scope of work – and things start to spiral downwards.


There are three answers to this question: the right one, the wrong one, and a refusal to answer. The right answer is impossible to give – even if you do get it right, it will soon be wrong, as the scope, and hence the question, will soon change. Which leads us to the second answer – a wrong one – and in the majority of circumstances this is what is provided.

The problem with an estimate is both the background from where it comes, and what is done with it. If you have a heavily caveated range that is used to inform medium term planning with an awareness of the risk, then that is great. If it is a guess that is treated as a commitment – well we all know the trouble that causes – but people still ask for them.


So firstly, why are we so poor at estimating in the software industry? There are other industries out there that appear to be able to get things done against simple, predictable plans – yes, a few things slip, but houses get built, gas pipes are laid and aircraft get assembled. The important difference is one of experience. Software is repeatable at very little cost and effort – CTRL-C, CTRL-V – which means that the majority of large software projects are, by definition, new: never attempted before. Therefore the solid experience that drives confident project planning in the industrial sectors is absent in the software industry. Software is now largely a creative, knowledge-based activity, like graphic design or management consulting. (It is important to note that other engineering disciplines have the same issues when attempting something unique – automotive design, large bespoke construction projects, etc.)


So what is the solution? It isn’t realistic to respond to every request for an estimate of when something will be ready with a wise frown and, “it will be done when it is done”.

Existing and well-discussed techniques – story points, then breaking user stories down into tasks and estimating those tasks in hours – are a valid approach, but suitable only for the current planning horizon, and infeasible for an entire backlog. Indeed, to attempt it for the entire backlog would require such extensive analysis and design that you are pretty much back at waterfall – the scope will likely change before you finish.


Understanding the issues with estimation is more a psychology challenge than a technology one. Typically we estimate by drawing parallels between the work in question and prior experience, but humans are naturally self-centred and optimistic: we exaggerate the parallels between this work and previous work, undervalue the substantial differences, and have a rose-tinted memory of how it went last time.

The scope of work to be estimated can be considered in the context of the KNOWNS – the same “Knowns” that Donald Rumsfeld referred to on US defence policy, although he only mentioned three of the four.


These are:

  • Known Knowns – deliverables that are well understood; these are the premise on which short-term user story estimates are still valuable.
  • Known Unknowns – deliverables that we know will have issues and problems, where those problems are not yet solved; they could be easy, could be difficult, but fundamentally require investigation to understand. These requirements are the reason we apply contingency, or a margin of error, to the Known Known estimate – but without any real logic as to what that margin should be. The advantage here is that the team will have a fair idea of how to improve their understanding of the total work, and which activities will help them become more precise.
  • Unknown Unknowns – the black swan events. These refer to issues that are not understood and not even known to exist; problems that nobody has even considered, which usually lie well outside the realm of contingency planning. These issues may have minor impact, or could completely derail the delivery.
  • The last, and most pernicious, of the four – and for that reason the one that Mr Rumsfeld wasn’t brave enough to mention – are the Unknown Knowns. These are things we know but choose not to accept or allow for, because recognition is so disruptive to our social construct that it is more comforting to create an illusion in which they do not exist. In wider society, things like state oppression of minorities are often raised as examples; in the less dramatic world of software delivery, stakeholder politics would be a better example. When estimating deliverables it is important to surface these as much as possible and be honest about their influence – they typically suppress estimates.


A solid appreciation of Complexity Theory, and an awareness of the “Knowns”, should enable us to look at the work to be estimated from an informed perspective, and should give us good, communicable reasoning as to why a firm estimate of complex software deliverables beyond our planning horizon is so difficult as to be fruitless. However, good examples of estimation techniques used on complex (unpredictable) systems do exist – the best of these is the weather. The weather later this week is forecast not by a group of experienced meteorologists given today’s information, but by adding that information to all previous information and passing it through a very complex, continually evolving model. This approach can be applied to our software delivery to provide long-term estimates, but now we can appreciate the difference: what we would be providing is no longer an estimate – a guess in a suit – but a forecast. A FORECAST is a statistical likelihood of something happening, given historical information and a set of input data. The Monte Carlo simulation approach is a well-documented version of this; it can be simplified dramatically to be easily employed, and still gives very helpful, and importantly fast, forecasts for software delivery.
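
A minimal sketch of such a simplified Monte Carlo forecast, assuming a hypothetical history of weekly throughput (items completed per week). Each trial replays randomly sampled historical weeks until the backlog is exhausted:

```python
import random

# Hypothetical history: items completed in each of the last 12 weeks.
history = [3, 5, 2, 4, 6, 3, 0, 4, 5, 2, 3, 4]

def forecast_weeks(remaining_items: int, trials: int = 10_000) -> list[int]:
    """For each trial, draw random weeks from history until the backlog
    is done; return the sorted distribution of weeks needed."""
    results = []
    for _ in range(trials):
        todo, weeks = remaining_items, 0
        while todo > 0:
            todo -= random.choice(history)  # one simulated week
            weeks += 1
        results.append(weeks)
    return sorted(results)

runs = forecast_weeks(40)
# Report percentiles, not a single date: "85% of simulations finished
# within N weeks" is a forecast, not a guess in a suit.
for pct in (50, 85, 95):
    print(f"{pct}% of trials finished within {runs[len(runs) * pct // 100]} weeks")
```

Because it only needs a throughput history, the model improves automatically every week the team delivers.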

All forecasting tools rely on data, so before any forecast can be delivered the team need to make a start on the delivery and record their performance. Once the team have delivered 10 items, as long as those items were not chosen based on expected size, the probability of the next item taking longer than any previous item is only 1 in 11 – roughly 9%. Assessing the full list of deliverables in this light – taking the 50% mark – would enable a team to rapidly give a most likely forecast and bound it by X% either way. Then, after each additional item is delivered, the model is improved and the remaining work reforecast, refining the given result.
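
The order-statistics reasoning here can be sanity-checked with a quick simulation (an assumption of exchangeable, independently drawn delivery times): with n prior items that were not cherry-picked by size, the chance the next item exceeds them all is 1/(n+1), whatever the underlying distribution:

```python
import random

def prob_next_is_longest(n: int, trials: int = 100_000) -> float:
    """Estimate the chance that item n+1 takes longer than all n before it."""
    hits = 0
    for _ in range(trials):
        # Exponential durations are an arbitrary choice; the 1/(n+1)
        # result holds for any continuous distribution.
        durations = [random.expovariate(1.0) for _ in range(n + 1)]
        if durations[-1] > max(durations[:-1]):
            hits += 1
    return hits / trials

print(f"estimated probability: {prob_next_is_longest(10):.3f}  (theory: {1/11:.3f})")
```

This is why even a short delivery history gives surprisingly usable bounds on what the next item will cost.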


So when someone asks your team for an estimate, the first thing to do is have a discussion to see whether this work could be described as a KNOWN KNOWN, deliverable within your planning horizon. If so, proceed with a breakdown into user stories and story points, compare to velocity, and give a duration with heavy margins of error.
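
As a worked example of that arithmetic (all numbers hypothetical), a 120-point breakdown set against an observed velocity range gives a duration band, not a date:

```python
# Hypothetical backlog and observed velocity spread, not a real project.
backlog_points = 120
velocity_low, velocity_high = 15, 25   # points per sprint, observed range
sprint_weeks = 2

best_case = backlog_points / velocity_high    # fewest sprints needed
worst_case = backlog_points / velocity_low    # most sprints needed

print(f"between {best_case:.1f} and {worst_case:.1f} sprints "
      f"({best_case * sprint_weeks:.0f}-{worst_case * sprint_weeks:.0f} weeks)")
```

Quoting the whole band keeps the margin of error visible; quoting only the midpoint turns it straight back into a guess in a suit.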

If the work is less well defined or substantially larger, then divide it up and compare against your historical delivery through a statistical model. If you have no model, because you are a new team or the work is completely different to anything previously undertaken, then you have to have the awkward but honest discussion, explaining that you can’t give an estimate until after you have started: “give us a month to deliver something of use and we’ll then be able to understand enough to give future projections”. If that isn’t acceptable, then I suppose you could guess what you think they want to hear, and revise that figure after a month or so with something more credible – good luck with that.


Keep in touch on #philagiledesign