It is a matter of opinion


Pick a contentious topic: Brexit, Trump, climate change, religious extremism, or even the merits of Justin Bieber, and the chances are you have an opinion. The IT industry has plenty of contentious issues of its own – Agile, Scrum, emergent architecture – all emotive topics filling up the comments sections of blogs all over the internet. But what makes your opinion right, and what makes others believe it?
For your opinion to be “right”, your hypothesis should logically fit the situation – and that really means YOUR understanding of YOUR situation. Our context, environment and relationships shape our understanding of a situation; two people in vastly different situations can correctly hold widely differing opinions on the same topic, because to each of them their hypothesis holds true in their environment. “Behaviour is a product of practices in a context” (Dan North).

This supports the neat little adage that opinion is a matter of perspective. I would go one further: opinion is a PRODUCT of perspective. When we wish to debate between opinions there is no merit in attacking the opinion itself – that is to deny the rational logic of localised cause and effect. Instead our attention should turn to the perspective, to the context in which that person’s opinion was formed. “Understanding context is important before you draw any conclusions” (Dave Snowden).
Someone’s opinion that “everyone that voted for Brexit is a racist” could be formed if they have heard a number of people support Brexit using racist terminology and, critically, had no contact with other Brexit voters who did not express such ideologies. Opinions are underpinned by data; here the data is ‘the number of non-racists encountered that support Brexit’. W. Edwards Deming alluded to this in his much-quoted line, “Without data you’re just another person with an opinion”, which Jim Barksdale, former CEO of Netscape, neatly expanded with, “If all we have are opinions, let’s go with mine”. The point is that you can’t argue with the opinion; the argument has to be taken up with the data that underpins it. There are millions of people that believe what Donald Trump says about the Washington establishment because they have no contrary data.
Critically, to form the most educated opinion – the best opinion – we should seek out data from as many disparate contexts as possible. We need to consciously put ourselves in other people’s shoes, something we don’t usually enjoy and hence rarely do.
Now here is the really challenging point for all of us that hold an opinion: if we agree that opinion is a product of perspective, and perspective is a function of our context, then we accept that our opinions will change if we change our context.
If I change your context, you will change your opinion….
Stating that your opinion on a topic won’t change is to state that you will ignore new data that would logically draw a different conclusion. To say you will never change your opinion on something is to be a zealot; to hold a position based on faith, dogma or belief is to deny rational logic. This is quite a problem today, where changing an opinion is seen as weakness rather than learning. An “expert” that changes their opinion will have their “expert” status questioned or derided, even though it is their expertise that has enabled them to learn more about a topic and be open to drawing new logical conclusions from their now wider data set.
This can be seen in the Dunning–Kruger effect: as people’s experience grows, so does their exposure to differing contexts, and there comes a point where people have started to change their opinions and, aware that they could change further, are less likely to be forthright about them. Do not trust the Agile coach that prescribes operational process changes without seeing the teams at work; it betrays a narrow contextual exposure.
Our society is becoming increasingly polarised; many have used the term ‘post-factual’, a reference to widespread opinions that are unaffected by new data. Our opinion strengthens when we receive new information that aligns with our existing data and weakens in the face of contrary data, although we have a tendency to favour the status quo. We all suffer from confirmation bias, disproportionately listening to information that supports our ideas, but it can be overwhelmed; we also have varying levels of trust in data sources. So for opinions to be unchanging, the information people receive must sufficiently support their existing context as to suppress any contrary data, especially from untrusted sources. Of course, which data is trusted is an opinion itself…
The problem is that we seek out the company of like-minded people, and increasingly our human relationships are sustained through social media, underpinned by algorithms that prioritise interactions between demographically similar people because that generates more content and revenue. This means we are increasingly exposed to a defined subset of data: separate groups of people consistently reinforcing their opinions with separate datasets. This is referred to as being in an echo chamber.
Echo chambers are dangerous because they are comfortable. It is reassuring to be surrounded by people that agree with you and you become blind to alternative contexts. Echo chambers are where innovation dies due to lack of disruptive challenge. They are invisible ideological prisons.
We need to constantly challenge our opinions; be open to the fact that they will change, and be open and honest when they do. Seek out different perspectives, speak to people who don’t agree with you, and try to understand the situations they find themselves in. Be measured when giving your opinion and support it with data to help others understand your perspective. Strongly asserting unsubstantiated opinions rarely achieves anything: either you are speaking to someone who already agrees with you, or to someone who can’t understand you.

What’s your opinion?

About me:

I am Phil Thompson, an Agile Consultant. I have worked at many places in many industries and with many people, mostly in Europe, mostly in the UK, mostly in London. My opinions are my own, shaped and challenged by the people and companies I have been fortunate to work with over the past fifteen years.

You can reach me at @philagiledesign or LinkedIn


Agile is consuming itself

The biggest threats to wholesale agile adoption within our business society don’t come from a counter-proposal; they come from within. The failings of previous approaches are well known and well documented – and have been since their inceptions – but everyone muddled through for lack of an alternative. There isn’t going to be a resurgence in support for the “good old days”; too many people can prove it wasn’t that good. Nor do I imagine a new way, a utopian enlightenment dawning upon us, from which point all programme delivery becomes risk- and issue-free; there just aren’t sufficient unexplored paradigms in our approach.

If the agile movement is to die, to collapse, it will do so inwards, on itself and from within. It will suffer the fate of Robespierre, the French revolutionary who rose to power through a fervent belief in equality and support for those that had been excluded and repressed under royal tyranny. His passion and success made him increasingly blind to the consequence of his unyielding beliefs and the presence of those that coveted his position. Eventually those that would usurp him turned the populace to revile the fanatical dogma that had wrought so much terror in the name of social progress, and he met the same end that he had brought about for the late king, a short drop from Madame Guillotine.

I suggest the dangers lie in three areas: the ignorant, the exploitative and the manipulative. In all cases the issue is misinterpretation of sound, decent values, either innocently or more malevolently.

The first case is ignorance. This is a hard truth I have had to admit to myself, and I am reassured to read postings from other thought leaders I admire who have humbled themselves in a similar fashion – see here – which has given me the confidence to come clean. Years ago I probably was this person: the well-intentioned but ignorant zealot, armed with too little understanding or experience of Agile values and human politics, and too much theory and process definition. I was that guy howling into the wilderness, standing on Dunning–Kruger’s Mount Stupid. You may relate to these kinds of transformation attempts: process- and terminology-centric, backed by dogma and rhetoric, applied through contextless retrospective coherence. Trying to change behaviours and practices through process is like trying to turn the quiet, shy girl at the back of the class into the lead cheerleader by tossing her a costume and a couple of pom-poms.

The second case is where the revenue generated by talented individuals supporting an agile transformation becomes a motivator for themselves and others. When the Agile philosophy becomes a commercial opportunity, predictable but none-too-pleasant behaviours start to emerge: pyramid-style certification schemes, and attempts to commoditise processes and supporting tooling for the purpose of revenue rather than stakeholder value. The worst excesses can be seen in those offerings that do little more than relabel existing, familiar enterprise operations with new “Agiley” terminology and a supporting licence fee. This undermines the Agile principles by dragging them down to something much closer to the status quo for the purpose of profit.

The last case is the most dangerous: those that speak in our name to further their own agendas. The butt of many a Dilbert joke – “Welcome to Agile: stop documenting anything and now you can work faster”. This is the wrecking ball of Agile, or more usually Scrum, wielded by paranoid, power-hungry, non-technical managers who feel they now have a weapon to use against their intractable, awkward IT colleagues. Teams have been made to work longer and harder, with less control, fewer standards and more interference, all in the name of Scrum. New developers have been born into this environment and are left believing that this is normal, while more experienced developers resent the dumbing down of their industry and rage against the framework because they are powerless against their management. There are hundreds of comments on blog boards decrying Scrum through valid complaints about business practices that bear no resemblance to Scrum.

Now imagine all three together: well-intentioned but ignorant Scrum Masters, manipulated by untrusting and overly ambitious management to deliver the impossible at the expense of the developer workforce, cheered on by a process, tooling and certification industry laughing all the way to the bank. The end result will be a profitable industry that fails projects and people in a slightly different way to twenty years ago, and, critically, no real improvement in the enterprise project success rate.

So what is to be done? As consultants working on Agile transformation, are we like a few conservationists, trying to save what is left with the grim knowledge that it won’t be enough against the rampant consumerism, selfishness and apathy of humankind?

We have to continue; to give up would be a dereliction of duty, and most of us have skin in the game ourselves now – we are part of the problem even as we try to point the finger elsewhere. Firstly, we should point out misrepresentation of Agile wherever we see it. We need to stop preaching and learn a little humility; those that teach Agile theory and concepts could end each class with this statement: “you now know a lot less than you think you do and are now capable of a lot more damage than you can imagine”. If you are working in an environment that is Agile in name only, call it out; transformation may be beyond your means, but at least stop calling it Agile, so as not to further tarnish what was once a noble ideology. We need to focus on delivering value – on return for our clients, not for ourselves – and be honest and ethical about the contracts we take and the companies we work for.

I like the proposition attributed (I don’t know if correctly) to McKinsey: if we focus on delivering value to our clients rather than to ourselves, the money will flow anyway.




Use Service Design as a tool to challenge

Some would argue that Service Design has been around for ages: the people designing and developing great products years ago were doing exactly this, they just didn’t take the opportunity to name it, package it and market it. Service Design, as an industry, could be dismissed as the latest reincarnation of common sense – but if it really was so simple and obvious, why weren’t we all doing it? Oh, how we titter at the common masses for their foolishness, uncomfortable in the knowledge that we were just lucky.

Modern Service Design principles and practices are at their most effortless when there is a prevailing wind supporting those activities and their timeline, and there is a very clear vision that focuses on outcomes. Service Design is a structured approach to ensure that users are able to achieve what they need, from their initial desire to the final outcome. Within IT projects it starts with upstream investigations to ensure that what is delivered will fit neatly into the fuller user experience and then manifests more as a user-centric culture from that point on, constantly focusing on the differential between what the user has and what they need. It involves activities such as identifying the users, understanding why they want something, what they are currently doing and how they would naturally approach their need.

My experience with Service Design is less about creating great products and more about identifying and exposing poorly-thought-through projects. If you follow a Service Design approach it is very hard to accept a long list of requirements without confidence that they will deliver something appropriately sized and in the users’ best interests.

It is more common in a supplier–client relationship to feel the need (and have the opportunity) to challenge the prescribed solution on the table than in an in-house build. “We, the business, have decided we need this widget – please build the widget…” This request now usually elicits a slow “Okaaaaaaaaaaaaaaaaay” from me.

The trap in front of you is to ask the obvious question, “tell me about this widget?” The right question is, “tell me why this widget will solve your problem?” It could be that the response is a full, well-researched and documented study of user behaviours and needs, with a few supporting usability studies on prototypes, all neatly packaged with a user researcher to join your team. I say “it could be…”, but really, that isn’t my experience. Careful questioning usually exposes weak assumptions, and by pushing a Service Design strategy you can bring everyone onto a common path while avoiding too much conflict or loss of face.

Projects that proceed without a good foundation in Service Design (or common sense, as it was called before it got a name) typically end in one of three situations:

  • Successful with substantial changes during delivery
  • Successful but over-engineered and expensive – and usually late
  • Abandonment

An immediate focus on the widget proposal on the table will typically take you down one of these paths. I’ve been there; don’t go there.

Start with the problem


I have often witnessed teams getting themselves into trouble by focusing more on activity than value. Many people, especially those lost in the middle of a formal hierarchy, are appeased by people doing stuff; it almost doesn’t matter what the team is doing as long as they are busy. It is back to that old attendance-over-performance metric.

Teams being busy and working hard is only a problem when what they are working on cannot be traced directly to the problem. Their activity can normally be traced to a request to do something, but tracing a level deeper – to the underlying problem – is where issues arise. To make things harder, the consequence of the discrepancy between problem and activity isn’t seen until late, when users expect something to change and the new stuff brings the usual change management but fails to solve the original problem.

There is still too much focus on WHAT over WHY. I support the idea that when writing user stories we should start with the “So that…” clause to force the point – but that assumes the stories even have that line at the end.

I suggest starting each sprint planning or backlog refinement or coaching engagement or frankly anything, with this simple mantra:

What is the problem we are trying to solve, how are we measuring it, and by that measure what is our definition of success?

The first answer to the “WHY” question often elicits a rephrasing of the deliverable: “I want a widget so that I have a widget”. Yes, OK, but WHY do you want the widget? What does the widget enable? What does it give you that you don’t currently have? Who actually benefits from this widget? Why is the budget holder going to pay the team for the widget? You really have to get to the heart of the problem, and this can be a difficult conversation, because – and this is the scary part – the people responsible for delivering the project haven’t fully understood the problem, and highlighting this after the project has started can be uncomfortable, as it could be seen to reflect poorly on the project leaders. Projects can continue blindly just to save face.

Backlogs are usually described as a cascade of big problem-solving deliverables called “Epics”, broken down into associated stories. Many teams have lots of stories but have lost the association to the parent Epic – to the purpose. This leaves lots of activity, lots of well-meaning work, but a massive risk that the work lacks direction and will not deliver the value the effort deserves.
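One way to make that risk visible is to audit the backlog for stories that have lost their link to purpose. A minimal sketch (the field names and IDs are illustrative, not from any particular tracker):

```python
def orphaned_stories(stories):
    """Return the stories with no parent Epic link –
    work whose purpose can no longer be traced."""
    return [s for s in stories if not s.get("epic")]

# Hypothetical backlog export: a flat list of story records
backlog = [
    {"id": "S-1", "epic": "E-1"},
    {"id": "S-2", "epic": None},   # association lost
    {"id": "S-3"},                 # never linked at all
]

print([s["id"] for s in orphaned_stories(backlog)])  # ['S-2', 'S-3']
```

Anything this surfaces is a candidate for the “what problem is this solving?” conversation before it consumes another sprint.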

Slow down; be problem-centric, not solution-centric. This aligns with principle 10 of the manifesto: “Simplicity – the art of maximising the amount of work not done – is essential”. If you focus on solving the problem rather than delivering the widget, you might find you can deliver a smaller widget!


We need a strategy for this…

Been here before?

“I hear your issue and I think you are right, we need a strategy for this…” (feel free to roll your eyes at this point).

This is typically said in response to an expression of a problem rather than a request for a strategy. And what is a strategy anyway?


A strategy should be a set of principles, applied continuously, that support decision-making to ensure alignment with your objective. It is not (or should not be) a fixed plan implying excessive Big Design Up-Front. However, I suggest our opening line isn’t referring to either of these; no, in this case it is much less. I will rephrase…

“I hear your point and I think you are right, I don’t know what to do. I don’t want to make a decision, because it might be wrong; but I don’t want you to think that I don’t know what to do, and I need you to remain thinking I am important.”


Basically, the word strategy is overused and often thrown in as an opportunity to procrastinate without losing authority.

So how can we help prevent this response from being given (or even from giving it ourselves)?

Firstly, take a stance that doesn’t suggest solutions can be plucked out of thin air and put through an expensive development process that won’t return against its own risk for months or years. Once the expectation of having to implement a solution with unknown consequences is lifted, it becomes possible to retain authority whilst investigating rather than acting.

Next, understand the metrics by which the problem, and hence its resolution, can be measured. If you can’t define the metric, you probably don’t have your finger on the real problem.

Now suggest an idea that should move the metric in the right direction, and ask those involved what you could do to test whether implementing the idea will have the desired impact on the metric – a safe-to-fail experiment – mindful that most experiments DO fail.

Then run the test. This is a proactive decision to DO something – ACTUALLY DECIDE TO DO SOMETHING. Later, assess the findings, and then you can decide whether the idea is worth progressing. Now the expensive decision to invest in something is a lot less risky, and the deep desire to procrastinate to avoid making a mistake is reduced.


Now, you can call this User Research, Lean, MVP, Agile or whatever. I have avoided doing so because I don’t want to elicit an emotive response against poor implementations of these things that leads organisations to state, “We don’t do that here”. This is a call against those situations where enormous time and money are wasted on the word “Strategy” because it is an excuse to justify doing nothing, hoping the problem will just evaporate!

The Agile Developer – “they may take our projects, but they will never take our freedom!”

One of the underlying principles of Agile, and consequently one of the areas of activity in an Agile transformation programme, is empowerment. I am a big driver of empowerment in my explanations and preferred implementations of Agile delivery – and it takes time.


There are two challenges: firstly, encouraging people that have been micromanaged to step up and take decisions; secondly, encouraging people who until recently have been making those decisions to back off and, at most, facilitate the team discussions that now make them.

Enabling those things to occur is a complex and difficult challenge but that isn’t the thrust of this post.

Assuming that this has occurred, we now have a team of empowered professionals shaping their world and the delivery of their product. They are, in some sense, free. I have heard Agile referred to as “developer emancipation”. The teams have gained their freedom, and the natural passion this evokes in delivery can be likened to the soaring of a bird or the breaching of a whale: an expression of joy within their environment.


But consider a caged bird, which sang happily in the cage, ignorant of what lay beyond, or of the feeling of the wind beneath its wings. If you set the bird free and encourage it to fly and fend for itself, it will never return to the cage; and if you do capture it and return it to the cage, will it sing as sweetly?


There is a risk in a non-committed Agile adoption: if you develop a genuine Agile culture in a team and then opt (for holistic organisational reasons) to roll back to a waterfall model, you are forcing your now-free bird back into the cage. It is one thing never to have had responsibility or freedom; to have had it, and then had it taken away, can crush the spirit.

Organisations considering an Agile adoption need to be cognisant of the risks they are undertaking. The change involved is not simple or pain-free; rolling back may suit those that never embraced the values, but for those that did, rolling back will be even more destructive than the initial adoption. I would warn that those that have tasted freedom will not accept confinement, and if the organisation cannot sustain it, or compensate for it, they may fly outside to freedom.

How to measure your Agile delivery?

I am often asked what to measure in Agile delivery. The common measure appears to be velocity, which I concede is useful to track and readily available (assuming Scrum), but, as is well discussed in articles and blogs, it can be dangerous to publish it, as it morphs into something that is judged. It can end up being held as a target for the team, at which point Goodhart’s law kicks in and what was useful information is now a manipulated, artificial construct designed to give the desired answer. (Using it internally to help project delivery forecasts is very sensible – just don’t assess the team against it.)

So, given that, what would be a sensible answer? I would advise digging a little deeper into the question rather than giving a snap answer. Information is only as good as the decision made off the back of it (and by extension, providing information when no decisions will be taken is waste). Why do you want to measure the Agile delivery? There are a few honest answers to this:

  • I want to see how much value the team are delivering
  • I want to ensure the team aren’t slacking off
  • I want to understand how the team are performing / struggling

Each of these really justifies a different approach.

For value, you need a discussion as to where the organisation perceives value – and this usually causes some uncomfortable moments when comparing existing process to fundamental Agile values: Working Software over Comprehensive Documentation, and working software as the primary measure of progress. The real measures should be associated with the benefits the organisation reaps as a consequence of the software, but that can be a little unfair on the team (as they are not responsible for the requirements) – though critical for the organisation as a whole. Considering just the software delivery, Cycle Time is probably the best measure: how long it takes from idea to delivered software. Measuring, and by extension managing, this will also encourage the organisation to break its deliverables into smaller units. The benefits of the delivered work should still be tracked, to avoid a situation where a highly efficient software delivery outfit rapidly and consistently delivers a stream of valueless changes.
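Cycle time is also trivially cheap to compute from whatever your tracker records. A rough sketch, assuming an export of (started, delivered) dates per item; the dates here are purely illustrative:

```python
from datetime import datetime
from statistics import median

def cycle_times(items):
    """Days from 'work started' to 'delivered' for each item.
    `items` is a list of (started, delivered) ISO-format date strings."""
    return [
        (datetime.fromisoformat(done) - datetime.fromisoformat(started)).days
        for started, done in items
    ]

# Illustrative data: three items pulled from a hypothetical tracker export
completed = [
    ("2024-01-02", "2024-01-10"),
    ("2024-01-05", "2024-01-09"),
    ("2024-01-08", "2024-01-22"),
]

print(median(cycle_times(completed)))  # median cycle time: 8 days
```

The median (rather than the mean) is the usual choice here, as cycle time distributions are heavily skewed by the occasional stuck item.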

The “I want to avoid slacking off” case is a difficult one, and is probably always just under the surface; even if they say value, they may really mean, “I need something to beat the team with”. Despite what they say, what we really have here is fear of loss of control: someone who may be accountable but has little influence over delivery. This suggests a structural issue, with people put in management positions abstracted from delivery, and a culture where trust is lacking. The Product Managers or Product Owners on the engagement should be close enough to the teams to have an opinion on current activities and levels of engagement. They will have a first-hand understanding of the reasons behind delivery ups and downs (reflected in velocity) associated with complications or setbacks identified in development (perfectly normal and expected in a complex industry). I usually suggest these people provide sprint activity information to the programme, which can decrease over time as the management structure adapts.

The third request is a little refreshing and implies a maturity often lacking. It says: we care about value, we trust the team, and we want to know when to step in and support them. To understand what to measure here requires a little “Genba”: real-world observation of how the team copes with adversity. Systems are inefficient when operating at 100% capacity; any change to an input variable that worsens the situation will cause failure, so the more dynamic the system, the lower the optimum operating capacity. The difference between the optimum and the maximum can be considered contingency. If you want to support a team, measure the use of that contingency; this will indicate when, and by how much, the team is struggling and therefore when to act. Signals could include overtime, compromises to the Definition of Done, items added mid-sprint, and so on.

As a metaphor, consider a racing yacht: if you just record the speed, you may miss the unsustainable efforts the crew are making to achieve it. Instead, note how many people are hanging off the side of the boat – and how long they have been there. They can’t hold on forever, and if they are made to, the yacht will speed along until they drop off, at which point it won’t slow down; it will capsize.

An estimate is a guess in a suit – you can do better than that

New project, new team, new opportunities, a fresh start for everyone, a room full of hope and optimism. Then someone senior comes in and asks for an estimate for the full scope of work – and things start to spiral downwards.


There are three answers to this question: the right one, the wrong one, and a refusal to answer. The right answer is impossible; even if you do get it right, it will soon be wrong, as the scope – and hence the question – will soon change. Which leads us to the second answer, a wrong one, and in the majority of circumstances this is what is provided.

The problem with an estimate is both the background from which it comes and what is done with it. If you have a heavily caveated range used to inform medium-term planning with an awareness of the risk, that is great. If it is a guess treated as a commitment – well, we all know the trouble that causes – but people still ask for them.


So firstly, why are we so poor at estimating in the software industry? Other industries appear able to get things done against simple, predictable plans – yes, a few things slip, but houses get built, gas pipes are laid and aircraft get assembled. The important difference is one of experience. Software is repeatable at very little cost and effort (Ctrl+C, Ctrl+V), which means the majority of large software projects are, by definition, new – never attempted before. Therefore the solid experience that drives confident project planning in the industrial sectors is absent in the software industry. Software is now largely a creative, knowledge-based activity, like graphic design or management consulting. (It is important to note that other engineering disciplines face the same issues when attempting something unique: automotive design, large bespoke construction projects, etc.)


So what is the solution? It isn’t realistic to respond to every request for an estimate of when something will be ready with a wise frown and, “it will be done when it is done”.

Existing and well-discussed techniques – story points, then breaking user stories down into tasks and estimating those tasks in hours – are valid, but suitable only for the current planning horizon. They are unfeasible for an entire backlog; indeed, attempting them for the entire backlog would require such extensive analysis and design that you are pretty much back at waterfall, and the scope will likely change before you finish.


Understanding the issues with estimation is more a psychological challenge than a technological one. Typically we estimate by drawing parallels between the work in question and prior experience, but humans are naturally self-centred and optimistic: we exaggerate the parallels between this work and previous work, undervalue the substantial differences, and have a rose-tinted memory of how it went last time.

The scope of work to be estimated can be considered in the context of the KNOWNS – the same “Knowns” that Donald Rumsfeld referred to on US defence policy, although he only mentioned three of the four.


These are:

  • Known Knowns: deliverables that are well understood, and the basis on which short term user story estimates are still valuable.
  • Known Unknowns: deliverables that we know will have issues and problems, but whose problems are not yet solved – they could be easy, could be difficult, but fundamentally require investigation to understand. These requirements are why we apply contingency or a margin of error to the Known Known estimate – but without any real logic as to what that margin should be. The advantage here is that the team will have a fair idea of how to improve their understanding of the total work, and which activities will help them become more precise.
  • Unknown Unknowns: the black swan events. These are issues that are not understood and not even known to exist – problems that nobody has even considered, usually lying well outside the realm of contingency planning. These issues may have minor impact, or could completely derail a delivery.
  • The last, and the most pernicious of the four – and for that reason the one Mr Rumsfeld wasn’t brave enough to mention – are the Unknown Knowns. These are things we know but choose not to accept or allow for, because recognition is so disruptive to our social construct that it is more comfortable to maintain an illusion in which they do not exist. In wider society, state oppression of minorities is often raised as an example; in the less dramatic world of software delivery, stakeholder politics is a better one. When estimating deliverables it is important to surface these as much as possible and be honest about their influence – they typically suppress estimates.


A solid appreciation of Complexity Theory, and an awareness of the “Knowns”, should enable us to look at the work to be estimated from an informed perspective, and should give us good, communicable reasoning as to why a firm estimate of complex software deliverables beyond our planning horizon is so difficult as to be fruitless. However, good examples of estimation techniques for complex (unpredictable) systems do exist – the best of these is the weather. The weather later this week is projected not by a group of experienced meteorologists given today’s information, but by adding that information to all previous information and passing it through a very complex, continually evolving model. The same approach can be applied to software delivery to produce long term estimates – but now we can appreciate the difference: what we would be providing is no longer an estimate (a guess in a suit) but a forecast. A FORECAST is a statistical likelihood of something happening, given historical information and a set of input data. The Monte Carlo simulation approach is a well documented version of this; it can be simplified dramatically, is easily employed, and still gives very helpful – and importantly fast – forecasts for software delivery.
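A simplified Monte Carlo forecast can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the weekly throughput figures are invented, and the approach simply resamples past weeks at random until a hypothetical backlog is exhausted.

```python
import random

# Hypothetical history: items the team completed in each past week.
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]

def forecast_weeks(backlog_size, history, trials=10_000):
    """Monte Carlo: repeatedly 'replay' randomly sampled past weeks until
    the backlog is exhausted, recording how many weeks each trial took."""
    results = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # sample one past week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 50th and 85th percentile outcomes as the forecast range.
    return results[trials // 2], results[int(trials * 0.85)]

likely, conservative = forecast_weeks(40, weekly_throughput)
print(f"50% chance within {likely} weeks, 85% within {conservative}")
```

The output is a statistical likelihood, not a date: "half the simulated futures finished by week N" is exactly the forecast framing described above, and the model improves automatically as new weeks of real throughput are appended to the history.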

All forecasting tools rely on data, so before any forecast can be delivered the team need to make a start on the delivery and record their performance. Once the team have delivered 10 items – as long as those items were not chosen based on expected size – the probability of the next item taking longer to deliver than any previous item is small (roughly 1 in 11 for exchangeable items). Assessing the full list of deliverables in this light, taking the 50% mark, enables a team to rapidly give a most likely forecast and bound it by a margin either way. Then, after each additional item is delivered, the model improves and the remaining work is reforecast, refining the result.
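The "take the 50% mark and bound it" idea can be shown concretely. The cycle times below are invented for illustration, and this deliberately crude sketch scales the median and worst observed per-item figures across a hypothetical remaining backlog:

```python
# Hypothetical cycle times (days) for the first 10 delivered items.
cycle_times = [2, 5, 3, 8, 4, 6, 3, 7, 5, 4]

sorted_times = sorted(cycle_times)
median = sorted_times[len(sorted_times) // 2]  # the 50% mark: 5 days
worst_seen = sorted_times[-1]                  # worst observed: 8 days

remaining_items = 25
likely_total = remaining_items * median        # most likely forecast
upper_total = remaining_items * worst_seen     # pessimistic bound
print(f"Likely ~{likely_total} days, bounded above by ~{upper_total} days")
```

With these invented numbers the most likely forecast is 125 days, bounded above by 200. After each delivered item the cycle-time list grows, the median and bound are recomputed, and the forecast tightens.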


So when someone asks your team for an estimate, the first thing to do is discuss whether the work could be described as a KNOWN KNOWN and is deliverable within your planning horizon. If so, proceed with a breakdown into User Stories and Story Points, compare against velocity, and give a duration with heavy margins of error.
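For that short-horizon case, the velocity arithmetic is simple enough to sketch. The story-point and velocity figures here are invented; the margin of error comes from using the best and worst recent sprints rather than a single average:

```python
import math

# Hypothetical scope and recent sprint velocities (story points).
remaining_points = 60
velocities = [18, 22, 15, 20]

avg = sum(velocities) / len(velocities)
best, worst = max(velocities), min(velocities)

likely = math.ceil(remaining_points / avg)
fastest = math.ceil(remaining_points / best)
slowest = math.ceil(remaining_points / worst)
print(f"Likely {likely} sprints (range {fastest}-{slowest})")
```

With these numbers the answer is "likely 4 sprints, range 3 to 4" – a duration with its margin stated up front, not a single date.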

If the work is less well defined or substantially larger, then divide it up and compare it against your historical delivery through a statistical model. If you have no model, because you are a new team or the work is completely different to anything previously undertaken, then you have to have the awkward but honest discussion explaining that you can’t give an estimate until after you have started: “so give us a month to deliver something of use and we’ll then be able to start to understand enough to give future projections”. Now if that isn’t acceptable then I suppose you could guess what you think they want to hear, and revise that figure after a month or so with something more credible – good luck with that.


Keep in touch on #philagiledesign

Improving Agile team performance

I have often been asked my opinion on teams that are not performing. Typically this is prompted by the questioner having witnessed a team going through the motions in a Scrum ceremony. I have seen the same situation a few times now, so thought it worth sharing.

So consider this team:

They appear listless, with little passion or focus. They have the necessary process and artefacts, but more from a sense of obligation than because they are actually deriving any value from them. They probably get together as a whole team to discuss things right at the beginning of the sprint, then drift off into smaller groups, ending with a minor panic in the last couple of days and sprints left incomplete. Standups are largely pointless – mumbled status updates to the Scrum Master. They often respond with excuses of interdependencies or lack of understanding. Their “Definition of Done” is a little hazy and regularly compromised to get things through at the end of a sprint, and for the same reason smaller pieces of low priority work often get put into sprints so the team can take some easy wins – to keep THEM happy.

It is easy to point the finger at the team, kick them a little, maybe even shout and mention performance reviews, objectives and the like; but for a team to have got into this mess something must be systemically wrong, and pep talks will have short term benefits at best.


So the Scrum Master is at fault? Well, maybe. Typically I find the fault is in the system the team sits in, and either the Scrum Master is part of that system or has given up fighting it. Watch the Scrum Master. If they are busy directing the team’s flow like a policeman at a busy junction, having quiet chats with the Product Owner and separately with Architects and Project Managers, then yes, the Scrum Master is part of the system that is killing the team. If, however, the Scrum Master appears like the rest of the team – quiet, reluctant, weary and at a loss – then you probably have a respectable Scrum Master who isn’t sufficiently powerful to break the system, and is therefore as beaten as the team.


A good starting point is to look at team size; most of the situations I have seen have been compounded by oversized teams. I know many people in the Agile coaching industry who would argue that you can have teams of up to 11, and yes, you can get things to work with larger teams – but you need the system to be working well first, and many of the issues that affect smaller teams have a more significant impact on larger ones.

Agile is fundamentally about people; it is light on process by design, and process is what enables people to be directed forcibly. The Agile approach depends on self direction, and self direction thrives on motivation. Fail to motivate the team and they will become despondent; without the structure and direction of a waterfall model they will split, drift, and performance will be a fraction of what is possible.

Motivation in an Agile context typically stems from empowerment and from an appreciation of what the team are working on – not just sight of the final product, but a direct understanding of their element of it.


Given this there are some demotivational factors to look out for:

Component teams – delivery teams working on technological slices of the application have a much harder time appreciating the point of their work, and therefore their impact on its value. Because it is harder to see the final solution, it is also much harder for the team to drive out an MVP; component teams often gold plate because they struggle to identify which features are the most critical.


Specialists – a team of specialists able to completely divide up the work will find themselves operating in increasing isolation. The handoffs between team members become increasingly formal in an attempt to pin down responsibility: each team member looks out for themselves, ensuring that when something goes wrong they can point the finger elsewhere. The team will start performing when they act as a team, and that can be helped through cross functional delivery.


Knowledge of the users – have the team, not a management representative of the team, but the team themselves, ever actually met and talked to the actual users of the system? People will probably have spoken to the team about the users, and may even have involved the team in creating personas, but there is something very powerful about actually meeting them – seeing the whites of the eyes of the people you are building for. It is about consequence, about responsibility. It is harder to cut corners and deliver a substandard product for someone you have actually met and have a relationship with.


Culture of Management – this is the most subtle and pervasive of the issues, and also the one that typically worsens as performance issues grow, creating a vicious circle. A poorly performing team typically attracts more management attention, which acts to direct and control the flow and definition of work. What the team needs is empowerment – freedom to own their own process. Did they write their own Definition of Done (and not have it “rephrased” by someone senior)? Have they chosen their own tracking and reporting templates?

A really important piece is the decision makers. They need to be in the team, making decisions with the team – in the presence of their team members. This makes their decisions team decisions, as opposed to decisions made on behalf of the team: an important but subtle distinction. Project Managers and Architects are the two roles this usually applies to.


In short if you have a team that appears to be drifting and disinterested then:

  • Reduce the team size as much as possible – facilitate the team to split maybe
  • Empower the team – stop making decisions outside the team on their behalf
  • Connect the team with the users
  • Enable the team to own their own process
  • Give the team a slice of the system where they can own the definition, delivery and deployment of something of value


Digital Transformation – No, just having a website doesn’t count!

Recently a colleague asked me about Digital transformation, and how best to express it; and more importantly, what practical steps to take to start to achieve it.

As I understand it, Digital is about using information to shape your product line and sales strategies to the ever changing market, using the technological capabilities of the modern age to tailor your product offering and operate at lower cost.

It is not achieved by simply having a website!

Moving to Digital is a business strategy that leans on IT’s capabilities; it is not an IT strategy. Digital means moving from selling products that consumers use to fulfil their needs, to selling the actual use – servicing the NEED. Your tangible product may be part of that, but the engagement is based around the customer, not around you.

To illustrate this, consider an evolution: a hard working peasant farmer with his cart of cabbages some time in the Middle Ages. One man, some cabbages, selling them to people that a) find him, b) want a cabbage for whatever purpose. Fast forward a few hundred years and his cart has become a shop; another hundred years and a catalogue is published for mail order; and then last week he launched his online store. Massive progress – but hold on: still one enterprise, some cabbages, selling them to people that a) find him, b) want a cabbage for whatever purpose. Finding him is now easier thanks to the internet, but fundamentally this is still the same concept.

This is not a digital transformation; this is channel shift – adopting new channels as they become pervasive. More than one company with full channel coverage has disappeared from the marketplace because it hadn’t innovated its product line.

Mr Cabbage man is still fundamentally selling the same product and expecting the same approach from the customer. The customer must still have a need that they identify with your product, approach you to purchase it and then use the product to address their need.

Going digital is more like the next step from the data driven product placements that have been so effectively used by the supermarkets. Through extensive data analysis of purchases, footfall, and buying patterns the supermarkets have been able to change their layout and merchandising to better tap into the underlying need of the consumer. Originally the shop would sell ingredients, it was the consumer’s need for a meal that drove that purchase but it was still down to the consumer to identify which ingredients were needed and later to assemble them into a meal. The introduction of ready meals in the 1980s was a product shift based on addressing user needs. Digital is the next step along that path.

Going Digital means having a comprehensive understanding of your customer, and then offering services (based on known experience and products) tailored to that exact customer. Technology now makes it possible to provide specific services to the individual and automatically match them to the best service offering. This can often be done with minimal human decision making, enabling much lower operating costs.

Many IT buzzwords crop up at this point: Big Data, CRM, analytics. These concepts are what really lie behind Digital – it isn’t really about the IT stack or the web screens; they are just the enablers.

Moving to Digital is considered a challenging cultural change because the change is not in the IT department – it is a change in the business, in the product offering, and in your understanding of your customers and your USP. IT will help you deliver the new world, but cannot lead it.

Many companies are now employing a Chief Digital Officer (CDO). This is a very difficult role because they are responsible for breaking technology out of IT – you could argue (if looking for sensationalism) that their job is to destroy the IT department: to bring technology into the heart of the business, to be something we all do, not something “they” do.

It somewhat misses the point if the CDO sits under the IT director as, in essence, the manager of online product offerings – although they would be in good company. I would imagine the relationship between CIO and CDO would at times be tense, and for it to work they must be equals.

But practically speaking, what are good steps towards a digital transition?

  • Understand who your customers are, and more subtly why your customers are. What is it about you that makes them come to you – or “come back” to you.
  • Next, ensure that the data you are currently capturing is accessible and cross referenceable – and not just along the existing product hierarchies. I remember from my time in retail that we could give really detailed sales information along product and store lines: general merchandise, men’s shirts, that range, that size, in blue. But aggregated information was limited to that hierarchy – you could find sales for all shirts in a range, but if what was needed was sales for ALL BLUE clothes, you basically had to know the code for every blue item and run a query listing each one.
  • Then add some analytics on existing product lines.
  • Then get someone to actually look at this data – someone with a CRM background. There is too much critical work here to simply add it to someone’s existing job.
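The "all blue clothes" problem above is really about capturing attributes as data rather than burying them in hierarchy codes. A minimal sketch, with invented product records: once colour is a field, the cross-hierarchy question becomes a one-line filter instead of a hand-maintained list of item codes.

```python
# Hypothetical flat sales records: hierarchy code plus explicit attributes.
sales = [
    {"sku": "GM-SHIRT-001", "dept": "menswear",   "colour": "blue",  "units": 40},
    {"sku": "GM-SHIRT-002", "dept": "menswear",   "colour": "white", "units": 55},
    {"sku": "LW-DRESS-010", "dept": "ladieswear", "colour": "blue",  "units": 30},
    {"sku": "KD-COAT-004",  "dept": "kidswear",   "colour": "blue",  "units": 12},
]

# "Sales for ALL BLUE clothes", regardless of department or product line.
blue_units = sum(r["units"] for r in sales if r["colour"] == "blue")
print(blue_units)  # → 82
```

The same filter works for any attribute combination (blue menswear, a size across all ranges), which is exactly the cross-referencing the step above asks for.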

All these steps will help your organisation understand what it is selling, how, to whom, and why. From there you can try to understand what your customers really want (of which your existing product is a part), and then start to consider what service you can offer to address that.

Don’t rush into building software for a problem you haven’t validated – and you can’t just point to a picture of your product on a web screen and declare digital success.