
Time to stop the madness. Time to stop estimating.

By Development That Pays

Summary

Topics Covered

  • Maybe not eating junk food is the best solution
  • Throughput forecasts as good as or better than velocity forecasts
  • Removing information from the model improves it
  • Going small is everything
  • Throughput pressure is self-limiting in the right direction

Full Transcript

The single biggest problem in Agile today is the degree to which we depend on estimates.

Today I'll show you the high price we pay for this dependency. I'll also show you that it might just be possible to break free.

Hi, my name is Gary Straughan. Welcome to Development That Pays. We've made it to the big one: the concluding episode of this series on Estimates and Estimating.

We've got a ton of good stuff to get through, so let's get right to it. It was 2016 when I asked the viewers of this channel what it was about Agile that really bugged them, and I discovered that I wasn't the only one with concerns around estimates.

But it's a bit of a thorny one, isn't it?

Especially as the process of estimating is generally held in high regard as a great way of understanding the work.

A weird case perhaps of the means justifying the ends.

Well, I was far from sure, but by 2018, I'd got on and built a course all about how to get better at estimating and I had most of it done and dusted when I stumbled across this video and it really spoiled my day.

This is Woody Zuill.

Woody, as you may know, is the originator of the #NoEstimates hashtag.

I'll put a link to his full talk in the description below, but I want to play you a short clip where Woody's sharing an interesting estimating analogy.

- I have a problem.

I will often overeat on junk food.

There's a jar of candy on the tables... or are they all gone now?

When they're there, I'll just keep eating them as long as they're there.

Okay, then I gain weight.

I like my weight right now. I'd like to weigh less, but I used to weigh a lot more than this.

Okay, is there a better way for me to eat junk food?

Could I chew the candy more thoroughly or with more in my mouth at the same time or less in my mouth?

How would I do eating junk food better?

Maybe not eating the junk food is the best solution.

- Maybe not eating the junk food is the best solution.

The moment I saw that, I wasn't sure how or why, but I was sure that it was true.

What we're trying to do with estimates, what I was trying to do with the course I was building was a lot like trying to eat junk food better.

We shouldn't be trying to get better, we should be trying to get rid.

For the first time, I saw that there's a whole industry out there dedicated to how to estimate and how to get better at estimating and I knew I couldn't have any part of it.

I had no choice but to take a hatchet to my course.

Of course, getting rid of estimates is a lot easier said than done, especially as they've somehow weaved their way into just about everything that we do.

Take a look.

We start by estimating, that gives us estimates.

And from estimates over multiple sprints, we can calculate velocity.

And estimates and velocity are used jointly and severally for all kinds of things.

All manner of charts and reports, burn-ups, burn-downs, that sort of thing, forecasts and for selecting items for the sprint backlog.

Sadly, as you'll know if you've been following along with this series, there's a lot here that isn't ideal.

So can we take estimates out of the equation?

- Take them out of the equation.

- Without the whole thing falling apart?

Well, let's see.

Here, we have estimates, which of course came from a bunch of stories that have been estimated.

On this side, we have a bunch of stories that have not been estimated.

Here, we can't count story points, but we can count stories.

That might not feel very useful, but suspend your disbelief just for now.

Velocity is the number of story points delivered per sprint.

Here, we have the number of stories delivered per sprint.

What did I call that last time?

Giving us a pseudo velocity.

Oh yeah, pseudo velocity.

That was a little bit naughty given that there's a perfectly good term that predates velocity by at least a century, throughput.

Throughput is the average number of work items processed per unit of time, which translates for our purposes to stories delivered per sprint.
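That definition is simple enough to write down in a few lines. Here's a minimal sketch, using made-up sprint figures, of how throughput falls straight out of a list of per-sprint story counts:

```python
# Sketch: computing throughput from per-sprint story counts.
# The counts below are invented, illustrative data.

stories_delivered_per_sprint = [4, 6, 5, 7, 5]  # last five sprints

# Throughput: average number of work items (stories) delivered per sprint.
throughput = sum(stories_delivered_per_sprint) / len(stories_delivered_per_sprint)

print(throughput)  # 5.4
```

Note that no story has been estimated: the only input is a count of completed stories per sprint, which any issue tracker already records.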

Estimates to story count, velocity to throughput.

We have our equivalents, but are they usable?

They kind of feel like they wouldn't be, but can we be more scientific?

Well, one of the reasons I dedicate an entire part of this series to forecasting is that a forecast is, if you think about it, the proof of the pudding of velocity.

Does proof of the pudding translate?

And if it's the proof of the pudding of velocity, it's also the proof of the pudding of throughput.

Vasco Duarte did the work on this so we don't have to, using historical Jira data from multiple teams to produce velocity-based forecasts and throughput-based forecasts, and discovering that the latter was as good as or better than the former at predicting the actual end date.
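To make the comparison concrete, here's a minimal sketch of the two forecasting styles side by side. The function names and all the numbers are my own illustrative assumptions, not Vasco Duarte's actual method: the point is only that the velocity forecast needs every remaining story estimated in points, while the throughput forecast needs nothing but a count.

```python
# Sketch: velocity-based vs throughput-based forecasting (illustrative only).

def velocity_forecast(remaining_points, velocity):
    """Sprints remaining = outstanding story points / points per sprint."""
    return remaining_points / velocity

def throughput_forecast(remaining_stories, throughput):
    """Sprints remaining = outstanding story count / stories per sprint."""
    return remaining_stories / throughput

# Hypothetical backlog: 40 remaining stories, estimated at 180 points in total.
print(velocity_forecast(180, 30))  # 6.0 sprints
print(throughput_forecast(40, 5))  # 8.0 sprints
```

The two forecasts can disagree, as they do here; Duarte's finding was that, against real historical data, the count-based version predicted the actual end date at least as well.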

- Whoa.

- Mind-blowing stuff indeed and the breakthrough that I'd been looking for.

Remember that course I was building?

Well, if Woody Zuill had me take a hatchet to it, Vasco Duarte had me pick it out of the bin and piece it together again.

It eventually saw the light of day as 'Agile Estimating and Planning Beyond Story Points and Planning Poker'.

But I digress.

In the Vasco episode, I showed you these fancy graphs.

Hey, they took an age to produce.

There's no way I wasn't gonna use them again.

And I wonder if you noticed what I said next.

That's different to the story points forecast, which shouldn't come as a surprise.

We have after all removed a lot of information.

Actually, we've removed a hell of a lot of information, all the story points that went into calculating the velocity plus all of the story points in the entire backlog.

Question for you, if you take something, remove information from it and the result improves, what can you say about that information?

Certainly, it casts velocity in a bad light and it doesn't exactly say good things about the underlying estimates.

Certainly, story count and throughput, which we get for free by the way, are starting to look pretty good.

(people cheering) Moving on to charts and reports.

Well, here I find myself at something of a disadvantage, somehow I've ended up in agile teams that didn't really go in for that sort of thing.

Although I'm sure way back in the distant past, I did see the odd burn-down chart like this one.

And here's the same chart with the story point information removed, just using counts of stories.

So it's possible to plot it, but is it useful?
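Plotting it really is that mechanical. Here's a minimal sketch, with invented sprint figures, of a story-count burn-down: start from the total story count and subtract what each sprint delivered, no story points anywhere in sight.

```python
# Sketch: a story-count burn-down, no story points required.
# Sprint-by-sprint figures are invented for illustration.

total_stories = 30
delivered_per_sprint = [4, 5, 3, 6, 5]  # actuals so far

remaining = [total_stories]
for done in delivered_per_sprint:
    remaining.append(remaining[-1] - done)

print(remaining)  # [30, 26, 21, 18, 12, 7]

# A crude text rendering of the burn-down line:
for sprint, left in enumerate(remaining):
    print(f"sprint {sprint}: " + "#" * left)
```

Swap the print loop for your charting tool of choice and you have the same burn-down chart, driven by counts alone.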

Well, clearly I can't be the judge, so I'm gonna let you be the judge.

Let me know in them there comments below.

Onto the final block, selecting work for the sprint backlog.

And this, ladies and gentlemen, is where I got derailed for years.

In a world where a hundred story points can be one story or dozens, a simple count of stories isn't going to cut it, which is nothing short of tragic.

Are we really gonna have to hang on to estimating just for this?

- You're kidding me.

- It took me a while to ask a more positive question.

What would have to be true for a simple count of stories to work?

The answer to that question turned out to be as simple as it is profound, smaller stories.

It turns out that going small is everything.

It's time to add the final piece to the jigsaw as we swap the process of estimating for the process of producing smaller stories, a.k.a. Story Slicing.


And now at last, we're ready for a full side-by-side comparison, but just before doing so, a reminder that there's a cheat sheet to go along with this episode featuring this very model.

Getting your hands on a copy is simplicity itself.

You'll find a link in the description below; click the link, follow the instructions, and I'll send it right along.

Alright, side-by-side comparison, we're gonna start at the bottom and work our way up.

Estimating gives us estimates, which is a bit of a bad start, but it's not all bad.

As we talked about in Part 2, the "Estimating Conversation" is a good way of us understanding the work, but we can do better.

Story Slicing starts by identifying the stories within a story.

Turns out that it's easier and faster to identify those sub-stories than it is to understand a large story well enough to assign a story point value.

It's also quick and easy to identify dependencies and put those aside for another day.

And with more of these sub-stories to choose from, we have more opportunity to select high-value stories, leaving us a cherry-picked list of small, independent high-value stories.

Did I mention high-value stories?

And small stories without dependencies are very easy to understand.

I'm calling this one as a win for Story Slicing.

Moving on to estimates and let's leave aside for a moment their potential to get up to no good and remember what they are.

They're estimates.

Uncertainty is implied, actually a little more than implied, it's pretty much guaranteed.

Stories on the other hand are concrete.

If I line up some stories, there's no discussion about the total, about the count.

If you don't count seven, could you let me know in the comments?

I think that's another win for this side.

Moving on to Velocity.

Now, velocity has a little bit of evilness of its own that we've yet to discuss.

Those, shall we say, less familiar with agile, perhaps the management, might expect that velocity for a particular team would increase steadily over time.

I guess that's a danger with just about any metric.

But when it comes to velocity, with enough external pressure, velocity can indeed increase steadily over time.

I've seen it many times.

Alas, this wasn't anything to do with performance improvement, but everything to do with inflated estimates.

Bigger estimates mean more story points delivered in each sprint.

So the velocity goes up.

So what about throughput?

Is that immune to pressure?

Well, not at all.

The equivalent cheat for throughput, (stories delivered per sprint, remember), would be smaller stories, which is exactly what we want!

And while an estimate is uncapped (you can always go bigger), a story can only be sliced so far.

The resulting slice must be a complete story in its own right.

If it's not potentially shippable, you've sliced too far.

So pressure on throughput not only works in the right direction, towards smaller stories; it's also self-limiting.

Very, very cool.

Another win I think for this side.

Three to go and if you found this video useful so far, please consider giving it a thumbs up.

That will encourage YouTube to share it with more people.

Alright, forecasting is a slam dunk win for this side.

That's because forecasting on this side requires a ridiculous amount of estimating. Please don't estimate the entire backlog.

Charts and reports, I've confessed my ignorance, but I really would like to hear from you.

Let me know in those comments below.

And finally, the fly in my soup, the thorn in my side, the stone in my shoe, selecting items for the sprint backlog.

I guess I have to give that one as a win for this side or rather a win for this side up until the point that we get good enough at story slicing.

That really is the price of admission.

So I have a question for you.

Are you really gonna hold on to this or are you prepared to put in the work to move to this?

Now that you know that you can deliver more value in a fraction of the time without the evil, there's really no good reason not to start the work today.

Start Story Slicing!

Identify those sub-stories, cherry-pick for value, understand the work.

And then at the very last moment, and only if you feel you have to, assign a story point value.
