April 17, 2014

Agile Coaching Blog

This agile coaching blog has news & articles from BigVisible coaches and trainers. Find agile topics like Scrum and Kanban. Read about collaboration, communication, & enterprise agile. See hints for better daily meetings, retrospectives, and Scrum reviews. Learn about leading edge tools for lean startup and customer development. Discover information geared toward agile teams as well as leaders & executives.

We hope the information you find here will help your organization become more agile, adaptive, and innovative. Our goal is to help you delight your customers & succeed beautifully.

Stay Hungry. Stay Foolish. And Don’t Be Afraid to Say “Our Process Has No Clothes!”

“Stay hungry. Stay foolish.” Many people remember those words as a quote from Steve Jobs made during Jobs’ Stanford commencement address in 2005, where, among other things, he spoke about the Whole Earth Catalog as the Google of that day.  Referring to the Whole Earth Catalog, and that particular time in history, brings a tear to my eye and makes me smile at the same time.

People remember the four words as Jobs’, but forget that Jobs readily admitted that he didn’t make up those words – he was quoting the farewell message on the back cover of the last issue of Whole Earth Catalog in 1974.

[Photo: the back cover of the final Whole Earth Catalog, bearing the farewell “Stay Hungry. Stay Foolish.”]
To this day, whenever I look at that photo, I never look at it as an end of a long and winding road, but as a beginning of a journey down it.  But that’s not why I’m writing this today.  What I’m interested in writing about starts with this “Stay hungry.  Stay foolish.” mantra, and leads us down the road of why this is so important for agility.

Staying hungry: we are reminded to focus on delighting our customers.  Hunger sits so low on Maslow’s hierarchy that it is a basis for just about everything else.  While one thing to be hungry for is the products we produce, the capital they provide should not be considered the goal of our existence.  In a sense, rather than simply dining well on the money that revenue produces, we also have to use it to plant the seeds for future revenues and the capital they will provide.

“A rising tide lifts all boats.” It’s a phrase commonly attributed to John F. Kennedy, but he didn’t make up those words either.  He got them from a chamber of commerce called the New England Council.  For purposes here, remember that simply doing what’s worked in the past is merely a way to slowly drain the lake.  Constantly improving is what we have to do to make the lake larger. Continuous improvement doesn’t come with a checklist of things to do.  It requires creativity to figure out how to make what’s good now even better, and that requires that we never grow complacent and rest on our laurels.  We have to constantly be searching and never forgetting that things could always be even better than they are. “Curiosity is stimulated when individuals feel deprived of information and wish to reduce or eliminate their ignorance.” (The Agile Mind, p279, Wilma Koutstaal, 2012, Oxford University Press.)

Staying foolish: we are reminded not just to accept, but to question and be curious. Getting to the “whys” is far more important than jumping straight to a whole bunch of “whats” (and getting them wrong until we learn what’s really needed).  And learning can be expensive at times.  Failure shouldn’t be viewed as a bad thing, but as something that gives us an opportunity to learn. If nothing else, we learn what doesn’t work. As Thomas Edison said, “I have not failed. I’ve just found 10,000 ways that won’t work.” We have to work in an environment that supports that line of thought if learning is to translate into improvement.

Sometimes the reason we fear being foolish is that the idea we have isn’t something we feel we should discuss.  It may be something that might make us the object of ridicule.  Maybe we’re afraid that should something go wrong, we’ll get blamed for the failure.  Those sorts of situations remind me of the old Hans Christian Andersen tale of “The Emperor’s New Clothes.”  When the vain emperor hires tailors, they claim to weave clothes from a special fabric that is invisible to anyone unfit for their position or hopelessly stupid.  The emperor and his ministers all saw the clothes, because they feared that otherwise they would be seen as unfit or stupid. The child who blurted out that the emperor had no clothes was not only foolish, but hungry as well (since he wasn’t part of the ruling class, who had everything to lose).  In our world, we have to find ways to be like that child if we expect to make Agile anything more than superficial self-congratulation that ultimately ends up draining the lake.  We have to make sure that in our organizations, we can stay hungry and foolish and not fear that saying “our process has no clothes” will be met with anything other than the wonderment of what’s possible.

Want additional actionable tidbits that can help you improve your agile practices? Sign up for our weekly ‘Agile Eats’ email, with “bite-sized” tips and techniques from our coaches…they’re too good not to share.



What’s the Right Ratio Between QA Testers and Developers?

I get asked this question all the time.  And I think that the answer is both obvious and not at the same time.  In any case, it’s not possible to answer what the ratio of developers to QA testers should be.  Here’s why.

Let’s take a look at a flowchart of how software development really occurs.

I know that there are differences in this diagram based on whether we are using “waterfall”, “Scrum”, “Kanban”, and so forth.  But the differences are usually that we don’t explicitly acknowledge the stage of process that we are in, or we subsume it in another process step.  For example, in eXtreme Programming, we tend to design, code, and functionally test all in one step. We use Unit Testing in Test Driven Development as the functional test in isolation, and the “refactor” step in “red/green/refactor” as a way to accomplish “design a solution”.
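As a sketch of that red/green/refactor rhythm, here is a minimal Test Driven Development loop in Python. The `add_line_item` function and its tests are hypothetical, invented only to illustrate the order of operations, not taken from any real shopping-cart code:

```python
# Hypothetical function under test. In real TDD it would not exist yet
# when the first test below was written ("red"); it would then be made
# to pass with the simplest code possible ("green"), and finally cleaned
# up without changing behavior ("refactor").
def add_line_item(cart_total, price, quantity):
    """Return the new cart total after adding quantity * price."""
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return cart_total + price * quantity

# Step 1 ("red"): this unit test is the functional test in isolation.
def test_adds_price_times_quantity():
    assert add_line_item(10.0, 2.5, 4) == 20.0

# Step 2 ("green"): a second failing test drives out the validation.
def test_rejects_negative_quantity():
    try:
        add_line_item(0.0, 2.5, -1)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Step 3 ("refactor"): with both tests passing, the implementation can
# be reshaped freely, because the tests pin down the behavior.
test_adds_price_times_quantity()
test_rejects_negative_quantity()
```

Unlike the throwaway “spot tests” discussed below, these tests stay in the code base and run in every regression cycle.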

Where are the testers in that diagram?

That is by no means a trivial question, and the answer does vary based on the operating model that we use for software development.  In traditional waterfall development, testing is usually divided up by role.  Developers are typically assigned the responsibility to functionally test what they code, usually called “unit testing”.  In fact, most of what I’ve seen in the field is that those “unit tests” are simplified functional “spot tests” that never make it into any regression suite, rather than the eXtreme Programming type of unit tests used in Test Driven Development.  These spot tests are typically fast, throwaway, and unrepeatable.  The people in QA are sometimes called upon to do this testing.  The best that can be said is that some version of the code was functional for its intended purpose at the point the test passed.

Traditional waterfall-style testing does employ QA testers at the “non-functional and regression tests” stage.  QA will typically write long test plans intended to ensure that new functionality not only behaves as desired, but also does not have an adverse impact on either previously developed functionality or non-functional aspects of the software, such as speed, capacity, etc.

In a traditional world, market tests are typically not done by either developers or QA personnel.  That testing occurs only once the product has been released to the marketplace, and it is performed by the product’s customers.  Unhappily, the results show up as missed market expectations, and they arrive after any realistic hope of fixing the software has passed, because the product is already in the marketplace.

If we are looking for better quality, both in terms of assuring that the software is written correctly and that it is the right software to solve the problems customers need solved, we have to do two things.  We have to push testing forward, so the feedback loops close more quickly. And we have to automate as much of the testing as possible, so that we can iteratively attack the problem without huge amounts of labor to fix a product that is off course with customer expectations.  I can’t count the number of times I’ve seen product teams cut the software testing done after the code was developed so that it could be delivered on a previously promised date. They did it because the labor and time needed to exhaustively regression test the code didn’t fit the timelines that were committed to.  Delivering buggy code to your marketplace is an excellent way to help your competitors capture your market. So is delivering the wrong solution.  This means that shortening the cycle time through this diagram is critical!

This diagram is way too simplistic

Absolutely!  In Lean terms, it is a single stream flow of one piece of required functionality within a product that a businessperson knows must be presented to the marketplace as a “minimum viable product”, a “minimum marketable feature”, or even something smaller.  Depending on the process that you use for software development, you may be tempted to have as many of these functionality implementations going on at once as possible.  But that increase in WIP (work in process) causes its own set of problems.  If one piece of functionality being developed depends on another piece of functionality, then dependencies start to rear their ugly heads.  Having one piece of functionality take longer to develop than originally estimated can have compounding and confounding ill effects on the code depending on it.  The same is true of a solution that doesn’t “meet the mark” on the non-functional and integrated regression testing aspects.  Note that by waiting until “we are ready to release” before we start this type of testing, we potentially grow an enormous WIP of unreleased code.  Any failure at this stage leaves a lot more code to fix than if we were able to stay with an ideal single stream flow.
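Lean even has a formula for this: Little’s Law, which says average lead time equals average WIP divided by average throughput. A back-of-the-envelope sketch (the team numbers here are invented purely for illustration) shows how piling up unreleased work stretches the feedback loop:

```python
def lead_time_days(wip_items, throughput_per_day):
    # Little's Law: average lead time = average WIP / average throughput.
    return wip_items / throughput_per_day

# A team finishing 2 items per day with 6 items in flight turns any one
# item around in about 3 days...
print(lead_time_days(6, 2))    # 3.0

# ...but the same team carrying 40 items of unreleased, untested code
# waits about 20 days to learn whether any one of them actually works.
print(lead_time_days(40, 2))   # 20.0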

“Rules of thumb” are meaningless

You can find many rules of thumb for the ratio of QA to developers if you do a Google search with the words in the title of this blog entry.  You will find people talk about 10 developers to 1 QA tester, 3 to 1, 1 to 1, and many others.  My feeling is that none of these can possibly be correct.  They can’t be right, because they don’t take into account the abilities of both the developer and the tester.  Highly capable developers may produce the same code 10 or more times quicker than less capable team members.  The same holds true for QA testers.  I had a conversation with Fred George a little over a year ago on this topic, and he recounted an assignment where he observed a ratio of 6 testers needed to absorb the work of one highly productive developer.

Rather than going that rule of thumb route, I would urge you to consider getting closer to a single stream flow on individual things that the software needs to do and employ the “Three Amigos” model that George Dinwiddie explains in Better Software, November/December 2011.  Here, we get the BAs, QAs, and developers at the start to write automated tests that serve as the functional requirements for the work to be done.  If we keep the rate of production of these collaboratively-developed tests in line with the actual rate of production that satisfies the tests, we never have to fear that we have something off balance.  If we find, perhaps with a Kanban analysis, that we can’t produce enough tests to keep our developers happy with enough work to do, we may find that we don’t have enough BA types or enough QA types available for us, and can adjust accordingly.
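A sketch of what one such collaboratively written, executable requirement might look like. The scenario, class names, and business rule below are invented for illustration; they are not from Dinwiddie’s article:

```python
# An acceptance test the BA, QA, and developer agree on BEFORE coding
# starts. The rule under test (invented here): an out-of-stock item is
# refused, not silently backordered.

class Inventory:
    def __init__(self, stock):
        self._stock = stock            # {sku: units on hand}

    def on_hand(self, sku):
        return self._stock.get(sku, 0)

class Cart:
    def __init__(self, inventory):
        self._inventory = inventory
        self.items = []

    def add(self, sku):
        if self._inventory.on_hand(sku) < 1:
            return False               # refuse out-of-stock items
        self.items.append(sku)
        return True

def test_out_of_stock_item_is_refused():
    # Given an inventory with zero units of "SKU-42"
    cart = Cart(Inventory({"SKU-42": 0}))
    # When the user tries to add it
    added = cart.add("SKU-42")
    # Then the cart refuses, and stays empty
    assert added is False and cart.items == []

test_out_of_stock_item_is_refused()
```

Because the test is written first, it serves as the functional requirement; the rate at which such tests get written becomes the pacing signal described above.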

And, yes, there will always be a place for QA exploratory testing on integrated code.  But the findings from that exploration should be fed into regression suites that are automated and repeatable.

Flow is the most important thing to a business

You may be asking yourself at this point, “Yes.  I get it.  I need to reduce WIP, and not worry so much about rules of thumb to get software done.  But what about that test in the market?  How do we get better at that?”  The answer there is easy – release more often!  And that will take your continuous integration solution to a whole new vista – continuous delivery – and engender a whole new set of problems that are wonderful to have, such as “how quickly can my market absorb new features, and how can I get them to accept things in a more laminar-flow fashion?”

Businesses that produce software do it for a purpose.  Usually a pecuniary purpose.  Understanding the reasons behind why excessive WIP is such a dangerous thing to have may put the onus on the business to see how to incorporate smaller product tests, in terms of releases to the marketplace, more frequently.

So, the right answer is?

Since there is no right answer to the question of “what’s the right ratio”, let’s invoke a Kobayashi Maru sort of solution.  For those of you who never saw or have forgotten Star Trek II: The Wrath of Khan, the Kobayashi Maru was a simulation exercise that Starfleet put its cadets through to test whether they could save the civilians trapped in a disabled ship in the Klingon Neutral Zone.  Because of the constraints involved, no cadet at Starfleet Academy had ever passed the test.  Even the legendary James T. Kirk failed it twice, only passing on his third attempt by reprogramming the simulator.

We can’t win this battle for correct ratios between QA and developers with simple rules of thumb.  But we can fall back on the values and principles of Agility, just as Kirk fell back on his.  We need to focus on the team’s people and interactions (specifically, QAs, BAs, and developers) doing as much work as possible up front to increase quality (“the most efficient and effective method of conveying information to and within a development team is face-to-face conversation”).  Keep WIP sizes small (“working software is the primary measure of progress”).  Test our code not just during development (“continuous attention to technical excellence and good design enhances agility”), but get it into the hands of customers quickly and often (customer collaboration).

Let’s change the conversation from one asking for rule-of-thumb ratios into one that asks for collaborative development, better quality, and faster market realization of smaller and smaller chunks of valuable software.  We can measure and find where WIP is causing resource constraints, and apply the traditional five focusing steps of the Theory of Constraints (identify the constraint, exploit it, subordinate everything else to it, elevate it, and repeat) to fix things.  In other words, let’s turn a faulty question, whose answer is bound to fail in practice, into a new quest to deliver, measure, and deliver more of what works.





Mind the WIP to Become Effective, Not Merely Efficient

“Efficiency is doing things right, while effectiveness is doing the right things.” – a maxim widely attributed to Peter Drucker.

We’d all like to be efficient.  We’d love to show our boss how cheaply we can get something done, compared to all her other options.  We’d like to have extra time, maybe to clean up, beautify, or just goof around.  Efficiency is the hallmark of mass manufacturing.  If we build it for less, we have more profit to put in our pocket.  But did you ever consider that when we build software with fixed resources and fixed dates, we’re making a huge, risky bet?  I venture to say that you’d rather ship 80% of your product features on a certain date (and have the least valuable 20% left over for “the next time around”) than be 90% done on everything and have absolutely nothing to ship on that due date.

You probably have 2 questions at this point.  “How could that be?” and “How can I be effective, and not merely efficient?”  The answer to both questions is found in three tiny letters – WIP (work in process).

It all begins with estimation

Let’s take an easy example for this piece.  Here’s a list of things that we’d like to develop some software for and release to our new website in the next 90 days:

  • Allow website users to search for and display lists of items in our warehouse
  • Give website users the ability to display pictures and details for an item that they select
  • Give website users the power to display real time inventory status for an item that they are looking at
  • Give website users the ability to create a “shopping cart” of quantities of items that they select
  • Allow website users to complete a transaction and pay with credit card(s), PayPal, direct checking account, or any combination thereof

Assume that since we only have 90 days, the Delivery Team we have is the army we will use. The 90-day window is non-negotiable, because we paid a zillion dollars for a Super Bowl ad that will drive traffic to our website on that fateful day.

If you’re a traditional, plan-driven project manager sort of person, you probably do this project by first decomposing the software stack into components that we’ll put together, and then create a project plan that pulls people, time, and dependencies together and convinces us that everything will be fine.

  • Database for the warehouse items
  • Component to access database
  • Website functionality for search
  • Website functionality for display
  • Website functionality for real time inventory
  • Website functionality for adding, displaying, deleting, and modifying items in the shopping cart
  • Component for integrating our warehouse system with the website real time inventory status
  • Component for implementing the shopping cart on the server
  • Component for credit card processing
  • Component for PayPal processing
  • Component for checking account ACH processing
  • Component for payment coordination between the various processing options

OK.  Got the list.  Get some estimates from various people.  Assign developers against the tasks.  Adjust estimates to make everything fit inside that 90 day window (Tell me that you’ve never done this.  Be honest!).  Announce “It’s going to be tight, but the plan demonstrates that we can do this.  So, let’s get going!”

With software, the odds of everything working in your project plan are a lot like your odds of hitting the Trifecta

But here’s the rub.  Estimates are made for things that we haven’t previously done.  We use our best effort to understand and give meaningful honest estimates.  And we do.  But the uncomfortable truth is that effort-based estimation for software construction is highly variable, due to a variety of factors:

  • Uncertainties with how to do things that we’ve never previously done.  For example, we’ve never tried to interface with our mainframe based warehouse system before.  We may find little nooks and crannies of problems and issues which we must solve to permit the integration that we never considered when we gave our limited “happy path” estimate.
  • Uncertainties around what things we should actually build.  For example, should that real-time warehouse inventory level stop people from adding items into their cart? Stop them from completing a transaction? Or just ask them if they want to place the item on backorder?  We’ve never done this with customers before, and we can’t be sure that customers will bring us enough value to cover our costs of development, since we’ve never before given customers this ability.  If we have to re-do a lot of work for a fully fleshed out feature, that could cost a lot.
  • Uncertainties based on who actually does the work.  Your best developers may be 10 times (or more) as productive as your least capable ones.

And then we completely hide that variability by expressing the estimate as a single number.  But the estimate, even though expressed as a single number, is really a probability curve.  It might look something like this:

[Graph: a skewed probability distribution of likely durations around the single-number estimate]

And, finally, we completely compound and confound the problem by linking the tasks together with scheduling dependencies.  “We can’t start/finish this task before finishing/starting that one” (either because of software functionality dependencies or resource dependencies). Our project is a hope, wish, and desire that everything goes according to plan, even though we mathematically know that things will go wrong and start to cascade through the chain of scheduling dependencies.  If we compute the compounded probability of the aggregated estimates, we see that the nice peak on our bell-shaped estimate distribution curve becomes very short and wide.  And our chances of hitting our estimate square on are about as likely as picking the “win, place, and show” horses, in order, at the track.
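You can watch that curve flatten with a small Monte Carlo sketch. The task count and the triangular distribution parameters below are invented purely to illustrate the shape; this is not data from a real project:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def task_duration():
    # One task's "10-day" estimate as a skewed distribution: rarely
    # faster than 8 days, occasionally as slow as 25.
    return random.triangular(8, 25, 10)

def project_duration(n_tasks):
    # Finish-to-start dependency chains simply sum the task durations.
    return sum(task_duration() for _ in range(n_tasks))

trials = [project_duration(12) for _ in range(10_000)]
quoted = 12 * 10  # the single-number plan: twelve "10-day" tasks

on_or_under_plan = sum(t <= quoted for t in trials) / len(trials)
print(f"plan says {quoted} days; "
      f"chance of finishing on time: {on_or_under_plan:.1%}")
```

With even mildly skewed per-task estimates, the chance that a twelve-task dependency chain lands on the quoted 120 days is effectively zero; the plan was a hope, not a forecast.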

Yet somehow, seeing everything fit together in a nice Gantt chart that magically ends right on target makes us feel safe and sound.  At least when we start.  But without fail, almost every project plan that I’ve been around goes sour quickly.  One late dependency quickly snowballs into an avalanche of late tasks that just compound and worsen. Blame starts getting meted out.  Morale goes through the floor, and when we reach the day before the game, we try to integrate everything, and nothing is working.  And now that the business sees the real time inventory status in action (when it sort of works), they aren’t sure that they want it, as they now realize that when people see that an item won’t be in stock for a couple of weeks, they may decide to go elsewhere!  But we’re so afraid that pulling out the code will break twenty other things.  So, it’s part of the deployment, like it or not.


Our attempt to be hyper-efficient and get each and every feature working the day before the end suddenly becomes an #EPICFAIL when things don’t work on Super Bowl Sunday.

That’s horrible!  What’s a Mother to do? 

What if we turned this on its head?  Let’s say we start with a single, rank-ordered list, something like this:

  1. As a website user, I want to search for items to purchase using words in an item’s description, so I can find things to buy.
  2. As a website user, I want to display an item’s description, price, and primary picture, so I can decide if I want to purchase it.
  3. As a website user, when I click the “Buy It Now!” button on an item’s display page, I want a quantity of one item placed in my shopping cart so I can purchase it when I’m done shopping.
  4. As a website user, when I click the “Proceed to Checkout” button, I want my shopping cart displayed with checkout options.
  5. As a website user, when I click the “Purchase Now!” button with the “Debit My Checking Account” option clicked, I want an ACH setup to my checking account for the amount due and the items in my cart shipped to me, so I can become a happy and loyal customer.  Note: this transaction is free through our bank.  Yay!
  6. As a website user, when I click the “Purchase Now!” button with the “PayPal” option clicked, I want my PayPal account to be debited for the amount due and the items in my cart shipped to me, so I can become a happy and loyal customer.  Note: we pay a 1% merchant fee on PayPal transactions.
  7. As a website user, when I click the “Purchase Now!” button with the “Mastercard” option clicked, I want my Mastercard account to be charged for the amount due and the items in my cart shipped to me, so I can become a happy and loyal customer.  Note: we pay a 2% merchant fee on Mastercard transactions.
  8. As a website user, when I click the “Purchase Now!” button with the “American Express” option clicked, I want my American Express account to be charged for the amount due and the items in my cart shipped to me, so I can become a happy and loyal customer.  Note: we pay a 4% merchant fee on American Express transactions.
  9. As a website user, I want to be able to change quantities and delete items in my shopping cart when I am checking out, so I don’t get frustrated with a shopping cart that is not exactly what I want to purchase.
  10. As a website user, I want to display an item’s auxiliary pictures, to help push me to click the “Buy It Now!” button.
  11. As a website user, I want to display an item’s real time inventory status, to help me decide if I want to buy the item now.  Note: the website user may decide to not buy the item when they see that it’s not in stock.
  12. As a website user, when I click the “Purchase Now!” button with the “Multiple Credit Card” options clicked, I want the ability to setup a complicated transaction [details to be thought through], so I can become a happy and loyal customer.  Note: we have to figure out technically what happens here, how to back out of scenario such as having one card going through and a subsequent card failing.
  13. As a website user, I want to backorder an item that real time inventory shows as out of stock so I can eventually purchase the item.  Making this happen will invoke a workflow involving eMails, order cancellation, and other things.

Notice what happens when we start to develop one thing at a time, in this order, and get feedback from the business as we complete each of these items.  If we got some of the details wrong (it happens all the time!), we can immediately get that right before we move on.  When the business says “Good!” on each item, we then get our deployment to stay “always ready to ship”.  If we run out of time, we may not have every feature completed, but we always have something to show on Super Bowl Sunday.  We’d like to get through the whole list, but we don’t even really know the details for some of this yet.  But we have enough to get started.


What was different?

That’s easy.  We minded the WIP.

We made sure that we had the highest value items worked first, and did one thing at a time (“Single Stream Flow”, in Lean terms).  We were always ready to ship.  That allowed us to work right up to game time without fear that integration would ruin things.  We didn’t have to worry that we wouldn’t get to go home for the next two days, subsisting purely on Mountain Dew and day old pizza.  We concentrated on delivering value one step at a time, from most important to the least.

That pivot to focusing on business value, rather than figuring out the optimal plan that gets us to done, can be a huge mindset change for some organizations.  Sometimes, managers are afraid that someone won’t have something to do while the team is working through the business-value-ordered backlog of functionality to create.  But remember, every time we add to our WIP, we create the potential that dependencies will keep us from being ready to ship.  We create the potential to very efficiently work on lots of stuff at once, but have nothing to show for it when the clock runs out. That’s not very smart.  And it’s awfully risky. It sure isn’t an effective way for your organization to work.

The prime directive is effectiveness – efficiency is only secondary

I don’t know how it works for your business, but for mine, I’d rather deliver 80% of the highest business valued items and leave the remaining 20% of lowest value items for sometime in the future.  I really don’t want to be 90% done with everything, but having nothing to show for it when the clock runs out, because that last 10% was needed to get anything to work.

For me, Meat Loaf said it best, almost 40 years ago.  “I want you, oh, I need you.  But there ain’t no way I’m ever gonna love you. Now don’t be sad, oh, ’cause two out of three ain’t bad.”





But It’s Just a Simple Matter of Coding – How Hard Could It Be?

Many times, I hear business people (such as Product Owners) bemoan how expensive software development is.  And the brutal truth is that for anything non-trivial, software costs a ton of money to develop.  I happened upon some numbers for a fairly simple but elegant iOS app called “Twitterrific”.  Twitterrific is a client to the Twitter network, uses a well-defined interface created by Twitter, and does not require any backend development.  It runs on iOS, which is a well-defined, well-documented, well-supported, and stable environment for both development and use.  According to an article published on the PadGadget site, Twitterrific was developed by a single person (Craig Hockenberry) in about 1100 hours.  The article goes on to do the math at $150/hour, adds in design, testing, and so on, and comes up with a number of around $250K.  That’s just for labor.  No costs for a workplace, infrastructure, developer tools, and so on.  Once you start adding in enterprise costs for offices, management labor, etc., and start building complex multi-tier applications, it’s easy to see why business people would look at the finished work (knowing how they interact with computers, which has become easier and easier over the years) and wonder “How hard could it be?”

That burning issue perplexes many organizations.  It isn’t usually an issue with companies with disruptive products – when margins are huge, things become profitable quickly and easily.  It isn’t usually an issue with a young, well funded company – if there is a long enough runway covered in money, and you aren’t accountable for showing profits until sometime far down the road, costs are not the issue.  However, it is a huge issue in many enterprise IT settings, where capitalized expenditures are rigorously scrutinized to see what work can be done under the funding constraints that exist. So, why is software such an expensive thing to create?

Here are four reasons that I find compelling.

  1. For anything other than trivial software, the potential for failure due to complexity and development risks is huge.  Back when mainframes ruled the earth in the 60s and 70s, the average program was thousands of lines of code. Today, it is very common to see applications that are tens of millions of lines of code. And, since any line of code can potentially affect any other line of code through side effects, a program that is 10 million lines of code is one million times more complicated than a program that is 10 thousand lines of code (a so-called “n squared” problem).  The care needed to create code that works at all is enormous, and the potential for bugs to emerge is astronomically high.  While we have better tools now than we had forty or fifty years ago, they aren’t a million times better.  Trust me!  I developed software on machines back then as well as now.
  2. Programming is more like an art than engineering.  Engineering and building large and useful things on schedule has become commonplace nowadays, and people count on it.  For example, Tommy Lennhamn (from IBM) states that “Examples of successful engineering projects, at least with respect to schedule, are all the construction projects for Olympic Games — as far as I know, all Olympic Games have started on time, on fundamentally new premises.”  The secret is that with engineering problems like constructing buildings, the components to build from are well known, assemble together well, and don’t create too many unknown side effects when bolted together.  In that type of situation, the hard part about the project is agreeing on what to build.  With software, many times the business doesn’t know what to build until it builds something to see if it solves the problem.  And developers operate in a doubly damning world: they don’t have components that assemble together predictably, and they can’t know for sure whether their construction in one place has affected something else (and someone else) in the code base.  Notwithstanding the efforts of TDD, we can never prove correctness in software to the same extent that we can with mechanical construction.
  3. Not all developers are created equal.  The barrier to entering the field is relatively low, which attracts a lot of people to compete for relatively high-paying jobs.  But, as people like Steve McConnell point out (in IEEE Software, Vol. 15, No. 2), “…differences of more than 20 to 1 in the time required by different developers to debug the same problem” exist.  Does it make more sense to hire one awesome person for $200K or 10 mediocre people for $90K each?  Remember, once you hire 10 people, communication issues will slow you down (Fred Brooks’ “Mythical Man-Month” stuff, but I digress).  While there may be a glut of cheaper labor today relative to the salary requirements of the awesome developers (due to such factors as geographic outsourcing), the reality is that most enterprise IT settings would never hire the really awesome developer at a relatively high pay grade, due to political issues. And, even if they could, they would be hard pressed to find talent willing to work in the stodgy environments that many enterprise IT settings have become.
  4. There is still a wall of disengagement between business and development that needs to be broken down.  The Agile movement of the last 13 years has done a lot to publicize the need to have “Product Owners” engage on at least a daily basis (with continuous engagement being best), but it is still very common to see business people physically and geographically separated from development.  Worse yet is when enterprise business people complain about how much “more important” work they have to do, and how they yearn to return to the days of yesteryear, when they could ask the Project Manager for a status update, and then complain when things were behind schedule.  Lean thinking tells us that faster feedback cycles create less waste, and therefore more and better product.  Given the realities of 1-3 above, and the cost associated with those items, I implore such business people to stay with the development teams and help them help you reach the enterprise’s goals.
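The scaling claim in point 1 is just arithmetic, and worth making concrete. Here is a quick sketch; the line counts are the round numbers from the text:

```python
# Rough illustration of the "n squared" argument from point 1:
# if any line of code can interact with any other, potential
# interactions grow with the square of the program's size.
small = 10_000       # a mainframe-era program, ~10K lines
large = 10_000_000   # a modern application, ~10M lines

ratio = (large / small) ** 2
print(f"Size grew {large // small:,}x, "
      f"potential complexity grew {ratio:,.0f}x")
# A 1,000x increase in size yields a 1,000,000x increase in
# potential interactions -- the "million times more complicated" claim.
```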

It’s a shame that software is so exquisitely expensive to make, as the opportunities to enrich our lives using relatively cheap hardware are everywhere.  Everything from smart thermostats (such as the Nest) to smart lighting (such as the Philips Hue) to truly personal computers (such as the Samsung Galaxy Gear and Google Glass) surrounds us.  And it all takes programming computers in one way, shape, or form.  Unhappily, programming computers is anything but easy.  But if you consider that humans have been building houses, roads, and bridges for thousands of years, and are still faced with colossal failures from time to time, it sort of puts things into perspective.  Programming will continue to evolve, perhaps into more of an engineering practice someday, and it may eventually become something relatively simple.  Of course, we’ll have to find something else to complain about if that ever occurs!

Kinky for Governor Poster

Kinky Friedman ran for governor of Texas in 2006 on the slogan “How Hard Could It Be?” While this was a great line, no one, including Kinky himself, ever believed it. Even Kinky admitted, “I’ll hire good people!”

Want additional actionable tidbits that can help you improve your agile practices? Sign up for our weekly ‘Agile Eats’ email, with “bite-sized” tips and techniques from our coaches…they’re too good not to share.



The Role of Scrum Master – A Permanent Position?

As agile frameworks grow in both popularity and widespread use, with Scrum leading the way, Scrum Masters have grown in demand with companies, and we see an increase in job postings specifically for them. However, we have to understand that “being” a Scrum Master is not a position to occupy; it’s living up to a set of responsibilities.

Although the role itself is an artifact of Scrum, the responsibilities of the role are nothing new for a healthy team environment. A Scrum Master needs to support the team’s ability to focus on what is important and deliver on commitments.
When organizations implement agile frameworks, a Scrum Master role is needed to help teams re-learn the discipline of healthy delivery. But as time goes by, the responsibilities need to be absorbed by the entire team. There may be one person who takes the lead, but the mantle lies on the entire team’s shoulders, and the position of Scrum Master slowly phases out as a new team capability emerges: leadership.

Leadership at the team level is something we have slowly drifted away from in the name of organizational efficiency over the last few decades. Management theory has taught us that the most efficient way of managing work is to strip away any tasks and responsibilities outside each person’s job description. So developers should only do development, QA only testing, etc. Leadership is seen more in terms of a craftsman’s expertise, with a focus on tightly controlling the work itself.

I don’t know if “Scrum Master-ing” is a professional endeavor for a life-long career or more of a step along the way. I think that the more you practice it, the more you really dive into becoming a change agent. The responsibilities of the role lend themselves naturally to that progression. For instance, the more teams you lead as a Scrum Master, the more you learn to read the patterns of impediments, and the more you will seek to prevent or mitigate the formation of problems you have observed in the past. Why wait until challenges materialize when we can go out and influence the outcome early, right? And you would do that by influencing the environment in which the patterns are forming to help “change” the circumstances that are developing. Now you’re thinking like a change agent.

Like a good leader, a Scrum Master builds a legacy for the team to perpetuate once the position is no longer needed. And now the role persists through the team.




The Essence of Agility: Becoming Safer by Controlling Less

The other day, I presented at Agile India 2014 on the topic of “Pivoting Your Organization to Become Agile Testers”. Near the end, when I was tying up all the points in the talk, I was speaking about the wastes that come from “big batch thinking” and gave an analogy off the cuff (and way off my talking points!) to illustrate why the most useful testing should be automated at a product functionality level.  The analogy I conjured up is powerful, and is an application of lean thinking that has value in so many enterprise-type organizations.  The basic message is that by planning less (not trying to preplan so much about how things will work, but instead concentrating on showing that things actually are working) and lightening up on process controls, we can gain a lot in the sense of getting actual stuff done, and not just measuring the busy work that accompanies getting that stuff done.

The analogy I used has to do with cars and the flow of traffic.  It starts with the 20th century concept of separating drivers from pedestrians, which was developed to keep pedestrians safe from fast moving cars and to allow cars to be driven without constantly fearing that they might run into pedestrians.  As wider and faster roads developed, the need to use road signage and traffic signals to control the flow of movement became both an expectation and a necessity.  But the traffic signs and signals had an unintended consequence.  Drivers felt that if they simply obeyed what the signs allowed them to do, they could drive safely without worrying about current conditions or how to react to them.  Some people slowly began to question these assumptions.  For example, Hans Monderman, the Dutch road traffic engineer behind the Drachten Experiment (more about that in a minute), has said, “A wide road with a lot of signs is telling a story.  It’s saying, go ahead, don’t worry, go as fast as you want, there’s no need to pay attention to your surroundings. And that’s a very dangerous message.”

An interesting way of “seeing” the issue of inattention when concentrating on a task at hand is to experience “The Monkey Business Illusion”.  Take a minute to try the YouTube video and play along!


Are you back?  How did you do?  Think of how this applies to driving, and maybe even some close calls you have had while driving.  When our attention is focused on a specific task asked of us, such as “Count how many times the players wearing white pass the ball”, or “Speed Limit 50 MPH”, we can easily lose sight of the bigger picture and some very important things in it.

Anyhow, back to “The Drachten Experiment”.  Drachten, a town in the Netherlands, is an old medieval city that has been growing steadily over the past 60 years.  The town had a problem that it wanted to solve: it wanted to lessen downtown traffic congestion and reduce traffic accident rates at the same time.  For 100 years, the accepted way to accomplish this has been to separate people and vehicles, introduce road signs and traffic signals, and have strict rules on who has the right of way and when.  But the town had had enough of that school of thought, and felt it could do better by trying something which seemed weird at first glance.

What Monderman did for Drachten was to remove all of the traffic lights and road signs from the town’s center.  This had the effect of reducing accidents from about 8 per year before the experiment to about 1 per year after.  Throughput, even in the face of additional traffic, has increased, with average measured delays reduced from about 50 seconds to between 10 and 30 seconds.  One explanation of the reduction in accident rates is that many drivers habitually race through traffic lights right before they turn red.  They get lulled into a false sense of security by the confidence that they have the right of way – making them less aware of potential hazards, such as people who are anticipating the changing traffic signal and are ready to assert their own right of way.  Perhaps you can recall an experience like that.  It usually evokes pangs of anxiety as we think about what could have happened.

Drachten before Hans Monderman.


Drachten after Hans Monderman.

The trouble is that we have the same sorts of issues when we do traditional project management.  We preplan the dependencies, map out the expectations for what final integration will be, and then start reporting on how close each sub-project is to the ultimate goal.  We install our own forms of traffic signals, called “gates” in traditional project management, and hold fairly high-ceremony events, using terms like “go / no go” decision points.  In our zeal to track and manage progress towards the ultimate goal, we can be blinded to the actual problems facing the project.  The false security that we get from showing progress against the plan, by using the process we employ, can blind us to the fact that a gorilla has come on the stage, that the curtain has changed color, or that our car now poses a threat to a pedestrian entering the crosswalk.  Successful projects don’t get that way by just following the process.  They have to pay attention to the right things that change, and make sure that the relevant issues are surfaced as quickly as possible to avoid last-minute close calls (or worse!).  Letting go of some process gives much better results than rigid control of flow.  That’s what agile planning is all about.

Lao-tsu, in the Tao Te Ching, said it best:

Stop trying to control. Let go of fixed plans and concepts, and the world will govern itself.  The more prohibitions you have, the less virtuous people will be.
If you don’t trust the people, you make them untrustworthy.

We hire good people to work for us.  We owe it to them to trust them to do the jobs they were hired to do.  As leaders, it’s our job to ensure that the organization is configured to allow that to occur and that there are as few road signs and traffic signals as we can get away with.  We don’t need false senses of security.  We need results.  Reduce the control points down to where people act as people, see the whole picture, and can act appropriately.  That’s the essence of agility, and where we need to be.




Tracking Points vs. Tracking Hours

One of the questions that frequently comes up in the CSM trainings is:

“Is it better to track points or hours?”

The short answer is that either is fine. Both provide valuable information which can be helpful to the team. Whether you track both, or focus on just one of them, really depends on what the team feels gives them a better understanding of their ability to meet their commitment (or forecast) during a sprint.

Tracking Points

If a team is tracking points in a burndown chart during a sprint, what they are paying attention to is the total number of story points that belong to work they have committed to but that has not yet been accepted as potentially shippable by the Product Owner. In the sample, you can see that the team is working on a 10 day sprint and has forecast that they will be able to deliver 30 points of potentially shippable work. In this example, they are on Day 6 of the Sprint. Each day they have been updating the burndown and now have only 18 points which have yet to be accepted by the Product Owner as being potentially shippable. It is important to note that when you are tracking work this way, the burndown chart does not directly speak to how much work (labor) still has to be done. It only cares about how much is not yet potentially shippable. So, regardless of how many hours of labor it takes, the team is expected to meet their forecast and get the remaining 18 points to a state where the PO accepts it as being complete enough that it could ship.

Tracking Hours

If a team is tracking work in hours, what they are concerned with is the estimated total number of hours of work they expect to have to complete in order to meet their forecast and get all their work accepted by the Product Owner as being potentially shippable. In this type of burndown, the team is able to see how many estimated hours of work they have left, and how many days they have to do it in. In the example provided, the team began the sprint with 150 estimated hours of labor. Each day they have updated the burndown to show how many estimated hours of work remain. Because the team can add or remove tasks as needed during the sprint, they may have days when the work remaining is higher than it was the day before. This would probably indicate that they discovered some additional tasks which were required to complete the work they committed to. Likewise, they can remove tasks, so you may see drops in work remaining which are a mix of work having been completed and tasks that the team has removed.

The Scrum Framework does not specify a preference for tracking either points or hours. Most teams will find one more helpful than the other in understanding their likelihood of meeting the commitment. If you are using software to track your work, it will probably offer you the option of tracking either or both. It really comes down to what the team prefers.
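As a minimal sketch of the two styles, here is how each burndown series might be computed. The 30-point and 150-hour starting totals come from the examples above; the per-day figures are invented for illustration:

```python
# Minimal sketch of the two burndown styles described above.
# Starting totals follow the post's examples (30 points / 10 days,
# 150 estimated hours); the daily changes are made up.

def remaining_series(start, daily_deltas):
    """Turn a starting total and per-day changes into a burndown series."""
    series = [start]
    for delta in daily_deltas:
        series.append(series[-1] + delta)
    return series

# Points burndown: only work accepted by the PO reduces the total,
# so the line moves in story-sized steps and never goes up.
points = remaining_series(30, [0, -5, -3, 0, -2, -2])
print("Points remaining by day:", points)   # day 6 -> 18 points left

# Hours burndown: tasks can be added or removed mid-sprint,
# so the remaining total can rise as well as fall.
hours = remaining_series(150, [-12, -15, +8, -20, -14, -17])
print("Hours remaining by day:", hours)
```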

IMHO – Coffee is for Closers!

In my own personal experience, I have developed a preference for tracking points over hours. The reason is that I was a Scrum Master on a project where the team continually struggled with their ability to deliver potentially shippable work. During the Sprint it was very common to see tasks added (which is what should happen when new work is discovered). Unfortunately, this happened in almost every Sprint, and the additional work was very frequently significant enough that the team could not meet their commitment. The increased number of task hours would usually be the reason the team gave for not meeting a commitment at the end of the sprint. What was happening was that when the work was defined during Sprint planning, the team was not breaking it down to a point where they really understood what they were committing to. When this happens, the team should discuss how to become better at breaking down and estimating the work so that they do not over-commit. In this particular case, that was not happening. Burning down the work indicated that the team was actually getting work done each day, and every day we were able to see the gap between our remaining capacity and the labor that was estimated to be required. The problem was, with this particular team, on this particular project, the fact that they were working really hard was not getting us any closer to meeting a forecast or delivering work which was accepted as potentially shippable. It may sound harsh, but there is no award for effort in Scrum. If the work is not done, it is not done. And at the end of the Sprint, no matter how hard the team has worked, they will have met their commitment (forecast) or not. Work will be in a state that is potentially shippable, or it will not. You can’t deliver unfinished product to your customer and ask to be paid because you worked really hard. (If you do have a customer like that, be VERY nice to them.)
In Scrum, the goal is potentially shippable increments of work at the end of each Sprint. Focusing on this is why I (personally) tend to favor tracking points, but as I mentioned above, either is fine and both is even better.

Watch Dave Prior bring it home – go on, press play.




Planning Horizons: Decision-making within Agile Frameworks

Organizations need to plan. It’s not a “nice-to-have” option, and they need to do it effectively at multiple levels. In the end, this planning needs to feed an enterprise’s ability to do financial forecasting of costs, cash flows, budgets, etc. A company that cannot plan will most likely not understand what it’s doing with its assets, and will be inviting a lot of waste into its processes. Yeah, planning IS essential.

As we help our clients through their transformations, we must keep their planning horizons in mind and always in our sights.

Planning Horizons

A planning horizon is the time period over which effective decisions can be made to support organizational activities: for instance, delivery of features, customer support, market-reporting requirements/filings, R&D investment, even budgeting and financial planning. Each horizon operates on a different time scale, which is why we talk about multiple horizons, but in my experience companies tend to operate in three:

  1. Tactical horizon – what I need to know to enable decisions in a timeframe measured in weeks (the immediate future)
  2. Product/Project horizon – what I need to know to enable decisions in a timeframe measured in quarters
  3. Strategic horizon – what I need to know to enable decisions in a timeframe measured in years (the green field of ideas)

Clearly the level of detail needed to enable effective decisions for each horizon is different. I am purposefully not defining the exact number for each one. That really does depend on the company in question.

If we are talking about agile organizations working in some kind of time-boxed framework (like Scrum for instance), then the horizons can be defined in terms of those time boxes using concepts of iterations, releases and years, respectively.

When implementing agile frameworks, sometimes we lose track of the need for organizations to plan ahead, under the cloak of allowing that information to emerge over time. We understand that eventually it will, but often some information is really needed much earlier. We talk about trust, and a leap of faith that it will come, but we don’t meet our clients where they are right now: in a state of flux between change states.

I want to advocate not losing sight of what is important to the client (organization, division, department) right now, to help them through the change they have undertaken. This is a basic coaching tenet we hold: meet the client where they are now, so we can help them move to where they need or want to go.

3I/3R/3Y Model

To that point, I usually think in terms of a simple 3I/3R/3Y model, where the tactical horizon extends from now to about 3 iterations (3I) out and deals with what I’m trying to get done right now. The product horizon in most organizations lines up with a quarterly release schedule, so that is about 3 releases (3R) into the future. Finally, the strategic horizon deals with the next new thing we have to consider to remain competitive over the next few years. Some companies try to plan 5 years in advance, but given the current rate of technological change, I think 3 years (3Y) turns out to be more realistic (and it can be argued that it should be shorter still! – not going there).

How many times have we heard in companies that “we don’t know what we are going to work on the next sprint until sprint planning”?

This simple 3I/3R/3Y model gives us nice horizons for the tactical, product and strategic planning that needs to take place. All healthy organizations are able to forecast with confidence along those timeframes.
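To make the model concrete, here is a hypothetical translation of 3I/3R/3Y into calendar terms. The two-week iteration and quarterly (13-week) release cadence are assumptions for illustration, not part of the model; tune them to your own cadence:

```python
# A hypothetical sketch of the 3I/3R/3Y horizons in calendar terms.
# Both cadence values below are assumptions, not part of the model.
ITERATION_WEEKS = 2   # assumed two-week sprints
RELEASE_WEEKS = 13    # assumed quarterly releases

horizons = {
    "tactical (3I)":  3 * ITERATION_WEEKS,  # weeks out
    "product (3R)":   3 * RELEASE_WEEKS,    # weeks out
    "strategic (3Y)": 3 * 52,               # weeks out
}

for name, weeks in horizons.items():
    print(f"{name:>15}: plan ~{weeks} weeks ahead")
```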

Your Effective Planning Horizons

So next time, ask yourself: what are my effective planning horizons? How do they support the enterprise? And how long can the enterprise really wait for the information to emerge?

I am curious to find out from folks out there if they observe these same trends, and how they have shaped their companies.




Ideal Days: Unnecessary Agile Evil?

Agile teams have a valid need for sizing or estimating user stories. Most teams use a dimensionless scale such as story points for estimating. In some quarters, “Ideal Days” are held out as a reasonable replacement for story points. But when we look at the reasons for sizing stories, and at the ways in which ideal days open (or leave open) doors to traditional management styles that don’t support Agile development, we see that there is no real advantage to using ideal days, and that they tend to retard some of the paradigm shifts that need to happen for successful Agile adoption.

Why do Agile Teams Estimate?
Agile teams, with the Product Owner, estimate or size user stories for two reasons.
• The Product Owner needs to know how big a story is so that she can compare its relative worth to other stories. That is, in order to properly groom and prioritize the backlog, the Product Owner needs to know both the value and the cost of completing the story.
• Teams use story estimates, in conjunction with an understanding of their historic velocity, to gauge how much work they can reasonably commit to when planning a sprint.
In both cases we don’t need anything beyond relative consistency of estimates. Relative consistency means that, for example, any stories with a size of 5 are all about the same amount of work for the team to complete. Similarly, we would expect a story of size 10 to be about twice as much work as a size 5 story. A dimensionless relative size is all that is needed.

Agile “Estimates” are Intentionally Imprecise
It is a good practice to use a predefined sequence of numbers, such as the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, etc.) or a geometric sequence (e.g., 1, 2, 4, 8, 16, 32, 64), for story sizes. This practice is good for two reasons.

First, it tends to reinforce “just enough” when it comes to estimates. Any estimate is, by definition, likely incorrect. Teams can waste a lot of time chasing an illusion of precision and accuracy when in practice, unknowns will almost always cause estimates to be inaccurate. They need to be good enough for the purposes discussed; any work to go beyond that is waste.

Second, sequences like the Fibonacci numbers or a geometric sequence provide values that get further apart as they get larger. This helps build uncertainty and risk into estimates of larger pieces of work.
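Both scales are easy to generate, and generating them makes the widening gaps visible. A small sketch:

```python
# The two sizing scales mentioned above, generated rather than
# hard-coded, to show how the gaps widen as sizes grow.

def fibonacci_scale(n):
    """First n values of the Fibonacci-style sizing scale (1, 2, 3, 5, ...)."""
    scale = [1, 2]
    while len(scale) < n:
        scale.append(scale[-1] + scale[-2])
    return scale[:n]

def geometric_scale(n):
    """First n powers of two, the geometric sizing scale."""
    return [2 ** i for i in range(n)]

print(fibonacci_scale(7))  # [1, 2, 3, 5, 8, 13, 21]
print(geometric_scale(7))  # [1, 2, 4, 8, 16, 32, 64]
```

Notice that the gap between adjacent sizes grows from 1 at the bottom of each scale to 8 (or 32) at the top, which is exactly the built-in imprecision the text describes.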

Story Points or Ideal Days?
Story Points are a true dimensionless measure. Any given team’s notion of what constitutes a “point” is independent of any other team’s estimates. My five point story may be twice as much work to complete as your five-point story. As long as my sizing is relatively consistent within my team and our stories it doesn’t matter if they are consistent with yours.

Ideal Day estimates are not really dimensionless. They are effort estimates, not abstract, dimensionless sizes. The standard for an Ideal Day varies, but for the sake of our discussion we will assume this definition: one Ideal Day represents the amount of work an average developer could complete in one eight-hour day completely free of interruptions such as drop-ins, phone calls, meetings, etc. So, what is wrong with using Ideal Days? Ideal Days open the door, intentionally or not, to managers and PMs who seemingly can’t keep themselves from “doing the math” with Ideal Days.

Well, let’s see here. You are telling me that the team will produce 42 Ideal Days in the two week sprint. But I see that you have a 7 person team, so if we assume an 80% loading factor, the team’s output should be 56 Ideal Days (80% x 7 developers x 10 days). Shouldn’t the team be able to include a few more stories in the sprint?

Needless to say, such a conversation seriously undercuts the team’s decision making and self-management!

Ideal Days also encourage comparisons between teams, since the basis for estimating is presumed to be standard.

Mary, I see that your team is averaging a velocity of around 7 Ideal Days per team member for the last few sprints. Bill’s team is averaging 8. That’s about 15% better! How come your team isn’t as productive as Bill’s? Can we boost their productivity a bit?

Given the intentional imprecision in Agile estimates, even with Ideal Day estimates much of the difference between Bill’s and Mary’s teams may be accounted for by that imprecision.

When managers and PMs start “doing the math” with Ideal Days, we easily fall into a game where developers estimate not based on their view of the work, but based on what they think the estimates should be to keep voices outside the team happy. That kind of lack of transparency is anything but Agile!


Managers and PMs are not, as a class, evil or stupid. The shift to a team-based, high-trust paradigm is hard. Ideal Days allow them to more easily stay in the old paradigm and keep using the tools they know how to use. Story points encourage us to shift our thinking to a more Agile mindset. Managers and PMs have a legitimate function. When they push back on dimensionless “estimates” in Agile, our response should be to point out the risks inherent in things like Ideal Days and to help them develop the skills and techniques to manage in an Agile environment.




Estimation is a Cruel Mistress

Estimation is a thing of beauty.  It’s the crystal ball that every business wants, the promise of perfect decisions.  It’s a sort of yearning, a siren song that calls to us and tells us everything will be wonderful.  If only we could estimate well, then we could have perfect insight beforehand into what things will cost (in terms of research and development, marketing, etc.) and what those things will produce in benefit (in terms of both extrinsic and intrinsic values).  Our decisions would be risk free, and therefore perfect.

But estimation is also a shiny object that can become very distracting.  Unhappily, what far too many people fail to realize is that, no matter how carefully we try, estimates are not exactimates, and decisions have to be made with less than perfect accuracy.

Estimation is Bad – It is Nothing More Than Waste

Why do we feel compelled to estimate our work before we start?  There are many reasons, but the first one we should dispel is the belief that we must estimate before we work.  Estimating before work starts is something that we’ve been told to do since the days when project managers were first conceived of.  Estimate, analyze, design, code, and test.  We were also told that we needed to get “signoff” before we moved from one stage to the next.  And estimation was always fundamental in those moves.  If the estimates were high, and the benefits were low, then our manager would tell us not to work on the project now.

But what if we challenged that assumption?  What if we could produce solid working software, ready to use to make our company money, and do it efficiently without any sort of documentation whatsoever?  Why wouldn’t we do that?  In fact, if those assumptions are actually false (that is, you could produce positive value without estimating), then you’d be certifiably insane to create all of that documentation which isn’t needed!  Why would you create unnecessary documentation when you could be creating code instead?  After all, we get paid for the software and the value it produces.  No one buys our documentation about the process to get to software!

Estimation is Good – It is Needed to Plan

Well, there actually is a use for estimation.  Even in that perfect world where software is getting created with only conversations between the business and the technical sides of the project, there’s still one thing missing.  And that’s planning.

A business needs to plan and mold its customers’ expectations based on the capacity of the organization to produce useful and valuable software on a timetable that makes sense to both the business, in terms of revenues, and the customers, in terms of value received.  Unless we have some form of plan, we can’t set those expectations.  And setting expectations with customers, and therefore expectations of things like revenue from customers, is important.  It’s how we plan our business.  We need to couple those plans together to have a viable business, pay salaries, and so on.

The job of the business will be to set broad expectations for what might be available in what approximate timeframe, and then manage those expectations as the dates begin to get close.  So, when the organization plans its roadmap, the huge epics on the backlog will need some sort of rough estimates to couple to the velocity of the teams creating the software.  That coupling of plans allows the business to derive the expectations of when things might be available.  It’s simply not acceptable for customers to ask, “So, what features are on the horizon for the software?” and get a response of “Oh, gee.  We don’t know yet.  But we’ll let you know what we did when things are done!”

The Evolving Role of Estimation in an Agile/Scrum Setting

Estimation in Agile and Scrum is not a one-time learned skill.  Because of the true difficulty of actually creating accurate estimates, it is rarely done well.  The real point of this piece is to say that accurate estimates are never really needed for development; only estimates good enough to set reasonable expectations are needed for planning.  But there is another reality that estimation clarifies.  As the Delivery Team moves from more traditional development into an Agile model, their ability to estimate appropriately increases.  There are at least two causes of this: the Delivery Team remains constant over time, so people learn their collective capacity, and the Delivery Team begins to understand that estimation, by itself, is not needed to actually produce software!

So, what progression can one expect a Delivery Team to go through when making the transformation from traditional to Agile development?

At first, when the team works on very large, epic-type stories, it is often so poor at estimating that it routinely misses iteration commitments.  When we examine how the software is being created, we see that the team is actually practicing a form of “Scrumfall”, where a story starts with an iteration or two of analysis and design, followed by an iteration of implementation, followed by testing, and so on.  Tasking is used to try to figure out what is possible, and the iteration planning sessions are laborious, lengthy, and end with poor results.

As the Agile transformation proceeds, it is common for the team to still be using fairly large, epic-type stories. Many times, these stories are “assigned” one story per developer per iteration. Since the stories have large story-point sizes, tasks are still used to assure that each story will fit into the iteration.  The results may be marginally better, and the iteration goals may be somewhat more predictable.  But the iteration planning sessions are still intolerable.  Yet somehow, the fragrant scent of change wafts in the air.

Then, as the transformation progresses, smaller user stories, perhaps around five per developer per iteration, start becoming common.  The stories still vary in size, but we no longer see a single story that eats an entire iteration.  The Delivery Team becomes much better at sizing these stories using relative estimation.  Tasks are still used to ensure that adequate capacity will likely be available from the entire team, but the tasks are now more of a cross-check than the way the iteration is planned.

As the transformation progresses further, even smaller user stories get written and used for iteration planning.  Those smaller stories are easier to foresee, so less uncertainty is present, which makes the estimates better.  Additionally, because of its experience working together, the team starts feeling comfortable relying entirely on velocity to arrive at an iteration commitment.  It’s at this point that tasking becomes waste.  Iteration commitment can be done purely by looking at team availability and story-point commitment!
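At this stage, the velocity-plus-availability arithmetic really is that simple. Here is a rough sketch; the velocity, team size, and availability numbers are hypothetical, not a prescription.

```python
# Velocity-based iteration commitment, adjusted for team availability.
# All numbers below are hypothetical, for illustration only.

average_velocity = 40   # story points per iteration at full strength
team_days = 10 * 6      # 10 working days x 6 team members
available_days = 48     # after vacations, holidays, and support duty

availability = available_days / team_days            # 0.8 of full capacity
commitment = round(average_velocity * availability)  # 32 points

print(f"Plan to commit about {commitment} story points this iteration.")
```

No task breakdown is needed to get here, which is why the lengthy tasking exercise starts to look like waste.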

Further down the road, the transformation starts producing even smaller, detailed user stories that are all fairly similar in size. The act of negotiating while writing the right story (with appropriate acceptance criteria) now allows the team to stop estimating altogether. We begin writing stories that are more or less uniform in size.  Iteration commitment can be done purely by looking at the team’s availability for the iteration and just counting stories.  Oh, and by the way, once we can avoid doing estimation during iteration planning, we get back a significant amount of time for creating even more product value per iteration.  Of course, all of this reduction in estimation is really just continuous improvement doing its job!
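Once stories are roughly uniform in size, throughput (stories finished per iteration) stands in for story points entirely. A minimal sketch, with invented throughput and availability figures:

```python
# When stories are roughly uniform in size, counting stories replaces
# story-point estimation. All numbers are hypothetical.

recent_throughput = [9, 11, 10, 10]  # stories completed in recent iterations
availability = 0.9                   # fraction of full team capacity

average = sum(recent_throughput) / len(recent_throughput)  # 10.0 stories
commitment = int(average * availability)                   # 9 stories

print(f"Pull about {commitment} stories into the iteration.")
```

Planning collapses to checking the calendar and counting, and the time formerly spent estimating goes back into building product.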


Estimation, like beauty, is something we are drawn to.  We are attracted to the idea of estimation and feel that if we somehow possessed a great way to be comfortable and good at it, our lives would be better.  But, at least for most of us, we have to be careful not to fall prey to its deception.  Our lives are immensely better with things like openness and honesty, not crass momentary illusions of what seems beautiful.  In the product development world, at least, those values help us achieve the goals that really matter.

Thandie Newton as Stella in Guy Ritchie’s 2008 film “RocknRolla” (http://www.imdb.com/title/tt1032755/). “Beauty is a cruel mistress.”

Want additional actionable tidbits that can help you improve your agile practices? Sign up for our weekly ‘Agile Eats’ email, with “bite-sized” tips and techniques from our coaches…they’re too good not to share.