A while ago I finally had the chance to do the marshmallow challenge. I learned a few things, but not the things I expected to learn. I feel it might be worthwhile to share these learnings.
The marshmallow challenge is a team-building exercise. Each team gets a yard of rope, a yard of tape, a bunch of uncooked spaghetti noodles and one marshmallow. With just these materials each team has to build a structure holding the marshmallow as high as possible, in 18 minutes.
Our team consisted of three people. Each team would do the marshmallow challenge twice, with a retrospective in between so that the learnings of the first round might be applied in the second round.
I had read about the challenge on the internet and knew that young children performed best at it, because they simply started doing instead of trying to come up with a detailed plan beforehand. So, during the first round I suggested a simple approach to start off with and then just dived right in.
One other person on the team went along with my idea, but the remaining team member had a different idea. I mostly ignored his suggestion and stuck with my initial strategy, because I didn't want to lose time arguing.
Obviously, this was a mistake. The team member tuned out and stopped cooperating, wistfully looking at the work of the other teams, who did implement his strategy or something similar. Those teams failed miserably, but so did we: we had to scramble in the final minute to get the marshmallow just a few inches high. At least we had a result; some teams had nothing at all. Small consolation.
During our retrospective we concluded that whatever strategy we would choose, we all had to agree to it, even if that meant spending a few minutes getting to that agreement. I acknowledged that I should be more open to other people’s ideas and the other team member agreed to be more cooperative even when their idea wasn’t fully adopted.
Lesson learned #1: before embarking on a mission, make sure to have buy-in from all team members on the approach you will take. If this takes some time, then so be it. You can’t skip this step.
We also discussed how we would approach the technical side of building the structure in the second round. We agreed that we would try an incremental approach. We would first build a single story with the marshmallow on top of it and focus on making it sturdy, then add the next story underneath the first one, and so forth. We could stop when time was running out, or when we felt adding another story would be too risky, or if we ran out of materials to build another story. Assuming we would finish at least one story, we would have a result.
This time we were asked to estimate the height of our marshmallow for round 2. Adhering to the idea of empiricism we estimated the result to be the same as in the previous round.
As we were able to apply the learnings of the previous round we did much better this time. We finished two stories and were able to more than double our result from the previous round. We finished a few minutes early and decided not to risk adding another story, even though this structure was a lot sturdier than the one from the previous round.
The fact that we decided to use an incremental strategy had a few interesting effects. If you have an effective way of creating valuable increments within a small timeframe then there is no need to time box it. Indeed, it might even be wasteful to do so. However, if you cannot create a valuable increment within a small timeframe then time boxing might be a way to get to the point that you can. Once you reach that point, however, it might be more effective to ditch these training wheels.
Lesson learned #2: time boxing becomes obsolete when using an effective incremental approach.
Also, the estimating we did in between rounds seemed silly in retrospect. We knew the materials we could use and the maximum amount of time we had to do the job. What did it matter what we had estimated? Providing an estimate did not make us more efficient or provide value in some way. Exceeding or falling short of the estimate did not define success, only the final result mattered.
Using an incremental strategy we were able to achieve the best possible result in the allotted time. It also allowed us to mitigate risk as we were more confident that we would have a result when time ran out. We could continually evaluate whether it might be worth it to add another story to the structure.
Lesson learned #3: estimating becomes obsolete when using an effective incremental approach.
Summary: the marshmallow challenge is a fun and quick way of learning to work together on a task you have never performed before.
Main take-aways:
- Take the time to ensure buy-in from all team members on the approach to take
- Use an incremental approach to produce results quickly and mitigate risk
- An effective incremental approach makes time boxing unnecessary
- An effective incremental approach makes estimating less useful
René Wiersma is an architect and Scrum Master at New Nexus Mobile. His team uses practices from Agile, Lean, Scrum, XP, DevOps and Kanban. René blogs about his real world experiences.
Wednesday, September 5, 2018
Thursday, December 28, 2017
Story Slicing: A Real Life Example
This is a real life example of how we split a large user story, sometimes called an epic, into manageable bits of less than two days. By doing so we can amplify learning, get feedback early and reduce risk.
This functionality will be part of a platform we built for a customer. With this platform consumers can order products from affiliate webshops through an app.
It consists of an app (iOS & Android), a management web portal, various databases and several back-end services accessible through an API.
This is an existing platform, so a lot of infrastructure is already there. It means we have to be careful not to break existing things.
Currently, the payment of an order is not done in the app. Affiliate webshops have to work out payment with the customer for themselves when an order comes in.
Understandably, for webshops this is a hurdle to join the platform. Also, consumers expect to be able to pay for their orders right away in the app and do not always want to pay on delivery. For these reasons enabling payment in the app has become the #1 feature on the backlog.
Our customer has done the legwork choosing a Payment Service Provider (PSP). We have worked with other PSPs before, but we haven't worked with this particular one. We find it hard to gauge how much work the whole story will be and what challenges we may encounter.
Spike
The first thing we do is define a technical spike to analyse the possibilities and restrictions of this specific PSP. We spend a day or so playing around with their API in a sandbox environment and reading their online documentation.
We are now fairly confident we can realize the features as asked for by our client and do so in a reasonable timeframe.
We have also learned that there are two major parts necessary to implement the whole payment story:
- Allow webshops to register with the PSP as customers of our platform
- Allow consumers to start payment of their order in the app
The order in which these two things have to be built is obvious. We can't let consumers pay in the app if we can't transfer the money to the webshops, so #1 has to be implemented first. Along the way we will probably learn things which will be useful when implementing in-app payment.
Register a Webshop as PSP Customer
To register webshops with the PSP we have to implement a tab in our website where an affiliate webshop can:
- Start the registration process. This will open up a web page on the PSP's hosted environment.
- See the status of registration, when started but not successfully finished. This might be "In Process" or "Failed". Each status has a specific explanatory text and icon
- See their PSP customer reference number and bank account number, when registered
- Click on a link to the Terms and Conditions
- See a succinct help text
- See a Success dialog when registration is completed successfully
- Automatically and periodically refresh the screen when a payment is "In Process" to check for any updates (notification of registration is asynchronous and might take a few minutes to finish)
- Let the webshop change their bank account number registered with the PSP
We decide to focus on #1 and #3, because those steps are necessary to register webshops as PSP customers. Seeing the results will allow us to verify whether registration has succeeded. The other parts of the story can be deferred until later.
The registration may return a number of results, such as Success, Failed, Cancelled, Success & Abort, Pending, etc. For the first story we decide to focus solely on dealing with the Success result.
So, in summary, we distill the first story to building a tab in our portal where an affiliate webshop can:
- Start the registration process and open a web page in the PSP's hosted environment
- See the resulting customer reference number as returned by the PSP and the bank account with which they registered
This is a so-called tracer bullet story. All components necessary for the completion of the larger story will be touched, but in the most marginal way possible. If there is any trouble connecting to the PSP, if implementing the flow of a payment result proves to be difficult, or if there turns out to be something we didn't think about, we would rather find out sooner than later.
Tasks
The following components have to be built for this first story:
1) A table in the database that holds:
- Webshop Id
- Customer Reference number as returned by the PSP
- Bank Account number
- Payment Registration Status
2) An endpoint to retrieve the Payment Registration Status of a Webshop
3) An endpoint to Start Payment Registration
4) An endpoint to receive a Payment Registration Notification from the PSP
5) An endpoint to Get Payment Registration Info
6) A tab page that depending on the Payment Registration Status shows:
- A partial webview for starting a Payment Registration
- A partial webview for showing Payment Registration Info
We figure each of these tasks should take less than two days to complete. They are also atomic, meaning they can be built and deployed without interfering with the rest of the application, except for #6 which has to be behind a feature flag if we want to deploy to production without this feature enabled.
It makes sense to build the components in the order in which they are summarised, but it is not strictly necessary. Several of these components can be developed in parallel, although that does mean close communication between the individuals or pairs working on them to determine the interface between these components.
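The post doesn't include any code, but the shape of tasks 1, 2 and 4 can be sketched in a few lines. The schema, function names and status values below are hypothetical (the platform's actual stack isn't mentioned); this is just a minimal illustration, in Python with SQLite, of how thin each slice can be while still touching the whole flow.

```python
import sqlite3

# Task 1: a table holding the registration data (hypothetical schema).
SCHEMA = """
CREATE TABLE payment_registration (
    webshop_id         INTEGER PRIMARY KEY,
    customer_reference TEXT,     -- as returned by the PSP
    bank_account       TEXT,
    status             TEXT NOT NULL DEFAULT 'NotStarted'
)
"""

def get_registration_status(conn, webshop_id):
    """Task 2: retrieve the Payment Registration Status of a webshop."""
    row = conn.execute(
        "SELECT status FROM payment_registration WHERE webshop_id = ?",
        (webshop_id,),
    ).fetchone()
    return row[0] if row else "NotStarted"

def handle_registration_notification(conn, webshop_id, customer_reference, status):
    """Task 4: process an asynchronous notification from the PSP.

    The first story slice only really deals with the Success result;
    any other status is simply stored for later slices to handle.
    """
    conn.execute(
        """INSERT INTO payment_registration (webshop_id, customer_reference, status)
           VALUES (?, ?, ?)
           ON CONFLICT(webshop_id) DO UPDATE
           SET customer_reference = excluded.customer_reference,
               status = excluded.status""",
        (webshop_id, customer_reference, status),
    )

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
handle_registration_notification(conn, 42, "CUST-001", "Success")
print(get_registration_status(conn, 42))   # Success
print(get_registration_status(conn, 99))   # NotStarted
```

Each function maps to one deployable task, which is what makes parallel development feasible once the interfaces are agreed on.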
Aftermath
Building this story first allowed us to learn about the challenges and opportunities working with the API of this particular PSP.
One thing we discovered, for example, was that we didn't actually need to store the Bank Account number and the Status with the Registration; we could retrieve them from the PSP with a simple call when needed.
We also learned that we wanted the pages on the hosted environment of the PSP to be in the same language as the app, a feature we hadn't thought of beforehand.
Another boon was that, while working on the story, we could ask some specific questions of the PSP's support desk, opening up a channel of communication with them. They informed us of some features that would be released shortly that could be useful to us, which we otherwise wouldn't have known about.
Summary
In this post I showed a real-life example of how to split up a large user story (epic) to make it manageable, reduce risk and amplify learning.
First, we split up the user story by user type: webshops and consumers. Then we singled out a single "tracer bullet" flow to implement first, leaving the other flows for later.
For this first story slice, we left out user interactions, such as help texts and dialogs, that will be useful to have at some point, but that we do not need for now.
Finally, we chopped up the remaining functionality into technical tasks that are each less than two days work and can be developed separately and in parallel with each other.
Wednesday, November 15, 2017
A Review of Clean Architecture
Robert C. Martin's ("Uncle Bob") Clean Code was a real eye opener for me when I read it a few years ago. I learned many things from Clean Code and it made me a better programmer.
Recently I've become increasingly interested in software architecture. So when I heard Clean Architecture was coming out, I ordered it immediately.
As I was poring over the Table of Contents in preparation for this review I caught myself thinking: "Did I read all that in just a few days?!" That really is a testament to Uncle Bob's ability to make such a book a breeze to read. It is very accessible, sprinkled with personal anecdotes and a bit of humor.
Due to the nature of the subject it is somewhat less readily applicable to my everyday work than Clean Code is. Still, it is a very practical book, a few more theoretical chapters notwithstanding. Certainly, the next time I start a new project I will apply a Clean Architecture.
The book starts out by defining what architecture is and isn't. Martin makes the point that a good architecture should make it easy to change the code when requirements change.
What follows is a nice overview of the history of software engineering and how programming evolved, including a summary of the three programming paradigms: functional, object oriented and structured programming.
When the book dives into the meat and potatoes of clean architecture it is really good. Uncle Bob covers several architectural principles, including the SOLID principles which are just as applicable to architecture as to code. Various chapters are devoted to how and where to draw boundary lines in the code, decoupling and keeping options open.
While reading I kept thinking "yes, yes, I agree!". The book validates many of the ideas I have about software architecture, like abstracting databases, frameworks and other third party components and relegating them to the fringes of your program, rather than putting them front and center. As Martin puts it: the database (or framework, or the web) is a detail.
A case study and an example of the architecture of a simple game show how to apply Clean Architecture in real life situations.
The final chapter, written by Simon Brown, deals with architecture in a bit more detail and presents some options on how to divide code across architectural layers.
An appendix allows Uncle Bob to tell some anecdotes from his early career (up until the early 1990s) and how certain successes and failures shaped his ideas about architecture. While sometimes going off on a tangent, I found it really interesting, and sometimes painfully recognizable, to read.
The cover promises an afterword by Jason Gorman, but I wasn't able to find it. Is it me, or is this an omission?
Conclusion
Don't expect a book that tells you which frameworks and database systems to use. Clean Architecture is not about details like how to use Azure Services or Entity Framework 6. On the contrary, this book focuses on timeless architecture that is resilient to change.
I enjoyed reading it and can recommend Clean Architecture to anyone with an interest in software architecture.
Saturday, October 1, 2016
Stuffing Envelopes: The Power of Small Batches
Working in small batches is a Lean principle. That's counter-intuitive for many people: the idea that large batches are more efficient is deeply ingrained in modern-day society. Doesn't overhead increase when working with small batches?
Here is a group exercise to demonstrate the power of small batches.
Each participant receives the following materials:
* six letter sized papers
* six envelopes
* a pen
Also, there's a list of six names which everyone can see.
Each participant has to write a name from the list on a piece of paper, fold it, put it in the envelope, seal the envelope, and finally write the name on the envelope.
They should do this for all six names on the list.
Divide the participants into two groups. The participants in one group follow the Large Batch Method. Participants in the other group follow the Small Batch Method.
Each participant of the Large Batch Method group first has to write all six names on the six papers, then fold all six papers, then put the papers into the envelopes, then seal all the envelopes, and finally write the names on them. For many people, this way of working intuitively seems the most efficient.
Each participant of the Small Batch Method group writes a name on a piece of paper, folds it, puts it in an envelope, seals the envelope and writes the name on it. Then does the same for the next name on the list.
I also ask the participants to secretly estimate how long they think it will take them to complete this exercise.
Every time I have done this exercise a participant of the Small Batch Method is the fastest. On average the Small Batch Method is about 20% faster. This never fails to surprise people.
Also, most people tend to underestimate the time they need to complete this exercise, sometimes by as much as 100%.
Obviously there are parallels with software development. The following are some of the observations that are typically made:
* Small Batch Method participants eventually settle into a rhythm. The Large Batch Method people will often get stressed out at the end in a mad dash to get things finished. In software development people can get very stressed in the last few weeks of a Big Bang release, whereas if the software is being built and rolled out incrementally, doing a release is just business as usual.
* What if it turned out that the folded papers didn't fit into the envelopes and needed an extra fold? The Small Batch people would have found this out sooner in the process than the Large Batch people. In software development it makes sense to deliver so called "Tracer Bullets" early in the process to find any technical bottlenecks, problems with usability, "wrong" requirements, and so on.
* The first finished envelope (value for customer) was delivered far sooner by the Small Batch Method. In software development a product that includes only a couple of useful features of the complete intended scope can still deliver value. Why wait with releasing useful software until "everything" is "finished"?
* The order of the envelopes might get messed up with the Large Batch Method. The wrong name might get written on an envelope, resulting in a wrongly addressed, and unhappy, customer. In software development doing a huge release with many features will risk having more bugs, as there are more things to test and integrate and keep track of. If something goes wrong during a release to production it will be significantly harder to track down what caused the bug than it is with a smaller release.
* Apparently, we can be way off estimating how long something as simple as stuffing envelopes will take. But we will probably be more accurate the second time we have to estimate our work. The problem in software development is that we never estimate the exact same thing. The requirements are never exactly the same, the domain is different, the technology is different and there may be different people working on the software.
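The observation above about the first finished envelope can be made concrete with a little arithmetic. The model below is a toy: it assumes each of the five steps takes the same unit of time and ignores the housekeeping overhead, so the total times come out equal. What it does show is that the gap in first delivery is structural, not an artifact of sloppy execution.

```python
STEPS = 5        # write name on paper, fold, insert, seal, address envelope
ENVELOPES = 6
T = 1.0          # assumed time per step, in arbitrary units

# Small batches: finish each envelope completely before starting the next.
small_first = STEPS * T                    # first envelope done after 5 steps
small_last = ENVELOPES * STEPS * T         # last one after 30 steps

# Large batches: do each step for all six envelopes before the next step.
# No envelope is complete until the final addressing pass.
large_first = (STEPS - 1) * ENVELOPES * T + T   # 24 steps of batching, then 1
large_last = ENVELOPES * STEPS * T              # also 30 steps in this toy model

print(small_first, large_first)   # 5.0 25.0
print(small_last, large_last)     # 30.0 30.0
```

Even with zero overhead, the first unit of value arrives five times sooner with small batches; the measured ~20% difference in total time comes on top of that, from the housekeeping the frictionless model leaves out.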
This exercise was inspired by a video by Ron Pereira, in which he demonstrates an even simpler version with ten envelopes. Closer study showed that the Large Batch Method loses time mostly to "housekeeping": keeping the stacks tidy, shuffling papers around the table, and so on.
Eric Ries also has an excellent article on Small Batches, which I came across while doing research for this article. It nicely sums up all the advantages of Small Batches.
The first time I tried this exercise I used ten envelopes per person, but then the exercise took a little too long. With six envelopes it is a bit shorter and it gets the point across just as well.
If you have a larger group, you might want to try this with teams of two or three people working together. The Small Batch people would work as individuals, the Large Batch people would work as an assembly line. Give each group 20 seconds before the exercise to discuss how to divide up the work. You probably want to use ten or twelve envelopes when doing this exercise with groups, otherwise it is a bit too quick.
Now go forth and use the power of small batches!
Here is a group exercise to demonstrate the power of small batches.
Each participant receives the following materials:
* six letter sized papers
* six envelopes
* a pen
Also, there's a list of six names which everyone can see.
Each participant has to write a name from the list on a piece of paper, fold it, put it in the envelope, seal the envelope, and finally write the name on the envelope.
They should do this for all six names on the list.
Divide the participants into two groups. The participants in one group follow the Large Batch Method. Participants in the other group follow the Small Batch Method.
Each participant of the Large Batch Method first has to write all six names on the six envelopes, then fold all six papers, then put the papers into the envelopes, then seal all envelopes, and finally write the names on them. For many people, this way of working intuitively seems the most efficient.
Each participant of the Small Batch Method group writes a name on a piece of paper, folds it, puts it in an envelope, seals the envelope and writes the name on it. Then does the same for the next name on the list.
I also ask the participants to secretly estimate how long they think it will take them to complete this exercise.
Every time I have done this exercise a participant of the Small Batch Method is the fastest. On average the Small Batch Method is about 20% faster. This never fails to surprise people.
Also, most people tend to underestimate the time they need to complete this exercise, sometimes by as much as 100%.
Obviously there are parallels with software development. The following are some of the observations that are typically made:
* Small Batch Method participants eventually settle into a rhythm. The Large Batch Method people will often get stressed out at the end in a mad dash to get things finished. In software development people can get very stressed in the last few weeks of a Big Bang release, whereas if the software is being built and rolled out incrementally, doing a release is just business as usual.
* What if it turned out that the folded papers didn't fit into the envelopes and needed an extra fold? The Small Batch people would have found this out sooner in the process than the Large Batch people. In software development it makes sense to deliver so called "Tracer Bullets" early in the process to find any technical bottlenecks, problems with usability, "wrong" requirements, and so on.
* The first finished envelope (value for customer) was delivered far sooner by the Small Batch Method. In software development a product that includes only a couple of useful features of the complete intended scope can still deliver value. Why wait with releasing useful software until "everything" is "finished"?
* The order of the envelopes might get messed up with the Large Batch Method. The wrong name might get written on an envelope, resulting in a wrongly addressed, and unhappy, customer. In software development doing a huge release with many features will risk having more bugs, as there are more things to test and integrate and keep track of. If something goes wrong during a release to production it will be significantly harder to track down what caused the bug than it is with a smaller release.
* Apparently, we can be way off estimating how long something as simple as stuffing envelopes will take. But we will probably be more accurate the second time we have to estimate our work. The problem in software development is that we never estimate the exact same thing. The requirements are never exactly the same, the domain is different, the technology is different and there may be different people working on the software.
This exercise was inspired by a video by Ron Pereira. In this video Ron demonstrates an even simpler version of the exercise, with ten envelopes. His analysis shows that the Large Batch Method loses time mostly on "housekeeping": keeping the stacks tidy, shuffling papers around the table, and so on.
Eric Ries also has an excellent article on Small Batches, which I came across while doing research for this article. It nicely sums up all the advantages of Small Batches.
The first time I tried this exercise I used ten envelopes per person, but then the exercise took a little too long. With six envelopes it is a bit shorter and it gets the point across just as well.
If you have a larger group, you might want to try this with teams of two or three people working together. The Small Batch people would work as individuals, the Large Batch people would work as an assembly line. Give each group 20 seconds before the exercise to discuss how to divide up the work. You probably want to use ten or twelve envelopes when doing this exercise with groups, otherwise it is a bit too quick.
Now go forth and use the power of small batches!
Friday, June 24, 2016
Estimate Using Story Points
Imagine you are standing in front of a pile of bricks. Your job is to estimate how long it will take to make neat 3x3 stacks of these bricks. Sounds easy, right? But if you think about it there are quite a few variables involved and assumptions you have to make.
The bricks you can see, on the outside of the pile, all look fairly similar in shape, but perhaps there are some awkwardly shaped bricks in the middle of the pile that you can't see which would make the work take longer.
Variables and assumptions
If the work is done outdoors, the weather could be an influence. If it is a hot day, or a rainy one, that could make the work take longer. Or perhaps the work is to be done at night. Is there any lighting you can use then? What about other tools? Who will perform this work? Will it be a single person, or a team? Are they experienced in this type of work? If it's a team, have they worked together before on a similar job? Are they in good shape, do they have the stamina to perform this kind of physical duty?
If you have never estimated this kind of work before, your first estimate will likely be a very rough guess with a lot of padding on both sides. If you have done this before, you can probably make a better estimate.
Estimating software
Estimating software is much like this. There are often many unknowns and it is hard to come up with a good estimate. This is where story points come in.
Suppose there are multiple piles of bricks that have to be stacked. While you still don't know how much time each single pile will take to stack, you can assign a number of "story points" to each one. You could assign one point to the smallest pile, two points to each pile that looks about twice the size of the smallest one, etcetera. You are sizing the chunks of work relative to each other.
Then, you would do the work on the most important, most valuable pile of bricks and measure the time it takes to perform the job. Suppose you estimated your first pile as two story points and it took you eight hours. If the remaining piles add up to, say, ten story points, that suggests roughly another forty hours of work to stack them all.
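The arithmetic above can be sketched in a few lines, using the hypothetical numbers from the example:

```python
# Hypothetical numbers from the example: the first pile was estimated
# at 2 story points and took 8 hours to stack.
points_done = 2
hours_spent = 8
hours_per_point = hours_spent / points_done  # 4.0 hours per point

# The remaining piles add up to 10 story points.
remaining_points = 10
remaining_hours = remaining_points * hours_per_point
print(remaining_hours)  # 40.0
```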
However, don't stop there. Every time a pile of bricks is finished, update the measurement to refine your estimate of when all the work will be done. This measured rate of work is called velocity.
Velocity
In Scrum, at the end of each sprint the number of story points for finished stories is counted. Over time this will give a decent indication of how much work can be performed during a sprint. Usually it will take three or four sprints for the velocity to become meaningful. This can then be used to estimate how long the remainder of a release or project will approximately take.
Even when using story points you cannot perfectly predict beforehand how long a release or project will take. However, it will allow you to adjust your plans during the release or project based on actual data, rather than on wishful thinking.
Thursday, June 16, 2016
Sprint length
If you are the Scrum Master for a new team, one of the first things you have to figure out is the sprint length. As with many things, my advice would be: take it to the team!
Set up a meeting with the whole team to determine the sprint length. This doesn't have to be an hours-long meeting, just a quick check-in on what everybody thinks.
Remember though, that the Scrum Master is the one ultimately responsible for choosing the sprint length. Sometimes teams choose a sprint length that is too long. If you think this is the case then go for a shorter length.
If you are really clueless about the sprint length, try two weeks. That works for a lot of teams around the world.
At the end of the sprint use the retrospective to discuss how the chosen sprint length worked out. Inspect and, if necessary, adapt!
Timebox
The sprint is a timebox. Once you start a sprint, do not change the sprint length. If you run out of work before the sprint ends, add more work. The Product Owner decides the priority of items to work on; the team decides how much extra work they can fit into the sprint.
Make use of burndown charts and work with small user stories so that you can see early in the sprint whether all of the planned work can be achieved. If it turns out you have planned too much work for the sprint, use this as an opportunity to inspect and adapt. The team should decide what they think they can still finish in this sprint. The Product Owner decides what is most important. Use the Sprint Goal to guide these decisions.
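A burndown chart gives this early warning by comparing the points still open against an ideal straight line from the sprint's total down to zero. A minimal sketch of that check, with made-up numbers:

```python
def ideal_remaining(total_points, sprint_days, day):
    """Points that should remain after `day` days on a straight-line burndown."""
    return total_points * (1 - day / sprint_days)

# Hypothetical sprint: 40 points planned over a 10-day sprint.
total_points = 40
sprint_days = 10

# Hypothetical actual state on day 4: 30 points still open,
# while the ideal line says 24 should remain.
actual_remaining = 30
if actual_remaining > ideal_remaining(total_points, sprint_days, 4):
    print("Behind the ideal line; time to inspect and adapt.")
```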
Organisation
If the length of a sprint is decided at the organisational level, I would try and find out what the reasons behind it are. Imposing the length of the sprint on the team hampers the team's ability to self-organize and I would view this as a possible impediment. There might be good reasons for it though, such as avoiding overlapping meetings with other teams, or synchronizing release schedules.
Steady rhythm
Don't change your sprint length based on the amount of work. So, don't have one sprint be three weeks, the next one four weeks, the one after that two weeks, etc. The amount of work should be chosen based on the sprint length, not the other way around. Ideally, you want the sprint length to be the same for every sprint. This allows the team to settle into a steady rhythm.
Having the same length, sprint after sprint, makes things more predictable. Team members and stakeholders will know when each Scrum meeting is without having to check their appointments. Also, it will be easier to determine velocity if the sprint length is the same every time, which makes planning releases easier.
Our sprint length
Our very first sprint was three weeks long. That felt a little long to us, so we changed it to two weeks for the next sprint and it has been so ever since. We discussed changing it to one week sprints a few times, but so far, two weeks still works best for us.
How long are your sprints, and how does that work for you?
Friday, June 10, 2016
Our Kanban board
We are a colocated team. Our Kanban board is in our team room in view of everybody. It's a great tool for collaboration and promoting transparency. We keep a virtual copy of our board in Jira up-to-date which our customers may use to view progress. Our Kanban board, and the process it reflects, has gone through quite a few changes since we started our Agile journey.
At first, our board was simply divided into three columns: To Do, Doing and Done. We used to fill the To Do column at the start of a sprint with stickies containing User Stories and accompanying tasks. The tasks were defined during the sprint planning. Later we found it easier and quicker to define the tasks while in front of our physical board rather than in the meeting room.
We quit estimating tasks long ago. Our stories are small enough to be able to see progress during the sprint. Our progress is drawn on a burndown chart. Most of our user stories are around three story points in size. We do about fifteen of them during our two-week sprints. Usually one or two stories are finished per day.
We had a Review column for a while. Stories in that column had to be reviewed by the Product Owner before they could be marked as "Done". We introduced this column because too often our Product Owner would not be satisfied with a story during the Sprint Review. The Review column forced us to evaluate a story with the PO much sooner. Eventually, the quality of our work rose and we decided we could do without this column.
We added a To Deliver column to make visible which stories are ready for delivery to a test or production environment. Recently we upgraded our Definition of Done, and now every story has to be delivered to a test environment to be considered "Done", creating a continuous delivery flow.
The testing task can and should start before the development of a story is complete. However, it made sense to our team to have a Test column. A story in that column is delivered to our development environment and is ready to be tested by whomever is available at that moment to perform the testing task.
The most recent addition to our board is the Prepared column. Stories in this column have been OK'd by both the team and our customer. These stories have an estimate and are ready to be developed. We don't plan a sprint in advance anymore. Instead we pull the top Prepared item from the backlog as soon as we finish an item. In effect, this makes our process currently more Kanban than Scrum.
Every column on our board has a maximum number of allowed stories, another Kanban influence. This forces team members to focus on finishing work, rather than starting new work, and it promotes collaborating on items to get them done.
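The effect of a WIP limit can be sketched as a column that refuses new work once full; the column name and limit below are illustrative, not our actual board:

```python
class Column:
    """A Kanban board column with a work-in-progress (WIP) limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.stories = []

    def add(self, story):
        # Refuse to start new work once the column is at its limit.
        if len(self.stories) >= self.wip_limit:
            raise RuntimeError(
                f"{self.name} is at its WIP limit ({self.wip_limit}); "
                "finish something before starting new work")
        self.stories.append(story)

doing = Column("Doing", wip_limit=2)
doing.add("Story A")
doing.add("Story B")
# doing.add("Story C")  # would raise: finish work before starting more
```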
And there you have it, the current state of our board!