Last May, at the Lean SSC 2011 conference, I shared with some colleagues my thoughts on what I believe is one of the fundamental elements of knowledge work systems: cycles of evaluation of assumptions. Alan Shalloway recently took this conversation forward, connecting these thoughts to other ideas about risk and flow in this blog post. So I have decided to write about the topic in more detail, so we can explore it as a way of understanding part of what the Lean-Kanban and Agile communities do and why it matters so much.
 
Why we should care about assumptions
 
Assumptions are at the heart of all knowledge work activities. In the software development field, you can find them everywhere, from coding to portfolio management, from the smallest technical decisions to the largest product design decisions. The evaluation of an assumption closes a cycle of learning and defines the boundaries of systemic feedback loops. Several things happen when you evaluate one assumption or a set of assumptions:
 
(1) Learning: The evaluation of an assumption holds a very important opportunity for learning. By comparing what you have with what you were expecting, you learn about what you are capable of. You learn to separate facts from intuition. You learn how to work with small expectations and how to quickly adjust bigger expectations as the smaller ones derived from them are evaluated.
(2) Making Progress Towards a Goal in a Terrain of Uncertainty: In knowledge-based environments, all you have is uncertainty. You don't know how much time a task will take. You don't know how one task is going to affect others until all the assumptions around it are validated. So progress in those environments is not defined by putting another brick in the wall, but by a recurrent evaluation of assumptions that emerges as you move forward from one uncertainty to another.
(3) Establishing Relations of Causality/Correlation: This insight came from Don Reinertsen, when we were discussing the topic with other colleagues at LSSC11. As the time to evaluate your assumptions increases, your ability to reason about the result in terms of causality or correlation decreases. These are fundamental mental capabilities for finding opportunities to learn and improve.
(4) Reducing Risk: Alan Shalloway was the first to relate this to the work of Bob Charette. By controlling the number of assumptions in a given system, you are actually narrowing the range of possible outcomes and the probability of getting an unexpected result. I believe that if you design mechanisms to control the volume of non-evaluated assumptions in a system, you create the potential to mitigate certain types of risk without much case-by-case upfront analysis.
 
The awareness of being surrounded by assumptions is a property of an environment tolerant of failure. As Alan Shalloway stated in the post mentioned earlier: “Sometimes we are so sure of our assumptions that when we prove them wrong, we call the action doing so an error”. Errors only happen when you have been proven wrong in conduct or judgement by actual facts. It is a mistake to encourage a culture of assigning guilt and comparing results with expectations when all you have are assumptions. It is important to learn how to separate unexpected results arising from fact-based reality from unexpected results arising from non-evaluated assumptions.
 
Time is the dominant parameter
 
Acknowledging the existence of assumptions in our knowledge work environments is the first step. However, just knowing that they are relevant is not enough. We need to know “how” to use this awareness in order to design better work systems.
 
My breakthrough on the “how” came when I first read about John Boyd's work a few months ago. His theory is centered on the OODA loop concept (Observe, Orient, Decide, Act). According to Boyd, every active entity, whether an individual, a group of individuals or an organization, uses the OODA loop to react to events. They first observe the environment or situation they are involved in, then they orient by analyzing that data against their own background of knowledge and experience, then they decide, and finally they act on that decision.
 
However, instead of concentrating on the quality of each part of the loop, which cycles like PDCA and LAMBDA seem to describe well, Boyd came up with something really interesting. For Boyd, what matters most is not how good you are at observing, orienting, deciding or acting, but how fast you go through these stages in order to start a new cycle. Although most of his work is oriented toward environments dominated by competition, I think it can also be applied to collaborative environments, where the fundamental principle is adaptation and responsiveness to new information obtained from an environment in constant change.
 
As Boyd's theory suggests, the speed of iteration is more important than being right when iterating. If the evaluation of assumptions marks the boundary of feedback loops in knowledge work environments, maybe the same rule applies to how we iterate in our activities. In other words, being fast at evaluating our assumptions is better than trying to force assumptions to be right in the end. So, to be effective in knowledge work environments, we need to constantly try to minimize the time between the creation or emergence of an assumption and its evaluation.
 
To represent that concept, I've come up with an expression that captures the whole idea in a few symbols:
 
? – min(t) → !

Where:
 ? represents the creation of an assumption 
 !  represents the evaluation of that assumption 
min(t) represents the focus on minimizing the time between the creation and evaluation of the assumption
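 
To make the notation concrete, here is a minimal sketch in Python (the Assumption class and its fields are my own illustration, not part of the original idea): an assumption is recorded when it is created, marked when it is evaluated, and min(t) is the drive to keep the interval between those two moments as short as possible.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Minimal sketch of "? – min(t) → !": track when an assumption is
# created (?) and when it is evaluated (!), and watch the interval.

@dataclass
class Assumption:
    description: str
    created_at: datetime = field(default_factory=datetime.now)
    evaluated_at: Optional[datetime] = None

    def evaluate(self):
        self.evaluated_at = datetime.now()

    def lifetime(self):
        end = self.evaluated_at or datetime.now()
        return end - self.created_at  # the quantity min(t) tries to shrink


a = Assumption("this refactoring does not change behavior")  # ?
# ... work happens here ...
a.evaluate()                                                  # !
print(a.lifetime())
```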
 
How programmers are minimizing the lifetime of assumptions
 
If you were a programmer in the mid-1980s, you probably remember writing programs on punched cards. Look at how Wikipedia describes a punched card:
 
A punched card is a flexible write-once medium that encodes, most commonly, 80 characters of data. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard. A typing error generally necessitated repunching an entire card. A single character typo could be corrected by duplicating the card up to the error column, typing the correct character and then duplicating the rest of the card. (...) These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers.
 
At that time, programmers had to write their code on these cards and then submit them to an operator who ran the programs on IBM mainframe computers. It could take days to get the results of those runs! And after days of waiting, the result could be something like: Error on line 32.
 
I started my own career as a software developer in the late '80s. Before being a real programmer, I used to type in several pages of code printed in Brazilian magazines, just to run them and realize, probably because of my own mistakes while typing the code, that the program didn't work. It could take several hours to copy the whole program line by line. The result of running the program could be just a blank screen or a small square blinking somewhere on the screen.
 
We all know that the practice of writing several lines of code before you can evaluate them is not as good as evaluating a few lines, or even each line, as you type. We also know that writing a really simple test that evaluates a single behavior of your code is considered better practice than writing complex tests with complicated scenarios and multiple inputs and outputs. The point is not only about simplicity versus complexity, but also about the speed at which you can evaluate the whole set of assumptions that you carry when you write those lines.
 
Nowadays, compilers and interpreters are getting quicker at showing typos and syntax mistakes. On some software development platforms, compilation or interpretation of the code occurs in the background, and I get instantaneous feedback on my errors. If I'm using techniques like Test Driven Development (TDD) or Behavior Driven Development (BDD), I also get instantaneous feedback not only when I mistype something, but also on whether the system is doing what it should. Ruby developers, for instance, have shrunk this feedback loop even more by having their tests automatically executed in the background for every saved change in their source files. Developers have been applying intensely assumption-oriented semantics to specify behaviors, so you see words like “should” dominating their vocabulary, which is a clear indication of awareness of the assumptions inherent in each piece of code.
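 
As a hedged sketch of that “should” style (written in Python with a pytest-style test; the Cart example is hypothetical and not taken from any particular codebase), a single small test creates one assumption and evaluates it seconds after the code is written:

```python
# A minimal sketch of assumption-oriented testing (pytest-style).
# The Cart class and its behavior are hypothetical, used only to
# illustrate how one test evaluates exactly one assumption.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, quantity=1):
        self.items.append((price, quantity))

    def total(self):
        return sum(price * quantity for price, quantity in self.items)


def test_total_should_sum_price_times_quantity():
    # Assumption created: the total is the sum of price * quantity.
    cart = Cart()
    cart.add(price=10.0, quantity=2)
    cart.add(price=5.0)
    # Assumption evaluated moments after it was written.
    assert cart.total() == 25.0
```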
 
From punched cards and long lists of non-validated code to real-time validation of syntax and purpose, what all of those evolutionary leaps have in common is the exponential growth in our ability to reduce the lifetime of assumptions held in the form of non-validated lines of code. Every line of code carries its own assumptions. It assumes that it is written in the right syntax, that it is the right instruction, that it is in the right place, and that it is the best way to do what should be done at that point.
 
Peer Review
 
Programmers have to deal with several challenges when they work together in teams. Style and syntax standards, code design, usability guidelines, test practices and other rules should be aligned among all developers on the same team. There are so many details and different situations when you are coding that just writing rules down and expecting everyone to follow them is not a good option.
 
When software development becomes a repetitive activity, quality goes down. That happens because code is a complex artifact. You need to embrace its inherent variability to prevent the uncontrolled expansion of that complexity. Documented “standards” don’t help you deal with that variability.
 
It is common to use a review cycle to help developers align themselves around a common structured model. When a feature is completed by a developer, another developer must review what was done. He is supposed to point out what doesn’t fit the team's working structure, what could be better, and what could be removed because it is not necessary. As time goes on, the expectation is the formation of a shared mental model that everyone follows by agreement, not by rules in a document.
 
The problem is that there are several ways to do this. One possible way involves one developer handing off the implemented feature to another. That is not a good solution in most cases. Besides the natural issues with handoffs, whoever receives the task will probably queue it until he finishes his current task. Since there is usually no visibility and there are no limits controlling what is happening, the volume of work waiting for review can grow very quickly.
 
Another way to do code reviews involves pulling someone in to review the code together with the original developer as soon as the feature is implemented. This is a much better solution, since it eliminates the handoff, prevents the formation of queues and promotes collaboration. However, it still carries downsides such as interruption, context switching, the rush to finish the work, and the lack of involvement of the reviewer from the beginning.
 
We can think about different solutions to the review problem, but developers have discovered a much more efficient way to handle it, called pair programming. In this model, two developers work together the whole time until the feature is finished. Every action or decision is immediately reviewed by the peer. There is no handoff, no context switching, no rush and no lack of involvement anymore.
 
However, pair programming still leaves room for some dysfunctions, since the feature absorbs the mental model of only two people on the team. A technique called promiscuous pairing addresses this problem by having developers swap pairs at a certain frequency, so all of them work a little on every feature.
 
As explained in the context of coding, here we have the same pattern of evolutionary leaps. When a developer starts coding, assumptions about that code are created. Is it following the style and rules used by the team? Is it doing what it was supposed to do? Are there design issues or flaws that should be solved before finishing? These and other assumptions live in the system until they are evaluated by another team member. The review process is just another way to evaluate them. Again, time is what matters. In the first review model, the handoff extends this time quite a lot. In the second model, when a developer pulls another in to do the review, the time to evaluation is reduced to the time needed to develop the feature, which is better. Pair programming brings this time close to zero, but it still leaves some assumptions in the system. With promiscuous pairing, your evaluation embraces more assumptions and the time remains minimal.
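 
A rough back-of-the-envelope sketch makes the difference visible (all numbers below are invented for illustration, not measurements):

```python
# Hypothetical comparison of how long a coding assumption stays
# non-evaluated under each review model.

HOURS_TO_DEVELOP_FEATURE = 16        # assumed effort per feature
HOURS_WAITING_IN_REVIEW_QUEUE = 40   # assumed handoff queue time

review_models = {
    # assumption created when coding starts, evaluated when review happens
    "handoff review": HOURS_TO_DEVELOP_FEATURE + HOURS_WAITING_IN_REVIEW_QUEUE,
    "pull a reviewer at the end": HOURS_TO_DEVELOP_FEATURE,
    "pair programming": 0.1,  # reviewed as each decision is made
}

for model, lifetime in review_models.items():
    print(f"{model}: assumptions live ~{lifetime} hours before evaluation")
```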
 
How the management system is affected by assumptions
 
The accumulation of assumptions is probably one of the biggest sources of dysfunction in value-add activities in knowledge work environments. It is no different when we analyze management activities.
 
The Agile Manifesto probably marked the emergence of a new paradigm in the software development world. In terms of management, the Agile movement, and later the Lean-Kanban movement in software development, both promote a different approach. The predictable world of command and control was left behind, opening space for a new understanding that values people over processes and collaboration over contract negotiation. Managers are now supposed to be systems thinkers instead of resource controllers. While this transformation was taking place, dramatic changes occurred in terms of assumption evaluation.
 
When someone is doing a task for others, there are implicit assumptions made throughout the whole process. Is the right direction being followed? Is it being done correctly? Is it the right thing to do at this time? Many times we hear the same story: weeks or months after a manager assigns tasks to developers, usually in large batches, they discover that nothing works as expected, that the code has no quality, and that developers are taking the blame and leaving the project.
 
Most managers would say this happens because of a lack of control, and that managers should command and control more tightly. But should they? Should they try to impose predictability on a naturally unpredictable environment? I don’t think that is the answer. In those types of projects, as time goes on, new assumptions are constantly being generated because you are essentially dealing with partial information all the time. As the project moves forward, you start to collect pieces of information that were missing when you first started. Evaluating assumptions helps you confirm or refute decisions that were made in the first place under that lack of information. This applies not only to developers writing code, but to management practices as well.
 
Let’s analyze some management practices from this perspective. Nowadays, more and more development teams are holding 15-minute stand-up meetings on a daily basis. They do that to improve communication and alignment towards a shared goal. What they probably don’t realize is that in every one of these meetings they are evaluating some important assumptions: that everybody on the team has been working in the right direction, that everybody will work in the right direction tomorrow, and that the team knows about the problems, so members can help each other overcome problematic situations. Do you agree that if we evaluate these assumptions every week instead of every day, we are just allowing the accumulation of critical assumptions that could jeopardize our short-term goals? Those meetings are usually kept really simple, short and frequent, because the focus is not on doing the meeting right, but on doing it frequently enough that those assumptions don't live without validation for a long period of time.
 
Why short iterations?
 
Scrum practitioners have also been applying the concept of short evaluation of assumptions to estimation. They assign “points” (Story Points) to units of work. The number of points for each unit increases as the complexity or uncertainty of what is expected from that work increases. The project is broken into timeboxed iterations. When an iteration starts, the team defines how many points each unit of work is worth and the total number of points they can handle for the whole iteration. What a lot of Scrum teams don’t realize is that this is just an assumption. The total amount of estimated points is an assumption about future throughput, not a commitment of any kind. The number of points for each unit of work is also just an assumption about the understanding of the problem to be solved, not a commitment to fit the work to its estimate.
 
Now you can probably understand why short iterations are frequently better and why teams working this way are considered “more mature” than teams working with longer iterations. If you really want to know more about your capacity, the time it takes to evaluate the created assumption is crucial. Taking more time to do this evaluation ends up increasing the number of live assumptions in your system, making your knowledge about the project's capacity less precise.
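 
A small illustration of that point (the planned velocity and the delivered numbers are entirely made up): the forecast is the assumption, each iteration's actual throughput is its evaluation, and shorter iterations simply produce more evaluations per unit of calendar time.

```python
# Hypothetical example: a team forecasts 30 points per two-week
# iteration. Shorter iterations evaluate that assumption more often.

planned_points_per_iteration = 30          # the assumption (forecast)
delivered = [22, 26, 31, 24, 28, 27]       # made-up actuals, one per iteration

for i, actual in enumerate(delivered, start=1):
    error = actual - planned_points_per_iteration
    print(f"Iteration {i}: planned {planned_points_per_iteration}, "
          f"delivered {actual} (error {error:+d})")

# With two-week iterations, six evaluations take 12 weeks.
# With four-week iterations, the same calendar time gives only three
# evaluations, so the throughput assumption lives twice as long.
```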
 
The Sprint Review is another Scrum ceremony that is useful for controlling assumptions in software projects. As the team produces working software, non-evaluated assumptions about that work accumulate. Is each feature what the customer was expecting? As development starts, those assumptions come to life for every feature. Only when the feature is presented to the customer can you evaluate them. The Sprint Review marks the point where that evaluation happens. Since the cadence of this ceremony is aligned with the length of an iteration, shorter iterations reduce the lifetime of those assumptions in projects running in a timeboxed fashion.
 
Continuous Deployment
 
After a feature is reviewed, another assumption appears: will the new feature need any adjustment to work well? The implemented feature needs to be deployed so the customer can actually use it. To publish a new release of the product, it is common to wait for a volume of new features sufficient to form a cohesive business operation. The typical scale of this wait is months. That is quite a large timeframe to keep non-validated assumptions alive.
 
The side effect of this delay is known as “release stabilization”. Feedback from users comes in such large quantities right after the release is published that the team needs to stop developing new features for weeks until all of these requests are addressed. In other cases, it is not the volume of requests that hurts the team, but the lack of any request at all! Some features are released and nobody uses them. The cost of the complexity they add to the code is really underestimated when this happens. And it happens a lot.
 
Some web companies with millions of users are showing the way to solve this problem. The time to reach the user is minimized by a model in which new features are progressively deployed to clusters of users in cycles of evaluation. A new feature is first deployed to a small group of users. The use of the feature is evaluated. If people don’t use it, the feature is simply killed. On the other hand, if the new feature generates good feedback and traction, they deploy it to the next group of users. Additionally, the first group points out flaws or misbehaviors in the product, and these are corrected before the next group receives the feature. New analysis is made, feedback is collected, and the cycle goes on until all users have the feature.
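 
A minimal sketch of that progressive rollout idea (the cohort percentages, the hashing scheme and the user IDs are my own assumptions, not a description of any specific company's system):

```python
import hashlib

# Progressive rollout sketch: each user is deterministically assigned a
# bucket from 0-99; the feature is visible only to users whose bucket
# falls under the current rollout percentage.

ROLLOUT_STAGES = [1, 5, 25, 100]  # hypothetical cohort sizes, in percent

def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def feature_enabled(user_id: str, rollout_percent: int) -> bool:
    return bucket(user_id) < rollout_percent

# Start with the smallest cohort; expand (or kill) the feature based on
# the evaluation of usage and feedback from the previous cohort.
current_stage = 0
if feature_enabled("user-42", ROLLOUT_STAGES[current_stage]):
    print("show new feature")
else:
    print("show old behavior")
```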
 
Again, the main difference between the two models is that, in the second one, you recognize that what you did is based on assumptions. The users are the ones who are going to evaluate them, not the Product Owner or any other proxy role in your project. What makes you effective is how quickly you can get to the customer to make this evaluation.
 
WIP Constraints
 
The emergence of the Kanban change management method brought new tools for dealing with assumptions. Flow, visibility and WIP constraints are at the essence of this approach. These elements contribute enormously to reducing the lifetime of assumptions.
 
Let’s go back to the customer review problem addressed by Scrum teams with the Sprint Review ceremony. We already know that teams working in short timeboxed iterations are better at controlling the lifetime of assumptions than teams working in longer iterations. But we can do better than that. What would happen if we built into our work system the ability to evaluate those assumptions as soon as the minimal conditions to do so exist, instead of at a pre-defined, scheduled time?
 
With a WIP constraint you can control the accumulation of existing assumptions in the feature review process. The PO gets involved as soon as this accumulation reaches a defined level. You can argue that more pressure on this cadence is not possible because of PO availability or for some other reason. Fair enough. In that case, you can’t reduce the volume of assumptions in your system because you are subordinated to a constraint you can’t remove. But don’t lose the opportunity to make your system better at controlling assumptions just because you want to stick to a method prescription. Scrum is a good set of practices, but it is not the best set of practices, simply because, as the Cynefin framework suggests, in complex environments there is no such thing as best practices; there are only emergent practices. So there is no reason not to design your process in a more efficient way.
 
Handoffs
 
WIP constraints have the potential to control the lifetime of assumptions in several ways. A particularly useful scenario is when you are dealing with handoffs. Despite being a necessary evil in some cases, handoffs should be avoided most of the time. When you continuously transfer work between people, teams or organizational units, you create very critical assumptions: Has the work arrived in good condition? Did the receiver get what they were expecting? Will whoever has to respond do so in the expected timeframe? Will rework be unnecessary? Is enough information being passed along with the work?
 
WIP constraints can minimize the impact of handoffs by forcing these assumptions to be evaluated before new ones are created. Only when everything is fine to move on are you allowed to start new work. This practice has the potential to transform any knowledge work environment because, among other reasons, you are controlling the number of assumptions living in your work system at any given time. You do that by forcing their evaluation in a dramatically shorter cycle.
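 
As a minimal sketch (the column name and the limit of 3 are arbitrary assumptions), a WIP limit simply refuses to let new work enter a stage until something already there has been evaluated and pulled out:

```python
# Sketch of a WIP-limited "Review" column on a kanban board.
# The limit forces the assumptions carried by items waiting for
# review to be evaluated before new work can enter the stage.

class WipLimitExceeded(Exception):
    pass

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        if len(self.items) >= self.wip_limit:
            # Signal to the team: evaluate what is already here first.
            raise WipLimitExceeded(f"'{self.name}' is at its limit of {self.wip_limit}")
        self.items.append(item)

    def complete(self, item):
        self.items.remove(item)  # assumption evaluated, capacity freed


review = Column("Review", wip_limit=3)
for feature in ["login", "search", "checkout"]:
    review.pull(feature)

try:
    review.pull("reports")           # blocked: review the queue first
except WipLimitExceeded as blocked:
    print(blocked)

review.complete("login")             # an assumption gets evaluated...
review.pull("reports")               # ...and only then does new work start
```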
 
Visibility
 
Visibility is another Kanban element with a systemic effect on assumptions. One fundamental aspect of a work model is how much time people need to react to the new information that arrives every day. When the work model is visible, people on the team start to share a mental map where the conditions of and information about each important piece of work can be signaled.
 
In knowledge work, it is quite common to see important information hidden in e-mail inboxes, phone calls or memos. If everyone's work is projected onto a single map, you move the assignment model from a per-individual basis to a system basis. With a single map, people have a place to pull work based on explicit policies and to discuss strategies for handling their challenges every day.
 
What visibility does is reveal one of the most important assumptions in knowledge work: that the system is working fine. When aligned with other Kanban principles, visibility empowers people to discuss a better distribution of effort based on availability and importance, instead of just familiarity or personal assignment. They can anticipate problems and swarm on them as soon as they emerge. They can work as a real team.
 
Customer Assumptions
 
As mentioned at the beginning of this text, you can observe a software development work system in terms of how it deals with the accumulation of assumptions over time. This can be done by observing how people deal with their operational tasks, how managers manage, and how team practices can be organized to guarantee frequent evaluation of assumptions. But you can go further.
 
The recent Lean Startup movement is teaching us a valuable lesson. This community is learning how to minimize the risk of building the wrong product by evaluating assumptions about what customers really want during the development process. They use concepts such as Customer Development, the Business Model Canvas and Minimum Viable Products to empower people with an effective method for doing that.
 
The most common form of product development in the software field involves generating a backlog of features. You then manage progress by comparing the planned features with the features already developed. The problem with this approach is that it carries a large quantity of hidden assumptions about what the customer really needs. A product can take years to develop. After all this time, you may discover that nobody wants to use it, basically because your focus remained on trying to meet scope, budget and schedule.
 
The Lean Startup community is learning to reduce the cycle of discovering customer needs to minimal time and effort. People are now launching product releases in really short cycles. They are reaching the customer even before a real product has been developed. They do this because they know that even the most brilliant idea is formed by a set of assumptions, and these assumptions need to be continuously evaluated before you make a major investment in the wrong product.
 
Basically, what changes is the way you progress in your product development initiative. In this model, instead of moving from one feature to another, you move forward by evaluating one assumption after another. When you don’t get a positive response to your assumption, you pivot. The idea of pivoting makes the approach really strong. When you pivot, you use the new information you have obtained to change the direction of the product, making it compatible with the customer's response. This is quite counter-intuitive, because deviating from the original intention actually makes the product stronger.
 
Here the same concept applies. It is the time and frequency of the evaluation that matters, not how perfect you do it.
 
Trade-off
 
There is a clear trade-off in finding the optimal time to evaluate assumptions for each feedback cycle. Value-add activities seem to offer room to approach near-zero time, depending on the available tools or techniques. However, coordination activities, like meetings, handoffs and reviews, carry a transaction cost that makes the constant reduction of time less useful.
 
As an example, stand-up meetings are a coordination activity that can be really effective on a weekly basis, a daily basis or even twice a day, depending on the context. A team of managers can hold them with project team leaders on a weekly basis; more often than that will not generate any value, because not enough assumptions accumulate over that period to be evaluated. For a software development team, twice a day is too much, while a maintenance team living through a period of crisis can easily do it twice a day.
 
However, in all those cases there is an optimal limit to reducing the time between assumption evaluations. The transaction cost of the activity helps us define that limit. When you reach it, a good way to go further is to stop thinking about how to reduce the time and start thinking about how to replace the practice entirely. In that case, you act differently while keeping the purpose of the practice in the context of the feedback cycle. The evolution from developer reviews to pair programming is a good example of this type of change. The purpose was sustained, but the means was modified.
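 
To make that trade-off concrete, here is a rough cost model (the formula and every number in it are my own illustrative assumptions): the weekly cost is the transaction cost of the meeting times its frequency, plus a risk cost proportional to the average age of non-evaluated assumptions. Under these made-up numbers, daily turns out to be the sweet spot and twice a day already costs more than it returns.

```python
# Illustrative trade-off model (all numbers are hypothetical).
# cost(frequency) = meetings per week * transaction cost
#                 + risk proportional to average assumption age

TRANSACTION_COST = 1.0       # "cost units" per meeting (time, interruption)
RISK_PER_DAY_OF_AGE = 6.0    # cost of carrying assumptions one extra day

def weekly_cost(meetings_per_week: int) -> float:
    # With n meetings over 5 working days, an assumption waits on
    # average half of the interval between meetings.
    avg_assumption_age_days = (5 / meetings_per_week) / 2
    return (meetings_per_week * TRANSACTION_COST
            + avg_assumption_age_days * RISK_PER_DAY_OF_AGE)

for frequency in [1, 2, 5, 10]:   # weekly, twice a week, daily, twice a day
    print(f"{frequency} meetings/week -> total cost {weekly_cost(frequency):.1f}")
```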
 
Takeaways
 
The evaluation of assumptions is a thinking tool. It can be used to analyze a system from a new perspective. It can be useful for most knowledge workers, including developers, managers, product designers and other IT specialists. Each one, in their own problem space, can use this tool not only to make better decisions, but also to improve their current know-how, designing the process to be more responsive and self-regulating.
 
If you are somehow involved in a Lean, Kanban or Agile initiative, stop for a while and think about the feedback loops you have in your environment. When are they going to close? What assumptions are you carrying at the moment? When will they be evaluated? What are the risks of letting non-validated assumptions accumulate in the system as time goes on?
 
Processes don’t evaluate assumptions; people do. Processes have feedback loops, but it is people who close those loops. So when the Lean, Kanban or Agile communities tell you that “it's all about the people”, pay attention. In the end, it is all about how your culture empowers people to make good decisions, whatever their level of influence.
