A little more than a year ago I posted something about "The information flow loop". At that point I wrote: "If the system depends on the effectiveness of people's decisions, the whole system can be improved if you put the right information at the right place and at the right time for them." Does that still hold when people face the most important decision they will ever have to make? Judge for yourself.

Posted by: alissonvale
Posted on: 8/12/2013 at 7:54 AM
Categories: systems
 
 
One of my current initiatives is to write about the studies and experiences I have been having lately with knowledge work systems. It started as a blog post, then became several posts, and is now a mini-book that I want to evolve as my motivation for the subject grows over time. So here is the link to a draft of the first few pages: Click here to download. I'm really proud to announce this at AgileBrazil, which is happening in my birth city (Brasilia) at this moment. I will be glad to receive any feedback or suggestions of topics to cover by e-mail. My e-mail address --> contact at alissonvale dot com.

 
 

Most of what happens to a firm is a consequence of what it does, not of what is done to it. - Russell Ackoff


It is really common to hear people in organizations blaming and complaining about external factors that supposedly make their knowledge work environment problematic. In his book Re-Creating the Corporation, Russell Ackoff describes how systemic forces provoke common organizational behaviors: "Our lot is due more to what we do than to what is done to us."

Posted by: alissonvale
Posted on: 3/1/2013 at 11:07 AM
Categories: Management
 
 
Introducing "retrospectives" for a new team that I'm working with: I ask: What problems did you have since last time I had been here? Result of discussion: 7 different problems listed on the board. What actions would we take to make them disappear from now on? They suggested 5 actions. I agree with all and suggest another 4. All of them were just small adjustments on process or behavior. So far: 9 actions listed on the board. I ask: - Good?- Good!  - Can you do it? - Sure. Easy! - Do you think we can check the effectiveness of these changes next month and discuss about new problems that will arrive until there? - Sure. - Do you think you can repeat this process every month even if I'm not here to lead? (silence) -> hmmm, understandable... change a habit is a different level of commitment! - Ok. Let's think this way: Try to establish a reality where you do this every month. How would be the work process 10 months from now? Better or much better? - Much better! - Nice! Now try to create a parallel dimension where you don't do this. We just work and work and work... no improvement process whatsoever. How would be the work process 10 months from now? The same, worse or much worse? - The same? - No! Much worse! - Why? - Feedback cycles. Systems, and work systems are not different from any regular system, are composed by feedback cycles. They are the way systems run themselves. When you don't improve over time you are in a self-reinforcing positive feedback loop. Problems increase the level of insatisfaction on team members. Displeased with how things are going, people care less about their work, which generates more problems, and by consequence, more insatisfaction. The cycle keeps reinforcing the level of insatisfaction and the work environment tends to get much worse. When you improve over time you are in a self-correcting negative feedback loop. Actions to improve restore and control the level of insatisfaction. Good results motivate people to keep improving and to care more about the work. This is also called "pursuit of excellence". Lesson: There is no stable work place. A work environment that doesn't improve, gets worst.

Posted by: alissonvale
Posted on: 2/18/2013 at 4:03 PM
Categories: Management | Projetos Ágeis
 
 
Every work environment has a well-defined "work model". It is common to hear people say things like "there is no process at all here; we just do it". Even in these cases, where there isn't an explicit "process", a work model can be extracted and used as a starting point for improvement. Your work model is just the way you do the work at a certain point in time. It is dynamic, because it evolves positively or negatively over time regardless of your will. However, it evolves slowly most of the time, which allows you to extract a physical representation that can help you see what it looks like at a certain point, what the possible improvement points to pursue are, and, mainly, what your current options for taking action are, so you can deliver not only things right, but also the right things.

Identifying and understanding your work model, visualizing it, and talking about it frequently is a great way to increase awareness of the game you are playing. Visualization enhances your ability to analyze your options and choose one considering the whole context, not only your current pressures. A good way to stabilize your work model is to create a physical representation that makes those options explicit. Physical and electronic kanban boards are just perfect for this job.

Another important aspect of work model stabilization is the establishment of a regular cadence of value delivery. This cadence is essential not only to make explicit your ability to reach your purpose regularly, but also to make explicit whether you are getting better or worse at this game as time goes on. When we talk about cadence, we need to talk about time. Time is always the dominant parameter for knowledge work or creative work. The question is never "What should be done to reach the goal?" but always "How much time do I have to reach the goal?". For this type of work, the iron triangle is obsolete. We commit to an objective and use the time we have to generate a scope whose purpose is to reach the goal. The scope is always output, never input.

The frequency of value delivery (your process "heartbeat") is where you can evaluate the most important feedback loop of your workplace. A known frequency of value delivery implies predictability, and a predictable delivery process is an important indication of a healthy process. Every work process can be stabilized using these few concepts: visualize it, talk about it, and pursue cadence. Yes, it could get better just by doing that, but you shouldn't do it only for the sake of making it better; you should do it because you need to put your car on the right road. Few things are as wasteful as making your car move faster on the wrong road.
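As a small illustration of the cadence idea (my own sketch, not from the original post; the dates are invented), your delivery "heartbeat" can be measured from nothing more than the dates on which value was delivered; regular intervals suggest a predictable process, erratic ones suggest instability:

```python
from datetime import date
from statistics import mean, pstdev

# Hypothetical delivery dates; in practice, pull these from your own records.
deliveries = [date(2012, 7, 2), date(2012, 7, 16), date(2012, 7, 31),
              date(2012, 8, 13), date(2012, 8, 28)]

intervals = [(b - a).days for a, b in zip(deliveries, deliveries[1:])]
avg = mean(intervals)
cv = pstdev(intervals) / avg  # coefficient of variation: lower means a steadier heartbeat

print(f"average interval: {avg:.1f} days, variability (CV): {cv:.2f}")
```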

Posted by: alissonvale
Posted on: 10/14/2012 at 2:48 AM
Categories: Management | thoughts
 
 
While I was flying back to Brazil after attending another great Kanban Retreat event in Mayrhofen, I had the opportunity to read the latest book published by David Anderson: Lessons in Agile Management - On the Road to Kanban, which is really great, by the way, and deserves a post of its own. One of the topics particularly got my attention: the use of the Lean Manufacturing "inventory" metaphor in the software development context (pages 291-294). In four pages, David Anderson really got me thinking as he clarifies how tricky it is to map concepts across contexts as different as manufacturing and knowledge work. He talks about the differences between an abstraction, an analogy, and a metaphor, and how this is relevant when we are mapping concepts and establishing language: "Lean is an abstraction rather than a metaphor".

Lean is an abstraction, and inventory regulation concerns are part of a concrete implementation of Lean in the manufacturing context. That was my insight from his text. In that context, inventory is something that should be reduced as much as possible in order to smooth flow, reduce cost, and eliminate waste. Inventory is an accounting term. It represents the entire stock of a business, including materials, components, work in process, and finished products. From a systemic perspective, it is something you accumulate or buffer in a system in order to guarantee supply to a downstream part of your value chain. It is key for any continuous work process not to let its operators starve for work because of a lack of raw material. So, to avoid that, it is common to produce more and keep the excess for later use, so downstream operators always have what they need to work on.

Besides the fact that costs are much higher when you have to deal with a large quantity of material on the factory floor, what I think is just as important is the economic perspective. In the manufacturing context, as raw materials are transformed and flow through the system, they become work in process, which can be a real source of inconvenience. Work in process usually has intrinsic value in this situation. As money has already been committed to the work in process, it becomes urgent to finish it, even if it is no longer necessary or its value diminishes in comparison to other work items. Even when this work in process becomes finished goods, you are not free from these types of problems unless you sell them immediately. Do you want to have a high quantity of items ready to be sold but no customers to buy them? Think about the simple economic law of supply and demand. So maybe now you know why we sometimes see those huge marketing promotions selling at prices much lower than usual: it is just an excess of items with no customers to buy them.

So it turns out that, for manufacturing processes, inventory in excess is a bad thing. Overproduction causes a lot of harm and systemic dysfunction. It makes the process less efficient and increases costs, since it requires a lot of effort to manage. Therefore, reducing inventory is key for Lean Manufacturing.

What about knowledge work? It doesn't work like that. The model is quite different. The raw material for knowledge workers is information: not documentation, requirements, designs, nor any other artifact or discrete material, just information in its abstract form.
Don't confuse information with the media it comes in. Work in process is generated when someone starts applying knowledge and experience to produce value from the information provided, regardless of the format it comes in. The money committed to it usually comes in the form of work hours invested by a knowledgeable person. As there is no intrinsic value whatsoever in partially done work, and as this type of work perishes really quickly, the economic pressure to finish it depends much more on a perception of value than on any intrinsic value. So it can be discarded much more easily, despite all the psychological distress this provokes. Raise your hand if you have never been involved in an expensive abandoned project with no real value or economic results delivered at the end.

Finished goods ready to be consumed but not consumed yet are, in knowledge work, just a different manifestation of work in process. If you write a book and don't publish it, this doesn't affect its intrinsic value. It can still be worth millions, or nothing, depending on someone else's perception of value. In knowledge work, the value of partial work is not intrinsic to the good, as it is in manufacturing. So the economics of work in process is different in knowledge work. We are dealing with a really different set of concepts, and we need to assume a different perspective.

The Lean abstraction doesn't require inventory as a concept in order to be applied in contexts other than manufacturing. Inventory control is more of a Lean Manufacturing implementation aspect. However, we still have the same root problem: dysfunction caused by systemic buffers that grow as time goes on and jeopardize the effectiveness and efficiency of the environment. We need to take care of this in our knowledge work systems, just as people do in manufacturing systems when they regulate their inventories. Let's focus on the root problem then: what type of accumulation, despite being necessary, can be really harmful if it is not controlled over time in knowledge work environments?

One could say: work items. We can't let work items in progress accumulate in the system. That's why we use kanban to limit them and to not allow new work to be started before old work has been released. Definitely true! But is it enough? Work items in progress are just a manifestation of something much broader and with much more impact in knowledge work environments: assumptions.

Knowledge work requires assumptions

We need to assume a lot of things, all the time, to accomplish our goals in this type of work. It is fine to move forward with partial information, because where there is no information, we replace it with some kind of assumption to map the terrain. Our brain does that all the time. When you start to work on a given item, a lot of assumptions start to take their place in your system. Is it really what the customer needs? Is it going to take the time you expect to finish it? Is it going to generate rework? Does whoever is taking care of it really know what to do? And so on... Uncertainty rules here! And, to deal with it, we *assume*. It turns out that the solution for uncertainty is not to cheat the system by trying to create certainty upfront. Uncertainty lies at the heart of knowledge work reality.
What we need to do is continuously transform the assumptions we created at the beginning, to deal with the uncertainty, into facts, and learn by doing it, since this learning will be useful in the formation of the assumptions that come up later on. Real problems start to emerge when we don't care about the continuous replacement of assumptions by facts. When the environment is full of assumptions, we make blind decisions. We support ourselves on our perception of reality, not on reality itself. We kill an assumption and replace it with facts when we evaluate it. This evaluation creates a fact. A discovered fact is just a manifestation of learning, and the generation of learning is probably the most important aspect of a knowledge work process.

A few examples of assumptions and the related practices that start a feedback cycle:

- We are going to produce 32 points in this next iteration. (estimation)
- The code in this method is doing what it should do. (unit tests)
- This part of the code is designed according to our agreements, standards, and style. (code standards and collective ownership)
- We are doing what should be done to achieve our next short-term goal. (iteration planning)

A few examples of facts and the related practices that end the feedback cycle:

- We have produced 26 points in this iteration. (iteration review)
- The code in this method was tested and is doing what it should do. (unit tests executed on continuous integration)
- This part of the code was checked and follows our standards and style. (code review, pair programming)
- We have just shared what we have been doing and we know that we are now going in the right direction. (next day's stand-up meeting)

This list becomes endless the closer you look at the environment from that perspective. Think about what lies between assumptions and facts: time!

Shorter feedback cycles are what Lean offers

As this time increases, so does the probability of jeopardizing the effort and the decisions being made, consciously or unconsciously, based on those assumptions. It increases risk! So, for a knowledge work environment, I propose that we focus on identifying and differentiating assumptions from facts, and then look for ways to evaluate those assumptions as quickly as possible in order to transform them into facts. The theory behind this concept is fully explained in this post: Cycles of Assumptions Evaluation.

Interestingly, we already have a really good tool to optimize this process of assumption evaluation. We just need to design short feedback cycles into our systems, or make the current ones shorter. That's what Lean systems for knowledge work are about, from a system design perspective: make feedback cycles shorter.

Assumptions are not the new "inventory" for knowledge work. That would not be a good metaphor, actually. We are not counting assumptions and measuring their accumulation. We are not limiting them. However, they can fill a theoretical gap in our Lean implementations for knowledge work systems, one that can open a whole field of study and opportunities to find leverage points in such systems. They play the same role that "inventory reduction" plays in manufacturing systems because, in the end, the root problems they address are really close to being the same.
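As a toy illustration of this idea (my own sketch, not from the original post; the assumption texts are invented), imagine recording each assumption when it is created and when it is evaluated, and watching the age of the open ones, much as a factory watches its inventory levels:

```python
from datetime import datetime

class AssumptionLedger:
    """Toy ledger: tracks how long assumptions live before becoming facts."""

    def __init__(self):
        self.open = {}       # assumption text -> creation time
        self.lifetimes = []  # seconds between creation and evaluation

    def create(self, text: str) -> None:
        self.open[text] = datetime.now()

    def evaluate(self, text: str) -> None:
        created = self.open.pop(text)
        self.lifetimes.append((datetime.now() - created).total_seconds())

ledger = AssumptionLedger()
ledger.create("this feature is what the customer needs")
ledger.create("this method does what it should")
ledger.evaluate("this method does what it should")  # a unit test run closes this one
print(f"still open: {len(ledger.open)}, evaluated: {len(ledger.lifetimes)}")
```

The point is not the bookkeeping itself, but the habit it represents: making the open assumptions visible and driving their age towards zero.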

Posted by: alissonvale
Posted on: 6/26/2012 at 7:12 AM
 
 
Last May at the Lean SSC 2011 conference I introduced to some colleagues my thoughts on what I think is one of the fundamental elements in knowledge work systems: cycles of evaluation of assumptions. Alan Shalloway has recently taken this conversation forward, connecting these thoughts to other ideas related to risk and flow in this blog post. So I have decided to write about the topic in more detail, so we can explore it in terms of understanding part of what the Lean-Kanban and Agile communities do and why it matters so much.

Why we should care about assumptions

Assumptions are at the heart of all knowledge work activities. In the software development field, you can find them everywhere, from coding to portfolio management, from the smallest technical decisions to the largest product design decisions. The evaluation of an assumption ends a cycle of learning and defines the boundaries around systemic feedback loops. Several things happen when you evaluate one or a set of assumptions:

(1) Learning: The evaluation of an assumption holds a very important opportunity for learning. By comparing what you have with what you were expecting, you learn about what you are capable of. You learn to separate facts from intuition. You learn how to work with small expectations and how to quickly change bigger expectations as the smaller ones derived from them are evaluated.

(2) Making progress towards a goal in a terrain of uncertainty: In knowledge-based environments, all you have is uncertainty. You don't know how much time a task takes. You don't know how one task is going to affect others until all the assumptions around it are validated. So progress in those environments is not defined by putting another brick in the wall, but by the recurrent evaluation of assumptions that emerges as you move forward from one uncertainty to another.

(3) Establishing relations of causality/correlation: This insight came from Don Reinertsen, when we were discussing the topic with other colleagues at LSSC11. As the time to evaluate your assumptions increases, your potential to rationalize the result in terms of causality or correlation is reduced. These are fundamental mental capabilities for finding opportunities to learn and improve.

(4) Reducing risk: Alan Shalloway was the first to relate this to the work of Bob Charette. By controlling the number of assumptions in a given system, you are actually shortening the range of options and the probability of getting an unexpected result. I believe that if you design mechanisms to control the volume of non-evaluated assumptions in a given system, you create the potential to mitigate certain types of risk without too much case-by-case upfront analysis.

The awareness of being surrounded by assumptions is a property of an environment tolerant of failure. As Alan Shalloway stated in the post mentioned earlier: "Sometimes we are so sure of our assumptions that when we prove them wrong, we call the action doing so an error". Errors only happen when you have been proven wrong in conduct or judgement by actual facts. It is a mistake to encourage a culture of assigning guilt and comparing results with expectations when all you have are assumptions. It is important to learn to separate unexpected results produced by a fact-based reality from unexpected results obtained from non-evaluated assumptions.
Time is the dominant parameter

Acknowledging the existence of assumptions in our knowledge work environments is the first step. However, just knowing that this is relevant is not enough. We need to know "how" to use this awareness in order to design better work systems.

My breakthrough about the "how" came when I first read about John Boyd's work a few months ago. His theory is centered on the OODA Loop concept (Observe, Orient, Decide, Act). According to Boyd, every active entity, be it an individual, a group of individuals, or an organization, uses the OODA loop to react to events. They first observe the environment or situation they are involved in, then they analyze the data, applying their own background of knowledge and experience to it, and make a decision, to finally act based on that information.

However, instead of concentrating on the quality of each part of the loop, which cycles like PDCA and LAMBDA seem to describe well, Boyd came up with something really interesting. For Boyd, what matters most is not how good you are at observing, orienting, deciding, or acting, but how fast you go through these stages in order to start a new cycle. Although most of his work is oriented to environments dominated by competition, I think it can also be applied to collaborative environments, where the fundamental principle is adaptation and responsiveness to new information obtained from an environment in constant change.

As Boyd's theory suggests, the speed of iteration is more important than being right when iterating. If the evaluation of assumptions marks the boundary of feedback loops in knowledge work environments, maybe the same rule applies to how we iterate in our activities. In other words, being fast at evaluating our assumptions is better than trying to force assumptions to be right in the end. So, to be effective in knowledge work environments, we need to constantly try to minimize the time between the creation or emergence of an assumption and its evaluation.

To represent that concept, I've come up with this expression, which captures the whole idea in a few symbols:

? – min(t) → !

Where:
- ? represents the creation of an assumption
- ! represents the evaluation of that assumption
- min(t) represents the focus on minimizing the time between the creation and the evaluation of the assumption

How programmers are minimizing the time-life of assumptions

If you were a programmer in the mid-1980s, you probably remember writing programs on punched cards. Look at how Wikipedia describes a punched card:

A punched card is a flexible write-once medium that encodes, most commonly, 80 characters of data. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard. A typing error generally necessitated repunching an entire card. A single character typo could be corrected by duplicating the card up to the error column, typing the correct character and then duplicating the rest of the card. (...) These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers.

At that time, programmers had to write their code on these cards and then submit them to an operator who ran the program on IBM mainframe computers. It could take days to get the response from the execution of those programs! After days waiting, the result could be something like: Error on line 32.
I started my own career as a software developer in the late 80s. Before being a real programmer, I used to type in several pages of code printed in Brazilian magazines just to run the programs and realize, probably because of my own mistakes while typing, that they didn't work. It could take several hours to copy a whole program line by line. The result of running it could be just a blank screen or a small square blinking somewhere.

We all know that the practice of writing several lines of code before you can evaluate them is not as good as evaluating a few lines, or even each line, as you type. We also know that the practice of writing a really simple test, one that evaluates a single behavior of your code, is considered better practice than writing complex tests with complicated scenarios and multiple inputs and outputs. The point is not only simplicity versus complexity, but also the speed with which you can evaluate the whole set of assumptions you carry when you write those lines.

Nowadays, compilers and interpreters are getting quicker at showing typos and syntax mistakes. In some software development platforms, compilation or interpretation of the code occurs in the background, and I get instantaneous feedback on my errors. If I'm using techniques like Test Driven Development (TDD) or Behavior Driven Development (BDD), I also get instantaneous feedback not only when I mistype something, but also on whether the system is doing what it should. Ruby developers, for instance, have shrunk this feedback loop even more by having their tests automatically executed in the background for every saved change in their source files. Developers have been applying intensely assumption-oriented semantics to specify behavior, so you see words like "should" dominating their vocabulary, which is a clear indication of awareness of the assumption inherent in each piece of code.

From punched cards and long listings of non-validated code to real-time validation of syntax and purpose, what is common to all of these evolutionary leaps is the exponential growth in the ability to reduce the time-life of assumptions in the form of non-validated lines of code. Every line of code carries its own assumptions: that it is written in the right syntax, that it is the right instruction, that it is in the right place, and that it is the best way to do what should be done at that point.
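For instance, here is a minimal sketch (my own example, not from the original post; the function and the expected value are invented) of an assumption written as an executable "should"-style test; running it under a test runner such as pytest evaluates the assumption seconds after the code is written:

```python
# A hypothetical price calculator and the assumption we carry about it,
# expressed as a test that can be evaluated the moment the file is saved.

def discounted_price(price: float, discount_pct: float) -> float:
    return price * (1 - discount_pct / 100)

def test_discounted_price_should_apply_percentage_discount():
    # Assumption: a 50% discount on 200.0 should yield 100.0.
    assert discounted_price(200.0, 50) == 100.0
```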
Peer Review

Programmers have to deal with several challenges when they work together in teams. Style and syntax standards, code design, usability guidelines, test practices, and other rules should be aligned among all developers on the same team. There are so many details and different situations when you are coding that just writing rules down and expecting everyone to follow them is not a good option.

When software development becomes a repetitive activity, quality goes down. That happens because code is a complex artifact. You need to embrace its inherent variability to prevent the uncontrolled expansion of that complexity. Documented "standards" don't help to deal with that variability.

It is common to use a review cycle to help developers align themselves around a common structured model. When a feature is completed by a developer, another developer must review what was done. He is supposed to point out what doesn't fit the work structure of the team, what could be better, and what could be removed because it is not necessary. As time goes on, the expectation is the formation of a shared mental model that everyone follows by agreement, not by rules in a document.

The problem is that there are several ways to do this. One possible way involves one developer handing off the implemented feature to another. That is not a good solution in most cases. Besides the natural issues with handoffs, whoever receives the task will probably enqueue it until he finishes his current one. As there is usually no visibility or limit controlling what is happening, the volume of work waiting for review can increase very quickly.

Another way to do code reviews involves pulling someone in to review the code together with the original developer as soon as the feature is implemented. This is a much better solution, since it eliminates the handoff, prevents the formation of queues, and promotes collaboration. However, it still carries downsides such as interruption, context switching, the rush to finish the work, and the lack of involvement of the reviewer from the beginning.

We can think about different solutions to the review problem, but developers have discovered a much more efficient way to handle it, which is called pair programming. In this model, two developers work together the whole time until the feature is finished. Every action or decision is immediately reviewed by the peer. There is no handoff, no context switching, no rush, no lack of involvement anymore.

However, pair programming still leaves space for some dysfunctions, since the feature absorbs the mental model of only two people on the team. A technique called promiscuous pairing addresses this problem by making developers switch pairs at a certain frequency, so all of them work a little on all features.

As with coding, here we have the same pattern of evolutionary leaps. When a developer starts coding, assumptions about that code are created. Is it following the style and rules used by the team? Is it doing what it was supposed to do? Are there design issues or flaws that should be solved before finishing? These and other assumptions live in the system until they are evaluated by another team member. The review process is just another way to evaluate these assumptions. Again, time is what matters. In the first review model, the handoff extends this time quite a lot. In the second model, where a developer pulls another in to do the review, the evaluation time is reduced to the time to develop the feature, which is better. The pair programming model brings this time to near zero, but it still keeps some assumptions in the system. With promiscuous pairing, the evaluation embraces more assumptions and the time remains minimal.

How the management system is affected by assumptions

The accumulation of assumptions is probably one of the biggest sources of dysfunction in value-add activities in knowledge work environments. It is no different when we analyze management activities.

The Agile Manifesto probably marked the emergence of a new paradigm in the software development world. In terms of management, the Agile movement, and after it the Lean-Kanban movement in software development, both promote a different approach. The predictable world of command and control was left behind, opening space for a new understanding that focuses on people over processes and collaboration over contract negotiation.
Managers are now supposed to be systems thinkers instead of resource controllers. While this transformation was taking place, dramatic changes occurred in terms of assumption evaluation.

When someone is doing a task for others, some implicit assumptions are made during the whole process. Is the right direction being followed? Is it being done correctly? Is it the right thing to do at this time? Many times we hear the same story: weeks or months after a manager assigns tasks to developers, usually in large batches, they discover that nothing works as expected, that the code has no quality, and the developers take the blame and leave the project.

Most managers would say that this happens because of lack of control, and that managers should command and control more tightly. But should they? Should they try to impose predictability on a naturally unpredictable environment? I don't think that is the answer. In those types of projects, as time goes on, new assumptions are constantly generated because you are essentially dealing with partial information all the time. As the project moves forward, you start to collect pieces of information that were lacking when you first started. Evaluating assumptions helps you to confirm or refute decisions that were made in the first place in the absence of that information. This applies not only to developers writing code, but to management practices as well.

Let's analyze some management practices from this perspective. Nowadays, more and more development teams are doing 15-minute stand-up meetings on a daily basis. They do it to improve communication and alignment towards a shared goal. What they probably don't know is that in every one of these meetings they are evaluating some important assumptions: that everybody on the team has been working in the right direction, that everybody is going to work in the right direction tomorrow, and that the team knows about the problems, so members can help each other overcome problematic situations. Do you agree that if we evaluate these assumptions every week, instead of every day, we are just allowing the accumulation of critical assumptions that could jeopardize our short-term goals? Those meetings are usually kept really simple, short, and frequent, because the focus is not on doing the meeting right, but on doing it frequently enough to not allow those assumptions to live without validation for a long period of time.

Why short iterations?

Scrum practitioners have also been applying the concept of short evaluation of assumptions when dealing with estimation. They assign "points" (Story Points) to units of work. The number of points for each unit increases as the complexity or uncertainty of what is expected from that work increases. The project is broken into timeboxed iterations. When an iteration starts, the team defines how many points each unit of work is worth and the total number of points they can handle for the whole iteration. What a lot of Scrum teams don't realize is that this is just an assumption. The total amount of estimated points is an assumption about future throughput, not a commitment whatsoever. The number of points for each unit of work is also just an assumption about the understanding of the problem to be solved, not a commitment to fit the work to its estimate.

Now you can probably understand why short iterations are frequently better and why teams working this way are considered "more mature" than teams working with longer iterations.
If you really want to know more about your capacity, then the time it takes to evaluate the created assumption is crucial. Taking more time to do this evaluation ends up increasing the number of live assumptions in your system, making your knowledge about the capacity of the project less precise.

The Sprint Review is another Scrum ceremony which is useful for controlling assumptions in software projects. As the team produces working software, non-evaluated assumptions about that work accumulate. Is each feature what the customer was expecting? As the development process starts, those assumptions come to life for every feature. Only when the feature is presented to the customer can you evaluate them. The Sprint Review marks the point where the evaluation happens. As the cadence of this ceremony is aligned with the completion of an iteration, shorter iterations will reduce the time-life of those assumptions in projects run in a timeboxed fashion.

Continuous Deploy

After a feature is reviewed, another assumption appears: will the new feature need any adjustment to work well? The implemented feature needs to be deployed so the customer can actually use it. In order to publish a new release of the product, it is common to wait for a certain volume of new features, sufficient to form a cohesive business operation. The average scale of this wait time is months. That is quite a large timeframe in which to keep non-validated assumptions alive.

The side effect of this delay is known as "release stabilization". Feedback from users comes in such large quantities right after the release is published that the team needs to stop the development of new features for weeks until all those requests are addressed. In other cases, it is not the volume of requests that bothers the team, but the lack of any request! Some features are released and nobody uses them. The cost of the complexity added to the code is really underestimated when this happens. Indeed, this happens a lot.

Some web companies with millions of users are showing the path to solving this problem. The time to reach the user is minimized by a model in which new features are progressively deployed to clusters of users in cycles of evaluation. A new feature is first deployed to a small group of users. The use of the feature is evaluated. If people don't use it, the feature is simply killed. On the other hand, if the new feature generates good feedback and traction, they deploy it to the next group of users. Additionally, the first group points out flaws or misbehaviors in the product, which are corrected before the next group receives the feature. New analysis is made, feedback is collected, and the cycle goes on until all users get the feature. Again, the main difference between the two models is that, in the second one, you recognize that what you did is based on assumptions. The users are the ones who are going to evaluate them, not the Product Owner or any other proxy role in your project. What makes you effective is how quickly you can get to the customer to make this evaluation.
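A minimal sketch of that progressive rollout idea (my own illustration, not from the original post; the bucketing scheme is one common way to do it): each user is deterministically mapped to one of 100 buckets, and the feature is enabled for a growing percentage of buckets as each evaluation cycle succeeds:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in one of 100 buckets and enable
    the feature for the first `percent` of them."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Cycle 1: expose the feature to ~5% of users, evaluate, then widen to 20%, 50%...
for user in ["alice", "bob", "carol"]:
    print(user, in_rollout(user, "new-checkout", percent=5))
```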
WIP Constraints

The emergence of the Kanban change management method brought new tools to deal with assumptions. Flow, visibility, and WIP constraints are the essence of this approach, and those elements contribute hugely to reducing the time-life of assumptions.

Let's go back to the customer review problem addressed by Scrum teams with the Sprint Review ceremony. We already know that teams working in short timeboxed iterations are better at controlling the time-life of assumptions than teams working in longer iterations. But we can do better than that. What would happen if we created in our work system the ability to evaluate those assumptions as soon as we have the minimal conditions to do it, instead of doing it at a pre-defined scheduled time?

With a WIP constraint, you can control the accumulation of existing assumptions in the feature review process. The PO will be involved as soon as this accumulation reaches a defined level. You can argue that more pressure on this cadence is not possible because of PO availability or for some other reason. Fair enough. In this case, you can't reduce the volume of assumptions in your system because you are subordinated to a constraint you can't remove. But don't lose the opportunity to make your system better at controlling assumptions just because you want to stick to a method prescription. Scrum is a good set of practices, but it is not the best set of practices, simply because, as the Cynefin framework suggests, in complex environments there is no such thing as best practices; what exist are emergent practices. So there is no reason not to design your process in a more efficient way.

Handoffs

WIP constraints have the potential to control the time-life of assumptions in several ways. A particularly useful scenario is when you are dealing with handoffs. Despite being a necessary evil in some cases, handoffs should be avoided most of the time. When you transfer work between people, teams, or organizational units in a continuous way, you create very critical assumptions: Has the work arrived in good condition? Did the receiver get what he was expecting? Will the one who has to respond do so in the expected timeframe? Will rework be unnecessary? Is enough information about the work being passed along with it?

WIP constraints can minimize the impact of handoffs by forcing these assumptions to be evaluated before new ones are created. When everything is fine to move on, you are allowed to start new work. This practice has the potential to transform any knowledge work environment because, among other reasons, you are controlling the number of assumptions that live in your work system at a given time. You do that by forcing their evaluation in a dramatically shorter cycle, as the sketch below shows.
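Here is a toy sketch of that mechanism (my own illustration, not from the original post): a board column with a WIP limit refuses new work until something in progress has been evaluated and pulled out:

```python
class LimitedColumn:
    """Toy WIP-limited column: starting new work requires finishing old work first."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.in_progress = []

    def start(self, item: str) -> bool:
        if len(self.in_progress) >= self.wip_limit:
            return False  # blocked: evaluate (finish) something before starting more
        self.in_progress.append(item)
        return True

    def finish(self, item: str) -> None:
        self.in_progress.remove(item)  # assumptions evaluated, capacity freed

review = LimitedColumn(wip_limit=2)
print(review.start("feature A"))  # True
print(review.start("feature B"))  # True
print(review.start("feature C"))  # False: the limit forces an evaluation first
review.finish("feature A")
print(review.start("feature C"))  # True
```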
Visibility

Visibility is another Kanban element with a systemic effect on assumptions. One fundamental aspect of work models is how much time people need to react to the new information that arrives every day. When the work model is visible, people on the team start to share a mental map where conditions and information about each important piece of work can be signaled.

In knowledge work, it is quite easy to find important information hidden in e-mail inboxes, phone calls, or memos. If everyone's work is projected onto a single map, you move the assignment model from a per-individual basis to a system basis. With a single map, people have a place from which to pull work based on explicit policies, and a place to discuss strategies for handling their challenges every day.

What visibility does is reveal one of the most important assumptions in knowledge work: that the system is working fine. When aligned with the other Kanban principles, visibility empowers people to discuss a better distribution of effort based on availability and importance, instead of just familiarity or personal assignment. They can anticipate problems and swarm on them as soon as they emerge. They can work as a real team.

Customer Assumptions

As mentioned at the beginning of this text, you can observe a software development work system in terms of how it deals with the accumulation of assumptions over time. This can be done by observing how people deal with their operational tasks, how managers manage, and how team practices can be organized to guarantee frequent evaluation of assumptions. But you can go further.

The recent Lean Startup movement is teaching us a valuable lesson. This community is learning how to minimize the risk of creating the wrong product by evaluating assumptions about what customers really want during the development process. They use concepts such as Customer-Driven Development, the Business Model Canvas, and Minimum Viable Products to give people an effective method to do that.

The most common form of product development in the software field involves the generation of a backlog of features. Then you manage progress by comparing the planned features with the features already developed. The problem with this approach is that it carries a large quantity of hidden assumptions about what the customer really needs. A product can take years to develop, and after all this time you may discover that nobody wants to use it, basically because your focus remained on trying to meet scope, budget, and schedule.

The Lean Startup community is learning to reduce the cycle of discovering customer needs to a minimum of time and effort. People are now launching product releases in really short cycles. They reach the customer even without a real product developed. They do this because they know that even the most brilliant idea is formed by a set of assumptions, and these assumptions need to be continuously evaluated before you make a major investment in the wrong product.

Basically, what changes is the way you make progress in your product development initiative. In this model, instead of moving from one feature to another, you move forward by evaluating one assumption after another. When you don't get a positive response to your assumption, you pivot. The idea of pivoting makes the approach really strong. When you pivot, you use the new information you have obtained to change the direction of the product, making it compatible with the customer's response. This is quite counter-intuitive, because deviating from the original intention actually makes the product stronger.

Here the same concept applies: it is the time and frequency of the evaluation that matter, not how perfectly you do it.

Trade-off

There is a clear trade-off in finding the optimal time to evaluate assumptions for each feedback cycle. Value-add activities tend to offer space for near-zero time, depending on the available tools and techniques. However, coordination activities, such as meetings, handoffs, and reviews, carry a transaction cost that makes the constant reduction of time less useful.

As an example, stand-up meetings are a coordination activity that can be really effective on a weekly basis, on a daily basis, or even twice a day, depending on the context. A team of managers can do it with project team leaders on a weekly basis; more than that will not generate any value, because there aren't enough assumptions accumulated in that period to be evaluated.
For a software development team, twice a day is too much, while a maintenance team living through a period of crisis can easily do it twice a day.

In all those cases, however, there is an optimal limit to reducing the time between assumption evaluations. The transaction cost of the activity helps us define that limit. When you reach it, a good way to go further is to stop thinking about how to reduce the time and to think instead about how to replace the practice entirely. In that case, you act differently while keeping the purpose of the practice in the context of the feedback cycle. The evolution from developer reviews to pair programming is a good example of this type of change: the purpose was sustained, but the means were modified.

Takeaways

The evaluation of assumptions is a thinking tool. It can be used to analyze a system from a new perspective. It can be potentially useful for most knowledge workers, including developers, managers, product designers, and other IT specialists. Each one, in their own problem space, can use this tool not only to make better decisions, but also to improve their current know-how, designing the process to be more responsive and self-regulated.

If you are somehow involved in a Lean, Kanban, or Agile initiative, stop for a while and try to think about the feedback loops you have in your environment. When are they going to close? What assumptions are you carrying at the moment? When will they be evaluated? What are the possible risks of letting non-validated assumptions accumulate in the system as time goes on?

Processes don't evaluate assumptions; people do. Processes have feedback loops, but the ones who close those loops are people. So when the Lean, Kanban, or Agile communities tell you that "it is all about the people", pay attention. In the end, it is all about how your culture empowers them to make good decisions, no matter what their level of influence is.

Posted by: alissonvale
Posted on: 6/30/2011 at 3:18 AM
Categories: Management | Projetos Ágeis
 
 
That was the title of my presentation last Thursday at Agiles 2009, the 2nd Latin-American conference on agile development methodologies. I designed this presentation to summarize what the Kanban community has recently been trying to spread as a new way to manage knowledge work. Download the slides here to follow this post about the presentation.

By the way, knowledge work was the first topic of the presentation. The initial discussion was about how software development fits into knowledge work (slide #1) and how such activities are associated with a huge amount of variation in scenarios and contexts (slides #3 to #6). Understanding the nature of knowledge work is the key to not getting trapped in translating manufacturing tools and techniques inappropriately.

Context Matters

Don Reinertsen's quote about best practices, and the importance of the right context when considering them "best", warns us to be careful about choosing best practices just because they are supposed to be "the best" (slide #7). Context is relevant, so Kanban allows you to design processes that fit the context, instead of manipulating the context to fit a specific process (slide #9). The word "process" generates fear in the agile community. In fact, there is a reason to put this word on the right side of the manifesto. But there is no reason to think that having no process is the goal. Kanban establishes a new balance in the relationship between people and processes, establishing a way to empower people to design their own processes. Hence, Kanban is a collaborative exercise in process design and offers thinking and action tools to empower people to evolve processes by themselves (slide #11). The process is owned by the people, which can connect Lean with Agile in an incredible way.

And there is more... Kanban is also a mindset

Many case studies around the world describe a kind of transformation process that emerges once you absorb the "paradigm of flow". A Kanban implementation holds only four essential tools, but there is a whole new mindset out there. The WipLimitedSociety is aggregating people and content around something bigger than the four essential tools. A whole new body of knowledge is emerging right now. The "Yes We Kanban" logo is the symbol of this mindset (slide #13).

It's not so easy to structure this mindset in a way people can understand, but I gave it a try anyway. My way of describing the Kanban mindset is by showing the patterns, the tools, and the type of thinking spread by the people who are talking about and applying these ideas. The result was a pyramidal structure like the one I introduce on slide #14: Thinking Tools, Process Design Patterns, Capability Measurements, Collaboration and Team Model Patterns, and a Culture of Continuous Improvement.

Thinking Tools
(slides #16 to #20)

Thinking tools guide practitioners when they are applying Kanban. Systems Thinking helps coaches and team members understand why they are doing what they are doing. Seeing your work environment as a system can make you conscious of causality and the effects of actions. This type of thinking, despite being very abstract, is very powerful: a single intervention can have a huge influence on the system as a whole. While Systems Thinking can be viewed as the starting point, Lean Thinking is the basis. Almost every operational concern is guided by the need to identify value appropriately and make it flow through the system, improving the process continuously and eliminating waste progressively.
The Theory of Constraints is another element in this thinking process. Once you have flow in your process, you are going to start to see bottlenecks that slow it down in certain parts of your system. The Theory of Constraints establishes that your system is only as strong as its biggest bottleneck. By using this theory you can dramatically leverage the throughput of the system or, at least, gain an understanding of which investments in your system are the right ones to get higher or better results.

Queueing Theory is another key thinking tool in the Kanban mindset. What lies behind knowledge work is the unpredictable nature of arrival times and task durations (slide #19). Queues are always there, and an understanding of their behaviour is in the toolbox of every Kanban thinker.

Finally, there is Real Options theory, which has been discussed lately. It establishes the basis for risk analysis by raising the importance of managing choices and deferring commitments to generate options.

Process Design Tools and Patterns

The idea that processes can be designed using tools and patterns instead of procedures and documentation forms is mind-blowing. On slide #21, I delineate four essential tools and some other patterns for process design using Kanban. This list is just an extract of what I have been hearing, as solutions to common problems, used by practitioners and discussed by the community. Here they are:

Essential Tools
- Value Stream Mapping
- Visual Management
- Pull System & Single Piece Flow
- Limited WIP

Patterns
- Buffers & Queue Limits
- Classes of Service
- Leveling Work
- Two-tiered Systems (Expand/Collapse)
- Swimlanes/Expediting
- Triggers
- Priority Filters
- Perpetual Multivote

Obviously, this is not the full list of patterns that people are applying in their Kanban implementations. What matters here is the idea that patterns can help a team solve very specific problems when they are trying to design their processes. You can try to extract the basic meaning of each pattern by looking at slides #21 to #40. You can also leave a comment on this post if you need more info about one or another.

I just want to leave a few words about the last two (Priority Filters and Perpetual Multivote). They were initially proposed by Corey Ladas as patterns for designing the process of selecting work to inject into the system. I think the perpetual multivote system can be particularly interesting for selecting work from a continuous improvement backlog queue. Their descriptions can be found here:

http://leansoftwareengineering.com/2008/08/19/priority-filter/
http://leansoftwareengineering.com/2008/09/29/perpetual-multivote/

Collaboration and Team Model Patterns
(slides #41 to #46)

It's difficult to realize how collaboration patterns emerge on Kanban projects without actually doing Kanban. The first thing to notice is the ownership of the process rules. The best Kanban teams use all these tools to master the process. They in fact own the process, usually helped by a coach with a Systems Thinking or Lean Thinking bias. What people call "swarming" is the most interesting pattern, in my opinion. It usually happens when people have to look at the process board and decide what to do next. The Kanban limits prevent people from injecting more work into the system, so they have to find a way to help push the work in progress out. That's the moment when collaboration patterns start to emerge.
Teamlets, feature teams, and other collaboration structures start to be noticed daily.

Capability Measurements
(slides #47 to #53)

I think this quote from John Seddon illustrates why we have to be careful with measurement in our systems: "Managing with functional measures always causes suboptimization, because parts achieve their ends at the expense of the whole."

Kanban gives the team the ability to measure capacity and use that measure to bargain over, or manage, its commitments with customers and stakeholders. Capacity can be known using time measurements (lead/cycle time) or volume measurements (throughput). It's common for Kanban teams and coaches to use Cumulative Flow Diagrams to analyze throughput and find bottlenecks and other systemic issues in processes (a minimal sketch of these measurements closes this post).

A Continuous Improvement Culture

Finally, the main goal of a Kanban implementation: establishing a Kaizen culture, a culture of continuous evolution of the system. What Kanban teams have been reporting so far is the constant appearance of "kaizen events" as time goes on. On slide #53, I mention three action tools for establishing this culture:

- doing Operations Reviews regularly;
- enabling Continuous Improvement (CI) actions by reserving capacity in the system to inject CI work items continuously;
- expanding opportunities for improvement by removing bottlenecks, eliminating waste, and reducing variability.
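As a small illustration of those capability measurements (my own sketch, not from the presentation; the item log is invented), lead time and throughput can be derived from nothing more than the start and finish dates of work items:

```python
from datetime import date

# Hypothetical work item log: (name, started, finished).
items = [("item-1", date(2009, 9, 1), date(2009, 9, 8)),
         ("item-2", date(2009, 9, 3), date(2009, 9, 10)),
         ("item-3", date(2009, 9, 7), date(2009, 9, 11))]

lead_times = [(done - start).days for _, start, done in items]
print(f"average lead time: {sum(lead_times) / len(lead_times):.1f} days")

# Throughput: items finished per ISO week.
throughput = {}
for _, _, done in items:
    week = done.isocalendar()[1]
    throughput[week] = throughput.get(week, 0) + 1
print("throughput per week:", throughput)
```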

Posted by: alisson.vale
Posted on: 10/11/2009 at 3:25 PM
Categories: Conferences
 
 
Last Saturday I was at the Agile Brazil 2009 conference. It was a great one-day conference with a lot of influential Brazilian Agile thought leaders and practitioners talking about Agile. It is interesting how Rio de Janeiro offers an audience with a highly mature level of knowledge for discussing and arguing about software development. It's definitely a great place to host national and international conferences about Agile.

It's also interesting to watch Lean and Kanban become so controversial in the Agile world. I can classify the Agile community's impressions of the "flow paradigm" into four categories:

1) People who believe in this paradigm as the only alternative for their projects but still don't know how to start;
2) People who believe in this paradigm as a possible alternative for very specific projects (like the ones dealing with support, bugs, and intense change requests);
3) People who are looking very carefully and avoiding any official statement until they fully understand how it works in practice (I've noticed several thought leaders in this situation);
4) People who believe we are moving backwards with this approach.

Anyway, in my opinion, everyone should look very carefully at this new approach. After starting to use it, I really can't see myself using the timebox approach again, even for new product development. The leverage in subordinating the system to its demonstrated capacity is really meaningful. But I have to admit it is very risky for teams with low maturity in Agile practices.

My presentation: System Leverage in Agile Projects

The whole point of my presentation was to describe how changing your way of thinking can create real opportunities for leverage in Agile projects (or any other type of project). I started by describing the difference between Systems Thinking (the new way of thinking) and Analytical Thinking (the old way). The main message was about Systems Thinking as a model for understanding why things work the way they do, as opposed to the analytical model, which builds knowledge about how things work. Using Systems Thinking to solve problems dramatically changes the potential to create leverage in software projects.

Some of the points of leverage I described in the presentation were:

- Focus on the purpose of the system;
- Demand management is as important as demand planning;
- Forget functional measures and focus on measuring the capability of the system (e.g., cycle time) or the conditions of the system (e.g., failure demand vs. value demand; see the sketch below);
- Establish flow end-to-end (from the customer to the customer);
- Limit WIP;
- Leadership, teamwork, and generalist people are elements that improve flow;
- Every wasteful activity points to the design of your process or the design of your product;
- Refactoring your process is as important as refactoring your code.

You can download the presentation here:
For Portuguese readers
For English readers
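On the failure demand vs. value demand point, here is a minimal sketch of the measurement (my own illustration, not from the presentation; the request labels are invented): classify each incoming request and watch the share of demand caused by earlier failures of the system:

```python
# Hypothetical demand log: each request is tagged as value demand (new needs)
# or failure demand (work caused by an earlier failure to do something right).
requests = [("add report export", "value"),
            ("fix broken login", "failure"),
            ("new pricing rule", "value"),
            ("re-explain invoice screen", "failure"),
            ("integrate payment gateway", "value")]

failures = sum(1 for _, kind in requests if kind == "failure")
print(f"failure demand: {failures / len(requests):.0%} of total demand")  # 40% here
```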

Posted by: alisson.vale
Posted on: 6/29/2009 at 11:32 AM
Tags: , ,
Categories: Conferences
Actions: E-mail | Kick it! | DZone it! | del.icio.us
Post Information: Permalink | Comments (0) | Post RSSRSS comment feed
 
 
I have always looked at risk management activities through the event-driven risk approach offered by the current project management traditions. Perhaps because of this, I had never paid as much attention to this topic as I am paying now. Flying back to Brazil from the conference in Miami, I had the opportunity to read the amazing piece of text that David Anderson wrote about risk management in the proceedings book of the conference. After that, I started to think about risk in a different dimension. Now I'm diving into this conclusion:

Mutual trust relationships can be sustained by intrinsic risk management mechanisms.

There is definitely more to dealing with risk than just trying to create mitigation and contingency plans up front. To be relevant, risk management has to be merged into the system so that it orchestrates the system's behavior. I have this feeling today that any decision point in a process should be oriented by some risk analysis approach. I'm not talking about a plan for risk mitigation here; I'm talking about a process for continuous risk mitigation. Is that possible?

In David's article, he describes a technique that uses Classes of Service (CoS) as an instrument to make the system work oriented to risk decisions according to Cost of Delay. So, before you inject new work into the system, you are conditioned to think about risk, since you have to classify each work item by analyzing its associated Cost of Delay. The type of risk then influences behavior through the application of different contextual policies. Pure Systems Thinking!

Labeling work items is an interesting mechanism for influencing system behavior. If you use an e-mail system like Gmail, you can see this in practice. When you decide to tag messages by some form of classification, you define a visual agreement mechanism that is able to influence your behavior: you act differently when you see a message with a special tag. What David proposes is labeling work items by risk criteria, which subordinates the system to being risk-oriented, a fundamental part of self-sustaining processes. That's why David's article got my attention in the conference book. Explicit risk management is missing from our process, and we have to figure out how to apply this knowledge to it.

In our process, we have a different use for Classes of Service. Instead of thinking about risk, we think about value, mapped to agreements that we need to respect as we interact with our customers. We talk to them in these terms:

"You (the customer) should trust us (the vendor), as we will try very hard to keep your business up and running by solving any problem (1) that you have, or by helping you with any operation (2) where you need assistance from us; we are also going to sustain your processes by making improvements and adaptations (3) as your business evolves; and, while we do that, we are constantly trying to deliver new features and capabilities in our software to create new business opportunities (4) that generate value for you in your market."

This "Agreement Statement" maps to our Classes of Service (CoS):

1 - Problem solving
2 - Support and operations
3 - Improvements for sustainability
4 - New value

The intention of our "Agreement Statement" is to create a mutual trust relationship with customers by being flexible with their needs and getting flexibility from them for our needs (mainly estimation, prioritization, error tolerance, and others).
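A minimal sketch of this labeling idea follows. The policy attributes (target lead times and preemption rights) are my own illustrative values, not part of our actual agreements:

```python
from dataclasses import dataclass

# Each class of service carries explicit, contextual policies.
# These numbers are illustrative assumptions, not real commitments.
POLICIES = {
    "problem_solving":    {"target_lead_time_days": 2,  "may_preempt": True},
    "support_operations": {"target_lead_time_days": 5,  "may_preempt": False},
    "sustainability":     {"target_lead_time_days": 15, "may_preempt": False},
    "new_value":          {"target_lead_time_days": 30, "may_preempt": False},
}

@dataclass
class WorkItem:
    title: str
    cos: str  # class of service label

    @property
    def policy(self):
        return POLICIES[self.cos]

item = WorkItem("Customer cannot log in", "problem_solving")
# The label itself, not a per-item managerial judgment, drives how the
# item is treated -- the same effect as the Gmail tags described above.
print(item.policy)
```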
So, all units of work derived from these CoS have value for the customer, but failing to honor any part of this agreement can make the long-term relationship we aim for become unstable. We need to create balance by delivering units of work while observing the system against this "Agreement Statement". During this observation, we have to consider each individual customer as well as the whole system in order to make the right decisions.

There is an assumption here: different entities of our system compete for the same (and limited) resource. So customers have to trust that we are going to make the best decision considering this "invisible" competition happening in the background. That decision process is all about risk analysis. Risk management, then, is an important activity for making the decision process effective. An effective decision process creates trust, and high levels of efficiency in risk control lead to high levels of trust in the relationship.

Using CoS as David suggests seems to be a quite reasonable approach, since CoS is the primary way to classify units of work in Kanban systems. But I have come to see an underlying model for managing risk here, one that is leading us to define a new orthogonal axis in our system for risk-oriented work item classification and risk-oriented policies. Merging risk analysis into our system to influence its behavior is going to be my next challenge. A rough sketch of what that second axis could look like follows.
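As a thought experiment, each item could carry both a value-oriented class of service and an independent risk label, with the pull decision consulting both. The risk labels, the weights, and the multiplicative scoring below are purely hypothetical, one possible way among many to combine the two axes:

```python
# Hypothetical weights for the two orthogonal classifications.
VALUE_WEIGHT = {"problem_solving": 4, "support_operations": 3,
                "sustainability": 2, "new_value": 1}
RISK_WEIGHT = {"high_cost_of_delay": 3, "moderate": 2, "low": 1}

backlog = [
    {"title": "Fix billing error",  "cos": "problem_solving", "risk": "high_cost_of_delay"},
    {"title": "New report feature", "cos": "new_value",       "risk": "moderate"},
    {"title": "Refactor importer",  "cos": "sustainability",  "risk": "low"},
]

def pull_next(items):
    """Pick the next item by combining the value and risk classifications."""
    return max(items, key=lambda i: VALUE_WEIGHT[i["cos"]] * RISK_WEIGHT[i["risk"]])

print(pull_next(backlog)["title"])  # -> "Fix billing error"
```

The interesting property is that the risk axis changes the ordering without redefining the value-oriented agreements: the same Agreement Statement stands, while risk analysis quietly steers what gets pulled first.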

Posted by: alisson.vale
Posted on: 5/12/2009 at 4:58 PM
Tags: , ,
Categories: Management
Actions: E-mail | Kick it! | DZone it! | del.icio.us
Post Information: Permalink | Comments (2) | Post RSSRSS comment feed