Introducing "retrospectives" for a new team that I'm working with: I ask: What problems did you have since last time I had been here? Result of discussion: 7 different problems listed on the board. What actions would we take to make them disappear from now on? They suggested 5 actions. I agree with all and suggest another 4. All of them were just small adjustments on process or behavior. So far: 9 actions listed on the board. I ask: - Good?- Good!  - Can you do it? - Sure. Easy! - Do you think we can check the effectiveness of these changes next month and discuss about new problems that will arrive until there? - Sure. - Do you think you can repeat this process every month even if I'm not here to lead? (silence) -> hmmm, understandable... change a habit is a different level of commitment! - Ok. Let's think this way: Try to establish a reality where you do this every month. How would be the work process 10 months from now? Better or much better? - Much better! - Nice! Now try to create a parallel dimension where you don't do this. We just work and work and work... no improvement process whatsoever. How would be the work process 10 months from now? The same, worse or much worse? - The same? - No! Much worse! - Why? - Feedback cycles. Systems, and work systems are not different from any regular system, are composed by feedback cycles. They are the way systems run themselves. When you don't improve over time you are in a self-reinforcing positive feedback loop. Problems increase the level of insatisfaction on team members. Displeased with how things are going, people care less about their work, which generates more problems, and by consequence, more insatisfaction. The cycle keeps reinforcing the level of insatisfaction and the work environment tends to get much worse. When you improve over time you are in a self-correcting negative feedback loop. Actions to improve restore and control the level of insatisfaction. 
Good results motivate people to keep improving and to care more about the work. This is also called "pursuit of excellence". Lesson: There is no stable work place. A work environment that doesn't improve, gets worst.
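The two loops can be sketched numerically. This is a toy model, not a measurement: every coefficient below is invented purely to illustrate the reinforcing and balancing dynamics.

```python
def simulate(months, improvement_rate):
    """Toy dynamics: unresolved problems raise dissatisfaction a little each
    month, dissatisfaction breeds more problems, and retrospective actions
    remove a fraction of the problems. All coefficients are made up."""
    problems, dissatisfaction = 7.0, 0.0   # seven problems, as on the board
    for _ in range(months):
        dissatisfaction += 0.1 * problems           # problems breed dissatisfaction
        problems += 0.2 * dissatisfaction           # dissatisfaction breeds problems
        problems -= improvement_rate * problems     # improvement actions remove some
    return round(dissatisfaction, 1)

# Reinforcing loop (no retrospectives) vs. balancing loop (monthly actions):
print(simulate(10, 0.0))   # grows month over month
print(simulate(10, 0.5))   # stays under control
```

The exact numbers mean nothing; the shape does: with no improvement step the dissatisfaction figure keeps climbing, while the monthly-action run levels off.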

Posted by: alissonvale
Posted on: 2/18/2013 at 4:03 PM
Categories: Management | Projetos Ágeis
 
 
Last September I presented this topic at the 2012 Agile Brazil conference in Sao Paulo. The reception was great, and I finally had the opportunity to translate it into English so it can reach a larger audience. [More]

Posted by: alissonvale
Posted on: 10/29/2012 at 7:39 AM
Categories: Conferences | Management | Projetos Ágeis
 
 
Last May, at the Lean SSC 2011 conference, I introduced to some colleagues my thoughts on what I think is one of the fundamental elements of knowledge work systems: cycles of evaluation of assumptions. Alan Shalloway has recently taken this conversation forward, connecting these thoughts to other ideas related to risk and flow in this blog post. So I have decided to write about the topic in more detail, to explore what I think is part of what the Lean-Kanban and Agile communities do and why it matters so much.

Why we should care about assumptions

Assumptions are at the heart of all knowledge work. In the software development field you can find them everywhere, from coding to portfolio management, from the smallest technical decisions to the largest product design decisions. The evaluation of an assumption ends a cycle of learning and defines the boundaries of systemic feedback loops. Several things happen when you evaluate one or more assumptions:

(1) Learning: evaluating an assumption is a very important opportunity for learning. By comparing what you got with what you expected, you learn what you are capable of. You learn to separate fact from intuition. You learn to work with small expectations and to quickly adjust bigger expectations as the smaller ones derived from them are evaluated.

(2) Making progress towards a goal in a terrain of uncertainty: in knowledge-based environments, all you have is uncertainty. You don't know how much time a task will take. You don't know how one task will affect others until all the assumptions around it are validated. Progress in these environments is not defined by putting another brick in the wall, but by the recurrent evaluation of assumptions that emerges as you move from one uncertainty to the next.

(3) Establishing relations of causality/correlation: this insight came from Don Reinertsen, when we were discussing the topic with other colleagues at LSSC11. As the time to evaluate your assumptions increases, your ability to explain the result in terms of causality or correlation decreases. These are fundamental mental capabilities for finding opportunities to learn and improve.

(4) Reducing risk: Alan Shalloway was the first to relate this to the work of Bob Charette. By controlling the number of assumptions in a given system, you narrow the range of options and reduce the probability of an unexpected result. I believe that if you design mechanisms to control the volume of non-evaluated assumptions in a system, you create the potential to mitigate certain types of risk without too much case-by-case upfront analysis.

Awareness of being surrounded by assumptions is a property of an environment tolerant of failure. As Alan Shalloway stated in the post mentioned above: "Sometimes we are so sure of our assumptions that when we prove them wrong, we call the action doing so an error". Errors only happen when actual facts prove your conduct or judgement wrong. It is a mistake to encourage a culture of assigning guilt and comparing results with expectations when all you have are assumptions. It is important to learn to separate unexpected results arising from fact-based reality from unexpected results obtained from non-evaluated assumptions.

Time is the dominant parameter

Acknowledging the existence of assumptions in our knowledge work environments is the first step. However, just knowing that they are relevant is not enough. We need to know how to use this awareness to design better work systems.

My breakthrough about the "how" came when I first read about John Boyd's work a few months ago. His theory is centered on the OODA loop concept (Observe, Orient, Decide, Act).
According to Boyd, every active entity, whether an individual, a group of individuals or an organization, uses the OODA loop to react to events. It first observes the environment or situation it is involved in, then orients by analyzing the data against its own background of knowledge and experience, then decides, and finally acts on that information.

However, instead of concentrating on the quality of each part of the loop, which cycles like PDCA and LAMBDA seem to describe well, Boyd came up with something really interesting. For Boyd, what matters most is not how good you are at observing, orienting, deciding or acting, but how fast you go through these stages in order to start a new cycle. Although most of his work is oriented to environments dominated by competition, I think it also applies to collaborative environments, where the fundamental principle is adaptation and responsiveness to new information from an environment in constant change.

As Boyd's theory suggests, the speed of iteration is more important than being right in each iteration. If the evaluation of assumptions marks the boundary of feedback loops in knowledge work environments, maybe the same rule applies to how we iterate in our activities. In other words, being fast at evaluating our assumptions is better than trying to force our assumptions to be right in the end. So, to be effective in knowledge work environments, we need to constantly try to minimize the time between the creation (or emergence) of an assumption and its evaluation.

To represent that concept, I came up with an expression that captures the whole idea in a few symbols:

? – min(t) → !

Where:

? represents the creation of an assumption
! represents the evaluation of that assumption
min(t) represents the focus on minimizing the time between the creation and the evaluation of the assumption

How programmers are minimizing the lifetime of assumptions

If you were a programmer in the mid-1980s, you probably remember writing programs on punched cards. Look at how Wikipedia describes a punched card:

A punched card is a flexible write-once medium that encodes, most commonly, 80 characters of data. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard. A typing error generally necessitated repunching an entire card. A single character typo could be corrected by duplicating the card up to the error column, typing the correct character and then duplicating the rest of the card. (...) These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers.

At that time, programmers had to write their code on these cards and submit them to an operator who ran the program on an IBM mainframe. It could take days to get the response from the execution of those programs! And after days of waiting, the result could be something like: "Error on line 32".

I started my own career as a software developer in the late 1980s. Before becoming a real programmer, I used to type in several pages of code printed in Brazilian magazines, just to run them and realize, probably because of my own mistakes while typing, that the program didn't work. It could take several hours to copy the whole program line by line, and the result of running it could be just a blank screen or a small square blinking somewhere.

We all know that writing many lines of code before you can evaluate them is not as good as evaluating a few lines, or even each line, as you type. We also know that writing a really simple test that evaluates a single behavior of your code is considered better practice than writing complex tests with complicated scenarios and multiple inputs and outputs. The point is not only simplicity versus complexity, but also the speed at which you can evaluate the whole set of assumptions you carry when you write those lines.

Nowadays, compilers and interpreters are getting quicker at catching typos and syntax mistakes. In some development platforms, compilation or interpretation occurs in the background, and I get instantaneous feedback on my errors. If I use techniques like Test-Driven Development (TDD) or Behavior-Driven Development (BDD), I also get instantaneous feedback not only when I mistype something, but also on whether the system does what it should. Ruby developers, for instance, have shrunk this feedback loop even further by having their tests executed automatically in the background for every saved change in their source files. Developers have been applying intensely assumption-oriented semantics to specify behavior; you see words like "should" dominating their vocabulary, a clear indication of awareness of the assumption inherent in each piece of code.

From punched cards and long listings of non-validated code to real-time validation of syntax and purpose, what is common to all these evolutionary leaps is the exponential growth in our ability to reduce the lifetime of assumptions in the form of non-validated lines of code. Every line of code carries its own assumptions: that it is written in the right syntax, that it is the right instruction, that it is in the right place, and that it is the best way to do what should be done at that point.

Peer Review

Programmers have to deal with several challenges when working together in teams.
Style and syntax standards, code design, usability guidelines, test practices and other rules need to be aligned among all developers on the same team. There are so many details and so many different situations when you are coding that just writing rules down and expecting everyone to follow them is not a good option.

When software development becomes a repetitive activity, quality goes down. That happens because code is a complex artifact: you need to embrace its inherent variability to prevent the uncontrolled expansion of that complexity, and documented "standards" don't help you deal with variability.

It is common to use a review cycle to help developers align on a common structured model. When a developer completes a feature, another developer reviews what was done, pointing out what doesn't fit the team's working structure, what could be better and what could be removed because it isn't necessary. As time goes on, the expectation is the formation of a shared mental model that everyone follows by agreement, not by rules in a document.

The problem is that there are several ways to do this. One way involves one developer handing off the implemented feature to another. That is not a good solution in most cases. Besides the natural issues with handoffs, the developer who receives the task will probably queue it until they finish their current work. Since there is usually no visibility and there are no limits controlling what is happening, the volume of work waiting for review can grow very quickly.

Another way is to pull someone in to review the code together with the original developer as soon as the feature is implemented. This is a much better solution, since it eliminates the handoff, prevents the formation of queues and promotes collaboration. However, it still carries downsides: interruption, context switching, a rush to finish the work, and the reviewer's lack of involvement from the beginning.

We could think up other solutions to the review problem, but developers have already discovered a much more efficient way to handle it: pair programming. In this model, two developers work together the whole time, until the feature is finished. Every action or decision is immediately reviewed by the peer. There is no handoff, no context switching, no rush, no lack of involvement.

However, pair programming still leaves room for some dysfunction, since a feature absorbs the mental model of only two people on the team. A technique called promiscuous pairing addresses this problem by having developers swap pairs at a certain frequency, so that everyone works a little on every feature.

As with coding, we see the same pattern of evolutionary leaps here. When a developer starts coding, assumptions about that code are created. Is it following the style and rules used by the team? Is it doing what it is supposed to do? Are there design issues or flaws that should be resolved before finishing? These and other assumptions live in the system until another team member evaluates them. The review process is just another way to evaluate these assumptions, and again, time is what matters. In the first review model, the handoff extends this time quite a lot. In the second model, where a developer pulls in another for the review, the evaluation time is reduced to the time it takes to develop the feature, which is better. Pair programming brings this time to near zero, but still leaves some assumptions in the system. With promiscuous pairing, the evaluation embraces more assumptions while the time remains minimal.
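The coding practices described earlier, TDD in particular, make this concrete: each tiny test evaluates one assumption the moment it is created, rather than letting it accumulate. A minimal sketch, where the function under test and its behavior are invented purely for illustration:

```python
def apply_discount(price, percent):
    """Hypothetical function under test; name and behavior are invented."""
    return price * (1 - percent / 100)

# Each micro-test evaluates exactly one assumption as soon as it is written,
# instead of letting it live unvalidated in the code base.
def test_ten_percent_off():
    assert apply_discount(100, 10) == 90

def test_zero_percent_keeps_price():
    assert apply_discount(100, 0) == 100

# A background watcher (like the Ruby runners mentioned above) would execute
# these on every save; here we simply run them directly.
test_ten_percent_off()
test_zero_percent_keeps_price()
```

The point is not the tests themselves but their granularity and cadence: the lifetime of each assumption, from typing the line to seeing it validated, is seconds.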
How the management system is affected by assumptions

The accumulation of assumptions is probably one of the biggest sources of dysfunction in value-adding activities in knowledge work environments, and management activities are no different.

The Agile Manifesto probably marked the emergence of a new paradigm in the software development world. In terms of management, the Agile movement, and later the Lean-Kanban movement in software development, both promote a different approach. The predictable world of command and control was left behind, opening space for a new understanding that values people over processes and collaboration over contract negotiation. Managers are now supposed to be systems thinkers instead of resource controllers. While this transformation was taking place, dramatic changes occurred in terms of assumption evaluation.

When someone is doing a task for others, implicit assumptions run through the whole process. Is the right direction being followed? Is it being done correctly? Is it the right thing to do at this time? Many times we hear the same story: weeks or months after a manager assigns tasks to developers, usually in large batches, they discover that nothing works as expected, that the code has no quality, and that developers are taking the blame and leaving the project.

Most managers would say this happens because of lack of control, and that managers should command and control more tightly. But should they? Should they try to impose predictability on a naturally unpredictable environment? I don't think that is the answer. In those types of projects, as time goes on, new assumptions are constantly generated because you are essentially dealing with partial information all the time. As the project moves forward, you collect pieces of information that were missing when you started. Evaluating assumptions helps you confirm or refute decisions that were made in the absence of that information. This applies not only to developers writing code, but to management practices as well.

Let's analyze some management practices from this perspective. Nowadays, more and more development teams hold 15-minute stand-up meetings on a daily basis. They do it to improve communication and alignment towards a shared goal. What they probably don't realize is that in every one of these meetings they are evaluating some important assumptions: that everybody on the team has been working in the right direction, that everybody will work in the right direction tomorrow, and that the team knows about current problems, so members can help each other overcome them. Do you agree that if we evaluated these assumptions every week instead of every day, we would be allowing the accumulation of critical assumptions that could jeopardize our short-term goals? These meetings are usually kept really simple, short and frequent, because the focus is not on doing the meeting right, but on doing it frequently enough that those assumptions don't live unvalidated for long.

Why short iterations?

Scrum practitioners have also been applying the concept of short assumption evaluation to estimation. They assign "points" (story points) to units of work; the number of points for a unit increases with the complexity or uncertainty of what is expected of it. The project is broken into timeboxed iterations. When an iteration starts, the team defines how many points each unit of work is worth and the total number of points they can handle for the whole iteration. What a lot of Scrum teams don't realize is that this is just an assumption. The total amount of estimated points is an assumption about future throughput, not a commitment.
The number of points for each unit of work is likewise just an assumption about the understanding of the problem to be solved, not a commitment to fit the work to its estimate.

Now you can probably see why short iterations are frequently better, and why teams working this way are considered "more mature" than teams working in longer iterations. If you really want to learn about your capacity, the time it takes to evaluate the created assumptions is crucial. Taking more time over this evaluation increases the number of live assumptions in your system, making your knowledge of the project's capacity less precise.

The Sprint Review is another Scrum ceremony that is useful for controlling assumptions in software projects. As the team produces working software, non-evaluated assumptions about that work accumulate. Is each feature what the customer was expecting? As development starts, those assumptions come to life for every feature, and only when a feature is presented to the customer can you evaluate them. The Sprint Review marks the point where that evaluation happens. Since the cadence of this ceremony is aligned with the completion of an iteration, shorter iterations reduce the lifetime of those assumptions in timeboxed projects.

Continuous Deployment

After a feature is reviewed, another assumption appears: will the new feature need any adjustment to work well? The implemented feature needs to be deployed so the customer can actually use it. To publish a new release of the product, it is common to wait for a volume of new features sufficient to form a cohesive business operation. This wait is typically measured in months: quite a large timeframe to keep non-validated assumptions alive.

The side effect of this delay is known as "release stabilization". Feedback from users arrives in such quantity right after a release is published that the team has to stop developing new features for weeks until all the requests are addressed. In other cases it is not the volume of requests that hurts the team, but the lack of any request at all: some features are released and nobody uses them. The cost of the complexity they add to the code is badly underestimated when this happens, and indeed it happens a lot.

Some web companies with millions of users are showing a way out of this problem. The time to reach the user is minimized by a model in which new features are progressively deployed to clusters of users in cycles of evaluation. A new feature is first deployed to a small group of users, and its use is evaluated. If people don't use it, the feature is simply killed. If, on the other hand, it generates good feedback and traction, it is deployed to the next group of users. The first group will also point out flaws or misbehaviors, which are corrected before the next group receives the feature. New analysis is made, feedback is collected, and the cycle goes on until all users have the feature.

Again, the main difference between the two models is that in the second one you recognize that what you built is based on assumptions. The users are the ones who will evaluate them, not the Product Owner or any other proxy role in your project. What makes you effective is how quickly you can reach the customer to make that evaluation.

WIP Constraints

The emergence of the Kanban change management method brought new tools for dealing with assumptions. Flow, visibility and WIP constraints are the essence of this approach, and these elements contribute hugely to reducing the lifetime of assumptions.

Let's go back to the customer review problem that Scrum teams address with the Sprint Review ceremony.
We already know that teams working in short timeboxed iterations control the lifetime of assumptions better than teams working in longer iterations. But we can do better than that. What would happen if we built into our work system the ability to evaluate those assumptions as soon as the minimal conditions to do so are met, instead of at a pre-defined, scheduled time?

With a WIP constraint you can control the accumulation of assumptions in the feature review process: the PO is involved as soon as the accumulation reaches a chosen level. You can argue that more pressure on this cadence is impossible because of the PO's availability, or for some other reason. Fair enough: in that case you can't reduce the volume of assumptions in your system, because you are subordinated to a constraint you can't remove. But don't lose the opportunity to make your system better at controlling assumptions just because you want to stick to a method's prescription. Scrum is a good set of practices, but it is not the best set of practices, simply because, as the Cynefin framework suggests, in complex environments there is no such thing as a best practice; there are only emergent practices. So there is no reason not to design your process in a more efficient way.

Handoffs

WIP constraints can control the lifetime of assumptions in several ways. A particularly useful scenario is when you are dealing with handoffs. Despite being a necessary evil in some cases, handoffs should be avoided most of the time. When you continuously transfer work between people, teams or organizational units, you create some very critical assumptions. Has the work arrived in good condition? Did the receiver get what they were expecting? Will the person who has to respond do so in the expected timeframe? Will rework be unnecessary? Is enough information being passed along with the work?

WIP constraints can minimize the impact of handoffs by forcing these assumptions to be evaluated before new ones are created: only when everything is fine to move on are you allowed to start new work. This practice has the potential to transform any knowledge work environment because, among other reasons, you are controlling the number of assumptions living in your work system at a given time. You do that by forcing their evaluation in a dramatically shorter cycle.

Visibility

Visibility is another Kanban element with a systemic effect on assumptions. A fundamental aspect of any work model is how much time people need to react to the new information that arrives every day. When the work model is visible, people on the team start to share a mental map on which conditions and information about each important piece of work can be signalled.

In knowledge work, it is all too easy for important information to stay hidden in e-mail inboxes, phone calls or memos. If everyone's work is projected onto a single map, you move the assignment model from a per-individual basis to a system basis. With a single map, people have a place to pull work from based on explicit policies, and a place to discuss strategies for handling each day's challenges.

What visibility does is expose one of the most important assumptions in knowledge work: that the system is working fine. Aligned with the other Kanban principles, visibility empowers people to discuss a better distribution of effort based on availability and importance, instead of mere familiarity or personal assignment. They can anticipate problems and swarm on them as soon as they emerge. They can work as a real team.

Customer Assumptions

As mentioned at the beginning of this text, you can observe a software development work system in terms of how it deals with the accumulation of assumptions over time.
This can be done by observing how people deal with their operational tasks, how managers manage, and how team practices are organized to guarantee the frequent evaluation of assumptions. But you can go further.

The recent Lean Startup movement is teaching us a valuable lesson. This community is learning how to minimize the risk of building the wrong product by evaluating assumptions about what customers really want during the development process. It uses concepts such as Customer-Driven Development, the Business Model Canvas and Minimum Viable Products to give people an effective method for doing so.

The most common form of product development in the software field involves generating a backlog of features and then managing progress by comparing the planned features with those already developed. The problem with this approach is that it carries a large number of hidden assumptions about what the customer really needs. A product can take years to develop; after all that time, you may discover that nobody wants to use it, basically because your focus remained on meeting scope, budget and schedule.

The Lean Startup community is learning to reduce the cycle of discovering customer needs to a minimum of time and effort. People now launch product releases in really short cycles. They reach the customer even before a real product has been developed. They do this because they know that even the most brilliant idea is formed by a set of assumptions, and those assumptions need to be continuously evaluated before a major investment is made in the wrong product.

What changes, basically, is the way you make progress in your product development initiative. In this model, instead of moving from one feature to the next, you move forward by evaluating one assumption after another. When an assumption doesn't get a positive response, you pivot. The idea of pivoting is what makes the approach really strong. When you pivot, you use the new information you have obtained to change the direction of the product, making it compatible with the customer's response. This is quite counter-intuitive: deviating from the original intention actually makes the product stronger.

The same concept applies here: it is the time and frequency of the evaluation that matter, not how perfectly you do it.

Trade-off

There is a clear trade-off in finding the optimal time to evaluate assumptions for each feedback cycle. Value-adding activities tend to offer room for near-zero evaluation time, depending on the available tools and techniques. Coordination activities, however, such as meetings, handoffs and reviews, carry a transaction cost that makes constant time reduction less useful.

Stand-up meetings, for example, are a coordination activity that can be really effective on a weekly basis, a daily basis or even twice a day, depending on the context. A team of managers can hold one with project team leaders on a weekly basis; more than that would generate no value, because not enough assumptions accumulate in that period to be worth evaluating. For a software development team, twice a day is too much, while a maintenance team living through a period of crisis can easily do it twice a day.

In all these cases, however, there is an optimal limit to reducing the time between assumption evaluations, and the transaction cost of the activity helps define that limit. When you reach it, a good way to go further is to stop thinking about how to reduce the time and start thinking about how to replace the practice entirely: you act differently while keeping the purpose of the practice within the feedback cycle. The evolution from developer reviews to pair programming is a good example of this type of change. The purpose was sustained, but the means were modified.

Takeaways

The evaluation of assumptions is a thinking tool. It can be used to analyze a system from a new perspective. It is potentially useful for most knowledge workers: developers, managers, product designers and other IT specialists. Each, in their own problem space, can use this tool not only to make better decisions, but also to improve their current know-how, designing their process to be more responsive and self-regulating.

If you are in any way involved in a Lean, Kanban or Agile initiative, stop for a while and think about the feedback loops in your environment. When will they close? What assumptions are you carrying at the moment? When will they be evaluated? What are the possible risks of letting non-validated assumptions accumulate in the system as time goes on?

Processes don't evaluate assumptions; people do. Processes have feedback loops, but it is people who close those loops. So when the Lean, Kanban and Agile communities tell you that "it's all about the people", pay attention. In the end, it is all about how your culture empowers them to make good decisions, whatever their level of influence.
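The WIP-constraint mechanism discussed above can be sketched as a small guard on a work stage. This is a toy model with invented names, not the API of any real Kanban tool: a stage refuses new work until an existing item (and its open assumptions) has been cleared.

```python
class Stage:
    """Toy model of a kanban stage with a WIP limit. New work is refused
    until existing items, and the assumptions they carry, are dealt with."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        if len(self.items) >= self.wip_limit:
            return False              # forces evaluation/finishing before new work
        self.items.append(item)
        return True

    def finish(self, item):
        self.items.remove(item)       # assumptions evaluated, capacity freed

review = Stage(wip_limit=2)
assert review.pull("feature-A")
assert review.pull("feature-B")
assert not review.pull("feature-C")   # blocked until a review completes
review.finish("feature-A")
assert review.pull("feature-C")       # capacity freed, work flows again
```

The refusal is the whole point: it converts "assumptions quietly piling up in a queue" into an explicit, immediate signal that something must be evaluated now.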

Posted by: alissonvale
Posted on: 6/30/2011 at 3:18 AM
Categories: Management | Projetos Ágeis
 
 
One of the central elements of Kanban is the ability you acquire to continuously redesign your process as you understand how things work in your scenario. In fact, understanding what we do was the first benefit we got after starting to work with this approach about a year ago. As we gained this comprehension, we realized there was something special about how we had been signaling the elements of our process: the more meaningful these signals become, the more stable and predictable our process becomes. Figure 1 shows what we call "The System View". It is the main view of the process and the one most frequently used by all team members.

Figure 1: View of the System as a Whole (Click to Expand)

As you can see in Figure 1, there is a lot of signaling on our Kanban board. The signs help us quickly answer specific questions we deal with on a daily basis, such as "What am I supposed to do now?", "What, and how much, has been delivered week after week?" or "What is in the system right now?". They also help work flow downstream more smoothly, sometimes by pointing us to flow decisions based on our policies and preferences, sometimes by exposing bottlenecks that happen from time to time.

The System itself is the first Signal

Our Kanban board is organized in three main areas that suggest a systemic view of the process: Input, In Progress, Output. Our system was intentionally drawn from a systemic perspective. Like every other work system, we have a lot to do (Input), we are doing some work now (In Progress), and we have already delivered some work to our customers (Output). When you look at the Demand area (Input), you are actually trying to look at the future. The WIP area represents the present, and the Delivered area the past.
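The three-area systemic view, with a limit on the In Progress area and the pull rule that follows from it, can be sketched as a minimal board model. This is a hypothetical sketch for illustration, not the authors' actual tooling, and the limit is an arbitrary example:

```python
# Minimal model of a board split into Input / In Progress / Output,
# with a WIP limit enforced on the In Progress area (limit hypothetical).

class Board:
    def __init__(self, wip_limit=2):
        self.input, self.in_progress, self.output = [], [], []
        self.wip_limit = wip_limit

    def pull(self):
        """Pull the next card into WIP only if there is an empty slot."""
        if self.input and len(self.in_progress) < self.wip_limit:
            self.in_progress.append(self.input.pop(0))
            return True
        return False  # no empty space: finish something first

    def deliver(self, card):
        """Move a finished card downstream, freeing a WIP slot."""
        self.in_progress.remove(card)
        self.output.append(card)

board = Board(wip_limit=2)
board.input = ["#101", "#102", "#103"]
board.pull(); board.pull()
assert board.pull() is False   # WIP full: the limit blocks a third pull
board.deliver("#101")          # delivering frees a slot...
assert board.pull() is True    # ...which triggers the next pull
```

The point of the sketch is the rule described in the post: a token can only move into an empty space, and creating that space is what fires the next decision.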
Looking at this picture as a whole helps us make decisions when we face hard situations involving prioritization, capacity, overloading, flow, value and waste reduction. A key point of this approach is the ability to evaluate specific situations with the whole picture in mind. Every decision results in the movement of a physical token that moves around the board following physical constraints and rules, like on a chessboard. Tokens are moved from left to right. I can only move a token to an empty space, for example; if there is no empty space, I have to create one by moving other tokens to other areas. A trigger is then fired to re-analyze the system with respect to prioritization, schedule negotiation and resource availability. In other words, when you (as a team member) operate the system, you use the signals to guide yourself, just like in a chess game.

Figure 2: Signals guide the Flow and Team Decisions

Our board today can be seen with different eyes, or different views. We look at our system as we look at a map, and there are areas in this map that can be zoomed in to increase the detail about what is happening there. Figure 3 shows the three main areas zoomed in. We need more detail to manage the different areas of the system. Demand management, for example, is done by increasing the detail of the Input area. So when we have to organize the demand, some tools and visualizations fit the nature of this area's operations better: bigger cards, prioritization filters, drag and drop of cards between areas and slots, grouping by customer importance, and a trash basket to put away old demands that expire as time goes on.

Figure 3: Expanding the Input Area with the Front View

Another area where it is very important to take a closer look frequently is the WIP area (Figure 4).
When we look at this area, we want to know how things are going right now, in the present. Here we monitor other aspects of the process, for example:

Figure 4: WIP View

- How the demand is distributed in terms of classification (see "Value and Demand" later in this article). For this, we have a graphic signal showing the percentage of each class of service we are working on at a given moment. It allows us to influence the system so that more work of a certain type can be injected to balance the output results. For example, we can prioritize more "value" work when we see that the percentage of "non-value" work is too high.

- How the demand is distributed across team members. Sometimes the flow in the system is threatened by work accumulating in team members' buffers. We can see that simply by analyzing the number of cards per work state (see Figure 5). It also shows important information about team overloading and multi-tasking levels.

Figure 5: Number of cards by work state in WIP

- Not losing focus on cards that are "Waiting for Customer". This queue holds cards waiting for information or decisions from the customers. The longer they stay in this queue, the worse it is for the flow of the system in general.

- Parking Lots keep our eyes on the progress of deliverables that need several other minor deliveries to get done. It could be an MMF (Minimum Marketable Feature), for example, when you need to group value as a set of features or user stories to deliver at once. They can also represent any other business activity that gets done through the synchronized execution of several other cards, whatever their class of service.

The Delivery area is what we see when the Output part of the system is zoomed in (Figure 6). This area represents the past, and it is the source of our "Yesterday's Weather" analysis for planning.
Cycle time and other information give us the necessary understanding of what we are capable of (in terms of delivery per class of service).

Figure 6: Delivery View

Value and Demand

Another important topic worth mentioning is how the work is classified, sized and organized in the system. And signaling comes to help once again. We represent each unit of work in our system as a "kanban card", and we use different types of signals to draw and organize the cards on the board (Figure 7):

Figure 7: Signals on our unit of work (kanban cards)

Each card is labelled with a unique number, which helps us track it and communicate better when the team needs to talk about it. Bigger cards represent the work before it enters WIP (Work In Process), smaller ones after. The reason is that bigger cards can show the description of the work, which helps us analyze several of them at the same time when prioritizing.

Colors classify the work by class of service: [Green] for Value demand, features that generate new business opportunities for customers; [Yellow] for Improvements to Sustaining, features that customers miss when trying to use the product to sustain their business; [Orange] for Support Operations, activities to support the use of the product by customers; [Red] for Problem Solving, when a customer cannot use the product appropriately for any reason. In Lean terms, the green cards are value, and all the others are pure waste.

Two types of border indicate whether the card is a unit of work for a customer (solid border) or for the team's own tooling and process improvement (dashed border). A sign in the top right area of the card indicates the estimated size of the work.
We work with just three sizes: Small (our reference, taking between 1 and 2 days of work), Medium (3 times more complex than the reference) and Large (5 times more complex than the reference). The same colors are used to signal how much of each class of service is in the three areas of the system. This can be a good measurement for tracking waste levels, or for controlling how much work of each type may enter the system. The interesting thing about this kind of signaling is that it creates opportunities to see the whole from the Lean perspective of value. We can monitor how much of our system's effort is consumed generating value in a certain period for each class of service, and that is the main metric we use to evaluate our process: how much money are we spending generating new value versus dealing with waste?

Stages, Queues and Limits

Once our value and demand are classified appropriately, it is important to see how this demand flows once it enters the system. Our visual board contains areas representing each stage of the process. Most of these stages have limits that prevent new cards from entering. The limits cause some interesting systemic behaviours: they avoid the perpetual accumulation of work in stages where the incoming rate is higher than the outgoing rate; they highlight the need for sequencing inside the stage and constant prioritization of the work; and they help make a downstream operation the one that establishes the rhythm of the flow. Our stages and their signals are briefly explained in Table 1.

The Front Queue: all the collected demand not yet qualified to enter the system according to business criteria. The items in this queue are not relevant to someone looking at the System View, only to those managing demand. But it is important to monitor the number of items in this situation for each project.

Waiting For LRM Queue: limited to 14 items. Items in this queue require more constant observation. They are waiting for a movement in the LRM Queue, which creates an empty space and triggers a prioritization action: the work item is pulled from this queue into the LRM Queue. When this happens, an empty space shows up here, forcing another prioritization action, this time pulling a card from the Front Queue. That is the point of a pull system: the next stage in the pipeline pulls the work as needed.

LRM Queue: LRM is an acronym for "Last Responsible Moment" and indicates that the best time to work on an item has come. The first item in this queue is the next one the team will pull and start working on. We have two LRM queues, one for each work cell (development and support operations). Look at the empty space in the first slot of the queue beside: that is the system's sign that more work can be pulled from the front area.

Team Members' WIP Queues: each team member (including the leaders) organizes their demand using the item's work state. Ideally, a team member would hold just one card, and that card would stay in the "In Progress" area. But in the real world we have to deal with other cards in different state contexts. The Feedback area holds cards that come back from downstream, for quality reasons, for improvements, or even for failing to comply with standard policies. The Inspection area is filled by automatic distribution. The numbers in this area represent the order in which each team member will receive the next item for inspection: when someone moves a card to indicate it is "Done", the card is automatically assigned to the team member holding number #1 in their slot, and then everyone else gets a new number (the current #2 becomes the new #1, and so on). This mechanism was created to randomize the allocation of inspections and lets every team member inspect the work of all the others. It is an interesting way to influence the flow of the system through signaling.

Release Ready Queue: when the work is done and inspected, this stage indicates that the item is ready for release, meaning the customer can already be notified about that job. Here we take different actions depending on the type of work or on deployment issues: the notification can be a link to download a new release of the product, instructions to configure the product, or simply the information that some situation has been addressed and the product is ready to use.

Waiting For Customer: sometimes the work cannot be finished because some information or decision from the customer is missing. This is a kind of "hospital" area, where the work stays stationary until we get the information and can proceed in WIP.

Released Queue: the real output of the system. After we send the notification, we move the card to the Released area, and the stop button of the cycle-time clock is pressed.

Table 1: Stages of our Kanban System

"Stop the Line" Signals

One of the problems of visualizing a process as just a sequence of linear steps is underestimating the 20% of situations that cause 80% of the problems. The linearity only appears when you observe from a high-level view. On a daily basis, a lot is happening, and we have to be conscious of the complexity of handling multiple demands, with different people and different customer expectations, at the same time. Implementing autonomation can be a powerful way to prevent work without quality from moving on through the process. A big source of quality problems generated by complex trade-offs while the work is in progress can be avoided: just have your tools send you signs when possible anomalies are in place.
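The "stop the line" idea can be sketched as a simple gate that blocks downstream movement while any quality sign is raised on a code branch. This is a hypothetical sketch of the mechanism described in the text, not the authors' actual tool:

```python
# "Stop the line": while an anomaly sign is raised on a code branch,
# no card sharing that branch may advance to the release-ready stage.
# (Hypothetical sketch of the mechanism described in the text.)

raised_signs = set()  # branches currently under suspicion

def raise_sign(branch):
    raised_signs.add(branch)

def clear_sign(branch):
    raised_signs.discard(branch)

def can_release(card_branch):
    """A card may move to release-ready only if its branch is clean."""
    return card_branch not in raised_signs

raise_sign("trunk")
assert not can_release("trunk")     # line stopped for this branch
assert can_release("feature-x")     # unrelated branches still flow
clear_sign("trunk")                 # team regains confidence
assert can_release("trunk")
```

The design choice is that the sign is attached to the shared branch rather than to a single card, so a problem found in one build also stops sibling builds, as the post describes.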
As an example, a sign appears on our board when we become suspicious about the quality of a build generated by some modification to the product. This is quite different from the Continuous Integration sign (which we also use to check several aspects of the code base). A new release of the product is only inspected after a successful CI build. If a quality issue is found during inspection, the kanban card is moved to the Feedback area of the team member who first implemented the modification. Meanwhile, if any new build (even from a different team member) sharing the same code branch as the suspicious one has just been moved to the Release Ready area, the board will signal the problem, as you can see in the figures showing the Release Ready queue. The lesson here is that any process is rich in opportunities to fail. The idea is simply to catch these signs and "stop the line" until the team becomes confident enough to let the flow continue.

Signaling Kanban Movements

When you have a system where kanban movements are quite frequent, with many movements on the board every day, the ability to follow these movements becomes very important. It is also common for two or three team members to be collaborating on the same unit of work. So a mechanism is needed to let the team know about the life cycle of a card as it flows through the system. The mechanism we have been using to keep the team up to date is a system-tray notification tool (see Figure 8). Clicking on the notification window takes you directly to the details of the kanban card (Figure 9).

Figure 8: Notification of a kanban card movement

Signs of Collaboration

One of the ways to see whether your team is collaborating to get things done is to observe how they communicate with each other. Is the communication rich? Does it happen all the time, or just when they meet at the stand-up meetings?
Is there a way to look for signs of collaboration as time goes on? We get these signs from our kanban system as well. We did that by turning our kanban cards into free spaces for open conversation. Everyone is encouraged to write short notes about anything related to that work on the card (see Figure 9).

Figure 9: Kanban card detail

Every time someone attaches a note to a kanban card, the team receives another task-bar notification:

Figure 10: Notification of a new note added to a kanban card

So we use the kanban as a key element connecting the management process to the communication process. That became a huge element of leverage in our collaboration model.

Signs can point you to Technical Artifacts

In software development, management and technical activities are closely related. It is not simple to get traceability from management to code, but once you have it, your process becomes more reliable and many conveniences come to help. In the Output area of the System View you can notice that some kanban cards have a build number near them. It means we had to generate a new build of the application to deliver the associated demand. This kind of sign, combined with the type of demand, shows us how many changes we make to the software to fix bugs or to create sustaining features without creating new value. It is another sign that brings the waste to the surface, making the team conscious of the pain of technical debt. To do that, our build server was configured to update the kanban card with the build number when a new build passes through the Continuous Integration process. That kind of automation is only possible because we enforce a commit policy in our Subversion repository, where the developer has to state which kanban card is associated with the changes to the code base.
With this kind of policy we can go from the demand to any other important technical artifact, including how the code base was changed to address it (see Figure 11).

Figure 11: Traceability from the customer demand to technical artifacts

Final Words about Signs

Signs are a good compensation mechanism when you are trying to influence a system. To correct something that is not working appropriately, you first have to see a sign of it. Usually, signs of deviation from expected behaviour come very late. The kanban management technique is great at making these signs visible all the time. We are still looking for good ways to find these signs and improve our process.
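The commit policy mentioned above can be sketched as a check that rejects commit messages not referencing a kanban card. This is a hypothetical sketch (the card-number format and function name are invented for illustration; the actual Subversion hook is not shown in the post):

```python
import re

# Hypothetical sketch of the commit policy described above: a pre-commit
# check that rejects any commit message not referencing a kanban card.
CARD_PATTERN = re.compile(r"#\d+")  # assumed card format, e.g. "#482"

def check_commit_message(message):
    """Return True if the message references a kanban card number."""
    return bool(CARD_PATTERN.search(message))

assert check_commit_message("#482: fix rounding in invoice totals")
assert not check_commit_message("misc fixes")  # rejected: no card reference
```

In Subversion such a check would typically run from the repository's pre-commit hook, which can abort the commit when the check fails.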

Posted by: alisson.vale
Posted on: 2/22/2009 at 11:11 AM
Categories: Management | Projetos Ágeis
 
 
Today we live with the dilemma of finding the right boundary between craft and lean. How much of the software process should be "creation" and how much "production"? How much of it should be "artisanal" and how much "industrial"? There are two extremes in Agile today, and it is hard to know where to position yourself between them. Perhaps that is what makes the paradigm so strong conceptually. Software needs quality and excellence (craft), but it also needs management, numbers and trends (lean). How much of each is needed to compose a good software project? How do we balance them according to the situation and the scenario at hand?

There is, on the other hand, another interesting phenomenon. Many people want to use Agile, and in many scenarios that is not easy. While some try to protect the new paradigm from concepts tied to the old one, others try to adapt it to those concepts so they can join the circuit or expand their influence over the industry as a whole, sometimes to answer market niches that do not want to take the risk, or are unsure about the effects of such an abrupt change. That is why when, for example, the word CMM meets the word Agile at some point, the internet receives a flood of e-mails, posts and so on, with opinions for and against. The interesting thing is that both sides claim to defend, or to benefit from, the same principles. Once again: what is right? What is wrong? Perhaps not being sure about what is right and what is wrong is our main quality as a community.

What we see at this moment is that the fact that Agile was built on values and principles, and the fact that it is represented mainly by virtual communities, makes it more powerful than one might think. Today nobody controls Agile. None of the manifesto authors, nor even a small group of them, can control it alone. It is a movement with a life of its own. And it has been providing the pieces we need for the puzzle that is developing software. What keeps it that way is the balance generated by the existence of different approaches and solutions for the unlimited number of realities and business scenarios out there. The Agile Movement is today a Complex Adaptive System, as described by Highsmith. It operates under rules that make it exhibit "Complex Behaviours", where "Complex Behaviour = Simple Rules + Rich Relationships". In other words, the community today works by exhibiting the same behaviours expected in Agile software projects: emergence, adaptability and collaboration, all under the protection of four simple rules.

In short, what makes Agile so powerful today are the controversies, the disagreements. They keep the paradigm anchored to common sense. Neither side lets the model settle. There are two extremes, and it is the experience and study of each of us that will help us locate the ideal point between them. For now, the only certainty we have is that neither extreme is the best place to stand.

Posted by: alisson.vale
Posted on: 11/15/2008 at 11:25 AM
Categories: Projetos Ágeis
 
 
Get to know the story of the kanban system adopted at Phidelis. [More]

Posted by: alisson.vale
Posted on: 8/26/2008 at 7:47 PM
Categories: Projetos Ágeis
 
 
Assumptions are hypotheses we need to accept in advance in order to accept or understand the theories that follow from them. They are the starting point for justifying the approaches we adopt. To choose between one line of thinking and another, we first need to establish agreement with them. When comparing the Agile model with Waterfall, I believe there are four basic assumptions on which we need to take a position before committing to either approach:

- On the nature of software development activities: Waterfall assumes Taylorist production; Agile assumes collaborative creation.
- On the quality model: Waterfall assumes following specifications; Agile assumes satisfying users and customers.
- On how to organize people: Waterfall assumes work groups; Agile assumes project teams.
- On the management model: Waterfall assumes fixed-scope management; Agile assumes variable-scope management.

After all, should software be produced factory-style or created collaboratively? To have quality, should it rigidly follow pre-agreed specifications, or be able to satisfy the wishes and needs of users and customers? How should we organize people: as a work group with predefined tasks in a controlled process, or as project teams free to define and optimize their own way of working? Perhaps the worst choice, in this case, is not to make a choice. The theoretical model that will underpin your working practices follows from this choice. There is a direct connection between these assumptions, the principles that address them, and what we need to practise day to day to implement them. When there is no rigour in choosing the right set of assumptions, the adopted theoretical model weakens, the practices cancel each other out, and the risk of failure increases.

Posted by: alisson.vale
Posted on: 6/21/2008 at 12:35 PM
Categories: Projetos Ágeis
 
 
Issue 09 of MundoDotNet magazine has recently hit the newsstands. This issue includes an article in which I describe an interesting technique for managing test scenarios, implemented successfully by my colleague Paulo Cesar Fernandes here at the company. The technique consists of intercepting method execution so as to store objects built during the application's execution in an object-oriented database. These objects can then be loaded into memory at the moment they are needed for the SetUp of unit tests. Actually, the article goes a bit beyond describing the technique. A large part of the text is dedicated to explaining the roots of software testing and the various ways testing techniques can be used to increase software quality in the context of an agile project. Some topics and excerpts from the article, "A new perspective on software testing activities":

- The relationship between testing activities and quality. Some of Deming's ideas that can be applied to software development, Deming's connection to the Japanese management style that led to the Agile Movement, and the techniques this movement brought in order to reduce inspections of the software.
- Tests as opportunities to increase quality: in agile projects, testing means creating opportunities for the product to absorb quality elements permanently. "An automated test injects quality into the software."
- When automated tests can increase the quality of the process: automated acceptance tests create the conditions for improving the development process insofar as they establish an instrument of collaboration and communication between customers and developers. "The purpose of an acceptance test is to increase the quality of the communication process required by analysis and requirements-gathering activities, by means of executable specifications."
- When automated tests can increase the quality of the product: here the argument is that using TDD increases the product's internal quality insofar as it creates the conditions for the sustainable evolution of the software. "The purpose of the unit test is to influence the design of the application, allowing it to evolve without suffering the damage caused by its degradation process."

The article also offers a quick example of how a tool like Fitnesse can create executable specifications that are easy to read and to produce, as shown in the image that follows. The Agile approach offers a new perspective for addressing software quality, and I think this article will give the reader a good idea of what that means.
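A rough sketch of the interception idea in Python may help visualize it: record the objects a method builds while the application runs, so tests can later load them as ready-made fixtures. All names here are hypothetical, and an in-memory dict stands in for the object-oriented database mentioned in the article:

```python
import functools

# Rough sketch of the interception technique described above: capture the
# objects a function returns during a normal application run, so a unit
# test's SetUp can load them instead of rebuilding them. A plain dict
# stands in for the object-oriented database; all names are hypothetical.
fixture_store = {}

def capture(scenario):
    """Decorator: store the decorated function's result under a scenario key."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            fixture_store[scenario] = result
            return result
        return inner
    return wrap

@capture("invoice-with-discount")
def build_invoice(total, discount):
    return {"total": total, "discount": discount, "due": total - discount}

build_invoice(100, 15)  # a normal application run populates the store

# In a unit test's SetUp, the captured object is loaded, not rebuilt:
invoice = fixture_store["invoice-with-discount"]
assert invoice["due"] == 85
```

The original technique intercepts methods transparently (in .NET) and persists the objects; this sketch only shows the capture-then-reuse shape of the idea.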

Posted by: alisson.vale
Posted on: 6/17/2008 at 9:43 PM
Categories: Coding | Projetos Ágeis
 
 
Although "Quality has nothing to do with testing" is a great post worth a look, I think the title would be better phrased as "Quality has nothing to do with finding defects". Tests are instruments of feedback that we get from the software when we compare its results with our expectations. So when the result of a test is associated with the fulfilment of an expectation, it is indeed an important instrument of quality. The post's merit lies in demystifying testing activities as the only ones needed to guarantee software quality. What actually happens is that testing activities dramatically lose their power to add value in terms of quality when they are used as an instrument of inspection-to-find-defects. If a feature leaves the development environment without working properly, it is certainly more appropriate to stop and find out why that happened than to log the problems in a bug tracker and then carry on with tireless inspection until no more problems are found. A better option is to use testing activities as instruments of defect prevention. When you create an automated test, for example, there is no inspection; there is the formalization of acceptance criteria that, from that moment on, will live permanently attached to the product. That way, we are able to introduce quality into the product at the same pace at which we develop it. The quality of the software increments released in an Agile project depends heavily on the combined effect of applying different kinds of techniques and practices whose central aim is to prevent defects from being introduced into the product. The combined action of these elements has the power to eliminate (often completely) the need for inspections-to-find-defects.

Tests oriented towards inspecting something that, from the very start, is produced without regard for the various quality aspects relevant to software development are harmful, insofar as they create a culture in which whoever develops does not need to guarantee the quality of what they do, because there is an implicit delegation for someone else to verify it. However, inspections are not always bad. There are good inspections too. They happen when the focus is on generating feedback. When inspecting a new feature, for example, the inspector's focus need not be verification and checking. The goal of a good inspection is to give feedback, to offer a new point of view so as to find spots that could be better resolved. The same happens when we inspect code, whether by handing the code to a colleague to inspect or by doing continuous inspection with pair programming. In both cases, hunting-for-problems is secondary, while feedback is the central element. In short, bad inspection looks for defects, while good inspection generates feedback.
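As a minimal illustration of "formalizing acceptance criteria" rather than inspecting for defects after the fact, consider a business rule expressed as an automated test. The domain, rule and numbers are entirely hypothetical:

```python
# Acceptance criteria expressed as an automated test: instead of inspecting
# the product for defects after the fact, the expectation lives with the
# product from now on. (Domain, rule and numbers are hypothetical.)

def shipping_cost(order_total):
    """Business rule: orders of 100.00 or more ship for free."""
    return 0.0 if order_total >= 100.0 else 12.5

# Executable acceptance criteria, agreed with the customer:
assert shipping_cost(150.0) == 0.0    # large orders ship free
assert shipping_cost(99.99) == 12.5   # just under the threshold still pays
assert shipping_cost(100.0) == 0.0    # the threshold itself ships free
```

Nothing here is an inspection; the assertions are the agreed expectation, permanently attached to the code, which is the point the post argues.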

Posted by: alisson.vale
Posted on: 6/10/2008 at 10:37 PM
Categories: Projetos Ágeis
 
 
Documentação é importante em qualquer projeto. Aqueles que alegam que projetos ágeis não são bem documentados quase sempre caem no engano de misturar dois propósitos distintos para documentação de projeto. Um deles é o do registro puro e simples de algo que um dia pode ser útil para alguém, mas que quase sempre não é. O segundo envolve gerar documentação como resultado de um processo que mistura atividades de colaboração e comunicação e gera algo que contribui para o aumento da qualidade do produto em desenvolvimento. Os Testes de aceitação são um exemplo clássico. O cliente e o desenvolvedor utilizam esse instrumento em um processo de colaboração para encontrar os cenários que precisarão ser previstos durante a implementação da funcionalidade. Ao mesmo tempo, o teste é utilizado para que um comunique suas idéias para o outro, fazendo com que os objetivos sejam unificados, já que há duas visões focadas em diferentes níveis de abstração: a visão do negócio e a visão da implementação. Apesar desse tipo de teste ser uma excelente documentação, isso é apenas uma espécie de efeito-colateral, pois o seu real propósito reside em aumentar a qualidade do processo facilitando os processos de comunicação e colaboração. O modelo de trabalho da Toyota, que por si só já converge em vários pontos com a abordagem ágil, parece que funciona de maneira semelhante no que diz respeito à documentação. Quem já teve a oportunidade de estudar um pouco o seu processo de solução de problemas sabe como ele é rico em observação, diagnóstico e análise detalhada das questões que influenciam ou são influenciadas por um determinado problema. Mesmo assim, quem espera que a Toyota registre cada passo desse tipo de análise, gerando um grosso relatório a ser utilizado para avalisar as decisões, pode se preparar para rever seus conceitos. Pelo menos é o que diz o best-seller O Modelo Toyota, escrito por Jeffrey Liker e David Meier. 
Esse livro, que, na minha opinião, deveria estar na cabeceira de todos aqueles que estudam e/ou aplicam o modelo ágil em seus projetos de software, tem um grande número de ensinamentos dos senseis da Toyota que, ou corroboram com as idéias do movimento ágil, ou o expandem em direções inusitadas.     Figura 1: Esboço de relatório A3 adaptado ao contexto ágil Uma das partes do Manual de Aplicação que me chama a atenção já há bastante tempo é o trecho em que se fala do uso sistemático pela Toyota de relatórios montados em uma folha de papel A3, onde se conta uma história com início, meio e fim de um projeto, de solução de um problema, ou ainda histórias que apresentam marcos de evolução de um projeto. Há vários cenários em que tais relatórios são exigidos e também há toda uma técnica para esboçá-los.  A técnica possibilita a criação de um relatório capaz não só de documentar as análises e decisões tomadas em um projeto, mas também amplia a capacidade de comunicar seu conteúdo mais eficientemente, além de permitir a colaboração de todo um grupo de pessoas para produzi-lo. O fato de ele estar limitado à parte da frente de uma folha de papel A3 o torna grande o suficiente para conter as informações mais essenciais e pequeno o suficiente para evitar que ele contenha todo aquele detalhamento que afasta o leitor e atrapalha o processo de comunicação. Um relatório desses deve poder contar toda sua história em menos de 5 minutos. Há basicamente quatro tipos de histórias que podem ser contadas por meio de relatórios A3 na Toyota: História de uma proposta; História da Solução de um Problema; História da Situação de um Projeto; História de Informações; Todos eles podem ser facilmente utilizados em um qualquer projeto, Ágil ou não. Mas há dois deles que realmente podem ser muito bem aproveitados no modelo ágil: um relatório que conta a história da solução de um problema e o que conta a história da situação de um projeto. 
The first can be one more instrument available for inspecting and adapting agile projects. In fact, the report is just one element of a whole problem-solving methodology (I discuss that methodology in a two-part series I wrote for Revista Visão Ágil on improving agile projects). The second kind of story (the one describing a project's status) also caught my attention, but for a different reason: it may be a good instrument for telling the story of an iteration to stakeholders and other people who do not take part in its day-to-day work and need to get up to speed quickly on its progress. With that in mind, I created a rough sketch of what the story of an iteration told through an A3 report might look like. An enlarged image of that sketch can be obtained by clicking here. From this point on, it helps to read the rest of the post with the report open.

The first thing to keep in mind when producing a report like this is that it must tell a concise story with no detours. A project status report is laid out horizontally. The front of the page is divided into two equal halves; the back is normally not used. Within those two halves, five sections are created to hold the report's content. The size and content of each section depend on the focus of the story being told.

The first section is the Project History so far. This is where you situate the reader by describing the exact state of the project up to the moment immediately before the iteration began. What did the project look like before the iteration started? Focus on numbers and use charts to show trends. Instead of running prose, write bullet points with simple sentences. Use arrows to guide the reader through the information.
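As a rough illustration of that one-page structure, the report could be modeled as a small data type. This is only a sketch: the section names are my own paraphrase of the sections described in this post (not an official Toyota template), and the five-minute check is a crude word-count heuristic.

```python
from dataclasses import dataclass, field

@dataclass
class A3StatusReport:
    """Minimal model of the one-page iteration status report.
    Each section holds short bullet-point sentences, as recommended above."""
    project_history: list[str] = field(default_factory=list)      # state before the iteration
    iteration_goals: list[str] = field(default_factory=list)      # measurable objectives
    work_done: list[str] = field(default_factory=list)            # actions and metrics collected
    results: list[str] = field(default_factory=list)              # outcomes vs. the goals
    problems_and_actions: list[str] = field(default_factory=list) # from the retrospective

    def word_count(self) -> int:
        sections = (self.project_history, self.iteration_goals, self.work_done,
                    self.results, self.problems_and_actions)
        return sum(len(item.split()) for sec in sections for item in sec)

    def fits_five_minutes(self, words_per_minute: int = 150) -> bool:
        # Crude proxy for the "tell the whole story in under five minutes" rule.
        return self.word_count() <= 5 * words_per_minute
```

Keeping the sections as lists of short sentences mirrors the advice above: bullet points instead of running prose, and a hard size limit that forces the author to cut detail rather than accumulate it.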
The second step is to state the goals of the iteration. Optionally, you can tell the story of how those goals were set; a backlog with a cost-benefit analysis can serve that purpose. In this section, be precise about the goals. That is, instead of "Improve unit-test coverage", write "Reach 85% unit-test coverage". If that is one of your goals, you should state what the coverage was in the Project History section, and then report the result achieved in the sections that follow. Remember that the story must have a beginning, a middle, and an end.

Notice that I added the project's entire backlog to the document. I could do that because the project is small. In larger projects you can focus on placing the iteration's story in the context of a theme or a release; that narrows the scope enough to illustrate your most immediate goal.

Once the goals are clear, we can get into the details of what was implemented to achieve them. Here it is worth stating the actions or criteria used, and which metrics or other information were collected, so that the reader can see the connection between the work done and the stated goals. Finally, the last two sections describe the results achieved, the problems found, and the improvement actions proposed. The information about problems and proposed improvements can come from retrospectives and should in some way describe the problems that hindered or blocked the work.

Let's see, then, how the story of this iteration could be told in under five minutes:

"In the upper left we can see a summary of the project's progress. After 4 iterations, we can see that features have been developed at a rate of 8 to 11 points per iteration. So far, 39 points have been delivered, which represents 34% of the total.
The team's velocity, previously 8 points per iteration, now averages 9.75, an acceleration of roughly 22% compared to the start of the project. This progress chart gives us three projections for the end of the project: the most optimistic assumes the highest velocity achieved so far, the most pessimistic assumes the lowest, and a third, more realistic view takes the average velocity of the last 4 iterations. Thus, just before this iteration began, our final milestone was expected to be reached in roughly 7 to 11 weeks.

At the start of Iteration #5, a cost-benefit analysis was done to select the features that would be most valuable to implement immediately. Three stories were selected and the commitment to develop them was made. In numeric terms, we simply sustained the delivery rate of the previous iteration: 11 SPs. All three stories were implemented within the iteration's timebox. Automated acceptance, integration, and unit tests were created for all of them. There was an undesirable dip in unit-test coverage, which put the team on alert to push that number back up in the next iteration. The variations in the design metrics revealed nothing abnormal.

With the three new stories implemented, the project advanced another 10 percentage points, reaching 44% completion. The completion projections now range between 5.2 weeks (most optimistic scenario) and 7.8 weeks (most pessimistic). The iteration showed a significant increase in the percentage of time spent on activities that generate no value: this waste, which had been 17% in the previous iteration, reached 27% in this one, an increase of 10 percentage points.
The analysis done in the retrospective revealed a large amount of time invested in installing and configuring deployment infrastructure, which led us to propose a plan to automate those activities and thereby bring that level of waste back down, so that more time can go into activities that move the project forward."

This example is entirely fictional, of course. But it gives an idea of what it means to present the work of a whole iteration in under five minutes, concentrating on the points that are truly relevant to the report's audience. Naturally, the charts, numbers, and metrics can be of many kinds and present all sorts of information. What matters is keeping the focus on communication: being succinct and objective, without cluttering or overloading the report with data unrelated to what you want to present.
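The three schedule projections quoted in the story come down to simple arithmetic on the velocity history. Here is a minimal sketch, assuming one-week iterations; the velocity list and remaining point count below are illustrative assumptions, not the actual data behind the figures above:

```python
def weeks_to_finish(remaining_points: float, velocity: float,
                    weeks_per_iteration: float = 1.0) -> float:
    """Project how many weeks remain at a given delivery rate."""
    return remaining_points / velocity * weeks_per_iteration

# Points delivered in each past iteration (assumed values; average = 9.75).
velocities = [8, 10, 11, 10]
remaining = 64  # backlog points still open (assumed)

optimistic = weeks_to_finish(remaining, max(velocities))   # best rate so far
pessimistic = weeks_to_finish(remaining, min(velocities))  # worst rate so far
realistic = weeks_to_finish(remaining, sum(velocities) / len(velocities))
```

The optimistic and pessimistic figures bracket the realistic one, which is why the report can honestly present the milestone as a range rather than a single date.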

Posted by: alisson.vale
Posted on: 4/12/2008 at 10:59 PM
Categories: Agile Projects