"What we learn on this course, we don't bring back just to the company, it is knowledge for life ahead" That's what I've heard someday from one of my students during the Kanban course that I teach in Rio and other cities in Brazil together with Rodrigo Toledo, and that now is coming to the 40th edition. In the picture, "Aha-moments", moments of discovery written by students as they realize that something was learned during the class. Kanban is one of the most adaptable modern management methods that we have today. It is suitable to the biggest challenges that knowledge workers face in their offices. For software projects is a reality already. It won't take too long to be expanded for all other areas of knowledge and creative work.  In August 08th and 09th will be happening in Rio one more class of the course that is blowing the minds of managers, leaders, developers, designers and other creative workers. A whole new world of possibilities had been opened for those who attended the class. So, recommend for your friends or register yourself here:  http://www.scrumban.com.br/leankanbantraining/  

Posted by: alissonvale
Posted on: 7/29/2013 at 5:11 PM
Categories: Announcements
 
 
Last September I presented this topic at the Agile Brazil 2012 conference in São Paulo. The reception was great, and I have finally had the opportunity to translate it into English so it can reach a larger audience.

Posted by: alissonvale
Posted on: 10/29/2012 at 7:39 AM
Categories: Conferences | Management | Projetos Ágeis
 
 
While I was flying back to Brazil after attending another great Kanban Retreat event in Mayrhofen, I had the opportunity to read the latest book published by David Anderson: Lessons in Agile Management - On the Road to Kanban, which is really great, by the way, and deserves a post of its own. One of the topics particularly got my attention: the use of the "Inventory" metaphor from Lean Manufacturing in the software development context (pages 291-294). In four pages, David Anderson really got me thinking as he clarifies how tricky it is to map concepts across contexts as different as manufacturing and knowledge work. He talks about the differences between an abstraction, an analogy and a metaphor, and why this matters when we are mapping concepts and establishing language: "Lean is an abstraction rather than a metaphor."

Lean is an abstraction, and inventory regulation concerns are part of a concrete implementation of Lean in the manufacturing context. That was my insight from his text. In that context, inventory is something that should be reduced as much as possible in order to smooth flow, reduce cost and eliminate waste.

Inventory is an accounting term. It represents the entire stock of a business, including materials, components, work in process, and finished products. From a systemic perspective, it is something you accumulate, or buffer, in a system in order to guarantee supply to a downstream part of your value chain. It is key for any continuous work process not to let its operators starve for work because of a lack of raw material. To avoid that, it is common to produce more and keep the excess for later use, so downstream operators always have what they need to work on.

Besides the fact that costs are much higher when you have to deal with a large quantity of material on the factory floor, the economic perspective is, I think, just as important. In the manufacturing context, as raw materials are transformed and flow through the system, they become work in process, which can be a real source of inconvenience. Work in process usually has intrinsic value in this situation. As there is already money committed to the work in process, it becomes urgent to finish it, even if it is no longer necessary or its value diminishes in comparison to other work items.

Even when this work in process becomes finished goods, you are not free from these types of problems unless you sell them immediately. Do you want to have a large quantity of items ready to be sold but no customers to buy them? Think about the simple economic law of supply and demand. Maybe now you know why we sometimes see those huge marketing promotions selling at prices much lower than usual: it is just an excess of items with no customers to buy them.

So it turns out that, for manufacturing processes, excess inventory is a bad thing. Overproduction causes a lot of harm and systemic dysfunction. It makes the process less efficient and drives costs up, since managing all that material requires a lot of effort. Therefore, reducing inventory is key for Lean Manufacturing.

What about knowledge work? It doesn't work like that. The model is quite different. The raw material for knowledge workers is information: not documentation, requirements, designs, nor any other artifact or discrete material, just information in its abstract form.
Don't confuse information with the media it comes in. Work in process is generated when one starts applying knowledge and experience to produce value from the information provided, regardless of the format in which it arrives. The money committed to it usually comes in the form of work hours invested by a knowledgeable person. As there is no intrinsic value whatsoever in partially done work, and as this type of work perishes really quickly, the economic pressure to finish it depends much more on a perception of value than on any intrinsic value. So it can be discarded much more easily, despite all the psychological distress that this provokes. Raise your hand if you have never been involved in an expensive project abandoned with no real value or economic results delivered at the end.

Finished goods ready to be consumed but not consumed yet are, in knowledge work, just a different manifestation of work in process. If you write a book and don't publish it, this doesn't affect its intrinsic value. It can still be worth millions, or nothing, depending on someone else's perception of value. In knowledge work, the value of partial work is not intrinsic to the good, as it is in manufacturing. So the economics of work in process is different in knowledge work. We are dealing with a really different set of concepts, and we need to assume a different perspective.

The Lean abstraction doesn't require inventory as a concept in order to be applied in contexts other than manufacturing. Inventory control is really an implementation aspect of Lean Manufacturing. However, we still have the same root problem: dysfunctions caused by systemic buffers that grow as time goes on, jeopardizing the effectiveness and efficiency of the environment. We need to take care of this in our knowledge work systems, just as people do in manufacturing systems when they regulate their inventories.

Let's focus on the root problem then: what type of accumulation, despite being necessary, can be really harmful if it is not controlled over time in knowledge work environments? One could say: work items. We can't let work items in progress accumulate in the system. That's why we use kanban to limit them and to not allow new work to be started before old work has been released. Definitely true! But is it enough? Work items in progress are just a manifestation of something much broader and more impactful in knowledge work environments: assumptions.

Knowledge work requires assumptions

We need to assume a lot of things, all the time, to accomplish our goals in this type of work. It is fine to move forward with partial information, because where there is no information, we fill the gap with some kind of assumption to map the terrain. Our brain does that all the time. When you start to work on a given item, a lot of assumptions enter your system. Is it really what the customer needs? Is it going to take the time you expect? Is it going to generate rework? Does whoever is taking care of it really know what to do? And so on. Uncertainty rules here, and to deal with it, we *assume*. It turns out that the solution to uncertainty is not to cheat the system by trying to create certainty upfront. Uncertainty is intrinsic to the reality of knowledge work.
What we need to do is continuously transform the assumptions we created at the beginning to deal with that uncertainty into facts, and learn by doing it, since this learning feeds the formation of the assumptions that will come up later on. Real problems start to emerge when we don't care about the continuous replacement of assumptions by facts. When the environment is full of assumptions, we make blind decisions. We support ourselves on our perception of reality, not on reality itself. We kill an assumption and replace it with facts when we evaluate it. This evaluation produces a fact, and a discovered fact is just a manifestation of learning; the generation of learning is probably the most important aspect of a knowledge work process.

A few examples of assumptions and the related practices that start a feedback cycle:

- We are going to produce 32 points in this next iteration. (estimation)
- The code in this method is doing what it should do. (unit tests)
- This part of the code is designed according to our agreements, standards and style. (code standards and collective ownership)
- We are doing what should be done to achieve our next short-term goal. (iteration planning)

A few examples of facts and the related practices that end the feedback cycle:

- We have produced 26 points in this iteration. (iteration review)
- The code in this method was tested and is doing what it should do. (unit tests executed on continuous integration)
- This part of the code was checked and follows our standards and style. (code review, pair programming)
- We have just shared what we have been doing and we know that we are now moving in the right direction. (day-after stand-up meeting)

This list can be endless, the closer you look at the environment from that perspective. Think about what lies between assumptions and facts: time!

Shorter feedback cycles are what Lean offers

As this time increases, so does the probability of jeopardizing the effort and the decisions being made, consciously or unconsciously, on top of those assumptions. It increases risk! So, for a knowledge work environment, I propose that we focus on identifying and differentiating assumptions from facts, and then look for ways to evaluate those assumptions as quickly as possible in order to transform them into facts. The theory behind this concept is fully explained in this post: Cycles of Assumptions Evaluation.

Interestingly, we already have a really good tool to optimize this process of assumption evaluation. We just need to design short feedback cycles into our systems, or make the current ones shorter. From a system design perspective, that's what Lean systems for knowledge work are about: making feedback cycles shorter.

Assumptions are not the new "inventory" for knowledge work. That is actually not a good metaphor. We are not counting assumptions and measuring their accumulation; we are not limiting them. However, they can fill a theoretical gap in our Lean implementations for knowledge work systems, one that can open a whole field of study and opportunities to find leverage points in such systems. They play the same role that "inventory reduction" plays in manufacturing systems because, in the end, the root problems they address are very nearly the same.
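To make the smallest of those feedback cycles concrete: a unit test turns the assumption "the code in this method is doing what it should do" into a fact within seconds. A minimal sketch in Python; the function and test names are hypothetical, just for illustration:

```python
# Assumption: parse_points(" 13 ") returns the integer 13.
# Running the test evaluates the assumption and turns it into a fact
# (or into learning, if it fails).

def parse_points(raw: str) -> int:
    """Convert a story-point estimate typed by a user into an integer."""
    return int(raw.strip())

def test_parse_points_handles_surrounding_whitespace():
    assert parse_points(" 13 ") == 13

if __name__ == "__main__":
    test_parse_points_handles_surrounding_whitespace()
    print("assumption evaluated: it is now a fact")
```

The shorter the gap between writing the code and running the test, the shorter the life of the assumption.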

Posted by: alissonvale
Posted on: 6/26/2012 at 7:12 AM
 
 
Last May at the Lean SSC 2011 conference I introduced to some colleagues my thoughts on the interpretation of what I think is one of the fundamental elements of knowledge work systems: cycles of evaluation of assumptions. Alan Shalloway has recently taken this conversation forward, connecting these thoughts to other ideas related to risk and flow in this blog post. So I have decided to write about the topic in more detail, so we can explore it in terms of understanding part of what the Lean-Kanban and Agile communities do and why it matters so much.

Why we should care about assumptions

Assumptions are at the heart of all knowledge work activities. In the software development field, you can find them everywhere, from coding to portfolio management, from the smallest technical decisions to the largest product design decisions. The evaluation of an assumption ends a cycle of learning and defines the boundaries of systemic feedback loops. Several things happen when you evaluate one assumption or a set of them:

(1) Learning: The evaluation of an assumption holds a very important opportunity for learning. By comparing what you have with what you were expecting, you learn what you are capable of. You learn to separate facts from intuition. You learn how to work with small expectations and how to quickly adjust bigger expectations as the smaller ones derived from them are evaluated.

(2) Making progress towards a goal in a terrain of uncertainty: In knowledge-based environments, all you have is uncertainty. You don't know how much time a task takes. You don't know how one task is going to affect others until all the assumptions around it are validated. So progress in those environments is not defined by putting another brick in the wall, but by the recurrent evaluation of assumptions that emerges as you move from one uncertainty to another.

(3) Establishing relations of causality/correlation: This insight came from Don Reinertsen, when we were discussing the topic with other colleagues at LSSC11. As the time to evaluate your assumptions increases, the potential to rationalize the result in terms of causality or correlation is reduced. These are fundamental mental capabilities for finding opportunities to learn and improve.

(4) Reducing risk: Alan Shalloway was the first to relate this to the work of Bob Charette. By controlling the number of assumptions in a given system, you are actually shortening the range of options and probabilities of getting an unexpected result. I believe that if you design mechanisms to control the volume of non-evaluated assumptions in a given system, you create the potential to mitigate certain types of risk without much case-by-case upfront analysis (see the sketch at the end of this section).

The awareness of being surrounded by assumptions is a property of an environment tolerant of failure. As Alan Shalloway stated in the post mentioned earlier: "Sometimes we are so sure of our assumptions that when we prove them wrong, we call the action doing so an error." Errors only happen when you have been proven wrong, in conduct or judgement, by actual facts. It is a mistake to encourage a culture of assigning guilt by comparing results with expectations when all you have are assumptions. It is important to learn to separate unexpected results obtained while operating on fact-based reality from unexpected results obtained from non-evaluated assumptions.
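As a minimal sketch of the mechanism point (4) suggests, and assuming nothing about any particular tool: a ledger that records when each assumption is created and evaluated, and warns when too many are alive at once. All names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Assumption:
    description: str
    created_at: datetime = field(default_factory=datetime.now)
    evaluated_at: Optional[datetime] = None  # None means still alive

class AssumptionLedger:
    """Tracks non-evaluated assumptions and flags their accumulation."""

    def __init__(self, max_alive: int = 5):
        self.max_alive = max_alive
        self.items: list[Assumption] = []

    def create(self, description: str) -> Assumption:
        a = Assumption(description)
        self.items.append(a)
        if len(self.alive()) > self.max_alive:
            print(f"warning: {len(self.alive())} assumptions alive; risk is accumulating")
        return a

    def evaluate(self, a: Assumption) -> None:
        a.evaluated_at = datetime.now()

    def alive(self) -> list[Assumption]:
        return [a for a in self.items if a.evaluated_at is None]

ledger = AssumptionLedger(max_alive=2)
a1 = ledger.create("the customer wants feature X")
a2 = ledger.create("it will take two days")
ledger.evaluate(a1)   # a demo showed the customer wants it: now a fact
```

Nothing here is specific to software; the point is only that making the count and the age of live assumptions explicit lets a team regulate them the way a factory regulates inventory.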
Time is the dominant parameter

Acknowledging the existence of assumptions in our knowledge work environments is the first step. However, just knowing that this is relevant is not enough. We need to know "how" to use this awareness in order to design better work systems.

My breakthrough about the "how" came when I first read about John Boyd's work a few months ago. His theory is centered on the OODA loop concept (Observe, Orient, Decide, Act). According to Boyd, every active entity, be it an individual, a group of individuals or an organization, uses the OODA loop to react to events. It first observes the environment or situation it is involved in, then orients by analyzing the data against its own background of knowledge and experience, then takes a decision, and finally acts on that information.

However, instead of concentrating on the quality of each part of the loop, which cycles like PDCA and LAMBDA seem to describe well, Boyd came up with something really interesting. For Boyd, what matters most is not how good you are at observing, orienting, deciding or acting, but how fast you go through these stages in order to start a new cycle. Although most of his work is oriented towards environments dominated by competition, I think it also applies to collaborative environments, where the fundamental principle is adaptation and responsiveness to new information obtained from an environment in constant change.

As Boyd's theory suggests, the speed of iteration is more important than being right when iterating. If the evaluation of assumptions marks the boundary of feedback loops in knowledge work environments, maybe the same rule applies to how we iterate in our activities. In other words, being fast at evaluating our assumptions is better than trying to force assumptions to be right in the end. So, to be effective in knowledge work environments, we need to constantly try to minimize the time between the creation or emergence of an assumption and its evaluation.

To represent that concept, I've come up with an expression that captures the whole idea in a few symbols:

? - min(t) -> !

where:

- ? represents the creation of an assumption
- ! represents the evaluation of that assumption
- min(t) represents the focus on minimizing the time between the creation and the evaluation of the assumption

How programmers are minimizing the time-life of assumptions

If you were a programmer in the mid-1980s, you probably remember writing programs on punched cards. Look at how Wikipedia describes a punched card:

"A punched card is a flexible write-once medium that encodes, most commonly, 80 characters of data. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard. A typing error generally necessitated repunching an entire card. A single character typo could be corrected by duplicating the card up to the error column, typing the correct character and then duplicating the rest of the card. (...) These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers."

At that time, programmers had to write their code on these cards and then submit them to an operator who ran the program on IBM mainframe computers. It could take days to get the response from the execution of those programs! And after days of waiting, the result could be something like: Error on line 32.
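In min(t) terms, the punched-card workflow kept t on the scale of days. As a tiny worked sketch of the quantity the expression cares about (the dates and names are hypothetical):

```python
from datetime import datetime, timedelta

def assumption_lifetime(created: datetime, evaluated: datetime) -> timedelta:
    """t in '? - min(t) -> !': how long the assumption lived unvalidated."""
    return evaluated - created

# Punched cards: code written Monday morning, "Error on line 32" on Thursday.
t_cards = assumption_lifetime(datetime(1975, 6, 2, 9, 0), datetime(1975, 6, 5, 9, 0))

# A modern background test runner: feedback within seconds of saving.
t_tdd = assumption_lifetime(datetime(2011, 6, 2, 9, 0, 0), datetime(2011, 6, 2, 9, 0, 5))

print(t_cards, t_tdd)   # 3 days versus 5 seconds: the same loop, minimized
```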
I started my own career as a software developer in the late 80's. Before being a real programmer, I used to type in several pages of code printed in Brazilian magazines, just to run it and realize, probably because of my own typing mistakes, that the program didn't work. It could take several hours to copy a whole program line by line, and the result of running it could be just a blank screen or a small square blinking somewhere on the screen.

We all know that the practice of writing many lines of code before you can evaluate them is not as good as evaluating a few lines, or even each line, as you type. We also know that writing a really simple test that evaluates a single behavior of your code is considered a better practice than writing complex tests with complicated scenarios and multiple inputs and outputs. The point is not only simplicity versus complexity, but also the speed at which you can evaluate the whole set of assumptions you carry when you write those lines.

Nowadays, compilers and interpreters are getting quicker at showing typos and syntax mistakes. On some software development platforms, compilation or interpretation of the code occurs in the background, and I get instantaneous feedback on my errors. If I'm using techniques like Test-Driven Development (TDD) or Behavior-Driven Development (BDD), I also get instantaneous feedback not only when I mistype something, but also on whether the system is doing what it should. Ruby developers, for instance, have shrunk this feedback loop even further by having their tests automatically executed in the background on every saved change to their source files. Developers have been applying intensely assumption-oriented semantics to specify behavior, so you see words like "should" dominating their vocabulary, which is a clear indication of awareness of the assumption inherent in each piece of code.

From punched cards and long listings of non-validated code to real-time validation of syntax and purpose, what all of these evolutionary leaps have in common is exponential growth in our ability to reduce the time-life of assumptions in the form of non-validated lines of code. Every line of code carries its own assumptions: that it is written in the right syntax, that it is the right instruction, that it is in the right place, and that it is the best way to do what should be done at that point.
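A toy version of that save-triggered feedback loop, sketched in Python; the watched directory and the test command are assumptions for illustration, not a prescription of any particular tool:

```python
import subprocess
import time
from pathlib import Path

# Rerun the tests whenever a source file is saved, in the spirit of the
# background test runners popularized by Ruby developers. Adjust the
# hypothetical paths and test command for a real project.

WATCHED = Path("src")

def snapshot() -> dict:
    """Map each source file to its last-modified time."""
    return {p: p.stat().st_mtime for p in WATCHED.rglob("*.py")}

def run_tests() -> None:
    # Any test runner works; the point is the zero-effort trigger.
    subprocess.run(["python", "-m", "pytest", "-q"])

if __name__ == "__main__":
    last = snapshot()
    run_tests()
    while True:
        time.sleep(1)
        current = snapshot()
        if current != last:          # a file was saved: assumptions changed
            last = current
            run_tests()              # evaluate them immediately
```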
Peer Review

Programmers have to deal with several challenges when working together in teams. Style and syntax standards, code design, usability guidelines, test practices and other rules have to be aligned among all developers on the same team. There are so many details and so many different situations when you are coding that just writing rules down and expecting everyone to follow them is not a good option. When software development becomes a repetitive activity, quality goes down. That happens because code is a complex artifact: you need to embrace its inherent variability to prevent the uncontrolled expansion of that complexity, and documented "standards" don't help with that variability.

It is common to use a review cycle to help developers align themselves around a common structured model. When a feature is completed by one developer, another developer reviews what was done. The reviewer is supposed to point out what doesn't fit the team's way of structuring work, what could be better and what could be removed because it is not necessary. As time goes on, the expectation is the formation of a shared mental model that everyone follows by agreement, not by rules in a document.

The problem is that there are several ways to do this. One possible way involves one developer handing off the implemented feature to another. That is not a good solution in most cases. Besides the usual issues with handoffs, whoever receives the task will probably enqueue it until they finish their current task. As there is usually no visibility and no limit controlling what is happening, the volume of work waiting for review can grow very quickly.

Another way to do code reviews is to pull someone in to review the code together with the original developer as soon as the feature is implemented. This is a much better solution, since it eliminates the handoff, prevents the formation of queues and promotes collaboration. However, it still carries downsides: interruption, context switching, the rush to finish the work, and the reviewer's lack of involvement from the beginning.

We could keep devising solutions to the review problem, but developers have already discovered a much more efficient way to handle it: pair programming. In this model, two developers work together the whole time, until the feature is finished. Every action or decision is immediately reviewed by the peer. There is no handoff, no context switching, no rush, no lack of involvement anymore.

However, pair programming still leaves space for some dysfunctions, since the feature absorbs the mental model of only two people on the team. A technique called promiscuous pairing addresses this problem by having developers swap pairs at a certain frequency, so all of them work a little on all features.

Just as in the coding examples, we have here the same pattern of evolutionary leaps. When a developer starts coding, assumptions about the code are created. Is it following the style and rules used by the team? Is it doing what it was supposed to do? Are there design issues or flaws that should be solved before finishing? These and other assumptions live in the system until they are evaluated by another team member. The review process is just another way to evaluate these assumptions, and again, time is what matters. In the first review model, the handoff stretches this time considerably. In the second model, where a developer pulls another in to review, the evaluation time is reduced to the time it takes to develop the feature, which is better. The pair programming model brings this time close to zero, but it still leaves some assumptions in the system. With promiscuous pairing, the evaluation embraces more assumptions while the time remains minimal.

How the management system is affected by assumptions

The accumulation of assumptions is probably one of the biggest sources of dysfunction in value-adding activities in knowledge work environments, and it is no different when we analyze management activities.

The Agile Manifesto arguably marked the emergence of a new paradigm in the software development world. In terms of management, the Agile movement, and after it the Lean-Kanban movement in software development, both promote a different approach. The predictable world of command and control was left behind, opening space for a new understanding focused on people over processes and collaboration over contract negotiation.
Managers are now supposed to be systems thinkers instead of resource controllers. And while this transformation was taking place, dramatic changes occurred in terms of assumption evaluation.

When someone is doing a task for others, some assumptions are implicit throughout the whole process. Is the right direction being followed? Is it being done correctly? Is it the right thing to do at this time? We hear the same story many times: weeks or months after a manager assigns tasks to developers, usually in large batches, they discover that nothing works as expected, that the code has no quality, and the developers take the blame and leave the project.

Most managers would say this happens for lack of control, and that managers should command and control more tightly. But should they? Should they try to impose predictability on a naturally unpredictable environment? I don't think that is the answer. In those types of projects, as time goes on, new assumptions are constantly generated, because you are essentially dealing with partial information all the time. As the project moves forward, you start to collect pieces of information that were missing when you first started. Evaluating assumptions helps you confirm or refute decisions that were originally made in the absence of that information. This applies not only to developers writing code, but to management practices as well.

Let's analyze some management practices from this perspective. Nowadays, more and more development teams hold 15-minute stand-up meetings on a daily basis. They do it to improve communication and alignment towards a shared goal. What they probably don't know is that in every one of these meetings they are evaluating some important assumptions: that everybody on the team has been working in the right direction, that everybody will keep working in the right direction tomorrow, and that the team knows about the current problems, so members can help each other overcome them. Do you agree that if we evaluate these assumptions every week instead of every day, we are just allowing critical assumptions to accumulate and jeopardize our short-term goals? Those meetings are usually kept really simple, short and frequent, because the focus is not on doing the meeting right, but on doing it frequently enough that those assumptions don't live unvalidated for a long period of time.

Why short iterations?

Scrum practitioners have also been applying the concept of short assumption-evaluation cycles to estimation. They assign "points" (story points) to units of work; the number of points grows as the complexity or uncertainty of the expected work grows. The project is broken into timeboxed iterations. When an iteration starts, the team defines how many points each unit of work is worth and the total number of points it can handle in the whole iteration. What a lot of Scrum teams don't realize is that this is just an assumption. The total amount of estimated points is an assumption about future throughput, not a commitment of any kind. The number of points for each unit of work is likewise just an assumption about the understanding of the problem to be solved, not a commitment to fit the work to its estimation.

Now you can probably understand why short iterations are frequently better and why teams working this way are considered "more mature" than teams working with longer iterations. If you really want to learn about your capacity, the time it takes to evaluate the created assumption is crucial. Taking more time to do this evaluation increases the number of live assumptions in your system, making your knowledge of the project's capacity less precise. The sketch below illustrates the difference.
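A small worked illustration of the idea, with entirely made-up numbers: the same eight weeks of work evaluated as four two-week iterations versus one eight-week iteration.

```python
# Made-up throughput data: points actually delivered in each of 8 weeks.
weekly_points = [5, 3, 6, 4, 2, 5, 4, 3]

def evaluations(iteration_weeks: int) -> list[int]:
    """Points observed at the end of each iteration of the given length."""
    return [
        sum(weekly_points[i:i + iteration_weeks])
        for i in range(0, len(weekly_points), iteration_weeks)
    ]

# Two-week iterations: the throughput assumption is evaluated four times,
# so a wrong forecast can be corrected as early as week 2.
print(evaluations(2))   # [8, 10, 7, 7]

# One eight-week iteration: a single evaluation at the very end; the
# assumption "we will deliver 32 points" lives unvalidated for 8 weeks.
print(evaluations(8))   # [32]
```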
The Sprint Review is another Scrum ceremony that is useful for controlling assumptions in software projects. As the team produces working software, non-evaluated assumptions about that work accumulate. Is each feature what the customer was expecting? As development starts, those assumptions come to life for every feature, and only when the feature is presented to the customer can you evaluate them. The Sprint Review marks the point where that evaluation happens. As the cadence of this ceremony is aligned with the length of an iteration, shorter iterations will reduce the time-life of those assumptions in projects run in a timeboxed fashion.

Continuous Deployment

After a feature is reviewed, another assumption appears: will the new feature need any adjustment to work well? The implemented feature needs to be deployed so the customer can actually use it. In order to publish a new release of a product, it is common to wait for a volume of new features sufficient to form a cohesive business operation. The typical scale of this wait is months, which is quite a large timeframe to keep non-validated assumptions alive.

The side effect of this delay is known as "release stabilization". Feedback from users arrives in such large quantities right after the release is published that the team needs to stop developing new features for weeks until all those requests are addressed. In other cases, it is not the volume of requests that hurts the team, but the lack of any request: some features are released and nobody uses them. The cost of the complexity added to the code is badly underestimated when this happens, and indeed it happens a lot.

Some web companies with millions of users are showing a path out of this problem. The time to reach the user is minimized by a model in which new features are progressively deployed to clusters of users in cycles of evaluation. A new feature is first deployed to a small group of users and its use is evaluated. If people don't use it, the feature is simply killed. If, on the other hand, the new feature generates good feedback and traction, it is deployed to the next group of users. Additionally, the first group will point out flaws or misbehaviors in the product, which are corrected before the next group receives the feature. New analysis is made, feedback is collected, and the cycle goes on until all users have the feature.

Again, the main difference between the two models is that in the second one you recognize that what you did is based on assumptions. The users are the ones who will evaluate them, not the Product Owner or any other proxy role in your project. What makes you effective is how quickly you can get to the customer to make this evaluation.
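A minimal sketch of that progressive-rollout cycle; the names, cohort sizes and adoption threshold are all hypothetical, and the analytics call is simulated here rather than wired to a real product:

```python
import random

# Each cohort of users evaluates the assumption "this feature is wanted"
# before the next, larger cohort is exposed to it.

COHORTS = [0.01, 0.05, 0.25, 1.00]   # fraction of the user base exposed
MIN_ADOPTION = 0.10                  # usage rate below which we kill the feature

def measure_adoption(fraction: float) -> float:
    """Stand-in for real product analytics; here, a simulated usage rate."""
    return random.uniform(0.0, 0.4)

def rollout(feature: str) -> bool:
    for fraction in COHORTS:
        print(f"deploying {feature} to {fraction:.0%} of users")
        adoption = measure_adoption(fraction)
        if adoption < MIN_ADOPTION:
            print(f"adoption {adoption:.0%}: assumption refuted, killing feature")
            return False
        print(f"adoption {adoption:.0%}: fix reported issues, widen the cohort")
    return True   # assumption confirmed, cohort by cohort

if __name__ == "__main__":
    rollout("hypothetical-new-search")
```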
WIP Constraints

The emergence of the Kanban change management method brought new tools for dealing with assumptions. Flow, visibility and WIP constraints are the essence of this approach, and those elements contribute enormously to reducing the time-life of assumptions.

Let's go back to the customer review problem that Scrum teams address with the Sprint Review ceremony. We already know that teams working in short timeboxed iterations are better at controlling the time-life of assumptions than teams working in longer iterations. But we can do better than that. What would happen if we built into our work system the ability to evaluate those assumptions as soon as the minimal conditions to do so are met, instead of at a pre-scheduled time?

With a WIP constraint you can control the accumulation of existing assumptions in the feature review process: the PO gets involved as soon as this accumulation reaches the agreed level. You can argue that more pressure on this cadence is not possible because of PO availability or for some other reason. Fair enough; in that case you can't reduce the volume of assumptions in your system, because you are subordinated to a constraint you can't remove. But don't lose the opportunity to make your system better at controlling assumptions just because you want to stick to a method prescription. Scrum is a good set of practices, but it is not the best set of practices, simply because, as the Cynefin framework suggests, in complex environments there is no such thing as a best practice; there are only emergent practices. So there is no reason not to design your process in a more efficient way.

Handoffs

WIP constraints have the potential to control the time-life of assumptions in several ways. A particularly useful scenario is when you are dealing with handoffs. Despite being a necessary evil in some cases, handoffs should be avoided most of the time. When you continuously transfer work between people, teams or organizational units, you start very critical assumptions: Has the work arrived in good condition? Did the receiver get what they were expecting? Will the person who has to respond do so in the expected timeframe? Will rework be unnecessary? Is enough information being passed along with the work?

WIP constraints can minimize the impact of handoffs by forcing these assumptions to be evaluated before new ones are created: only when everything is fine to move on are you allowed to start new work. This practice has the potential to transform any knowledge work environment because, among other reasons, you are controlling the number of assumptions living in your work system at any given time. You do that by forcing their evaluation in a dramatically shorter cycle. The sketch below shows the shape of such a pull policy.
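A toy sketch of a WIP-limited review column; the column name and the limit are hypothetical, chosen only to show the mechanism:

```python
class ReviewColumn:
    """A single board column whose WIP limit forces assumption evaluation."""

    def __init__(self, wip_limit: int = 3):
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def can_pull(self) -> bool:
        # Starting new work is only allowed while the column has slack.
        return len(self.items) < self.wip_limit

    def pull(self, item: str) -> None:
        if not self.can_pull():
            # The limit forces the team to evaluate the assumptions
            # sitting in review before creating new ones upstream.
            raise RuntimeError("WIP limit reached: finish a review first")
        self.items.append(item)

    def finish(self, item: str) -> None:
        self.items.remove(item)   # assumptions about this item were evaluated

review = ReviewColumn(wip_limit=2)
review.pull("feature-a")
review.pull("feature-b")
print(review.can_pull())   # False: the system blocks new work until a review ends
```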
Visibility

Visibility is another Kanban element with a systemic effect on assumptions. One fundamental aspect of a work model is how much time people need to react to the new information that arrives every day. When the work model is visible, people on the team start to share a mental map on which the conditions of, and the information about, each important piece of work can be signaled.

In knowledge work, it is quite common to see important information hidden in e-mail inboxes, phone calls or memos. If everyone's work is projected onto a single map, you move the assignment model from a per-individual basis to a system basis. With a single map, people have a place from which to pull work based on explicit policies, and a place to discuss strategies for handling each day's challenges.

What visibility does is reveal one of the most important assumptions in knowledge work: that the system is working fine. When aligned with the other Kanban principles, visibility empowers people to discuss a better distribution of effort based on availability and importance, instead of mere familiarity or personal assignment. They can anticipate problems and swarm on them as soon as they emerge. They can work as a real team.

Customer Assumptions

As mentioned at the beginning of this text, you can observe a software development work system in terms of how it deals with the accumulation of assumptions over time. This can be done by observing how people deal with their operational tasks, how managers manage, and how team practices are organized to guarantee the frequent evaluation of assumptions. But you can go further.

The recent Lean Startup movement is teaching us a valuable lesson. That community is learning how to minimize the risk of building the wrong product by evaluating assumptions about what customers really want during the development process. They use concepts such as Customer Development, the Business Model Canvas and Minimum Viable Products to give people an effective method for doing it.

The most common form of product development in the software field involves generating a backlog of features and then managing progress by comparing the planned features with the features already developed. The problem with this approach is that it carries a large quantity of hidden assumptions about what the customer really needs. A product can take years to develop; after all that time, you may discover that nobody wants to use it, basically because your focus remained on meeting scope, budget and schedule.

The Lean Startup community is learning to reduce the cycle of discovering customer needs to a minimum of time and effort. People now launch product releases in really short cycles. They reach the customer even without a real product developed. They do this because they know that even the most brilliant idea is formed by a set of assumptions, and those assumptions need to be continuously evaluated before a major investment is made in the wrong product.

Basically, what changes is the way you make progress in your product development initiative. In this model, instead of moving from one feature to another, you move forward by evaluating one assumption after another. When you don't get a positive response to your assumption, you pivot. The idea of pivoting makes the approach really strong: when you pivot, you use the new information you have obtained to change the direction of the product, making it compatible with the customer's response. This is quite counter-intuitive, because deviating from the original intention actually makes the product stronger.

Here the same concept applies: it is the timing and frequency of the evaluation that matter, not how perfectly you do it.

Trade-off

There is a clear trade-off in finding the optimal assumption-evaluation time for each feedback cycle. Value-adding activities seem to offer room to approach near-zero time, depending on the available tools and techniques. Coordination activities, however, like meetings, handoffs and reviews, carry a transaction cost that makes the constant reduction of time less useful.

As an example, stand-up meetings are a coordination activity that can be really effective on a weekly basis, a daily basis or even twice a day, depending on the context. A team of managers can hold them with project team leaders on a weekly basis; more than that will not generate any value, because not enough assumptions accumulate in that period to be evaluated.
For a software development team, twice a day is too much, while a maintenance team living through a period of crisis can easily do it twice a day.

In all those cases, however, there is an optimal limit to reducing the time between assumption evaluations, and the transaction cost of the activity helps us define that limit. When you reach it, a good way to go further is to stop thinking about how to reduce the time and start thinking about how to replace the practice entirely: you act differently while keeping the purpose of the practice within its feedback cycle. The evolution from developer reviews to pair programming is a good example of this type of change. The purpose was sustained, but the means were replaced.

Takeaways

The evaluation of assumptions is a thinking tool. It can be used to analyze a system from a new perspective, and it is potentially useful for most knowledge workers, including developers, managers, product designers and other IT specialists. Each one, in their own problem space, can use this tool not only to make better decisions, but also to improve their current know-how, designing the process to be more responsive and self-regulated.

If you are somehow involved in a Lean, Kanban or Agile initiative, stop for a while and think about the feedback loops you have in your environment. When will they close? What assumptions are you carrying at the moment? When will they be evaluated? What are the possible risks of letting non-validated assumptions accumulate in the system as time goes on?

Processes don't evaluate assumptions; people do. Processes have feedback loops, but it is people who close those loops. So when the Lean, Kanban or Agile communities tell you that "it is all about the people", pay attention. In the end, it is all about how your culture empowers them to make good decisions, whatever their level of influence.

Posted by: alissonvale
Posted on: 6/30/2011 at 3:18 AM
Categories: Management | Projetos Ágeis
 
 
Lean has a lot to teach Agile practitioners, and Agile has a lot to contribute to anyone who wants to apply Lean. Take a look at this enlightening post, which makes the relation clearer: http://dnicolet1.tripod.com/agile/index.blog?entry_id=1874091

Posted by: alisson.vale
Posted on: 1/16/2009 at 11:06 PM
 
 
Today we live with the dilemma of finding the right boundary between craft and lean: how much of the software process should be "creation" and how much should be "production"; how much of it should be "artisanal" and how much "industrial". There are two extremes in Agile today, and it is hard to know where to position yourself between them. Maybe that is exactly what makes the paradigm so strong conceptually. Software needs quality and excellence (craft), but it also needs management, numbers and trends (lean). How much of each is needed to compose a good software project? How do you balance them according to the situation and the scenario you operate in?

On the other hand, there is another interesting phenomenon. Many people want to use Agile, and in many scenarios that is not easy. While part of the community tries to protect the new paradigm from concepts tied to the old one, others try to adapt it to those concepts so they can enter the circuit or expand their influence over the industry as a whole, sometimes to answer market niches that don't want to take the risk, or aren't sure about the effects, of such an abrupt change. That is why, whenever the word CMM meets the word Agile, the internet receives a flood of e-mails, posts and so on, with opinions for and against. The interesting thing is that both those in favor and those against claim to defend, or to benefit from, the same principles. Once again: what is right? What is wrong? Maybe not being sure about what is right and what is wrong is our main quality as a community.

What we see at this moment is that the fact that Agile was built on values and principles, and the fact that it is represented mainly by virtual communities, makes it more powerful than one might think. Today nobody controls Agile. None of the authors of the manifesto, nor even a small group of them, can control it in isolation. It is a movement with a life of its own, and it has been providing the pieces we need for the puzzle that is developing software. What keeps it that way is the balance generated by the existence of different approaches and solutions for the unlimited number of realities and business scenarios out there. The Agile movement is today a complex adaptive system, as described by Highsmith. It operates under rules that make it exhibit "complex behaviors", where "Complex Behaviour = Simple Rules + Rich Relationships". In other words, the community today works by exhibiting the same behaviors expected in Agile software projects: emergence, adaptability and collaboration, all under the protection of four simple rules.

In short, what makes Agile so powerful today are the controversies and the disagreements. They keep the paradigm tied to common sense. Neither side lets the model settle. There are two extremes, and it is the experience and study of each of us that will help us locate the ideal point between them. At this moment, the only certainty we have is that neither extreme is the best place to stand.

Posted by: alisson.vale
Posted on: 11/15/2008 at 11:25 AM
Categories: Projetos Ágeis
 
 
Presuppositions are hypotheses we need to accept in advance so that we can accept or understand the theories that follow from them. They are the starting point for justifying the approaches we adopt; to choose between one line of thinking and another, we first need to establish agreement with them. When comparing the Agile model with Waterfall, I believe there are four basic presuppositions on which we need to take a position before committing to either approach. They are:

Presupposition | Waterfall | Agile
On the nature of software development activities | Taylorist production | Collaborative creation
On the quality model | Following specifications | Satisfying users and customers
On how people are organized | Work groups | Project teams
On the management model | Fixed-scope management | Variable-scope management

After all, should software be produced in a factory-like fashion or created collaboratively? To have quality, should it rigidly follow pre-agreed specifications, or should it be able to satisfy the wishes and needs of users and customers? How should we organize people: as a work group with pre-defined tasks in a controlled process, or as project teams with the freedom to define and optimize their own way of working?

Perhaps the worst choice, in this case, is not to make a choice. The theoretical model that will underpin your working practices follows from this choice. There is a direct connection between these presuppositions, the principles that address them, and what we need to practice day-to-day to implement them. When there is no rigor in choosing the appropriate set of presuppositions, the adopted theoretical model is weakened, the practices cancel each other out, and the risk of failure increases.

Posted by: alisson.vale
Posted on: 6/21/2008 at 12:35 PM
Categories: Projetos Ágeis
 
 
Documentation is important in any project. Those who claim that agile projects are poorly documented almost always fall into the trap of mixing up two distinct purposes of project documentation. One is the pure and simple recording of something that may one day be useful to someone, but almost never is. The other involves producing documentation as the result of a process that combines collaboration and communication activities and generates something that contributes to the quality of the product under development.

Acceptance tests are a classic example. Customer and developer use them, in a collaborative process, to find the scenarios that will need to be covered during the implementation of a feature. At the same time, the test is the instrument through which each communicates their ideas to the other, unifying their goals, since there are two perspectives focused on different levels of abstraction: the business view and the implementation view. Although this kind of test makes excellent documentation, that is just a sort of side effect; its real purpose is to increase the quality of the process by easing communication and collaboration.

Toyota's way of working, which by itself already converges with the agile approach on several points, seems to handle documentation in a similar way. Anyone who has had the chance to study its problem-solving process knows how rich it is in observation, diagnosis and detailed analysis of the issues that influence, or are influenced by, a given problem. Even so, anyone expecting Toyota to record every step of this analysis in a thick report used to endorse decisions should prepare to revise their assumptions. At least that is what the best-seller The Toyota Way Fieldbook, written by Jeffrey Liker and David Meier, says. That book, which in my opinion should be on the bedside table of everyone who studies and/or applies the agile model in software projects, carries a great number of lessons from Toyota's senseis that either corroborate the ideas of the agile movement or expand it in unexpected directions.

Figure 1: Sketch of an A3 report adapted to the agile context

One part of the Fieldbook that has caught my attention for quite some time is the passage about Toyota's systematic use of reports laid out on a single sheet of A3 paper, telling a story with a beginning, a middle and an end: the story of a project, of the solution to a problem, or of the milestones in a project's evolution. There are several scenarios in which such reports are required, and there is a whole technique for sketching them. The technique makes it possible to create a report that not only documents the analyses and decisions made in a project, but also communicates its content more efficiently and allows a whole group of people to collaborate in producing it. The fact that it is limited to the front of a sheet of A3 paper makes it big enough to hold the most essential information and small enough to avoid the level of detail that drives the reader away and hampers communication. A report like this should be able to tell its whole story in less than 5 minutes.
There are basically four types of stories that can be told through A3 reports at Toyota: proposal stories, problem-solving stories, project status stories, and information stories. All of them can easily be used in any project, Agile or not. But two of them can be put to especially good use in the agile model: the report that tells the story of the solution to a problem, and the one that tells the story of a project's status. The first can be one more instrument available for inspecting and adapting agile projects. The report is actually just one element of a whole problem-solving methodology (I cover a bit of that methodology in a two-part series of articles I wrote for Revista Visão Ágil about improving agile projects). The second type of story, the one describing a project's status, also caught my attention, but for a different reason: it may be a good instrument for telling the story of an iteration to stakeholders or other people who don't take part in its day-to-day life and need to be quickly informed of its progress.

With that in mind, I created a simple sketch of what the story of an iteration told through an A3 report could look like. An enlarged image of this sketch can be obtained by clicking here. From this point on, it is worth reading the rest of the post with the report open, to make it easier to follow.

The first thing to keep in mind when producing such a report is that it must tell a concise story without detours. A project status report is laid out horizontally. The front of the page is divided into two equal parts; the back is normally not used. Within those two parts, 5 sections are created to receive the report's content. The size of each section, and its content, will depend on the focus of the story being told.

The first section is the project history so far. This is where you situate the reader by describing the exact state of the project up to the moment immediately before the iteration started. How was the project doing before the iteration began? Focus on numbers and use charts to show trends. Instead of running prose, create bullet points with simple sentences. Use arrows to guide the reader through the information.

The second step is to state the goals of the iteration. You can, optionally, tell the story of how those goals were established; a backlog with a cost-benefit analysis can serve this purpose. In this section, be precise about the goals. Instead of "Improve unit test coverage", write "Reach 85% unit test coverage". If that is one of your goals, you should state what the coverage number was in the project history section, and later report the result achieved in the following sections. Remember that the story must have a beginning, a middle and an end.

Notice that I added the whole project backlog to the document. I could do that because the project is small. In larger projects, you can focus on placing the story of the iteration in the context of a theme or a release, so the scope becomes small enough to illustrate your most immediate goal.
Once the goals are clear, we can get into the details of what was implemented to achieve them. Here it is worth stating the actions or criteria used and which metrics or information were collected, so that we can see the relation between the work performed and the stipulated goals. Finally, the last two sections describe the results achieved, the problems found and the proposed improvement actions. The information about problems and proposed improvements can come from retrospectives and should somehow present elements describing the problems that either hindered the work or prevented it.

Let's see, then, how the story of this iteration could be told in less than 5 minutes:

"In the upper left we can see a summary of the project's progress. After 4 iterations, features have been developed at a rate of 8 to 11 points per iteration. So far, 39 points have been developed, which represents 34% of the total. The team's velocity, which used to be 8 points per iteration, now averages 9.75, indicating a 17% acceleration compared to the beginning of the project. This progress picture gives us three possible projections for the end of the project: the most optimistic takes into account the highest velocity reached so far, the most pessimistic takes the lowest, and a third, more realistic view considers the average velocity of the last 4 iterations. Thus, at the moment just before this iteration started, our final milestone was expected to be reached in 7 to 11 weeks.

At the start of Iteration #5, a cost-benefit analysis was made to select the features with the greatest relevance for immediate implementation. Three stories were selected and the commitment to develop them was established. In numeric terms, we simply sustained the delivery rate achieved in the previous iteration: 11 SPs. The three stories were implemented within the iteration timebox. Automated acceptance, integration and unit tests were created for all of them. There was an undesirable downward variation in unit test coverage, which put the team on alert to work on raising that number again in the next iteration. The variations in the design metrics revealed no abnormality.

With the three new stories implemented, the project advanced another 10 percentage points, reaching 44% completion. The completion projections now range between 5.2 weeks (most optimistic scenario) and 7.8 weeks (most pessimistic). The iteration showed a significant increase in the percentage of time allocated to activities that generate no value: this waste, which had been 17% in the previous iteration, reached 27% in this one, a nominal increase of 10 percentage points. The analysis done in the retrospective revealed a large amount of time invested in installing and configuring deployment infrastructure, which led us to propose a plan to automate those activities and thereby bring this waste back down, so that more time can be allocated to activities that generate progress in the project."

This example is entirely fictitious, of course. But it gives an idea of what it means to be able to present the work of a whole iteration in less than 5 minutes, concentrating on the points that are essentially relevant to the report's audience.
Obviously, the charts, numbers and metrics can be of many different kinds and can present many different pieces of information. The important thing is to keep the focus on communication, taking care to be succinct and objective, without polluting or overloading the report with data unconnected to what you want to present.

Posted by: alisson.vale
Posted on: 4/12/2008 at 10:59 PM
Categories: Projetos Ágeis
 
 
The 3rd issue of Revista Visão Ágil has recently been published, featuring an article of mine entitled "Aperfeiçoamento de Projetos Ágeis - Uma Visão Sistêmica" ("Improving Agile Projects - A Systemic View"). The article was split into two parts. In this first part, I talk a bit about the theory surrounding the continuous improvement process that takes hold in an agile project. In the second part, I will talk about forms and models of improvement for use in practice. Some excerpts from the article that I find important for discussions on the subject:

- On Jim Highsmith's Speculate-Collaborate-Learn cycle in the context of self-improvement: "In the context of continuous improvement, the proper perception of this understanding revolves around knowing that, to have a process that is better today than yesterday, it is necessary to (1) speculate about something that needs to be improved, (2) collaborate so that such improvement is achieved, and (3) learn from and incorporate the improvement gained."

- At bottom: "...the continuous improvement process revolves around, among other things, seeking better ways to (1) increase the amount of time the team spends on activities that generate value for the customer through working software; and (2) increase the quality of the time the team spends on activities meant to guarantee the safety, functional stability, understandability and maintainability of the software being developed."

- PDCA and its systemic pattern: "In agile projects, it is the systemic pattern of adaptation and improvement that lets the team control the process, preventing the process from controlling the team. Understanding the PDCA cycle and identifying how it operates within the project is fundamental, because it establishes this systemic pattern whose result is the incorporation of small elements of control over the project's complexity. It is the pattern that will guarantee the project's sustainability in the long run."

- Toyota's understanding: "Toyota (...) sees the performance of a system (in our case, a project) through a complex network of influences among the so-called 'primary evaluation elements' (...) In its continuous improvement model (kaizen), each and every problem-solving action is carried out through an understanding of its influences on the primary evaluation elements."

- The relation between quality and continuous improvement: "Quality is like a fluid that must be applied to lubricate every gear of the process. A gear without the fluid will be a potential defect-generating point."

At the end of the article, I try to describe a real scenario in which systemic understanding could make a difference when putting improvement questions into practice. But you had better read the article to know exactly what that example is about. The idea of the second part is to get into the models currently used for self-improvement: to describe retrospective practices and the practices Toyota uses to keep this wheel of continuous improvement turning in the project's favor. I hope it is useful to those who want to establish a powerful continuous improvement cycle in their projects.

Posted by: alisson.vale
Posted on: 1/28/2008 at 7:03 PM
Tags: , , ,
Categories: Projetos Ágeis
 
 
For some time now I have been studying systems theory and its influence on the agile model for software development, especially with regard to Jim Highsmith's work and his adaptive method. Recently, during this study, I came across Donella Meadows' work on the 12 points of systemic leverage and saw a great deal of convergence with the principles and practices of the agile approach to software projects. I searched around and found here a short text on the subject adapted to the software field, but found it rather superficial. So I decided to gather some texts I had written about systemic analysis in software projects and produce an article. I hope it is useful to everyone. Projetos Ágeis e os 12 Pontos de Alavancagem Sistêmica.pdf (573,53 kb)