Friday, November 26, 2010

Most Important Agile Team Role

My colleague and I were working on a training class for the agile product owner role and started discussing how important and difficult the role is. A good product owner needs to be able to communicate well with both the business and the development team. They need to be able to take complex business requirements and break them down so the team can understand them. They need to make sure the team understands the context those requirements fit in, and help the team divide them in a way that is feasible from an implementation perspective while still delivering good business value. They also need to prioritize the stories or features, weighing ROI, usability, cost to develop and maintain, and the needs of different stakeholders. There is more they need to do. It is a complex and important role.

After discussing this role my colleague said it was the most important role on an agile team. I understand why he would say this because we have both seen many teams that struggle because they do not have someone who can perform the role. It is a difficult role to hire someone into because they need to be an expert in the field and trusted by so many people in the organization. They have to communicate at so many different levels: senior leaders, managers, business and technical.

Teams without a good product owner struggle with delays because they cannot get answers to problems. They build things that don't meet the business or customer's needs and they have a lot of rework.

I agree with my colleague that the role is important. But is it the most important role?

Assuming any role on a team is the most important goes against the very basic principle that we deliver as a team. In fact, a team could deliver without a specific person performing the role if others on the team had those skills, even if they were shared among multiple people on the team. Would they be as efficient? No! But they could deliver, and even deliver something good that meets the customer's needs.

Is it most important because it is difficult to hire for? The right person has to be found for this role and, as I said earlier, finding that person is difficult. Developers might be easier to find, but I think this is because companies will hire anyone that has heard of Java or .NET as a developer. Hiring good developers is just as difficult and takes a lot of time. Difficulty in finding someone to fill the role just means more effort needs to be put into it, not that it is more important.

It seems to me there is no most important role on an agile team. The roles are all important in order to deliver quality software rapidly. To do this you need a team that understands how to break features down into small useful parts that are delivered often. This takes the whole team working together and being creative to make sure all the different aspects are thought about, developed, tested, verified and delivered. This takes a team with people that have great communication skills, development skills, testing skills and the desire and ability to work with others to push a project to completion.

However, that does not mean certain roles don't take more effort to fill. Finding a good product owner will take some effort so product or project leaders should start trying to fill this role early. This is just like any other risk management. A team does spikes or prototypes for high risk parts of the system, not because it is most important but because it is most risky. The same with the product owner role. This is a high risk role. It is important and hard to fill so start early to reduce the risk.

I think my colleague was confusing high risk with most important. There is no doubt in my mind that if you wait until development is starting to find a product owner, you are going to be in trouble.

Thoughts?

Thursday, November 25, 2010

Fix Ubuntu 10.10 Sony Vaio Sleep/Hibernate Issue

I have a new Sony Vaio F series and have struggled to get sleep and hibernate to work. I finally found a solution today at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/522998/comments/30.

Here is the solution copied from the above link.


create files : /etc/pm/config.d/00sleep_module and /etc/pm/config.d/unload_module
add line to files : SUSPEND_MODULES="xhci-hcd"


I hope this helps if you are having the issue too.

Tuesday, October 19, 2010

Google and Fraud

Free is good so I guess I cannot complain too much. I like my Gmail accounts, but beware: there is not much protection and no easy way to recover if someone gets hold of your account. There is no number to call to get someone to lock an account that has been hacked. There is a form, but it says verification could take 24 hours.

Free always has a cost attached!

Here are some things to keep in a safe place, and steps to take if your account is compromised.

1) The verification code that google sends to your secondary email address.
2) Know the date that you created the account, month and year. The recovery process seems to require this. I had to guess and of course my account is still locked.
3) Place a fraud alert on your accounts
4) Check your credit reports continually
5) Close the accounts that have or may have been tampered with or not opened by you
6) File a complaint with the FTC https://www.ftccomplaintassistant.gov/
7) I also filed a complaint with the internet crimes division of the government http://www.ic3.gov/crimeschemes.aspx

I will update this as I learn more.

Tuesday, September 14, 2010

Agile Basics (A getting started guide)

* (updated Sept. 16th - new prezi version)

This presentation is meant to be shown to new teams to help them get started. I wish I could say it was completely unique and completely new, but it is a simplification of some existing presentations, with images and ideas from others on the web. Please feel free to use it and give me any feedback you have.




Here is an example prezi of how another team implemented these ideas in a year. Thanks Thomas Ferris Nicolaisen for showing this to me.

Friday, August 27, 2010

One Metric to Rule Them All and In the Darkness Bind Them

I think metrics and measurements are good when used in the correct way based on the context and team I am working with. For each team I am working with I use metrics to help them see what their issues are. Once they see their issues then we use metrics to help us determine as early as possible if changes we are making are having a positive or negative impact on those issues and the rest of the system.

Measurements ARE necessary to know we are headed in the right direction.

There are plenty of articles out there about abusing metrics. I thought it was well known that all metrics need to be balanced (e.g. code coverage going up while complexity goes down), and of course they need to be trended to be useful.

Now I have a request to find 1-2 metrics to apply to all teams to determine how effective agile and coaching are at improving the teams. Can someone really think that 1-2 metrics can be used to determine effectiveness?

All teams do not have the same highest priority issue(s). Teams with terrible user stories and acceptance criteria do not need the same metrics as a team trying to fix high coupling code issues.

Ok, enough complaining! To help me, and I hope others, I want to write about 1) what the goals of specific metrics are, 2) what the dangers and abuses of those metrics are, and 3) how to balance those metrics against each other.

Average velocity trend

*Goals:*
* Predictability!! Knowing what can be done by a specific date, or when something can be completed (see the sketch at the end of this section).
* Velocity is a *capacity* measure *NOT* a productivity measure.
* Velocity allows a team to know how much business value they can deliver over time.
* Developing a consistent velocity allows for more accurate (i.e. predictable) release and iteration planning.

*Possible abuses:*
* Calling this a measure of productivity. If velocity is the only number focused on, it could even hurt productivity. Teams can artificially increase velocity in many ways: stop writing unit tests or acceptance tests, increase estimates, stop fixing story defects, and reduce customer collaboration, just to name a few.
* Comparing velocity between teams. Velocity is a team value and not a global value. Many variables affect a team's velocity including relative estimating base, support requirements, number of defects, political environment of the product or project and more.
* Calculating velocity by individual. This leads to a focus on individual performance vs. team performance (i.e. sub optimization).
* Using velocity to commit to the content of an iteration when the value is not valid. Velocity is a simple concept and it provides a lightweight measure, but it also requires maturity to be meaningful. To be useful it requires estimation maturity and the consistent application of that estimation over a period of time by a stable team. If it lacks these elements its abuse can come at the hands of management or from the team, the latter occurring when a team makes assumptions about the validity of the metric when, without the mature elements in place, it is not usable at all.

*Balancing metrics:*
* Percentage of rework versus stories done on average each iteration. This can help a team see how much of their work each iteration is delivering new value to the team's customers.
* Planned work versus unplanned work trend. A lot of unplanned work will make a team's velocity less valuable because it hinders the team's ability to plan. Having a low amount of unplanned work will make the team's planning more consistent and accurate.
* Code quality metrics such as code test coverage, cyclomatic complexity, static error checking and performance. A team that is increasing their velocity by not focusing on code quality is making a short term decision that will have a negative impact over time.
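
To make the predictability goal above concrete, here is a minimal sketch of turning an average velocity trend into a release forecast. The code and numbers are illustrative only and are not tied to any particular tool.

import java.util.List;

public class VelocityForecast {

    // Average of the completed story points from the last few iterations.
    static double averageVelocity(List<Integer> completedPointsPerIteration) {
        return completedPointsPerIteration.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    // Rough forecast: iterations needed to finish the remaining backlog points.
    static int iterationsRemaining(int remainingPoints, double averageVelocity) {
        return (int) Math.ceil(remainingPoints / averageVelocity);
    }

    public static void main(String[] args) {
        // Example: the last five iterations completed 18, 22, 19, 21 and 20 points.
        double velocity = averageVelocity(List.of(18, 22, 19, 21, 20)); // 20.0
        // With 120 points left in the release backlog, roughly 6 iterations remain.
        System.out.println(iterationsRemaining(120, velocity));
    }
}

Of course, a forecast like this is only as good as the velocity behind it; if the balancing metrics above are unhealthy, the forecast is worthless.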

Delivered Features vs. Rework Resolution trend

*Goals:*
* Makes _waste_ visible so that it can be eliminated.
* Gives the team a good understanding of how much of their iteration capacity is consumed by rework (i.e._waste_).

*Possible abuses/issues:*
* It is a lagging indicator of team quality.
* Story defects are not worked on until a regression period, giving a short-term appearance of fewer defects.
* Increasing story estimates and/or reducing defect estimates
* Hiding defects as stories.

*Balancing metrics:*
* An inconsistent velocity. Delaying defect correction until later will make the velocity trend erratic with large spikes.
* Planned versus unplanned scope. A team that is delaying defect correction will tend to have more unplanned work due to poor quality issues.
* Number of defects in the backlog. Ideally this number should be on a downward trend. An upward trend of the number of defects in the backlog could indicate the team is delaying defect correction.
* Increasingly long regression periods at the end of each release.

Completed work vs. Carryover trend

*Goals:*
* How well the team is executing the iteration (i.e. delivering on their commitments).

*Possible abuses:*
* Planning less work than the team is capable of to allow for interruptions or poor estimating.
* Delaying refactoring to complete work, rather than keeping the code at a level that makes change cheaper and easier in the future (the same applies to other good practices such as TDD/unit testing).

*Balancing metrics:*
* A velocity trend that is not improving or is going down could be caused by planning less than the real capacity of the team.
* Planned versus unplanned work can indicate whether the team is being interrupted, causing task switching that could be the cause of the carryover.
* Downward test coverage trend and/or upward cyclomatic complexity trend could indicate that the code is becoming more difficult to change and much more difficult to estimate accurately.

Planned vs. Unplanned Scope trend

*Goals:*
* Show how good the team is at planning.
* Show how often the team is being interrupted within the iteration to work on something that wasn't originally planned.

*Possible abuses:*
* Large place holders to allow unplanned work to come in and appear to be part of the planned work.

*Balancing metrics:*
* Delivered Features vs. Rework Resolution trend.
* Completed work vs. Carryover trend

Code coverage vs. Cyclomatic Complexity trends

*Goals:*
* Reduce the cost of change. Clean code tends to make the application easier to understand and safer to change.
* Indicates that the system is being tested at an appropriate level.
* Indicates that the code quality is good; loosely coupled, simple as possible, etc.

*Possible abuses:*
* Focusing on only one code metric, e.g. 100% code coverage with generated tests will not make the code easier to understand or change.
* Focusing on code quality alone and not on the business goals of the customer.

*Balancing metrics:*
* Velocity trend
* Delivered Features vs. Rework Resolution trend
* afferent and efferent coupling trends
* abstractness trend
* package dependency cycles
* number of changes in a class(es)

This is far from an exhaustive list of metrics! But I hope it conveys the idea of thinking about a metric, what your goal is in measuring it, and how you can stop yourself or others from gaming the value by balancing it against other metrics.

** I started this article based on a set of metrics that my colleague Mike Stout uses, so thanks for the ideas Mike. Several other coaches I work with gave me feedback on this as well. Thanks!

Not Laughing Anymore

I used to read all the blog posts about the year of Linux on the desktop and laugh. Not because I do not like Linux but simply because it was too complex for the average user. But my opinion is changing.

Out of absolute frustration with the poor performance of my new work laptop running XP I decided to install the latest Ubuntu version for dual boot. Wow, it was easy and fast. I have been able to do everything on Ubuntu 10.04 that I do on a daily basis. It works within my corporate network seamlessly. It worked at home even better. It is super fast, but I am sure this is partially because I get the full 64-bit support that I do not get with XP.

Another great thing is how it feels the same whether I am at home or at the office. Windows XP behaved better outside the corporate network. I do not think this is all Windows' fault. I am sure the corporate installed tools are a big part of the problem. But now I am running the Evolution email client and it works the same in and out of the office. All the development tools do as well.

I have not been able to connect to our VPN, which was done with the Nortel VPN client on Windows. However, I only used the VPN to connect to Outlook, and I do not need it now.

I am still not 100% convinced and it has only been a week, but so far Ubuntu is doing great as my corporate and at-home OS.

Friday, July 23, 2010

How the Underdog Outperformed the Champ

This post is a textual version of the information I covered in my Agile2010 talk in Orlando.
I want to tell you about 3 teams and show how implementing agile practices, and slowly beginning to understand the principles behind those practices, helped them deliver value faster. One of the strange occurrences along the way was how a more junior team outperformed a more senior team for a very long time.





The story starts with a high profile internet booking engine for a low cost airline. The project was having a rough start and was not making progress. Most of the development was being done in a new office located 7 hours away from our office in Europe. I, an agile coach, moved to this location, and a director for the product spent <>

The team started as 1 team that we quickly split into two teams. Soon after we added a third team at the office in Europe.

All 3 teams were new to agile. The teams were set up with 6-7 developers and 1-2 testers. The teams shared two customer representatives/business analysts. They had worked in iterations, but the iteration was never a commitment, simply a place to track what was currently being worked on. The original two teams were relatively junior with mostly front-end experience. The third team we added had more years of development experience and more balanced experience across all tiers of an application.

We started implementing extreme programming practices with the two teams at the same location. The initial focus was on planning and we moved to a 1 week iteration with daily standups. We started doing retrospectives as well, and this allowed us to start making fast improvements, which quickly identified problems with our user stories and acceptance criteria.

We did some user story training, based on Mike Cohn's material, with the team. Then we started working with them to break down the stories. It actually took a couple of iterations to get the stories to a size and split such that they could be developed and completed in a one week iteration.

The two teams at the original location had a couple of attributes I really appreciated. The first was they were open to change. They wanted to improve and they listened to everything we would recommend as issues came up. One thing that took a bit more convincing, but that we were able to get them to try, was working in a cross-discipline way: the whole team focusing on testing when testing took more effort than development or was falling behind.

The team that implemented these ideas the best became very efficient and continually solved their issues. This led them to have a very consistent velocity. However, it was not only the most consistent velocity, it was also the highest of all the teams.

The second team struggled with consistency. As we started to implement the process they did have an initial upward turn but they did not consistently solve the root issues that were occurring and struggled to have a consistent output of value.

I will start with a retrospective on the high performing, "underdog", team. The items they were doing that really helped them included focusing on finishing the work in an iteration. The iteration became a commitment that they desired to keep. They actively searched for ways to do this and to improve.

One of the first issues that really made all the teams suffer was story size, completeness and how stories were split. Getting this right was a team effort. The business analysts watched how the team worked, and the development and testing team helped get stories to a size that both delivered value and was completable in a one week iteration.

At times stories were still too big and were not complete until late in an iteration. This made the testing very high risk. The testers needed assistance, and the whole team would help as required. Generally speaking they had, or rather developed, good cooperation between all roles on the team.

As I said earlier the team that did this the best also had a very high output of completed stories and fewer defects.

This team was open to new ideas and wanted to learn. They wanted to be better at planning and technical practices, although with the pressure on the timeline the technical practices ended up getting less focus.

One of the issues was that the practices were seen as absolutes, and the values and principles driving these practices were not completely grasped. This led to struggles when I left the team for a few months. When I returned they told me what all of their issues were, but they were not solving them. No one on the team really developed into the leader, coach and evangelist to keep improving and refocus the team back to the goals of the practices.

There were many technical practices that we did not implement well. One of these, which we spent a few weeks trying to get going, was ATDD/BDD. I am a big proponent of doing this, but we struggled to get the team to take the time to learn the tools and techniques to do it well, and it was dropped. Of course the normal problems of not having an automated suite of tests came up with each release, including many defects, repeated defects, and longer manual regression periods that mostly focused on positive and negative checking.

We should have added or developed someone on the team to be the leader who could have kept the team focused without a full time agile coach.

One of the things I am very thankful for was we did have management support for changing the way we were working, all the way to the Senior VP level. They not only allowed us to do it but were removing as many of our roadblocks as possible. Many of these roadblocks were with internal and external teams we integrated with, and the team had very little influence or relationship with them. They gave the team the contacts to develop the relationships and the support to influence those teams' leaders to help meet the team's needs.

Another thing I am thankful for is how well the team accepted me. I came in with authority to make changes and it is very difficult to accept an outsider who is asking you to make huge changes. They quickly allowed me to be a part of the team which allowed me to help them become a better team. This also gave me new friends to help adjust to a new country and culture which is not an easy thing to do.

Now we go to the third team in the story. This team was added a couple of weeks after we had started implementing the changes with the other teams. They stepped into the system we were creating from a completely different system. They had a few more years of experience than the other 2 teams, both within our company and outside. They had come from a team that had some level of success working in a different way. They called this way agile, but as I mentioned before, this was iterations that did not have the commitment of an agile iteration. They were used to working in long release cycles where mostly technical stories and upfront design were done first, versus stories that delivered small pieces of business value where the design evolved and refactoring was continuous.

They really fought against the change. They did not see, nor did they put effort into understanding, the value of how we were working. This ultimately led to a long period of very low performance. It was not until the last iterations that they became more consistent at delivering high quality value for the customer to use.

There were some things the team did that I liked a lot. They had better design skills and understanding of good design principles. They also fought for refactorings and the removal of some large pieces of technical debt.

But unfortunately there was a lot of room for improvement.

The team did not focus on finishing iteration commitments. This was not a priority. They did not focus on breaking the stories down to fit in the iteration. If development was done, they did not assist with testing. This led to animosity between the developers and the testers, and a lot of defects.

They did understand design principles, but the idea of simple design and working in small sets of changes was not understood. Refactoring was seen as a big story when things got so bad you could no longer make changes without breaking everything else. This led to a couple of delayed releases due to large refactorings that would take a week or so to fix all the defects created. Obviously better automated tests of all types would have helped this.

They never worked as a team. Each developer owned their own stories and worked on them alone. The communication between the developers, testers and business analysts was very bad at the beginning. It took a long time to improve this.

They did not find and solve real issues. They usually identified symptoms and avoided digging into the real cause of those symptoms. This meant issues carried on for a long time before they were actually solved. I think part of this was they knew what the real issue was but did not like the possible solutions, like slowing development to help with better testing or working in smaller batches of work.

This team had no one leading for a long time. There was no one on the team who had worked in an agile way and was focused on continuous improvement. Someone was added to help lead the team after a few weeks of struggles but it took a while to get the team turned around.

The team was built from a group of people that really did not understand agile values, principles and practices. This meant that one of the business analysts and I were the only ones trying to show them the value of what we were trying to do, with minimal success. One thing we did mid-way through the project was have the team that was performing well do a session with the other teams to tell them what they had been doing and how it helped solve their issues. This helped a bit.

One of the things we tend to struggle with is creating new teams for a project that is growing. We tend to start with too many people at the beginning, and when we need to grow a new team we build a whole team from scratch. I think it is "eXtreme Programming Explained" by Kent Beck that says to split an existing team to grow a new team. This is more likely to give you a team that starts with people who understand the application, the current design, the process and the system as a whole.

The less experienced team wanted a set of practices to start with. A senior team needs to have more say. Most people want to improve. Let them fail early, but in a way they can get quick feedback. They may still need some guidance and some help asking the right questions, but the freedom will reduce the reluctance to change. I believe in most cases the team will feel the pressure to correct the problems they create.

One thing I enjoyed was the debates I had with the leader that was brought on to help the team. He did not agree with everything we were doing, similar to the rest of the team, however he wanted to discuss and understand why. We had long discussions about things we were trying to do and things that were going wrong. In the long run he ended up being a great help in convincing the team to try new things.

So what have I learned on this project?

Getting started with a new team is hard! There is so much that needs to be considered. Our teams had domain experience and experience with part of our architecture. However, they had not worked as a team together.

The first thing I think you need is a strong leader. By a strong leader I do not mean a dictator. I mean someone who understands the system you are in. The leader should understand the process you will be starting with and understand the values and principles behind that process. This leader must be able to explain those and guide the team towards understanding and using those values and principles in order to continuously improve.

In order to continuously improve, the leader and the team must know how to dig deep into the issues and find the root cause of their issues. This requires courage and openness. It is much easier to point at and blame symptoms and/or others for the issues the team has. The team will want to say the problem is "the short iterations", "pair programming" or "the open environment" when the real issue is that stories are too big and not clearly understood, there is a communication problem between developers and testers, the team is guessing when they do not understand rather than having a discussion with the customer and/or business analyst, or many other issues that are usually people and system related.

One thing I found that helped the team see issues coming was watching and limiting the amount of work in progress.

Limiting work in progress, like having a goal to avoid carryover, does not fix your problems. However, it is a great indicator that there are issues.

Too much work in progress hides issues and delays feedback. Too much work in progress means testing is not fully done, or maybe not done at all, and you are hiding quality issues. It also means the customer cannot see and use the work yet, so it delays getting customer feedback. It delays the discovery of all types of communication issues on the team and between the team and third parties they must interact with. Too much work in progress also hides the progress. It is difficult to tell where you are when work is not developed, tested and accepted.

One of the interesting things I saw, and the reason I decided to write and talk about this, was how I saw experience work against a team and how inexperience "helped" another team. We actually had 2 teams like this at a couple of different points, and they both had the performance issues I discussed earlier.

With a less experienced team you can, and probably should, give them more of a starting point. A team not familiar with agile planning needs some absolute starting practices to try out. But be careful that you are always explaining why you are recommending a certain practice. It is a must to mentor them and develop a leader who understands not only what you are doing but why. If there is no one on the team who can be this, find someone you can bring on to the team and develop into this role.

Senior teams need to have a bigger say! They do have some experience so make sure to spend time getting their buy in and let them help set the starting process for the team.

The way we built the 3rd team would have made this difficult because we did not build the team with a base from the existing teams. It would have been extremely difficult to manage teams with different iteration schedules from a release perspective. However, this team was so inconsistent we could not really promise what they would have done in any set of iterations anyway so we still struggled with release scope issues.

Experience is a broad term. I do not think many people believe time based experience is the only or even best measure. Someone that has done something many times and done it the exact same way each time does not have experience. It is unlikely they would be able to or would even try to adapt what they have done when the context changes.

I really like the levels of experience that Andy Hunt describes in his book "Pragmatic Thinking and Learning":

Novice = "...have little or no previous experience in this skill area. ... They can, however, be somewhat effective if they are given context-free rules to follow"

Advanced Beginner = "...can try tasks on their own, but they still have difficulty troubleshooting"

Competent = "...can now develop conceptual models of the problem domain and work with those models effectively. ... Competents can troubleshoot."

Proficient = "need the big picture", "will be very frustrated by oversimplified information" and "can self-correct"

A team lead I worked with recently had great success limiting his small team to 3 stories in progress at once. When he moved to lead a team that was more than double the size he wanted them to keep the same WIP limit as the small team. The team was very unhappy because they were not staying busy and they knew they could do more than they were doing. He had learned a practice but could not adjust it to the new context yet.

I hope this helps you and please comment and ask questions. I am still learning too!!!

Thursday, June 24, 2010

GivWenZen for Flex

Very cool, I was sent a link to a new clone of GivWenZen for Flex. It looks very interesting: http://bitbucket.org/loomis/givwenzen-flex/wiki/Home

Sunday, April 11, 2010

GivWenZen Beta 10 - Vararg Support for Step Parameters

I have finished packaging the new GivWenZen 1.0 beta 10 release. Someday I may not call it a beta, but I am not sure I am ready for that.

The most interesting new feature was the ability to allow varargs for step parameters. A specification can now have something like the following:

given: the numbers 3, 6, 12, 67

The step method to handle this could look like this:

@DomainStep("the numbers (.*)")
public void setTheNumbers(int... numbers) {
    // implementation here
}

This will work for all native types, String and any type that can use the normal PropertyEditor conversion of GivWenZen.
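
As an illustration of the PropertyEditor conversion mentioned above, here is a minimal sketch of an editor for a custom type. The Money type, the editor and the registration call are illustrative assumptions, not part of GivWenZen; check the project source for the exact convention it uses to find editors.

import java.beans.PropertyEditorManager;
import java.beans.PropertyEditorSupport;

// Illustrative domain type used as a step parameter, e.g. setThePrices(Money... prices).
class Money {
    final double amount;
    Money(double amount) { this.amount = amount; }
}

// Converts the text captured by the step regex, e.g. "10.50", into a Money value.
class MoneyEditor extends PropertyEditorSupport {
    @Override
    public void setAsText(String text) {
        setValue(new Money(Double.parseDouble(text.trim())));
    }
}

// Registers the editor through the standard java.beans mechanism (assumed here).
class MoneyEditorRegistration {
    static void register() {
        PropertyEditorManager.registerEditor(Money.class, MoneyEditor.class);
    }
}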

While doing this I realized that there needed to be a convention in place for automatically loading a step parameter conversion type, an implementation of MethodParameterParser, when starting a GivWenZen instance. That is now possible by placing a class that implements MethodParameterParser in the bdd.parse package. As with other custom types they are used before the default converters are used.

A few other small issues were fixed and the full list can be found here.

Tuesday, March 23, 2010

Eclipse Plugin for GivWenZen

One of the weaknesses of most BDD and collaborative acceptance testing tools is the lack of nice tools for maintaining them. What I hope is only a first step in correcting this is a new Eclipse plugin for GivWenZen.

The plugin adds simple highlighting to the content.txt test file showing missing step implementations. It also allows navigation from the content.txt test to the implemented step method. If you search for usages of a step method it will show both java and content.txt files.

The plugin's author has also done a nice 2 minute screencast:


I am adding a task to create a similar plugin for IntelliJ to my todo list. Thanks for creating this plugin Szczepan!!

Monday, March 22, 2010

Running FitNesse Test in Your Automated Build


If you are creating automated acceptance tests you should be including them in your automated build. Here are a few options for including FitNesse in an automated build.

Ant:

FitNesse comes with a set of Ant tasks that can be used for running tests. Below are the targets that I use in GivWenZen:

<target name="load_fitnesse_taskdef">
<taskdef classpathref="fitnesse.classpath"
resource="tasks.properties" />
</target>
<target name="execute_fitnesse_tests"
depends="load_fitnesse_taskdef">

<start-fitnesse wikidirectoryrootpath="${basedir}"
fitnesseport="${fitnesse.port}" />

<execute-fitnesse-tests suitepage="GivWenZenTests"
fitnesseport="${fitnesse.port}"
resultsdir="${givwenzen.target.dir}"
debug="false"
resultsxmlpage="gwz-tests-results.xml"
classpathref="fitnesse.classpath" />

<stop-fitnesse fitnesseport="${fitnesse.port}" />
</target>


The FitNesse output states that the Ant task is deprecated and should be replaced with the FitNesse REST commands. The Ant target example from the FitNesse site is below:

<target name="my_fitnesse_tests">
<java jar="dist/fitnesse.jar"
failonerror="true"
fork="true">
<arg value="-c"/>
<arg value="FitNesse.MySuitePage?suite&format=xml"/>
<arg value="-p"/>
<arg value="9234"/>
</java>
</target>

Hudson:

The Hudson plugin for FitNesse is very easy to configure and use. Once it has been installed and Hudson has been restarted, set the path to the FitNesse XML results file for your build.




Maven:

I am not a big fan of Maven, but there is a Maven plugin for FitNesse and info for it can be found at http://mojo.codehaus.org/fitnesse-maven-plugin/usage.html.

Monday, February 15, 2010

Metaprogramming Ruby - Thursday: Class Definitions


Wow!!! Learning takes a lot of time. :) This chapter was very interesting but very complicated. I am not sure my notes will make any sense to anyone but me, but here they are anyway.


Thursday, February 4, 2010

Metaprogramming Ruby - Tuesday: Methods

Continued notes on 'Metaprogramming Ruby'.




Now I am caught up with where I left off in early December and will start the next chapter this weekend.

Metaprogramming Ruby - Monday: The Object Model

I started reading the book 'Metaprogramming Ruby' about a month ago and had to stop. I am backing up and starting over. Here are my visual notes.

Wednesday, February 3, 2010

Customize GivWenZen Screencast

GivWenZen has a nice set of defaults which allow you to start using it quickly. Once you get started though you may want to change some of these defaults and add functionality to GivWenZen. Here is how.
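
In addition to the screencast, here is the simplest customization in text form: pointing GivWenZen at a different step class package. The fluent creator described in the "Fluent Creator for Advanced GivWenZen Features" post below makes this a one-liner (the package name is just an example).

GivWenZenExecutor executor = GivWenZenExecutorCreator.instance()
    .stepClassBasePackage("my.step.package.")
    .create();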



Thanks for watching and let me know if you have any questions or issues with GivWenZen.

Monday, February 1, 2010

Fluent Creator for Advanced GivWenZen Features

I was starting to create a screencast covering some advanced features in GivWenZen and I realized it was more difficult to use these features than it should be. I had just seen Szczepan Faber, the Mockito guy, give a tutorial on using fluent creators (read about fluent interfaces here) for testing, and I thought this type of creator would also fit my current need. I was very pleased with how this turned out. Not only did it make using the advanced features easier, by hiding details that I did not need, it also reduced the need for a client to depend on other parts of GivWenZen when changing the defaults.

When I started I had one main class to start up the system. Normally, that is, when you do not want to change the defaults, you simply call the no-parameter constructor.

new GivWenZenExecutor();

Simple enough, but as soon as you want to change one parameter it gets more difficult. Now you need a set of constructors that take specific objects and things get to be a pain real fast. Getting a parameter order that makes sense is difficult and many times impossible. In my case I got tired of that and created one additional constructor that took all the parts that could be changed. The problem with this is you need to know about all the parts that could change even if you want to change only one of the defaults.

Original way to override only the base package where step classes are located:

new GivWenZenExecutor(new DomainStepFinder("my.step.package."),
new DomainStepFactory(),
null);

Now the client has dependencies on DomainStepFinder and DomainStepFactory. What does that null parameter mean???

In reality the only thing the client wanted to do was configure the executor to look for steps in a different package.

Here is how it turned out with the fluent creator:

GivWenZenExecutor executor = GivWenZenExecutorCreator.instance()
.stepClassBasePackage("my.step.package.")
.create();
Wow, I no longer know about any of the dependencies the executor has. I was able to simply tell the creator what my package was and ask it to create the executor. I could even remove the dependency on the GivWenZenExecutor by assigning the created object to the GivWenZen interface.

If I need to change the object that finds classes I am less likely to have to change the client now. I simply make the change in the creator. I can rename and repackage and change to my heart's desire as long as I do not change the creator's contract.
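
For anyone curious what sits behind that interface, here is a minimal sketch of how a fluent creator like this can be structured. The class and field names mirror the examples above, but this is an illustrative sketch, not the actual GivWenZenExecutorCreator source, and the default step package shown is only an assumption.

public class GivWenZenExecutorCreator {
    // Defaults live inside the creator, so clients never have to know about them.
    private DomainStepFinder stepFinder = new DomainStepFinder("bdd.steps."); // assumed default
    private DomainStepFactory stepFactory = new DomainStepFactory();

    public static GivWenZenExecutorCreator instance() {
        return new GivWenZenExecutorCreator();
    }

    // Each override method returns the creator so calls can be chained.
    public GivWenZenExecutorCreator stepClassBasePackage(String basePackage) {
        this.stepFinder = new DomainStepFinder(basePackage);
        return this;
    }

    public GivWenZenExecutorCreator stepFactory(DomainStepFactory factory) {
        this.stepFactory = factory;
        return this;
    }

    public GivWenZenExecutor create() {
        // Matches the constructor shown earlier; the third argument stays at its default.
        return new GivWenZenExecutor(stepFinder, stepFactory, null);
    }
}

The key design point is that every method except create() returns the creator itself, so the client only names the defaults it wants to override.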

It is funny, and sad, that I have used this type of interface in a few places, such as several mocking frameworks and quite a bit in different Ruby frameworks, but I always forget about it when writing Java code for a project. Actually, I am not sure I could have come up with this interface before using the features a few times and understanding where things were difficult. By the time I wrote the creator I knew exactly what I wanted it to do and it only took a few minutes to write.

Now I can do the screencast on using GivWenZen advanced features.

Tuesday, January 19, 2010

Common Practices that Back Agile Principles and Values

After I wrote the post on principles and values guiding the practices, I thought it would be a nice exercise to take the values from the Agile Manifesto, Lean and maybe XP and list some of the agile practices that support those values and principles. Here is my first try at it. None of these practices are absolutes, but all of them add some value, and in the end we need to understand how they support our values and what goal(s) they have. In the case that we cannot do them (or simply are not doing them), such as a distributed team that cannot be in a single open workspace, we need to look for practices that meet those goals and support the same values and principles, or be willing to accept that the goal will not be completely met.

Individuals and interactions over processes and tools

Related Agile Values and Principles: Intense collaboration, Amplify learning, See the whole, Empower the team, Feedback, Open and honest communication

User Stories and Acceptance Criteria - to me the best definition for a user story is still 'a place holder for conversation'. One goal of the user stories is to drive continual conversation around a piece of business value so that the user gets what they want and it happens through conversations.

Daily Scrum/Standup meetings - daily the team gets together for interaction and team building. Some goals are for the entire team to communicate daily to focus on the highest priority items and make sure all problems are being addressed.

Open workspace - When we cannot see each other we tend not to interact well and it is much easier to just point the finger of blame. One goal of the open space is for people to talk and work out the issues as soon as they come up. It is meant to promote communications.

Customer is always available - see the same item under customer collaboration.

Retrospectives - Retrospectives probably support all of the values, but more often than not they end up being about improving how we interact. The goal of the retrospective is to improve the outcome of all of the practices we have put in place, including the retrospective itself.


Working software over comprehensive documentation

Related Agile Values and Principles: Build integrity in, Intense collaboration, Embrace change, Quality work, Eliminate waste, Amplify learning, Feedback

Code Inspections - One goal of code inspections is to catch issues early and improve the code maintainability and quality, as well as to promote learning.

Test Driven Development - One goal is to drive the quality from the beginning. Another goal of tests is to verify our software is working as expected. The nice thing about good tests is they also document the behavior of the application or a part of the application.

Integrate often - Integrating the code often helps keep the code in a working state which allows us to deliver often. Big/delayed merges have a higher risk of breaking something.

Deliver often - Delivering working software to the customer is great documentation. One goal is to have working software continually.


Customer collaboration over contract negotiation

Related Agile Values and Principles: Intense collaboration, Embrace change, Eliminate waste

User Stories and Acceptance Criteria - Stories are the business value we are delivering and one of the major pieces that are collaborated on.

Deliver often - Not having a fixed contract is scary for a customer. Working software delivered often is a confidence builder that can help increase the collaboration.

Customer is always available - Some goals here are to make sure the customer gets what they want and that the team is not waiting or blocked and is therefore very productive. The customer and the development team are always talking.

Behavior driven development - The goal is to document the business issues the application should solve for the customer. This is a collaborative effort that helps everyone understand what the goals of the application are.

Iterative Planning - One goal of iterative planning is to deliver the most important items to the customer.


Responding to change over following a plan

Related Agile Values and Principles: Decide as late as possible, Incremental change, Courage, Deliver as fast as possible

Iterative planning - The shorter the iteration the more often we have good points in time to make changes without interruptions.

Limit work in progress - One goal of limiting work in progress is to allow change without interrupting the flow of the team. A lot of work in progress means the team must stop something in progress, which is potentially dangerous and definitely a waste of effort, or we must wait longer to get to a point when someone is ready to start a different piece of work, which delays the ability to make the change.

Refactor - The goal is clean code. Clean code makes change easier.

Collective ownership - One person 'owning' a part of the application is a guarantee to slow the ability to change and everything else.

Unit tests and acceptance tests - One goal is to give the team confidence to make a change.

Automation - There are many things we can automate: the build process, code generation, deployments, release, etc. Most of these automations should have a goal of being able to respond to change faster.


As I started to put this together I realized that there are some practices that support other practices, and the practice may be one or more steps away from the actual principle or value. Measuring velocity is a good example of this. Velocity is a measure of how much work a team can do in a short fixed period of time. Velocity supports iterative planning, which in turn is based on several of the values.

Comments are welcome and desired on these items. I would really like some help with adding other goals for the practices related to each value, other practices that go with each value, and the goals of those practices.

Monday, January 18, 2010

Agile Values and Principles Should Guide Your Practices

In a recent meeting I was in, we were having a discussion about a particular process (I leave the exact process and practice off on purpose) and one person was saying it should be mandatory to perform a specific practice. This particular practice is a bit heavy and was put in place because some legacy products/teams had some very bad habits and the code was really unstable. My first desire was to throw my hands up and say that it was stupid. Luckily, my more sensible side asked what the goal of the practice is. This led to a discussion identifying the goal of the practice and how individual teams might need to meet the goal, but not necessarily with that specific practice.

As with any area of our work, when we get bogged down in the day to day of doing things, many times we forget why we are doing certain things. New people come on the team and challenge a practice, and we get defensive and walls go up. We do not want to be in a position where we feel we must defend a practice, or a tool for that matter. These are things to help us meet a specific goal, and that goal is the important part.

Since we want, or I hope we want, teams that are continually improving, we need to understand what the goal of the practice is and then verify that both the goal and practice are in line with our values and principles. Let's look at a common agile practice and the goals, values and principles behind it.

A common agile practice is TDD. Starting from the agile manifesto: TDD supports the value of "working software over comprehensive documentation" by having a set of tests that can be used to continually verify that the software is working. As an added benefit, TDD also gives documentation of how the software actually works. TDD supports the value of "responding to change over following a plan" by allowing change to be safer. Change traditionally is seen as dangerous, but TDD is one practice (not the only practice) that helps make change easier and faster.

Moving to Lean principles: TDD supports the Lean principle of "build integrity in". With TDD you are thinking of the integrity of the system from the beginning; before you actually write a line of production code you are considering what it should do and building a mechanism to verify that it does it. TDD also supports the Lean principle of "eliminate waste". Rework and defects are waste in SW development. Since the test is written first, TDD helps prevent the creation of defects and reduce the amount of rework by using the tests to verify that the current changes have not broken some part of the system. TDD also supports the Lean principle of "amplify learning". I can run the tests often to verify current changes have not broken anything.

By understanding the goals, values and principles behind the practice we have the ability to adapt the process to the changing needs of the customer and team. If we stop doing TDD how will it affect our ability to deliver often? Will it affect the quality and therefore increase rework? Is there some other practice(s) that it can be replaced with? What is the cost of the other practice(s) in comparison to TDD? Who are the customers of this practice? How will any change to the practice affect the whole system not just me? TDD is a practice that adds a lot of value and is backed by many principles and if you decide to replace it you will need to answer a lot of questions. Other practices have fewer goals and there will be more options available that may fit different situations better.

Focusing on the goal, principles and values is what allows a team to improve and adapt to change. If our practices become our goals we will fail and our teams will become less productive. Stop getting defensive when someone challenges you to change. First make sure you understand why you are performing a practice. Explain to them the goals you are trying to achieve with the practice. Validate that the change does not violate the values and principles you are following. Does it look like it could make the team better? If so, give it a try and see if the team gets better or worse, and then adjust appropriately.

But please do not stop doing TDD. :)

Simple GivWenZen Example Screencast

Here is my first try at a screencast. I really like the screencasts that Uncle Bob has been doing, so I decided to give it a try with GivWenZen. Not only was it fun but very easy using Screencast-o-matic.



More screencasts to come soon!