March 28, 2010
Winter Agile Tuning Review
Having an Agile conference somewhere nearby (I consider Cracow nearby, as it's 3h by train from Warsaw) at a convenient time is reason enough to go. After I'd checked the website, I knew I couldn't let it pass by. The program was interesting, with some well-known names (e.g. Szczepan Faber, the guy who wrote Mockito, the best mocking library in the Java world), it was practically free of charge (50 zł), and it started at 2pm on Saturday.
2PM! Do you know what that means? It means you don't have to get up at 5am to catch a train, or at 4am to get there by car. You don't have to be totally exhausted, intoxicating yourself with gallons of energy drinks. You don't have to arrive a day early, wasting a perfectly sweet Friday night. It means you get up at 8am like normal people, you have enough time to eat lunch, and after the conference it's exactly the right moment for an afterparty.
I wish all conferences started at noon, at the earliest.
Back to the event. I went there with a friend of mine, Tomasz Przybysz, one of the few programmers I've met who actually cares about the quality of his work. Since there were two tracks and two of us, we decided to split up and exchange what we'd learned during the breaks.
I chose the 'Craft' track while Tomasz went for 'People'. That turned out to be a pretty good choice; I only wish I could have also seen '10 tips that ScrumMasters should know...' by Nigel Baker. But let's start from the beginning.
Sweetest acceptance tests
The first lecture was given by Bartosz Bankowski and Szczepan Faber. At the last moment they changed the topic of the presentation from the enigmatic 'Be a VIP and rid your WIP' to 'Sweetest acceptance tests'. Can't say I was disappointed. They took us through an example of working with Sweetest, which is a sort of wiki plugin that allows a very fluid, direct translation between a wiki-defined set of acceptance requirements and automated (JUnit) acceptance tests.
How does it work exactly? It's quite simple: you talk with your client, you both write his requirements in a wiki, and the tool creates automated JUnit acceptance tests for you. Here's a demo.
It fits very well with a TDD/BDD approach and gives you a bit more contact with the client, as the client can see on the wiki what his own acceptance requirements are and which ones are already implemented.
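The flow above can be sketched in plain Java. This is a minimal sketch of the general idea, not Sweetest's actual API (which I don't have at hand); the discount feature, the table format and all names are invented for illustration:

```java
// Sketch of wiki-driven acceptance testing: requirement rows written with
// the client on a wiki become executable assertions against the code.
// NOT Sweetest's real API -- everything here is invented for illustration.
public class WikiAcceptanceSketch {

    // A trivial piece of production code under test.
    static int applyDiscount(int price, int percent) {
        return price - price * percent / 100;
    }

    public static void main(String[] args) {
        // Imagine these rows were copied from the client's wiki page:
        // | price | discount % | expected |
        String[] wikiRows = {
            "100 | 10 | 90",
            "200 | 25 | 150",
        };
        for (String row : wikiRows) {
            String[] cells = row.split("\\|");
            int price    = Integer.parseInt(cells[0].trim());
            int percent  = Integer.parseInt(cells[1].trim());
            int expected = Integer.parseInt(cells[2].trim());
            int actual   = applyDiscount(price, percent);
            if (actual != expected) {
                throw new AssertionError(row + " -> got " + actual);
            }
        }
        System.out.println("all wiki rows pass");
    }
}
```

The point is that the client edits the table, not the test code, and a failing row points straight at an unmet requirement.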
Recently we spent two days writing acceptance-test scenarios for my client in doc format, so that he could formally say 'yes'. We did it by checking his requirements (again), verifying that we had implemented them correctly, and writing it all down. The thing is, since we do TDD/BDD, we already have all the tests that tell me whether his requirements are met, because we start implementing a new feature by writing a test for it. The main failure of the situation was that we wrote those scenarios AFTER the whole project was implemented. Had we written them down BEFORE, maybe even connected them to unit tests, we'd have been done before we started.
That's what Sweetest is for.
Not that it's something totally new. If you're doing TDD, you are already implementing the client's requirements as tests (or at least you should start every feature with a single test for it, then follow the Red-Green-Refactor cycle down through the implementation until the feature is fully working and fully covered), but everything that helps with communication, well... helps. And, as in my example, it can save you a few days of writing docs.
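For completeness, here is the Red-Green-Refactor idea in miniature. FizzBuzz stands in for a real client requirement; the class and method names are invented:

```java
// Red-Green-Refactor in miniature. In a real project the first "red" test
// would encode a client requirement, not FizzBuzz.
public class RedGreenRefactor {

    // Step 2 ("green"): the simplest code that makes the tests pass.
    static String fizz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0)  return "Fizz";
        if (n % 5 == 0)  return "Buzz";
        return Integer.toString(n);
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("red: test failed");
    }

    public static void main(String[] args) {
        // Step 1 ("red"): these tests are written BEFORE the implementation
        // exists. With the method above in place, they now pass.
        check(fizz(3).equals("Fizz"));
        check(fizz(5).equals("Buzz"));
        check(fizz(15).equals("FizzBuzz"));
        check(fizz(7).equals("7"));
        System.out.println("green");
        // Step 3 ("refactor"): clean up while the tests stay green.
    }
}
```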
I was at another conference in Cracow last year where Szczepan presented the Mockito framework. Just like last time, his presentation was vivid, fluent, interesting and straight to the point. That's what makes a happy Panda.
Let them speak in their own language
Next was Konrad Pawlus with 'Let them speak in their own language. How we enabled domain experts to build acceptance tests - case study'. Konrad shared his experience with developing test-driven software for the financial market. The key was to allow domain experts, guys used to Excel, to verify the software in a way that was familiar to them.
To do that, Konrad (or his team) created a Visual Basic tool that could convert xls files with example calculations into a scripting language, and then run that script in a manner similar to JUnit tests. In the end, a domain expert could verify everything down to the last number (and knowing that Excel actually introduces a lot of rounding errors, that is very important) and create new tests to check scenarios they had never thought about before.
This is quite a cool way to bring the customer into testing and get solid feedback right away, instead of a pile of bugs and change requests at the end. It's not something new either; there are frameworks like FitNesse by Robert C. Martin that accomplish this. The thing was that for some clients/projects you need exact Excel equivalence in calculations, which means simulating Excel's errors as well, even those the customer is not aware of. That's where it's nice to actually verify everything back against Excel.
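The Excel-equivalence point is worth a concrete sketch. Excel's ROUND rounds ties away from zero, which Java's BigDecimal HALF_UP mode reproduces (while HALF_EVEN, "banker's rounding", does not). The spreadsheet rows below are invented, but the pattern is the one Konrad described: expected values come straight from the expert's xls:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of "Excel equivalence" testing: the code must reproduce Excel's
// own rounding behavior, not whatever the language defaults to.
public class ExcelEquivalence {

    // Excel's ROUND(x, digits): ties round away from zero (HALF_UP).
    static double excelRound(double x, int digits) {
        return new BigDecimal(Double.toString(x))
                .setScale(digits, RoundingMode.HALF_UP)
                .doubleValue();
    }

    public static void main(String[] args) {
        // Rows as exported from the expert's xls: input | digits | Excel result
        // (values invented for illustration)
        double[][] rows = {
            { 2.5, 0,  3.0},  // HALF_EVEN (banker's rounding) would give 2
            { 3.5, 0,  4.0},
            {-2.5, 0, -3.0},  // away from zero, like Excel, not toward it
        };
        for (double[] r : rows) {
            double actual = excelRound(r[0], (int) r[1]);
            if (actual != r[2]) {
                throw new AssertionError("mismatch for input " + r[0]);
            }
        }
        System.out.println("matches Excel");
    }
}
```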
Estimation of software complexity in Agile projects
Don't be fooled by the 'expert' part: expert estimation is nothing more than calculation based on intuition, something which may work for experienced people and repetitive projects, but often ends up as pulling numbers out of your ass. Sure, the poker thing helps a lot, and having other people verify your 'out-of-ass' calculations helps as well, but it's still intuition. We would wish it were at least heavy wizardry, but it's not. It's not science at all.
Jarosław Swierczek stated that, according to some statistics, a bit more than 70% of expert estimations are wrong, where 'wrong' means more than a 30% difference. Unfortunately I had no chance to ask him where he got this percentage from, and I believe it may not hold for planning poker (does intuition work better in a group?), so it all stays anecdotal evidence.
Anyway, he showed us a nice, scientific formula for estimating complexity and time-cost. Things that were especially interesting:
- You need at least two years of historical data (estimates of complexity and time, plus the actual results) to be able to do anything better than 'pull numbers out of your ass'.
- You need to choose a formal method of calculating complexity and stick with it (even the Function Point method is not that bad).
- Complexity has got NOTHING to do with the time it takes to finish something
- How much time it takes depends on your team's performance, which should be calculated from sprint to sprint, depends on things like technology/experience/distractions, and can actually be estimated by the expert method ('my intuition tells me I'm gonna be super fast/slow because I have a lot of/little experience in this technology').
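To make the complexity-vs-time separation concrete, here's a toy calculation (all numbers invented): complexity is fixed in points by whatever formal method you chose, and only the sprint count changes as measured team performance changes:

```java
// Toy illustration: complexity (in points) never changes; the time estimate
// is derived from it using the team's measured velocity. Numbers invented.
public class EstimationSketch {

    // Average measured velocity in points per sprint.
    static double velocity(double[] pointsPerSprint) {
        double sum = 0;
        for (double v : pointsPerSprint) sum += v;
        return sum / pointsPerSprint.length;
    }

    // Time depends on performance; complexity itself stays constant.
    static double estimateSprints(double complexityPoints, double[] pointsPerSprint) {
        return complexityPoints / velocity(pointsPerSprint);
    }

    public static void main(String[] args) {
        double complexity = 120.0;        // e.g. from Function Points, fixed
        double[] history  = {18, 22, 20}; // measured velocity, sprint by sprint
        System.out.println("estimated sprints: "
                + estimateSprints(complexity, history)); // 120 / 20 = 6.0
    }
}
```

Swap in a slower team (lower measured velocity) and only the sprint count moves; the 120 points of complexity stay exactly where they were.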
Jarosław has his own consulting company, Aion (www.aion.com.pl), where they help other companies with estimating complex projects, so of course you should be wary of marketing bullshit, but what he presented made a lot of sense to me.
Journey through tests and prototypes
The fourth lecture was given by Piotr Trochim, a game developer (his current LinkedIn profile shows CD Projekt RED, the company behind 'The Witcher'). Piotr talked about TDD in game development, a situation quite different from typical enterprise/b2b/Internet development in that you create a hell of a lot of prototypes of different ideas before actually deciding what you are going to put into the trunk of your repository.
Well, we (Internet/intranet software developers) actually create a few prototypes too, especially when changing technology, so it applies to us as well.
Since creating a well-written prototype does not pay off in the long run and only slows down the prototyping, Piotr suggested this change in the TDD cycle:
- First create a working prototype, or a few prototypes (max 4h each), if you need to choose between options. Don't worry about tests; do it as simply and quickly as possible.
- Decide whether the idea/technology is good enough to be used in production.
- DELETE the prototype(s) completely (never commit a prototype, as it is very badly written).
- Now write the solution again, using TDD and best practices.
- When you're finished, create a tool to help monitor/debug the solution. This could be anything from something as simple as a logging annotation (so you can turn logging into debug mode and see what happens) to state visualizers.
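The final 'debug tool' step can be as small as a global trace switch. A minimal sketch; the physics example and the game.debug property name are invented, not from the talk:

```java
// Minimal "debug tool": a global switch that turns verbose tracing on only
// when you need to inspect the finished solution. Names are invented.
public class DebugSwitch {
    static boolean debug = false;

    static void trace(String msg) {
        if (debug) System.err.println("[debug] " + msg);
    }

    static int updatePhysics(int position, int velocity) {
        int next = position + velocity;
        trace("physics: " + position + " -> " + next); // silent unless debug is on
        return next;
    }

    public static void main(String[] args) {
        debug = Boolean.getBoolean("game.debug"); // enable with -Dgame.debug=true
        System.out.println(updatePhysics(10, 3)); // prints 13
    }
}
```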
For me the most important part was to NEVER commit the prototype and always delete it completely. This is something I've seen way too often: programmers commit a completely chaotic prototype just because it works, then try to extend it later, with very poor results. It usually ends in refactoring that takes longer than writing the same thing properly from scratch.
It was also interesting to note that while Piotr uses C++ for development, he creates most of his prototypes in C#, as it's simply easier.
Yeah, I wouldn't like to return to C++. The expressiveness of this language simply sucks.
BDD and Ruby On Rails using Cucumber and Rspec
The last lecture was from Pawel Wilkosz, about BDD in Ruby on Rails using Cucumber and RSpec. Frankly speaking, I was tired, I don't work with Ruby, and having used TDD for more than four years I didn't have much to learn here. That's where I'd rather have been on the 'People' track, where all the guests were having an Open Space kind of discussion.
And then there was an afterparty, but that's a completely different story.
You can find the official pictures from the conference here.
All pictures by Krzysztof Dorosz.