March 25, 2012

33rd Degree day 2 review

Second day of 33rd had no keynotes, and thus was even more intense. A good conference is one where every hour you face a hard dilemma, because there are just too many interesting presentations to choose from. 33rd was definitely such a conference, and the second day really shone.

There were two workshops running throughout the day, one about JEE6 and another about parallel programming in Java. I was considering both, but decided to go for presentations instead. Being on the Spring side of the force, I know just as much JEE as I need, and with the fantastic GPars (which has Fork/Join, actors, STM, and much more), I won't need to go back to Java concurrency for a while.
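
Since I keep mentioning GPars, here is a minimal sketch of the flavour I mean; it's my own toy example (it only assumes GPars is on the classpath) showing the two features I reach for most, parallel collections and actors:
import groovyx.gpars.GParsPool
import groovyx.gpars.actor.Actors

// Fork/Join-backed data parallelism: collectParallel runs the closure on a worker pool
GParsPool.withPool {
    assert (1..5).collectParallel { it * it } == [1, 4, 9, 16, 25]
}

// A tiny actor: handles one message and terminates, no shared mutable state involved
def printer = Actors.actor {
    react { msg -> println "got: $msg" }
}
printer.send "hello"
printer.join()   // wait for the actor to finish before the script ends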

GEB - Very Groovy browser automation

Luke Daley works for Gradleware, and apart from being a cheerful Australian, he's a committer to Grails and Spock and the guy behind Geb, a browser automation library built on WebDriver, somewhat similar to Selenium (though without the IDE and other features).

I have to admit, there was a time when I really hated Selenium. It just felt so wrong to be writing tests that way: slow, unproductive and against the beauty of TDD. For years I've been treating the frontend as a completely different animal. Uncle Bob once said at a Ruby conference: "I'll tell you what my solution to frontend tests is: I just don't". But then, you can only go so far with complex GUIs without tests, and once I started working with Wicket and its test framework, my perspective changed. If Wicket has one thing done right, it's the frontend testing framework. Sure, the tests are slow, on par with integration tests, but it is way better than anything where a browser has to start up front, and I could finally do TDD with it.

Working with Grails lately, I was more than eager to learn a proper way to do this kind of test with Groovy.

Geb looks great. You build your own API for every page you have, using CSS selectors very similar to jQuery, and then write your tests using your own DSL. Sounds a bit complicated, but assuming you are not doing simple HTML pages, this is probably the way to go fast. I'd have to verify that on a project though, since with the frontend too many things look good on paper and then fall apart in code.
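
To give you a taste, here is a minimal sketch of what I mean; the page, field names and URL are made up for illustration, but the content DSL and the jQuery-like $ selectors are how Geb works:
import geb.Page
import geb.spock.GebSpec

// A page object: your own API for the page, built from CSS selectors
class LoginPage extends Page {
    static url = "login"
    static at = { title == "Sign in" }
    static content = {
        username    { $("input[name=username]") }
        password    { $("input[name=password]") }
        loginButton { $("button[type=submit]") }
    }
}

// The test talks only to the page API, never to raw selectors
class LoginSpec extends GebSpec {
    def "user can log in with valid credentials"() {
        when:
        to LoginPage
        username.value("admin")
        password.value("secret")
        loginButton.click()

        then:
        $("h1").text() == "Welcome"
    }
}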

The presentation was great; Luke managed to answer all the questions and get people interested. On a side note, WebDriver may become a W3C standard soon, which would really ease browser manipulation for us. Apart from the things I expected Geb to have, there are some nice surprises, like working with remote browsers (e.g. IE on a remote machine), dumping HTML at the end of the test and even taking screenshots (assuming you are not working with a headless browser).

Micro services - Java, the Unix Way

James Lewis works for ThoughtWorks and gave a presentation which alone was worth the trip to Kraków. No, seriously, that was a gem I really didn't see coming. Let me explain what it was about and then why it was such a mind-opener.
ThoughtWorks had a client, a big investment bank: lots of cash, lots of requirements. They spent five weeks getting the analysis done on the highest possible level, without getting into details yet (JEDI: just enough design initially). The numbers were clear: it was enormous, it would take them forever to finish, and what's worse, the requirements were contradictory. The system had to have all three guarantees of the CAP theorem, a thing which is proven to be impossible.
So how do you deal with such a request? Being ThoughtWorks you probably never say "we can't", and having an investment bank for a client, you already smell the mountains of freshly printed money. This isn't something you don't want to try; it's just about as scary and challenging as it gets.
And then, looking at the requirements and drawing the initial architecture, they realized there was a way to see some light in this darkness, and not end up with one monstrous application which would be hard to finish and impossible to maintain. They analyzed the flows of data and came up with an idea.
What if we create several applications, each so small that you can literally "fit it in your head", each communicating over a simple web protocol (Atom), each doing one thing and one thing only, each with its own simple embedded web server, each working on its own port, and finding other services through some location mechanism? What if we don't treat the web as an external environment for our application, but instead build the system as if it were inside the web, with the advantages of all the web solutions like proxies and caches, just adding a small queue in front of each service, to be able to turn it off and on without losing anything? And we could even use a different technology, with a different pair of CAP guarantees, for each of those services/applications.
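Just to make the idea tangible, here is a rough sketch of how small one such service can get; it's my own illustration (the port, path and payload are invented), using only the JDK's built-in HTTP server: one concern, its own embedded server, its own port, talking a plain web format.
import com.sun.net.httpserver.HttpHandler
import com.sun.net.httpserver.HttpServer

// One micro service: a single concern, an embedded server, its own port.
// A real one would publish a proper Atom feed and register with a location mechanism.
def server = HttpServer.create(new InetSocketAddress(8081), 0)
server.createContext("/orders", { exchange ->
    byte[] body = '<feed><title>recent orders</title></feed>'.bytes
    exchange.responseHeaders.add("Content-Type", "application/atom+xml")
    exchange.sendResponseHeaders(200, body.length)
    def out = exchange.responseBody
    out.write(body)
    out.close()
} as HttpHandler)
server.start()
println "orders service listening on port 8081"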
Now let me tell you why this is so important to me.
If you read this blog, you may have noticed the subtitle "fighting chaos in the Dark Age of Technology". It's there because for my whole IT life I've been pursuing one goal: to be able to build things that are easy to maintain. Programming is pure pleasure, and as long as you stay near the "hello world" kind of complexity, you have nothing but fun. If we ever feel burned out, demotivated or puzzled, it's when our systems grow so much that we can no longer understand what's going on. We lose control. And from that point, it's usually just a slide downward, towards complete chaos and pain.
All the architecture, all the ideas, practices and patterns, are there for just this reason - to move the border of complexity further, to make the size of "possible to fit in your head" larger. To postpone going into chaos. To bring order and understanding into our systems.
And that really works. With TDD, DDD and CQRS I can build things which are larger in terms of features, and simpler in terms of complexity. After discovering and understanding the methods (XP, Scrum/Kanban), my next mental shift came with Domain Driven Design. I've learned the building blocks, the ideas, and the main concept of Bounded Contexts. And that you can and should use a different architecture/tools for each of them, simplifying the code with the usage patterns of that specific context in your mind.
That has changed a lot in my life. No longer do I have to choose one database, one language and one architecture for the whole application. I can divide and conquer, choose what I want to sacrifice and what advantages I want here, in this specific place of my app, not worrying about other places where it won't fit.
But there is one problem here: I have to limit the number of technologies I'm using, to keep the system simple and to avoid requiring omnipotence from whoever has to maintain it, fix bugs or implement Change Requests.
And here is the accidental solution that ThoughtWorks' micro services bring: if your system is built out of the web, of small services that do one thing only and communicate through a simple protocol (like Atom), there is little code to understand, and in case of bugs or Change Requests you can just tear down one of the services and build it anew.
James called that "Small enough to throw them away. Rewrite over maintain". Now, isn't that a brilliant idea? Say you have a system like that, built seven years ago, and you've got a big bag of new requests from your client. Instead of re-learning old technologies, or paying extra effort to try to bring them up to date (which is often simply impossible), you decide which services you are going to rewrite using the best tools of your time, and you do it, never having to dig into the original code, except for specification tests.
Too good to be true? Well, there are caveats. First, you need DevOps in your teams, to get the benefits of the web inside your system, and to build within the web as opposed to against it. Second, integration can be tricky. Third, there is not enough experience with this architecture yet to make it safe. Unless... unless you realize that UNIX was built this way, with small tools and pipes.
That, perhaps, is the best recommendation possible.

Concurrency without Pain in Pure Java

Throughout the whole conference, Grzegorz Duda had a publicly accessible wall with sticky notes and two sides: what's bad and what's good. One of the notes on the "bad" side said: "Sławek Sobótka and Paweł Lipiński at the same time? WTF?".
I had the same thought. I wanted to see both. I was luckier though, since I'm pretty sure I'll still be able to see their presentations this year, as 33rd is the first conference in a long run of conferences planned for 2012. Not being able to decide between the two, I went for Venkat Subramaniam and his talk about concurrency. Unless we are lucky at 4Developers, we probably won't see Venkat again this year.
Unfortunately for me, the talk ("show" seems like a more proper word) was very basic, and while very entertaining, not deep enough for me. Venkat used Clojure's STM to show how bad concurrency is in pure Java, and how easy it is with STM. What can I say, it's been repeated so often, it's kind of obvious by now.
Venkat didn't have enough time to show the Actor model in Java. That's sad, as the further his talk went, the more interesting it got. Perhaps there should be a few 90-minute sessions next year?

Smarter Testing with Spock

After lunch, I had a chance to go and see Sławek Sobótka again, but this time I decided to listen to one of the committers of Spock, the best thing in the testing world since Mockito.
Not really convinced? Gradle is using Spock (not surprisingly), and Spring is starting to use Spock. I've had some experience with Spock, and it was fabulous. We even had a Spock workshop at TouK lately. I wanted to see what Luke Daley could teach me in an hour.
That was time well spent. Apart from things I knew already, Luke explained how to share state between tests (@Shared), how to verify exceptions (thrown()), how to keep old values of variables (old()), how to parametrize descriptions with @Unroll and #parameterName, how to feed in data from a database or anywhere else with <<, and a few more advanced tricks with the mocking mechanism. Stubbing with closures was especially interesting.
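
A minimal sketch pulling most of those together (the stack example is mine, not Luke's, but @Shared, old(), thrown(), @Unroll with #names, << data pipes and closure-based stubbing are all real Spock features):
import spock.lang.*

class StackSpec extends Specification {
    @Shared List seed = ["a", "b"]          // @Shared: one instance reused by all feature methods

    interface Dictionary { String lookup(String key) }

    def "push grows the stack by one"() {
        given:
        def stack = new Stack()
        stack.addAll(seed)

        when:
        stack.push("c")

        then:
        stack.size() == old(stack.size()) + 1   // old(): value captured before the 'when' block
    }

    def "popping an empty stack blows up"() {
        when:
        new Stack().pop()

        then:
        thrown(EmptyStackException)             // thrown(): verify an exception was raised
    }

    @Unroll
    def "max of #a and #b is #c"() {            // #a, #b, #c end up in the test name
        expect:
        Math.max(a, b) == c

        where:
        a << [1, 7]                             // << pipes data in, e.g. from a db query
        b << [3, 4]
        c << [3, 7]
    }

    def "stubbing with a closure computes the answer from the argument"() {
        given:
        def dictionary = Mock(Dictionary)
        dictionary.lookup(_) >> { String key -> "translation of " + key }

        expect:
        dictionary.lookup("dom") == "translation of dom"
    }
}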

What's new in Groovy 2.0?

Guillaume Laforge is the project lead of Groovy, and his presentation was the opposite of what we saw earlier about the next versions of Java. The most visible changes were already done in 1.8, with all the AST transformations, and Guillaume spent some time re-introducing them, but then he moved on to 2.0, where, apart from multicatch in exception handling, the major things are static compilation and static type checking.
We are in the days where the performance difference between Java and Groovy falls to a mere 20%. That's really little compared to where it all started (orders of magnitude). That's cool. Also, after reading some posts and success stories about Groovy++, I'd really like to try static compilation with this language.
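For the record, this is roughly what the new annotations look like (my own toy example): @TypeChecked makes the compiler reject typos and type mismatches at compile time, while @CompileStatic additionally generates statically dispatched bytecode, which is where the performance gain comes from.
import groovy.transform.CompileStatic
import groovy.transform.TypeChecked

@TypeChecked
int shout(String message) {
    // message.lenght()        // would no longer compile: the type checker catches the typo
    return message.length()
}

@CompileStatic
long sumOfSquares(int n) {
    long sum = 0
    for (int i = 1; i <= n; i++) {
        sum += i * i            // compiled with static dispatch, close to plain Java bytecode
    }
    return sum
}

assert shout("hello") == 5
assert sumOfSquares(3) == 14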
Someone from the audience asked a good question: why not use Groovy++ as the base for static compilation instead? It turned out that the Groovy++ author was also there. The main reasons Guillaume gave were small differences in how they want to handle things internally. If static compilation works fine in 2.0, Groovy++ may soon die, I guess.

Scala for the Intrigued


For the last talk of the day, I chose a bit of Scala, by Venkat Subramaniam. That was unfortunately a completely basic introduction, and after spending 15 minutes listening to the differences between var and val, I left to prepare for the BOF session I had with Maciek Próchniak.

BOF: Beautiful failures


I'm not in a position to review my own talk and conclude whether its failure was beautiful or not, but there is one thing I've learned from it.
Never, under any circumstances, drink five coffees the day you give a talk. To keep my mind active without being overwhelmed by all the interesting knowledge, I drank those five coffees, and to my surprise, when the talk started, the adrenaline shot brought me over the level where you lose control of your breath, your pulse, and your own voice. Not a really nice experience. I had the effects of caffeine intoxication for the next two days. Lesson learned: I'm staying away from black beans for some time.
If you want the slides, you can find them here.
And that was the end of the day. We went to the party, then to the afterparty, we got drunk, we got a soft reset of our caches, and there came another day of the conference.

You can find my review of the last day here.

March 24, 2012

33rd Degree day 1 review

33rd Degree is over. After last year's edition, my expectations were very high, but Grzegorz Duda once again proved he's more than able to deliver. With up to five tracks (most of the time: four presentations + one workshop) and ~650 attendees, there was a lot to see and a lot to do, so everyone will probably have a slightly different story to tell. Here is mine.

Twitter: From Ruby on Rails to the JVM

Raffi Krikorian talking about Twitter and JVM
The conference started with Raffi Krikorian from Twitter, talking about their use of the JVM. Twitter was built with Ruby, but for performance reasons a lot of the backend was moved to Scala, Java and Clojure. Raffi noted that for Ruby programmers Scala was easier to grasp than Java, more natural, which is quite interesting considering how many PHP guys move to Ruby these days for the same reasons. Perhaps the learning path Jacek Laskowski once described (Java -> Groovy -> Scala/Clojure) may be on par with PHP -> Ruby -> Scala. It definitely feels like Scala is the holy grail of languages these days.

Raffi also noted that while the JVM delivered speed and a concurrency model to the Twitter stack, it wasn't enough, and they've built/customized their own garbage collector. My guess is that Scala/Clojure could also have been chosen because of their nice concurrency solutions (STM, immutability and so on).

Raffi pointed out that at Twitter's scale you easily get 3 million hits per second, and that means you probably hit 3 edge cases every second. I'd love to listen to the lessons they've learned from this.

 

Complexity of Complexity


The second keynote of the first day was Ken Sipe talking about complexity. He made a good point that there is a difference between complex and complicated, and that we often recognize things as complex only because we are less familiar with them. This gets more interesting the moment you realize that the shift of the last 20 years in programming languages, from the "less is more" paradigm (think Java, ASM) to "more is better" (Groovy/Scala/Clojure), gives you a more complex language with a more powerful and less verbose syntax, which is actually not more complicated; it just looks less familiar.

So while 10 years ago I really liked Java as a general-purpose language for its small set of rules that could get you everywhere, it turned out that to do most of the real-world stuff, a lot of code had to be written. The situation got better thanks to libraries/frameworks and so on, but that's just patching. New languages have a lot of stuff built in, which makes their set of rules and syntax much more complex, but once you get familiar with them, real-world usage is simpler, faster, better, with fewer traps lying around, waiting for you to fall in.

Ken also pointed out that while an Enterprise Service Bus looks really simple on diagrams, it's usually very difficult and complicated to use from the perspective of the programmer. And that's probably why it gets chosen so often - the guys selling/buying it look no deeper than the diagram.

 

Pointy haired bosses and pragmatic programmers: Facts and Fallacies of Software Development

Venkat Subramaniam with Dima
Dima got lucky. Or maybe not.

Venkat Subramaniam is the kind of speaker who talks about very simple things in a way that makes everyone either laugh or reflect. Yes, he is a showman, but hey, that's actually good, because even if you know the subject quite well, his talks are still very entertaining.
This talk was very generic (here's my thesis: the longer the title, the more generic the talk will be), interesting and fun, but in the end I can't name anything new I learned, apart from the distinction between dynamic vs static and strong vs weak typing, which I had seen last year but managed to forget. This may be a very interesting argument for all those who are afraid of Groovy/Ruby after a bad experience with PHP or Perl.

Build Trust in Your Build to Deployment Flow!


Frederic Simon talked about DevOps and deployment, and that was a miss in my schedule for two reasons. First, the talk was aimed at DevOps specifically, and while the subject is trendy lately, without big-scale problems deployment is a process I usually set up and forget about. It just works, mostly because I only have to deal with one (current) project at a time.
Not much love for Dart.
Second, while Frederic has a fabulous accent and a nice, loud voice, he tends to start each sentence loudly and fade out at the end. This, together with the mics failing him badly, made half of the presentation hard to grasp unless you were sitting in the first row.
I'm not saying the presentation was bad, far from it, it just clearly wasn't for me.
I left a few minutes before the end to see how many people came to the Dart presentation by Mike West. I was kind of interested, since I'm following the Warsaw Google Technology User Group and heard a few voices about why I should pay attention to that new Google language. As you can see from the picture on the right, the majority tends to disagree with that opinion.

 

Non blocking, composable reactive web programming with Iteratees

Sadek Drobi's talk about Iteratees in Play 2.0 was very refreshing. Perhaps because I've never used Play before, but the presentation felt flawless, with well-explained problems, concepts and solutions.
Sadek started with a reflection on how much CPU we waste waiting for IO in web development, then moved on to Play's Iteratees, explaining the concept and the implementation, which, while very different from the overused Request/Servlet model, looked really nice and simple. I'm not sure, though, how much of the problem remains when you have a simple service serving static content in front of your app server. Think Apache (or something faster) in front of Tomcat. That won't fix the upload/download issue though, which is beautifully solved in Play 2.0.

The Future of the Java Platform: Java SE 8 & Beyond


Simon Ritter is an intriguing fellow. If you take a glance at his work history (AT&T UNIX System Labs -> Novell -> Sun -> Oracle), you can easily see he's a heavyweight player.
His presentation was rich in content, with no corpo-bullshit. He started with a bit of history of the JCP and how it looks right now, then moved on to the most interesting stuff: the changes. Now, I could give you a summary here, but there is really no point: you'd be much better off taking a look at the slides. There are only 48 of them, but everything is self-explanatory.
While I'm very disappointed with the speed of changes, especially when compared to the C# world, I'm glad about the direction and the fact that they finally want to BREAK compatibility with the broken stuff (generics, etc.). Moving to other languages, I guess I won't be the one to scream "My god, finally!" somewhere in 2017, though. All the changes together look very promising; it's just that I'd like to have them like... now? Next year max, not near the heat death of the universe.

Simon also revealed one of the great mysteries of Java to me:
The original idea behind JNI was to make it hard to write, to discourage people from using it.
On a side note, did you know the Tegra 3 actually has 5 cores? You use 4 of them, and then switch to the fifth when your battery gets low.

BOF: Spring and CloudFoundry


With most of my folks gone to see "Typesafe stack 2.0", fabulously organized by Rafał Wasilewski and Wojtek Erbetowski (with both of whom I had the pleasure of travelling to the conference), and knowing it would be recorded, I decided to see what Josh Long had to say about CloudFoundry, a subject I find very intriguing after the de facto fiasco of Google App Engine.

The audience was small but vibrant, mostly users of Amazon EC2, and while it turned out that Josh didn't have much to share yet, with pricing and details not yet public, the fact that SpringSource has already created its own competition (Cloud Foundry is both an open-source app and a service) takes away a lot of my anxiety.

For the review of the second day of the conference, go here.

March 19, 2012

Beautiful Failures at 33rd Degree

33rd Degree in Kraków is rolling, baby.

Tomorrow, together with Maciek Próchniak, we are giving a talk about failures.

There is a problem with failures within our culture, and by our I mean central and eastern Europe. In San Francisco there are regular meetings, called Mobile Monday, where speakers start by saying how many start-ups they have failed at, and it's seen as a reference to their wisdom: in the end, they've learned a lot from all those failures. And it's not limited to San Francisco or Mobile Monday. It's their culture; every failure makes you smarter. In the US it's OK to fail.

Have you heard the story about the Japanese train controller who committed suicide when two trains in a row were late because of his mistakes? Europe may not be that extreme, but it's still considered at least inappropriate to admit that you ever made a mistake.

If we never admit, if we never reflect, we never learn. So we are changing the rules for an hour. There is nothing good or bad without a context, and we would like to share the circumstances under which things don’t work.

Things like:
  • shared responsibility
  • self organized teams
  • gamification
  • open source
  • metaprogramming
  • ‘enterprise’ technologies

See you there.

January 23, 2012

How to get to TouK

This Tuesday the architect-wannabe group has its meeting at TouK at 18:00. Since TouK is not an easy company to find, here's the video tutorial Piotr Przybylak asked for. It starts at Rondo Zesłańców Syberyjskich (driving from the north). Hope it helps.

Thanks to Maciej Próchniak for participation :)



And here is a map:


January 12, 2012

Bash'ing your git deployment

Chuck Norris deploys after every commit. Smart men deploy after every successful build on their Continuous Integration server. Educated men deploy code directly from their distributed version control systems. I, being neither, had to write my deployment script in bash.

We're using git and while doing so I wanted us to:
  • deploy from working copy, but...
  • make sure that you can deploy only if you committed everything
  • make sure that you can deploy only if you pushed everything upstream
  • tag the deployed hash
  • display changelog (all the commits between two last tags)

Here are some BASH procedures I wrote on the way, if you need them:

make sure that you can deploy only if you committed everything
verifyEverythingIsCommited() {
    gitCommitStatus=$(git status --porcelain)
    if [ "$gitCommitStatus" != "" ]; then
        echo "You have uncommited files."
        echo "Your git status:"
        echo $gitCommitStatus
        echo "Sorry. Rules are rules. Aborting!"
        exit 1
    fi
}

make sure that you can deploy only if you pushed everything upstream
verifyEverythingIsPushedToOrigin() {
    gitPushStatus=$(git cherry -v)
    if [ "$gitPushStatus" != "" ]; then
        echo "You have local commits that were NOT pushed."
        echo "Your 'git cherry -v' status:"
        echo "$gitPushStatus"
        echo "Sorry. Rules are rules. Aborting!"
        exit 1
    fi
}

tag the deployed hash

Notice: my script takes the first parameter as the name of the server to deploy to (this is the $1 passed to this procedure). Also notice that 'git push' without '--tags' does not push your tags.
tagLastCommit() {
    d=$(date '+%y-%m-%d_%H-%M-%S')
    git tag "$1_$d"
    git push --tags
}

This creates nice looking tags like these:
preprod_12-01-11_15-16-24
prod_12-01-12_10-51-33
test_12-01-11_15-11-10
test_12-01-11_15-53-42

display changelog (all the commits between two last tags)
printChangelog() {
    echo "This is changelog since last deploy. Send it to the client."
    twoLastHashesInOneLine=$(git show-ref --tags -s | tail -n 2 | tr "\\n" "-");
    twoLastHashesInOneLineWithThreeDots=${twoLastHashesInOneLine/-/...};
    twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd=$(echo $twoLastHashesInOneLineWithThreeDots | sed 's/-$//');
    git log --pretty=oneline --no-merges --abbrev-commit  $twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd
}

The last command gives you a nice log like this:
e755c63 deploy: fix for showing changelog from two first tags instead of two last ones
926eb02 pringing changelog between last two tags on deployment
34478b2 added git tagging to deploy


Atom Feeds with Spring MVC

How to add feeds (Atom) to your web application with just two classes?
How about Spring MVC?

Here are my assumptions:
  • you are using Spring framework
  • you have some entity, say “News”, that you want to publish in your feeds
  • your "News" entity has creationDate, title, and shortDescription
  • you have some repository/dao, say "NewsRepository", that will return the news from your database
  • you want to write as little as possible
  • you don't want to format Atom (xml) by hand
You actually do NOT need to use Spring MVC in your application already. If you do, skip to step 3.


Step 1: add Spring MVC dependency to your application
With maven that will be:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>3.1.0.RELEASE</version>
</dependency>

Step 2: add Spring MVC DispatcherServlet
With web.xml that would be:
<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring-mvc.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/feed</url-pattern>
</servlet-mapping>
Notice, I set the url-pattern to “/feed” which means I don't want Spring MVC to handle any other urls in my app (I'm using a different web framework for the rest of the app). I also give it a brand new contextConfigLocation, where only the mvc configuration is kept.

Remember that when you add a DispatcherServlet to an app that already has Spring (from a ContextLoaderListener, for example), your context is inherited from the global one, so you should not create beans that already exist there again, or include xml that defines them. Watch out for the Spring context coming up twice, and refer to the Spring or Servlet documentation to understand what's happening.

Step 3. add ROME – a library to handle Atom format
With maven that is:
<dependency>
    <groupId>net.java.dev.rome</groupId>
    <artifactId>rome</artifactId>
    <version>1.0.0</version>
</dependency>

Step 4. write your very simple controller
@Controller
public class FeedController {
    static final String LAST_UPDATE_VIEW_KEY = "lastUpdate";
    static final String NEWS_VIEW_KEY = "news";
    private NewsRepository newsRepository;
    private String viewName;

    protected FeedController() {} //required by cglib

    public FeedController(NewsRepository newsRepository, String viewName) {
        notNull(newsRepository); hasText(viewName);
        this.newsRepository = newsRepository;
        this.viewName = viewName;
    }

    @RequestMapping(value = "/feed", method = RequestMethod.GET)        
    @Transactional
    public ModelAndView feed() {
        ModelAndView modelAndView = new ModelAndView();
        modelAndView.setViewName(viewName);
        List<News> news = newsRepository.fetchPublished();
        modelAndView.addObject(NEWS_VIEW_KEY, news);
        modelAndView.addObject(LAST_UPDATE_VIEW_KEY, getCreationDateOfTheLast(news));
        return modelAndView;
    }

    private Date getCreationDateOfTheLast(List<News> news) {
        if(news.size() > 0) {
            return news.get(0).getCreationDate();
        }
        return new Date(0);
    }
}
And here's a test for it, in case you want to copy&paste (who doesn't?):
@RunWith(MockitoJUnitRunner.class)
public class FeedControllerShould {
    @Mock private NewsRepository newsRepository;
    private Date FORMER_ENTRY_CREATION_DATE = new Date(1);
    private Date LATTER_ENTRY_CREATION_DATE = new Date(2);
    private ArrayList<News> newsList;
    private FeedController feedController;

    @Before
    public void prepareNewsList() {
        News news1 = new News().title("title1").creationDate(FORMER_ENTRY_CREATION_DATE);
        News news2 = new News().title("title2").creationDate(LATTER_ENTRY_CREATION_DATE);
        newsList = newArrayList(news2, news1);
    }

    @Before
    public void prepareFeedController() {
        feedController = new FeedController(newsRepository, "viewName");
    }

    @Test
    public void returnViewWithNews() {
        //given
        given(newsRepository.fetchPublished()).willReturn(newsList);
        
        //when
        ModelAndView modelAndView = feedController.feed();
        
        //then
        assertThat(modelAndView.getModel())
                .includes(entry(FeedController.NEWS_VIEW_KEY, newsList));
    }

    @Test
    public void returnViewWithLastUpdateTime() {
        //given
        given(newsRepository.fetchPublished()).willReturn(newsList);

        //when
        ModelAndView modelAndView = feedController.feed();

        //then
        assertThat(modelAndView.getModel())
                .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, LATTER_ENTRY_CREATION_DATE));
    }

    @Test
    public void returnTheBeginningOfTimeAsLastUpdateInViewWhenListIsEmpty() {
        //given
        given(newsRepository.fetchPublished()).willReturn(new ArrayList<News>());

        //when
        ModelAndView modelAndView = feedController.feed();

        //then
        assertThat(modelAndView.getModel())
                .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, new Date(0)));
    }
}
Notice: here, I'm using fest-assert and mockito. The dependencies are:
<dependency>
 <groupId>org.easytesting</groupId>
 <artifactId>fest-assert</artifactId>
 <version>1.4</version>
 <scope>test</scope>
</dependency>
<dependency>
 <groupId>org.mockito</groupId>
 <artifactId>mockito-all</artifactId>
 <version>1.8.5</version>
 <scope>test</scope>
</dependency>

Step 5. write your very simple view
Here's where all the magic formatting happens. Be sure to take a look at all the methods of the Entry class, as there is quite a lot you may want to use/fill.
import org.springframework.web.servlet.view.feed.AbstractAtomFeedView;
[...]

public class AtomFeedView extends AbstractAtomFeedView {
    private String feedId = "tag:yourFantastiSiteName";
    private String title = "yourFantastiSiteName: news";
    private String newsAbsoluteUrl = "http://yourfanstasticsiteUrl.com/news/"; 

    @Override
    protected void buildFeedMetadata(Map<String, Object> model, Feed feed, HttpServletRequest request) {
        feed.setId(feedId);
        feed.setTitle(title);
        setUpdatedIfNeeded(model, feed);
    }

    private void setUpdatedIfNeeded(Map<String, Object> model, Feed feed) {
        @SuppressWarnings("unchecked")
        Date lastUpdate = (Date)model.get(FeedController.LAST_UPDATE_VIEW_KEY);
        if (lastUpdate != null && (feed.getUpdated() == null || lastUpdate.compareTo(feed.getUpdated()) > 0)) {
            feed.setUpdated(lastUpdate);
        }
    }

    @Override
    protected List<Entry> buildFeedEntries(Map<String, Object> model, HttpServletRequest request, HttpServletResponse response) throws Exception {
        @SuppressWarnings("unchecked")
        List<News> newsList = (List<News>)model.get(FeedController.NEWS_VIEW_KEY);
        List<Entry> entries = new ArrayList<Entry>();
        for (News news : newsList) {
            addEntry(entries, news);
        }
        return entries;
    }

    private void addEntry(List<Entry> entries, News news) {
        Entry entry = new Entry();
        entry.setId(feedId + ", " + news.getId());
        entry.setTitle(news.getTitle());
        entry.setUpdated(news.getCreationDate());
        entry = setSummary(news, entry);
        entry = setLink(news, entry);
        entries.add(entry);
    }

    private Entry setSummary(News news, Entry entry) {
        Content summary = new Content();
        summary.setValue(news.getShortDescription());
        entry.setSummary(summary);
        return entry;
    }

    private Entry setLink(News news, Entry entry) {
        Link link = new Link();
        link.setType("text/html");
        link.setHref(newsAbsoluteUrl + news.getId()); //because I have a different controller to show news at http://yourfanstasticsiteUrl.com/news/ID
        entry.setAlternateLinks(newArrayList(link));
        return entry;
    }

}

Step 6. add your classes to your Spring context
I'm using the xml approach because I'm old and I love xml. No, seriously, I use xml because I may want to declare the FeedController a few times with different views (RSS 1.0, RSS 2.0, etc.).

So this is the aforementioned spring-mvc.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
        <property name="mediaTypes">
            <map>
                <entry key="atom" value="application/atom+xml"/>
                <entry key="html" value="text/html"/>
            </map>
        </property>
        <property name="viewResolvers">
            <list>
                <bean class="org.springframework.web.servlet.view.BeanNameViewResolver"/>
            </list>
        </property>
    </bean>

    <bean class="eu.margiel.pages.confitura.feed.FeedController">
        <constructor-arg index="0" ref="newsRepository"/>
        <constructor-arg index="1" value="atomFeedView"/>
    </bean>

    <bean id="atomFeedView" class="eu.margiel.pages.confitura.feed.AtomFeedView"/>
</beans>

And you are done.

I've been asked a few times before to put all the working code in some public repo, so this time it's the other way around: I've described things that I had already published, and you can grab the commit from Bitbucket.

Hope that helps.

September 20, 2011

JBoss Envers and Spring transaction managers

I stumbled upon a bug in my configuration of JBoss Envers today, despite having integration tests all over the application. I have to admit, for a moment it cast a dark shadow of doubt over the value of all those tests. I've been practicing TDD since 2005, and frankly speaking, I should have been smarter than that.

My fault was simple. I started using Envers the right way, with exploratory tests and a prototype. Then I deleted the prototype and created some integration tests, using an in-memory H2 database, that looked more or less like this example:

@Test
public void savingAndUpdatingPersonShouldCreateTwoHistoricalVersions() {
    //given
    Person person = createAndSavePerson();
    String oldFirstName = person.getFirstName();
    String newFirstName = oldFirstName + "NEW";

    //when
    updatePersonWithNewName(person, newFirstName);

    //then
    verifyTwoHistoricalVersionsWereSaved(oldFirstName, newFirstName);
}

private Person createAndSavePerson() {
    Transaction transaction = session.beginTransaction();
    Person person = PersonFactory.createPerson();
    session.save(person);
    transaction.commit();
    return person;
}    

private void updatePersonWithNewName(Person person, String newName) {
    Transaction transaction = session.beginTransaction();
    person.setFirstName(newName);
    session.update(person);
    transaction.commit();
}

private void verifyTwoHistoricalVersionsWereSaved(String oldFirstName, String newFirstName) {
    List<Object[]> personRevisions = getPersonRevisions();
    assertEquals(2, personRevisions.size());
    assertEquals(oldFirstName, ((Person)personRevisions.get(0)[0]).getFirstName());
    assertEquals(newFirstName, ((Person)personRevisions.get(1)[0]).getFirstName());
}

private List<Object[]> getPersonRevisions() {
    Transaction transaction = session.beginTransaction();
    AuditReader auditReader = AuditReaderFactory.get(session);
    List<Object[]> personRevisions = auditReader.createQuery()
            .forRevisionsOfEntity(Person.class, false, true)
            .getResultList();
    transaction.commit();
    return personRevisions;
}

Because Envers inserts audit data when the transaction is committed (in a new temporary session), I thought I had to create and commit the transaction manually. And that is true, up to a point.

My fault was that I didn't have an end-to-end integration/acceptance test that would call the entry point of the application (in this case a service which is called by GWT via RPC), because then I would have noticed that the Spring @Transactional annotation and calling transaction.commit() are two very different things.

The Spring @Transactional annotation will use the transaction manager configured for the application. Envers, on the other hand, is hooked in by subscribing a listener to Hibernate's SessionFactory, like this:

<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean" >        
...
 <property name="eventListeners">
     <map key-type="java.lang.String" value-type="org.hibernate.event.EventListeners">
         <entry key="post-insert" value-ref="auditEventListener"/>
         <entry key="post-update" value-ref="auditEventListener"/>
         <entry key="post-delete" value-ref="auditEventListener"/>
         <entry key="pre-collection-update" value-ref="auditEventListener"/>
         <entry key="pre-collection-remove" value-ref="auditEventListener"/>
         <entry key="post-collection-recreate" value-ref="auditEventListener"/>
     </map>
 </property>
</bean>

<bean id="auditEventListener" class="org.hibernate.envers.event.AuditEventListener" />

Envers creates and collects something called AuditWorkUnits whenever you update/delete/insert audited entities, but audit tables are not populated until something calls AuditProcess.beforeCompletion, which makes sense. If you are using org.hibernate.transaction.JDBCTransaction manually, this is called on commit(), when all subscribed javax.transaction.Synchronization objects are notified (and Envers' AuditProcess is one of them).

The problem was that I used the wrong transaction manager.

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager" >
    <property name="dataSource" ref="dataSource"/>
</bean>

This transaction manager doesn't know anything about Hibernate and doesn't use org.hibernate.transaction.JDBCTransaction. While Synchronization is an interface from the javax.transaction package, DataSourceTransactionManager doesn't use it (maybe for simplicity; I didn't dig deep enough into org.springframework.jdbc.datasource), and thus Envers seems to work fine, except it never pushes the audit data to the database.

Which is the whole point of using Envers.

Use the right tool for the task, they say. The whole problem is solved by using a transaction manager that is aware of Hibernate underneath.

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager" >
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

Lesson learned: always make sure your acceptance tests are testing the right thing. If there is any doubt about the value of your tests, you just don't have enough of them.