Crucible seduces with its Ajax interface. The prospect of clicking a button next to a changeset, drilling into the source files, clicking anywhere inside, and quickly adding comments seems unbeatable. In practice, while Fisheye and Crucible are useful, it's too easy to create reviews loaded with cruft, so the majority of the team ends up at one extreme or the other: super sloppy, lazy reviews where the coder just clicked the changeset button, typed a one-phrase description, and considered the job done, or the person who never creates a review and is never prompted to. Considering how many dimwitted simpletons run around yelping about how great their unit test coverage is, it's striking that no one talks about code review coverage. That would be a 20x more useful metric, but it's nowhere to be found in the current offerings. While Crucible is capable and even a pleasure to use, a year of their support/updates severely underwhelmed, and the minute the year ended, the torrent of mails that carpeted my inbox made me think a diet of nothing but cotton candy would be preferable to capitulation.
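To make the idea concrete: review coverage could be as simple as the fraction of changed lines that at least one review has touched. Neither Crucible nor Jupiter reports anything like this, so the sketch below is entirely hypothetical, with invented names and data:

```python
# Hypothetical "review coverage" metric: the fraction of changed lines
# that at least one review has touched. No current tool computes this;
# all names and inputs here are invented for illustration.

def review_coverage(changed_lines, reviewed_lines):
    """Both arguments are dicts mapping file path -> set of line numbers."""
    total = sum(len(lines) for lines in changed_lines.values())
    if total == 0:
        return 1.0  # nothing changed, trivially covered
    covered = sum(
        len(lines & reviewed_lines.get(path, set()))
        for path, lines in changed_lines.items()
    )
    return covered / total

changed = {"Foo.java": {10, 11, 12, 40}, "Bar.java": {5, 6}}
reviewed = {"Foo.java": {10, 11, 12}, "Baz.java": {1}}
print(review_coverage(changed, reviewed))  # 3 of 6 changed lines reviewed -> 0.5
```

A real implementation would pull the changed lines from the version control history and the reviewed lines from whatever the review tool stores, but even this toy version shows how little machinery the metric would take.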
Anyway, now I have been asked to recommend a tool for another team, and I am looking at Jupiter again. My complaint before was that the user interface was kind of hokey. One thing Jupiter has going for it (over Crucible) is that you are in the code when you are making the review, and when the author looks at the items you identified, he or she can click on them (they are Markers in Eclipse) and jump to the spot in the code. Of course, you can imagine that this is mostly going to be useful for pointing out very simple things, like names. You could just make those changes yourself and then do a review in Crucible showing the original author. Then again, the marker approach opens up a much tighter cycle: less retrospective, more buddy system in the code. Tighter and more iterative means more agile (or at least it sounds good...).
Going back to Jupiter, all my problems with it are in the user interface, and it does not appear to have changed much. I thought the project was dead, but when I prodded it I saw that there are actually signs of life: releases, a blog, etc. However, either no one has told the authors the interface could be better, or they are stubborn. I should probably make a screencast to illustrate this in more depth, but mostly it boils down to this: when you are creating incidents, you click a button to add one and then fill out some fields. You do have control over the type of incident, what resolution you are seeking, etc., which is good. The problem is that to fill those things in and then follow them, you have to go back and forth between a couple of tabs, which is really not a good approach. (Why hasn't someone on this team figured out that an interface to Mylyn would be a good idea?)
As with all tools, the gating factors for whether they get used are ease of entry first (Crucible wins this one, but not by much), and tracking capabilities. On tracking, I kind of prefer Jupiter, but it's a bit too loose, since there isn't one public place, like a cork board, with the open issues stuck to it. Then again, the dashboard in Crucible, for most of the people on our team, turned into a clutter of rotting stuff very quickly and never really imparted the sense of mission that is so lacking. Tools, after all, are supposed to focus first on extending our range. The 7 ± 2 rule says that human beings can only keep between five and nine concepts going at one time. As codebases expand, marking regions as having been adequately covered by more than one person would go a long way toward tempering the onset of Magoo's Disorder (when the field of vision recedes to barely past the end of the nose). Neither of these tools does that.