Latest on Review Tools
Well, things were going OK with Jupiter; then my review file somehow got detached from the perspective. I can see the file in the project, but there are no items in the table anymore. The preferences/configuration pages don’t have any mapping that I could find. The docs are basically a page that just tells you what the fields in the 5 or so views are for. I muddled around a bit and decided it was time to look at the Atlassian Connector. Yeah, since my last post on this topic, this new Mylyn interface has come out.
So I got it installed today. Looks pretty promising. There doesn’t seem to be a way to create a new review from inside the code, BUT you can flip into Crucible’s web interface through a tab, which is probably fine since I generally create reviews from commit sets (and jack the files I don’t want, etc.). I’m not sure if the workflow they expect users to follow is to create the first pass exclusively through the web. The Release Notes here are good, but they leave me with that impression, which is not necessarily bad, though I will have to see how I like it.
Meantime, I decided to look at the code for Jupiter to see if there was any possibility of just contributing some time and helping out on a few of these items. Well, it turns out that the code does not look like something I would want to put time into. The model is a big singleton that is hoisted out of the file. I took a look at how the marker stuff is done. I have used markers before in doing PDE dev; reading that code was a nice flashback. Made me think about doing a codegen model that could wrap itself around the JFace pieces. May still do that.
In the meantime, I am going back to Crucible for a while. What I would like to see in this plugin is some of the coverage stuff I was talking about in my prior post. If I could get ECF working, that would be great. I like reviews as guides. Sure, if we were all doing BDD out to the nth degree, we could just read the tests, but let’s get real, people: there are reasons to converse with each other, especially if we are going to work together on pieces. The ‘hey, just read the tests’ crowd are probably mostly perpetrating some form of Caveman. (That’s the process antipattern where ‘teams’ are hunting clans and actual work is done after each one takes a limb off to gnaw on it.) Or they hold a lot of meetings. The problem with that is we all take notes that end up in some pad somewhere, or spend an hour coming up with an action plan that one marker and a sentence would have covered.
As I’ve thought again about the idea of doing coverage on the review side, I’ve started to conclude that maybe the SDLC stuff is still just a huge publishing model with a lot of state management that we are not even thinking about yet. The New York Times doesn’t put out op-eds that haven’t been spell-checked and edited by a few people, but we regularly publish code that hasn’t really been checked by anyone. Even if there are ‘tests,’ how many times have you drilled into something to find that the tests were all just checking things like whether a get method returns the field of the same name? Programmers would think of a publishing lifecycle as a nightmare. Actually, I think if it were done the right way, programmers would like it. I have always been inclined to want to have people check things that I do. So some combination of origination with the author, plus some team-focused way of seeing what still has to be looked over before something can be promoted, might be enough... maybe.