Lean tools: synchronization
In Scrum, XP, and any method that employs iterations, it is common to have a planning phase at the start of each iteration to decide what to put in it. The set of stories (vertical slices of functionality) to be developed by a cross-functional team requires dialogue and synchronization between different people, such as the database administrator, the developers, front-end people, and designers.
For example, in a typical story, as a developer you may need to add a few new fields to a table. The designer in charge of a new widget will need a back-end implementation based on customized queries, and the responsibility between client and server will have to be separated.
Once an iteration starts, we'll have to synchronize the work of different people. The main synchronization mechanism today is a branch under version control which contains any source material needed for building the application, from the database schema definition (if there is any) to code and graphical items.
Continuous Integration can be interpreted both as:
- a technical practice, where a process checks out every new commit and builds the application, running the tests to catch integration problems.
- a methodological practice, where team members integrate very frequently (once a day, for example) with the branch.
Continuous Integration of both kinds, executed on a main line of development (trunk or master), is the current trend, but we should discount some confounding variables, like the version control system in use, to find the reason behind it. While a DVCS like Git offers you both the Feature Branching and Feature Toggle models, centralized ones like Subversion often make merging difficult and limit your options.
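A feature toggle lets unfinished work live on the main line while staying dark until it is switched on. A minimal sketch (the flag names and functions are illustrative, not taken from any specific library):

```python
# Feature-toggle sketch: code for both paths ships on trunk,
# but only the enabled path runs. Names are invented for illustration.
TOGGLES = {
    "new_checkout": False,  # still under development, dark in production
    "fast_search": True,    # finished and rolled out
}

def is_enabled(feature):
    """Return whether a feature is active; unknown features default to off."""
    return TOGGLES.get(feature, False)

def checkout(cart):
    """Dispatch to the new flow only when its toggle is on."""
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "legacy checkout flow"
```

In production the dictionary would typically be replaced by a configuration file or a runtime service, so a flag can be flipped without a new deployment.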
I usually prefer a single line of development, but I can't say feature branches are painful per se; there are reports of many people being productive with them. But I think there are two development models in contrast here:
- the Microsoft/Apple model, where software is shipped in yearly launches and multiple branches, like Word 2007 and 2010, are maintained. These choices were driven by technology constraints in the past, but Apple leverages the model to sell new hardware every year.
- the Google/Facebook model, where features are A/B tested and evolved on the current line of development, which is in an eternal beta. At least in the small (I haven't taken a look at Google's single repository), it makes sense to keep all efforts on a single line of development.
The first approach would migrate a database with a script, hiding or dropping fields while adding others. The second approach would introduce the new fields incrementally, duplicating data for a while, and only transform the code that relies on the old fields later. The Lean Startup movement is close to the latter position, in order to favor experimentation.
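The incremental approach is often described as expand/contract: first add the new fields alongside the old one, backfill them, and only drop the old field once no code reads it. A minimal sketch with SQLite (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Expand: add the new columns alongside the old one; both coexist,
# so code reading `fullname` keeps working during the transition.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Backfill the new columns from the old field.
for user_id, fullname in conn.execute("SELECT id, fullname FROM users").fetchall():
    first, _, last = fullname.partition(" ")
    conn.execute(
        "UPDATE users SET first_name = ?, last_name = ? WHERE id = ?",
        (first, last, user_id),
    )
conn.commit()

# Contract (much later, once every reader uses the new columns):
# drop `fullname`. Older SQLite versions lack DROP COLUMN, so this
# final step may require rebuilding the table.
```

The "contract" step is deliberately left as a comment: it only happens after all branches and deployed versions have stopped reading the old field.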
The problem with multiple branches is that they may duplicate effort or discourage refactoring: on a single trunk, it's easy to change a method name; with multiple branches, the change causes long-running merges, performed weeks later, to fail. It's the old beast of code duplication: synchronizing all copies of the code.
Thus I would suggest evaluating the advantages multiple branches offer you and deciding whether they outweigh their duplication issues. Just as we shouldn't adopt trunk-based development only because what we have is Subversion, we shouldn't create dozens of feature branches just because we are now in a Git repository.
There is one fundamental pattern to apply to the initial state of an empty application to accommodate evolution by many different team members later. It is known as a Spanning Application, but also as a Tracer Bullet or Walking Skeleton.
With this pattern, you develop the simplest version of a story that allows you to exercise all the technical pieces (code, database, deployment, front end) and all the team members' abilities. Once you have a walking skeleton, an automated integration process should be in place; a commit by any team member will trigger it and expand the skeleton.
The goal of the pattern is to complete the initial synchronization of a team; once the skeleton is in place, different people can work on different parts more easily, as there is already scaffolding keeping up the building.
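A walking skeleton can be as small as one value flowing through every layer: persistence, logic, and rendering, wired together end to end. The functions and schema below are an invented, minimal stand-in for those layers:

```python
import sqlite3

# Persistence layer: the thinnest possible slice, one stored field.
def save_greeting(conn, text):
    conn.execute("INSERT INTO greetings (text) VALUES (?)", (text,))

def load_latest(conn):
    row = conn.execute(
        "SELECT text FROM greetings ORDER BY id DESC LIMIT 1").fetchone()
    return row[0]

# "Front end" layer: trivial rendering, just enough to exercise the path.
def render(text):
    return f"<h1>{text}</h1>"

# Wiring, standing in for real deployment: every layer is touched once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE greetings (id INTEGER PRIMARY KEY, text TEXT)")
save_greeting(conn, "Hello, world")
page = render(load_latest(conn))
```

Nothing here is useful functionality yet; the point is that once this slice builds and runs under the automated integration process, each team member can grow their own layer independently.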
When an application has a larger scale, the initial definition of the architecture may focus on a higher level of abstraction than a skeleton: the development of the interfaces between the components assigned to teams.
When communication channels between components have been identified, people from the two interested teams are called in to decide on an interface. The main problem remains getting the architecture (the interfaces) right, but at least who communicates with whom is made clear.
There are entire books that can be written on the negotiation of interfaces between different subsystems; I think the best expression of the process today is Context Mapping.
Taking the time to break up the system is plain old engineering (divide and conquer). You'll have regular integration at the defined points, and only a subset of two teams discussing interface creation and evolution.
Open source example: PHPUnit_Selenium and the JSON Wire Protocol
In the realm of end-to-end browser-based testing, Selenium is king. However, in its 2.x version it has merged with WebDriver (another, similar project), and a model for drivers and runners has been defined:
- test drivers, like the Selenium Server, are capable of driving one or more kinds of browser and of managing multiple user sessions.
- test runners use browser sessions to exercise web applications, making them execute HTTP requests, selecting elements in the page, following links and pushing buttons.
The interface defined between drivers and runners is the JSON Wire Protocol, a plain old JSON API whose resources are sessions, elements in pages, and browser windows. The interface is carried over HTTP, and correctly relies on the HTTP verbs for the creation and deletion of sessions or for reading the content of elements and pages (though it's not fully RESTful, since it does not use hypermedia).
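A runner's side of the protocol can be sketched as the method/path/body triples it sends over HTTP. The helper functions below are hypothetical (no real driver is contacted), but the paths and the desiredCapabilities payload follow the JSON Wire Protocol:

```python
import json

# Sketch of the requests a runner sends to a driver over the
# JSON Wire Protocol. We only build (method, path, body) triples here;
# a real runner would send them to the driver's HTTP endpoint.

def new_session(browser):
    # POST /session creates a new browser session resource.
    payload = {"desiredCapabilities": {"browserName": browser}}
    return ("POST", "/session", json.dumps(payload))

def get_element_text(session_id, element_id):
    # GET reads state: here, the text of an element resource.
    return ("GET", f"/session/{session_id}/element/{element_id}/text", None)

def delete_session(session_id):
    # DELETE tears the session down, mirroring HTTP semantics.
    return ("DELETE", f"/session/{session_id}", None)
```

Because the contract is just HTTP plus JSON, any language with an HTTP client can implement a runner, which is exactly what made independent clients like PHPUnit_Selenium possible.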
The introduction of this protocol, before Selenium 2 was even widely adopted, had disproportionate returns. There are now many drivers available, maintained by different people:
- the main implementation, allowing Firefox and Internet Explorer to be driven.
- ChromeDriver, developed by the Chromium team.
- OperaDriver, developed by volunteers.
- AndroidDriver and IPhoneDriver, allowing phone browsers to be driven.
And also many runners have been created:
- Java RemoteWebDriver bindings.
- Bindings for officially supported languages like C# and Ruby.
- Independent efforts for any language that speaks HTTP: PHPUnit_Selenium is one of them.