When I heard about the Code Bubbles project and saw the video, I was very impressed by the inventiveness of the solution. When I learned that it is built on Eclipse, I spoke with Andrew Bragdon to find out more about the project.
James: Hi Andrew, could you introduce yourself please?
Andrew: Hi James, thanks for having me. I’m a second-year Ph.D. student in Computer Science at Brown University. I’m working in Human-Computer Interaction research on gestural and multi-view user interfaces. Some past projects I’ve worked on are a gestural system for drawing diagrams, called Lineogrammer, a system for teaching gestural user interfaces to new users without up-front training called GestureBar, and now Code Bubbles.
James: Can you give us a bit of background into Code Bubbles? It looks like it started out as a research paper.
Andrew: Yes—Code Bubbles actually started out way back in January 2008, when I was playing around with different ideas for using a pen to annotate and gesturally manipulate source code through a small supporting screen used flat on the desk. One of the early ideas I implemented was lassoing a function with the pen and then flicking to tear it out into a floating bubble. This allowed the user to tear out bubbles from many different points in the code base and see them all at once. Once I had this implemented it became very clear to me that fragments could be beneficial for a very wide array of coding tasks, from reading and editing, to annotating and collaboration, to debugging. It was pretty exciting at the time because while I could see the potential, there was a huge amount of design and implementation work to be done in order to make something usable and scalable.
Since Code Bubbles is an officially unfunded project, it was a constant battle to get sufficient funds to complete the development. I personally am funded by an NSF Fellowship, but I needed substantial additional funds to support the four undergraduate research assistants who helped work on the project, and to pay for equipment, user study expenses, and so on. My advisor, Andy van Dam, very generously pitched in substantial funding, and I also “borrowed” funding from colleagues Steve Reiss and Joseph LaViola Jr.
I built the team out to handle the significant development task ahead, and we spent much of the next year and a half designing and building the system through an iterative design process, and running several quantitative and qualitative evaluations with users. Late last year we submitted two papers on the project and our results to CHI and ICSE, which have since been accepted.
James: So this is all part of a PhD?
Andrew: This is all work done while I have been a Ph.D. student, but I have not formally proposed my thesis yet – so we’ll see!
James: Have you done anything similar to this before (in HCI improvements)?
Andrew: I’ve also done some work in exploring multi-view interfaces for supporting 3D scientific visualizations. I ended up being drawn toward thinking about multi-view IDEs, though, since I use them on a daily basis!
James: What drove you to look for a different IDE paradigm?
Andrew: Well, as a developer working on object-oriented code I have always felt that IDEs do a good job of supporting you if you are working with a single function, but I find that I am usually working with many functions at a given time, spread throughout the code base. I find that I usually have to store what I am doing – which functions need to be changed, what needs to be changed about them, say – in my working memory, which of course is notoriously unreliable, or I have to stop to make notes to myself. Perhaps worse, since I can only easily see one or perhaps two functions at a time, I need to constantly navigate back and forth through the code, which I find distracting.
So, I have always felt this need to be able to see more code – and information in general – at once, side-by-side, easily.
James: What inspired you? Was the Mylyn project's approach a driver for you?
Andrew: There has certainly been a lot of related work over the years to draw from. Indeed, it’s important to point out that we did not invent the idea of working with fragments. The original work there is the Smalltalk environment, and later, the Squeak environment – which let you open single functions in floating child windows.
Since then, modern IDEs seem to have veered away from fragments and more toward file-based editing. It's hard to say for certain why this was, but I think that there were a number of scalability problems with the original approach that make it unsuitable for Java. Java code tends to have long lines, which makes it difficult to fit a function in a compact space. In addition, child windows require the user to manage which windows are on top, which is fairly tedious, and all of the extra UI that comes with child windows could be seen as distracting. Finally, if you fill up your screen with windows, then you will "run out of space," so to speak.
So with Code Bubbles we have tried to build on this earlier work by designing solutions to some of these scalability problems. To handle functions with long lines, we use syntax-aware reflow that runs in real time to wrap these long lines the way a programmer would. To reduce the need to manage which windows are on top, we do not allow bubbles to overlap – instead, a recursive algorithm will automatically push any bubbles that overlap a moved bubble out of the way, while attempting to find a global minimum on the amount of movement needed. We also eliminated as much chrome as possible from each bubble to try to reduce visual clutter. Finally, bubbles exist in a continuously pannable 2-D virtual workspace so that if you run out of screen space, you can simply pan over to make more room, or navigate to an unused portion of the space to work on a new task. There are a number of other features we added as well, such as a transient zoom feature, lightweight grouping, and workspace tasks that also help with scalability.
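The non-overlap behavior Andrew describes can be sketched in miniature. The following Java snippet is a hypothetical, heavily simplified one-dimensional version of the idea (the real system works in a 2-D workspace and tries to minimize total movement globally; all class and method names here are invented for illustration, and this version only ever pushes rightward):

```java
import java.util.ArrayList;
import java.util.List;

// A bubble occupies the horizontal interval [x, x + width).
class Bubble {
    int x, width;
    Bubble(int x, int width) { this.x = x; this.width = width; }
    boolean overlaps(Bubble other) {
        return x < other.x + other.width && other.x < x + width;
    }
}

public class BubblePush {
    // Push every bubble that 'moved' now overlaps just far enough to
    // the right to clear it, then propagate the push recursively so
    // the pushed bubble clears its own neighbours in turn.
    static void push(Bubble moved, List<Bubble> all) {
        for (Bubble b : all) {
            if (b != moved && moved.overlaps(b)) {
                b.x = moved.x + moved.width; // minimal shift to clear
                push(b, all);                // neighbours may now overlap
            }
        }
    }

    public static void main(String[] args) {
        Bubble a = new Bubble(0, 50);
        Bubble b = new Bubble(60, 50);
        Bubble c = new Bubble(120, 50);
        List<Bubble> all = new ArrayList<>(List.of(a, b, c));
        a.x = 40;        // drag 'a' so it overlaps 'b'
        push(a, all);
        System.out.println(b.x + " " + c.x); // prints "90 140"
    }
}
```

Dragging `a` displaces `b`, which in turn displaces `c` – the chain reaction Andrew's recursive algorithm handles while also minimizing the total movement, which this sketch does not attempt.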
You mentioned Mylyn – I think that Mylyn's approach of analyzing the user's navigations is a great idea, and actually in a way a different approach to the same problem. Mylyn is attempting to reduce the cost of navigations by identifying the user's working set automatically, while with Code Bubbles we try to reduce the total number of navigations. I think that Mylyn's approach of analyzing user activity and navigations could actually be combined with Code Bubbles, to help automatically identify relationships between functions and groups of functions – so the two approaches could potentially be complementary.
James: I see that it's built on Eclipse. Could you tell us exactly what parts of Eclipse you are using?
Andrew: Code Bubbles uses Eclipse as the backend to handle loading projects, parsing code (to identify method locations in real time, for example), building, refactoring, auto-complete information, and even the debugger. Code Bubbles runs as a separate process, which communicates with an Eclipse plug-in we developed via XML messages; the plug-in executes commands or returns query results back to the frontend.
This means that, among other things, any Eclipse Java project should work in Code Bubbles without modification. In addition, if you make extensive use of Eclipse plug-ins – for source control, say – you can simply switch to the Eclipse window running in the background to handle such tasks, and then return to Code Bubbles when you are done. As time goes on we hope to support more of this functionality directly in Code Bubbles, but in the meantime you have access to all your Eclipse functionality as well. This also means that deep features such as auto-complete, refactoring, and building work well, since they are based on Eclipse. One other advantage of using Eclipse as the backend is that we could potentially add support for additional languages in the future.
James: Could you give some more background on how it is coded?
Andrew: Sure. So on the backend we have an Eclipse plugin written in Java that can perform queries against Eclipse, such as getting a list of all the packages, classes and methods, or execute “remote-controlled” commands, such as “compile” – which would then send the result back in XML (in the case of compile, this might be all the errors and warnings as they are generated).
The frontend, perhaps ironically, is written in C# and WPF and runs as a separate process. This is primarily because I personally am more experienced with WPF, and also because at the time I thought it would be easier to prototype the system this way. The frontend then talks to the backend via XML RPC over sockets.
With all of that said, for the beta we are actually re-implementing the system in pure Java and Swing.
There are two main reasons for this:
- WPF is not quite efficient enough for a widely used system, because it tends to use a lot of RAM and CPU and can slow down when the scene graph gets too complex, and
- It only runs on Windows, and many of the developers we have talked to use other operating systems.

I am pleased to say that we have all the animations working nicely in Swing (although it was a bit more work than we would have liked); one of my worries about moving to Swing was that although it would work well across platforms, it might not be possible to carry over the animations, but that part has actually gone fairly smoothly.
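The query/command flow Andrew describes can be sketched roughly. The following Java snippet shows how a frontend might parse a hypothetical XML reply to a "compile" command; the element and attribute names are invented for illustration, not the actual Code Bubbles wire format, and the socket transport is omitted:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical frontend-side handling of a plug-in reply: the Eclipse
// plug-in executes a command and sends back results encoded as XML.
public class CompileReplyParser {
    // Parse a reply document and format each reported problem.
    static List<String> parseProblems(String xmlReply) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(
                xmlReply.getBytes(StandardCharsets.UTF_8)));
        List<String> out = new ArrayList<>();
        NodeList problems = doc.getElementsByTagName("problem");
        for (int i = 0; i < problems.getLength(); i++) {
            Element p = (Element) problems.item(i);
            out.add(p.getAttribute("severity") + " at "
                + p.getAttribute("file") + ":" + p.getAttribute("line")
                + " - " + p.getAttribute("message"));
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // A 'compile' command might come back with problems like this:
        String reply = "<result command='compile'>"
            + "<problem severity='error' file='Main.java' line='12'"
            + " message='cannot find symbol'/>"
            + "</result>";
        parseProblems(reply).forEach(System.out::println);
    }
}
```

Keeping the exchange in plain XML over sockets is what lets the frontend live in a separate process – originally C#/WPF, now Swing – without either side knowing the other's implementation language.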
James: What are your plans for the product? The beta is due out soon I believe?
Andrew: Well, as researchers we are very excited about the prospect of putting out a beta that people can use at home or at work, because then users can give us feedback on their experiences, what features/design changes they need for it to be more useful to them, etc. So our current plan is to start with a very limited release in early/mid April to get initial feedback and bug reports from users, and then gradually expand the size of the beta over time as we fix bugs or add in any needed functionality. I believe very strongly in the iterative design process, which we have employed so far on Code Bubbles, and I hope to continue that on a larger scale as we start to expand the size of the beta. Once things have become sufficiently tested, and we have gotten a chance to implement and test any changes, we plan to then open up the beta for general download in the early fall.
James: And when you're past beta stage, will it be free or a paid product?
Andrew: The current plan is for it to be a free, non-profit system that you can simply download and use with your existing Eclipse installation.
James: Do you plan to open source your efforts? It could go great with Eclipse e4.
Andrew: This is definitely something that we hope to do this year. I would also like to add in a plug-in architecture soon to support developers in extending the environment to better suit their needs.
James: As someone spending time researching trends in industry, do you have any predictions for the next few years?
Andrew: That’s a hard question! Well, I think that we will see changes and a lot of improvements in the user interfaces of IDEs in general; I think there is a huge amount of room for improvement there. That said, I think that it’s important to offer an incremental upgrade path to developers for new tools, so that they can actually be used. For example, I think it’s important to point out that Code Bubbles could be incrementally integrated into existing IDEs, such as Eclipse, without a complete rewrite/redesign of Eclipse. Just as web pages, GUI designers, and other types of content can be viewed in Eclipse in tabs, Code Bubbles could simply be hosted inside a tab so that users can be gradually introduced to a new tool, rather than changing everything. This could allow for coexistence between file-based editing, which is great for certain tasks – such as writing a short class from scratch, say – and fragment-based editing with Code Bubbles, which has advantages for other tasks. This design could also allow specific tools – such as comparing version changes, or the debugger – to use Code Bubbles as their user interface.
I also think that there is a lot of potential in applying emerging technologies in the HCI field, such as multi-touch secondary screens, large-screen touch walls, ambient displays, and gestural user interfaces to support programming tasks like remote collaboration, brainstorming sessions, and code reviews. Many of these approaches have been found to be beneficial for other tasks and application domains, and I think there is a lot of exciting work to be done in thinking about how they could be used to support developers in the future.