JavaFX: Using Patterns & Clean Code
We are seeing quite a number of exciting JavaFX demos around, demonstrating the pretty features of the language and the capability of easily integrating cool graphics. But as a software designer I can't help noticing that most examples show bloated code: no good separation of concerns and poor application of the MVC pattern. This is understandable, as JavaFX is a new language and people first needed to be taught its syntax and features; but now that, I presume, many of us have been introduced to the new language, it's high time we started worrying about good practices, as well as about writing self-documenting code. Let's remember that good practices and good design are always more important than the language!

So, let's introduce a very simple project, a Contact List, whose specifications are:

- We have a list of "contacts", where each item has a name, last name, email, phone, etc.
- The UI shows the list of contacts, which can be filtered by a text field: if you enter a value in it, only the contacts whose name starts with the entered text are shown. The filter is applied as you type.
- Selecting an item in the list of contacts shows its details in a form, where you can edit them.
- At each selection change, the form is animated (a rotation and a blur effect).

You can get the code with:

svn co -r 37 https://kenai.com/svn/javafxstuff~svn/trunk/ContactList/src/ContactList

The model

Let's first have a quick look at the model classes.
First, a simple value object representing a piece of data:

package it.tidalwave.javafxstuff.contactlist.model;

public class Contact
  {
    public var id        : String;
    public var firstName : String;
    public var lastName  : String;
    public var phone     : String;
    public var email     : String;
    public var photo     : String;

    override function toString()
      {
        return "\{id: {id}, value: {firstName} {lastName}, phone: {phone}, email: {email}\}"
      }
  }

Then a small service which provides a bunch of data:

package it.tidalwave.javafxstuff.contactlist.model;

public abstract class ContactRegistry
  {
    public abstract function items() : Contact[];
  }

In the demo code you'll find a mock implementation with some hard-wired values; in a real case this could be a Business Delegate encapsulating the code for retrieving data remotely. So far, so good: it's pretty normal to keep these things separated from the UI.

The Controllers

We're not going to see a classic Controller here; actually, we're slightly departing from "pure" MVC. The code I'm showing you is more a "Presentation Model", a pattern described by Martin Fowler as:

"The essence of a Presentation Model is of a fully self-contained class that represents all the data and behavior of the UI window, but without any of the controls used to render that UI on the screen. A view then simply projects the state of the presentation model onto the glass."

This class is basically a hybrid between a classic model and a controller. It is also a façade between the view and the domain model.
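For comparison, here is the same Presentation Model idea sketched in plain Java, with explicit code where JavaFX uses binding (the class and method names here are hypothetical, not part of the project): it holds the UI state and the filtering logic, but knows nothing about the controls that render it.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical plain-Java sketch of the Presentation Model: it owns the
// search text, the selection and the filtered contacts, but references no
// UI controls.
class PresentationModelSketch {
    public record Contact(String firstName, String lastName) { }

    private final List<Contact> allContacts;
    private String searchText = "";
    private int selectedIndex = -1;
    private Runnable onSelectedContactChange = () -> { };

    PresentationModelSketch(List<Contact> allContacts) {
        this.allContacts = allContacts;
    }

    // In JavaFX Script this is a single bind expression; in plain Java we
    // recompute the filtered list on demand.
    List<Contact> getContacts() {
        return allContacts.stream()
                .filter(c -> (c.firstName() + " " + c.lastName()).startsWith(searchText))
                .collect(Collectors.toList());
    }

    void setSearchText(String searchText) {
        this.searchText = searchText;
    }

    // Changing the selection notifies the listener, which the Main can use
    // to trigger the animation.
    void setSelectedIndex(int selectedIndex) {
        this.selectedIndex = selectedIndex;
        onSelectedContactChange.run();
    }

    Contact getSelectedContact() {
        List<Contact> contacts = getContacts();
        return (selectedIndex >= 0 && selectedIndex < contacts.size())
                ? contacts.get(selectedIndex) : null;
    }

    void setOnSelectedContactChange(Runnable listener) {
        this.onSelectedContactChange = listener;
    }
}
```

Note how much ceremony the explicit version needs; the JavaFX listing that follows achieves the same with a handful of bind expressions.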
package it.tidalwave.javafxstuff.contactlist.controller;

import it.tidalwave.javafxstuff.contactlist.model.Contact;
import it.tidalwave.javafxstuff.contactlist.model.ContactRegistry;
import it.tidalwave.javafxstuff.contactlist.model.ContactRegistryMock;

public class PresentationModel
  {
    public var searchText : String;
    public var selectedIndex : Integer;

    // The Business Delegate
    def contactRegistry = ContactRegistryMock{} as ContactRegistry;
    def allContacts = bind contactRegistry.items();

    // The contacts filtered according to the contents of the search field
    public-read def contacts = bind allContacts[contact | "{contact.firstName} {contact.lastName}".startsWith(searchText)];

    // The selected contact; the code also triggers a notification at each change
    public-read def selectedContact = bind contacts[selectedIndex] on replace previousContact
      {
        onSelectedContactChange();
      };

    // Notifies a change in the current Contact selection
    public-init var onSelectedContactChange = function() { };
  }

Note the extreme compactness brought by functional programming: there's almost no imperative code, as everything is achieved by properly using the binding feature. The only imperative part is the onSelectedContactChange function, which is just a listener that notifies selection changes to some external code; it will be used only for triggering the animation. BTW, I'd like to remove it from here, but I wasn't able to. Maybe it's a JavaFX thing that I've not understood yet; I'm keeping it for another post.

Now, everything about the animation is encapsulated in a specific class, which exposes only two properties controlling the animation: the effect and the rotation angle. A single play() function is provided to start the animation.
package it.tidalwave.javafxstuff.contactlist.view;

import javafx.animation.Interpolator;
import javafx.animation.Timeline;
import javafx.scene.effect.GaussianBlur;

public class AnimationController
  {
    public-read var rotation = 0;
    public-read def effect = GaussianBlur{};

    def timeline = Timeline
      {
        repeatCount: 1
        keyFrames:
          [
            at (0s)    { effect.radius => 20; rotation => 45 }
            at (300ms) { effect.radius => 0 tween Interpolator.EASEBOTH;
                         rotation => 0 tween Interpolator.EASEBOTH }
          ]
      }

    public function play()
      {
        timeline.playFromStart();
      }
  }

The Views

Now, a UI component. The way we design it largely depends on the process, as a graphic designer could be involved. In any case, I think that the whole UI should not be implemented in a single, bloated class; rather, the relevant pieces should be split apart. For instance, a CustomNode can model the "form" that renders the contact details (in the code below I've omitted all the attributes related to rendering):

package it.tidalwave.javafxstuff.contactlist.view;

import javafx.scene.CustomNode;
import javafx.scene.Group;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.scene.Node;
import javafx.ext.swing.SwingLabel;
import javafx.ext.swing.SwingTextField;
import it.tidalwave.javafxstuff.contactlist.model.Contact;

public class ContactView extends CustomNode
  {
    public var contact : Contact;

    public override function create() : Node
      {
        return Group
          {
            content:
              [
                VBox
                  {
                    content:
                      [
                        SwingLabel { text: bind "{contact.firstName} {contact.lastName}" }
                        HBox { content: [ SwingLabel { text: "First name: " } SwingTextField { text: bind contact.firstName } ] }
                        HBox { content: [ SwingLabel { text: "Last name: " }  SwingTextField { text: bind contact.lastName } ] }
                        HBox { content: [ SwingLabel { text: "Email: " }      SwingTextField { text: bind contact.email } ] }
                        HBox { content: [ SwingLabel { text: "Phone: " }      SwingTextField { text: bind contact.phone } ] }
                      ]
                  }
              ]
          };
      }
  }

As you can see, we have only layout and data binding here, the
only things a view should do.

Putting it all together

Now the last piece of code, the Main, which builds up the application (again, I've omitted all the attributes related only to rendering):

package it.tidalwave.javafxstuff.contactlist.view;

import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.scene.Scene;
import javafx.stage.Stage;
import javafx.ext.swing.SwingLabel;
import javafx.ext.swing.SwingList;
import javafx.ext.swing.SwingListItem;
import javafx.ext.swing.SwingTextField;
import it.tidalwave.javafxstuff.contactlist.controller.PresentationModel;

Stage
  {
    def animationController = AnimationController{};
    def presentationModel = PresentationModel
      {
        onSelectedContactChange: function() { animationController.play(); }
      };

    scene: Scene
      {
        content: VBox
          {
            content:
              [
                HBox { content: SwingLabel { text: "Contact List" } }
                HBox
                  {
                    content:
                      [
                        SwingLabel { text: "Search: " }
                        SwingTextField { text: bind presentationModel.searchText with inverse }
                      ]
                  }
                HBox
                  {
                    content:
                      [
                        SwingList
                          {
                            items: bind for (contact in presentationModel.contacts)
                              {
                                SwingListItem { text: "{contact.firstName} {contact.lastName}" }
                              }
                            selectedIndex: bind presentationModel.selectedIndex with inverse
                          }
                        ContactView
                          {
                            contact: bind presentationModel.selectedContact
                            effect:  bind animationController.effect
                            rotate:  bind animationController.rotation
                          }
                      ]
                  }
              ]
          }
      }
  }

As in the previous snippets, I think the listing can be easily read and understood: basically, we are gluing all the pieces together and binding the relevant models. In the end, each class in this small project does a simple, cohesive thing: representing data, encapsulating the presentation logic, controlling the animation, rendering the views. Dependencies are reduced to the minimum and point in the correct direction: the views depend on the models (and not the opposite) and on the AnimationController; the AnimationController is independent.
You could replace the view components without affecting the rest of the classes, as well as remove or add animations by using different AnimationControllers. This good separation of roles and responsibilities is the right way to apply OO.

There is a detail worth discussing: note the two bind ... with inverse. They implement the so-called "bidirectional binding", where not only is a change in the model (e.g. PresentationModel.selectedIndex) reflected to attributes in the UI (e.g. SwingList.selectedIndex), but the opposite happens too. Indeed, the reverse binding is the more important one in our example, because it implements the controller responsibility (it captures the user's gestures from the view and changes the model); the direct binding from the PresentationModel to the SwingList, instead, is unused, as in our case the PresentationModel is never the originator of a change.

So, why not use a simple, direct binding in the PresentationModel towards the SwingList? Such as:

public class PresentationModel
  {
    var list : SwingList;
    public def selectedIndex = bind list.selectedIndex;
    ...
  }

Because this would introduce a dependency from the model/controller to the view, which is plain wrong. Here, bind ... with inverse not only works as a shortcut for writing less code (that is, for explicitly declaring a binding and its inverse), but is also an essential feature for a better design. As far as I know, but I could be wrong, in ActionScript (the Adobe Flex language), for example, there's no bidirectional binding (you need to put binding keywords at both ends of the association), thus introducing unneeded or circular dependencies. I believe this is true at least at the code level (as far as I understand, there are different ways to do binding in ActionScript).
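The dependency argument can be made concrete with a plain-Java sketch (all names hypothetical): a tiny two-way binder that lives outside both the model and the view, so that neither has to reference the other.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a mutable, observable property; both the model and
// the view expose one, and neither references the other directly.
class Property<T> {
    private T value;
    private final List<Consumer<T>> listeners = new ArrayList<>();

    Property(T initial) { value = initial; }

    T get() { return value; }

    void set(T newValue) {
        if ((newValue == null) ? value == null : newValue.equals(value)) {
            return; // no change: breaks the ping-pong between the two ends
        }
        value = newValue;
        listeners.forEach(l -> l.accept(value));
    }

    void onChange(Consumer<T> listener) { listeners.add(listener); }

    // The equivalent of JavaFX's "bind ... with inverse": the glue code owns
    // the association, so model and view stay mutually independent.
    static <T> void bindBidirectional(Property<T> a, Property<T> b) {
        a.onChange(b::set);
        b.onChange(a::set);
        b.set(a.get()); // initial sync: the view starts from the model's state
    }
}
```

The equality check in set() is what prevents the two listeners from triggering each other forever, the classic pitfall when a bidirectional binding is hand-rolled at both ends of the association.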
May 12, 2009
Comments
Jul 27, 2013 · Mr B Loid
Lieven, you started right, but ended wrong... :-) Overall, you're right: nobody should feel forced to use the Maven release plugin if it doesn't fit the release process he wants. In fact, first you define a release process, then you choose/adapt the best tool to run it. We agree. I don't have remarks about the release process you describe; I wouldn't use it as is, but in this area there is room for variants, and above all it depends on the context (a single developer, a small agile team, a medium-sized team or a large team would of course choose different processes).
But:
Releases are not something done on a whim. They are carefully planned and orchestrated actions, preceded by countless rules and followed by more rules.
Sure. This doesn't mean that they shouldn't be automated as much as possible, nor that, once one has done all the homework and approved a release, the act of making it couldn't just turn into pressing a button (e.g. launching a job in Hudson which in turn runs Maven). Whether the Maven release plugin supports this automated process well is another matter. Small projects, such as libraries, that have adequate test coverage can be released as frequently as one wants, for instance.
Releasing software is a process, not a single command on the command line.
What kind of logic infers that "a process" cannot be automated, partially or totally? Manual processes are error-prone, and the last thing I'd like to see is a release failing because of a trivial manual error (I've seen a lot of them).
Then you wrote some claims about Maven that are simply incorrect.
Even Maven’s most fierce supporters agree on this. The Maven release plugin just tries to do too much stuff at once: build your software, tag it, build it again, deploy it, build the site (triggering yet another build in the process) and deploy the site. And whilst doing that, running the tests x times. Most of the time, you’re making candidate releases, so building the complete documentation is a complete waste of time.
False. The Maven release plugin can be customized in very flexible ways. You can disable running the tests, generating the javadoc, the source jars and the site, and optionally define profiles to have variants of these choices. Have you ever read about the preparationGoals, goals and completionGoals properties of the plugin? To give concrete examples: I don't need the Maven-generated site, so site generation is always disabled for me. I also disabled the tests, because I run them separately before releasing. The core of the release cycle can then be just: check that everything compiles, tag and prepare the next version, then run a forked Maven that checks out from the tag and performs the build.
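As a sketch of the kind of customization described above (the property values are illustrative choices, not a recommendation):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-release-plugin</artifactId>
  <configuration>
    <!-- Skip tests during the release build; here they are assumed
         to have been run separately beforehand. -->
    <arguments>-DskipTests</arguments>
    <!-- Goals run by release:prepare before tagging:
         just verify that everything compiles. -->
    <preparationGoals>clean verify</preparationGoals>
    <!-- Goals run by release:perform from the fresh checkout of the tag;
         "deploy" alone drops the default site-deploy step. -->
    <goals>deploy</goals>
  </configuration>
</plugin>
```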
The release plugin is just a combination of the versions, scm, deploy and site plugin that seriously violates the single responsibility principle.
Apart from the fact that this comment is about internal software quality, which need not have anything to do with usability (there are famous counter-examples: many say that Wordpress and JIRA are poorly designed, yet they work well and are very popular), your statement is erroneous: composing things does not violate the SRP. What matters is that the single components each have a single responsibility, and this is true of the mentioned plugins. If your sentence were true, we should only have software apps that do a single, simple thing.
The release plugin is one of the reasons Maven has gotten a bad reputation with some people.
Maven could have a bad reputation with some people because they legitimately don't like the way it works, or because people just don't read the f***ing manual and then blog incorrect things. :-)
I set up a number of industrial projects where releases are managed with the release plugin, and in most cases there were no problems. Yes, we do Release Candidates too. In the one case where the customer is not totally satisfied with the release plugin, what he's really unsatisfied with is the way dependencies are managed (e.g. because there is a dependency graph among subprojects with many levels, and there are "ripples" when dependencies are updated). I verified that some people have similar problems with different tools, such as Gradle, so it has more to do with the process, the project structure and the dependency management than with the release plugin.
Jul 27, 2013 · James Sugrue
Lieven, you started right, but ended wrong... :-) Overall, you're right: nobody should feel forced to use the release maven plugin if it doesn't fit the release process he wants. In fact, first you define a release process, then you use/adapt the best tool to run it. We agree. I don't have remarks about the release process you describe - I woulnd't use it as is, but in this area there is room for variants, above all it depends on the context (a single developer, a small agile team, a medium sized team or a large team of course would choose different processes).
But:
Releases are not something done on a whim. They are carefully planned and orchestrated actions, preceded by countless rules and followed by more rules.
Sure. This doesn't mean that they shouldn't be automated as much as possible. This doesn't mean that at a certain point one might have performed all the homework and approved a release, and then the decision of making a release could just turn into pressing a button (e.g. launch a job in Hudson which in turn runs Maven). The Maven release plugin might support or not this automated process, but this is another matter. Small projects, such as libraries, that have adequate test coverage can be released as frequently as one wants, for instance.
Releasing software is a process, not a single command on the command line.
What kind of logics does infer that "a process" cannot be automated, partially or totally? Manual processes are error-prone and the last thing that I'd like to see is a release failed for a trivial manual error (I've seen a lot).
Then you wrote some Maven facts that are very incorrect.
Even Maven’s most fierce supporters agree on this. The Maven release plugin just tries to do too much stuff at once: build your software, tag it, build it again, deploy it, build the site (triggering yet another build in the process) and deploy the site. And whilst doing that, running the tests x times. Most of the time, you’re making candidate releases, so building the complete documentation is a complete waste of time.
False. The release maven plugin can be customized in very flexible ways. You can disable running tests, generating the javadoc, or the source jars, and the site, and eventually design profiles to have variants on these choices. Have you ever read about the preparationGoals, goals, completionGoals properties of the plugin? To give concrete examples: I don't need the Maven generated site, so site generation is always disabled for me. I also disabled tests, because before releasing I run tests separately. The core of the release cycle can just be: check that everything is compilable, tag and prepare the next version, and then run a forked Maven that checks out from the tag and performs the build.
The release plugin is just a combination of the versions, scm, deploy and site plugin that seriously violates the single responsibility principle.
Apart from the fact that this comment on internal software quality need not to have to do with usability (there are famous counter-examples: many say that Wordpress and JIRA are poorly designed, still they work well and are very popular), your statement is erroneous... Anyway composing stuff, as you correctly say, does not violate the SRP. The important thing is that the single components have a single responsibility, and in this is true with the mentioned plugins. If your sentence was true, we should only have software apps that do a single, simple thing.
The release plugin is one of the reasons Maven has gotten a bad reputation with some people.
Maven could have a bad reputation with some people because they legitimately don't like the way it works or because people just don't read the f***ing manual, and then blog incorrect things. :-)
I set up a number of industrial products where releases are managed with the release-plugin and in most cases no problems. Yes, we do Releases Candidates too. In a case where the customer is not totally satisfied with the release plugin, he is not really satisfied with the way dependencies are managed (e.g. because you have a dependency graph among subprojects with many levels and there are "ripples" when dependencies are updated. I verified that some people have similar problems with different tools, such as Gradle, so it has more to do with the process, the project structure, the dependency management, and not the release plugin.
Jul 27, 2013 · Mr B Loid
Lieven, you started right, but ended wrong... :-) Overall, you're right: nobody should feel forced to use the release maven plugin if it doesn't fit the release process he wants. In fact, first you define a release process, then you use/adapt the best tool to run it. We agree. I don't have remarks about the release process you describe - I woulnd't use it as is, but in this area there is room for variants, above all it depends on the context (a single developer, a small agile team, a medium sized team or a large team of course would choose different processes).
But:
Releases are not something done on a whim. They are carefully planned and orchestrated actions, preceded by countless rules and followed by more rules.
Sure. This doesn't mean that they shouldn't be automated as much as possible. This doesn't mean that at a certain point one might have performed all the homework and approved a release, and then the decision of making a release could just turn into pressing a button (e.g. launch a job in Hudson which in turn runs Maven). The Maven release plugin might support or not this automated process, but this is another matter. Small projects, such as libraries, that have adequate test coverage can be released as frequently as one wants, for instance.
Releasing software is a process, not a single command on the command line.
What kind of logics does infer that "a process" cannot be automated, partially or totally? Manual processes are error-prone and the last thing that I'd like to see is a release failed for a trivial manual error (I've seen a lot).
Then you wrote some Maven facts that are very incorrect.
Even Maven’s most fierce supporters agree on this. The Maven release plugin just tries to do too much stuff at once: build your software, tag it, build it again, deploy it, build the site (triggering yet another build in the process) and deploy the site. And whilst doing that, running the tests x times. Most of the time, you’re making candidate releases, so building the complete documentation is a complete waste of time.
False. The release maven plugin can be customized in very flexible ways. You can disable running tests, generating the javadoc, or the source jars, and the site, and eventually design profiles to have variants on these choices. Have you ever read about the preparationGoals, goals, completionGoals properties of the plugin? To give concrete examples: I don't need the Maven generated site, so site generation is always disabled for me. I also disabled tests, because before releasing I run tests separately. The core of the release cycle can just be: check that everything is compilable, tag and prepare the next version, and then run a forked Maven that checks out from the tag and performs the build.
The release plugin is just a combination of the versions, scm, deploy and site plugin that seriously violates the single responsibility principle.
Apart from the fact that this comment on internal software quality need not to have to do with usability (there are famous counter-examples: many say that Wordpress and JIRA are poorly designed, still they work well and are very popular), your statement is erroneous... Anyway composing stuff, as you correctly say, does not violate the SRP. The important thing is that the single components have a single responsibility, and in this is true with the mentioned plugins. If your sentence was true, we should only have software apps that do a single, simple thing.
The release plugin is one of the reasons Maven has gotten a bad reputation with some people.
Maven could have a bad reputation with some people because they legitimately don't like the way it works or because people just don't read the f***ing manual, and then blog incorrect things. :-)
I set up a number of industrial products where releases are managed with the release-plugin and in most cases no problems. Yes, we do Releases Candidates too. In a case where the customer is not totally satisfied with the release plugin, he is not really satisfied with the way dependencies are managed (e.g. because you have a dependency graph among subprojects with many levels and there are "ripples" when dependencies are updated. I verified that some people have similar problems with different tools, such as Gradle, so it has more to do with the process, the project structure, the dependency management, and not the release plugin.
Jul 27, 2013 · James Sugrue
Lieven, you started right, but ended wrong... :-) Overall, you're right: nobody should feel forced to use the release maven plugin if it doesn't fit the release process he wants. In fact, first you define a release process, then you use/adapt the best tool to run it. We agree. I don't have remarks about the release process you describe - I woulnd't use it as is, but in this area there is room for variants, above all it depends on the context (a single developer, a small agile team, a medium sized team or a large team of course would choose different processes).
But:
Releases are not something done on a whim. They are carefully planned and orchestrated actions, preceded by countless rules and followed by more rules.
Sure. This doesn't mean that they shouldn't be automated as much as possible. This doesn't mean that at a certain point one might have performed all the homework and approved a release, and then the decision of making a release could just turn into pressing a button (e.g. launch a job in Hudson which in turn runs Maven). The Maven release plugin might support or not this automated process, but this is another matter. Small projects, such as libraries, that have adequate test coverage can be released as frequently as one wants, for instance.
Releasing software is a process, not a single command on the command line.
What kind of logic infers that "a process" cannot be automated, partially or totally? Manual processes are error-prone, and the last thing I'd like to see is a release that failed because of a trivial manual error (I've seen a lot of them).
Then you wrote some statements about Maven that are very incorrect.
Even Maven’s most fierce supporters agree on this. The Maven release plugin just tries to do too much stuff at once: build your software, tag it, build it again, deploy it, build the site (triggering yet another build in the process) and deploy the site. And whilst doing that, running the tests x times. Most of the time, you’re making candidate releases, so building the complete documentation is a complete waste of time.
False. The Maven release plugin can be customized in very flexible ways. You can disable running tests, generating the javadoc, the source jars and the site, and even design profiles to have variants of these choices. Have you ever read about the preparationGoals, goals and completionGoals properties of the plugin? To give concrete examples: I don't need the Maven-generated site, so site generation is always disabled for me. I also disabled tests, because before releasing I run them separately. The core of the release cycle can just be: check that everything compiles, tag and prepare the next version, then run a forked Maven that checks out from the tag and performs the build.
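A sketch of the kind of configuration I mean (the plugin version and values are illustrative, not the exact ones from my superpom):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.4.1</version>
    <configuration>
        <!-- release:prepare only checks that everything compiles, then tags -->
        <preparationGoals>clean verify</preparationGoals>
        <!-- release:perform forks a build from the tag; "deploy" alone means no site-deploy -->
        <goals>deploy</goals>
        <!-- passed to the forked builds: tests and javadoc were already handled separately -->
        <arguments>-DskipTests -Dmaven.javadoc.skip=true</arguments>
    </configuration>
</plugin>
```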
The release plugin is just a combination of the versions, scm, deploy and site plugin that seriously violates the single responsibility principle.
Apart from the fact that this comment on internal software quality doesn't necessarily have anything to do with usability (there are famous counter-examples: many say that WordPress and JIRA are poorly designed, yet they work well and are very popular), your statement is erroneous anyway: composing plugins, which is what you describe, does not violate the SRP. The important thing is that the single components have a single responsibility, and this is true for the mentioned plugins. If your sentence were true, we should only have software apps that do a single, simple thing.
The release plugin is one of the reasons Maven has gotten a bad reputation with some people.
Maven could have a bad reputation with some people because they legitimately don't like the way it works or because people just don't read the f***ing manual, and then blog incorrect things. :-)
I set up a number of industrial projects where releases are managed with the release plugin, and in most cases there were no problems. Yes, we do release candidates too. In the one case where the customer is not totally satisfied with the release plugin, he is not really satisfied with the way dependencies are managed (e.g. because there is a dependency graph among subprojects with many levels and there are "ripples" when dependencies are updated). I verified that some people have similar problems with different tools, such as Gradle, so it has more to do with the process, the project structure and the dependency management than with the release plugin.
Jun 11, 2013 · Tony Thomas
Martin, your point (a) is very true. But nobody here is saying that starting tomorrow you're not going to use Swing any longer, only JavaFX. You can mix the two, and it makes sense to start using JavaFX for all the things it can already handle, and keep Swing as legacy.
As far as tooling is concerned, I like Matisse a lot. The problem with Matisse is that it locks you into NetBeans. For me it's not a big deal, but for some customers it can be. One might like NetBeans, but then be forced to migrate to Eclipse because of a corporate decision. At that point, if you've got tons of UI generated by Matisse, you have a problem. Now, JavaFX has got SceneBuilder, which works on FXML files, which are a defined standard. This means that UIs designed in JavaFX are IDE-independent and their layout is a resource file, not a piece of locked code, and they can be easily styled by means of CSS - definitely steps forward, and in this area Java has been behind for years. Not to mention that JavaFX has got a working web browser pane based on WebKit, while all the previous attempts in Swing, both pure Java and with native libraries, have been a pain.
SceneBuilder is 1.0, so it's not as proven as Matisse, but it looks usable for many things. In any case, there will be further progress with JavaFX, while only major bugfixing for Swing in the years to come.
Jun 11, 2013 · Tony Thomas
@Carl Antaki "What happened to Bluemarine?"
A few years ago it was my first attempt at mavenizing a project. I did it badly (learning lessons useful for other things), and I was stupid enough to also try other things during the process; add Oracle stepping in, saying that java.net and kenai.com were going away, then changing its mind, etc… so I also moved the project three times in the meantime; and having scarce time, I also had to upgrade various libraries. In a few words, I did very badly, and for a long time the project was not compilable outside my computer. It seems that the snapshots are now mostly compilable in my Hudson, so I should be close to a decent state. Of course, compilable doesn't mean working :-) but chances are that by August the project will be back. I'll have to change a lot of things that in the meantime I decided to do in a different way…
"Also what about Netbeans that uses Swing, is there a project of integrating JavaFX in Netbeans?"
I've started integrating JavaFX in smaller NetBeans projects - if you did the presentation separation correctly, it's not hard. For a while you'll have to use both Swing-derived components from the Platform and your own JavaFX stuff. I don't think there's an official plan, but I know of some people in the community working to provide JavaFX components that fit the Platform model objects. I still have to catch up with that.
Jun 07, 2013 · Tony Thomas
Mar 14, 2013 · Sergey Petrov
I wonder why I'm writing about the Church at DZone, but I suppose that since the post is here, it makes sense to comment on it :-)
"but maybe they should be"
No, they shouldn't. The Church's core business is not to sell a product or a service.
"The Church hasn’t embraced the modern information age, doesn’t show up in social media"
Partially false. It's true that some kinds of innovation inside the Church occur at a slow pace, but there are priests and bishops who have very active social media pages, and the Pope has recently got a Twitter account. I'm not sure whether it makes sense or not - probably it does in some respects and not in others - but in any case there is a presence.
"and doesn't own its public message."
Totally false, or I didn't understand what you mean.
Mar 02, 2013 · Jon Davis
Agreed with Nikolas. It's easy to port JUnit tests to TestNG as there's a compatible assert() method (not a trivial point: the native assert methods in TestNG have reversed arguments). There's also a plugin for NetBeans and Hudson/Jenkins.
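To make the reversed-order point concrete, here is a toy sketch. These are hand-written stand-ins, not the real JUnit/TestNG classes (in real code you'd use org.junit.Assert versus org.testng.Assert, with org.testng.AssertJUnit as the JUnit-compatible variant), but they mimic the two signatures:

```java
public class AssertOrder {

    // JUnit style: assertEquals(expected, actual)
    static void junitAssertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
    }

    // TestNG native style: assertEquals(actual, expected) - arguments reversed
    static void testngAssertEquals(Object actual, Object expected) {
        if (!expected.equals(actual))
            throw new AssertionError("expected [" + expected + "] but found [" + actual + "]");
    }

    public static void main(String[] args) {
        // Passing the same two values in the same order to both methods
        // swaps which one is reported as "expected" in the failure message.
        try { junitAssertEquals(42, 41); }
        catch (AssertionError e) { System.out.println(e.getMessage()); } // expected:<42> but was:<41>
        try { testngAssertEquals(42, 41); }
        catch (AssertionError e) { System.out.println(e.getMessage()); } // expected [41] but found [42]
    }
}
```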
Feb 23, 2013 · Prabath Siriwardena
"Actually the point about git is interesting because if you think about it, svn -> git is actually a way of moving a lot of stuff away from the cloud. With svn your repository exists in the cloud and you maintain a local copy. With git, your repository exists on your own machine and you exchange changesets with everyone else over the cloud (you may also have a copy in the cloud but you don't work off it)."
I disagree, and the problem is that the word "cloud" is contrived and means everything, so it probably means different things to us. For instance, the experience cited by Lund just above is not cloud - if you just have a dumb client connected to a remote VM, this is client-server. Perhaps the server is immersed in a cloud, and this delivers high availability, scalability and lower cost of ownership, but I say that if I'm working with a dumb client I don't enjoy a cloud. My local node should be an equal peer in the cloud: why should it be just a dumb client?
A cloud to me is extreme peer-to-peer, where every piece of data stays in the optimal place, to maximize the user experience. Thus to me Git/Mercurial are the best example of cloud, because I enjoy data replication and backup on the remote nodes, and the sharing offered by the remote nodes, but at the same time maximum working speed because I also have things locally - and I can still work in disconnected mode, which is something that will never go away, even though many cloud advocates keep saying the opposite.
There's a fundamental principle of engineering that most seem to have forgotten, which says that things must be designed in the simplest possible way, avoiding waste of resources; being forced to always operate remotely contradicts this principle.
Security on the cloud is just a nightmare.
Feb 21, 2013 · Prabath Siriwardena
I was undecided about the vote on this post, because on one side it's true (but also easy) to predict that the cloud will expand, but on the other hand there are too many exaggerations, so, sorry Eugene, I voted down.
In addition to the remarks made by others, I add that it takes a few minutes for me to set up a Linux development machine, either with a simple script that downloads and installs everything (in a physical or virtual machine), or with a Vagrant setup (in a virtual machine). Indeed, the bottleneck here is the network speed for downloading stuff. And if the network is not super-fast, bye bye developing in the cloud.
I'd only add that saying that in the foreseeable future the cloud will expand doesn't mean that the trend will continue forever. The first massive security problem - that is just waiting to happen - could dramatically change things.
Feb 17, 2013 · Mat Mart
Definitely +1 for slf4j and logback. Most of the recent FLOSS products standardize on slf4j; I've been using both in production, even for my customers, and it's fine.
Feb 14, 2013 · Schalk Neethling
First, thank you because this is a valuable post.
"This is, of course, problematic as CrashPlan is most likely running its backup while I’m actively working in my virtual machine."
This is a very interesting point, as this way of operating is common to other popular backup systems, such as Time Machine on the Mac, and it basically makes this kind of backup almost useless. Not only for backing up a VM, but for anything that doesn't commit changes atomically to the filesystem (which means everything, from Opera to Lightroom). Most people seem to just "trust" the coolness factor of the Time Machine UI, or the fact that "my backup does everything automatically", without considering these side effects. Usually you realize you didn't have a solid backup when you have to restore something.
Back to the central point, I hoped for the snapshots to be more effective, but found the same problems you describe. For me, VMs aren't that important; I mean, I use them for testing something for my customers, but there are no vital data inside. If something crashes beyond recovery, I just have to restart from a plain system, check out some code and run tests. That's why I'm perfectly fine with a manual, synchronous approach: I have created "clean" versions of Windows 7 and Ubuntu, applied the patches available at the time of creation, then zipped the .vdi and archived it (before zipping you can apply some tricks to minimize the size of the zip; for Windows see e.g. http://garethtuckercrm.com/2012/07/25/shrinking-virtualbox-vdi-files/). When I have to restart from scratch, I just unzip the image, change the file name of the disk image, change the UUID registered by VirtualBox, and I'm ready.
For a more frequent backup of vital data, I frankly don't see anything safer and more effective than manually running rsync (or something based on it) periodically, after stopping the applications. I back up my data (not only VMs) this way and I've been fine so far. rsync can do incremental backups, preserving the overwritten files for as many generations as you want. I usually do this during the lunch and dinner breaks, so it's like having a sort of automatic reminder (whenever I eat, I back up) and I don't have to pause my work. For backups requiring a longer time, you can launch them before going to sleep, adding something that powers off/freezes the machine when it's over.
For the record, I also have a Time Machine backup disk, mostly because I've got a bag of recovered 2.5" disks that I "need" to use in some way. It doesn't cost me anything to have this disk attached all day, and it's a sort of backup complement. It's mostly useful for special cases, such as when you delete a file, then you empty the trash, and then you realize you've made a mistake. It actually worked for me once, but as you can guess it's a fairly rare use case.
Serious backup of serious data today still requires discipline and manual care. Things would be different if we had a transactional file system. Unfortunately Apple seems not to care for Mac OS X; Linux users could try Btrfs; for Windows, I don't know.
Feb 04, 2013 · Mr B Loid
The Hamcrest hack is amusing, indeed. It taught me a good lesson the first time I wrote a custom Matcher.
Jan 17, 2013 · Tony Thomas
I must say I find this post pretty confusing. First you advocate against using "non standard" tools such as Lombok. Apart from the fact that the term "proprietary" is plain wrong, as Lombok is FLOSS, what is your definition of "standard"? I mean: once upon a time most of the Java ecosystem was made of official tools and APIs made by Sun. This has no longer been true for quite a few years, considering that there are plenty of things such as Spring, GWT, Guice, the whole huge bag of products made by the Apache Software Foundation, etc. So "standard" is not really a valuable criterion for picking software, while popularity, well-established communities, maintenance, etc. are the good criteria to use.
On the other hand, there are de-facto standards that help in reducing entropy and integrating things, and the getter/setter convention is just one of those. It's not only used by Spring or JSF, but by a whole bag of other products, such as JPA, Swing, XStream, and again here the list is long. We should give this up in exchange for what? The "get" and "set" semantics are quite clear, while e.g. the idea of having a property() method that returns the previous value is quite obscure (and useless: I can't think of a popular, useful use case). It's a good idea to allow method chaining - in fact many people do that with regular getters/setters - but again I'd advise against making it a standard, since you have to deal with a few important things, such as the choice of returning "this" or a cloned object. So I'd use chainable setters when needed, when properly documented, and with a specific naming convention (withProperty() is a good one).
I think it would be a good thing to have a language extension that allowed calling getters/setters with a shorter syntax as you propose (e.g. property = 5), but in this case the way getters/setters are named is not important, since it's hidden by the compiler...
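To illustrate the withProperty() convention (Person is a made-up class, not from any of the posts):

```java
// Chainable setters following the "with" naming convention: each setter
// returns the object so calls can be chained fluently.
public class Person {
    private String firstName;
    private String lastName;

    public Person withFirstName(String firstName) {
        this.firstName = firstName;
        return this;   // returning "this" mutates in place; returning a copy would give an immutable style
    }

    public Person withLastName(String lastName) {
        this.lastName = lastName;
        return this;
    }

    public String getFirstName() { return firstName; }
    public String getLastName()  { return lastName; }
}

// usage: new Person().withFirstName("Ada").withLastName("Lovelace")
```

The choice between returning "this" and returning a clone is exactly the design decision mentioned above, which is why the convention needs documenting.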
So I really can't understand your points...
May 07, 2012 · Mr B Loid
While I mostly agree with Mark, the basic points are different: fashion and deployability. Fashion is something we can't control, and fashion today is definitely with HTML 5. Deployability unfortunately is a problem, because while the Java Plugin has improved a lot in the past years, it's not yet optimized and hassle-free enough to stand a comparison with HTML 5. In other words, if your RIA must be accessible to everybody and "in a few seconds" (I mean, it must also capture random visitors), JavaFX is not an option. Also consider that Java is no longer preinstalled by default on Mac OS X and, even though things aren't completely clear to me, this has an impact on the effectiveness of Java Web Start.
Given that, I think that JavaFX is excellent where Swing was and is excellent, that is in industrial applications (where you don't have problems with deployment) or in ad-hoc situations (where users are committed to using a service and can tolerate some problems with the installation of the client).
As far as mobile devices are concerned, it's clear to me that the world is going to be dominated by Android and iOS for a long time, thus JavaFX can't play a role there, unless Oracle develops a sort of "adapter layer" reimplemented on top of Android widgets. I don't know how feasible/practical that would be.
Apr 11, 2012 · Tony Thomas
Given that we're commenting on regex maps, I've been using another home-made one for a while. It works for the specific projects of mine that need it, but I never validated it against the Map contract. I'd like to get some feedback while you're still hot on the topic ;-). It is not optimized with pattern compilation, but it does super.put() and has a shortcut for gets whose argument is not a regexp (which in my use cases happens 80% of the time). My impression is that designing an implementation that is optimized for a lot of generic cases *and* complies with the Map contract is a hard task.
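For reference, a minimal sketch in the same spirit - this is not my actual code, and like it, it makes no attempt to honor the full Map contract (a separate getByRegex() method is used to avoid even pretending to):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Keys are stored as plain strings via the inherited put(). Lookup first
// tries an exact match (the ~80% fast path), and only falls back to
// interpreting the argument as a regex when the exact lookup misses and
// the argument contains regex metacharacters.
public class RegexMap<V> extends HashMap<String, V> {

    // crude heuristic: any regex metacharacter present in the string
    private static boolean looksLikeRegex(String s) {
        return s.matches(".*[\\\\\\[\\]().*+?^$|{}].*");
    }

    public V getByRegex(String key) {
        V exact = super.get(key);            // fast path: exact lookup
        if (exact != null || !looksLikeRegex(key))
            return exact;
        for (Map.Entry<String, V> e : entrySet())
            if (Pattern.matches(key, e.getKey()))  // O(n), patterns recompiled each call
                return e.getValue();
        return null;
    }
}
```

Compiling and caching the Pattern per key would be the obvious optimization left out here for brevity.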
Mar 31, 2012 · Regina Obe
Well, it depends, even though I think you make most of the good points. The code reviewer might be continuously involved with the project, and thus could have an understanding of the business logic.
But it's very true that code reviewing is only part of the story, and you need other things, including comprehensive testing, to ensure a good quality product.
Mar 24, 2012 · ldz
In any case, CLASS retention has nothing to do with the fact that you don't need the jar at runtime. Even RUNTIME retention doesn't require the jar; if it's missing, it's just as if those annotations weren't there. See http://stackoverflow.com/questions/3567413/why-doesnt-a-missing-annotation-cause-a-classnotfoundexception-at-runtime
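A quick runnable illustration of what retention does control - visibility to reflection (the annotation names here are made up):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// CLASS retention: recorded in the .class file but invisible to reflection.
@Retention(RetentionPolicy.CLASS)
@interface ClassRetained {}

// RUNTIME retention: visible to reflection at runtime.
@Retention(RetentionPolicy.RUNTIME)
@interface RuntimeRetained {}

@ClassRetained
@RuntimeRetained
class Annotated {}

public class RetentionDemo {
    public static void main(String[] args) {
        // Only the RUNTIME-retained annotation shows up here.
        for (java.lang.annotation.Annotation a : Annotated.class.getAnnotations())
            System.out.println(a.annotationType().getSimpleName()); // prints RuntimeRetained
    }
}
```

The missing-jar behavior linked above is the separate point: getAnnotations() silently skips annotations whose class cannot be loaded, instead of throwing.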
Mar 16, 2012 · kriskrusher
The examples you and James have proposed so far are too borderline for a complete evaluation. Since James said this is just the beginning of a series of posts, I'm waiting for the next ones. In the meantime I can answer your question "how do you know that you're not querying the internal state of the object?".
The question is not relevant. It's obvious that the internal state of the object is still relevant, otherwise it would be useless. The point is about coupling: you're not coupled through the internal state of the object, but through the interaction that is written in the specifications. It's the minimal and correct way of coupling, since you can't do less, and it will change only when the specifications change. On the opposite side, if you expose the state you can potentially do everything with it. This means that TDA, in general, leads to more readable code, as it documents what's happening. If you read A's methods and you find getStatus(), well, who knows what's going to happen? If you read doSomething(), the situation is clear.
I've used TDA a lot in some open source projects and I can share and illustrate some code - still waiting to see the next posts by James. What I can say is that, from a practical point of view, there are some cases in which TDA doesn't change much. For instance, a TDA purist could assert that when I have to "print" the content of an object I shouldn't expose an asString() method (ask), but a renderTo(StringBuilder) (tell). In this specific case I say that there's not much difference: what you need is a String in both cases; generally speaking that string is not part of the state but just a function of the state, and when I say asString() it's quite clear what I'm going to do with it.
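A tiny made-up illustration of the two styles (the Monitor names are hypothetical, not from the posts):

```java
// Ask style: the caller pulls state out and makes the decision itself,
// so the "when to alarm" logic leaks into every caller.
class AskMonitor {
    private final int value;
    AskMonitor(int value) { this.value = value; }
    int getValue() { return value; }
    // caller: if (monitor.getValue() > LIMIT) alarm.run();
}

// Tell style: the caller hands the object what it needs and the decision
// stays next to the data; callers are coupled only to accept().
class TellMonitor {
    private final int limit;
    private final Runnable alarm;
    TellMonitor(int limit, Runnable alarm) {
        this.limit = limit;
        this.alarm = alarm;
    }
    void accept(int value) {
        if (value > limit)
            alarm.run();   // the threshold check lives in exactly one place
    }
}
```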
Mar 14, 2012 · Mr B Loid
Jan 11, 2012 · sunwindsurfer
Anyway, I prefer injection by constructor too, because it enables final fields and thus possibly immutable objects. BTW, with Lombok you can just annotate the class with @RequiredArgsConstructor and the compiler will automatically generate a constructor for all the private final fields that are not initialized. Thus, the thing is perfectly DRY.
The point is that sometimes you just can't use constructors, because of a framework that manages the object life cycle and doesn't support custom constructors. JEE seems to me the most evident case, but Android is another good example. Thus, I don't think we can avoid @Inject and stuff that does injection by reflection on private fields.
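Written by hand, what @RequiredArgsConstructor generates boils down to this (Registry and ContactService are illustrative names, not from the original posts):

```java
interface Registry {
    String lookup(String id);
}

public class ContactService {
    private final Registry registry;   // injected once, never reassigned

    // Lombok would generate this constructor from the uninitialized
    // private final field; it is exactly what constructor injection needs.
    public ContactService(Registry registry) {
        this.registry = registry;
    }

    public String describe(String id) {
        return "contact: " + registry.lookup(id);
    }
}
```

The final field is the point: the dependency can never be null or swapped after construction, which field injection by reflection cannot guarantee.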
Jan 09, 2012 · Mr B Loid
Well, splitting a not really long post into three parts doesn't look like a good idea... in any case I'm going to comment only here.
Honestly, I'm a bit tired of still reading posts about how bad Maven is, with the very same old list of problems that can be fixed by applying a couple of good practices, mainly the superpom. It seems that people use Maven for a few days, don't look around enough, and then blog the same things over and over.
Very quickly: "locking the version of a plugin" is not a problem; it is instead a good practice, otherwise you can't have repeatable builds. Can you figure out what happens if you try to rebuild your project from a tagged commit of 6 months ago and in the meantime a number of plugins have been updated? So, you have to explicitly declare the versions of every plugin that you use. Not by chance, Maven warns that in the future this practice will be mandatory, and if we have to criticize something, it is that they didn't do it from the beginning.
Second: every long configuration section must go in the superpom, using properties for configurability. To change the Java version in all of my projects I only have to set two properties. I also have complex configurations for Vaadin, Jetty, AspectJ, Android, the NetBeans Platform, whatever, in my superpom, and hardly ever need another plugin configuration section in any of my JEE, JSE, Vaadin, Android or NetBeans Platform projects. My customers using my superpom share the same benefits and their POMs are very small as well. All our projects thus share the same structure, conventions, release cycles and so on.
Copy-pasting a POM? Sure, people not skilled with Maven tend to do that, but in general people not skilled with a technology tend to misuse it. It's the project leader's responsibility to explain best practices and teach them.
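For example, a superpom fragment along these lines (the plugin version and property names are just illustrative) locks the version once and leaves the Java level configurable from child projects:

```xml
<properties>
    <javac.source>1.6</javac.source>
    <javac.target>1.6</javac.target>
</properties>
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <!-- locked version: builds stay repeatable over time -->
                <version>2.3.2</version>
                <configuration>
                    <!-- overridable per project by redefining the two properties -->
                    <source>${javac.source}</source>
                    <target>${javac.target}</target>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```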
Jan 09, 2012 · Olaf Lederer
Jan 08, 2012 · Olaf Lederer
Timon, this is very interesting. I've been using JPA on the NetBeans Platform too, even though through these years I've dealt with the "production" side of the problem only in a limited way. What I mean is that in these years I've consulted for customers using the Platform with JPA, but only for short-to-medium periods (up to a few months); I've been using JPA for years with my own applications, but I lack a large base of measurements.
In any case, I too was convinced that JPA probably doesn't perfectly fit a desktop scenario: one of the issues was exactly the point of a bean always changing state. A few years ago I tried experimenting with an alternative approach (*): by using CGLIB etc. I tried to have a "stable" representation of the bean, not JPA-managed, backed by some "hidden" JPA beans during transactions. It looked promising when I started, but after some time I got trapped in a cul-de-sac. It got too complex, and some corner cases emerged which persuaded me to give up on this approach (unfortunately the thing still lingers around in some code).
The thing I'm trying now is agent-oriented design. Since it leads to a model where the only shared data is immutable, it would eradicate the issue of beans being attached and detached and of PropertyChangeEvents being out of sync. But I've just started with the experimentation and I'm only now going to add transactions to the recipe.
(*) Actually my alternative approach wasn't with plain JPA, but with the persistence manager of OpenSesame, which is an RDF framework. But the concepts of transaction demarcation and attaching/detaching beans are very similar.
Jan 05, 2012 · Cal Evans
I'd say yes, with some annoyance. This morning I embedded the Argyll CMS binaries into the application. I first updated the jarbundler sections as follows:
This just provides stuff to jarbundler for its three embedding areas: Java, resources and executables. The last section is important because everything in it gets the Unix executable attribute, which is stripped from the other sections.
So I just needed to add this in the application POM:
It retrieves the binaries (previously published to a Maven repo) and unpacks them where jarbundler expects them.
It works, but every file from Argyll is marked as executable (including README.txt and other stuff). Not an issue per se, of course, just an annoyance. It's also related to the fact that Argyll CMS is not my project, it's AGPL, and for the sake of correctness I prefer to bundle the whole distribution as is (without this requirement I could extract stuff selectively, e.g. only the binaries; or put the binaries in one folder and all the other stuff in another, etc.).
Argyll doesn't contain links: I don't know how jarbundler manages them. But I don't think a JDK bundle contains them.
For the record, Apple is working on a bundlable JRE artifact: see http://java.net/jira/browse/MACOSX_PORT-105.
Dec 14, 2011 · Gerd Storm
Dec 08, 2011 · Mr B Loid
That is, Eclipse, which offered the worst Maven support, has improved considerably. True. But NetBeans and IDEA have had excellent Maven support for a long time, so all the major IDEs have given Maven a good push.
Nov 07, 2011 · Jacob Eyce
I could be wrong, but I don't think you need Spring MVC in order to activate @PostConstruct, hence the dependencies you mentioned on Spring Web and Servlet. AFAIK, <mvc:annotation-driven> isn't the only way to go: <context:annotation-config> is enough, and it only requires a dependency on Spring Context.
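For instance, a minimal application context along these lines (a sketch; the bean name and class are made up) should be enough for @PostConstruct callbacks to be invoked, with only spring-context on the classpath:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- registers the post-processors that honour @PostConstruct, @PreDestroy, @Autowired, etc. -->
    <context:annotation-config/>

    <!-- hypothetical bean with a @PostConstruct method -->
    <bean id="myService" class="com.example.MyService"/>
</beans>
```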
Nov 07, 2011 · James Sugrue
Nov 01, 2011 · Mr B Loid
I think that Git is the most popular among DSCMs, but I am definitely a Mercurial guy. The two systems today can do the very same things (including branching, even though there are still a lot of people who believe that in order to branch with Mercurial you have to create a separate working area), but Mercurial is much, much simpler to understand.
CVS and SVN are still very popular in corporate environments. CVS is a real problem and I always push my customers to leave it and pick SVN, at least. A DSCM can be overkill for most corporations, as Andrew said, and the complexity wouldn't pay for the advantages. So, staying with SVN is fine.
Still, there would definitely be some advantages for a corporation in using a DSCM, even when nobody works offline. One example for all: the release process with Maven is definitely smoother with a DSCM, since thanks to the separation between commit and push you can easily set up a workflow with a transactional release (I mean, either everything goes fine, or the release fails with no consequences).
Oct 28, 2011 · Gerd Storm
Oct 14, 2011 · Mr B Loid
Before saying that I mostly disagree, I think a preamble is necessary. "Evil" and "good" are often misused in our world. They are fine for a catchy title or for telling things with some humour, but "appropriate" and "inappropriate" are better terms. In other words, there's no black and white, but a scale of greys. There's no "true OOP" in practice; you can do things in many ways, paying some costs and getting some benefits. The important thing is that you are aware of your choice, that is, of what you are going to pay and what you are going to get, and that you are consistent.
The basic point is not the danger of exposing internal state; it's depending on something public. For instance, if I have a Person with firstName, lastName and age properties, and I use them by means of getters, my whole application potentially depends on those three properties. If any of them changes (say, it disappears), you have potentially broken code everywhere. Of course, it's unlikely that firstName or lastName disappears from a Person, so in this very example one might assume the risk of going with getters/setters is low. But think of a more complex domain where properties are more likely to change.
In that case, the risks might be higher. What I'd do is not expose those properties as public, but as package friends. If I have to render a Person, e.g. on a web page, I'd write a PersonRenderer with a method render(Person, WebPage) [by WebPage I mean whatever infrastructure a given web technology provides to render something] which does the job. Should I send an email to that person, I'd use a similar friend class, PersonEmailer, etc. With this approach I know in advance where the dependencies on the structure of Person are, and I can predict what impact any change in the structure of Person will have (in this case, a very small one).
Is it better or not? Sure, it's more complex, because I have more classes (even though they are very small and simple). I've written an Android application using this approach and, once you've understood it, it's not that difficult. But I agree, it's not necessarily always the best solution. See my preamble.
As far as DTOs in DDD are concerned, there's a way to fix the anemic object problem which also makes it possible to work without getters/setters: DCI. Google for DCI and read the Artima article for a quick introduction. In short, each entity gets decomposed into an anemic object (what would be the DTO, called a Datum) and a number of active classes (Roles) operating on it. Roles are not services, since there's one Role instance per Datum. Without getters/setters the Datum basically has no methods. In a simple fashion, I'd have person.getRenderer().renderTo(renderable) for rendering it, or person.getEmailer().sendEmailTo(recipient). If I am concerned about decoupling dependencies, I can take a dynamic approach such as person.getRole(Renderer).renderTo(renderable) [or a syntax that I prefer, person.as(Renderer).renderTo(renderable)], and Roles can be dynamically injected in a number of ways. To deal with frameworks such as JPA that absolutely want getters/setters, I can have person.as(Persistable).persist(), where the specific implementation of Persistable would copy the internal state of Person into a JPAPersonBean and do the job transparently. You see that I get another advantage here: I can replace the persistence mechanism as I want.
This is very powerful, very extensible, very robust. It costs a bit more than a plain design with getters/setters. In a way, I think it's a very appropriate way to use OOP, since the Single Responsibility Principle is respected to a very fine grain. In some cases, though, it can cost too much for the purpose.
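A minimal sketch of the role-lookup idea in plain Java (the names Person, Renderer and as() are illustrative, not a real framework; here the roles are registered by hand rather than injected):

```java
import java.util.HashMap;
import java.util.Map;

// The "Datum": an anemic object with no behaviour and no getters/setters.
class Person {
    final String firstName;      // package-private: visible only to friend classes
    final String lastName;
    private final Map<Class<?>, Object> roles = new HashMap<>();

    Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // Roles could be injected in a number of ways; here they are just registered.
    <T> void addRole(Class<T> type, T role) {
        roles.put(type, role);
    }

    // person.as(Renderer.class).render() style of dynamic role lookup.
    <T> T as(Class<T> type) {
        return type.cast(roles.get(type));
    }
}

// A "Role": one instance per Datum, holding the behaviour.
interface Renderer {
    String render();
}

class PersonRenderer implements Renderer {
    private final Person person;    // friend access to the datum's package fields

    PersonRenderer(Person person) {
        this.person = person;
    }

    @Override
    public String render() {
        return person.firstName + " " + person.lastName;
    }
}
```

Usage would be `person.as(Renderer.class).render()`; swapping the registered role replaces the behaviour (e.g. a different persistence mechanism) without touching Person.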
Oct 14, 2011 · James Sugrue
Oct 13, 2011 · Mr B Loid
As has been said, there's no point in calling for a comeback, since Java is still #1. Ruby & co. are very small niches, and yes, the enterprise is still using Java for its apps. We shouldn't think the world is made only of the cool things that get blogged about.
Re: Python, I've got a question for Rick. I've recently run into OSQA, which you DZone guys are involved with. I met it at answer.atlassian.com and I think it's great. I'm planning to use it to replace JForum for my software products (in particular, an Android app which has thousands of users). My only regret is that it's made in Python - don't misunderstand me, I'm not a fanatic and I don't bash good software because it's not in Java. It's that I'm a one-man company, I personally manage my servers, and Python/Django are currently not part of my skill set. But it's likely I'll go that way.
In any case, I first searched for an equivalent web app made in Java, which would simplify my job. I found that you have Qato, made with JEE - but it's not free, so I'll probably stay with OSQA. Still, I think it's an interesting point that OSQA, free and not labeled "enterprise", is made in Python, while Qato, labeled "enterprise", is made with JEE. Is it only marketing, or are there technical reasons?
Oct 11, 2011 · Gerd Storm
The comments above seem to miss the point that Google made, that is, that JavaScript is supposedly unable to scale to large projects and has intrinsic limitations on the speed a JIT can achieve. I don't know whether the two claims are true or just an excuse for Google to push a new product of its own. The fact that one of the two Dart authors is the guy behind V8 at least hints that they know what they are talking about.
So, instead of discussing yet another language, it would be more interesting to understand whether the two claims made by Google are true.
Oct 07, 2011 · Mr B Loid
Oct 06, 2011 · James Sugrue
Oct 05, 2011 · Mr B Loid
I'm definitely not an expert on this type of optimization, but clearly this is proof of how easy it is to misinterpret a microbenchmark. There are good explanations in the previous comments - I'd add that it would make more sense either to run the two tests in two different runs (I mean, restarting the main()), or to start measuring time in the middle of the loop, after a good number of iterations has been executed and the optimization has "warmed up". But I can't prevent myself from thinking that such microbenchmarks are pointless. In a real application a lot of complex interactions with the JIT optimizer occur. I think it only makes sense to benchmark the real thing.
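To make the warm-up point concrete, here is a sketch of the second approach: run the workload many times, discard the early iterations, and only start the clock once the JIT has had a chance to optimize (the workload itself is just a placeholder for the code under test):

```java
public class WarmedUpBenchmark {
    // Placeholder workload; in the real case this would be the code under test.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up: give the JIT time to compile and optimize the hot path.
        // These iterations are deliberately not measured.
        for (int i = 0; i < 10_000; i++) {
            workload(1_000);
        }
        // Only now start measuring.
        long start = System.nanoTime();
        long result = 0;
        for (int i = 0; i < 10_000; i++) {
            result += workload(1_000);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("result = " + result + ", elapsed ns = " + elapsed);
    }
}
```

Even with the warm-up, the numbers remain microbenchmark numbers; they say little about how the same code behaves inside a real application.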
Sep 16, 2011 · Mr B Loid
Sep 09, 2011 · $$ANON_USER$$
Sep 09, 2011 · Ladislav Gažo
Sep 08, 2011 · Jacco van Weert
Martin, I've read your code example since you posted it, and I didn't comment so as to stay focused on the point of performance. From the API design point of view, I feel even stronger objections :-)
First, your code example actually demonstrates that String.split() is poorly designed - from one perspective, read below - because it returns an array rather than a collection of strings. If it did, the body of the loop would be the single line:
lines.add(string.split());
which is rather terse and easy to read. BTW, there are a lot of reasons why String.split() returns an array, mainly related to the fact that we're talking about a very specific API, the language runtime: String is a "sort-of" primitive type and I see issues in making it depend on a concrete List. This specific case apart, I'd say that generally speaking the only reason for returning an array rather than a Collection is... premature optimization. ;-)
More generally, if you're designing an API and you let the API users decide what to do... you're going to hurt yourself badly. You can have dozens of different users wanting different things, and if you're not able to *impose* your style, you'll likely run out of control. Of course, it is to be expected that a single style of API doesn't fit all needs.
In any case, as far as the API discussion goes, the strongest argument against passing in a collection to be filled is that you're exposing your internal state - as already said in a previous answer. I haven't tried, but I think there's a chance FindBugs would warn about it as a bad practice, much in the same way as when you expose a mutable object through a getter method.
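For illustration, this is the usual dance to go from the array that String.split() actually returns to a collection (the input string and delimiter are just examples):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitExample {
    public static void main(String[] args) {
        String line = "alpha,beta,gamma";
        // String.split() returns a String[], so a wrapping step is needed
        // before the tokens can live in a collection.
        List<String> tokens = new ArrayList<>(Arrays.asList(line.split(",")));
        System.out.println(tokens);   // prints [alpha, beta, gamma]
    }
}
```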
Sep 08, 2011 · Jacco van Weert
+1 to Mladen, Eric, Mason. I'd add that the reason for being worried about premature optimisation in Java is that since 1.5, more so in 1.6, let alone 1.7, the GC behaviour is so complex that it's basically unpredictable in most real-world scenarios. Matt, you didn't say anything wrong about small/large objects and their ability to fit in Eden, etc. But this is qualitative reasoning. In practice things are much more complex and probably depend a lot on the configuration (heap size, VM, VM options, etc...).
Frankly speaking, within my network of direct and indirect contacts there is only a single person I trust as someone who really knows the inner workings of the GC, and that's Kirk Pepperdine, an expert in Java performance (of course, the people at Oracle developing the GC know too, but I don't interact with them). And Kirk is the first one to warn about premature optimization.
So, in the end, I'm not saying that I'd never do the things Matt proposes. It's that I wouldn't start coding like that; once I have a codebase large enough to start running performance tests, I'd measure and eventually change the code when measurements prove there's an advantage. And I'd redo those tests again and again, since it's not unlikely that a further refactoring wipes away the whole need of splitting that array of strings, thus eradicating the problem at the root (with some chance of creating another performance problem elsewhere).
Sep 08, 2011 · Jacco van Weert
Sep 07, 2011 · Mr B Loid
Generally speaking I agree that everybody should only use stuff they're comfortable with, but it's always worthwhile to make a rational check of our feelings. The dependency doesn't "come from nowhere": it comes out of the @Configurable contract. It's just like the EntityManager of JPA being injected because of @PersistenceContext, or any other object injected because of an annotation: all contracts declared by an annotation and enforced by a container. None of those objects is a "normal" object in the sense you're referring to.
True, Spring is not a "standard", in the sense that there's no specification etc... So what? I'd only be worried if it was an obscure product made by an unknown community... which is not the case.
Given that, people are of course free to choose not to use Spring and stay with JEE, which brings us back to your point, where you propose JPA enhancements. Given that Spring and JEE cross-pollinated a lot in the past, I don't understand why we should reinvent the wheel with unneeded complexity, given that the @Configurable approach works. A single annotation on the class is enough for a contract specifying that @Inject must be honoured for JPA entities too. As for the implementation, if one doesn't want to depend on AspectJ, it could be done with an annotation processor, which is standard in Java 6.
Sep 07, 2011 · Mr B Loid
Aug 30, 2011 · Tony Thomas
Jul 07, 2011 · Mr B Loid
"The problem is this: the maven release plugin doesn’t really work for continuous delivery."
Well, yes. That's right, but it's about as obvious as the fact that you can't use a hammer as a screwdriver ;-) The maven-release-plugin is strictly tied to the concept of working with snapshots and cutting a release every once in a while (maybe even many times a day, but always starting from a snapshot).
If you don't have the concept of a "snapshot", you should just not use the maven-release-plugin. BTW, what would work for you? :)
Jun 22, 2011 · Farrukh Najmi
May 17, 2011 · Gerd Storm
Hmm... I've been a bit surprised by this post. On one hand, I started reading it with disagreement, given the title and the preamble. On the other, I agree with its contents. Probably it's a matter of wording, and of how to interpret that "Driven".
So, I agree with the general sentiment you express, in particular the idea that when, in order to spot regressions in low-level code, you have to build and run the whole app or large parts of it, we're paying too much to get too little. What I don't buy is that the smell of a bad practice is precisely marked by the Integration / Functional test boundary. I mean, there are cases in which an Integration / Functional test might be so simple that it's not too expensive for the purpose I'm running it for - consider, for instance, an Android application. And this is not an exclusive case: a well-designed, modular application, even on the desktop or the server, can be split into independent components, each one still simple enough that its Integration / Functional tests are not too expensive to run.
On the other hand, I find it very expensive to have a whole layer of pure Unit tests covering all the code. And thinking of "Driven", most of the time my TDD code starts with a functional test derived from a scenario / user story, which leads to the initial design of a number of classes at the same time. Of course, as they take shape, the test is likely to be split into simpler ones, but not necessarily Unit tests.
In other words, while I agree with this post, I wouldn't put the focus on the Unit / Integration / Functional boundary, but rather on the concept of continuously evaluating how much a given test costs, and how much it delivers.
Dec 02, 2010 · Bayarsaikhan VOLODYA
Nov 29, 2010 · James Crowley
Nov 26, 2010 · Ron Pressler
Nov 24, 2010 · Mike James
Nov 16, 2010 · Amy Russell
Nov 16, 2010 · andrew wulf
Nov 15, 2010 · Mr B Loid
Nov 15, 2010 · Chris Hardin
Nov 12, 2010 · James Sugrue
Actually, we don't know whether this came out of listening to the community. It could be that Oracle acted on its own, and possibly Apple's public "deprecation" news was part of a negotiating strategy (one that could have started much earlier than we know). In fact, we don't know the details of the deal.
As far as the sanity of our teeth is concerned, the solution is obvious: of course we start commenting on news within five minutes, but we shouldn't get too worried about the consequences for a while, nor think that the lack of an immediate communication from some corporation means a complete lack of strategy / interest / whatever.
Of course, this is good news and closes a very old problem in the community. It demonstrates that having a strong steward is better than having a weak one... at least for some things. Of course, we pay for Oracle's strength in other ways (see Apache etc...).
Nov 07, 2010 · Alex Miller
Nov 06, 2010 · Alex Miller
Oct 29, 2010 · Raw ThinkTank
Oct 27, 2010 · Chris Miller
Oct 26, 2010 · Tony Thomas
Oct 25, 2010 · Lebon Bon Lebon
Oct 21, 2010 · Giorgio Sironi
Oct 14, 2010 · Mr B Loid
@Andries I'm not aware of any limitation of OpenJDK to be used on a mobile environment. What does make you think that it's impossible?
@Stephen Losing Harmony is a bad thing. We're surely less free, but not totally unfree with only OpenJDK left. It remains to be understood whether this residual freedom is enough or not. I'd like to see a serious assessment of a scenario suggesting it's not enough before getting worried.
Citing the multiple implementations of JME doesn't sound reasonable: they caused the fragmentation (both in the runtimes and in the decision processes) and the very bad shape that JME is in today.
Oct 13, 2010 · Tony Thomas
"no primitives"
And this would be "recognizable" Java?
Oct 10, 2010 · Satori Singularity
Oct 08, 2010 · Stefan Koopmanschap
Oct 06, 2010 · Greg Luck
Oct 05, 2010 · Greg Luck
Sep 27, 2010 · Mr B Loid
Sep 23, 2010 · Mike James
Sep 20, 2010 · Ida Momtaheni
Agreed - we're discussing what happens in a minority of cases. Re: javadoc, it's not an option. Javadoc is needed, and it's good for readers to understand what's going on; but for pure code readability, javadoc/comments must be the last resort.
Sep 20, 2010 · Tim Boudreau
Sep 17, 2010 · Mr B Loid
I think we can't trust you, because there are a number of people who are using it. It's a bad way to start a wish list, hoping that some technology that you "don't understand", but others do, goes away. Furthermore, there's no logic in your wish: the times of a Sun with no money, when working on one project meant stealing resources from another, are over. If Oracle works on something, it's committed to it.
Sep 15, 2010 · Yuri Filimonov
Sep 15, 2010 · Luigi Viggiano
Sep 09, 2010 · Mr B Loid
Aug 29, 2010 · Amy Russell
Thanks for enlightening us about Oracle only wanting to make money. I had forgotten that Google, with Android and the rest of its business, is instead only pursuing the Good of Mankind, because they don't do evil. How silly of me.
Now, back to the point: can you explain to me why Google "is right" in boycotting JavaOne? Just stop for a moment from being a fanboy and think about the net effects of this move:
So, I've just backed my point: the only people harmed by the boycott are the attendees and the members of the community.
As for the 80/20 split of the community, I disagree. In fact there are some people, including me, who, when a dispute between corporations bursts out, don't immediately start screaming their partisanship for one party and against the other; instead, they wait and see in order to understand what's really happening.
Aug 27, 2010 · Amy Russell
Thanks for being logical, Reza. :-) We need that to counter the ever-spreading FUD and irrational attitudes.
Back to the point, there are a few things that hurt me. In particular, I'd like Google to speak clearly. Instead of saying "We can't participate at JavaOne 2010", I'd like to read: a) Oracle is practically preventing us from speaking; b) our lawyers told us that it would be risky for the company if we spoke; c) we're boycotting JavaOne.
If a) is true, then shame on Oracle. If b) is true, then shame on the legal system, but we can't do anything about that. In any case, I don't think b) can be entirely true - Bloch is also saying "we're searching for alternate venues to speak", so speaking in the open doesn't sound like a risk. If it's c) - sorry, but I say shame on Google. They would be treating us, attendees and members of the community, as human shields in their war.
In any case, this makes me think that in the future we should support independently driven conferences (such as Devoxx, Jazoon, JAX, etc...) more and more, rather than JavaOne or Google I/O.
Aug 27, 2010 · Tony Thomas
I think that Odersky arguments are blatantly biased, as usual (*). Take the smartphone example. In the world the choice is not restricted to the two extrema, a morse equipment and a smartphone. There are many intermediate phones in the middle, such as those only with a set of fundamental set of features and a decluttered user interface.
Now, what's a phone for? To call people and speak. Compare two persons, one enjoying his smartphone and one enjoying his normal phone, and see them calling a friend. I don't see but marginal differences in how they place the call; after that, what matters is what they have to say to their friends (a metaphor for good architecture and design practices).
Of course, smart users will enjoy the many things that a smartphone offers. No doubt on that. But they are just a minority. Average users will instead be confused by the more complex user interface. My parents - and a lot of other people I know - find it very hard - if not impossible - to place a call with a smartphone.
Not to say that a big deal of people are buying a smartphone for fashion, and don't use but a fraction of the features it offers - a waste of complexity.
So, the smartphone example is perfect for my point: Scala is more powerful and fit for a minority of experienced programmers that can handle its complexity; Java is simpler, less powerful and fit for the average programmer.
(*) I find that one big issue with the Scala community is that - with exceptions, of course - they present only biased arguments. While it's clear that everybody is biased, I think it's smarter for someone who wants to push a new idea to play the devil's advocate and try to wear other people's clothes. Otherwise they won't win anybody over - no big surprise Scala is still where it was years ago.
Aug 26, 2010 · Tony Thomas
Aug 26, 2010 · Alex Miller
Thanks for the review, James, but there's still a point that I can't understand, and that you can't help me with. In fact you're a professional, experienced and talented developer, while the Inventor is - AFAIU - targeted at less experienced people, and possibly even non-technical people. Thus I'd rather have feedback from that kind of people.
My biggest doubts are related to the fact that to have quality apps you need testing. This is not only a tooling problem (does Inventor have any support for testing?), but a cultural problem. Testing is underestimated even by professionals, let alone non-professionals.
My point is not about the fear of a wave of low-quality applications that could strike us, as some think. Indeed, there are probably already tons of low-quality apps around, and customers make the selection. My point is that Inventor will indeed add tons of low-quality applications that will just gather dust among the sub-50-downloads stuff. So no harm for us, but what's the supposed added value of those things? I only see an increase in the number of apps in the Market, a number that is good for marketing people at Google, but there's no beef in it.
Making a real effort to be positive and thinking of my own experience and app ecosystem, I could imagine some of my more advanced users developing a very simple extension of my app, for very specific needs. It could take advantage of the powerful Activity/Intent integration facility of Android - but, hell, how many apps really support this kind of integration? How many document it in a way that is understandable by non-technical people?
Aug 26, 2010 · Tony Thomas
This doesn't contradict my point: the burden of implementing lambdaj is on you :-P the implementer, and doesn't necessarily fall on me, the user. In fact, it's important not only to clearly distinguish power from complexity (and I totally agree with your example about signal processing; complexity is a matter of entropy), but also to distinguish the different perspectives of the provider of a tool/library/framework and its user. I've only looked at a few things about Lombok internals, and I don't know how hard it is to develop; I only know that, from the user's perspective, I just have to add some annotations with very simple and clear semantics.
This matters because we don't need millions of tool/library/framework developers, but a much smaller number (thousands), and the community is rich enough to provide them. On the contrary, for the success of a language/runtime/tool/library/ecosystem, it must make millions of developers feel good.
This is a good objection to my statement; clearly it's a matter of trade-offs, and our personal mileages differ.
Aug 20, 2010 · James Sugrue
Aug 15, 2010 · Tony Thomas
I think there's a big misconception about what's happening. If I use OpenJDK, I can freely use the security and class packaging features that are part of this complaint, because the OpenJDK GPL license guarantees me protection from patent infringements. If I developed something that is not Java and uses those security and class packaging features, I would be at risk of being sued.
I agree that software patents are flawed. But it's not true that large amounts of FOSS projects are at risk. Things that are built upon regular Java aren't, at least not for the reasons we're discussing.
Edited to add:
There are many things about this story that we don't understand yet. I think that before passing any judgment we have to wait and learn more details.
Aug 14, 2010 · Krishna Srinivasan
Aug 13, 2010 · Tony Thomas
Mark, the copyright infringement claim comes after all the patents and doesn't explain where the copyright infringement occurred (it refers generically to Java source code, documents, specifications). Until this is clarified, I don't see the point. Also because, if anyone were at risk, it would be people who build something parallel to Java, not people who develop a framework on top of it.
I don't see that it always makes sense to ask "what's next", beyond Google. The patents are about using a certain kind of VM, and if one were to be consistent, Oracle would have to sue Microsoft and every other corporation that makes use of a VM. Maybe I'm wrong, but I don't think Oracle is going to declare war on the world. I think it's only acting against Google. I've read so many doomsday warnings in the past ten years that, frankly, I prefer to see some more details before declaring a disaster.
Aug 13, 2010 · Esther Schindler
Aug 13, 2010 · Zac Roberts
Answer (the names of the allegedly infringed patents):
“Protection Domains To Provide Security In A Computer System”
“Controlling Access To A Resource”
“Method And Apparatus For Preprocessing And Packaging Class Files”
“System And Method For Dynamic Preloading Of Classes Through Memory Space Cloning Of A Master Runtime System Process”
“Method And Apparatus For Resolving Data References In Generated Code”
“Interpreting Functions Utilizing A Hybrid Of Virtual And Native Machine Instructions”
“Method And System for Performing Static Initialization”
Aug 13, 2010 · Tony Thomas
For the record, I've copied the single patent claims from the cited document:
“Protection Domains To Provide Security In A Computer System”
“Controlling Access To A Resource”
“Method And Apparatus For Preprocessing And Packaging Class Files”
“System And Method For Dynamic Preloading Of Classes Through Memory Space Cloning Of A Master Runtime System Process”
“Method And Apparatus For Resolving Data References In Generated Code”
“Interpreting Functions Utilizing A Hybrid Of Virtual And Native Machine Instructions”
“Method And System for Performing Static Initialization”
As has been said, there's no GPL or similar infringement. If there were, Google would be sued for license infringement, not for patent infringement. As has been said, those patents are really about low-level VM stuff, and Oracle could probably sue everybody about that, including Microsoft over C#. And Java has little to do with it, since it's Dalvik that is under attack. So Google would not save itself by moving away from Java.
I think those patents are just a casus belli for Oracle, and I think they will eventually reach an agreement. It's just another step in the Google - Sun war, continuing under Oracle.
Aug 12, 2010 · Mr B Loid
Aug 04, 2010 · Johannes Ernst
Jul 29, 2010 · Felipe Lang
I don't understand how Pivot could be responsible for the freeze. As has been said, I rather think that it's a JVM issue. Security is probably the toughest point (again, of the JVM in general, not of Pivot). For sure, it would be simpler for me to leave home for my holidays without bothering to secure all the windows and doors, but I don't think it's a good idea. For mobile code, so far I only see three possibilities: 1) do nothing or just a few things (so you can stay in the default sandbox); 2) ask for confirmation for things that you need to do; 3) run in a completely unsecured box and wait for the disaster to happen.
Unless some genius brings an innovative approach, it's more a social than an engineering problem. It's true that there are pervasive technologies that are less intrusive. For instance, Android apps don't ask for confirmations - they just declare the permissions they need. Users are supposed to double-check what permissions they are about to grant, but I frankly think that 90% aren't even aware of what's happening. So far no major disaster has happened... but Android is young. In any case, Google has control of the distribution channel and the kill switch, so one might suppose that a large security disaster would be detected early and stopped. JVM applets can't do this.
Jul 29, 2010 · Tim Nash
I beg to disagree strongly. SQLite is used by: Firefox, Thunderbird, Apple Mail, Safari, Google Chrome, Apple Aperture, Adobe Lightroom, iTunes, the iPhone/iPad/iPod, Android, perhaps even Skype, and as far as I know even some high-end Symbian phones. SQL is pretty pervasive, and thanks to the iPhone and Android it will be more and more so in the future. This doesn't surprise me, because desktop and mobile applications are more and more demanding, must crawl high volumes of data, and people appreciate the consistency. Now it's really a matter of using the proper words: I greatly appreciate the ACID part of SQL databases. What I don't like is precisely SQL and the relational data model. That's why I'm constantly repeating that NoSQL means at least two families of use cases.
Jul 29, 2010 · Felipe Lang
As somebody pointed out during the JavaFX discussion of a few days ago, one of the still unresolved problems is the imperfect VM deployment even in the latest Java 6 releases, at least in some contexts. This clearly affects all VM-based technologies, including Pivot. Also, JavaFX has been criticized for the still-rough integration between developers and visual designers; this is an area that Pivot doesn't even address. So, if JavaFX is not considered a viable alternative to Flex, I don't see how Pivot could be...
Of course, where JVM deployment and graphic designers are not a problem (as is normal for applications used in industrial contexts), Pivot could be a valid alternative for people who don't like the regular Java APIs. But in those contexts regular Java applets using Swing are perfectly fit for the problem and can deliver applications of really high complexity.
Jul 28, 2010 · Tim Nash
The following statement doesn't make much sense to me:
Well, I don't do Procedural - I might do Functional, but I don't; in any case I don't see why I should hate it - I no longer do Assembler, nor shell script. I don't like HTML 5, etc.
But independently of my or others' personal preferences, SQL is different from the other technologies because 1) it has been there for decades and 2) it creates one of the biggest problems around, the OO/RDBMS impedance mismatch. So there are objective reasons, IMHO, for SQL to attract most of the "hate".
Of course you're right that people shouldn't jump into the dark and, above all, not because of fashion. I suppose there are architects in the world who go NoSQL with their heads screwed on and others who don't - but this applies to many other architectural choices too; we're just saying that there are smart people as well as dumb people around.
My personal take on NoSQL is that I don't like static RDBMS schemata - so when I can, I go with an RDF store, which in turn relies on an RDBMS. So I have classic ACID, but I no longer see SQL and can enjoy a persistence model which is closer to my "concepts". I reckon that this is a minority view of NoSQL and that most people talk about the filesystem, which is a more drastic change.
In the end, it might be hype as well, and perhaps in a few years it will be over (I don't think so, BTW - probably there will be a selection and many of the current products will die, but a few will survive and succeed). But it's good that after decades people look around and try to discover whether it's possible to change an old way of doing things.
Jul 28, 2010 · Krishna Srinivasan
Jul 27, 2010 · Brian Reindel
You can use the very small EventBus designed for the NetBeans Platform. It's a very thin piece of code and only depends on the simplest module of the NetBeans Platform (org-openide-util). Indeed, since the recent release of NetBeans 6.9, it can be made to depend on a new, smaller split module (org-openide-lookup), which has shed a lot of useless dependencies and AFAIK (I haven't upgraded to it yet) shouldn't depend on Swing stuff. Please let us know whether you like it or not (and why). Thanks.
PS For the record, if you aren't acquainted with the NetBeans Platform, don't worry about the .nbm modules. From the Maven repository you can depend on plain .jar files as well.
Jul 23, 2010 · Mr B Loid
I'll add that plain Swing is not good on Mac OS X, but things vastly improve when you use the Quaqua L&F (which is a L&F for Swing, not SWT). You even get "modal sheets" (or whatever Apple calls them) and the proper Mac OS X file picker. It requires some work, of course, but it can be done. So, saying that Swing WORA doesn't work is false - it just requires, unfortunately, more work than the WORA promise led you to expect.
Furthermore, there are lots of third-party libraries for Swing, starting with SwingX just to make an example, that make customers happy because they feel they have a reasonably complete set of widgets to use.
Frankly, I completely understand Edvin's point of view, since it's what I experienced in 2005, when I first convinced myself to rewrite a Swing application for Eclipse RCP, started studying it, and later gave up because I didn't see what I wanted. Things worked fine with NetBeans RCP, and of course I had to do less work since I didn't have to abandon Swing. Note that at the time I wasn't such a strong supporter of NetBeans; I'd been using Eclipse since 2001 (probably); in the past I even refused to use the NetBeans IDE for some Sun projects I led (because, in addition to my own preference for the Eclipse IDE at the time, even my developers voted for the Eclipse IDE), so there was no prejudice.
Furthermore, when I switched to Mac OS X and went through Apple's continuous changes of native graphics facilities (Carbon, Cocoa, ...), I got pretty frustrated by the constant problems that Eclipse had with them because of SWT (sometimes I still need to use Eclipse if a customer asks me). I had almost zero problems with Swing. While SWT may have been good in the old times, I now see it only as a source of problems and no relevant advantages.
Finally, I find it very good that I can choose between the NBM module system and the OSGi module system. The former is pretty good for me and I use it when I don't have any specific requirement. I should add that the fact that it works well is demonstrated by the ease of upgrading and installing plugins in the NetBeans IDE. On the contrary, plugins have always been another source of trouble with Eclipse - and I keep hearing people complaining about broken Eclipse plugins all the time, at work and at the JUG, even people who still prefer the Eclipse IDE to the NetBeans IDE. Thus, for me there is no doubt that NetBeans RCP wins over Eclipse RCP.
Jul 21, 2010 · Lebon Bon Lebon
I think that the above statement should be clarified. Actually, Mercurial and other similar distributed source repositories have been designed exactly to allow people to work concurrently on the same stuff (in contrast with old facilities where it was mandatory to lock a file while working on it, then unlock it when the job was done), postponing the merge as long as one wishes. Clearly, the longer you postpone the merge, the harder it will be, but this is a trade-off one must handle.
While the lock approach is clearly safer, it's also a bottleneck, and Mercurial has been designed to scale up to a very high number of committers.
So, people can work on the same file and their work gets merged at some point. Of course you're right that this scenario is a potential problem and must be handled with care, but if one wanted to avoid it absolutely, perhaps Mercurial is not the best tool for them (even though it offers other features that one might like).
Jul 20, 2010 · Harshad Oak
Jul 16, 2010 · Mr B Loid
Jul 13, 2010 · Sidu Ponnappa C, K,
Hmm... While it's a smart thing, I don't see Apple worrying about it. While Inventor addresses the fact that iStuff users are less and less computer-literate by making development simpler, I don't see tons of people interested in developing an app, even when the task is simpler; 99.9% (I don't know how many decimal digits follow) of people want to use apps, not write them (just consider how tiny the fraction of people who spend a few seconds to give you a star or a comment on the stores is, and imagine how few would be attracted by the idea of working on an app).
Thus, I see Inventor as a way to get more developers and to have people talking about Android in schools and colleges - all good stuff - but not more people buying Android because of it. To sell more Android devices, manufacturers should rather focus on a) cooler graphic design of the devices (just yesterday at my local JUG we were discussing that only the Nexus One comes reasonably close to the coolness of the iPhone design, while other appliances such as the Motorola Droid, though technically excellent, are extremely poor in this respect) and b) more advertisement for the Android brand (outside the USA).
Jul 12, 2010 · Jason Jones
Jun 25, 2010 · Umberto Zappia
Jun 22, 2010 · Micah Wedemeyer
It depends. Clearly, on the cited project I'm doing that. It got stable enough to minimize the burden of merging, while other projects I'm working on are still in "violent" change mode (due to refactorings), either because they are very young or because they're still going through a general reorganization (e.g. conversion from Ant to Maven). In this case, I fear that the very fact of having a merge that could last more than one or two days creates problems, because in the meantime many things will have changed in the default branch.
Theoretically, when everything gets reasonably stable, I should switch to the named-branch mode for most of them.
Jun 21, 2010 · Micah Wedemeyer
Actually I don't see any "pollution" problem with named branches, since you can "close" them. Closing is just a flag, of course, but it means that they no longer appear in the list. I don't see any problem in having them archived somewhere, just to be able to recall later what happened.
For instance, these are active named branches of a project of mine (mostly named on the JIRA issue they relates):
[Mistral:Projects/jrawio/src] fritz% hg branches
default 956:c735e06060c0
2.0 952:b5299fe1c439
3.0 950:91c52089e89e
1.6 756:616fce1a805d
fix-JRW-264 754:fe4c22e3275e
fix-JRW-261 717:176d50455e79
fix-JRW-257 709:35434439c597
fix-jrw-246-1 640:cbf229d7afce
fix-JRW-120 562:a82b7ed21b94
fix-JRW-6 560:eb9031bec74c
fix-JRW-162-and-JRW-194 367:e75735d2055c
The above is just a filtered list of named branches. The totality of them can be seen here:
[Mistral:Projects/jrawio/src] fritz% hg branches --closed
default 956:c735e06060c0
2.0 952:b5299fe1c439
3.0 950:91c52089e89e
1.6 756:616fce1a805d
fix-JRW-264 754:fe4c22e3275e
fix-JRW-261 717:176d50455e79
fix-JRW-257 709:35434439c597
fix-jrw-246-1 640:cbf229d7afce
fix-JRW-120 562:a82b7ed21b94
fix-JRW-6 560:eb9031bec74c
fix-JRW-162-and-JRW-194 367:e75735d2055c
fix-JRW-276 909:ce254969283b (closed)
fix-JRW-275 892:f80b6551b76d (closed)
fix-JRW-236-1 683:9e216f85561e (closed)
fix-JRW-240-1 668:ff403f3cc946 (closed)
fix-JRW-230-1 660:e93885f2ae9b (closed)
fix-JRW-203-1 632:4ce962bd50dc (closed)
nef-fixes-1 586:ab13ced986d3 (closed)
fix-JRW-187-1 408:1a0cbede4cb8 (closed)
src 286:d9692def01a2 (closed)
Jun 18, 2010 · Fabrizio Giudici
Jun 18, 2010 · Pavel Chuchuva
Jun 16, 2010 · Fabrizio Giudici
Jun 12, 2010 · Tony Thomas
Jun 09, 2010 · Mr B Loid
Most of the good things have already been said. But I can add one: think incrementally. Thank heavens, the NetBeans Platform is Swing based, like your existing application. This means that you don't need to learn the whole tremendous bag of Good Things the platform has at once: you can start from a few things, integrate them into your application, and then incrementally replace your code with the proper NetBeans APIs. At the same time, slowly but steadily refactor your code, splitting it into properly designed modules. If you are new to module design, the Platform will teach you a lot.
Jun 07, 2010 · Mr B Loid
If they lived in my country, they'd write integration tests in assembler rather than doing their taxes.
Jun 02, 2010 · Tony Thomas
May 27, 2010 · Olivier Dangréaux
No problem with the heresy... :-) I myself don't go 100% TDD (more like 70%, probably). I agree that the important thing is to have tests, even if they didn't come from TDD. It's all a matter of trade-offs. I'd like to stress, in any case, that especially when you want to trade off some quality for speed - that is, when speed is important for the reasons quoted by Avi - the risk is that tests-done-later fall victim to lack of time or money. In that scenario you go fast to market, get some ROI from the first deployments, and then the project dies young because it soon gets out of control. TDD has the advantage that, since tests are written first, they won't suffer too much from budget cuts.
In any case, we're all saying a very savvy thing: "pure" approaches exist mainly at conferences; in the real world there are many forces driving you, and you must find the proper mix.
May 27, 2010 · Olivier Dangréaux
I agree, but with some remarks. The point is that "good code" isn't a meaningful notion if you don't put it in context. In the context of Uncle Bob it means code of excellent quality; in the context of normal people striving to improve while delivering, and dealing with all the everyday mess, it means another thing. The "optimal" code isn't necessarily the fastest thing to write initially, since people are often learning how to write better code while working on a project.
My point is: strive first to write the best code you can afford, but above all go TDD. If you have tests and initially suboptimal code, at least you're fulfilling your targets. Having tests, you can afford to refactor the code continuously and improve it later.
May 26, 2010 · Mr B Loid
Thanks for the feedback, which is very appreciated. Yes, indeed I'm figuring out how to create specific blueBill Mobile plugins (they could probably appear next week or the week after) and have a tested ecosystem, as you said.
The idea of real-time, geo-based queries is part of the original concept of blueBill Mobile. Indeed, the JavaFX version has a prototype of it, as it can connect to a server where other birders can share observations in real time and even share their current position (sort of Google Friends for birders). If you search for blueBill Mobile JavaFX at my blog there should be a screencast showing a simulation of the feature. It dates back a year; then the stall of JavaFX prevented me from "hitting the shelves". I'll also start talking with people who already manage database services about observations - there are a number both in Europe and the USA. By this August there will be a lot more beef in blueBill Mobile.
And please let us know when your mushroom application is available. :-)
May 22, 2010 · Mr B Loid
BTW, guys, do you know whether it's possible, by configuration or command line, to prevent Maven from prepending the project name to the SCM tag? I get tags such as foo-1.2.3, while I'd like to have 1.2.3 alone.
Thanks.
May 21, 2010 · Mr B Loid
You're right, it's annoying and indeed a logic flaw. With Mercurial (or other distributed systems) I've found a way to fool it: it pushes to a local repository, which I later sync to the real one only if I decide to "approve" the release. In the same way, I make Maven publish artifacts to a local repository, so I can produce a "real" release that is still on my computer. If everything is ok, I can later push all the stuff (source changes and artifacts) to the network.
See http://weblogs.java.net/blog/fabriziogiudici/archive/2009/10/29/fixing-two-problems-maven-mercurial-hudson
"It ignores -Dmaven.test.skip=true"
I think you just need to put that in the configuration of the maven-release-plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<configuration>
<preparationGoals>clean install verify</preparationGoals>
<goals>clean install javadoc:javadoc assembly:assembly deploy</goals>
<arguments>-Dmaven.test.skip=true</arguments>
</configuration>
</plugin>
That's because the release plugin just forks another Maven process; with the configuration above you can control the forked build.
May 20, 2010 · Mr B Loid
Being just a graduate and supposedly young, perhaps he still has a chance to change his job... ;-)
Seriously, I second the above and especially Martjin's advice about joining an open source project.
May 17, 2010 · Riyad Kalla
May 17, 2010 · Riyad Kalla
I've got some extra code that works with Intents, but it's experimental (and probably unneeded for my current requirements). The pattern was born with the idea of solving the problem of direct coupling, but clearly the smart thing it does is decouple navigation decisions - which could end up in Intents as well.
I'm thinking of something similar to what you described, i.e. the capability of extending blueBill with "plugins" that would be delivered as extra applications; for instance, a bunch of media providers that would provide multiple documents available on the internet (or in local storage) about a certain bird species. This would work with Intents, of course - either letting the user pick the desired one, or even calling all the found providers sequentially and aggregating the results. I've got a tough week, but I'll probably work on that next weekend.
May 14, 2010 · Nikita Ivanov
@Alex, of course I agree: things that aren't exposed APIs or high-level stuff aren't worth wasting too much time on.
May 13, 2010 · Nikita Ivanov
May 12, 2010 · Tony Thomas
Session Title: Efficient development of large NetBeans Platform applications with Maven
May 04, 2010 · Smackie Chan
May 04, 2010 · Patrick Wolf
Good point, the need for personalized names of places is another reason for having one's own location manager. Yes, these are not errors but missing features, and the fact that we have Java 5 makes it easier to use existing stuff, even though I think that for a custom location database SQLite is really needed.
Apr 30, 2010 · Lebon Bon Lebon
If it's verbose, it should suffice to drop the redundant parts :-) So:
"How do I pass data between Activities/Services within a single application?" There's no specific infrastructure in Android (for this purpose).
Then the rest of the article explains how you can do on your own.
Now, if you are asking what an Activity/Service is, you won't find an answer here, because I'm not writing Android tutorials for beginners - I think that there are a lot already available.
Apr 28, 2010 · Rajneesh Garg
@Rob, unfortunately I think that the XMLVM stuff is precisely what Apple forbids with the 4.0.
Generally speaking, it's nice to see a dual boot on a phone 8-) but I think it's only a geeky thing. I mean, less than 0.01% of common customers would do that - also voiding their warranty - and that can't move the market. Android has got to beat the iPhone on its own. :-)
I must say that, even with some criticism (see my previous post and others to come soon), I feel ok with Android. In about a week of work in my free time (ok, with a full weekend in the middle) I've been able to roll out a version of blueBill Mobile which delivers (I mean, it provides meaningful value to users). This is my first week with Android and I'm also experimenting with different design strategies, so it's really productive.
Apr 21, 2010 · Tony Thomas
I think that the enum thing could have been solved anyway. Google could define two overloaded versions of findViewById(), one accepting an enum and one accepting an int. Then, enums used as API constants could have a special annotation marking them for the special treatment I'm going to describe, and a property getter such as getCode() that returns the corresponding int constant. In the end, since Android translates bytecode into Dalvik, a bytecode manipulator could replace every findViewById(enum) with the corresponding findViewById(int), inlining the getCode() extraction. In this way, ranting people like me would be happy, and the generated Dalvik code would be just the same as today, with zero impact at runtime (at least for the two points mentioned by Artur) and a very slight overhead during compilation. BTW, one could even avoid enums in this case (after all, the real usage is getting the getCode() value, and we don't need ordinal(), the static values(), etc.): just a specific generified Key object wrapping an integer code and avoiding the cast - the special annotation would just instruct the code manipulator to do the trick. Do you see any problem with this approach?
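A minimal sketch of that generified Key idea (all names here - Key, Screen, the findViewById overload - are hypothetical stand-ins for illustration, not real Android APIs):

```java
// Hypothetical type-safe wrapper around an int resource id. A bytecode
// post-processor could inline getCode() so the runtime cost would be
// identical to the plain-int overload.
final class Key<T> {
    private final int code;
    private Key(int code) { this.code = code; }
    static <T> Key<T> of(int code) { return new Key<T>(code); }
    int getCode() { return code; }
}

class Screen {
    // Existing int-based lookup (stand-in for Android's findViewById).
    Object findViewById(int id) { return "view#" + id; }

    // Type-safe overload: no cast needed at the call site.
    @SuppressWarnings("unchecked")
    <T> T findViewById(Key<T> key) { return (T) findViewById(key.getCode()); }
}
```

Usage would look like `String v = screen.findViewById(key);` with no cast, while the compiled output could be rewritten to the int-based call.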
For what concerns the aspect-via-annotations etc, I'm keeping very generic, but I still think that some solution that can be entirely applied during compilation (i.e. with an annotation processor) could work. This could be done by anybody of us, since it doesn't require changes to the Android runtime. I could work on this after I have matured more experience with the APIs and figured out how I'd like to see the code.
As per the ranting API, yes, I'm keeping the right of ranting back as I'm a happy Swing user :-) Of course I reckon that the Swing APIs are in large parts obsolete as per 2010 standards, but I'm just applying the same bashing standard to Android.
Glad to know about the Eclipse UFaceKit and going to have a look at it - in particular, if they provide JavaBean-compatible wrappers, BetterBeansBinding should work too.
Of course, it's clearly stated (even in the title, thanks to our magnificent editors here at DZone) that this is a newbie perspective, with all the limits that implies. I could change my mind on some parts with time, and I'll let you know.
Apr 20, 2010 · Tony Thomas
Apr 16, 2010 · Mr B Loid
Apr 02, 2010 · Lebon Bon Lebon
Mar 29, 2010 · Mr B Loid
Mar 26, 2010 · Mr B Loid
Mar 14, 2010 · Mr B Loid
Mar 09, 2010 · Mr B Loid
Mar 08, 2010 · PHP Watcher
I was thrown into a tornado of thoughts when I read the bad news. The first, obvious point is that you don't expect a young man in good health to pass away - especially one that you know. Accidents, killings, quakes and other bad things mostly seem to involve people that you don't know. Indeed, in the past I lost dear ones in their youth, but you always think that those dramas are an exception and won't happen again.
When I was able to sort my thoughts out again, I focused on a specific episode with Felipe. Last year, Felipe was generous enough to host some Hudson jobs of mine on his server - it was a very important help for me, since in a few weeks I found myself unable to properly run CI on my old server, and it took months to find a better solution.
In exchange for his favour, when I went to Zurich for the OSGi DevCon I brought Felipe a couple of bottles of Italian wine - in turn he invited me to a dinner at his home. I wasn't able to pay the visit, because that day I felt sick, in the way that embarrasses you when you visit somebody else's home.
Indeed, I pay a lot of attention to old-style human relationships, especially in a world where many things have gone "virtual" - something I don't like at all - technology is cold and empty without the human touch (and dumb too - hell, how I hate that blogs and forum posts are now attaching my smiling photo in such sad circumstances).
But I wasn't too upset about that missed opportunity, since I had met Felipe at many conferences and that would happen again; and I planned to be back in Zurich sooner or later, perhaps for Jazoon 2010, in a few months.
I couldn't imagine that it would be too late. Now I regret that missed dinner so much. This reminds us that we aren't fully in control of our lives.
Mar 08, 2010 · Mr B Loid
I think that this post is good when properly contextualized. As OSGi and, generally speaking, component frameworks spread more and more, and get known by more and more people, the risk is that they are perceived as a "silver bullet". It has happened many times in the past, and it will happen again with other technologies. I understand Adam's post as a warning that OSGi is just a facilitator: it must be introduced into a project where the best practices already work. Being powerful, OSGi is also more dangerous if not well understood.
Of course people with OSGi experience know these facts pretty well. But they aren't the intended target of this post, AFAIU.
Mar 07, 2010 · Guy Davis
Feb 24, 2010 · Ariejan de Vroom
Feb 16, 2010 · Lebon Bon Lebon
"Linux based mobile platforms include Android and Palm WebOS, as well as SymbianOS, BlackBerry and Windows Mobile"
I'd say that there's a readability problem in that sentence (SymbianOS and Windows Mobile, and probably BlackBerry AFAIK, aren't Linux based).
Feb 05, 2010 · Ian Ozsvald
So now, as a regular practice, he always put aside a nice block of memory to free up when it's really needed.
I think I possibly tried something similar in the very early days of developing with J2ME, for evident reasons. Indeed, this is sort of a pattern for many kinds of problems, such as "let's put some money in an account that we don't use, so we live as if it didn't exist, and use it only in case of need". I see a big problem: unless you completely forget that money (in which case you're losing it), you - perhaps unwillingly - still count on it in some way.
Unless you're the only one who knows about that emergency resource and keep the team unaware of it; but that seems inapplicable in most of today's agile processes, where honesty and sharing are among the key values.
Jan 28, 2010 · Jimbo Maclean
Jan 28, 2010 · Darren Barefoot
Jan 11, 2010 · Mr B Loid
Jan 01, 2010 · Daniel Spiewak
Dec 04, 2009 · Gerd Storm
Dec 02, 2009 · Fabrizio Giudici
Nov 27, 2009 · Mr B Loid
Tim Boudreau pointed out that Swing components must be created in the EDT thread. Since we're talking of a Pluggable TopComponent, you'll discover that the Lookup.forPath() method is invoked from the EDT thread. But in order to make BeanFactory as reusable as possible, I've patched it so that line 34 has been replaced by the invocation of this method:
If an instance of Exception is returned, it is thrown.
Nov 26, 2009 · Mr B Loid
Nov 22, 2009 · Mr B Loid
Nov 19, 2009 · Gerd Storm
Nov 16, 2009 · Mr B Loid
Nov 15, 2009 · Gerd Storm
"nor the technology platform (both are equal)"
Indeed, there's no such thing as a JDeveloper Platform. And the NetBeans Platform has plenty of customers. This means that it can survive on its own, and it would be a profitable market segment for Oracle.
Nov 14, 2009 · Gerd Storm
I don't believe Oracle will focus primarily on Eclipse. It would be foolish: they are targeting IBM as their primary competitor, so I don't think it would be smart to pick the competitor's primary developer tool. I believe they will support Eclipse as they do now, but not more. Jacek, if you don't see NetBeans developers, you aren't looking around a lot ;-) We are smaller than the Eclipse community, but relevant. Just talk with people at a conference and you'll find NetBeans users.
Nov 03, 2009 · Mr B Loid
Oct 28, 2009 · Mr B Loid
From one perspective, every acquisition is partially anticompetitive. In any case, the alternative was IBM (the "owner" of the Eclipse Foundation) or failure - and failure is anticompetitive by its very nature, as things mostly vanish into thin air. So, if the acquisition doesn't take place, it will be much worse.
For the rest, I think we're discussing on a weak basis. Basically the document published today says nothing, or just a few things. I believe that GlassFish is here to stay, because JEE is open sourced and needs a Reference Implementation, thus justifying the existence of two separate products. About NetBeans, Oracle didn't say anything, as "is expected to provide..." doesn't make clear who will fund this "expectation". I believe that for NetBeans and other products there is still no decision.
Oct 28, 2009 · R Gin
Oct 21, 2009 · Mr B Loid
Oct 19, 2009 · Mr B Loid
Just to add some discussion points, these are the issues to be solved:
Oct 18, 2009 · Mr B Loid
I agree with Geertjan and others. It sounds as if JDeveloper (and, AFAIU, IDEA) can at best be defined as a "platform for development tools", but the high number of industrial applications cited by Geertjan, Toni and others demonstrates that people need a real, general-purpose platform.
I think Oracle would do a smart thing if they invested in the NetBeans Platform, making it a profitable asset, as there's evidence of an existing market for it.
Oct 16, 2009 · Mr B Loid
The problem with JOGL and NetBeans is name clashes in many combinations - for instance, Solaris and Linux share the same library name, as do the 32- and 64-bit versions on many platforms (not sure, but probably all). The best solution I've recently adopted is to use the NetBeans JOGL Pack - it includes all the existing native libraries and at runtime copies the one for the current platform to where NetBeans expects to find it.
Sep 21, 2009 · Brambo Brunel
Sep 17, 2009 · Mr B Loid
I totally disagree. Frankly, I find the "must catch and rethrow" argument, repeated for years, totally invalid.
First, because checked exceptions are a tool, and you decide when to use a checked or an unchecked exception. If there's a misuse of checked exceptions in the Java runtime, it's a problem of the runtime, not of the language feature.
Second, often the catch-and-rethrow MUST BE DONE because an exception changes meaning. What I catch as an IOException at one level usually means another type of failure at the business level. So catch-and-rethrow is in this case a feature that a good design should embrace, not something I'm compelled to do because of the language.
One of the most common things I see in customers' code is something like: an Applet calls, via Spring Remoting, a Remote Service, which calls Hibernate. Of course there's no hibernate.jar in the Applet, as it shouldn't be there. Now, it happens that an operation on the database fails and throws a HibernateException subclass (which is a RuntimeException, so nobody catches it); it gets serialized to the Applet and - ta-dah - you get a wonderful ClassNotFoundException, and the customer can't understand what's happening. Letting exceptions filter up to the top level without any check is mostly an error, especially in multi-tier applications.
Third. You're all thinking of good applications with good test coverage, where all my cases for checked exceptions could be replaced by very good tests. Now, please do a reality check. We are in the minority that achieves good coverage, within the minority of people writing decent tests at all. That's bad, but that's life. This is not going to change in any way, neither in the short nor in the medium term (I don't do long-term forecasts on principle). In this world, having the compiler provide some more checks by static means is a very good thing.
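To illustrate the second point, here is a small sketch of catch-and-rethrow as exception translation (ContactLoadException and ContactLoader are invented names for illustration, not from any real API):

```java
import java.io.IOException;

// Invented business-level checked exception for illustration.
class ContactLoadException extends Exception {
    ContactLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

class ContactLoader {
    // An IOException at the I/O level means "contacts unavailable" at the
    // business level, so we translate it (keeping the cause) instead of
    // letting a low-level exception leak across the tier boundary.
    String load(boolean simulateDiskError) throws ContactLoadException {
        try {
            if (simulateDiskError) {
                throw new IOException("disk error");
            }
            return "contacts";
        } catch (IOException e) {
            throw new ContactLoadException("Cannot load contacts", e);
        }
    }
}
```

The caller now handles one meaningful exception type, and the compiler enforces that the translation point exists.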
Sep 11, 2009 · James Sugrue
Sep 07, 2009 · Esther Schindler
Sep 06, 2009 · Geertjan Wielenga
Sep 06, 2009 · Daniel Ostermeier
Aug 31, 2009 · Vladimir Rancic
Aug 31, 2009 · Mr B Loid
Could be the Oracle buyout; if Apple were a decent provider, it would have communicated it: "We've removed ZFS while waiting for the Oracle buyout of Sun to complete." Less than 70 bytes; it doesn't seem too complex. That's part of the reason I consider Apple totally unreliable.
If we want to talk about stuff that is lightyears behind, what about HFS+?
Aug 30, 2009 · Daniel Ostermeier
Right: in fact this example is a functional test.
Jul 20, 2009 · Mr B Loid
Alexandro,
the Central Maven Repository doesn't accept snapshots - at the moment, many of my projects are still in the snapshot stage, so I can't publish them (yet). Also, not everybody wants to publish to the central repository - while I have no objections for generic projects such as BetterBeansBinding, there's a cluster of specific projects of mine for which I prefer to keep my own repo.
As a last point, which I still have to study, the Central Repo forces you to set the group id to "com.kenai", which some people could dislike. Indeed, to me it doesn't make much sense: if you later change the source repository, you have to change the group id and break continuity.
What is the problem with multiple repos, BTW?
Jul 13, 2009 · Mr B Loid
Jul 02, 2009 · Mr B Loid
Jun 13, 2009 · Lowell Heddings
I have to correct my previous comment: the point that I intended to quote is the whole thing (I missed the second statement):
Jun 13, 2009 · Lowell Heddings
Thanks Thomas for commenting here.
Which is my point too. And as jamesjames pointed out, the core concept is not new; the new wrapper classes around variables (called Locations in JavaFX) are a "surrogate" for the lack of first-class properties in Java. Other frameworks, such as Wicket, have similar approaches for their binding.
Jun 12, 2009 · Lowell Heddings
Jun 08, 2009 · Mr B Loid
I think that it would make sense to *discuss* the creation of a JavaFX zone, but it would be mostly for ... promoting JavaFX on its own.
For the rest, JavaFX is the only non-Java language that I'm using on the Java platform. I don't have any particular interest in Groovy, JRuby or Scala; nevertheless I regularly read articles about that stuff, generally because looking at things from different perspectives is inspiring; because, who knows, sooner or later I could change my mind (unlikely); or because I'd gain a better awareness when motivating my decision to stick with Java (likely).
My point is that it's important that the title (or the first lines) describes the article content. In this case it does and if I'm not in the mood of reading about Groovy I don't find it hard to skip it.
Jun 07, 2009 · Fabrizio Giudici
I've posted here a screencast of the application running with about 1,000 elements. The screencast is neither cut nor accelerated; you can see the real speed on a MacBook Pro.
For what concerns my original demo, the only problem I see with JavaFX 1.2 is that some parts of the layout are screwed up.
Jun 07, 2009 · Richard Carr
Can you elaborate on where the promise of WORA is violated (more than it already happens, as per the comment by Jess Holle)? For instance, I've just written code that runs in the mobile emulator (and soon on a real phone, as soon as I grab it) and as an applet. With plain Java, this is possible only for smaller things, supposing you're using some JME library that emulates Swing.
You're right, it doesn't. You could construct sequences on the fly with very little syntactic overhead (e.g. foobar([a,b,c])), but sequences are heavy from what I understand (unless some specific compiler optimization kicks in if you don't use binding etc.). What I'm resorting to is the same trick as SLF4J, which doesn't use varargs in order to keep compatibility with older Java versions: overloading each method with 3/4 variants having 1..4 arguments.
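A sketch of that SLF4J-style trick in plain Java (MiniLog is a made-up class for illustration; real SLF4J's internals differ):

```java
// Fixed-arity overloads instead of varargs: no argument array (or, in
// JavaFX, sequence) needs to be constructed at each call site, and the
// API stays usable on runtimes without varargs support.
class MiniLog {
    String format(String msg, Object a1) {
        // Replace the first "{}" placeholder with the argument.
        return msg.replaceFirst("\\{\\}", String.valueOf(a1));
    }
    String format(String msg, Object a1, Object a2) {
        return format(format(msg, a1), a2);
    }
    String format(String msg, Object a1, Object a2, Object a3) {
        return format(format(msg, a1, a2), a3);
    }
}
```

A call such as `log.format("{} + {}", 1, 2)` picks the two-argument overload statically, with only autoboxing as overhead.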
Jun 05, 2009 · Mr B Loid
@Dar Var: I believe JavaFX support for 6.7 will come later in the form of a plugin. There have been a few incompatible changes in the IDE, so the plugin for 6.5 won't work in 6.7.
@PatrykRy: code formatting has been temporarily disabled because of a bug. I must say I probably won't use it: with JavaFX I'm finding that I format things in different ways depending on the context. For instance, with a large number of attributes I do:
while where a single attribute is present:
I've seen such a high degree of formatting variation in my code that I don't think an automated formatter can handle it. Maybe it's just because I've still to find a style for my JavaFX coding?
Jun 04, 2009 · Fabrizio Giudici
Jun 04, 2009 · Brian Reindel
IMHO, Maven apart, we have a nasty compiler bug, which is still there in 1.2 and seems to be specific to the Mobile profile. I've set up a test case: http://weblogs.java.net/blog/fabriziogiudici/archive/2009/06/javafx_12_still.html
Jun 03, 2009 · Alessandro Coppe
Jun 02, 2009 · Fabrizio Giudici
Krzysztof, no, I didn't see problems with that handful of names. BTW, I have another application where a similar trick is used and it works with about 1,000 items, with only a slightly noticeable delay in some cases. Of course, with such a big number of items you'd probably have to do some smarter filtering rather than using subsequences.
Jun 01, 2009 · Fabrizio Giudici
May 31, 2009 · Brian Reindel
May 31, 2009 · Mr B Loid
Excuse the frankness, but it's total nonsense. :-)
First, when Java was released, there was no other way. It was really interpreted at the time, and the bad performance would have killed it. This closes the discussion about the "original sin": there was no feasible alternative.
The interesting discussion is about today. In the quoted blog I don't see any "proof" that the performance would be the same: only arrays have been discussed, but nothing about indices into arrays, or math, or number crunching. The most relevant comment above is by "martin", who reminds us that people use Java for a number of things, including 2D, 3D and I/O - I'd include math (people do image processing in Java too). Not to mention that Java ME runs on billions of mobile devices, where performance would still be a big problem for many of them today.
This kind of recurring discussion seems to forget that Java's success is due to the fact that it is a general-purpose language; in addition, many people fail to understand that "general purpose" means a wider range of things than what they usually do. For more specific domains (which, of course, can be relevant as well) there are other languages and Domain Specific Languages.
As a final point, I don't see why people needing pure objects everywhere don't just forget about "int" and use "Integer". With autoboxing the code is also clean. For instance, whenever I need nullable types (e.g. for the database) I use Integer. Where's the issue?
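A tiny sketch of that Integer-for-nullable point (Row is an invented example class):

```java
// Using Integer where null is meaningful (e.g. a nullable DB column);
// autoboxing keeps the code as clean as with a plain int.
class Row {
    Integer age; // null means "unknown" - impossible with a primitive int

    int ageOrDefault(int fallback) {
        return (age != null) ? age : fallback; // auto-unboxing when non-null
    }
}
```

Assigning `row.age = 42;` autoboxes transparently, so the nullable field costs almost nothing in readability.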
May 18, 2009 · Mr B Loid
May 15, 2009 · Mr B Loid
May 15, 2009 · Fabrizio Giudici
May 14, 2009 · Mr B Loid
For curious people:
1. First, Anton Epple (NBDT) will talk about NetBeans and OSGi at OSGi DevCon: http://www.osgi.org/DevConEurope2009/Speakers#Epple
2. Second, I might be able to demonstrate something at Community One http://developers.sun.com/events/communityone/2009/west/agenda.jsp, "NetBeans Platform (*) + Wicket = Reusable Components and Modular Web Apps [ESP 302]". I've used the conditional because the talk is about using the NetBeans Platform on the server side (everything is already working) and I'm right now trying to OSGify it - among the commenters of this thread, I'm certainly the least experienced with OSGi.
(*) I've just seen that the title is wrong! Not NetBeans IDE, but NetBeans Platform!
May 13, 2009 · Fabrizio Giudici
The fact that you do things "better" in JavaFX than in Java is subjective at this time, and related to a number of things. For me, I don't see compelling reasons for turning my Java desktop stuff into JavaFX _now_. If I had some compelling graphics requirements, to the point of throwing a graphic designer into the team, JavaFX would make the difference. I expect the two perspectives to get closer and closer in the next few years.
Objectively, JavaFX is more concise, and binding is the key trick. Try to write the same example I'm talking about in Java. Of course, the same functional binding could be introduced in Java (thinking of it...), and thus even Java programs could be much shorter. But in Java binding can't be as natural as it is in JavaFX.
May 13, 2009 · Fabrizio Giudici
May 12, 2009 · Fabrizio Giudici
It's one of the things I'll try in the future. For now, the only thing I've done is to run javap on a compiled JavaFX value object, and - as expected - I saw it's very different from a regular JavaBean. Basically every attribute is an independent object, I presume with its own get()/set() and binding support. But there is also a lot of other stuff that I don't understand at a glance.
Apr 20, 2009 · Daniel Spiewak
You know, I don't like to make predictions. But Aljoscha Rittner, fellow NBTDer, just pointed me to this document: http://www.oracle.com/sun/sun-faq.pdf
There are explicit statements about MySQL and hardware stuff:
There's also another generic statement:
Let's see. It sounds almost too good to be true :-)
Apr 03, 2009 · admin
Mar 23, 2009 · Thierry Lefort
Mar 20, 2009 · Mr B Loid
Karsten, you're right about Sun's bad condition and the need for a financial operation to save it. So, perhaps being bought by IBM is better than going bankrupt. I was among the first to be sure that Sun would be bought very soon. Frankly, IBM is only the second worst buyer after Microsoft. Yes, big customers that are accustomed to elephantine, locked-in products such as those in the IBM portfolio might feel more comfortable, because they are used to wasting money on expensive consulting and integration. All the rest of the world probably doesn't feel better.
Mar 19, 2009 · Mr B Loid
Yes, Jacek, I have the same setup as Jaroslav and the fonts are pretty neat. I presume it depends largely on the distro / Java version / whatever combination.
In any case, as has been said, NetBeans is not going to disappear. In the worst case IBM can drop funding, but it can't make NetBeans vanish since it is GPL. There are people willing to maintain it in case funding is dropped. I bet it's the same with GlassFish and other products.
Mar 19, 2009 · Mr B Loid
Jacek, while I personally agree with you on the low relevance of scripting languages etc. (but I know that a lot of people see things differently), I don't believe corporations "want to use Java". I mean, they want to use a good tool. If they think Java is a good tool, they use it. In this case, they don't need a lot of extra stuff in the language (in fact, all my main customers use Java, are fine with it, and see the everlasting debate about closures and such as a waste of time). Those who are not satisfied with Java will use a different thing.
Mar 18, 2009 · Mr B Loid
Good point about the split, Umberto. There were rumors about it a few months ago, but you know, you can't guess which rumors are true in these cases.
Jacek, as far as NetBeans is concerned, it's not only the IDE. I mean - I obviously think it's much better than Eclipse; if the new NetBeans hadn't been so successful and renewed, I probably would have stayed with Eclipse - or maybe migrated to IDEA, since Eclipse proved to be very troubled during the PPC-Intel Mac OS X switch (not counting compatibility problems with plugins that occurred in the past). In any case, given what NetBeans is today, I would never switch back to Eclipse. A customer of mine told me today during the lunch break that there would be a "very bad mood" in his company if NetBeans went away, since they are almost addicted to it.
But what I'm really fond of is the NetBeans _Platform_ - I never liked the Eclipse Platform after having evaluated it some time ago.
Mar 18, 2009 · Mr B Loid
Personally I see it as a disaster, agreeing with the pessimistic responses above. And I think most of Sun's counterparts are much better than IBM's. I hope that the US anti-trust authorities won't allow this.
OTOH, all the mentioned projects are open source and have enough momentum for the community to take over. Should some bad guy decide to cancel NetBeans, I'd be among the first volunteers to keep it going.
Mar 05, 2009 · Paul
Hmm... it seems I didn't explain myself properly. Looking e.g. at the FEST description I see that the typical features of existing Swing-testing frameworks are:
I agree that it's nonsense to re-write new code for the above features - the fact that blueMarine has specific code for those is just a legacy and will disappear in the near future. Since blueMarine is a NetBeans Platform application, I'll use the specific support that the Platform provides.
My critical point is - still quoting FEST documentation - "testing long-duration tasks":
Paraphrasing the above example, I've got some scenarios where the "main" frame could appear by itself, as a consequence of other actions started in the past; thus, its mere appearance is not a good signal that I can proceed with the test. I must be sure that the appearance of "main" is strictly a consequence of the click on the "login" button. Unfortunately there are no distinguishing properties among the many appearances of "main" that I could use to discriminate them.
When I looked at FEST, I found that "using(loginDialog.robot)" promising, because it sounded like the "tagging" idea I have in mind; maybe it means that FEST will ignore any appearance of a "main" frame that has not been triggered by the "login" button click?
But the following explanation:
seems to tell me that it's another matter; for sure that "one and only instance" clashes with my needs, as my test could launch multiple activities at the same time - still paraphrasing the previous example, think of this:
Of course this doesn't make much sense for a login window, but I'm just reusing the same example for clarity.
Does FEST or any other Swing testing framework support this scenario?
Mar 04, 2009 · Mr B Loid
Mar 04, 2009 · Paul
Hi Tim, I'll have a look at the book. In any case, I fully agree with you on the two ingredients: I don't have problems in those areas (finding components and asserting on Swing components); it's that in real-world applications the UI is likely to be updated frequently, and the tricky part is to find out which update matches the testing stimulus you submitted. As far as I've read so far, books and tutorials cover simpler cases.
* Update: I've bought the PDF version of your book and, quickly browsing it, I found at page 187 "The Unit Test for waitForNamedThreadToFinish()". While I agree on the point of giving names to threads, which I usually do at least for a more understandable log, this can't be a solution for every case. Sometimes the thread is not controlled by you, but is instead created by a 3rd party library; above all, in the case I've described in my post you have the sequence "EDT -> Thread -> EDT" (which is typical), so the completion of the computing sequence happens in the EDT and you can't just wait for a named thread. With tags the thing seems to work.
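The "tagging" idea could be sketched roughly like this: each stimulus submitted by the test carries a tag, and the test waits for the completion of work carrying that exact tag, regardless of which thread (worker or EDT) finishes it. This is my own illustration of the concept, not FEST's API nor the book's helper.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Tracks tagged activities: started() registers a tag, finished() signals
// completion (possibly from another thread), await() blocks the test.
public class TagTracker {
    private final Map<String, CountDownLatch> latches = new ConcurrentHashMap<>();

    public void started(String tag) {
        latches.put(tag, new CountDownLatch(1));
    }

    public void finished(String tag) {
        CountDownLatch latch = latches.get(tag);
        if (latch != null) latch.countDown();
    }

    /** Returns true if the tagged activity completed within the timeout. */
    public boolean await(String tag, long millis) throws InterruptedException {
        return latches.get(tag).await(millis, TimeUnit.MILLISECONDS);
    }
}
```

The test would tag the "login" click, the production code (or a test hook) would call finished() at the end of the EDT callback, and a "main" frame appearing for any other reason would simply not carry the tag.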
Feb 09, 2009 · Ryan Baldwin
Feb 09, 2009 · Geertjan Wielenga
Feb 08, 2009 · Mr B Loid
Feb 07, 2009 · osman pala
Jan 28, 2009 · Robert Evans
A better UML diagram for the idiom could be this:
Jan 19, 2009 · Lowell Heddings
Jan 15, 2009 · Robert Evans
Jan 11, 2009 · Lowell Heddings
Hi Harris.
At the moment, the great advantage concerns the tier of services and models, that is the business tier. It is something that is best clarified by examples, and I'll post some in the future here on DZone (also with another two "series" of posts about NetBeans Platform Idioms etc...); for the moment, take this small one: I have a Metadata infrastructure that allows extracting metadata from media (e.g. photos) and possibly storing it in a database, where it can be searched for. Every kind of metadata (EXIF, IPTC, whatever) is implemented by separate modules, and persistence in the database is enabled by just adding specific modules, without requiring any configuration. This means that I can easily satisfy different needs (blueMarine itself, the blueOcean base, blueOcean as used by my customer, and hopefully other future customers) by just assembling different sets of modules in specific custom platforms. This has been achieved mostly by means of the Lookup API (and in the future I could use the layer.xml facility more). Most of this stuff could also be used by taking simple .jars as libraries outside the NetBeans Platform; but as the number of configurations increases, it is really important to have the capability of checking compatibilities and dependencies among modules. You could always be safe with good testing, but in any case I appreciate it when a static tool finds / prevents problems as early as possible. Furthermore, having the very same process for two different projects is a big time saver for me.
There are two different uses of the Platform that I'll evaluate soon. The first is the "Event Bus" (based on "Central Lookup" by Wade Chandler) that I talked about a few months ago; in blueMarine it introduces another great deal of decoupling that I don't have yet in the customer's project based on blueOcean. While the Event Bus as-is works fine with a single user (it is a singleton), it must be adapted in the case of concurrency (it should be enough to write a variant based on ThreadLocal). The second is about the use of Nodes for a number of things, including dynamic generation of menus based on the functions that have been dynamically included in the current configuration. This is more sensitive because of the cited potential problem with the AWT Thread, which would be a serious bottleneck on the server side.
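The ThreadLocal variant mentioned above could be sketched like this: instead of one singleton bus shared by everybody, each thread (e.g. each request on the server side) gets its own bus instance. This is only my illustration of the idea; it is not the Central Lookup code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A per-thread event bus: get() returns an instance private to the
// calling thread, so concurrent users never share listener lists.
public class ThreadLocalEventBus {
    private static final ThreadLocal<ThreadLocalEventBus> INSTANCE =
        ThreadLocal.withInitial(ThreadLocalEventBus::new);

    private final List<Consumer<Object>> listeners = new ArrayList<>();

    public static ThreadLocalEventBus get() { return INSTANCE.get(); }

    public void subscribe(Consumer<Object> listener) { listeners.add(listener); }

    public void publish(Object event) {
        for (Consumer<Object> listener : listeners) listener.accept(event);
    }
}
```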
Dec 05, 2008 · Joseph fouad
Oct 22, 2008 · Satish Talim
Hmm.... maybe I'm answering in too much of a hurry and I didn't understand well... but AFAIK when you have compiled a class C1 that is annotated with A, you can put C1 in the compile (and run) classpath for C2 without the need of putting A in C2's classpath. You only get warnings and A is ignored (as is proper: annotations have a meaning only in a certain context, and in C2's context A is meaningless).
I've just double checked, compiling a sample class X against a JAR whose classes carry javax.persistence annotations (without putting jpa.jar in the compiler classpath):
istral:/tmp> javac -classpath it-tidalwave-catalog.jar X.java
it/tidalwave/catalog/persistence/CategoryPB.class(it/tidalwave/catalog/persistence:CategoryPB.class): warning: Cannot find annotation method 'name()' in type 'javax.persistence.Table': class file for javax.persistence.Table not found
it/tidalwave/catalog/persistence/CategoryPB.class(it/tidalwave/catalog/persistence:CategoryPB.class): warning: Cannot find annotation method 'length()' in type 'javax.persistence.Column': class file for javax.persistence.Column not found
it/tidalwave/catalog/persistence/CategoryPB.class(it/tidalwave/catalog/persistence:CategoryPB.class): warning: Cannot find annotation method 'name()' in type 'javax.persistence.Column'
etc... Just warnings; the compilation is successful.
Oct 18, 2008 · Lebon Bon Lebon
Oct 15, 2008 · Nigel Wong
Hmm, I still don't understand the question :-) So let's go with examples from my use cases.
Case number 1: the extension point is a service: client code needs a service S, thus it must know the interface that describes it, and the interface itself is the name to pass to lookup. Since I must invoke methods on S, I must have prior knowledge of S.
Case number 2: the extension point is a listener, so the original code is not aware of it; the idea is that you can extend an existing functionality by putting a new plugin into the system, so the original code is not aware of the presence of the listener. But the listener would still be described by an interface. The original code looks up that interface, gets a list of implementations (possibly multiple listeners) and calls them. In this scenario, the original code doesn't know whether there are zero extension points or multiple ones. Still, by looking up a class we fall back to the previous case.
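The two cases can be approximated in a few lines with a toy registry: case 1 looks up a single service by its interface; case 2 looks up all registered implementations (the "listeners") without knowing how many there are. This only simulates the shape of the NetBeans Lookup API; it is not the real thing.

```java
import java.util.ArrayList;
import java.util.List;

// A toy stand-in for Lookup: register() adds content, lookup() finds one
// instance by type, lookupAll() finds every instance of a type.
public class MiniLookup {
    private final List<Object> content = new ArrayList<>();

    public void register(Object implementation) { content.add(implementation); }

    /** Case 1: a single service, known interface. */
    public <T> T lookup(Class<T> type) {
        for (Object o : content)
            if (type.isInstance(o)) return type.cast(o);
        return null;
    }

    /** Case 2: zero or more extensions (listeners) of the same interface. */
    public <T> List<T> lookupAll(Class<T> type) {
        List<T> result = new ArrayList<>();
        for (Object o : content)
            if (type.isInstance(o)) result.add(type.cast(o));
        return result;
    }
}
```

In both cases the interface class is the only "name" the client needs, which is the point made above.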
BTW, with the Module API in NetBeans you can actually inspect all the modules you have in the system. At the moment I can't figure out a use case for that (apart from the plugin management facility). Can you give some examples of what you're thinking of?
Christopher: clearly when one has chosen (or has been forced to use) a platform, he sticks with it. But sometimes he can choose; for instance, I evaluated the Eclipse module system in 2005 (at the time Eclipse was my IDE) since my application was plain Java. While the fact that Eclipse is not Swing played a major role in my rejecting it, I also found the module system pretty overcomplicated. After a few months I tried NetBeans, and surely the initial facilitator was Swing compatibility, which made it easier to port my old code; but I really found the module system intuitive and easy to understand and use.
Oct 14, 2008 · Nigel Wong
"The extension point does not define itself: there is no information about which module-public interface is used as an extension point."
Perhaps I've not fully understood the point... Why should I know that by default? It is a good design principle to separate interfaces from implementations, so the only thing I need to know is that there is an extension point, not necessarily where it comes from.
"The lookup name to be used is also undefined."
The lookup name is the class itself: what better name could I imagine? If I need an extension point (which I prefer to call a service), it is because I need to do something with it; that is, I know that I need its interface / abstract class. Why should I also search for another kind of name?
'The extension and other content are mixed in the layer.xml file: hard to determine what extension points the module contribute to.'
This is just a matter of tooling: in the IDE, if you open the "XML Layer" and "XML Layer in context" nodes, you can see the extension points contributed by the current module marked in bold. Indeed, there are a lot of improvements that can be done in tooling (such as automatically generating dependency graphs etc...), but - again - this is a matter of tooling.
The fact that layer.xml can contain other things is very powerful indeed, and you just have to define the proper structure of virtual directories in the file itself.
Sep 07, 2008 · Lebon Bon Lebon
Thank you Diomidis, it's really a neat tool.
Geertjan, my approach is of course very similar to yours (99%); there are only a few differences in some details of the -javadoc-with-packages script, which is for RCP projects and not J2SE ones. The fact that the target name starts with a dash signals that it isn't something you call directly: it's called by the harness when you run ant-javadoc in a module project.
For generating all the javadocs in a suite, I put this in the build.xml of the suite project:
<target name="javadoc" depends="-init">
<subant target="javadoc" buildpath="${modules.sorted}" inheritrefs="true" inheritall="false">
<property name="netbeans.javadoc.dir" value="${basedir}/build/javadoc"/>
</subant>
</target>
And that's it :-) Actually, since Meera and you published that stuff, all the blueMarine javadocs are updated with their UML diagrams on the website, thanks to Hudson.
Aug 10, 2008 · Rushabh Joshi
Well, it's anthropomorphism as well, as the class becomes "human".
Back to the question, I'd answer "yes". My comments in the code are always in third person or imperative, but e.g. when I have troubles and talk with other persons or mailing lists about my problems, I say "I open a connection and..." much more often than "My code opens...". When I review and correct other people's code, I say "You are doing this and that", rather than "Your code is doing...".
I don't know, though, if this is "transference" or just a way to make the communication brief ("I" is shorter than "My code").
After all, when I was taught programming, teachers and books explained algorithms and variables as human procedures and drawers where you get and put things; and even CRC cards and Robustness Analysis in the end rely a lot on anthropomorphized (?) classes, right?
Aug 09, 2008 · Michal Talaga
Aug 09, 2008 · Ates Goral
I don't understand the objection that comments could not be updated as the code evolves. It's the programmer's responsibility to update them, just as it is for tests. Unless you do TDD (in which case tests are updated by definition), I don't think that test code updates itself automatically as the code evolves. Which BTW is another of the very good points about TDD, but not everybody does TDD.
*** edited to add
What I mean is that in any case the programmer has to keep the other artifacts in sync with the code; if he doesn't, he's sloppy, and not only about the comments.
Aug 08, 2008 · Michal Talaga
Yes, the browser thing is really severe, but in this case I still have some hope that Apple will fix it in a few months, when the final JavaFX SDK is available and people start deploying stuff.
PS Is the browser problem only with Safari, or also with Firefox?
Aug 07, 2008 · Ates Goral
I second your opinion, and I'm adding some points. One of the things that bloggers or top programmers / designers / methodologists miss is their assumption that all the other people in the world have got their skills (this applies not only to commenting code, but to the whole process). I mean, my experience is that I write / advise about writing code for others, and these "others" range from teams that are very experienced in Java to others with scarce experience; I'm also in maintenance mode for some projects whose owner company has _no_ experience in Java and won't ever have any, since its focus is elsewhere. It is not unlikely that sooner or later rookies will have to put their hands on the code, not necessarily with a senior supervising (we can discuss how bad this practice is, and it's bad indeed, but this is how the world goes) and I feel I have to make sure that the code is comprehensible to this kind of people as well.
Add to that that I often use my open source projects as pedagogic material (teaching people with a project from the real world is much more interesting than simple and small lab projects, where it's too easy to have everything fit and working) and you understand the need for some extra comments.
Jul 24, 2008 · Cleverton Hentz
Hi Valentin, thanks for trying the code.
How did you pack the PersistenceProvider?
Jul 18, 2008 · admin
Alexander's point is good: if you have a lot of nodes to be updated, you have high costs. What is unclear to me, in this scenario, is how people plan to go on indefinitely with an old version. I mean, we have the end-of-life thing and no more security updates after next Fall. Is this acceptable? This could be ok if e.g. the application we're talking of is getting near its own end of life, say it will be replaced in 2009 by a completely new system. In this case, upgrading is probably an unneeded cost. But if it will live for another 5 years, is it acceptable to stay without security patches?
The whole thing of re-testing and re-tuning after a JVM upgrade clearly has a cost, but it should be part of the planned maintenance costs.
BTW, I'm involved in only one project, dating back a few years, that still runs Java 1.4 code. I've made an assessment and there are some things to do in order to upgrade it to Java 6 - the most serious part is the lack of a good test plan for QA from the customer. In other words, they know they have a stable product now, but they fear introducing instability by upgrading. If they had a good test plan, they wouldn't be so worried; and probably up to now they just didn't want to pay the costs of a specific test session after the upgrade. In any case, the end-of-life thing is probably changing their mind.
Jul 16, 2008 · admin
"I find it pretty amazing that 40% (including those running even older) still had not moved to Java 5 yet. "
Indeed I find it pretty amazing that they are so few; and I am as amazed as you in learning that Java 6 has got nearly a 40% share. Every time the discussion moves to Java 6 (for instance, every time Apple drives people mad by not fully supporting it), there's a crowd of people standing up and saying "Who cares? Most people still use 1.4, nobody uses 1.6". Well, your poll demonstrates that both assertions are false.
Jun 30, 2008 · Stacy Doss
Bruce, of course if you have a specific deployment target, you can go with a specific model. Apart from the cool factor, the iPhone doesn't look like the best option to me either, since its SDK is completely proprietary and you would get into a strong vendor lock-in.
That said, "J2ME is a piece of junk" is not enough to convince us that there are things the iPhone can do and J2ME can't. BTW, I can name one thing that J2ME can do (on many platforms) and the iPhone can't: multitasking. Jobs says it's for the battery life again (I think he should be more imaginative when he needs an excuse), but that's clearly not true, because the competitors can do it. It's clearly about forcing people to develop with a specific model, which relies on network pushes and makes you waste money with your telco. Also the impossibility of distributing my own application through my favourite channels, as I'm forced to go through the Apple iStore, is bad (please, no excuses about security: there are billions of phones with Java support out there and no major security issues when you just go with digital signing).
Sorry, too many constraints. In the XXI century I'm not saying that people necessarily have to go with open source, but at least with reasonably open platforms. Most of the other mobile platforms supported by J2ME are (BTW, Symbian has just been open sourced too).
Jun 30, 2008 · Stacy Doss
Jun 18, 2008 · Stacy Doss
Jun 18, 2008 · Dieter Komendera
Jun 17, 2008 · Dieter Komendera
I agree on that too. But, for instance, I have seen people to whom I've presented both technologies who were much more attracted by EJBs (in spite of my light/medium Spring bias), where "things just work" and you don't have to set up a handful of beans for the TX manager etc... Which is not an argument for me - I mean, the thing is apparently more complex, but once you learn it you make it work forever - but everybody has his own perspective on this.
Jun 17, 2008 · Dieter Komendera
As time passes, I really can't find the right words to participate in these discussions: the point is that EJB and Spring are getting closer and closer (both are POJOs, both need a container, both can be used with configuration files or annotations), so today choosing one over the other seems to be mostly a matter of personal taste. This of course means that Adam is right and EJB 3 is good. Until some time ago, I supposed there was a major difference: with EJB you can do things only in one way, and you get all the services or none in a monolithic fashion, while with Spring you can ask for a single service, e.g. only transaction management, and you can control things at a fine grain (e.g. choose your transaction manager, etc...). One of the things I've most appreciated about Spring is the capability of decorating existing beans to fit your needs. But, AFAIU, GlassFish 3 with its modular structure is going to offer more or less the same thing - am I right?
At the moment I'm going to introduce one of the two technologies in my Rich Client Platform application. My original idea was, and still is, about Spring; but I must confess that Adam is making me think more and more about EJBs... :-)
Jun 17, 2008 · Stacy Doss
Jun 16, 2008 · Trevor Sullivan
Jun 11, 2008 · adam bien
+1, especially for the UUID stuff - but I'd say, at this point, that I'd like to see an even more flexible way, where you can define a "named key generation strategy". You place the name of the strategy in the annotation, and you bind it to an implementation class in a configuration file.
I'd also like to see the capability of merging multiple persistence.xml files referring to the same PU, as in modular applications this seems to be a reasonable requirement (I've recently tweaked some code to implement this feature, but I'd really like to see it out of the box).
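The "named key generation strategy" wish could look roughly like this. Everything here is hypothetical: the annotation, the entity, and the Map standing in for the configuration file are all my own invention, not real JPA API.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Map;
import java.util.UUID;
import java.util.function.Supplier;

// Hypothetical sketch: the entity names a strategy, and the name is bound
// to an implementation elsewhere (a Map stands in for the config file).
public class NamedKeyGeneration {
    @Retention(RetentionPolicy.RUNTIME)
    public @interface GeneratedKey { String strategy(); }

    public static class Person {
        @GeneratedKey(strategy = "uuid") public String id;
    }

    // The "configuration file": strategy names bound to implementations.
    static final Map<String, Supplier<String>> STRATEGIES =
        Map.of("uuid", () -> UUID.randomUUID().toString());

    static String generateId(Class<?> entity) throws Exception {
        GeneratedKey key = entity.getField("id").getAnnotation(GeneratedKey.class);
        return STRATEGIES.get(key.strategy()).get();
    }
}
```

Swapping strategies would then mean changing the binding, not touching any entity.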
Jun 09, 2008 · Cleverton Hentz
May 29, 2008 · Hooman Taherinia
Excellent remark, David; I actually forgot to say that in my application, in fact, I don't use JPA relationships at all. I can live with that since I have a very simple schema of relationships, in which a single entity is related to entities provided by other modules; thus it is not difficult to handle them manually. Of course, in other scenarios this could be a problem.
BTW, managing the database schema upgrades needed by new versions of the software is another problem - in a desktop application, you can't afford to do that with an administration task as you'd probably do in a regular enterprise application, since the upgrade must be performed automatically and in unattended mode when the user installs and runs a new version of the application. I've done some preliminary work in this area that I should be able to post on my blog in a few days.
May 15, 2008 · Mr B Loid
Flex is a good technology for doing simple things, and I do agree that Adobe now has a prominent market position; after all, it's the only platform that already ships. But it suffers from _severe_ limitations: one for all, it does not support multi-threading. This means that you can't do, or can't easily do, a lot of things, and that your programming model is really still an "enhanced" web one, rather than a full-fledged application. I also see a lot of problems with the multi-core thing (no multi-threading means no capability of exploiting all the available cores), and these are valid reasons for asking for more.
"Anyone remember EJB?" Given that they are widely used, even through the Spring success, I believe a lot of people remember them. :-)
May 14, 2008 · Mr B Loid
Try this: http://www35.cplan.com/cb_export/OT_TS-6509_296509_191-1_v1.pdf
Cay Horstmann posted a nice hack to quickly download J1 stuff:
http://weblogs.java.net/blog/cayhorstmann/
May 14, 2008 · Mr B Loid
I don't believe Swing will suffer at all; on the contrary, it will benefit from JavaFX. At the Media BoF the engineers quickly explained what the new Java APIs for the video codecs will look like, so you see that we're gaining them on the Swing side too. And I believe the SceneNode stuff is available for Swing too.
As for the IDEs, I see that people just use the best tool for the job. For instance, I've recently seen a lot of people moving from Eclipse to NetBeans because they felt NetBeans offered the best tools for J2EE or for Swing. So, my only point is to have JavaFX and all the related stuff working. If it does, the main problem derived from a hypothetical lack of support from Eclipse will probably be in the Eclipse community. ;-)
May 06, 2008 · Mads Kristensen
May 06, 2008 · Geertjan Wielenga
May 01, 2008 · Rebecca
"Apps can always be fixed, even if it means dropping the use of an existing library. It just makes things harder."
This sounds way too simplistic! For an application based on Cocoa, dropping Cocoa means rewriting the application.
"I'm not worried. Every time Apple releases (or doesn't release) an updated VM, people seem to be angry or unhappy. After the critisism dies, Apple usually quietly fixes the issues if the software-authors haven't worked around them yet."
I don't know whether you live on the same planet where I live :-) but a) the criticism never dies and b) Apple does not fix everything (that's why the criticism never dies). Java 5 on Leopard still has severely annoying bugs, e.g. with Spaces (is this the kind of thing you think Apple focuses on before bringing Java up to date?). There was no support for Java 5 on Jaguar, nor will there be support for Java 6 on Leopard, which is another annoying point. And as Bodo says, there's no support for Java 6 on PPC and 32-bit Intel, which is *deeply* annoying. I'd like to be wrong, but I bet this won't ever be fixed.
Apr 13, 2008 · Lebon Bon Lebon
Apr 09, 2008 · Lebon Bon Lebon
It's difficult to say. But, first, I'd say that I don't believe anybody who categorically says either "applets are dead" or "applets are going to beat Flex". Nobody can predict the future so certainly, because while it's true that Flex has two big advantages - 1) it nicely integrates the work of engineers and designers and 2) it can be installed easily - Java is a much more powerful language and environment. For instance, it can do real multithreading and there's a huge availability of components, software and frameworks. The latter point being the most important, it's really critical to see whether JSE 6 Update N will deliver what has been promised.
But there's another point. From my perspective, applets are really unpopular for applications that deploy to the end user, while there are no problems if you're going to deploy in controlled environments (e.g. clients in a managed network). In my career from 1996 up to today I've always seen at least one applet per year in a project I've been involved in, even though I didn't work on all of them (up to a couple of years ago I was really focused on the server side). In all the cited projects the applets have been deployed with success. In this segment point 2) can be addressed easily because computers are administered, and point 1) is not so important since the human operator is not an end user - in any case, I'm seeing customers that are pretty good at delivering decently designed stuff from the graphical point of view, especially since NetBeans Matisse became available and after the success of the Filthy Rich Clients book by Romain and Chet. Once points 1) and 2) are not an issue, Java beats Flex hands down, since the language is much more powerful, as I have already said. A third critical point has been the AWT Thread issues that did cause some trouble, but in the latest years I see that customers are better advised about that. So I dare say it's likely that both applets and Flex have a future; what I can't really predict for sure is what their strength ratio will be. Note that I never cited JavaFX or the graphic design tools that Sun might deliver, since we are still waiting for it to be completed; in any case it might have a role in this battle.
Apr 01, 2008 · Vera Tushurashvili
Mar 05, 2008 · Peter Stofferis
Feb 12, 2008 · admin
@Moritz: ciao! :-)
@Florian: can you please tell me what would be the problem with just checking out your project from a repository for setting up a new workplace? Clearly there's some complexity in your project that I can't understand.
--Fabrizio Giudici
Feb 12, 2008 · admin
--
Fabrizio Giudici
Feb 06, 2008 · Lebon Bon Lebon
--Fabrizio Giudici
Feb 06, 2008 · Geertjan Wielenga
--Fabrizio Giudici
Feb 01, 2008 · Erik Thauvin
--Fabrizio Giudici
Jan 12, 2008 · Johannes Schneider