The Latest Java Topics

Visualize Maven Project Dependencies with dependency:tree and Dot Diagram Output
The dependency:tree goal of the Maven dependency plugin supports various graphical output formats from version 2.4 onwards. This is how you would create a diagram showing all dependencies in the com.example group in the dot format:

mvn dependency:tree -Dincludes=com.example -DappendOutput=true -DoutputType=dot -DoutputFile=/path/to/output.dot

To actually produce an image from the .dot file you can use one of the .dot renderers, for example an online dot renderer (paste the file contents into the text box and press enter). You could also generate the output in the graphml format and visualize it in Eclipse. From http://theholyjava.wordpress.com/2012/01/13/visualize-maven-project-dependencies-with-dependencytree-and-dot-diagram-output/
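If you have Graphviz installed locally, you can also render the .dot file yourself instead of using an online renderer; the paths below are only placeholders:

dot -Tpng /path/to/output.dot -o /path/to/dependencies.png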
January 25, 2012
by Jakub Holý
· 31,123 Views
Modular Java Apps - A Microkernel Approach
Software engineering is all about reuse. We programmers therefore love to split applications up into smaller components so that each of them can be reused or extended in an independent manner. A keyword here is "loose coupling". Slightly simplified, this means each component should have as few dependencies on other components as possible. Most importantly, if I have a component B which relies on component A, I don't want A to need to know about B. Component A should just provide a clean interface which can be used and extended by B.

In Java there are many frameworks which provide exactly this functionality: Java EE, Spring, OSGi. However, each of those frameworks comes with its own way of doing things and provides lots and lots of additional functionality - whether you want it or not! Since we here at scireum love modularity (we build 4 products out of a set of about 10 independent modules) we built our own little framework. I factored out the most important parts and now have a single class with less than 250 lines of code plus comments! I call this a microkernel approach, since it compares nicely to the situation we have with operating systems: there are monolithic kernels like the one of Linux with about 11,430,712 lines of code, and there is a concept called a microkernel, like the one of Minix with about 6,000 lines of executable kernel code. There is still an ongoing discussion about which of the two solutions is better. A monolithic kernel is faster; a microkernel has way less critical code (critical code means: a bug there will crash the complete system). If you haven't already, you should read more about microkernels on Wikipedia. However one feels about operating systems - when it comes to Java I prefer fewer dependencies and, if possible, no black magic I don't understand. Especially if this magic involves complex ClassLoader structures. Therefore, here comes Nucleus...

How does this work? The framework (Nucleus) solves two problems of modular applications:

I want to provide a service to other components - but I only want to expose an interface, and they should be provided with my implementation at runtime without knowing (referencing) it.

I want to provide a service or callback for other components. I provide an interface, and I want to know all classes implementing it, so I can invoke them.

Ok, we probably need examples for this. Say we want to implement a simple timer service. It provides an interface:

public interface EveryMinute {
    void runTimer() throws Exception;
}

All classes implementing this interface should be invoked every minute. Additionally we provide some info - namely, when the timer was executed last:

public interface TimerInfo {
    String getLastOneMinuteExecution();
}

Ok, next we need a client for our services:

@Register(classes = EveryMinute.class)
public class ExampleNucleus implements EveryMinute {

    private static Part<TimerInfo> timerInfo = Part.of(TimerInfo.class);

    public static void main(String[] args) throws Exception {
        Nucleus.init();
        while (true) {
            Thread.sleep(10000);
            System.out.println("Last invocation: "
                    + timerInfo.get().getLastOneMinuteExecution());
        }
    }

    @Override
    public void runTimer() throws Exception {
        System.out.println("The time is: "
                + DateFormat.getTimeInstance().format(new Date()));
    }
}

The static field "timerInfo" uses Part, a simple helper class which fetches the registered instance from Nucleus on the first call and loads it into a private field.
So accessing this part has almost no overhead compared to a normal field access - yet we only reference an interface, not an implementation. The main method first initializes Nucleus (this performs the classpath scan etc.) and then simply goes into an infinite loop, printing the last execution of our timer every ten seconds. Since our class wears a @Register annotation, it will be discovered by a special ClassLoadAction (not by Nucleus itself), instantiated and registered for the EveryMinute interface. Its method runTimer will then be invoked by our timer service every minute. Ok, but how would our TimerService look?

@Register(classes = { TimerInfo.class })
public class TimerService implements TimerInfo {

    @InjectList(EveryMinute.class)
    private List<EveryMinute> everyMinute;

    private long lastOneMinuteExecution = 0;
    private Timer timer;

    public TimerService() {
        start();
    }

    public void start() {
        timer = new Timer(true);
        // Schedule the task to wait 60 seconds and then invoke
        // every 60 seconds.
        timer.schedule(new InnerTimerTask(), 1000 * 60, 1000 * 60);
    }

    private class InnerTimerTask extends TimerTask {

        @Override
        public void run() {
            // Iterate over all instances registered for
            // EveryMinute and invoke their runTimer method.
            // Exceptions are caught so one failing task does not stop the timer.
            for (EveryMinute task : everyMinute) {
                try {
                    task.runTimer();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // Update lastOneMinuteExecution
            lastOneMinuteExecution = System.currentTimeMillis();
        }
    }

    @Override
    public String getLastOneMinuteExecution() {
        if (lastOneMinuteExecution == 0) {
            return "-";
        }
        return DateFormat.getDateTimeInstance().format(
                new Date(lastOneMinuteExecution));
    }
}

This class also wears a @Register annotation so that it will also be loaded by the ClassLoadAction named above (actually the ServiceLoadAction). As above, it will be instantiated and put into Nucleus (as the implementation of TimerInfo). Additionally it wears an @InjectList annotation on the everyMinute field. This will be processed by another class named Factory which performs simple dependency injection. Since its constructor starts a Java Timer for the InnerTimerTask, from that point on all instances registered for EveryMinute will be invoked by this timer - as the name says - every minute.

How is it implemented? The good thing about Nucleus is that it is powerful on the one hand, but very simple and small on the other. As you can see, there is no inner part for special or privileged services. Everything is built around the kernel - the class Nucleus. Here is what it does:

It scans the classpath and looks for files called "component.properties". Those need to be in the root folder of a JAR or in the /src folder of an Eclipse project respectively.

For each identified JAR / project / classpath element, it then collects all contained class files and loads them using Class.forName.

For each class, it checks if it implements ClassLoadAction; if yes, it is put into a special list.

Each ClassLoadAction is instantiated and each previously seen class is sent to it using: void handle(Class clazz)

Finally each ClassLoadAction is notified that Nucleus is complete, so that final steps (like annotation based dependency injection) can be performed.

That's it. The only other thing Nucleus provides is a registry which can be used to register and retrieve objects for a class. (An in-depth description of the process above can be found here: http://andreas.haufler.info/2012/01/iterating-over-all-classes-with.html). Now, to make this framework usable as shown above, there is a set of classes around Nucleus.
Most important is the class ServiceLoadAction, which will instantiate each class that wears a @Register annotation, run Factory.inject (our mini DI tool) on it, and throw it into Nucleus for the listed classes. What's important: the ServiceLoadAction has no specific rights or privileges; you can easily write your own implementation which does smarter stuff. Next to some annotations, there are three other handy classes when it comes to retrieving instances from Nucleus: Factory, Part and Parts. As noted above, the Factory is a simple dependency injector. Currently only the ServiceLoadAction automatically uses the Factory, as all classes wearing the @Register annotation are scanned for required injections. You can however use this factory to run injections on your own classes, or use other ClassLoadActions to do the same as ServiceLoadAction. If you can't or don't want to rely on annotation based dependency magic, you can use the two helper classes Part and Parts. Those are used like normal fields (see ExampleNucleus.timerInfo above) and fetch the appropriate object or list of objects automatically. Since the result is cached, repeated invocations have almost no overhead compared to a normal field. Nucleus and the example shown above are open source (MIT license) and available here: https://github.com/andyHa/scireumOpen/blob/master/src/examples/ExampleNucleus.java https://github.com/andyHa/scireumOpen/tree/master/src/com/scireum/open/nucleus If you're interested in using Nucleus, I could put the relevant sources into a separate repository and also provide a release jar - just write a comment below and let me know. This post is the fourth part of my series "Enterprisy Java" - we share our hints and tricks on how to overcome the obstacles when trying to build several multi-tenant web applications out of a set of common modules.
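Since the ServiceLoadAction has no special privileges, writing your own ClassLoadAction is the natural extension point. The following is only a hypothetical sketch of such an action; apart from the handle(Class) method and the @Register annotation mentioned in the post, everything here (the class name, what it does, any further interface methods Nucleus may require) is an assumption, not taken from the actual Nucleus sources:

// Hypothetical sketch - not from the Nucleus sources.
public class LoggingLoadAction implements ClassLoadAction {

    @Override
    public void handle(Class clazz) {
        // Called once for every class found during the classpath scan;
        // here we simply log classes that carry the @Register annotation.
        if (clazz.isAnnotationPresent(Register.class)) {
            System.out.println("Discovered registered class: " + clazz.getName());
        }
    }
}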
January 23, 2012
by Andreas Haufler
· 9,392 Views · 1 Like
Datatype Conversion in Java: XMLGregorianCalendar to java.util.Date / java.util.Date to XMLGregorianCalendar
package singz.test;

import java.util.Date;
import java.util.GregorianCalendar;

import javax.xml.datatype.DatatypeConfigurationException;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

/**
 * A utility class for converting objects between java.util.Date and
 * XMLGregorianCalendar types
 */
public class XMLGregorianCalendarConversionUtil {

    // DatatypeFactory creates new javax.xml.datatype objects that map XML
    // to/from Java objects.
    private static DatatypeFactory df = null;

    static {
        try {
            df = DatatypeFactory.newInstance();
        } catch (DatatypeConfigurationException e) {
            throw new IllegalStateException(
                "Error while trying to obtain a new instance of DatatypeFactory", e);
        }
    }

    // Converts a java.util.Date into an instance of XMLGregorianCalendar
    public static XMLGregorianCalendar asXMLGregorianCalendar(java.util.Date date) {
        if (date == null) {
            return null;
        } else {
            GregorianCalendar gc = new GregorianCalendar();
            gc.setTimeInMillis(date.getTime());
            return df.newXMLGregorianCalendar(gc);
        }
    }

    // Converts an XMLGregorianCalendar to an instance of java.util.Date
    public static java.util.Date asDate(XMLGregorianCalendar xmlGC) {
        if (xmlGC == null) {
            return null;
        } else {
            return xmlGC.toGregorianCalendar().getTime();
        }
    }

    public static void main(String[] args) {
        Date currentDate = new Date(); // Current date

        // java.util.Date to XMLGregorianCalendar
        XMLGregorianCalendar xmlGC =
            XMLGregorianCalendarConversionUtil.asXMLGregorianCalendar(currentDate);
        System.out.println("Current date in XMLGregorianCalendar format: " + xmlGC.toString());

        // XMLGregorianCalendar to java.util.Date
        System.out.println("Current date in java.util.Date format: "
            + XMLGregorianCalendarConversionUtil.asDate(xmlGC).toString());
    }
}

Why do we need XMLGregorianCalendar? Java Architecture for XML Binding (JAXB) allows Java developers to map Java classes to XML representations. JAXB provides two main features: the ability to marshal Java objects into XML and the inverse, i.e. to unmarshal XML back into Java objects. In the default data type bindings, i.e. mappings of XML Schema (XSD) data types to Java data types in JAXB, the following types in XML Schema (mostly used in web services definitions) - xsd:dateTime, xsd:time, xsd:date and so on - map to the javax.xml.datatype.XMLGregorianCalendar Java type. From http://singztechmusings.in/datatype-conversion-in-java-xmlgregoriancalendar-to-java-util-date-java-util-date-to-xmlgregoriancalendar/
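As a quick illustration of where the utility comes in handy, here is a hedged sketch of a JAXB-annotated class whose xsd:dateTime-backed property is an XMLGregorianCalendar. The Order class and its field names are made up for this example and are not part of the original post:

import java.util.Date;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.datatype.XMLGregorianCalendar;

// Hypothetical domain class - JAXB maps xsd:dateTime to XMLGregorianCalendar by default.
@XmlRootElement
public class Order {

    private XMLGregorianCalendar placedAt;

    @XmlElement
    public XMLGregorianCalendar getPlacedAt() {
        return placedAt;
    }

    public void setPlacedAt(XMLGregorianCalendar placedAt) {
        this.placedAt = placedAt;
    }

    // Convenience accessors that convert via the utility class from the article.
    public Date getPlacedAtAsDate() {
        return XMLGregorianCalendarConversionUtil.asDate(placedAt);
    }

    public void setPlacedAtAsDate(Date date) {
        this.placedAt = XMLGregorianCalendarConversionUtil.asXMLGregorianCalendar(date);
    }
}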
January 21, 2012
by Singaram Subramanian
· 100,795 Views
The Persistence Layer with Spring Data JPA
This is the fourth of a series of articles about Persistence with Spring. This article will focus on the configuration and implementation of the persistence layer with Spring 3.1, JPA and Spring Data. For a step by step introduction to setting up the Spring context using Java based configuration and the basic Maven pom for the project, see this article.

The Persistence with Spring series:
Part 1 - The Persistence Layer with Spring 3.1 and Hibernate
Part 3 - The Persistence Layer with Spring 3.1 and JPA
Part 5 - Transaction configuration with JPA and Spring 3.1

No More DAO Implementations

As I discussed in a previous post, the DAO layer usually consists of a lot of boilerplate code that can and should be simplified. The advantages of such a simplification are manifold: a decrease in the number of artifacts that need to be defined and maintained, simplification and consistency of data access patterns, and consistency of configuration. Spring Data takes this simplification one step further and makes it possible to remove the DAO implementations entirely - the interface of the DAO is now the only artifact that needs to be explicitly defined.

The Spring Data Managed DAO

In order to start leveraging the Spring Data programming model with JPA, a DAO interface needs to extend the JPA specific Repository interface - JpaRepository - in Spring's interface hierarchy. This will enable Spring Data to find this interface and automatically create an implementation for it. Also, by extending the interface we get most if not all of the relevant generic CRUD methods for standard data access available in the DAO.

Defining Custom Access Methods and Queries

As discussed, by implementing one of the Repository interfaces, the DAO will already have some basic CRUD methods (and queries) defined and implemented. To define more specific access methods, Spring JPA supports quite a few options - you can either simply define a new method in the interface, or you can provide the actual JPQL query by using the @Query annotation. A third option to define custom queries is to make use of JPA Named Queries, but this has the disadvantage that it either involves XML or burdens the domain class with the queries. In addition to these, Spring Data introduces a more flexible and convenient API, similar to the JPA Criteria API, only more readable and reusable. The advantages of this API become more pronounced when dealing with a large number of fixed queries that could potentially be expressed more concisely through a smaller number of reusable blocks that keep occurring in different combinations.

Automatic Custom Queries

When Spring Data creates a new Repository implementation, it analyzes all the methods defined by the interfaces and tries to automatically generate queries from the method names. While this has limitations, it is a very powerful and elegant way of defining new custom access methods with very little effort. For example, if the managed entity has a name field (and the Java Bean standard getter and setter for that field), defining the findByName method in the DAO interface will automatically generate the correct query:

public interface IFooDAO extends JpaRepository<Foo, Long> {
    Foo findByName(final String name);
}

This is a relatively simple example; a much larger set of keywords is supported by the query creation mechanism.
In the case that the parser cannot match the property with the domain object field, the following exception is thrown:

java.lang.IllegalArgumentException: No property nam found for type class org.rest.model.Foo

Manual Custom Queries

In addition to deriving the query from the method name, a custom query can be manually specified with the method level @Query annotation. For even more fine grained control over the creation of queries, such as using named parameters or modifying existing queries, the reference documentation is a good place to start.

Spring Data Transaction Configuration

The actual implementation of the Spring Data managed DAO - SimpleJpaRepository - uses annotations to define and configure transactions. A read-only @Transactional annotation is used at the class level, which is then overridden for the non-read-only methods. The rest of the transaction semantics are default, but these can easily be overridden manually per method.

Exception Translation Without the Template

One of the responsibilities of the Spring ORM templates (JpaTemplate, HibernateTemplate) is exception translation - translating JPA exceptions, which tie the API to JPA, to Spring's DataAccessException hierarchy. Without the template to do that, exception translation can still be enabled by annotating the DAOs with the @Repository annotation. That, coupled with a Spring bean postprocessor, will advise all @Repository beans with all the implementations of PersistenceExceptionTranslator found in the container - to provide exception translation without using the template. The fact that exception translation is indeed active can easily be verified with an integration test:

@Test(expected = DataAccessException.class)
public void whenAUniqueConstraintIsBroken_thenSpringSpecificExceptionIsThrown() {
    String name = "randomName";
    this.service.save(new Foo(name));
    this.service.save(new Foo(name));
}

Exception translation is done through proxies; in order for Spring to be able to create proxies around the DAO classes, these must not be declared final.

Spring Data Configuration

To activate the Spring JPA repository support, the jpa namespace is defined and used to specify the package where the DAO interfaces are located. At this point, there is no equivalent Java based configuration - support for it is however in the works.

The Spring Java or XML Configuration

The JPA configuration with Spring 3.1 has already been carefully discussed in the previous article of this series. Spring Data also takes advantage of the Spring support for the JPA @PersistenceContext annotation, which it uses to wire the EntityManager into the Spring factory bean responsible for creating the actual DAO implementations - JpaRepositoryFactoryBean. In addition to the already discussed configuration, there is one last missing piece - including the Spring Data XML configuration in the overall persistence configuration:

@Configuration
@EnableTransactionManagement
@ImportResource("classpath*:*springDataConfig.xml")
public class PersistenceJPAConfig {
    ...
}

The Maven Configuration

In addition to the Maven configuration for JPA defined in a previous article, the spring-data-jpa dependency is added:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jpa</artifactId>
    <version>1.0.2.RELEASE</version>
</dependency>

Conclusion

This article covered the configuration and implementation of the persistence layer with Spring 3.1, JPA 2 and Spring Data JPA (part of the Spring Data umbrella project), using both XML and Java based configuration.
The various methods of defining more advanced custom queries are discussed, as well as the configuration with the new jpa namespace and the transactional semantics. The final result is a new and elegant take on data access with Spring, with almost no actual implementation work. You can check out the full implementation in the github project. From the original The Persistence Layer with Spring Data JPA of the Persistence with Spring series
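The article mentions the @Query annotation and the jpa namespace without their snippets surviving the copy. As a rough, hedged illustration of what these typically look like with Spring Data JPA 1.0 - the Foo entity, the JPQL and the package name below are placeholders, not the article's actual code - the XML side usually boils down to a single element such as <jpa:repositories base-package="org.rest.dao" />, while a manually specified query looks roughly like this:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Sketch only: Foo is a placeholder entity with a 'name' property.
public interface IFooDAO extends JpaRepository<Foo, Long> {

    // JPQL supplied manually instead of being derived from the method name.
    @Query("select f from Foo f where f.name = ?1")
    Foo retrieveByName(String name);
}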
January 20, 2012
by Eugen Paraschiv
· 153,620 Views · 2 Likes
Which Integration Framework Should You Use – Spring Integration, Mule ESB or Apache Camel?
Data exchanges between companies are increasing a lot. The number of applications that must be integrated is increasing, too. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications must be modeled in a standardized way, realized efficiently and supported by automatic tests. Three integration frameworks are available in the JVM environment which fulfil these requirements: Spring Integration, Mule ESB and Apache Camel. They implement the well-known Enterprise Integration Patterns (EIP, http://www.eaipatterns.com) and therefore offer a standardized, domain-specific language to integrate applications. These integration frameworks can be used in almost every integration project within the JVM environment - no matter which technologies, transport protocols or data formats are used. All integration projects can be realized in a consistent way without redundant boilerplate code. This article compares all three alternatives and discusses their pros and cons. If you want to know when to use a more powerful Enterprise Service Bus (ESB) instead of one of these lightweight integration frameworks, then you should read this blog post: http://www.kai-waehner.de/blog/2011/06/02/when-to-use-apache-camel/ (it explains when to use Apache Camel, but the title could also be "When to use a lightweight integration framework").

Comparison Criteria

Several criteria can be used to compare these three integration frameworks:

Open source
Basic concepts / architecture
Testability
Deployment
Popularity
Commercial support
IDE support
Error handling
Monitoring
Enterprise readiness
Domain-specific language (DSL)
Number of components for interfaces, technologies and protocols
Expandability

Similarities

All three frameworks have many similarities. Therefore, many of the above comparison criteria come out even! All implement the EIPs and offer a consistent model and messaging architecture to integrate several technologies. No matter which technologies you have to use, you always do it the same way, i.e. same syntax, same API, same automatic tests. The only difference is the configuration of each endpoint (e.g. JMS needs a queue name while JDBC needs a database connection URL). IMO, this is the most significant feature. Each framework uses different names, but the idea is the same. For instance, "Camel routes" are equivalent to "Mule flows", and "Camel components" are called "adapters" in Spring Integration. Besides this, several other similarities exist which differ from heavyweight ESBs. You just have to add some libraries to your classpath. Therefore, you can use each framework anywhere in the JVM environment, no matter if your project is a Java SE standalone application, or if you want to deploy it to a web container (e.g. Tomcat), a JEE application server (e.g. Glassfish), an OSGi container or even to the cloud. Just add the libraries, do some simple configuration, and you are done. Then you can start implementing your integration stuff (routing, transformation, and so on). All three frameworks are open source and offer the familiar public features such as source code, forums, mailing lists, issue tracking and voting for new features. Good communities write documentation, blogs and tutorials (IMO Apache Camel has the most noticeable community). Only the number of released books could be better for all three.
Commercial support is available via different vendors:

Spring Integration: SpringSource (http://www.springsource.com)
Mule ESB: MuleSoft (http://www.mulesoft.org)
Apache Camel: FuseSource (http://fusesource.com) and Talend (http://www.talend.com)

IDE support is very good; visual designers are even available for all three alternatives to model integration problems (and let them generate the code). Each of the frameworks is enterprise ready, because all offer the required features such as error handling, automatic testing, transactions, multithreading, scalability and monitoring.

Differences

If you know one of these frameworks, you can learn the others very easily due to their shared concepts and many other similarities. Next, let's discuss their differences to be able to decide when to use which one. The two most important differences are the number of supported technologies and the DSL(s) used. Thus, I will concentrate especially on these two criteria in the following. I will use code snippets implementing the well-known EIP "Content-based Router" in all examples. Judge for yourself which one you prefer.

Spring Integration

Spring Integration is based on the well-known Spring project and extends the programming model with integration support. You can use Spring features such as dependency injection, transactions or security as you do in other Spring projects. Spring Integration is awesome if you already have a Spring project and need to add some integration stuff. It takes almost no effort to learn Spring Integration if you know Spring itself. Nevertheless, Spring Integration only offers very rudimentary support for technologies - just "basic stuff" such as File, FTP, JMS, TCP, HTTP or Web Services. Mule and Apache Camel offer many, many further components! Integrations are implemented by writing a lot of XML code (without a real DSL), as the content-based router snippet in the original post shows. You can also use Java code and annotations for some stuff, but in the end, you need a lot of XML. Honestly, I do not like too much XML declaration. It is fine for configuration (such as JMS connection factories), but not for complex integration logic. At the least, it should be a DSL with better readability, but more complex Spring Integration examples are really tough to read. Besides, the visual designer for Eclipse (called integration graph) is OK, but not as good and intuitive as its competitors. Therefore, I would only use Spring Integration if I already have an existing Spring project and must just add some integration logic requiring only "basic technologies" such as File, FTP, JMS or JDBC.

Mule ESB

Mule ESB is - as the name suggests - a full ESB including several additional features instead of just an integration framework (you can compare it to Apache ServiceMix, which is an ESB based on Apache Camel). Nevertheless, Mule can be used as a lightweight integration framework, too - by just not adding or using any additional features besides the EIP integration stuff. Like Spring Integration, Mule only offers an XML DSL. At least it is much easier to read than Spring Integration, in my opinion. Mule Studio offers a very good and intuitive visual designer. Comparing the corresponding Mule snippet in the original post to the Spring Integration code, it is more like a DSL than Spring Integration. This matters if the integration logic is more complex. The major advantage of Mule is some very interesting connectors to important proprietary interfaces such as SAP, Tibco Rendezvous, Oracle Siebel CRM, Paypal or IBM's CICS Transaction Gateway.
If your integration project requires some of these connectors, then I would probably choose Mule! A disadvantage for some projects might be that Mule says no to OSGi: http://blogs.mulesoft.org/osgi-no-thanks/

Apache Camel

Apache Camel is almost identical to Mule. It offers many, many components (even more than Mule) for almost every technology you could think of. If there is no component available, you can create your own component very easily, starting with a Maven archetype! If you are a Spring guy: Camel has awesome Spring integration, too. Like the other two, it offers an XML DSL, routing for example on expressions such as ${in.header.type} is 'com.kw.DvdOrder' and ${in.header.type} is 'com.kw.VideogameOrder' (the full XML snippet is in the original post). Readability is better than Spring Integration and almost identical to Mule. Besides, a very good (but commercial) visual designer called Fuse IDE is available from FuseSource - generating XML DSL code. Nevertheless, it is a lot of XML, no matter if you use a visual designer or just your XML editor. Personally, I do not like this. Therefore, let me show you another awesome feature: Apache Camel also offers DSLs for Java, Groovy and Scala. You do not have to write so much ugly XML. Personally, I prefer using one of these fluent DSLs instead of XML for integration logic. I only do configuration stuff such as JMS connection factories or JDBC properties in XML. Here you can see the same example using a Java DSL code snippet:

from("file:incomingOrders")
    .choice()
        .when(body().isInstanceOf(com.kw.DvdOrder.class))
            .to("file:incoming/dvdOrders")
        .when(body().isInstanceOf(com.kw.VideogameOrder.class))
            .to("jms:videogameOrdersQueue")
        .otherwise()
            .to("mock:OtherOrders");

The fluent programming DSLs are very easy to read (even in more complex examples). Besides, these programming DSLs have better IDE support than XML (code completion, refactoring, etc.). Due to these awesome fluent DSLs, I would always use Apache Camel if I do not need some of Mule's excellent connectors to proprietary products. Due to its very good integration with Spring, I would even prefer Apache Camel to Spring Integration in most use cases. By the way: Talend offers a visual designer generating Java DSL code, but it generates a lot of boilerplate code and does not allow round-trip editing (i.e. you cannot edit the generated code). This is a no-go criterion and has to be fixed soon (hopefully)!

And the winner is...

... all three integration frameworks, because they are all lightweight and easy to use - even for complex integration projects. It is awesome to integrate several different technologies by always using the same syntax and concepts - including very good testing support. My personal favorite is Apache Camel, due to its awesome Java, Groovy and Scala DSLs, combined with many supported technologies. I would only use Mule if I need some of its unique connectors to proprietary products. I would only use Spring Integration in an existing Spring project and if I only need to integrate "basic technologies" such as FTP or JMS. Nevertheless: no matter which of these lightweight integration frameworks you choose, you will have much fun realizing complex integration projects easily with little effort. Remember: often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

Best regards, Kai Wähner (Twitter: @KaiWaehner) http://www.kai-waehner.de/blog/2012/01/10/spoilt-for-choice-which-integration-framework-to-use-spring-integration-mule-esb-or-apache-camel/
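The XML snippets embedded in the original post did not survive the copy. Below is a rough reconstruction of what a Camel XML DSL content-based router along those lines could look like, built around the surviving ${in.header.type} expressions. Treat it as an illustrative sketch, not the article's exact code; the endpoint URIs are placeholders:

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file:incomingOrders"/>
    <choice>
      <when>
        <simple>${in.header.type} is 'com.kw.DvdOrder'</simple>
        <to uri="file:incoming/dvdOrders"/>
      </when>
      <when>
        <simple>${in.header.type} is 'com.kw.VideogameOrder'</simple>
        <to uri="jms:videogameOrdersQueue"/>
      </when>
      <otherwise>
        <to uri="mock:OtherOrders"/>
      </otherwise>
    </choice>
  </route>
</camelContext>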
January 19, 2012
by Kai Wähner
· 110,079 Views · 9 Likes
Java Garbage Collection Algorithm Design Choices And Metrics To Evaluate Garbage Collector Performance
Memory Management in the Java HotSpot Virtual Machine (white paper)

Serial vs. Parallel

With serial collection, only one thing happens at a time. For example, even when multiple CPUs are available, only one is utilized to perform the collection. When parallel collection is used, the task of garbage collection is split into parts and those subparts are executed simultaneously, on different CPUs. The simultaneous operation enables the collection to be done more quickly, at the expense of some additional complexity and potential fragmentation.

Concurrent versus Stop-the-world

When stop-the-world garbage collection is performed, execution of the application is completely suspended during the collection. Alternatively, one or more garbage collection tasks can be executed concurrently, that is, simultaneously, with the application. Typically, a concurrent garbage collector does most of its work concurrently, but may also occasionally have to do a few short stop-the-world pauses. Stop-the-world garbage collection is simpler than concurrent collection, since the heap is frozen and objects are not changing during the collection. Its disadvantage is that it may be undesirable for some applications to be paused. Correspondingly, the pause times are shorter when garbage collection is done concurrently, but the collector must take extra care, as it is operating over objects that might be updated at the same time by the application. This adds some overhead to concurrent collectors that affects performance and requires a larger heap size.

Compacting versus Non-compacting versus Copying

After a garbage collector has determined which objects in memory are live and which are garbage, it can compact the memory, moving all the live objects together and completely reclaiming the remaining memory. After compaction, it is easy and fast to allocate a new object at the first free location. A simple pointer can be utilized to keep track of the next location available for object allocation. In contrast with a compacting collector, a non-compacting collector releases the space utilized by garbage objects in-place, i.e., it does not move all live objects to create a large reclaimed region in the same way a compacting collector does. The benefit is faster completion of garbage collection, but the drawback is potential fragmentation. In general, it is more expensive to allocate from a heap with in-place deallocation than from a compacted heap. It may be necessary to search the heap for a contiguous area of memory sufficiently large to accommodate the new object. A third alternative is a copying collector, which copies (or evacuates) live objects to a different memory area. The benefit is that the source area can then be considered empty and available for fast and easy subsequent allocations, but the drawback is the additional time required for copying and the extra space that may be required.

Performance Metrics

Several metrics are utilized to evaluate garbage collector performance, including:

Throughput - the percentage of total time not spent in garbage collection, considered over long periods of time.
Garbage collection overhead - the inverse of throughput, that is, the percentage of total time spent in garbage collection.
Pause time - the length of time during which application execution is stopped while garbage collection is occurring.
Frequency of collection - how often collection occurs, relative to application execution.
Footprint - a measure of size, such as heap size.
Promptness - the time between when an object becomes garbage and when the memory becomes available.

If you'd like to explore more on this, and about Java's garbage collection / memory management in general, have a look at the slides "Java Garbage Collection, Monitoring, and Tuning" by Carol McDonald.

Related articles:

Practical Garbage Collection - Part 1: Introduction (worldmodscode.wordpress.com)
Reducing memory churn when processing large data set (stackoverflow.com)
The Top Java Memory Problems - Part 2 (dynatrace.com)
imabonehead: Performance Tuning the JVM for Running Apache Tomcat | TomcatExpert (tomcatexpert.com)
When Does the Garbage Collector Run in JVM? (javacircles.wordpress.com)
Why Garbage Collection Paranoia is Still (sometimes) Justified (prog21.dadgum.com)
Adventures in Java Garbage Collection Tuning (rapleaf.com)

From http://singztechmusings.in/java-garbage-collection-algorithm-design-choices-and-metrics-to-evaluate-garbage-collector-performance/
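To connect these design choices to collectors you could actually select in the HotSpot JVM of that era (roughly JDK 6), here is a hedged illustration of the relevant command-line flags; flag availability differs between JVM versions and MyApp is a placeholder, so treat the lines below as examples rather than a definitive list:

# Serial, stop-the-world, compacting collector
java -XX:+UseSerialGC MyApp

# Parallel (throughput) collectors for the young and old generations
java -XX:+UseParallelGC -XX:+UseParallelOldGC MyApp

# Mostly-concurrent, non-compacting collector for the old generation (CMS)
java -XX:+UseConcMarkSweepGC MyApp

# Log pause times and collection details to measure the metrics listed above
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp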
January 19, 2012
by Singaram Subramanian
· 14,198 Views
Method Validation With Spring 3.1 and Hibernate Validator 4.2
JSR 303 and Hibernate Validator have been awesome additions to the Java ecosystem, giving you a standard way to validate your domain model across application layers.
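The full post covers the Spring 3.1 and Hibernate Validator 4.2 specifics of method validation; as a reminder of what plain JSR 303 bean validation looks like before any method-level additions, here is a small self-contained example. The Customer class and its constraints are made up for illustration:

import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class ValidationDemo {

    // Hypothetical domain class carrying standard JSR 303 constraints.
    static class Customer {
        @NotNull
        @Size(min = 2, max = 30)
        String name;

        Customer(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        Validator validator =
                Validation.buildDefaultValidatorFactory().getValidator();

        // An empty name violates the @Size constraint.
        Set<ConstraintViolation<Customer>> violations =
                validator.validate(new Customer(""));

        for (ConstraintViolation<Customer> v : violations) {
            System.out.println(v.getPropertyPath() + " " + v.getMessage());
        }
    }
}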
January 13, 2012
by Gunnar Hillert
· 32,516 Views
From Java to Node.js
I've been developing for quite a while and in quite a few languages. Somehow though, I've always seemed to fall back to Java when doing my own stuff - maybe partly from habit, partly because it has in my opinion the best open source selection out there, and partly because I liked its mix of features and performance. (Originally authored by Matan Amir.)

Specifically though, in the web arena things have been moving fast and furious with new languages, approaches, and methods like RoR, Play!, and Lift (and many others!). While I "get it" regarding the benefits of these frameworks, I never felt the need to give them more than an initial deep dive to see how they work. I nod a few times at their similarities and move back to plain REST-ful services in Java with Spring, maybe an ORM (I try to avoid them these days), and a JS-rich front-end.

Recently, two factors made me deep dive into Node.js. First is my growing appreciation of the progress of the JavaScript front-end. What used to be little JavaScript snippets to validate a form "onSubmit" has evolved into a complete ecosystem of technologies, frameworks, and stacks. My personal favorite these days is Backbone.js, and I try to use it whenever I can. Second was the first-hand feedback I got from a friend at Voxer about their success in using and deploying Node.js at real scale (they have a pretty big Node.js deployment). So I jumped in.

In the short time I've been using it for some (real) projects, I can say that it's my new favorite language of choice. First of all, the event-driven (and non-blocking) model is a perfect fit for server-side development. While this has existed in other languages and forms (Java Servlet 3.0, EventMachine, Twisted, to name a few), I connected with the ease of use and natural maturity of it in JavaScript. We're all used to using callbacks anyway from your typical run-of-the-mill AJAX work, right? Second is the community and how large it's gotten in the short time Node has been around. There are lots of useful open source libraries to use to solve your problems and the quality level is high. Third is that it was just so easy to pick up. I have to admit that my core JavaScript was decent before I started using Node, but in terms of the Node core library and feature set itself, it's pretty lean and mean and provides a good starting point to build what you need in case you can't find someone who already did (which you most likely can). So having said all that, I wanted to share what resources helped me get up to speed in Node.js coming from a Java background.

Getting to know JavaScript

There are lots of good resources out there on Node.js and I've listed them below. For me, the most critical piece was getting my JavaScript knowledge to the next level. Because you'll be spending all your time writing your server code in JavaScript, it pays huge dividends to understand how to take full advantage of it. As a Java developer you're probably trained to think in OO designs. With me, this was the part that I focused on the most. Fortunately (or unfortunately), JavaScript is not a classic OO language. It can be if you shoehorn it, but I think that defeats the purpose. Here is my short list of JavaScript resources:

JavaScript: The Good Parts - Definitely a requirement. Chapters 4 and 5 (functions, objects, and inheritance) are probably the most important part to understand well for those with an OO background.
Learning JavaScript with Object Graphs (part 2, and part 3) - howtonode.org has lots of good Node material, but this 3-part post is a good resource to top off your knowledge from the book.

Simple "Class" Instantiation - Another good post I read recently. Worth digesting.

Learning Node.js

With a good JavaScript background, starting to use Node.js is pretty straightforward. The main part to understand is the asynchronous nature of the I/O and the need to produce and consume events to get stuff done. Here is a list of resources I used to get up to speed:

DailyJS's Node Tutorial - A multipart tutorial for Node on DailyJS's blog. It's a great resource and worth going through all the posts.

Mixu's Node Book - Not complete, but still worth it. I look forward to reading the future chapters.

Node Beginner Book - A good starter read.

How To Node - A blog dedicated to Node.js. Bookmark it.

I felt that going through these was more than enough to give me the push I needed to get started. Hopefully it does for you too (thanks to the authors for taking the time to share their knowledge!).

Frameworks?

Java is all about open source frameworks. That's part of why it's so popular. While Node.js is much newer, lots of people have already done quite a bit of heavy lifting and have shared their code with the world. Below is what I think is a good mapping of popular Java frameworks to their Node.js equivalents (from what I know so far).

Web MVC

In Java land, most people are familiar with Web MVC frameworks like Spring MVC, Struts, Wicket, and JSF. More recently though, the trend towards client-side JS MVC frameworks like Ember.js (SproutCore) and Backbone.js reduces the required feature set of some of these frameworks. Nonetheless, a good comparable Node.js web framework is Express. In a sense, it's even more than a web framework because it also provides most of the "web server" functionality most Java developers are used to getting through Tomcat, Jetty, etc. (more specifically, Express builds on top of Connect). It's well thought out and provides the feature set needed to get things done.

Application Lifecycle Framework (DI Framework)

Spring is a popular framework in Java that provides a great deal of glue and abstraction functionality. It allows easy dependency injection, testing, object lifecycle management, transaction management, etc. It's usually one of the first things I slap into a new project in Java. In Node.js... I haven't really missed it. JavaScript allows much more flexible ways of decoupling dependencies and I personally haven't felt the need to find a replacement for Spring. Maybe that's a good thing?

Object-Relational Mapping (ORMs)

I have mixed feelings regarding ORMs in general, but they do make your life easier sometimes. There is no shortage of ORM libraries for Node.js. Take your pick.

Package Management Tools

Maven is probably the most popular build management tool for Java. While it is very flexible and powerful with a wide variety of plug-ins, it can get very cumbersome. Npm is the mainstream package manager for Node.js. It's light, fast, and useful.

Testing Framework

Java has lots of these for sure, JUnit and company as standard. There are also mock libraries, stub libraries, db test libraries, etc. Node.js has quite a few as well. Pick your poison. I see that nodeunit is popular and is similar to JUnit. I'm personally testing Mocha. Testing tools are a more personal and subjective choice, but the good thing is that there definitely are good choices out there.
Logging

Java developers have quite a list of choices when choosing a logging library. Commons Logging, log4j, logback, and slf4j (wrapper) are some of the more popular ones. Node.js also has a few. I'm currently using winston and have no complaints so far. It has logging levels, multiple transports (appenders, to log4j people), and does it all asynchronously as well.

Hopefully this will help someone save some time when peeking into the world of Node. Good luck!

Source: http://n0tw0rthy.wordpress.com/2012/01/08/from-java-to-node-js/
January 10, 2012
by Chris Smith
· 27,991 Views · 2 Likes
Object-oriented Clojure
Clojure is a LISP dialect, and as such a functional language based on a large set of functions and a small set of data structures that they operate on. However, it is possible to implement classes and objects in Clojure, with constructs well supported in the language itself. I do not claim you should program in Clojure only with the techniques described in this article: they are just an attempt to bridge Java libraries with Clojure, and to introduce objects and interfaces where needed.

Java interoperability

Even from the Clojure REPL, you can easily instantiate Java classes as long as they are in the classpath (so use lein if they are in a library):

(def ford (Car. arg1 arg2))

Since classes usually reside in packages, in real code you would use their fully qualified name:

(def ford (com.dzone.example.Car. arg1 arg2))

You can also call methods:

(.brake ford arg1 arg2)

The first argument of the evaluation of a .methodName is always the object, while additional arguments are placed after it, in the same s-expression (a fancy name for a list that can be evaluated). One of the first exercises in the book The Joy of Clojure is an instantiation of a java.awt.Frame object and a rendering done on it, all from the Clojure interactive interpreter. Doing exploratory testing of classes and objects for Java development with the REPL is as fast as writing the code into a JUnit test.

Define your own: interfaces

defprotocol takes as its first argument an identifier, and a variable number of parameters. Each of these parameters is a method definition like you would write with fn, but without the fn keyword itself and without a body to evaluate. The first argument of the methods should always be this, representing the current object (remember Python?).

(defprotocol Position
  (description [this])
  (translateX [this dx])
  (translateY [this dy])
  (doubleCoords [this])
  (average [this another]))

Define your own: classes

Defining classes in Clojure is simple, at least when they implement immutable Value Objects. defrecord takes a class name, a list of parameters for the constructor, an optional protocol name and a variable number of arguments representing the methods. Again, the method definitions are similar to the ones made with fn, but this time with a body, an s-expression to evaluate. There is a catch: you can only implement methods defined in a protocol.

(defrecord CartesianPoint [x y]
  Position
  (description [this]
    (str "(x=" x ", y=" y ")"))
  (translateX [this dx]
    (CartesianPoint. (+ x dx) y))
  (translateY [this dy]
    (CartesianPoint. x (+ y dy)))
  (doubleCoords [this]
    (CartesianPoint. (* 2 x) (* 2 y)))
  (average [this another]
    (let [mean (fn [a b] (/ (+ a b) 2))]
      (CartesianPoint. (mean x (:x another))
                       (mean y (:y another))))))

The methods access the (immutable) fields of the current object by their names, so this is probably not necessary except for calling other methods on the object. Note that records are not strictly objects in the OO sense, since they do not encapsulate their fields: fields can be accessed anywhere with (:fieldname object). Here's how to work with CartesianPoint objects:

(deftest test-point-x-field
  (is (= 1 (:x (CartesianPoint. 1 2)))))

(deftest test-point-y-field
  (is (= 2 (:y (CartesianPoint. 1 2)))))

(deftest test-point-to-string
  (is (= "(x=1, y=2)" (description (CartesianPoint. 1 2)))))

(deftest test-point-translation-x
  (is (= (CartesianPoint. 11 2) (translateX (CartesianPoint. 1 2) 10))))

(deftest test-point-translation-y
  (is (= (CartesianPoint. 1 12) (translateY (CartesianPoint. 1 2) 10))))

(deftest test-point-doubling
  (is (= (CartesianPoint. 2 4) (doubleCoords (CartesianPoint. 1 2)))))

(deftest test-points-average
  (is (= (CartesianPoint. 3 4) (average (CartesianPoint. 1 2) (CartesianPoint. 5 6)))))
January 10, 2012
by Giorgio Sironi
· 13,476 Views · 2 Likes
Searching relational content with Lucene's BlockJoinQuery
Lucene's 3.4.0 release adds a new feature called index-time join (also sometimes called sub-documents, nested documents or parent/child documents), enabling efficient indexing and searching of certain types of relational content.

Most search engines can't directly index relational content, as documents in the index logically behave like a single flat database table. Yet, relational content is everywhere! A job listing site has each company joined to the specific listings for that company. Each resume might have a separate list of skills, education and past work experience. A music search engine has an artist/band joined to albums and then joined to songs. A source code search engine would have projects joined to modules and then files. Perhaps the PDF documents you need to search are immense, so you break them up and index each section as a separate Lucene document; in this case you'll have common fields (title, abstract, author, date published, etc.) for the overall document, joined to the sub-document (section) with its own fields (text, page number, etc.). XML documents typically contain nested tags, representing joined sub-documents; emails have attachments; office documents can embed other documents. Nearly all search domains have some form of relational content, often requiring more than one join.

If such content is so common then how do search applications handle it today? One obvious "solution" is to simply use a relational database instead of a search engine! If relevance scores are less important and you need to do substantial joining, grouping, sorting, etc., then using a database could be best overall. Most databases include some form of text search, some even using Lucene.

If you still want to use a search engine, then one common approach is to denormalize the content up front, at index-time, by joining all tables and indexing the resulting rows, duplicating content in the process. For example, you'd index each song as a Lucene document, copying over all fields from the song's joined album and artist/band. This works correctly, but can be horribly wasteful as you are indexing identical fields, possibly including large text fields, over and over.

Another approach is to do the join yourself, outside of Lucene, by indexing songs, albums and artist/band as separate Lucene documents, perhaps even in separate indices. At search-time, you first run a query against one collection, for example the songs. Then you iterate through all hits, gathering up (joining) the full set of corresponding albums and then run a second query against the albums, with a large OR'd list of the albums from the first query, repeating this process if you need to join to artist/band as well. This approach will also work, but doesn't scale well as you may have to create possibly immense follow-on queries.

Yet another approach is to use a software package that has already implemented one of these approaches for you! elasticsearch, Apache Solr, Apache Jackrabbit, Hibernate Search and many others all handle relational content in some way.

With BlockJoinQuery you can now directly search relational content yourself! Let's work through a simple example: imagine you sell shirts online. Each shirt has certain common fields such as name, description, fabric, price, etc. For each shirt you have a number of separate stock keeping units or SKUs, which have their own fields like size, color, inventory count, etc.
The SKUs are what you actually sell, and what you must stock, because when someone buys a shirt they buy a specific SKU (size and color). Maybe you are lucky enough to sell the incredible Mountain Three-wolf Moon Short Sleeve Tee, with these SKUs (size, color):

small, blue
small, black
medium, black
large, gray

Perhaps a user first searches for "wolf shirt", gets a bunch of hits, and then drills down on a particular size and color, resulting in this query:

name:wolf AND size=small AND color=blue

which should match this shirt. name is a shirt field while the size and color are SKU fields. But if the user drills down instead on a small gray shirt:

name:wolf AND size=small AND color=gray

then this shirt should not match because the small size only comes in blue and black.

How can you run these queries using BlockJoinQuery? Start by indexing each shirt (parent) and all of its SKUs (children) as separate documents, using the new IndexWriter.addDocuments API to add one shirt and all of its SKUs as a single document block. This method atomically adds a block of documents into a single segment as adjacent document IDs, which BlockJoinQuery relies on. You should also add a marker field to each shirt document (e.g. type = shirt), as BlockJoinQuery requires a Filter identifying the parent documents.

To run a BlockJoinQuery at search-time, you'll first need to create the parent filter, matching only shirts. Note that the filter must use FixedBitSet under the hood, like CachingWrapperFilter:

Filter shirts = new CachingWrapperFilter(
    new QueryWrapperFilter(
        new TermQuery(
            new Term("type", "shirt"))));

Create this filter once, up front, and re-use it any time you need to perform this join. Then, for each query that requires a join, because it involves both SKU and shirt fields, start with the child query matching only SKU fields:

BooleanQuery skuQuery = new BooleanQuery();
skuQuery.add(new TermQuery(new Term("size", "small")), Occur.MUST);
skuQuery.add(new TermQuery(new Term("color", "blue")), Occur.MUST);

Next, use BlockJoinQuery to translate hits from the SKU document space up to the shirt document space:

BlockJoinQuery skuJoinQuery = new BlockJoinQuery(
    skuQuery, shirts, ScoreMode.None);

The ScoreMode enum decides how scores for multiple SKU hits should be aggregated to the score for the corresponding shirt hit. In this query you don't need scores from the SKU matches, but if you did you can aggregate with Avg, Max or Total instead. Finally you are now free to build up an arbitrary shirt query using skuJoinQuery as a clause:

BooleanQuery query = new BooleanQuery();
query.add(new TermQuery(new Term("name", "wolf")), Occur.MUST);
query.add(skuJoinQuery, Occur.MUST);

You could also just run skuJoinQuery as-is if the query doesn't have any shirt fields. Finally, just run this query like normal! The returned hits will be only shirt documents; if you'd also like to see which SKUs matched for each shirt, use BlockJoinCollector:

BlockJoinCollector c = new BlockJoinCollector(
    Sort.RELEVANCE, // sort
    10,             // numHits
    true,           // trackScores
    false           // trackMaxScore
);
searcher.search(query, c);

The provided Sort must use only shirt fields (you cannot sort by any SKU fields). When each hit (a shirt) is competitive, this collector will also record all SKUs that matched for that shirt, which you can retrieve like this:

TopGroups hits = c.getTopGroups(
    skuJoinQuery,
    skuSort,
    0,    // offset
    10,   // maxDocsPerGroup
    0,    // withinGroupOffset
    true  // fillSortFields
);

Set skuSort to the sort order for the SKUs within each shirt.
The first offset hits are skipped (use this for paging through shirt hits). Under each shirt, at most maxDocsPerGroup SKUs will be returned. Use withinGroupOffset if you want to page within the SKUs. If fillSortFields is true then each SKU hit will have values for the fields from skuSort.

The hits returned by BlockJoinCollector.getTopGroups are SKU hits, grouped by shirt. You'd get the exact same results if you had denormalized up-front and then used grouping to group results by shirt. You can also do more than one join in a single query; the joins can be nested (parent to child to grandchild) or parallel (parent to child1 and parent to child2).

However, there are some important limitations of index-time joins:

The join must be computed at index-time and "compiled" into the index, in that all joined child documents must be indexed along with the parent document, as a single document block.

Different document types (for example, shirts and SKUs) must share a single index, which is wasteful as it means non-sparse data structures like FieldCache entries consume more memory than they would if you had separate indices.

If you need to re-index a parent document or any of its child documents, or delete or add a child, then the entire block must be re-indexed. This is a big problem in some cases, for example if you index "user reviews" as child documents then whenever a user adds a review you'll have to re-index that shirt as well as all its SKUs and user reviews.

There is no QueryParser support, so you need to programmatically create the parent and child queries, separating according to parent and child fields.

The join can currently only go in one direction (mapping child docIDs to parent docIDs), but in some cases you need to map parent docIDs to child docIDs. For example, when searching songs, perhaps you want all matching songs sorted by their title. You can't easily do this today because the only way to get song hits is to group by album or band/artist.

The join is a one (parent) to many (children), inner join.

As usual, patches are welcome! There is work underway to create a more flexible, but likely less performant, query-time join capability, which should address a number of the above limitations.

Source: http://blog.mikemccandless.com/2012/01/searching-relational-content-with.html
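For completeness, here is a hedged sketch of the index-time side described above: adding one shirt and its SKUs as a single document block via IndexWriter.addDocuments. The field names mirror the example, but the exact Field construction is typical Lucene 3.x usage rather than code from the original post, and the helper class is made up for illustration:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ShirtBlockIndexer {

    // Adds one shirt and its SKUs as a single, atomic document block.
    // The parent (shirt) document goes last in the block.
    static void indexShirtBlock(IndexWriter writer) throws IOException {
        List<Document> block = new ArrayList<Document>();

        // Child documents (SKUs) first.
        String[][] skus = { { "small", "blue" }, { "small", "black" },
                            { "medium", "black" }, { "large", "gray" } };
        for (String[] sku : skus) {
            Document skuDoc = new Document();
            skuDoc.add(new Field("size", sku[0], Field.Store.YES, Field.Index.NOT_ANALYZED));
            skuDoc.add(new Field("color", sku[1], Field.Store.YES, Field.Index.NOT_ANALYZED));
            block.add(skuDoc);
        }

        // The parent document carries the marker field the parent filter matches on.
        Document shirtDoc = new Document();
        shirtDoc.add(new Field("name", "Mountain Three-wolf Moon Short Sleeve Tee",
                Field.Store.YES, Field.Index.ANALYZED));
        shirtDoc.add(new Field("type", "shirt", Field.Store.YES, Field.Index.NOT_ANALYZED));
        block.add(shirtDoc);

        // Atomically adds the block into a single segment as adjacent document IDs.
        writer.addDocuments(block);
    }
}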
January 9, 2012
by Michael Mccandless
· 14,269 Views
article thumbnail
Java Sequential IO Performance
Many applications record a series of events to file-based storage for later use. This can be anything from logging and auditing, through to keeping a transaction redo log in an event-sourced design or its close relative CQRS. Java has a number of means by which a file can be sequentially written to, or read back again. This article explores some of these mechanisms to understand their performance characteristics. For the scope of this article I will be using pre-allocated files because I want to focus on performance. Constantly extending a file imposes a significant performance overhead and adds jitter to an application, resulting in highly variable latency.

"Why does a pre-allocated file give better performance?", I hear you ask. Well, on disk a file is made up of a series of blocks/pages containing the data. Firstly, it is important that these blocks are contiguous to provide fast sequential access. Secondly, meta-data must be allocated to describe this file on disk and saved within the file-system. A typical large file will have a number of "indirect" blocks allocated to describe the chain of data-blocks containing the file contents that make up part of this meta-data. I'll leave it as an exercise for the reader, or maybe a later article, to explore the performance impact of not pre-allocating the data files. If you have used a database you may have noticed that it pre-allocates the files it will require.

The Test

I want to experiment with 2 file sizes: one that is sufficiently large to test sequential access, but can easily fit in the file-system cache, and another that is much larger so that the cache subsystem is forced to retire pages so that new ones can be loaded. For these two cases I'll use 400MB and 8GB respectively. I'll also loop over the files a number of times to show the pre- and post-warm-up characteristics.

I'll test 4 means of writing and reading back files sequentially:

- RandomAccessFile using a vanilla byte[] of page size.
- Buffered FileInputStream and FileOutputStream.
- NIO FileChannel with ByteBuffer of page size.
- Memory mapping a file using NIO and a direct MappedByteBuffer.

The tests are run on a 2.0 GHz Sandy Bridge CPU with 8GB RAM, an Intel 230 SSD on Fedora Core 15 64-bit Linux with an ext4 file system, and Oracle JDK 1.6.0_30.
The Code import java.io.*; import java.nio.ByteBuffer; import java.nio.MappedByteBuffer; import java.nio.channels.FileChannel; import static java.lang.Integer.MAX_VALUE; import static java.lang.System.out; import static java.nio.channels.FileChannel.MapMode.READ_ONLY; import static java.nio.channels.FileChannel.MapMode.READ_WRITE; public final class TestSequentialIoPerf { public static final int PAGE_SIZE = 1024 * 4; public static final long FILE_SIZE = PAGE_SIZE * 2000L * 1000L; public static final String FILE_NAME = "test.dat"; public static final byte[] BLANK_PAGE = new byte[PAGE_SIZE]; public static void main(final String[] arg) throws Exception { preallocateTestFile(FILE_NAME); for (final PerfTestCase testCase : testCases) { for (int i = 0; i < 5; i++) { System.gc(); long writeDurationMs = testCase.test(PerfTestCase.Type.WRITE, FILE_NAME); System.gc(); long readDurationMs = testCase.test(PerfTestCase.Type.READ, FILE_NAME); long bytesReadPerSec = (FILE_SIZE * 1000L) / readDurationMs; long bytesWrittenPerSec = (FILE_SIZE * 1000L) / writeDurationMs; out.format("%s\twrite=%,d\tread=%,d bytes/sec\n", testCase.getName(), bytesWrittenPerSec, bytesReadPerSec); } } deleteFile(FILE_NAME); } private static void preallocateTestFile(final String fileName) throws Exception { RandomAccessFile file = new RandomAccessFile(fileName, "rw"); for (long i = 0; i < FILE_SIZE; i += PAGE_SIZE) { file.write(BLANK_PAGE, 0, PAGE_SIZE); } file.close(); } private static void deleteFile(final String testFileName) throws Exception { File file = new File(testFileName); if (!file.delete()) { out.println("Failed to delete test file=" + testFileName); out.println("Windows does not allow mapped files to be deleted."); } } public abstract static class PerfTestCase { public enum Type { READ, WRITE } private final String name; private int checkSum; public PerfTestCase(final String name) { this.name = name; } public String getName() { return name; } public long test(final Type type, final String fileName) { long start = System.currentTimeMillis(); try { switch (type) { case WRITE: { checkSum = testWrite(fileName); break; } case READ: { final int checkSum = testRead(fileName); if (checkSum != this.checkSum) { final String msg = getName() + " expected=" + this.checkSum + " got=" + checkSum; throw new IllegalStateException(msg); } break; } } } catch (Exception ex) { ex.printStackTrace(); } return System.currentTimeMillis() - start; } public abstract int testWrite(final String fileName) throws Exception; public abstract int testRead(final String fileName) throws Exception; } private static PerfTestCase[] testCases = { new PerfTestCase("RandomAccessFile") { public int testWrite(final String fileName) throws Exception { RandomAccessFile file = new RandomAccessFile(fileName, "rw"); final byte[] buffer = new byte[PAGE_SIZE]; int pos = 0; int checkSum = 0; for (long i = 0; i < FILE_SIZE; i++) { byte b = (byte)i; checkSum += b; buffer[pos++] = b; if (PAGE_SIZE == pos) { file.write(buffer, 0, PAGE_SIZE); pos = 0; } } file.close(); return checkSum; } public int testRead(final String fileName) throws Exception { RandomAccessFile file = new RandomAccessFile(fileName, "r"); final byte[] buffer = new byte[PAGE_SIZE]; int checkSum = 0; int bytesRead; while (-1 != (bytesRead = file.read(buffer))) { for (int i = 0; i < bytesRead; i++) { checkSum += buffer[i]; } } file.close(); return checkSum; } }, new PerfTestCase("BufferedStreamFile") { public int testWrite(final String fileName) throws Exception { int checkSum = 0; OutputStream out = new 
BufferedOutputStream(new FileOutputStream(fileName)); for (long i = 0; i < FILE_SIZE; i++) { byte b = (byte)i; checkSum += b; out.write(b); } out.close(); return checkSum; } public int testRead(final String fileName) throws Exception { int checkSum = 0; InputStream in = new BufferedInputStream(new FileInputStream(fileName)); int b; while (-1 != (b = in.read())) { checkSum += (byte)b; } in.close(); return checkSum; } }, new PerfTestCase("BufferedChannelFile") { public int testWrite(final String fileName) throws Exception { FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel(); ByteBuffer buffer = ByteBuffer.allocate(PAGE_SIZE); int checkSum = 0; for (long i = 0; i < FILE_SIZE; i++) { byte b = (byte)i; checkSum += b; buffer.put(b); if (!buffer.hasRemaining()) { channel.write(buffer); buffer.clear(); } } channel.close(); return checkSum; } public int testRead(final String fileName) throws Exception { FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel(); ByteBuffer buffer = ByteBuffer.allocate(PAGE_SIZE); int checkSum = 0; while (-1 != (channel.read(buffer))) { buffer.flip(); while (buffer.hasRemaining()) { checkSum += buffer.get(); } buffer.clear(); } return checkSum; } }, new PerfTestCase("MemoryMappedFile") { public int testWrite(final String fileName) throws Exception { FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel(); MappedByteBuffer buffer = channel.map(READ_WRITE, 0, Math.min(channel.size(), MAX_VALUE)); int checkSum = 0; for (long i = 0; i < FILE_SIZE; i++) { if (!buffer.hasRemaining()) { buffer = channel.map(READ_WRITE, i, Math.min(channel.size() - i , MAX_VALUE)); } byte b = (byte)i; checkSum += b; buffer.put(b); } channel.close(); return checkSum; } public int testRead(final String fileName) throws Exception { FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel(); MappedByteBuffer buffer = channel.map(READ_ONLY, 0, Math.min(channel.size(), MAX_VALUE)); int checkSum = 0; for (long i = 0; i < FILE_SIZE; i++) { if (!buffer.hasRemaining()) { buffer = channel.map(READ_WRITE, i, Math.min(channel.size() - i , MAX_VALUE)); } checkSum += buffer.get(); } channel.close(); return checkSum; } }, }; } Results 400MB file =========== RandomAccessFile write=379,610,750 read=1,452,482,269 bytes/sec RandomAccessFile write=294,041,636 read=1,494,890,510 bytes/sec RandomAccessFile write=250,980,392 read=1,422,222,222 bytes/sec RandomAccessFile write=250,366,748 read=1,388,474,576 bytes/sec RandomAccessFile write=260,394,151 read=1,422,222,222 bytes/sec BufferedStreamFile write=98,178,331 read=286,433,566 bytes/sec BufferedStreamFile write=100,244,738 read=288,857,545 bytes/sec BufferedStreamFile write=82,948,562 read=154,100,827 bytes/sec BufferedStreamFile write=108,503,311 read=153,869,271 bytes/sec BufferedStreamFile write=113,055,478 read=152,608,047 bytes/sec BufferedChannelFile write=388,246,445 read=358,041,958 bytes/sec BufferedChannelFile write=390,467,111 read=375,091,575 bytes/sec BufferedChannelFile write=321,759,622 read=1,539,849,624 bytes/sec BufferedChannelFile write=318,259,518 read=1,539,849,624 bytes/sec BufferedChannelFile write=322,265,932 read=1,534,082,397 bytes/sec MemoryMappedFile write=300,955,180 read=305,899,925 bytes/sec MemoryMappedFile write=313,149,847 read=310,538,286 bytes/sec MemoryMappedFile write=326,374,501 read=303,857,566 bytes/sec MemoryMappedFile write=327,680,000 read=304,535,315 bytes/sec MemoryMappedFile write=326,895,450 read=303,632,320 bytes/sec 8GB File ============ 
RandomAccessFile write=167,402,321 read=251,922,012 bytes/sec RandomAccessFile write=193,934,802 read=257,052,307 bytes/sec RandomAccessFile write=192,948,159 read=248,460,768 bytes/sec RandomAccessFile write=191,814,180 read=245,225,408 bytes/sec RandomAccessFile write=190,635,762 read=275,315,073 bytes/sec BufferedStreamFile write=154,823,102 read=248,355,313 bytes/sec BufferedStreamFile write=152,083,913 read=253,418,301 bytes/sec BufferedStreamFile write=133,099,369 read=146,056,197 bytes/sec BufferedStreamFile write=131,065,708 read=146,217,827 bytes/sec BufferedStreamFile write=132,694,052 read=148,116,004 bytes/sec BufferedChannelFile write=406,147,744 read=304,693,892 bytes/sec BufferedChannelFile write=397,457,668 read=298,183,671 bytes/sec BufferedChannelFile write=364,672,364 read=414,281,379 bytes/sec BufferedChannelFile write=371,266,711 read=404,343,534 bytes/sec BufferedChannelFile write=373,705,579 read=406,934,578 bytes/sec MemoryMappedFile write=123,023,322 read=231,530,156 bytes/sec MemoryMappedFile write=121,961,023 read=230,403,600 bytes/sec MemoryMappedFile write=123,317,778 read=229,899,250 bytes/sec MemoryMappedFile write=121,472,738 read=231,739,745 bytes/sec MemoryMappedFile write=120,362,615 read=231,190,382 bytes/sec Analysis For years I was a big fan of using RandomAccessFile directly because of the control it gives and the predictable execution. I never found using buffered streams to be useful from a performance perspective and this still seems to be the case. In more recent testing I've found that using NIO FileChannel and ByteBuffer are the clear winners from a performance perspective. With Java 7 the flexibility of this programming approach has been improved for random access with SeekableByteChannel. I've seen these results vary greatly depending on platform. File system, OS, storage devices, and available memory all have a significant impact. In a few cases I've seen memory-mapped files perform significantly better than the others but this needs to be tested on your platform because your mileage may vary... A special note should be made for the use of memory-mapped large files when pushing for maximum throughput. I've often found the OS can become unresponsive due the the pressure put on the virtual memory sub-system. Conclusion There is a significant difference in performance for the different means of doing sequential file IO from Java. Not all methods are even remotely equal. For most IO I've found the use of ByteBuffers and Channels to be the best optimised parts of the IO libraries. If buffered streams are your IO libraries of choice, then it is worth branching out and and getting familiar with the sub-classes of Channel and Buffer. From http://mechanical-sympathy.blogspot.com/2011/12/java-sequential-io-performance.html
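As a rough illustration of the Java 7 SeekableByteChannel flexibility mentioned in the analysis above (this sketch is not part of the original benchmark; the file name, page size, and offset are assumptions), random access to a page of a pre-allocated file might look like this:

import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public final class SeekableReadExample {
    public static void main(final String[] args) throws Exception {
        final int pageSize = 4096;
        final ByteBuffer page = ByteBuffer.allocate(pageSize);

        try (SeekableByteChannel channel =
                 Files.newByteChannel(Paths.get("test.dat"), StandardOpenOption.READ)) {
            channel.position(42L * pageSize); // jump straight to an arbitrary page
            while (page.hasRemaining() && channel.read(page) != -1) {
                // keep reading until the page buffer is full or EOF is hit
            }
        }

        page.flip(); // page now holds the bytes read from that offset
    }
}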
January 9, 2012
by Martin Thompson
· 18,717 Views · 2 Likes
article thumbnail
Test Doubles With Mockito
Introduction

A common thing I come across is that teams using a mocking framework assume they are mocking. They are not aware that Mocks are just one of a number of 'Test Doubles' which Gerard Meszaros has categorised at xunitpatterns.com. It's important to realise that each type of test double has a different role to play in testing. In the same way that you need to learn different patterns or refactorings, you need to understand the primitive roles of each type of test double. These can then be combined to achieve your testing needs. I'll cover a very brief history of how this classification came about, and how each of the types differs. I'll do this using some short, simple examples in Mockito.

A Very Brief History

For years people have been writing lightweight versions of system components to help with testing. In general it was called stubbing. In 2000 the article 'Endo-Testing: Unit Testing with Mock Objects' introduced the concept of a Mock Object. Since then Stubs, Mocks and a number of other types of test objects have been classified by Meszaros as Test Doubles. This terminology has been referenced by Martin Fowler in "Mocks Aren't Stubs" and is being adopted within the Microsoft community as shown in "Exploring The Continuum of Test Doubles". A link to each of these important papers is shown in the reference section.

Categories of test doubles

Meszaros categorises the commonly used types of test double. The following URL gives a good cross reference to each of the patterns and their features as well as alternative terminology: http://xunitpatterns.com/Test%20Double.html

Mockito

Mockito is a test spy framework and it is very simple to learn. Notable with Mockito is that expectations of any mock objects are not defined before the test as they sometimes are in other mocking frameworks. This leads to a more natural style (IMHO) when beginning mocking. The following examples are here purely to give a simple demonstration of using Mockito to implement the different types of test doubles. There are a much larger number of specific examples of how to use Mockito on the website: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html

Test Doubles with Mockito

Below are some basic examples using Mockito to show the role of each test double as defined by Meszaros. I've included a link to the main definition for each so you can get more examples and a complete definition.

Dummy Object

http://xunitpatterns.com/Dummy%20Object.html

This is the simplest of all of the test doubles. This is an object that has no implementation which is used purely to populate arguments of method calls which are irrelevant to your test. For example, the code below uses a lot of code to create the customer which is not important to the test. The test couldn't care less which customer is added, as long as the customer count comes back as one.

public Customer createDummyCustomer() {
    County county = new County("Essex");
    City city = new City("Romford", county);
    Address address = new Address("1234 Bank Street", city);
    Customer customer = new Customer("john", "dobie", address);
    return customer;
}

@Test
public void addCustomerTest() {
    Customer dummy = createDummyCustomer();
    AddressBook addressBook = new AddressBook();
    addressBook.addCustomer(dummy);
    assertEquals(1, addressBook.getNumberOfCustomers());
}

We actually don't care about the contents of the customer object - but it is required. We can try a null value, but if the code is correct you would expect some kind of exception to be thrown.
@Test(expected=Exception.class)
public void addNullCustomerTest() {
    Customer dummy = null;
    AddressBook addressBook = new AddressBook();
    addressBook.addCustomer(dummy);
}

To avoid this we can use a simple Mockito dummy to get the desired behaviour.

@Test
public void addCustomerWithDummyTest() {
    Customer dummy = mock(Customer.class);
    AddressBook addressBook = new AddressBook();
    addressBook.addCustomer(dummy);
    Assert.assertEquals(1, addressBook.getNumberOfCustomers());
}

It is this simple code which creates a dummy object to be passed into the call.

Customer dummy = mock(Customer.class);

Don't be fooled by the mock syntax - the role being played here is that of a dummy, not a mock. It's the role of the test double that sets it apart, not the syntax used to create one. This class works as a simple substitute for the customer class and makes the test very easy to read.

Test stub

http://xunitpatterns.com/Test%20Stub.html

The role of the test stub is to return controlled values to the object being tested. These are described as indirect inputs to the test. Hopefully an example will clarify what this means. Take the following code:

public class SimplePricingService implements PricingService {

    PricingRepository repository;

    public SimplePricingService(PricingRepository pricingRepository) {
        this.repository = pricingRepository;
    }

    @Override
    public Price priceTrade(Trade trade) {
        return repository.getPriceForTrade(trade);
    }

    @Override
    public Price getTotalPriceForTrades(Collection<Trade> trades) {
        Price totalPrice = new Price();
        for (Trade trade : trades) {
            Price tradePrice = repository.getPriceForTrade(trade);
            totalPrice = totalPrice.add(tradePrice);
        }
        return totalPrice;
    }
}

The SimplePricingService has one collaborating object which is the trade repository. The trade repository provides trade prices to the pricing service through the getPriceForTrade method. For us to test the business logic in the SimplePricingService, we need to control these indirect inputs, i.e. inputs we never passed into the test. This is shown below. In the following example we stub the PricingRepository to return known values which can be used to test the business logic of the SimplePricingService.

@Test
public void testGetHighestPricedTrade() throws Exception {
    Price price1 = new Price(10);
    Price price2 = new Price(15);
    Price price3 = new Price(25);

    PricingRepository pricingRepository = mock(PricingRepository.class);
    when(pricingRepository.getPriceForTrade(any(Trade.class)))
        .thenReturn(price1, price2, price3);

    PricingService service = new SimplePricingService(pricingRepository);
    Price highestPrice = service.getHighestPricedTrade(getTrades());
    assertEquals(price3.getAmount(), highestPrice.getAmount());
}

Saboteur Example

There are 2 common variants of Test Stubs: Responders and Saboteurs. Responders are used to test the happy path as in the previous example. A saboteur is used to test exceptional behaviour as below.

@Test(expected=TradeNotFoundException.class)
public void testInvalidTrade() throws Exception {
    Trade trade = new FixtureHelper().getTrade();

    TradeRepository tradeRepository = mock(TradeRepository.class);
    when(tradeRepository.getTradeById(anyLong()))
        .thenThrow(new TradeNotFoundException());

    TradingService tradingService = new SimpleTradingService(tradeRepository);
    tradingService.getTradeById(trade.getId());
}

Mock Object

http://xunitpatterns.com/Mock%20Object.html

Mock objects are used to verify object behaviour during a test.
By object behaviour I mean we check that the correct methods and paths are exercised on the object when the test is run. This is very different to the supporting role of a stub which is used to provide results to whatever you are testing. In a stub we use the pattern of defining a return value for a method.

when(customer.getSurname()).thenReturn(surname);

In a mock we check the behaviour of the object using the following form.

verify(listMock).add(s);

Here is a simple example where we want to test that a new trade is audited correctly. Here is the main code.

public class SimpleTradingService implements TradingService {

    TradeRepository tradeRepository;
    AuditService auditService;

    public SimpleTradingService(TradeRepository tradeRepository, AuditService auditService) {
        this.tradeRepository = tradeRepository;
        this.auditService = auditService;
    }

    public Long createTrade(Trade trade) throws CreateTradeException {
        Long id = tradeRepository.createTrade(trade);
        auditService.logNewTrade(trade);
        return id;
    }
}

The test below creates a stub for the trade repository and a mock for the AuditService. We then call verify on the mocked AuditService to make sure that the TradeService calls its logNewTrade method correctly.

@Mock
TradeRepository tradeRepository;

@Mock
AuditService auditService;

@Test
public void testAuditLogEntryMadeForNewTrade() throws Exception {
    Trade trade = new Trade("Ref 1", "Description 1");
    when(tradeRepository.createTrade(trade)).thenReturn(1L);

    TradingService tradingService = new SimpleTradingService(tradeRepository, auditService);
    tradingService.createTrade(trade);

    verify(auditService).logNewTrade(trade);
}

The following line does the checking on the mocked AuditService.

verify(auditService).logNewTrade(trade);

This test allows us to show that the audit service behaves correctly when creating a trade.

Test Spy

http://xunitpatterns.com/Test%20Spy.html

It's worth having a look at the above link for the strict definition of a Test Spy. However, in Mockito I like to use it to allow you to wrap a real object and then verify or modify its behaviour to support your testing. Here is an example where we check the standard behaviour of a List. Note that we can both verify that the add method is called and also assert that the item was added to the list.

@Spy
List listSpy = new ArrayList();

@Test
public void testSpyReturnsRealValues() throws Exception {
    String s = "dobie";
    listSpy.add(new String(s));

    verify(listSpy).add(s);
    assertEquals(1, listSpy.size());
}

Compare this with using a mock object where only the method call can be validated. Because we only mock the behaviour of the list, it does not record that the item has been added and returns the default value of zero when we call the size() method.

@Mock
List listMock = new ArrayList();

@Test
public void testMockReturnsZero() throws Exception {
    String s = "dobie";
    listMock.add(new String(s));

    verify(listMock).add(s);
    assertEquals(0, listMock.size());
}

Another useful feature of the test spy is the ability to stub return calls. When this is done the object will behave as normal until the stubbed method is called. In this example we stub the get method to always throw a RuntimeException. The rest of the behaviour remains the same.
@Test(expected=RuntimeException.class)
public void testSpyReturnsStubbedValues() throws Exception {
    listSpy.add(new String("dobie"));
    assertEquals(1, listSpy.size());

    when(listSpy.get(anyInt())).thenThrow(new RuntimeException());
    listSpy.get(0);
}

In this example we again keep the core behaviour but change the size() method to return 1 initially and 5 for all subsequent calls.

@Test
public void testSpyReturnsStubbedValues2() throws Exception {
    int size = 5;
    when(listSpy.size()).thenReturn(1, size);

    int mockedListSize = listSpy.size();
    assertEquals(1, mockedListSize);

    mockedListSize = listSpy.size();
    assertEquals(5, mockedListSize);

    mockedListSize = listSpy.size();
    assertEquals(5, mockedListSize);
}

This is pretty magic!

Fake Object

http://xunitpatterns.com/Fake%20Object.html

Fake objects are usually hand-crafted or lightweight objects only used for testing and not suitable for production. A good example would be an in-memory database or a fake service layer. They tend to provide much more functionality than standard test doubles and as such are probably not usually candidates for implementation using Mockito. That's not to say that they couldn't be constructed as such, just that it's probably not worth implementing this way.

References

- Test Double Patterns
- Endo-Testing: Unit Testing with Mock Objects
- Mock Roles, Not Objects
- Mocks Aren't Stubs
- http://msdn.microsoft.com/en-us/magazine/cc163358.aspx

Original Article. The original article and others can be found here: http://johndobie.blogspot.com/2011/11/test-doubles-with-mockito.html
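The Fake Object category is the one type not shown in code above, so here is a minimal, hand-rolled sketch of an in-memory fake for the PricingRepository used earlier. It is not from the original article and assumes PricingRepository declares only getPriceForTrade, and that Trade exposes getId():

import java.util.HashMap;
import java.util.Map;

public class InMemoryPricingRepository implements PricingRepository {

    private final Map<Long, Price> prices = new HashMap<Long, Price>();

    // test set-up helper: seed the fake with known prices
    public void addPrice(Trade trade, Price price) {
        prices.put(trade.getId(), price);
    }

    @Override
    public Price getPriceForTrade(Trade trade) {
        return prices.get(trade.getId());
    }
}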
January 7, 2012
by John Dobie
· 29,290 Views
article thumbnail
Simplifying the Data Access Layer with Spring and Java Generics
1. Overview

This is the second of a series of articles about Persistence with Spring. The previous article discussed setting up the persistence layer with Spring 3.1 and Hibernate, without using templates. This article will focus on simplifying the Data Access Layer by using a single, generified DAO, which will result in elegant data access, with no unnecessary clutter. Yes, in Java.

The Persistence with Spring series:

- Part 1 – The Persistence Layer with Spring 3.1 and Hibernate
- Part 3 – The Persistence Layer with Spring 3.1 and JPA
- Part 4 – The Persistence Layer with Spring Data JPA
- Part 5 – Transaction configuration with JPA and Spring 3.1

2. The DAO mess

Most production codebases have some kind of DAO layer. Usually the implementation ranges from a raw class with no inheritance to some kind of generified class, but one thing is consistent – there is always more than one. Most likely, there are as many DAOs as there are entities in the system. Also, depending on the level of generics involved, the actual implementations can vary from heavily duplicated code to almost empty, with the bulk of the logic grouped in an abstract class.

2.1. A Generic DAO

Instead of having multiple implementations – one for each entity in the system – a single parametrized DAO can be used in such a way that it still takes full advantage of the type safety provided by generics. Two implementations of this concept are presented next, one for a Hibernate-centric persistence layer and the other focusing on JPA. These implementations are by no means complete – only some data access methods are included, but they can easily be made more thorough.

2.2. The Abstract Hibernate DAO

public abstract class AbstractHibernateDAO< T extends Serializable > {

    private Class< T > clazz;

    @Autowired
    SessionFactory sessionFactory;

    public void setClazz( Class< T > clazzToSet ){
        this.clazz = clazzToSet;
    }

    public T findOne( Long id ){
        return (T) this.getCurrentSession().get( this.clazz, id );
    }

    public List< T > findAll(){
        return this.getCurrentSession()
            .createQuery( "from " + this.clazz.getName() ).list();
    }

    public void save( T entity ){
        this.getCurrentSession().persist( entity );
    }

    public void update( T entity ){
        this.getCurrentSession().merge( entity );
    }

    public void delete( T entity ){
        this.getCurrentSession().delete( entity );
    }

    public void deleteById( Long entityId ){
        T entity = this.findOne( entityId );
        this.delete( entity );
    }

    protected Session getCurrentSession(){
        return this.sessionFactory.getCurrentSession();
    }
}

The DAO uses the Hibernate API directly, without relying on any Spring templates (such as HibernateTemplate). The use of templates, as well as management of the SessionFactory which is autowired in the DAO, were covered in the previous post of the series.

2.3.
The Abstract JPA DAO

public abstract class AbstractJpaDAO< T extends Serializable > {

    private Class< T > clazz;

    @PersistenceContext
    EntityManager entityManager;

    public void setClazz( Class< T > clazzToSet ){
        this.clazz = clazzToSet;
    }

    public T findOne( Long id ){
        return this.entityManager.find( this.clazz, id );
    }

    public List< T > findAll(){
        return this.entityManager.createQuery( "from " + this.clazz.getName() )
            .getResultList();
    }

    public void save( T entity ){
        this.entityManager.persist( entity );
    }

    public void update( T entity ){
        this.entityManager.merge( entity );
    }

    public void delete( T entity ){
        this.entityManager.remove( entity );
    }

    public void deleteById( Long entityId ){
        T entity = this.findOne( entityId );
        this.delete( entity );
    }
}

Similar to the Hibernate DAO implementation, the Java Persistence API is used here directly, again not relying on the now deprecated Spring JpaTemplate.

2.4. The Generic DAO

Now, the actual implementation of the generic DAO is as simple as it can be – it contains no logic. Its only purpose is to be injected by the Spring container in a service layer (or in whatever other type of client of the Data Access Layer):

@Repository
@Scope( BeanDefinition.SCOPE_PROTOTYPE )
public class GenericJpaDAO< T extends Serializable >
    extends AbstractJpaDAO< T > implements IGenericDAO< T >{
    //
}

@Repository
@Scope( BeanDefinition.SCOPE_PROTOTYPE )
public class GenericHibernateDAO< T extends Serializable >
    extends AbstractHibernateDAO< T > implements IGenericDAO< T >{
    //
}

First, note that the generic implementation is itself parametrized – allowing the client to choose the correct parameter on a case-by-case basis. This will mean that the client gets all the benefits of type safety without needing to create multiple artifacts for each entity.

Second, notice the prototype scope of these generic DAO implementations. Using this scope means that the Spring container will create a new instance of the DAO each time it is requested (including on autowiring). That will allow a service to use multiple DAOs with different parameters for different entities, as needed. The reason this scope is so important is due to the way Spring initializes beans in the container. Leaving the generic DAO without a scope would mean using the default singleton scope, which would lead to a single instance of the DAO living in the container. That would obviously be majorly restrictive for any kind of more complex scenario.

3. The Service

There is now a single DAO to be injected by Spring; also, the Class needs to be specified:

@Service
class FooService implements IFooService{

    IGenericDAO< Foo > dao;

    @Autowired
    public void setDao( IGenericDAO< Foo > daoToSet ){
        this.dao = daoToSet;
        this.dao.setClazz( Foo.class );
    }

    // ...
}

Spring autowires the new DAO instance using setter injection so that the implementation can be customized with the Class object. After this point, the DAO is fully parametrized and ready to be used by the service; a short usage sketch follows at the end of this article.

4. Conclusion

This article discussed the simplification of the Data Access Layer by providing a single, reusable implementation of a generic DAO. This implementation was presented in both a Hibernate and a JPA based environment. The result is a streamlined persistence layer, with no unnecessary clutter. For a step by step introduction about setting up the Spring context using Java based configuration and the basic Maven pom for the project, see this article.
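As the promised usage sketch, here is what a client of the fully parametrized DAO might end up looking like. The business method names are made up for illustration; findOne, findAll and save come from the abstract DAO above:

// hypothetical business methods inside FooService, once the DAO
// has been parametrized with Foo.class in the setter above
public Foo findById(Long id) {
    return dao.findOne(id);
}

public List<Foo> findAllFoos() {
    return dao.findAll();
}

public void create(Foo foo) {
    dao.save(foo);
}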
The next article of the Persistence with Spring series will focus on setting up the DAL layer with Spring 3.1 and JPA. In the meantime, you can check out the full implementation in the github project. If you read this far, you should follow me on twitter here.
January 5, 2012
by Eugen Paraschiv
· 24,634 Views · 1 Like
article thumbnail
Different SOAP encoding styles – RPC, RPC-literal, and document-literal
SOAP uses XML to marshal data that is transported to a software application. Since SOAP's introduction, three SOAP encoding styles have become popular and are reliably implemented across software vendors and technology providers:

- SOAP Remote Procedure Call (RPC) encoding, also known as Section 5 encoding, which is defined by the SOAP 1.1 specification
- SOAP Remote Procedure Call Literal encoding (SOAP RPC-literal), which uses RPC methods to make calls but uses an XML do-it-yourself method for marshalling the data
- SOAP document-style encoding, which is also known as message-style or document-literal encoding.

There are other encoding styles, but software developers have not widely adopted them, mostly because their promoters disagree on a standard. For example, Microsoft is promoting Direct Internet Message Exchange (DIME) to encode binary file data, while the rest of the world is promoting SOAP with Attachments. SOAP RPC encoding, RPC-literal, and document-style SOAP encoding have emerged as the encoding styles that a software developer can count on.

SOAP RPC is the encoding style that offers you the most simplicity. You make a call to a remote object, passing along any necessary parameters. The SOAP stack serializes the parameters into XML, moves the data to the destination using transports such as HTTP and SMTP, receives the response, deserializes the response back into objects, and returns the results to the calling method. Whew! SOAP RPC handles all the encoding and decoding, even for very complex data types, and binds to the remote object automatically.

Now, imagine that you have some data already in XML format. SOAP RPC also allows literal encoding of the XML data as a single field that is serialized and sent to the Web service host. This is what's referred to as RPC-literal encoding. Since there is only one parameter — the XML tree — the SOAP stack only needs to serialize one value. The SOAP stack still deals with the transport issues to get the request to the remote object. The stack binds the request to the remote object and handles the response.

Lastly, in a SOAP document-style call, the SOAP stack sends an entire XML document to a server without even requiring a return value. The message can contain any sort of XML data that is appropriate to the remote service. In SOAP document-style encoding, the developer handles everything, including determining the transport (e.g., HTTP, MQ, SMTP), marshaling and unmarshaling the body of the SOAP envelope, and parsing the XML in the request and response to find the needed data.

The three encoding systems are compared here:

- SOAP RPC encoding is easiest for the software developer; however, all that ease comes with a scalability and performance penalty.
- In SOAP RPC-literal encoding, you are more involved with handling XML parsing, but there is less overhead for the SOAP stack to deal with.
- SOAP document-literal encoding is most difficult for the software developer, but consequently requires little SOAP overhead.

Why is SOAP RPC easier? With this encoding style, you only need to define the public object method in your code once; the SOAP stack unmarshals the request parameters into objects and passes them directly into the method call of your object. Otherwise, you are stuck with the task of parsing through the XML tree to find the data elements you need before you get to make the call to the public method.
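On the Java side, JAX-WS exposes this choice directly on the endpoint through the @SOAPBinding annotation. The following sketch is not from the article; the service class and method are made up, and document/literal is the JAX-WS default if the annotation is omitted:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

// Switching the style between RPC and DOCUMENT changes how the body of the
// SOAP envelope is laid out on the wire; LITERAL means no Section 5 encoding
// rules are applied to the payload.
@WebService
@SOAPBinding(style = SOAPBinding.Style.RPC, use = SOAPBinding.Use.LITERAL)
public class PriceQuoteService {

    @WebMethod
    public double quote(final String symbol) {
        return 42.0d; // placeholder implementation
    }
}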
There is an argument for parsing the XML data yourself: since you know the data in the XML tree best, your code will parse that data more efficiently than generalized SOAP stack code. You will find this when measuring scalability and performance in SOAP encoding styles. References: 1. Discover SOAP encoding’s impact on Web service performance (http://www.ibm.com/developerworks/webservices/library/ws-soapenc/) From http://singztechmusings.wordpress.com/2011/12/20/different-soap-encoding-styles-rpc-rpc-literal-and-document-literal/
January 4, 2012
by Singaram Subramanian
· 49,146 Views
article thumbnail
Java: How to Save / Download a File Available at a Particular URL Location on the Internet?
package singz.test; import java.io.BufferedInputStream; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.net.MalformedURLException; import java.net.URL; import org.apache.commons.io.FileUtils; /* * @author Singaram Subramanian */ public class FileDownloadTest { public static void main(String[] args) { // Make sure that this directory exists String dirName = "C:\\FileDownload"; try { System.out.println("Downloading \'Maven, Eclipse and OSGi working together\' PDF document..."); saveFileFromUrlWithJavaIO( dirName + "\\maven_eclipse_and_osgi_working_together.pdf", "http://singztechmusings.files.wordpress.com/2011/09/maven_eclipse_and_osgi_working_together.pdf"); System.out.println("Downloaded \'Maven, Eclipse and OSGi working together\' PDF document."); System.out.println("Downloading \'InnoQ Web Services Standards Poster\' PDF document..."); saveFileFromUrlWithCommonsIO( dirName + "\\innoq_ws-standards_poster_2007-02.pdf", "http://singztechmusings.files.wordpress.com/2011/08/innoq_ws-standards_poster_2007-02.pdf"); System.out.println("Downloaded \'InnoQ Web Services Standards Poster\' PDF document."); } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } // Using Java IO public static void saveFileFromUrlWithJavaIO(String fileName, String fileUrl) throws MalformedURLException, IOException { BufferedInputStream in = null; FileOutputStream fout = null; try { in = new BufferedInputStream(new URL(fileUrl).openStream()); fout = new FileOutputStream(fileName); byte data[] = new byte[1024]; int count; while ((count = in.read(data, 0, 1024)) != -1) { fout.write(data, 0, count); } } finally { if (in != null) in.close(); if (fout != null) fout.close(); } } // Using Commons IO library // Available at http://commons.apache.org/io/download_io.cgi public static void saveFileFromUrlWithCommonsIO(String fileName, String fileUrl) throws MalformedURLException, IOException { FileUtils.copyURLToFile(new URL(fileUrl), new File(fileName)); } } From http://singztechmusings.wordpress.com/2011/12/20/java-how-to-save-download-a-file-available-at-a-particular-url-location-in-internet/
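On Java 7 and later, the same download can be done with java.nio.file alone, without Commons IO. This is a sketch, not part of the original snippet; it reuses the same fileName/fileUrl parameter convention:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileDownloadNio {

    // Streams the resource at fileUrl into fileName, replacing any existing file
    public static void saveFileFromUrlWithNio(String fileName, String fileUrl) throws IOException {
        try (InputStream in = new URL(fileUrl).openStream()) {
            Files.copy(in, Paths.get(fileName), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}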
January 3, 2012
by Singaram Subramanian
· 148,152 Views
article thumbnail
Styling a JavaFX Control with CSS
Change the look and feel of any JavaFX control using CSS.
January 2, 2012
by Toni Epple
· 96,425 Views · 1 Like
article thumbnail
JAXB, SAX, DOM Performance
This post investigates the performance of unmarshalling an XML document to Java objects using a number of different approaches. The XML document is very simple. It contains a collection of Person entities. person0 name0 person1 name1 ... There is a corresponding Person Java object for the Person entity in the XML ... @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "id", "name" }) public class Person { private String id; private String name; public String getId() { return id; } public void setId(String id) { this.id = id; } public String getName() { return name; } public void setName(String value) { this.name = value; } } and a PersonList object to represent a collection of Persons. @XmlAccessorType(XmlAccessType.FIELD) @XmlRootElement(name = "persons") public class PersonList { @XmlElement(name="person") private List personList = new ArrayList(); public List getPersons() { return personList; } public void setPersons(List persons) { this.personList = persons; } } The approaches investigated were: Various flavours of JAXB SAX DOM In all cases, the objective was to get the entities in the XML document to the corresponding Java objects. The JAXB annotations on the Person and PersonList POJOS are used in the JAXB tests. The same classes can be used in SAX and DOM tests (the annotations will just be ignored). Initially the reference implementations for JAXB, SAX and DOM were used. The Woodstox STAX parsing was then used. This would have been called in some of the JAXB unmarshalling tests. The tests were carried out on my Dell Laptop, a Pentium Dual-Core CPU, 2.1 GHz running Windows 7. Test 1 - Using JAXB to unmarshall a Java File. @Test public void testUnMarshallUsingJAXB() throws Exception { JAXBContext jc = JAXBContext.newInstance(PersonList.class); Unmarshaller unmarshaller = jc.createUnmarshaller(); PersonList obj = (PersonList)unmarshaller.unmarshal(new File(filename)); } Test 1 illustrates how simple the progamming model for JAXB is. It is very easy to go from an XML file to Java objects. There is no need to get involved with the nitty gritty details of marshalling and parsing. Test 2 - Using JAXB to unmarshall a Streamsource Test 2 is similar Test 1, except this time a Streamsource object wraps around a File object. The Streamsource object gives a hint to the JAXB implementation to stream the file. @Test public void testUnMarshallUsingJAXBStreamSource() throws Exception { JAXBContext jc = JAXBContext.newInstance(PersonList.class); Unmarshaller unmarshaller = jc.createUnmarshaller(); StreamSource source = new StreamSource(new File(filename)); PersonList obj = (PersonList)unmarshaller.unmarshal(source); } Test 3 - Using JAXB to unmarshall a StAX XMLStreamReader Again similar to Test 1, except this time an XMLStreamReader instance wraps a FileReader instance which is unmarshalled by JAXB. @Test public void testUnMarshallingWithStAX() throws Exception { FileReader fr = new FileReader(filename); JAXBContext jc = JAXBContext.newInstance(PersonList.class); Unmarshaller unmarshaller = jc.createUnmarshaller(); XMLInputFactory xmlif = XMLInputFactory.newInstance(); XMLStreamReader xmler = xmlif.createXMLStreamReader(fr); PersonList obj = (PersonList)unmarshaller.unmarshal(xmler); } Test 4 - Just use DOM This test uses no JAXB and instead just uses the JAXP DOM approach. This means straight away more code is required than any JAXB approach. 
@Test public void testParsingWithDom() throws Exception { DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = domFactory.newDocumentBuilder(); Document doc = builder.parse(filename); List personsAsList = new ArrayList(); NodeList persons = doc.getElementsByTagName("person"); for (int i = 0; i persons = new ArrayList(); DefaultHandler handler = new DefaultHandler() { boolean bpersonId = false; boolean bpersonName = false; public void startElement(String uri, String localName,String qName, Attributes attributes) throws SAXException { if (qName.equalsIgnoreCase("id")) { bpersonId = true; Person person = new Person(); persons.add(person); } else if (qName.equalsIgnoreCase("name")) { bpersonName = true; } } public void endElement(String uri, String localName, String qName) throws SAXException { } public void characters(char ch[], int start, int length) throws SAXException { if (bpersonId) { String personID = new String(ch, start, length); bpersonId = false; Person person = persons.get(persons.size() - 1); person.setId(personID); } else if (bpersonName) { String name = new String(ch, start, length); bpersonName = false; Person person = persons.get(persons.size() - 1); person.setName(name); } } }; saxParser.parse(filename, handler); } The tests were run 5 times for 3 files which contain a collection of Person entities. The first first file contained 100 Person entities and was 5K in size. The second contained 10,000 entities and was 500K in size and the third contained 250,000 Person entities and was 15 Meg in size. In no cases was any XSD used, or any validations performed. The results are given in result tables where the times for the different runs are comma separated. TEST RESULTS The tests were first run using JDK 1.6.26, 32 bit and the reference implementation for SAX, DOM and JAXB shipped with JDK was used. Unmarshall Type 100 Persons time (ms) 10K Persons time (ms) 250K Persons time (ms) JAXB (Default) 48,13, 5,4,4 78, 52, 47,50,50 1522, 1457, 1353, 1308,1317 JAXB(Streamsource) 11, 6, 3,3,2 44, 44, 48,45,43 1191, 1364, 1144, 1142, 1136 JAXB (StAX) 18, 2,1,1,1 111, 136, 89,91,92 2693, 3058, 2495, 2472, 2481 DOM 16, 2, 2,2,2 89,50, 55,53,50 1992, 2198, 1845, 1776, 1773 SAX 4, 2, 1,1,1 29, 34, 23,26,26 704, 669, 605, 589,591 JDK 1.6.26 Test comments The first time unmarshalling happens is usually the longest. The memory usage for the JAXB and SAX is similar. It is about 2 Meg for the file with 10,000 persons and 36 - 38 Meg file with 250,000. DOM Memory usage is far higher. For the 10,000 persons file it is 6 Meg, for the 250,000 person file it is greater than 130 Meg. The performance times for pure SAX are better. Particularly, for very large files. The exact same tests were run again, using the same JDK (1.6.26) but this time the Woodstox implementation of StAX parsing was used. Unmarshall Type 100 Persons time (ms) 10K Persons time (ms) 250K Persons time (ms) JAXB (Default) 48,13, 5,4,4 78, 52, 47,50,50 1522, 1457, 1353, 1308,1317 JAXB(Streamsource) 11, 6, 3,3,2 44, 44, 48,45,43 1191, 1364, 1144, 1142, 1136 JAXB (StAX) 18, 2,1,1,1 111, 136, 89,91,92 2693, 3058, 2495, 2472, 2481 DOM 16, 2, 2,2,2 89,50, 55,53,50 1992, 2198, 1845, 1776, 1773 SAX 4, 2, 1,1,1 29, 34, 23,26,26 704, 669, 605, 589,591 JDK 1.6.26 + Woodstox test comments Again, the first time unmarshalling happens is usually proportionally longer. Again, memory usage for SAX and JAXB is very similar. Both are far better than DOM. The results are very similar to Test 1. 
The JAXB (StAX) approach time has improved considerably. This is due to the Woodstox implementation of StAX parsing being used. The performance times for pure SAX are still the best, particularly for large files.

The exact same tests were run again, but this time I used JDK 1.7.02 and the Woodstox implementation of StAX parsing.

Unmarshall Type | 100 Persons time (ms) | 10,000 Persons time (ms) | 250,000 Persons time (ms)
JAXB (Default) | 165, 5, 3, 3, 5 | 611, 23, 24, 46, 28 | 578, 539, 511, 511, 519
JAXB (StreamSource) | 13, 4, 3, 4, 3 | 43, 24, 21, 26, 22 | 678, 520, 509, 504, 627
JAXB (StAX) | 21, 1, 0, 0, 0 | 300, 69, 20, 16, 16 | 637, 487, 422, 435, 458
DOM | 22, 2, 2, 2, 2 | 420, 25, 24, 23, 24 | 1304, 807, 867, 747, 1189
SAX | 7, 2, 2, 1, 1 | 169, 15, 15, 19, 14 | 366, 364, 363, 360, 358

JDK 7 + Woodstox test comments: The performance times for JDK 7 overall are much better. There are some anomalies - the first time the 100 persons and the 10,000 person file is parsed. The memory usage is slightly higher. For SAX and JAXB it is 2 - 4 Meg for the 10,000 persons file and 45 - 49 Meg for the 250,000 persons file. For DOM it is higher again: 5 - 7.5 Meg for the 10,000 person file and 136 - 143 Meg for the 250,000 persons file.

Note: W.R.T. all tests. No memory analysis was done for the 100 persons file. The memory usage was just too small and so it would have given pointless information. The first time to initialise a JAXB context can take up to 0.5 seconds. This was not included in the test results as it only took this time the very first time. After that the JVM initialises the context very quickly (consistently < 5ms). If you notice this behaviour with whatever JAXB implementation you are using, consider initialising at start up. These tests use a very simple XML file. In reality there would be more object types and more complex XML. However, these tests should still provide guidance.

Conclusions: The performance times for pure SAX are slightly better than JAXB but only for very large files. Unless you are using very large files the performance differences are not worth worrying about. The programming model advantages of JAXB win out over the complexity of the SAX programming model. Don't forget JAXB also provides random access like DOM does; SAX does not provide this. Performance times look a lot better with Woodstox, if JAXB / StAX is being used. Performance times with 64 bit JDK 7 look a lot better. Memory usage looks slightly higher. From http://dublintech.blogspot.com/2011/12/jaxb-sax-dom-performance.html
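One way to act on the advice above about initialising the JAXB context at start-up is to hold it in a static field, since JAXBContext instances are thread-safe (Unmarshaller instances are not). A minimal sketch, with a made-up holder class name:

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;

public final class JaxbContexts {

    // created once at class-load time and shared by all unmarshalling threads
    public static final JAXBContext PERSON_LIST_CONTEXT;

    static {
        try {
            PERSON_LIST_CONTEXT = JAXBContext.newInstance(PersonList.class);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private JaxbContexts() {
    }
}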
December 31, 2011
by Alex Staveley
· 46,547 Views · 4 Likes
article thumbnail
What are the differences between JAXB 1.0 and JAXB 2.0
What are the differences between JAXB 1.0 and JAXB 2.0?

- JAXB 1.0 only requires JDK 1.3 or later; JAXB 2.0 requires JDK 1.5 or later.
- JAXB 2.0 makes use of generics and thus provides compile-time type safety checking, reducing runtime errors.
- Validation is only available during marshalling in JAXB 1.0. Validation is also available during unmarshalling in JAXB 2.0.
- Termination occurs in JAXB 1.0 when a validation error occurs. In JAXB 2.0 custom ValidationEventHandlers can be used to deal with validation errors.
- JAXB 2.0 uses annotations and supports bi-directional mapping.
- JAXB 2.0 generates less code.
- JAXB 1.0 does not support key XML Schema components like anyAttribute, key, keyref, and unique. It also does not support attributes like complexType.abstract, element.abstract, element.substitutionGroup, xsi:type, complexType.block, complexType.final, element.block, element.final, schema.blockDefault, and schema.finalDefault. In version 2.0, support has been added for all of these schema constructs.

References: http://javaboutique.internet.com/tutorials/jaxb/index3.html From http://dublintech.blogspot.com/2011/04/what-are-differences-between-jaxb-10.html
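To illustrate the validation-related differences above, here is a small sketch of JAXB 2.0 validation during unmarshalling with a custom ValidationEventHandler. It is only an illustration; the Foo class, foo.xsd and foo.xml are placeholders:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.ValidationEvent;
import javax.xml.bind.ValidationEventHandler;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class ValidationDemo {

    public static void main(String[] args) throws Exception {
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
            .newSchema(new File("foo.xsd"));

        Unmarshaller unmarshaller = JAXBContext.newInstance(Foo.class).createUnmarshaller();
        unmarshaller.setSchema(schema); // enables validation while unmarshalling
        unmarshaller.setEventHandler(new ValidationEventHandler() {
            public boolean handleEvent(ValidationEvent event) {
                System.err.println("Validation problem: " + event.getMessage());
                return true; // true = keep going; false would terminate, as JAXB 1.0 did
            }
        });

        Foo foo = (Foo) unmarshaller.unmarshal(new File("foo.xml"));
    }
}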
December 30, 2011
by Alex Staveley
· 15,025 Views
article thumbnail
XML Schema to Java - Generating XmlAdapters
In previous posts I have demonstrated how powerful JAXB's XmlAdapter can be when starting from domain objects. In this example I will demonstrate how to leverage an XmlAdapter when generating an object model from an XML schema. This post was inspired by an answer I gave to a question on Stack Overflow (feel free to up vote). XMLSchema (format.xsd) The following is the XML schema that will be used for this example. The interesting portion is a type called NumberCodeValueType. This type has a specified pattern requiring it be a seven digit number. This number can have leading zeros which would not be marshalled by JAXB's default conversion of numbers. NumberFormatter Since JAXB's default number to String algorithm will not match our schema requirements, we will need to write our own formatter. We are required to provide two static methods one that coverts our type to the desired XML format, and another that converts from the XML format. package blog.xmladapter.bindings; public class NumberFormatter { public static String printInt(Integer value) { String result = String.valueOf(value); for(int x=0, length = 7 - result.length(); x XJC Call The bindings file is referenced in the XJC call as: xjc -d out -p blog.xmladapter.bindings -b bindings.xml format.xsd Adapter1 This will cause an XmlAdapter to be created that leverages the formatter: package blog.xmladapter.bindings; import javax.xml.bind.annotation.adapters.XmlAdapter; public class Adapter1 extends XmlAdapter { public Integer unmarshal(String value) { return (blog.xmladapter.bindings.NumberFormatter.parseInt(value)); } public String marshal(Integer value) { return (blog.xmladapter.bindings.NumberFormatter.printInt(value)); } } Root The XmlAdapter will be referenced from the domain object using the @XmlJavaTypeAdapter annotation: package blog.xmladapter.bindings; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlRootElement; import javax.xml.bind.annotation.XmlType; import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter; @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "number" }) @XmlRootElement(name = "root") public class Root { @XmlElement(required = true, type = String.class) @XmlJavaTypeAdapter(Adapter1 .class) protected Integer number; public Integer getNumber() { return number; } public void setNumber(Integer value) { this.number = value; } } Demo Now if we run the following demo code: package blog.xmladapter.bindings; import javax.xml.bind.JAXBContext; import javax.xml.bind.Marshaller; public class Demo { public static void main(String[] args) throws Exception { JAXBContext jc = JAXBContext.newInstance(Root.class); Root root = new Root(); root.setNumber(4); Marshaller marshaller = jc.createMarshaller(); marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true); marshaller.marshal(root, System.out); } } Output We will get the desired output: 0000004 From http://blog.bdoughan.com/2011/08/xml-schema-to-java-generating.html
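As a small follow-on sketch (not in the original post), the generated Adapter1 also works in the reverse direction, so unmarshalling the zero-padded value restores the Integer:

import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class UnmarshalDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Root.class);
        Unmarshaller unmarshaller = jc.createUnmarshaller();

        // Adapter1.unmarshal delegates to NumberFormatter.parseInt,
        // turning the padded string back into the Integer 4
        Root root = (Root) unmarshaller.unmarshal(
            new StringReader("<root><number>0000004</number></root>"));
        System.out.println(root.getNumber());
    }
}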
December 30, 2011
by Blaise Doughan
· 13,778 Views
article thumbnail
JAXB and Joda-Time: Dates and Times
Joda-Time provides an alternative to the Date and Calendar classes currently provided in Java SE. Since they are provided in a separate library JAXB does not provide a default mapping for these classes. We can supply the necessary mapping via XmlAdapters. In this post we will cover the following Joda-Time types: DateTime, DateMidnight, LocalDate, LocalTime, LocalDateTime. Java Model The following domain model will be used for this example: package blog.jodatime; import javax.xml.bind.annotation.XmlRootElement; import javax.xml.bind.annotation.XmlType; import org.joda.time.DateMidnight; import org.joda.time.DateTime; import org.joda.time.LocalDate; import org.joda.time.LocalDateTime; import org.joda.time.LocalTime; @XmlRootElement @XmlType(propOrder={ "dateTime", "dateMidnight", "localDate", "localTime", "localDateTime"}) public class Root { private DateTime dateTime; private DateMidnight dateMidnight; private LocalDate localDate; private LocalTime localTime; private LocalDateTime localDateTime; public DateTime getDateTime() { return dateTime; } public void setDateTime(DateTime dateTime) { this.dateTime = dateTime; } public DateMidnight getDateMidnight() { return dateMidnight; } public void setDateMidnight(DateMidnight dateMidnight) { this.dateMidnight = dateMidnight; } public LocalDate getLocalDate() { return localDate; } public void setLocalDate(LocalDate localDate) { this.localDate = localDate; } public LocalTime getLocalTime() { return localTime; } public void setLocalTime(LocalTime localTime) { this.localTime = localTime; } public LocalDateTime getLocalDateTime() { return localDateTime; } public void setLocalDateTime(LocalDateTime localDateTime) { this.localDateTime = localDateTime; } } XmlAdapters Since Joda-Time and XML Schema both represent data and time information according to ISO 8601 the implementation of the XmlAdapters is quite trivial. 
DateTimeAdapter package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlAdapter; import org.joda.time.DateTime; public class DateTimeAdapter extends XmlAdapter{ public DateTime unmarshal(String v) throws Exception { return new DateTime(v); } public String marshal(DateTime v) throws Exception { return v.toString(); } } DateMidnightAdapter package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlAdapter; import org.joda.time.DateMidnight; public class DateMidnightAdapter extends XmlAdapter { public DateMidnight unmarshal(String v) throws Exception { return new DateMidnight(v); } public String marshal(DateMidnight v) throws Exception { return v.toString(); } } LocalDateAdapter package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlAdapter; import org.joda.time.LocalDate; public class LocalDateAdapter extends XmlAdapter{ public LocalDate unmarshal(String v) throws Exception { return new LocalDate(v); } public String marshal(LocalDate v) throws Exception { return v.toString(); } } LocalTimeAdapter package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlAdapter; import org.joda.time.LocalTime; public class LocalTimeAdapter extends XmlAdapter { public LocalTime unmarshal(String v) throws Exception { return new LocalTime(v); } public String marshal(LocalTime v) throws Exception { return v.toString(); } } LocalDateTimeAdapter package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlAdapter; import org.joda.time.LocalDateTime; public class LocalDateTimeAdapter extends XmlAdapter{ public LocalDateTime unmarshal(String v) throws Exception { return new LocalDateTime(v); } public String marshal(LocalDateTime v) throws Exception { return v.toString(); } } Registering the XmlAdapters We will use the @XmlJavaTypeAdapters annotation to register the Joda-Time types at the package level. This means that whenever these types are found on a field/property on a class within this package the XmlAdapter will automatically be applied. @XmlJavaTypeAdapters({ @XmlJavaTypeAdapter(type=DateTime.class, value=DateTimeAdapter.class), @XmlJavaTypeAdapter(type=DateMidnight.class, value=DateMidnightAdapter.class), @XmlJavaTypeAdapter(type=LocalDate.class, value=LocalDateAdapter.class), @XmlJavaTypeAdapter(type=LocalTime.class, value=LocalTimeAdapter.class), @XmlJavaTypeAdapter(type=LocalDateTime.class, value=LocalDateTimeAdapter.class) }) package blog.jodatime; import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter; import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapters; import org.joda.time.DateMidnight; import org.joda.time.DateTime; import org.joda.time.LocalDate; import org.joda.time.LocalDateTime; import org.joda.time.LocalTime; Demo To run the following demo you will need the Joda-Time jar on your classpath. 
It can be obtained here: http://sourceforge.net/projects/joda-time/files/joda-time/ package blog.jodatime; import javax.xml.bind.JAXBContext; import javax.xml.bind.Marshaller; import org.joda.time.DateMidnight; import org.joda.time.DateTime; import org.joda.time.LocalDate; import org.joda.time.LocalDateTime; import org.joda.time.LocalTime; public class Demo { public static void main(String[] args) throws Exception { Root root = new Root(); root.setDateTime(new DateTime(2011, 5, 30, 11, 2, 30, 0)); root.setDateMidnight(new DateMidnight(2011, 5, 30)); root.setLocalDate(new LocalDate(2011, 5, 30)); root.setLocalTime(new LocalTime(11, 2, 30)); root.setLocalDateTime(new LocalDateTime(2011, 5, 30, 11, 2, 30)); JAXBContext jc = JAXBContext.newInstance(Root.class); Marshaller marshaller = jc.createMarshaller(); marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true); marshaller.marshal(root, System.out); } } Output The following is the output from our demo code: 2011-05-30T11:02:30.000-04:00 2011-05-30T00:00:00.000-04:00 2011-05-30 11:02:30.000 2011-05-30T11:02:30.000 From http://blog.bdoughan.com/2011/05/jaxb-and-joda-time-dates-and-times.html
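The adapters also work on the way back in, converting the ISO 8601 strings into Joda-Time objects again. A brief unmarshalling sketch (the input file name is a placeholder):

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class UnmarshalDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Root.class);
        Unmarshaller unmarshaller = jc.createUnmarshaller();

        // the package-level adapters rebuild DateTime, DateMidnight, LocalDate,
        // LocalTime and LocalDateTime from the ISO 8601 text
        Root root = (Root) unmarshaller.unmarshal(new File("input.xml"));
        System.out.println(root.getLocalDate());
    }
}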
December 29, 2011
by Blaise Doughan
· 15,398 Views