The Latest Java Topics

Top 10 Causes of Java EE Enterprise Performance Problems
Performance problems are one of the biggest challenges to expect when designing and implementing Java EE related technologies.
June 20, 2012
by Pierre-Hugues Charbonneau
· 273,345 Views · 20 Likes
How to Resolve java.lang.NoClassDefFoundError – Part 2
This article is part 2 of our NoClassDefFoundError troubleshooting series. It focuses on the simplest type of NoClassDefFoundError problem. This article is ideal for Java beginners, and I highly recommend that you compile and run the sample Java program yourself. The following format will be used going forward and will provide you with:

- A description of the problem case and the type of NoClassDefFoundError
- A sample Java program "simulating" the problem case
- A ClassLoader chain view
- Recommendations and resolution strategies

NoClassDefFoundError problem case 1 – missing JAR file

The first problem case we will cover is related to a Java program packaging and/or classpath problem. A typical Java program can include one or many JAR files created at compile time. NoClassDefFoundError can often be observed when you forget to add the JAR file(s) containing Java classes referenced by your Java or Java EE application. This type of problem is normally not hard to resolve once you analyze the Java exception and the missing Java class name.

Sample Java program

The following simple Java program is split as per below:

- The main Java program: NoClassDefFoundErrorSimulator
- The caller Java class: CallerClassA
- The referencing Java class: ReferencingClassA
- A util class for ClassLoader and logging related facilities: JavaEETrainingUtil

This program simply attempts to create a new instance and execute a method of the Java class CallerClassA, which references the class ReferencingClassA. It will demonstrate how a simple classpath problem can trigger NoClassDefFoundError. The program also displays detail on the current class loader chain at class loading time in order to help you keep track of this process. This will be especially useful for future and more complex problem cases when dealing with larger class loader chains.

#### NoClassDefFoundErrorSimulator.java

```java
package org.ph.javaee.training1;

import org.ph.javaee.training.util.JavaEETrainingUtil;

/**
 * NoClassDefFoundErrorSimulator
 * @author Pierre-Hugues Charbonneau
 */
public class NoClassDefFoundErrorSimulator {

    public static void main(String[] args) {
        System.out.println("java.lang.NoClassDefFoundError Simulator - Training 1");
        System.out.println("Author: Pierre-Hugues Charbonneau");
        System.out.println("http://javaeesupportpatterns.blogspot.com");

        // Print the current ClassLoader context
        System.out.println("\nCurrent ClassLoader chain: "
                + JavaEETrainingUtil.getCurrentClassloaderDetail());

        // 1. Create a new instance of CallerClassA
        CallerClassA caller = new CallerClassA();

        // 2. Execute a method of the caller
        caller.doSomething();

        System.out.println("done!");
    }
}
```

#### CallerClassA.java

```java
package org.ph.javaee.training1;

import org.ph.javaee.training.util.JavaEETrainingUtil;

/**
 * CallerClassA
 * @author Pierre-Hugues Charbonneau
 */
public class CallerClassA {

    private final static String CLAZZ = CallerClassA.class.getName();

    static {
        System.out.println("Classloading of " + CLAZZ + " in progress..."
                + JavaEETrainingUtil.getCurrentClassloaderDetail());
    }

    public CallerClassA() {
        System.out.println("Creating a new instance of " + CallerClassA.class.getName() + "...");
    }

    public void doSomething() {
        // Create a new instance of ReferencingClassA
        ReferencingClassA referencingClass = new ReferencingClassA();
    }
}
```

#### ReferencingClassA.java

```java
package org.ph.javaee.training1;

import org.ph.javaee.training.util.JavaEETrainingUtil;

/**
 * ReferencingClassA
 * @author Pierre-Hugues Charbonneau
 */
public class ReferencingClassA {

    private final static String CLAZZ = ReferencingClassA.class.getName();

    static {
        System.out.println("Classloading of " + CLAZZ + " in progress..."
                + JavaEETrainingUtil.getCurrentClassloaderDetail());
    }

    public ReferencingClassA() {
        System.out.println("Creating a new instance of " + ReferencingClassA.class.getName() + "...");
    }

    public void doSomething() {
        // nothing to do...
    }
}
```

#### JavaEETrainingUtil.java

```java
package org.ph.javaee.training.util;

import java.util.Stack;

/**
 * JavaEETrainingUtil
 * @author Pierre-Hugues Charbonneau
 */
public class JavaEETrainingUtil {

    /**
     * getCurrentClassloaderDetail
     * @return a printable view of the current ClassLoader chain
     */
    public static String getCurrentClassloaderDetail() {
        StringBuffer classLoaderDetail = new StringBuffer();
        Stack<ClassLoader> classLoaderStack = new Stack<ClassLoader>();

        ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
        classLoaderDetail.append("\n-----------------------------------------------------------------\n");

        // Build a Stack of the current ClassLoader chain
        while (currentClassLoader != null) {
            classLoaderStack.push(currentClassLoader);
            currentClassLoader = currentClassLoader.getParent();
        }

        // Print the ClassLoader parent chain
        while (classLoaderStack.size() > 0) {
            ClassLoader classLoader = classLoaderStack.pop();
            classLoaderDetail.append(classLoader);
            if (classLoaderStack.size() > 0) {
                classLoaderDetail.append("\n--- delegation ---\n");
            } else {
                classLoaderDetail.append(" **Current ClassLoader**");
            }
        }
        classLoaderDetail.append("\n-----------------------------------------------------------------\n");

        return classLoaderDetail.toString();
    }
}
```

Problem reproduction

In order to replicate the problem, we will simply "voluntarily" omit from the classpath the JAR file that contains the referencing Java class ReferencingClassA. The Java program is packaged as per below:

- MainProgram.jar (contains NoClassDefFoundErrorSimulator.class and JavaEETrainingUtil.class)
- CallerClassA.jar (contains CallerClassA.class)
- ReferencingClassA.jar (contains ReferencingClassA.class)

Now, let's run the program as is:

## Baseline (normal execution)

```
.\bin>java -classpath CallerClassA.jar;ReferencingClassA.jar;MainProgram.jar org.ph.javaee.training1.NoClassDefFoundErrorSimulator
java.lang.NoClassDefFoundError Simulator - Training 1
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

Current ClassLoader chain:
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------

Classloading of org.ph.javaee.training1.CallerClassA in progress...
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------
Creating a new instance of org.ph.javaee.training1.CallerClassA...
Classloading of org.ph.javaee.training1.ReferencingClassA in progress...
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------
Creating a new instance of org.ph.javaee.training1.ReferencingClassA...
done!
```

For the initial run (baseline), the main program was able to create a new instance of CallerClassA and execute its method successfully, including successful class loading of the referencing class ReferencingClassA.

## Problem reproduction run (with removal of ReferencingClassA.jar)

```
.\bin>java -classpath CallerClassA.jar;MainProgram.jar org.ph.javaee.training1.NoClassDefFoundErrorSimulator
java.lang.NoClassDefFoundError Simulator - Training 1
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

Current ClassLoader chain:
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------

Classloading of org.ph.javaee.training1.CallerClassA in progress...
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------
Creating a new instance of org.ph.javaee.training1.CallerClassA...
Exception in thread "main" java.lang.NoClassDefFoundError: org/ph/javaee/training1/ReferencingClassA
        at org.ph.javaee.training1.CallerClassA.doSomething(CallerClassA.java:25)
        at org.ph.javaee.training1.NoClassDefFoundErrorSimulator.main(NoClassDefFoundErrorSimulator.java:28)
Caused by: java.lang.ClassNotFoundException: org.ph.javaee.training1.ReferencingClassA
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        ... 2 more
```

What happened? The removal of ReferencingClassA.jar, containing ReferencingClassA, prevented the current class loader from locating this referencing Java class at runtime, leading to ClassNotFoundException and NoClassDefFoundError. This is the typical exception you will get if you omit JAR file(s) from your Java start-up classpath, or from within an EAR / WAR for Java EE related applications.

ClassLoader view

Now let's review the ClassLoader chain so you can properly understand this problem case. As you saw from the Java program output logging, the following Java ClassLoaders were found:

```
Classloading of org.ph.javaee.training1.CallerClassA in progress...
-----------------------------------------------------------------
sun.misc.Launcher$ExtClassLoader@17c1e333
--- delegation ---
sun.misc.Launcher$AppClassLoader@214c4ac9 **Current ClassLoader**
-----------------------------------------------------------------
```

** Please note that the Java bootstrap class loader, which is responsible for loading the core JDK classes, is written in native code and does not appear in this chain. **

## sun.misc.Launcher$AppClassLoader

This is the system class loader, responsible for loading our application code found on the Java classpath specified at start-up.

## sun.misc.Launcher$ExtClassLoader

This is the extension class loader, responsible for loading code in the extensions directories (/lib/ext, or any other directory specified by the java.ext.dirs system property). As you can see from the Java program logging output, the extension class loader is the parent of the system class loader. Our sample Java program was loaded at the system class loader level. Please note that this class loader chain is very simple for this problem case, since we did not create child class loaders at this point. This will be covered in future articles.

Recommendations and resolution strategies

Find below my recommendations and resolution strategies for NoClassDefFoundError problem case 1:

- Review the java.lang.NoClassDefFoundError error and identify the missing Java class.
- Verify and locate the missing Java class in your compile / build environment.
- Determine whether the missing Java class belongs to your application code, a third-party API or the Java EE container itself. Verify where the missing JAR file(s) is / are expected to be found.
- Once found, verify your runtime environment Java classpath for any typo or missing JAR file(s).
- If the problem is triggered from a Java EE application, perform the same steps above but also verify the packaging of your EAR / WAR file for missing JAR and other library file dependencies such as MANIFEST entries.

Please feel free to post any question or comment. Part 3 will be available shortly.
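As a quick sanity check before digging into class loader chains, you can also ask the runtime directly whether a suspect class is visible. This small diagnostic sketch is my own addition (not part of the original sample program); the class name is taken from the example above:

```java
// Hypothetical pre-flight check: verify that a class is visible to the
// current context class loader before the code path that needs it runs.
public class ClasspathCheck {

    public static void main(String[] args) {
        String className = "org.ph.javaee.training1.ReferencingClassA";
        String resource = className.replace('.', '/') + ".class";

        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        java.net.URL url = cl.getResource(resource);

        if (url == null) {
            System.out.println(className + " is NOT visible to " + cl);
        } else {
            System.out.println(className + " loaded from: " + url);
        }
    }
}
```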
June 16, 2012
by Pierre-Hugues Charbonneau
· 173,435 Views · 1 Like
How to Identify and Resolve Hibernate N+1 SELECTs Problems
Let's assume that you're writing code that tracks the price of mobile phones. Now, let's say you have a collection of objects representing different mobile phone vendors (MobileVendor), and each vendor has a collection of objects representing the PhoneModels they offer. To put it simply, there exists a one-to-many relationship between MobileVendor and PhoneModel.

MobileVendor class:

```java
class MobileVendor {
    long vendor_id;
    PhoneModel[] phoneModels;
    ...
}
```

Okay, so you want to print out all the details of the phone models. A naive O/R implementation would SELECT all mobile vendors and then do N additional SELECTs to get the PhoneModel information for each vendor:

```sql
-- Get all Mobile Vendors
SELECT * FROM MobileVendor;

-- For each MobileVendor, get PhoneModel details
SELECT * FROM PhoneModel WHERE MobileVendor.vendorId=?
```

As you can see, the N+1 problem occurs when the first query populates the primary object and a second query is then run to populate the child objects for each of the unique primary objects returned.

Resolving the N+1 SELECTs problem

(i) HQL fetch join:

```java
"from MobileVendor mobileVendor join fetch mobileVendor.phoneModel PhoneModels"
```

The corresponding SQL would be (assuming the tables t_mobile_vendor for MobileVendor and t_phone_model for PhoneModel):

```sql
SELECT * FROM t_mobile_vendor vendor
LEFT OUTER JOIN t_phone_model model ON model.vendor_id = vendor.vendor_id
```

(ii) Criteria query:

```java
Criteria criteria = session.createCriteria(MobileVendor.class);
criteria.setFetchMode("phoneModels", FetchMode.EAGER);
```

In both cases, our query returns a list of MobileVendor objects with their phoneModels initialized. Only one query needs to be run to return all the PhoneModel and MobileVendor information required.
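For context, here is a minimal sketch of how the one-to-many relationship above might be mapped with JPA annotations. This is my own illustration (the article only shows a plain class with an array field), and the property names are assumptions:

```java
import java.util.List;
import javax.persistence.*;

// Sketch of the MobileVendor/PhoneModel mapping described above.
@Entity
public class MobileVendor {

    @Id
    @GeneratedValue
    private long vendorId;

    // LAZY is the default for @OneToMany; it is exactly what makes the naive
    // "load all vendors, then touch each phoneModels collection" pattern
    // issue N extra SELECTs.
    @OneToMany(mappedBy = "vendor", fetch = FetchType.LAZY)
    private List<PhoneModel> phoneModels;
}

@Entity
class PhoneModel {

    @Id
    @GeneratedValue
    private long modelId;

    @ManyToOne
    private MobileVendor vendor;
}
```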
June 13, 2012
by Singaram Subramanian
· 201,117 Views · 13 Likes
Get TeamCity Artifacts Using HTTP, Ant, Gradle and Maven
In how many ways can you retrieve TeamCity artifacts? I say plenty to choose from! If you're in the world of Java build tools, you can use a plain HTTP request, Ant + Ivy, Gradle or Maven to download and use binaries produced by TeamCity build configurations. How? Read on.

Build Configuration "id"

Before you retrieve artifacts of any build configuration, you need to know its "id", which can be seen in the browser when the corresponding configuration is viewed. Let's take the IntelliJ IDEA Community Edition project hosted at teamcity.jetbrains.com as an example. Its "Community Dist" build configuration provides a number of artifacts which we're going to play with, and its "id" is "bt343".

HTTP

Anonymous HTTP access is probably the easiest way to fetch TeamCity artifacts. The URL to do so is:

http://server/guestAuth/repository/download/{btN}/{buildNumber}/{artifactName}

For this request to work, three parameters need to be specified:

- btN: the build configuration "id", as mentioned above.
- buildNumber: a build number or one of the predefined constants "lastSuccessful", "lastPinned", or "lastFinished". For example, you can download periodic IDEA builds from the last successful TeamCity execution.
- artifactName: the name of an artifact, like "ideaIC-118.SNAPSHOT.win.zip". It can also take the form "artifactName!archivePath" for reading an archive's content, like IDEA's build file.

You can get a list of all artifacts produced in a certain build by requesting a special "teamcity-ivy.xml" artifact generated by TeamCity.

Ant + Ivy

All artifacts published to TeamCity are accompanied by a "teamcity-ivy.xml" Ivy descriptor, effectively making TeamCity an Ivy repository. The code referenced below ("ivyconf.xml", "ivy.xml" and "build.xml") downloads "core/annotations.jar" from the IDEA distribution to the "download/ivy" directory.

Gradle

Identically to the Ivy example above, it is fairly easy to retrieve TeamCity artifacts with Gradle due to its built-in Ivy support. In addition to downloading the same jar file to the "download/gradle" directory with a custom Gradle task, let's use it as a "compile" dependency for our Java class, importing IDEA's @NotNull annotation:

"Test.java"

```java
import org.jetbrains.annotations.NotNull;

public class Test {
    private final String data;

    public Test(@NotNull String data) {
        this.data = data;
    }
}
```

"build.gradle"

```groovy
apply plugin: 'java'

repositories {
    ivy {
        ivyPattern      'http://teamcity.jetbrains.com/guestAuth/repository/download/[module]/[revision]/teamcity-ivy.xml'
        artifactPattern 'http://teamcity.jetbrains.com/guestAuth/repository/download/[module]/[revision]/[artifact](.[ext])'
    }
}

dependencies {
    compile('org:bt343:lastSuccessful') {
        artifact {
            name = 'core/annotations'
            type = 'jar'
        }
    }
}

task copyJar(type: Copy) {
    from configurations.compile
    into "${project.projectDir}/download/gradle"
}
```

Maven

The best way to use Maven with TeamCity is by setting up an Artifactory repository manager and its TeamCity plugin. This way, artifacts produced by your builds are nicely deployed to Artifactory and can be served from there as from any other remote Maven repository. However, you can still use TeamCity artifacts in Maven without any additional setup. The "ivy-maven-plugin" bridges the two worlds, allowing you to plug Ivy resolvers into Maven's runtime environment, download the required dependencies and add them to the corresponding "compile" or "test" scopes. Let's compile the same Java source from the Gradle example, but using Maven this time.
"pom.xml" 4.0.0 com.test maven jar 0.1-SNAPSHOT [${project.groupId}:${project.artifactId}:${project.version}] Ivy Maven plugin example com.github.goldin ivy-maven-plugin 0.2.5 get-ivy-artifacts ivy initialize ${project.basedir}/ivyconf.xml ${project.basedir}/ivy.xml ${project.basedir}/download/maven compile When this plugin runs it resolves IDEA annotations artifact using the same "ivyconf.xml" and "ivy.xml" files we’ve seen previously, copies it to "download/maven" directory and adds to "compile" scope so our Java sources can compile. GitHub Project All examples demonstrated are available in my GitHub project. Feel free to clone and run it: git clone git://github.com/evgeny-goldin/teamcity-download-examples.git cd teamcity-download-examples chmod +x run.sh dist/ant/bin/ant gradlew dist/maven/bin/mvn ./run.sh Resources The links below can provide you with more details: TeamCity – Patterns For Accessing Build Artifacts TeamCity – Accessing Server by HTTP TeamCity – Configuring Artifact Dependencies Using Ant Build Script Gradle – Ivy repositories "ivy-maven-plugin" That’s it, you’ve seen it – TeamCity artifacts are perfectly accessible using either of 4 ways: direct HTTP access, Ant + Ivy, Gradle or Maven. Which one do you use? Let me know!
June 5, 2012
by Evgeny Goldin
· 10,788 Views
Spring Integration - Robust Splitter Aggregator
A Robust Splitter Aggregator Design Strategy – the Messaging Gateway Adapter Pattern

What do we mean by robust? In the context of this article, robustness refers to the ability to manage exception conditions within a flow without immediately returning to the caller. In some processing scenarios, n of m responses is good enough to proceed to a conclusion. Example processing scenarios that typically have these tendencies are:

- Quotations for finance, insurance and booking systems.
- Fan-out publishing systems.

Why do we need robust Splitter Aggregator designs?

First and foremost, an introduction to a typical Splitter Aggregator pattern may be necessary. The Splitter is an EIP pattern that describes a mechanism for breaking composite messages into parts so that they can be processed individually. A Router is an EIP pattern that describes routing messages into channels – aiming them at specific messaging endpoints. The Aggregator is an EIP pattern that collates and stores a set of messages that belong to a group, and releases them when that group is complete. Together, those three EIP constructs form a powerful mechanism for dividing processing into distinct units of work.

Spring Integration (SI) uses the same pattern terminology as EIP, so readers of that methodology will be quite comfortable with Spring Integration Framework constructs. The SI Framework allows significant customisation of all three of those constructs and furthermore, by simply using asynchronous channels as you would in any other multi-threaded configuration, allows those units of work to be executed in parallel.

An interesting challenge when working with SI Splitter Aggregator designs is building appropriately robust flows that operate predictably in a number of invocation scenarios. A simple splitter aggregator design can be used in many circumstances and operate without heavy customisation of the SI constructs. However, some service requirements demand a more robust processing strategy and therefore more complex configuration. The following sections describe what a simple Splitter Aggregator design actually looks like and the type of processing your design must be able to deal with, then go on to suggest candidate solutions for more robust processing.

A Simple Splitter Aggregator Design

The following Splitter Aggregator design shows a simple flow that receives document request messages into a messaging gateway, splits the message into two processing routes and then aggregates the responses. Note that the diagram was built from EIP constructs in OmniGraffle rather than being an Integration Graph view from within STS; the channels are missing from the diagram for the sake of brevity.

SI constructs in detail:

- Messaging Gateways – there are three messaging gateways. A number of configurations are available for gateway specifications, but significantly they can return business objects, exceptions and nulls (following a timeout). The gateway to the far left is the service gateway for which we are defining the flow. The other two gateways, between the Router and Aggregator, are external systems that will be providing responses to business questions that our flow generates.
- The Splitter – a single splitter exists and is responsible for consuming the document message and producing a collection of messages for onward processing. The Java signature for the (most often custom) Splitter specifies a single object argument and a collection for return.
- The Recipient List Router – a single router exists; any appropriate router can be used, so choose the one that most closely matches your requirements – you can easily route by expression or payload type. The primary purpose of the router is to route the collection of messages supplied by the splitter. This is a pretty typical Splitter Aggregator configuration.
- Aggregator – a single construct that is responsible for collecting messages together in a group so that further processing can take place on the gateway responses. Although the Aggregator can be configured with attributes and bean definitions to provide alternative grouping and release strategies, most often the default aggregation strategy suffices.

Interesting aspects of Splitter Aggregator operation:

- Gateway – the inbound gateway, the one on the far left, may or may not have an error handling bean reference defined on it. If it does, that bean will have an opportunity to handle any exception thrown within the flow to the right of that gateway. If not, any exception will be thrown straight out of the gateway.
- Gateway – an optional default-reply-timeout can be set on each of the gateways; there are significant implications to setting this value, so ensure that they are well understood. An expired timeout will result in a null being returned from the gateway. This is the very same condition that can lead to a thread getting parked if an upstream gateway also has no default-reply-timeout set.
- Splitter input channel – this can be a simple direct channel or a direct channel with a dispatcher defined on it. If the channel has a dispatcher specified, the flow downstream of this point will be asynchronous, i.e. multi-threaded. This also changes the upstream gateway semantics, as it usually means that an otherwise impotent default-reply-timeout becomes active.
- Splitter – the splitter must return a single object: a collection, a java.util.List. The SI framework will take each member of that list and feed it into the output-channel of the Splitter – as with this example, usually straight into a router. The contract for Splitter List returns is as for its use in Java – it may contain zero, one or more elements. If the splitter returns an empty list, it's unlikely that the router will have any work to do, and so the flow invocation will be complete. However, if the List contains one item, the SI framework will extract that item from the list and push it into the router; if it gets routed successfully, the flow will continue.
- Router – the router will simply route messages into one of two gateways in this example.
- Gateways – the two gateways used between the Splitter and Aggregator are interesting. In this example I have used the generic gateway EIP pattern to represent a message sub-system without defining it explicitly – we could use an HTTP outbound gateway, another SI flow or any other external system. Of course, for each of those sub-systems, a number of responses is possible. Depending on the protocol and external system, the message request may fail to send, the response may fail to arrive, a long running process may be invoked, or a network error, timeout or general processing exception may occur.
- Aggregator – the single aggregator will wait for a number of responses depending on what has been created by the Splitter. In the case where the splitter return list is empty, the Aggregator will not get invoked. In the case where the Splitter return list has one entry, the aggregator will be waiting for one gateway response to complete the group. In the case where the Splitter list has n entries, the Aggregator will be waiting for n entries to complete the group. Custom correlation strategies, release strategies and message stores can be injected amongst a set of rich configuration aspects.

Interesting aspects of simple Splitter Aggregator operation

The primary deciding factor for establishing whether this type of simple design is adequate for your requirements is understanding what happens in the event of failure. If any exception occurring in your SI flow results in the flow invocation being abandoned and that suits your requirements, there's no need to read any further. If, however, you need to continue processing following a failure in one of the gateways, the remainder of this article may be of more interest.

Exceptions, from any source, generated between the splitter and aggregator will result in an empty or partial group being discarded by the Aggregator. The exception will propagate back to the closest upstream gateway for either handling by a custom bean or re-throwing by the gateway. Note that a custom release strategy on the Aggregator is difficult to use, especially alongside timeouts, but it would not help in this case anyway, as the exception will propagate back to the leftmost gateway before the aggregator is invoked. It's also possible to configure exception handlers on the innermost gateways; the exception message could be caught, but how do you route messages from a custom exception handler into the aggregator to complete the group – inject the aggregator channel definition into the custom exception handler? This is a poor approach and would involve unpacking an exception message payload, copying the original message headers into a new SI message and then adding the original payload – only four or five lines of code, but dirty it is.

Following exception generation, exception messages (without modification) cannot be routed into an Aggregator to complete the group. The original message – the one that contains the correlation and sequence ids for the group and group position – is buried inside the SI exception message's payload. If processing needs to continue following exception generation, it should be clear that the following must take place:

- the aggregation group needs to be completed;
- any exceptions must be caught and handled before getting back to the closest upstream gateway;
- the correlation and sequence identifiers that allow group completion in the aggregator are buried within the exception message payload and require extraction and setting on the message that's bound for the aggregator.

A More Robust Solution – the Messaging Gateway Adapter Pattern

Dealing with exceptions and null returns from gateways naturally leads to a design that implements a wrapper around the messaging gateway. This affords a level of control that would otherwise be very difficult to establish. The adapter technique allows all returns from messaging gateways to be caught and processed, as the messaging gateway is injected into a Service Activator and called directly from it. The messaging gateway no longer responds to the aggregator directly; it responds to a custom Java Spring bean configured in the Service Activator namespace definition. As expected, processing that does not experience an exception continues as normal.

Those flows that experience exception conditions, or unexpected or missing responses from messaging gateways, need to process messages in such a way that message groups bound for aggregation can be completed. If the Service Activator were to allow the exception to propagate outside of its backing bean, the group would not complete. The same applies not just to exceptions but to any return object that does not carry the prerequisite group correlation id and sequence headers – this is where the adaptation is applied. Exception messages or null responses from messaging gateways are caught and handled as shown in the following example code:

```java
import com.l8mdv.sample.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.Message;
import org.springframework.integration.MessageHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.util.Assert;

public class AvsServiceImpl implements AvsService {

    private static final Logger logger = LoggerFactory.getLogger(AvsServiceImpl.class);
    public static final String MISSING_MANDATORY_ARG = "Mandatory argument is missing.";

    private final AvsGateway avsGateway;

    public AvsServiceImpl(final AvsGateway avsGateway) {
        this.avsGateway = avsGateway;
    }

    public Message service(Message message) {
        Assert.notNull(message, MISSING_MANDATORY_ARG);
        Assert.notNull(message.getPayload(), MISSING_MANDATORY_ARG);

        MessageHeaders requestMessageHeaders = message.getHeaders();
        Message responseMessage = null;
        try {
            logger.debug("Entering AVS Gateway");
            responseMessage = avsGateway.send(message);
            if (responseMessage == null) {
                responseMessage = buildNewResponse(requestMessageHeaders,
                        AvsResponseType.NULL_RESULT);
            }
            logger.debug("Exited AVS Gateway");
            return responseMessage;
        } catch (Exception e) {
            return buildNewResponse(responseMessage, requestMessageHeaders,
                    AvsResponseType.EXCEPTION_RESULT, e);
        }
    }

    private Message buildNewResponse(MessageHeaders requestMessageHeaders,
                                     AvsResponseType avsResponseType) {
        Assert.notNull(requestMessageHeaders, MISSING_MANDATORY_ARG);
        Assert.notNull(avsResponseType, MISSING_MANDATORY_ARG);

        AvsResponse avsResponse = new AvsResponse();
        avsResponse.setError(avsResponseType);
        return MessageBuilder.withPayload(avsResponse)
                .copyHeadersIfAbsent(requestMessageHeaders).build();
    }

    private Message buildNewResponse(Message responseMessage,
                                     MessageHeaders requestMessageHeaders,
                                     AvsResponseType avsResponseType, Exception e) {
        Assert.notNull(requestMessageHeaders, MISSING_MANDATORY_ARG);
        Assert.notNull(avsResponseType, MISSING_MANDATORY_ARG);
        Assert.notNull(e, MISSING_MANDATORY_ARG);

        AvsResponse avsResponse = new AvsResponse();
        // The gateway may have thrown before returning, so responseMessage can be null here.
        avsResponse.setError(avsResponseType,
                responseMessage != null ? responseMessage.getPayload() : null, e);
        return MessageBuilder.withPayload(avsResponse)
                .copyHeadersIfAbsent(requestMessageHeaders).build();
    }
}
```

Notice the last line of the catch clause in the exception handling block. That line copies the correlation and sequence headers into the response message; this is mandatory if the aggregation group is to be allowed to complete, and it will always be necessary following an exception as shown here.

Consequences of using this technique

There's no doubt that introducing a Messaging Gateway Adapter into SI config makes the configuration more complex to read and follow. The key factor here is that there is no longer a linear progression through the configuration file. This is because the Service Activator must forward-reference a Gateway, or a Gateway must be defined before its adapting Service Activator – in both cases the result is the same.

Resources

Note: the design of the software that drove the creation of this meta-pattern was based on a requirement that a number of external risk assessment services would be accessed by a single, central Risk Assessment Service. In order to satisfy clients of the service, invocation had to take place in parallel and continue despite failure in any one of those external services. This requirement led to the design of the Messaging Gateway Adapter Pattern for the project.

- Spring Integration Reference Manual

The solution approach for this problem was discussed directly with Mark Fisher (SpringSource) in the context of building risk assessment flows for a large US financial institution. Although the configuration and code are protected by NDA and copyright, it is acceptable to express the design intention and similar code in this article.
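To make the adapter's behaviour concrete, here is a minimal usage sketch of my own. It assumes AvsGateway declares a single Message send(Message) method, as used in the code above; the stub simulates an expired reply timeout by returning null:

```java
import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;

public class AvsServiceDemo {

    public static void main(String[] args) {
        // Stub gateway that simulates an expired default-reply-timeout.
        AvsGateway timedOutGateway = new AvsGateway() {
            public Message send(Message request) {
                return null;
            }
        };

        AvsService service = new AvsServiceImpl(timedOutGateway);
        Message request = MessageBuilder.withPayload("avs-request").build();

        // The adapter converts the null reply into an AvsResponse carrying
        // NULL_RESULT, with the request's correlation/sequence headers copied
        // over, so the downstream aggregator can still complete its group.
        Message reply = service.service(request);
        System.out.println(reply.getPayload());
    }
}
```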
June 3, 2012
by Matt Vickery
· 23,114 Views
Spring Integration Gateways - Null Handling & Timeouts
Spring Integration (SI) Gateways

Spring Integration Gateways provide a semantically rich interface to message sub-systems. Gateways are specified using namespace constructs; these reference a specific Java interface that is backed by an object dynamically implemented at run-time by the Spring Integration framework. Furthermore, these Java interfaces can, if you so wish, be defined entirely independently of any Spring artefacts – that's both code and configuration. One of the primary advantages of using the SI gateway as an interface to message sub-systems is that it's possible to automatically adopt the benefit of rich, default and customisable gateway configuration. One such configuration attribute deserves further scrutiny and discussion, primarily because it's easy to misunderstand and misconfigure: default-reply-timeout.

Primary Motivator for Gateway Analysis

During recent consulting engagements, I've encountered a number of deployments that use Spring Integration Gateway specifications that may, in some circumstances, lead to production operational instability. This has often been in high-pressure environments or those where technology support is not backed by adequate training, testing, review or technology mentoring.

How do gateways behave in Spring Integration (R2.0.5)?

One of the key sections on gateways in the Spring Integration manual clearly explains gateway semantics. Below is a two-dimensional table of possible non-standard gateway returns for each of the scenarios that the SI Manual (r2.0.5) refers to.

Gateway non-standard responses:

| Runtime event | default-reply-timeout=x, single-threaded | default-reply-timeout=x, multi-threaded | default-reply-timeout=null, single-threaded | default-reply-timeout=null, multi-threaded |
|---|---|---|---|---|
| 1. Long running process | Thread parked | null returned | Thread parked | Thread parked |
| 2. Null returned downstream | null returned | null returned | Thread parked | Thread parked |
| 3. void method downstream | null returned | null returned | Thread parked | Thread parked |
| 4. Runtime exception | Error handler invoked or exception thrown | Error handler invoked or exception thrown | Error handler invoked or exception thrown | Error handler invoked or exception thrown |

The key parts of this table are the conditions that lead to invoking threads being parked, nulls being returned and exceptions being thrown. Each outcome results from a combination of configuration that is under the developer's control, deployed code that is under the developer's control, and conditions that are usually not under the developer's control.

The column headings in the table above cover two gateway configuration aspects. The default-reply-timeout is set by the SI configurer and is the amount of time that a client call is willing to wait for a response from the gateway. Secondly, synchronous flows are represented by single-threaded flows, asynchronous by multi-threaded flows. A synchronous, or single-threaded, flow is one in which the implicit input channel (gateway-request-channel) has no associated dispatcher configured. An asynchronous, or multi-threaded, flow is one in which the explicit input channel has a dispatcher configured (e.g. "taskExecutor"); the task executor specifies a thread pool that supplies threads for execution, and its configuration marks a thread boundary. Note: this is not the only way of making channels asynchronous.

The other configuration attribute referenced, default-reply-timeout, is set on the gateway namespace configuration. Both of these runtime aspects are set by the configurer during SI flow design and implementation; they are entirely under developer control. The "Runtime event" column indicates gateway-relevant runtime events that have to be considered during gateway configuration – these are obviously not under developer control. Trigger conditions for these events are not as unusual as one might hope.

1. Long running processes

It's not uncommon for thread pools to become exhausted because all pooled threads are waiting for an external resource accessed through a socket; this may be a long running database query, a firewall keeping a connection open despite the server terminating, etc. There is significant potential for these types of trigger. Some long running processes terminate naturally; sometimes they never complete and an application restart is required.

2. Null returned downstream

A null may be returned from a downstream SI construct such as a Transformer, Service Activator or Gateway. A Gateway may return null in some circumstances, such as following a gateway timeout event.

3. void method downstream

Any custom code invoked during an SI flow may use a void method signature. This can also be caused by configuration in circumstances where flows are determined dynamically at runtime.

4. Runtime exception

RuntimeExceptions can be triggered during normal operation and are generally handled by catching them at the gateway or allowing them to propagate through. They are generally much easier to handle than timeouts.

Gateway Timeout Handling Strategies

There are four possible outcomes from invoking a gateway with a request message, all as a result of specific runtime events: a) an ordinary message response, b) an exception message, c) a null, or d) no response. Ordinary business responses and exceptions are straightforward to understand and will not be covered further in this article. The two significant outcomes that will be explored further are strategies for dealing with nulls and no response.

Generally speaking, long running processes either terminate or they don't. Long running processes that terminate may eventually return a message through the invoked gateway or time out, depending on timeout configuration, in which case a null may be returned. The severity of this problem depends on throughput volume, the length of the long running process and system resources (thread-pool size).

Configuration exists for default-reply-timeout

In the case where a long running process event is underway and a default-reply-timeout has been set, as long as the long running process completes before the default-reply-timeout expires, there is no problem to deal with. However, if the long running process does not complete before that timeout expires, one of three outcomes will apply. Firstly, if the long running process terminates after the reply timeout expiry, the gateway will have already returned null to the invoker, so the null response needs handling by the invoker; the thread handling the long running process will be returned to the pool. Secondly, if the long running process does not terminate and a reply timeout has been set, the gateway will return null to the gateway invoker, but the thread executing the long running process will not be returned to the pool. Thirdly, and most significantly, if a default-reply-timeout has been configured but the long running process is running on the same thread as the invoker – i.e. synchronous channels supply messages to that process – the thread will not return; the default-reply-timeout has no effect.

Assuming the most common processing scenario, a long running process completes either before or after the reply timeout expiry. When a null is returned by the gateway, the invoker is forced to deal with a null response. It's often unacceptable to force gateway consumers to deal with null responses, and it isn't necessary: with a little additional configuration, this can be avoided.

Absent configuration for default-reply-timeout

The most significant danger exists around gateways that have no default-reply-timeout configuration set. A long running process, or a null returned from downstream, will mean that the invoking thread is parked. This is true for both synchronous and asynchronous flows and may ultimately force an application to be restarted, because the invoker thread pool is likely to start on a depletion course if this continues to occur.

Spring Integration Timeout Handling Design Strategies

For those Spring Integration configuration designers who are comfortable with gateway invokers dealing with null responses and exceptions, and who set default-reply-timeouts on gateways, there's no need to read further. However, if you wish to provide clients of your gateway with a more predictable response, a couple of strategies exist for handling null responses from gateways so that invokers are protected from having to deal with them.

Firstly, the simplest solution is to wrap the gateway with a service activator. The gateway must have the default-reply-timeout attribute value set in order to avoid unnecessary parking of threads. In order to avoid the consequences of long-running threads, it's also very prudent to use a dispatcher soon after entry to the gateway – this breaks the thread boundary. Whilst this is a valid technical approach, the impact is that we have forced a different entry point to our message sub-system: entry is now via a Service Activator rather than a Gateway. A side effect of this change is that the testing entry point changes. Integration tests that would normally reference a gateway to send a message now have to locate the backing implementation of the Service Activator – not ideal.

An alternative approach to solving this problem is to configure two gateways with a Service Activator between them. Only one of the gateways – the outer one – is exposed to invokers. Both gateways reference the same service interface. The outer gateway specification does not specify the default-reply-timeout but does specify the input and output channels in the same way a single gateway would. The Service Activator between the gateways handles null gateway responses, and possibly any exceptions if that is preferred to the gateway error handler approach. The Service Activator bean (enrollmentServiceGatewayHandler) deals with both null and exception responses from the adapted gateway (enrollmentServiceAdaptedGateway); where these are generated, a business response detailing the error is produced.
Spring Integration R2.1 changes: R2.1 adds an async-executor attribute to the gateway specification.
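As a minimal sketch of what such a handler bean might look like (my own illustration: EnrollmentService and EnrollmentError are assumed types, while the bean name comes from the article; the inner gateway is assumed to declare a single send(Message) method):

```java
import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;

public class EnrollmentServiceGatewayHandler {

    // Inner gateway: has default-reply-timeout set, so it returns null on expiry.
    private final EnrollmentService adaptedGateway;

    public EnrollmentServiceGatewayHandler(EnrollmentService adaptedGateway) {
        this.adaptedGateway = adaptedGateway;
    }

    public Message<?> handle(Message<?> request) {
        try {
            Message<?> reply = adaptedGateway.send(request);
            if (reply != null) {
                return reply;
            }
            // Timeout expired on the inner gateway: return a business error
            // response instead of propagating null to the outer gateway.
            return MessageBuilder.withPayload(new EnrollmentError("reply timeout")) // hypothetical error type
                    .copyHeadersIfAbsent(request.getHeaders())
                    .build();
        } catch (RuntimeException e) {
            return MessageBuilder.withPayload(new EnrollmentError(e.getMessage()))
                    .copyHeadersIfAbsent(request.getHeaders())
                    .build();
        }
    }
}
```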
May 26, 2012
by Matt Vickery
· 23,986 Views · 1 Like
Connection Pooling in a Java Web Application with Tomcat and NetBeans IDE
After my article Connection Pooling in a Java Web Application with Glassfish and NetBeans IDE, here are the instructions for Tomcat.

Requirements

- NetBeans IDE (this tutorial uses NetBeans 7)
- Tomcat (this tutorial uses Tomcat 7, which is bundled with NetBeans)
- MySQL database
- MySQL Java driver

Steps

Assuming your MySQL database is ready, connect to it and create a database. Let's call it connpool:

```sql
mysql> create database connpool;
```

Now we create and populate the table from which we will fetch the data:

```sql
mysql> use connpool;
mysql> create table data(id int(5) not null unique auto_increment, name varchar(255) not null);
mysql> insert into data(name) values("Fred Flintstone"), ("Pink Panther"), ("Wayne Cramp"), ("Johnny Bravo"), ("Spongebob Squarepants");
```

That is it for the database part. We now create our web application.

In NetBeans IDE, click File → New Project... and select Java Web → Web Application. Click Next and give the project the name TomPool. Click Next, choose Tomcat as the server and, since we are not going to use any frameworks, click Finish. The project will be created and the start page, index.jsp, opened for us in the IDE.

Now we create the connection pooling parameters. In the Projects window, expand Configuration Files and open "context.xml". You will see that the IDE has added boilerplate for us. Delete the self-closing last line and add the Resource definition for the pool (a javax.sql.DataSource pointing at the connpool database) to the context.xml file. Make sure you edit your MySQL username and password appropriately.

Next, expand the Web Pages node, right-click WEB-INF → New → Other → XML → XML Document. Click Next and type web for the file name. Click Next, choose Well-Formed Document, then Finish. You will now have the file "web.xml". Delete everything in the file and declare the resource reference (in the original listing: display name "MySQL Test App", description "DB Connection", resource name "connpool", type javax.sql.DataSource, auth "Container").

That is it for the connection pool. We now edit our code to make use of it. Edit index.jsp by adding the lookup code just after the initial comments, and edit the body section of the page to print the rows under the heading "Data in my Connection Pooled Database". Now test the connection pool by running the application.

If you want to have the one connection pool used in multiple applications, you need to edit the following two files:

1. /conf/web.xml – just before the closing tag, add the same resource reference (description "DB Connection", resource name "connpool", type javax.sql.DataSource, auth "Container").
2. /conf/context.xml – just before the closing tag, add the same Resource definition.

Now you can use the pool without editing XML files in each of your applications; just use the sample code as given in index.jsp. That's it, folks!
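As an illustration of what the index.jsp lookup boils down to, here is a plain-Java sketch of my own for using the pool from code running inside the container. The JNDI name "jdbc/connpool" is an assumption; it must match the res-ref-name declared in web.xml:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PoolDemo {

    // Must run inside the container (e.g. called from a servlet or JSP),
    // since "java:comp/env" is only bound for managed components.
    public static void printData() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/connpool"); // assumed res-ref-name

        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM data")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
            }
        }
    }
}
```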
May 23, 2012
by Arthur Buliva
· 69,972 Views · 2 Likes
Lucene Setup on OracleDB in 5 Minutes
This tutorial is for people who want to run an Apache Lucene example with OracleDB in just five minutes.
May 19, 2012
by Mohammad Juma
· 30,869 Views · 4 Likes
Functional Programming on the JVM
Introduction

In recent times, many programming languages that run on the JVM have emerged, and many of them support writing code in a functional style. Programmers have started realizing the benefits of functional programming and are beginning to rediscover the power of this paradigm; the emergence of multiple languages on the JVM has only helped to reignite the strong interest in it.

Java at its core is an imperative programming language. However, in the recent past many new languages like Scala, Clojure and Groovy have become popular that support a functional programming style and yet run on the JVM. None of these languages can be considered a pure functional language, since all of them allow Java code to be called from within them, and Java on its own is not a functional language. Still, they have different degrees of support for writing code in a functional style, and each has its own benefits. Functional programming requires a different kind of thinking and has its own advantages compared to imperative programming. It seems that Java has also recognized the advantages of functional programming and is slowly inching towards it. The first sign of this can be seen in the form of the lambda expressions that will be supported in Java 8. Although it's too early to comment on this, as the draft for Java 8 is still under review and the release is expected next year, it does show that Java plans to support a functional programming style going forward.

In this article we will first discuss what functional programming is and how it differs from imperative programming. Later we will see where each of the above-mentioned JVM-based programming languages – Scala, Clojure and Groovy – fits in the world of functional programming and what each of them has to offer. Finally, we will sneak a peek at Java 8's lambda expressions.

Why Functional Programming?

Computers of the current era ship with multicore processors, and going forward the number of processors in a machine is only going to increase. The code we write today and tomorrow will probably never run on a single-processor system. In order to get the best out of this, software must be designed to make more and more use of concurrency and hence keep all available processors busy. Java does provide concurrency concepts like threads, synchronization and locks to execute code in parallel, but the shared-memory multi-threading approach in Java often causes more trouble than it solves. JVM-based functional programming languages like Scala, Clojure and Groovy look at these problems from a different angle and provide less complex and less error-prone solutions compared to imperative programming. They provide immutability concepts out of the box and hence eliminate the need for synchronization and the associated risk of deadlocks or livelocks. Concepts like Actors, Agents and DataFlow variables provide high-level concurrency abstractions and make it very easy to write concurrent programs.

What is Functional Programming?

Functional programming is a paradigm that treats functions as first class citizens. At the core of functional programming is immutability. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has no side effects, whereas programming in an imperative style can result in side effects. Let's elaborate on each of these characteristics to understand the concepts behind functional programming.

Immutable state – the state of an object doesn't change and hence need not be protected or synchronized. That might sound a bit awkward at first: if nothing changes, one might think we are not writing a useful program. However, that's not what immutable state means. In functional programming, change in state occurs via a series of transformations which keep each object immutable and yet achieve the change in state.

Functions as first class citizens – there was a major shift in the way programs were written when object-oriented concepts came into the picture. Everything was conceptualized as an object, and any action to be performed was treated as a method call on an object; a series of method calls on objects gets the desired work done. In the functional programming world, it's more about thinking in terms of a chain of communicating functions than method calls on objects. This makes functions the first class citizens of functional programming, since everything is modelled around functions.

Higher-order functions – functions in functional programming are higher-order functions, since the following actions can be performed with them:

1. Functions can be passed to functions as arguments.
2. Functions can be created within functions, just as objects can be created in functions.
3. Functions can be returned from functions.

Functions with no side effects – in functional programming, function execution has no side effects: a function will always return the same result for the same argument when called multiple times. It doesn't change anything outside its boundaries and is not affected by any external change outside its boundary. It doesn't change its input values and can only produce new output; once the output has been produced and returned by the function, it also becomes immutable and cannot be modified by any other function. In other words, these functions support referential transparency: if a function takes an input and returns some output, multiple invocations of that function at different points in time will always return the same output as long as the input remains the same. This is one of the main motivations behind using a functional language, as it makes it easy to understand and predict the behaviour of a program. Characteristics like immutability and freedom from side effects are extremely helpful when writing multi-threaded code, as developers need not worry about synchronizing state. Hence functional code is very easy to distribute across multiple cores.
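To make the side-effect distinction concrete, here is a small illustrative contrast in plain Java (my own example, not drawn from any of the languages discussed below):

```java
public class Purity {

    private static int counter = 0;

    // Pure: same arguments always produce the same result; no external state
    // is read or written, so it is safe to call from many threads without
    // synchronization.
    static int add(int a, int b) {
        return a + b;
    }

    // Impure: the result depends on (and mutates) external state, so repeated
    // calls with the same arguments return different values.
    static int addAndCount(int a, int b) {
        counter++;
        return a + b + counter;
    }
}
```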
Apart from supporting Java's concurrency model, it also provides concept of Actor model out of the box for event based asynchronous message passing between objects. The code written in Scala gets compiled into very efficient bytecode which can then be executed on JVM. Creating immutable list in Scala is very simple and doesn't require any extra effort. "val" keyword does the trick. val numbers = List(1,2,3,4) Functions can be passed as arguments. Let's see this with an example. Suppose we have a list of 10 numbers and we want to calculate sum of all the numbers in list. val numbers = List(1,2,3,4,5,6,7,8,9,10) val total = numbers.foldLeft(0){(a,b) => a+b } As can be seen in above example, we are passing a function to add two variables "a" and "b" to another function "foldLeft" which is provided by Scala library on collections. We have also not used any iteration logic and temporary variable to calculate the sum. "foldLeft" method eliminates the need to maintain state in temporary variable which would have otherwise be required if we were to write this code in pure Java way (as mentioned below). int total = 0; for(int number in numbers){ total+=number; } Scala function can easily be executed in parallel without any need for synchronization since it does not mutate state. This was just a small example to showcase the power of Scala as functional programming language. There are whole lot of features available in Scala to write code in functional style. Clojure Clojure is a dynamic language with an excellent support for writing code in functional style. It is a dialect of "lisp" programming language with an efficient and robust infrastructure for multithreaded programming. Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures. When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct multithreaded designs. Apart from this since Clojure is a dynamic language, it allows to modify class definition at run time by adding new methods or modifying existing one at run time. This makes it different from Scala which is a statically typed language. Immutability is in the root of Clojure. To create immutable list just following needs to be done. By default list in Clojure is immutable, so does not require any extra effort. (def numbers (list 1 2 3 4 5 6 7 8 9 10)) To add numbers without maintaining state, reduce function can be used as mentioned below (reduce + 0 '(1 2 3 4 5 6 7 8 9 10)) As can be seen, adding list of numbers just requires one line of code without mutating any state. This is the beauty about functional programming languages and plays an important role for parallel execution. Groovy Groovy is again a dynamic language with some support for functional programming. Amongst the 3 languages, Groovy can be considered weakest in terms of functional programming features. However because of it's dynamic nature and close resemblance to Java, it has been widely accepted and considered good alternative to Java. Groovy does not provide immutable objects out of the box but has excellent support for higher order functions. Immutable objects can be created with annotation @Immutation, but it's far less flexible than immutablity support in Scala and Clojure. In Groovy functions can be passed around just as any other variable in the form of Closures. 
The same example in Groovy can be written as follows:

def numbers = [1,2,3,4,5,6,7,8,9,10]
def total = numbers.inject(0){a,b -> a+b }

The point to note, however, is that the variables "numbers" and "total" are not immutable and can be modified at any point in time, so writing multithreaded code can be a bit challenging. Groovy does, though, provide the concepts of Actors, Agents and DataFlow variables via a library called GPars (Groovy Parallel Systems), which reduces the challenges associated with multithreaded code to a great extent.

Java 8 Lambda Expressions
Java has finally realized the power of writing code in a functional style and is going to support the concept of closures starting with Java 8. JSR 335 - Lambda Expressions for the Java Programming Language aims to support programming in a multicore environment by adding closures and related features to the Java language. So it will finally be possible to pass functions around just like variables in pure Java code. Currently, if someone wants to try out and play around with lambda expressions, Project Lambda of OpenJDK provides a prototype implementation of JSR-335. The following code snippet should run fine with the OpenJDK Project Lambda compiler:

ExecutorService executor = Executors.newCachedThreadPool();
executor.submit(() -> { System.out.println("I am running"); });

As can be seen above, a closure (function) has been passed to the executor's submit method. It does not take any arguments, hence the empty parentheses (). This function just prints "I am running" when executed. Just as we can pass functions to functions, it will also be possible to create closures within functions and return closures from functions. I would recommend trying out OpenJDK to get a feel for lambda expressions, which are going to be part of Java 8.

Conclusion
So this was all about functional programming: its concepts, its benefits, and the options available on the JVM for writing functional code. Functional programming requires a different mind-set and can be very useful if used correctly. Functional programming together with object-oriented programming can be a jewel in the crown. As discussed, there are various options available for writing code in a functional style that executes on the JVM. The choice depends on various factors, and no one language can be considered best in all aspects. One thing is for sure, though: going forward we are going to see more and more usage of functional programming.
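For readers trying this on a released JDK: the prototype syntax above eventually shipped, with small changes, as part of Java 8. Below is a minimal sketch of the three higher-order-function capabilities listed earlier (passing, creating and returning functions), written against the java.util.function.Function type that final Java 8 introduced; that type postdates this article's OpenJDK prototype and is an assumption here:

import java.util.function.Function;

public class HigherOrderDemo {

    // Returns a new function: a closure over "prefix" created inside a method
    static Function<String, String> prefixer(String prefix) {
        return s -> prefix + s;
    }

    // Takes a function as an argument and applies it
    static String apply(Function<String, String> f, String input) {
        return f.apply(input);
    }

    public static void main(String[] args) {
        Function<String, String> hello = prefixer("Hello, ");
        System.out.println(apply(hello, "world")); // prints: Hello, world
    }
}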
May 14, 2012
by Gagan Agrawal
· 29,061 Views
article thumbnail
Apache Camel Tutorial—EIP, Routes, Components, Testing, and More
Learn how Apache Camel implements the EIPs and offers a standardized, internal domain-specific language (DSL) to integrate applications.
May 7, 2012
by Kai Wähner
· 134,591 Views · 4 Likes
article thumbnail
Java Thread Deadlock: A Case Study
This article will describe the complete root cause analysis of a recent Java deadlock problem observed from a Weblogic 11g production system running on the IBM JVM 1.6. This case study will also demonstrate the importance of mastering Thread Dump analysis skills, including for the IBM JVM Thread Dump format.

Environment specification
Java EE server: Oracle Weblogic Server 11g & Spring 2.0
OS: AIX 5.3
Java VM: IBM JRE 1.6.0
Platform type: Portal & ordering application

Monitoring and troubleshooting tools
JVM Thread Dump (IBM JVM format)
Compuware Server Vantage (Weblogic JMX monitoring & alerting)

Problem overview
A major stuck Threads problem was observed & reported from Compuware Server Vantage, affecting 2 of our Weblogic 11g production managed servers and causing application impact and timeout conditions for our end users.

Gathering and validation of facts
As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

· What is the client impact? MEDIUM (only 2 managed servers / JVMs affected out of 16)
· Recent change of the affected platform? Yes (new JMS related asynchronous component)
· Any recent traffic increase to the affected platform? No
· How does this problem manifest itself? A sudden increase of Threads was observed, leading to rapid Thread depletion
· Did a Weblogic managed server restart resolve the problem? Yes, but the problem returned after a few hours (unpredictable & intermittent pattern)

- Conclusion #1: The problem is related to an intermittent stuck Threads behaviour affecting only a few Weblogic managed servers at a time
- Conclusion #2: Since the problem is intermittent, a global root cause such as a non-responsive downstream system is not likely

Thread Dump analysis – first pass
The first thing to do when dealing with stuck Thread problems is to generate a JVM Thread Dump. This is a golden rule regardless of your environment specifications & problem context. A JVM Thread Dump snapshot provides you with crucial information about the active Threads and what type of processing / tasks they are performing at that time. Now back to our case study: an IBM JVM Thread Dump (javacore.xyz format) was generated, which revealed the following Java Thread deadlock condition:

1LKDEADLOCK    Deadlock detected !!!
NULL           ---------------------
NULL
2LKDEADLOCKTHR  Thread "[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'" (0x000000012CC08B00)
3LKDEADLOCKWTR  is waiting for:
4LKDEADLOCKMON  sys_mon_t:0x0000000126171DF8 infl_mon_t: 0x0000000126171E38:
4LKDEADLOCKOBJ  weblogic/jms/frontend/FESession@0x07000000198048C0/0x07000000198048D8:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread "[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'" (0x000000012E560500)
3LKDEADLOCKWTR  which is waiting for:
4LKDEADLOCKMON  sys_mon_t:0x000000012884CD60 infl_mon_t: 0x000000012884CDA0:
4LKDEADLOCKOBJ  weblogic/jms/frontend/FEConnection@0x0700000019822F08/0x0700000019822F20:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread "[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'" (0x000000012CC08B00)

This deadlock situation can be translated as per below:
- Weblogic Thread #8 is waiting to acquire an Object monitor lock owned by Weblogic Thread #10
- Weblogic Thread #10 is waiting to acquire an Object monitor lock owned by Weblogic Thread #8
Conclusion: both Weblogic Threads #8 & #10 are waiting on each other, forever!

Now before going any deeper into this root cause analysis, let me give you a high-level overview of Java Thread deadlocks.

Java Thread deadlock overview
Most of you are probably familiar with Java Thread deadlock principles, but have you ever experienced a true deadlock problem? From my experience, true Java deadlocks are rare; I have only seen ~5 occurrences over the last 10 years. The reason is that most stuck-Thread-related problems are due to Thread hanging conditions (waiting on a remote IO call etc.) and do not involve a true deadlock with other Thread(s). A Java Thread deadlock is a situation, for example, where Thread A is waiting to acquire an Object monitor lock held by Thread B, which is itself waiting to acquire an Object monitor lock held by Thread A. Both these Threads will wait for each other forever: the two Threads form a cycle, each blocked on a lock the other holds.

Thread deadlock is confirmed… now what can you do?
Once the deadlock is confirmed (most JVM Thread Dump implementations will highlight it for you), the next step is to perform a deeper dive analysis by reviewing each Thread involved in the deadlock situation along with its current task & wait condition. Find below the partial Thread Stack Trace from our problem case for each Thread involved in the deadlock condition:

** Please note that the real application Java package name was renamed for confidentiality purposes **

Weblogic Thread #8
"[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x000000012CC08B00, j9thread_t:0x00000001299E5100, java/lang/Thread:0x070000001D72EE00, state:B, prio=1
(native thread ID:0x111200F, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.stop(FEConnection.java:671(Compiled Code))
at weblogic/jms/frontend/FEConnection.invoke(FEConnection.java:1685(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSConnection.stop(JMSConnection.java:863(Compiled Code))
at weblogic/jms/client/WLConnectionImpl.stop(WLConnectionImpl.java:843)
at org/springframework/jms/connection/SingleConnectionFactory.closeConnection(SingleConnectionFactory.java:342)
at org/springframework/jms/connection/SingleConnectionFactory.resetConnection(SingleConnectionFactory.java:296)
at org/app/JMSReceiver.receive()
……………………………………………………………………

Weblogic Thread #10
"[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x000000012E560500, j9thread_t:0x000000012E35BCE0, java/lang/Thread:0x070000001ECA9200, state:B, prio=1
(native thread ID:0x4FA027, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.getPeerVersion(FEConnection.java:1381(Compiled Code))
at weblogic/jms/frontend/FESession.setUpBackEndSession(FESession.java:755(Compiled Code))
at weblogic/jms/frontend/FESession.consumerCreate(FESession.java:1025(Compiled Code))
at weblogic/jms/frontend/FESession.invoke(FESession.java:2995(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSSession.consumerCreate(JMSSession.java:2982(Compiled Code))
at weblogic/jms/client/JMSSession.setupConsumer(JMSSession.java:2749(Compiled Code))
at weblogic/jms/client/JMSSession.createConsumer(JMSSession.java:2691(Compiled Code))
at weblogic/jms/client/JMSSession.createReceiver(JMSSession.java:2596(Compiled Code))
at weblogic/jms/client/WLSessionImpl.createReceiver(WLSessionImpl.java:991(Compiled Code))
at org/springframework/jms/core/JmsTemplate102.createConsumer(JmsTemplate102.java:204(Compiled Code))
at org/springframework/jms/core/JmsTemplate.doReceive(JmsTemplate.java:676(Compiled Code))
at org/springframework/jms/core/JmsTemplate$10.doInJms(JmsTemplate.java:652(Compiled Code))
at org/springframework/jms/core/JmsTemplate.execute(JmsTemplate.java:412(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:650(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:641(Compiled Code))
at org/app/JMSReceiver.receive()
……………………………………………………

As you can see in the above Thread Stack Traces, the deadlock originated from our application code, which uses the Spring framework API for the JMS consumer implementation (very useful when not using MDBs). The Stack Traces are quite interesting and reveal that both Threads are in a race condition against the same Weblogic JMS consumer session / connection, leading to a deadlock situation:

- Weblogic Thread #8 is attempting to reset and close the current JMS connection
- Weblogic Thread #10 is attempting to use the same JMS Connection / Session in order to create a new JMS consumer
- Thread deadlock is triggered!

Root cause: non-Thread-safe Spring JMS SingleConnectionFactory implementation
A code review and some quick research in the Spring JIRA bug database revealed the following Thread safety defect, correlating perfectly with the above analysis:

# SingleConnectionFactory's resetConnection is causing deadlocks with underlying OracleAQ's JMS connection
https://jira.springsource.org/browse/SPR-5987

A patch for Spring SingleConnectionFactory was released back in 2009, which involved adding a proper synchronized{} block in order to prevent Thread deadlock in the event of a JMS Connection reset operation:

synchronized (connectionMonitor) {
    // if condition added to avoid possible deadlocks when trying to reset the target connection
    if (!started) {
        this.target.start();
        started = true;
    }
}

Solution
Our team is currently planning to integrate this Spring patch into our production environment shortly. The initial tests performed in our test environment are positive.

Conclusion
I hope this case study has helped you understand a real-life Java Thread deadlock problem and how proper Thread Dump analysis skills can allow you to quickly pinpoint the root cause of stuck Thread related problems at the code level. Please don't hesitate to post any comments or questions.
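If you want to reproduce a true deadlock in isolation, a minimal sketch is enough (my own two-lock illustration, not the Weblogic scenario analyzed above): run it, capture a Thread Dump, and the JVM deadlock detector will flag both Threads.

public class DeadlockDemo {

    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(new Runnable() {
            public void run() {
                synchronized (LOCK_A) {
                    pause(); // give the other Thread time to grab LOCK_B
                    synchronized (LOCK_B) {
                        System.out.println("Thread 1 acquired both locks");
                    }
                }
            }
        }).start();

        new Thread(new Runnable() {
            public void run() {
                synchronized (LOCK_B) {
                    pause(); // give the other Thread time to grab LOCK_A
                    synchronized (LOCK_A) {
                        System.out.println("Thread 2 acquired both locks");
                    }
                }
            }
        }).start();
    }

    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ignored) {
        }
    }
}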
May 6, 2012
by Pierre-Hugues Charbonneau
· 14,561 Views
article thumbnail
Spring Integration - Payload Storage via Claim-check
Continuing on the theme of temporary storage for transient messages used within Spring Integration flows, the claim-check model offers configurable storage for message payloads. The advantage of using this Enterprise Integration pattern over header enrichment is that objects don't have to be packed into the message header; they can be stored in a local Java Map, an IMDB, a cache or anything else that can be used to hold data.

Several advantages of this approach are evident. Firstly, performance and efficiency: when using header enrichment, if message payloads need to be managed outside of the JVM that generates the enriched message header, the object will not be available unless it's serialised and transported around the distributed application. This can be costly in terms of performance and transport efficiency; the key factors here are the frequency of remote dispatch and the size of the header object. In such circumstances the claim-check pattern may offer an advantage: objects can be serialised and/or transformed into a storage-specific format and stored internally in memory or externally in a data store. Secondly, accessibility: it's conceivable that message payloads undergoing claim-check processing may need to be accessed by third-party applications that are unable to receive Spring Integration messages; the claim-check pattern allows this type of processing to take place. Thirdly, resiliency: a data store can be chosen that guarantees persistence for messages, so that they can be recovered following failure.

The following code details how the claim-check pattern can be used. The gateway is specified as the following Java class:

package com.l8mdv.sample;

import org.springframework.integration.Message;
import org.springframework.integration.annotation.Gateway;

public interface ClaimCheckGateway {

    public static final String CLAIM_CHECK_ID = "CLAIM_CHECK_ID";

    @Gateway(requestChannel = "claim-check-in-channel")
    public Message send(Message message);
}

Lastly, this can all be tested using the following JUnit test case:

package com.l8mdv.sample;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static com.l8mdv.sample.ClaimCheckGateway.CLAIM_CHECK_ID;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:META-INF/spring/claim-check.xml"})
public class ClaimCheckIntegrationTest {

    @Autowired
    ClaimCheckGateway claimCheckGateway;

    @Test
    public void locatePayloadInHeader() {
        String payload = "Sample test message.";
        Message message = MessageBuilder.withPayload(payload).build();
        Message response = claimCheckGateway.send(message);
        Assert.assertTrue(response.getPayload().equals(payload));
        Assert.assertTrue(response.getHeaders().get(CLAIM_CHECK_ID) != null);
    }
}
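The Spring context file (claim-check.xml) referenced by the test did not survive formatting here. To illustrate the pattern itself, independently of Spring Integration's configuration, below is a minimal in-memory claim-check store sketch; the class and method names are illustrative assumptions, not part of the original sample:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory claim-check store: check a payload in and get an ID back;
// later, any component holding the ID can claim the payload out again.
public class ClaimCheckStore {

    private final Map<String, Object> store = new ConcurrentHashMap<String, Object>();

    public String checkIn(Object payload) {
        String id = UUID.randomUUID().toString();
        store.put(id, payload);
        return id; // the ID travels in a lightweight header such as CLAIM_CHECK_ID
    }

    public Object checkOut(String id) {
        return store.remove(id); // remove-on-claim keeps the store from growing
    }
}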
May 4, 2012
by Matt Vickery
· 13,763 Views
article thumbnail
Preventing CSRF in Java Web Apps
Cross-site request forgery attacks (CSRF) are very common in web applications and can cause significant harm if allowed. If you have never heard of CSRF, I recommend you check out OWASP's page about it. Luckily, preventing CSRF attacks is quite simple; I'll try to show you how they work and how we can defend against them in the least obtrusive way possible in Java-based web apps.

Imagine you are about to perform a money transfer in your bank's secure web page. When you click on the transfer option, a form page is loaded that allows you to choose the debit and credit accounts and enter the amount of money to move. When you are satisfied with your options you press "submit" and send the form information to your bank's web server, which in turn performs the transaction. Now add the following to the picture: a malicious website (which you think harmless, of course) is open in another window/tab of your browser while you are innocently moving all your millions in your bank's site. This evil site knows the bank's web form structure and, as you browse through it, it tries to post transactions withdrawing money from your accounts and depositing it in the evil overlord's accounts. It can do this because you have an open and valid session with the bank's site in the same browser! This is the basis for a CSRF attack.

One simple and effective way to prevent it is to generate a random (i.e. unpredictable) string when the initial transfer form is loaded and send it to the browser. The browser then sends this piece of data along with the transfer options, and the server validates it before approving the transaction for processing. This way, malicious websites cannot post transactions even if they have access to a valid session in a browser.

To implement this mechanism in Java I chose to use two filters: one to create the salt for each request, and another to validate it. Since the user's request and the subsequent POSTs or GETs that should be validated do not necessarily get executed in order, I decided to use a time-based cache to store a list of valid salt strings.
The first filter, used to generate a new salt for a request and store it in the cache, can be coded as follows:

package com.ricardozuasti.csrf;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.io.IOException;
import java.security.SecureRandom;
import java.util.concurrent.TimeUnit;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.lang.RandomStringUtils;

public class LoadSalt implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Assume it's HTTP
        HttpServletRequest httpReq = (HttpServletRequest) request;

        // Check the user session for the salt cache, if none is present we create one
        Cache<String, Boolean> csrfPreventionSaltCache =
                (Cache<String, Boolean>) httpReq.getSession().getAttribute("csrfPreventionSaltCache");

        if (csrfPreventionSaltCache == null) {
            csrfPreventionSaltCache = CacheBuilder.newBuilder()
                    .maximumSize(5000)
                    .expireAfterWrite(20, TimeUnit.MINUTES)
                    .build();
            httpReq.getSession().setAttribute("csrfPreventionSaltCache", csrfPreventionSaltCache);
        }

        // Generate the salt and store it in the user's cache
        String salt = RandomStringUtils.random(20, 0, 0, true, true, null, new SecureRandom());
        csrfPreventionSaltCache.put(salt, Boolean.TRUE);

        // Add the salt to the current request so it can be used
        // by the page rendered in this request
        httpReq.setAttribute("csrfPreventionSalt", salt);

        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}

I used Guava's CacheBuilder to create the salt cache since it has both a size limit and an expiration timeout per entry. To generate the actual salt I used Apache Commons RandomStringUtils, powered by Java 6 SecureRandom to ensure a strong generation seed.

This filter should be used in all requests ending in a page that will link, post or call via AJAX a secured transaction, so in most cases it's a good idea to map it to every request (maybe with the exception of static content such as images, CSS, etc.). Its mapping in your web.xml should look similar to:

...
<filter>
    <filter-name>loadSalt</filter-name>
    <filter-class>com.ricardozuasti.csrf.LoadSalt</filter-class>
</filter>
...
<filter-mapping>
    <filter-name>loadSalt</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
...

As I said, to validate the salt before executing secure transactions we can write another filter:

package com.ricardozuasti.csrf;

import com.google.common.cache.Cache;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class ValidateSalt implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Assume it's HTTP
        HttpServletRequest httpReq = (HttpServletRequest) request;

        // Get the salt sent with the request
        String salt = (String) httpReq.getParameter("csrfPreventionSalt");

        // Validate that the salt is in the cache
        Cache<String, Boolean> csrfPreventionSaltCache =
                (Cache<String, Boolean>) httpReq.getSession().getAttribute("csrfPreventionSaltCache");

        if (csrfPreventionSaltCache != null && salt != null
                && csrfPreventionSaltCache.getIfPresent(salt) != null) {
            // If the salt is in the cache, we move on
            chain.doFilter(request, response);
        } else {
            // Otherwise we throw an exception aborting the request flow
            throw new ServletException("Potential CSRF detected!! Inform a scary sysadmin ASAP.");
        }
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}

You should configure this filter for every request that needs to be secure (i.e. retrieves or modifies sensitive information, moves money, etc.), for example:

...
<filter>
    <filter-name>validateSalt</filter-name>
    <filter-class>com.ricardozuasti.csrf.ValidateSalt</filter-class>
</filter>
...
<filter-mapping>
    <filter-name>validateSalt</filter-name>
    <url-pattern>/transferMoneyServlet</url-pattern>
</filter-mapping>
...

After configuring both filters, all your secured requests should fail :). To fix that, you have to add, to each link and form post that ends in a secure URL, the csrfPreventionSalt parameter containing the value of the request attribute with the same name. For example, in an HTML form within a JSP page:

<form action="transferMoneyServlet" method="get">
    ...
    <input type="hidden" name="csrfPreventionSalt" value="${csrfPreventionSalt}"/>
    ...
</form>

Of course you can write a custom tag, a nice piece of Javascript or whatever you prefer to inject the new parameter into every needed link/form.
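As a sketch of the custom-tag route just mentioned: the SaltFieldTag class below is a hypothetical name and wiring of my own, not part of the original article; it simply renders the hidden field from the request attribute set by LoadSalt.

package com.ricardozuasti.csrf;

import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.tagext.SimpleTagSupport;

// Hypothetical JSP tag so forms can include something like <csrf:saltField/>
// instead of hand-writing the hidden input on every page.
public class SaltFieldTag extends SimpleTagSupport {

    @Override
    public void doTag() throws JspException, IOException {
        PageContext pageContext = (PageContext) getJspContext();
        String salt = (String) pageContext.getRequest().getAttribute("csrfPreventionSalt");
        getJspContext().getOut().write(
                "<input type=\"hidden\" name=\"csrfPreventionSalt\" value=\"" + salt + "\"/>");
    }
}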
May 1, 2012
by Ricardo Zuasti
· 130,190 Views · 7 Likes
article thumbnail
Java Thread CPU Analysis on Windows
This article will provide you with a tutorial on how you can quickly pinpoint the Java Thread contributors to a high CPU problem on the Windows OS. Windows, like other operating systems such as Linux, Solaris & AIX, allows you to monitor CPU utilization at the process level, but also for each individual Thread executing a task within a process. For this tutorial, we created a simple Java program that will allow you to learn this technique in a step-by-step manner.

Troubleshooting tools
The following tools will be used for this tutorial:
- Windows Process Explorer (to pinpoint high CPU Thread contributors)
- JVM Thread Dump (for Thread correlation and root cause analysis at code level)

High CPU simulator Java program
The program below simply loops and creates new String objects. It will allow us to perform this CPU-per-Thread analysis. I recommend that you import it into an IDE of your choice, e.g. Eclipse, and run it from there. You should observe an increase of CPU on your Windows machine as soon as you execute it.

package org.ph.javaee.tool.cpu;

/**
 * HighCPUSimulator
 * @author Pierre-Hugues Charbonneau
 * http://javaeesupportpatterns.blogspot.com
 */
public class HighCPUSimulator {

    private final static int NB_ITERATIONS = 500000000;

    // ~1 KB data footprint
    private final static String DATA_PREFIX = "datadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadata";

    /**
     * @param args
     */
    public static void main(String[] args) {
        System.out.println("HIGH CPU Simulator 1.0");
        System.out.println("Author: Pierre-Hugues Charbonneau");
        System.out.println("http://javaeesupportpatterns.blogspot.com/");

        try {
            for (int i = 0; i < NB_ITERATIONS; i++) {
                // Perform some String manipulations to slow down and expose the looping process...
                String data = DATA_PREFIX + i;
            }
        } catch (Throwable any) {
            System.out.println("Unexpected Exception! " + any.getMessage() + " [" + any + "]");
        }

        System.out.println("HighCPUSimulator done!");
    }
}

Step #1 – Launch Process Explorer
The Process Explorer tool visually shows CPU usage dynamically, which is good for live analysis. If you need historical data on CPU per Thread, you can also use Windows perfmon with the % Processor Time & Thread Id data counters. You can download Process Explorer from the link below:
http://technet.microsoft.com/en-us/sysinternals/bb896653
In our example, you can see that the Eclipse javaw.exe process is now using ~25% of total CPU utilization following the execution of our sample program.

Step #2 – Launch the Process Explorer Threads view
The next step is to display the Threads view of the javaw.exe process. Simply right-click on the javaw.exe process and select Properties. The Threads view will be opened as per the below snapshot:
- The first column is the Thread Id (decimal format)
- The second column is the CPU utilization % used by each Thread
- The third column is another counter indicating whether the Thread is running on the CPU
In our example, we can see our primary culprit is Thread Id #5996, using ~25% of CPU.

Step #3 – Generate a JVM Thread Dump
At this point, Process Explorer will no longer be useful.
The goal was to pinpoint one or multiple Java Threads consuming most of the Java process CPU utilization, which is what we achieved. In order to go to the next level in your analysis, you will need to capture a JVM Thread Dump. This will allow you to correlate the Thread Id with the Thread Stack Trace, so you can pinpoint what type of processing is consuming such high CPU. JVM Thread Dump generation can be done in a few ways. If you are using the JRockit VM you can simply use the jrcmd tool, e.g. jrcmd <pid> print_threads. Once you have the Thread Dump data, simply search for the Thread Id and locate the Thread Stack Trace that you are interested in. For our example, the Thread "Main Thread", which was fired from Eclipse, got exposed as the primary culprit, which is exactly what we wanted to demonstrate.

"Main Thread" id=1 idx=0x4 tid=5996 prio=5 alive, native_blocked
at org/ph/javaee/tool/cpu/HighCPUSimulator.main (HighCPUSimulator.java:31)
at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
-- end of trace

Step #4 – Analyze the culprit Thread(s) Stack Trace and determine root cause
At this point you should have everything that you need to move forward with the root cause analysis. You will need to review each Thread Stack Trace and determine what type of problem you are dealing with. That final step is typically where you will spend most of your time; the problem can be simple, such as infinite looping, or complex, such as garbage collection related problems. In our example, the Thread Dump revealed that the high CPU originates from our sample Java program around line 31. As expected, it revealed the looping condition that we engineered on purpose for this tutorial.

for (int i = 0; i < NB_ITERATIONS; i++) {
    // Perform some String manipulations to slow down and expose the looping process...
    String data = DATA_PREFIX + i;
}

I hope this tutorial has helped you understand how you can analyze and pinpoint the root cause of Java CPU problems on the Windows OS. Please stay tuned for more updates; the next article will provide you with a Java CPU troubleshooting guide, including how to tackle that last analysis step, along with common problem patterns.
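As a complement to Process Explorer, the JVM itself can report per-Thread CPU time through the standard java.lang.management API. The sketch below is my addition, not part of the original tutorial; it prints the CPU time consumed by each live Thread:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuReporter {

    public static void main(String[] args) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        if (!tmx.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time is not supported on this JVM");
            return;
        }
        for (long id : tmx.getAllThreadIds()) {
            ThreadInfo info = tmx.getThreadInfo(id);
            long cpuNanos = tmx.getThreadCpuTime(id); // -1 if the thread died or the feature is disabled
            if (info != null && cpuNanos >= 0) {
                System.out.printf("%-40s cpu=%d ms%n", info.getThreadName(), cpuNanos / 1000000);
            }
        }
    }
}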
April 30, 2012
by Pierre-Hugues Charbonneau
· 18,939 Views · 1 Like
article thumbnail
yield(), sleep(0), wait(0,1) and parkNanos(1)
On the surface these methods do the same thing in Java: Thread.yield(), Thread.sleep(0), Object.wait(0,1) and LockSupport.parkNanos(1). They all wait a short period of time, but how long that actually is varies a surprising amount, and between platforms.

Timing a short delay
The following code times how long it takes to repeatedly call those methods.

import java.util.concurrent.locks.LockSupport;

public class Pausing {
    public static void main(String... args) throws InterruptedException {
        int repeat = 10000;
        for (int i = 0; i < 3; i++) {
            long time0 = System.nanoTime();
            for (int j = 0; j < repeat; j++)
                Thread.yield();
            long time1 = System.nanoTime();
            for (int j = 0; j < repeat; j++)
                Thread.sleep(0);
            long time2 = System.nanoTime();
            synchronized (Thread.class) {
                for (int j = 0; j < repeat / 10; j++)
                    Thread.class.wait(0, 1);
            }
            long time3 = System.nanoTime();
            for (int j = 0; j < repeat / 10; j++)
                LockSupport.parkNanos(1);
            long time4 = System.nanoTime();
            System.out.printf("The average time to yield %.1f μs, sleep(0) %.1f μs, "
                    + "wait(0,1) %.1f μs and LockSupport.parkNanos(1) %.1f μs%n",
                    (time1 - time0) / repeat / 1e3, (time2 - time1) / repeat / 1e3,
                    (time3 - time2) / (repeat / 10) / 1e3, (time4 - time3) / (repeat / 10) / 1e3);
        }
    }
}

On Windows 7
The average time to yield 0.3 μs, sleep(0) 0.6 μs, wait(0,1) 999.9 μs and LockSupport.parkNanos(1) 1000.0 μs
The average time to yield 0.3 μs, sleep(0) 0.6 μs, wait(0,1) 999.5 μs and LockSupport.parkNanos(1) 1000.1 μs
The average time to yield 0.2 μs, sleep(0) 0.5 μs, wait(0,1) 1000.0 μs and LockSupport.parkNanos(1) 1000.1 μs

On RHEL 5.x
The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2003.8 μs and LockSupport.parkNanos(1) 3.8 μs
The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2004.8 μs and LockSupport.parkNanos(1) 3.4 μs
The average time to yield 1.1 μs, sleep(0) 1.1 μs, wait(0,1) 2005.6 μs and LockSupport.parkNanos(1) 3.1 μs

In summary
If you want to wait for a short period of time, you can't assume that all these methods do the same thing, nor that they will behave the same across platforms.
April 27, 2012
by Peter Lawrey
· 9,378 Views
article thumbnail
Camel Exception Handling Overview for a Java DSL
Here are some notes on adding Camel (from v2.3+) exception handling to a Java DSL route. There are various approaches/options available; these notes cover the important distinctions between them...

Default handling
The default mode uses the DefaultErrorHandler strategy, which simply propagates any exception back to the caller and ends the route immediately. This is rarely the desired behavior; at the very least, you should define a generic/global exception handler to log the errors and put them on a queue for further analysis (during development, testing, etc.).

onException(Exception.class)
    .to("log:GeneralError?level=ERROR")
    .to("activemq:queue:GeneralErrorQueue");

Try-catch-finally
This approach mimics Java's try/catch/finally exception handling and is designed to be very readable and easy to implement. It inlines the try/catch/finally blocks directly in the route and is useful for route-specific error handling.

from("direct:start")
    .doTry()
        .process(new MyProcessor())
    .doCatch(Exception.class)
        .to("mock:error")
    .doFinally()
        .to("mock:end");

onException
This approach defines the exception clause separately from the route. This makes the route and the exception handling code more readable and reusable. Also, the exception handling will apply to any routes defined in its CamelContext.

onException(Exception.class)
    .to("mock:error");

from("direct:start")
    .process(new MyProcessor())
    .to("mock:end");

Handled/Continued
These APIs provide valuable control over the flow. Adding handled(true) tells Camel not to propagate the error back to the caller (this should almost always be used). continued(true) tells Camel to resume the route where it left off (rarely used, but powerful). These can both be used to control the flow of the route in interesting ways, for example...

from("direct:start")
    .process(new MyProcessor())
    .to("mock:end");

// send the exception back to the client (rarely used, clients need a meaningful response)
onException(ClientException.class)
    .handled(false) // default
    .log("error sent back to the client");

// send a readable error message back to the client and handle the error internally
onException(HandledException.class)
    .handled(true)
    .setBody(constant("error"))
    .to("mock:error");

// ignore the exception and continue the route (can be dangerous, use wisely)
onException(ContinuedException.class)
    .continued(true);

Using a processor for more control
If you need more control over the handler code, you can use an inline Processor to get a handle on the exception that was thrown and write your own handler code...

onException(Exception.class)
    .handled(true)
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            Exception exception = (Exception) exchange.getProperty(Exchange.EXCEPTION_CAUGHT);
            // log, email, reroute, etc.
        }
    });

Summary
Overall, Camel's exception handling is very flexible and can meet almost any scenario you can come up with. For the sake of focusing on the basics, many advanced features haven't been covered here. For more details, see these pages on Camel's site...
http://camel.apache.org/error-handling-in-camel.html
http://camel.apache.org/try-catch-finally.html
http://camel.apache.org/exception-clause.html
April 25, 2012
by Ben O'Day
· 65,875 Views · 2 Likes
article thumbnail
Guava Splitter vs StringUtils
So I recently wrote a post about good old reliable Apache Commons StringUtils, which provoked a couple of comments, one of which was that Google Guava provides better mechanisms for joining and splitting Strings. I have to admit this is a corner of Guava I'd yet to explore, so I thought I ought to take a closer look and compare it with StringUtils, and I have to admit I was surprised at what I found.

Splitting strings, eh? There can't be many different ways of doing this, surely? Well, Guava and StringUtils do take a stylistically different approach. Let's start with the basic usage.

// Apache StringUtils...
String[] tokens1 = StringUtils.split("one,two,three", ',');

// Guava splitter...
Iterable<String> tokens2 = Splitter.on(',').split("one,two,three");

So, my first observation is that Splitter is more object-orientated. You have to create a Splitter object, which you then use to do the splitting, whereas the StringUtils split methods use a more functional style, with static methods. Here I much prefer Splitter. Need a reusable splitter that splits comma-separated lists? A splitter that also trims leading and trailing white space and ignores empty elements? Not a problem:

Splitter niceCommaSplitter = Splitter.on(',')
    .omitEmptyStrings()
    .trimResults();

niceCommaSplitter.split("one,, two, three"); //"one","two","three"
niceCommaSplitter.split(" four , five ");    //"four","five"

That looks really useful; any other differences? The other thing to notice is that Splitter returns an Iterable, whereas StringUtils.split returns a String array. I didn't really see that making much of a difference; most of the time I just want to loop through the tokens in order anyway! I also didn't think it was a big deal, until I examined the performance of the two approaches. To do this I tried running the following code:

final String numberList = "One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten";

long start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    StringUtils.split(numberList, ',');
}
System.out.println(System.currentTimeMillis() - start);

start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    Splitter.on(',').split(numberList);
}
System.out.println(System.currentTimeMillis() - start);

On my machine this output the following times:

594
31

Guava's Splitter is almost 20 times faster!

Now this is a much bigger difference than I was expecting: Splitter is nearly 20 times faster than StringUtils. How can this be? Well, I suspect it's something to do with the return type. Splitter returns an Iterable, whereas StringUtils.split gives you an array of Strings, so Splitter doesn't actually need to create new String objects. It's also worth noting you can cache your Splitter object, which results in an even faster runtime.

Blimey, end of argument? Guava's Splitter wins every time? Hold on a second. This isn't quite the full story. Notice we're not actually doing anything with the resulting Strings? Like I mentioned, it looks like Splitter isn't actually creating any new Strings; I suspect it's deferring this to the Iterator object it returns. So can we test this? Sure thing.
Here's some code to repeatedly check the lengths of the generated substrings:

final String numberList = "One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten";

long start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    final String[] numbers = StringUtils.split(numberList, ',');
    for (String number : numbers) {
        number.length();
    }
}
System.out.println(System.currentTimeMillis() - start);

Splitter splitter = Splitter.on(',');

start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    Iterable<String> numbers = splitter.split(numberList);
    for (String number : numbers) {
        number.length();
    }
}
System.out.println(System.currentTimeMillis() - start);

On my machine this outputs:

609
2048

Guava's Splitter is more than three times slower! Indeed, I was expecting them to be about the same, or maybe Guava slightly faster, so this is another surprising result. It looks like, by returning an Iterable, Splitter is trading immediate gains for longer-term pain. There's also a moral here about making sure performance tests are actually testing something useful.

In conclusion, I think I'll still use Splitter most of the time. On small lists the difference in performance is going to be negligible, and Splitter just feels much nicer to use. Still, I was surprised by the result, and if you're splitting lots of Strings and performance is an issue, it might be worth considering switching back to Commons StringUtils.
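One way to make the two benchmarks comparable is to materialize Splitter's lazy Iterable up front. A small sketch using Guava's Lists helper (my addition, not from the original post):

import java.util.List;
import com.google.common.base.Splitter;
import com.google.common.collect.Lists;

public class SplitterMaterialize {
    public static void main(String[] args) {
        Splitter splitter = Splitter.on(',');
        // Copying the lazy Iterable into a List forces all substrings
        // to be created immediately, like StringUtils.split does.
        List<String> tokens = Lists.newArrayList(splitter.split("one,two,three"));
        System.out.println(tokens); // [one, two, three]
    }
}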
April 25, 2012
by Tom Jefferys
· 39,998 Views
article thumbnail
Bridging between JMS and RabbitMQ (AMQP) using Spring Integration
An old customer recently asked me if I had a solution for integrating their existing JMS infrastructure on Websphere MQ with RabbitMQ. Although I know that RabbitMQ has the shovel plugin, which can bridge between Rabbit instances, I've not yet found a good plugin for JMS <-> AMQP forwarding. The first thing that came to my mind was to use a Spring Integration mediation, as SI has excellent support for both JMS and Rabbit. Curious as I am, I started a PoC and this is the result. It takes messages off a JMS queue and forwards them to an AMQP exchange that is bound to a queue the consumer application is supposed to listen to. I used an external HornetQ instance in JBoss 6.1 as the JMS provider, but I am confident that the same setup would work for Websphere MQ, as they both implement JMS. Be aware that I've done no performance tweaking or QoS setup yet, as this is just a proof-of-concept. For a real setup you'd probably have to think about delivery guarantees versus performance, etc. The code will be available at a GitHub repository near you soon.

Spring context in XML (the JNDI lookup of the JMS ConnectionFactory):

<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
    <property name="environment">
        <props>
            <prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory</prop>
            <prop key="java.naming.provider.url">jnp://localhost:1099</prop>
            <prop key="java.naming.factory.url.pkgs">org.jnp.interfaces:org.jboss.naming</prop>
        </props>
    </property>
</bean>

<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="jndiTemplate"/>
    <property name="jndiName" value="ConnectionFactory"/>
</bean>

Maven POM:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.rl</groupId>
    <artifactId>si.jmstorabbit</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>si.jmstorabbit</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <hornet.version>2.2.5.Final</hornet.version>
        <spring.integration.version>2.1.0.RELEASE</spring.integration.version>
    </properties>

    <repositories>
        <repository>
            <id>springsource-release</id>
            <url>http://repository.springsource.com/maven/bundles/release</url>
            <snapshots><enabled>false</enabled></snapshots>
        </repository>
        <repository>
            <id>springsource-external</id>
            <url>http://repository.springsource.com/maven/bundles/external</url>
            <snapshots><enabled>false</enabled></snapshots>
        </repository>
    </repositories>

    <dependencies>
        <dependency><groupId>org.springframework.integration</groupId><artifactId>spring-integration-core</artifactId><version>${spring.integration.version}</version></dependency>
        <dependency><groupId>org.springframework.integration</groupId><artifactId>spring-integration-file</artifactId><version>${spring.integration.version}</version></dependency>
        <dependency><groupId>org.springframework.integration</groupId><artifactId>spring-integration-amqp</artifactId><version>${spring.integration.version}</version></dependency>
        <dependency><groupId>org.springframework.integration</groupId><artifactId>spring-integration-jms</artifactId><version>${spring.integration.version}</version></dependency>
        <dependency><groupId>junit</groupId><artifactId>junit</artifactId><version>3.8.1</version><scope>test</scope></dependency>
        <dependency><groupId>org.springframework</groupId><artifactId>spring-context</artifactId><version>3.0.7.RELEASE</version></dependency>
        <dependency><groupId>jboss</groupId><artifactId>jnp-client</artifactId><version>4.2.2.GA</version></dependency>
        <dependency><groupId>org.hornetq</groupId><artifactId>hornetq-core-client</artifactId><version>${hornet.version}</version></dependency>
        <dependency><groupId>org.hornetq</groupId><artifactId>hornetq-jms-client</artifactId><version>${hornet.version}</version></dependency>
        <dependency><groupId>org.hornetq</groupId><artifactId>hornetq-jms</artifactId><version>${hornet.version}</version></dependency>
        <dependency><groupId>jboss</groupId><artifactId>jboss-common-client</artifactId><version>3.2.3</version></dependency>
        <dependency><groupId>org.jboss.netty</groupId><artifactId>netty</artifactId><version>3.2.7.Final</version></dependency>
        <dependency><groupId>javax.jms</groupId><artifactId>jms</artifactId><version>1.1</version></dependency>
    </dependencies>
</project>
April 24, 2012
by Billy Sjöberg
· 29,729 Views
article thumbnail
Connect to RabbitMQ Using Scala, Play and Akka
In this article we'll look at how you can connect from Scala to RabbitMQ so you can support the AMQP protocol from your applications. In this example I'll use the Play Framework 2.0 as the container to run the application in (for more info on this see my other article on this subject), since Play makes developing with Scala a lot easier. This article will also use Akka actors to send and receive messages from RabbitMQ.

What is AMQP
First, a quick introduction to AMQP. AMQP stands for "Advanced Message Queueing Protocol" and is an open standard for messaging. The AMQP homepage states their vision as this: "To become the standard protocol for interoperability between all messaging middleware". AMQP defines a transport-level protocol for exchanging messages that can be used to integrate applications from a number of different platforms, languages and technologies. There are a number of tools implementing this protocol, but one that is getting more and more attention is RabbitMQ. RabbitMQ is an open source, Erlang-based message broker that uses AMQP. All applications that can speak AMQP can connect to and make use of RabbitMQ. So in this article we'll show how you can connect from your Play2/Scala/Akka based application to RabbitMQ.

We'll show you how to implement the two most common scenarios:
Send / receive: We'll configure one sender to send a message every couple of seconds, and use two listeners that will read the messages, in a round-robin fashion, from the queue.
Publish / subscribe: For this example we'll create pretty much the same scenario, but this time both listeners will get the message at the same time.

I assume you've got an installation of RabbitMQ. If not, follow the instructions from their site.

Setup a basic Play 2 / Scala project
For this example I created a new Play 2 project. Doing this is very easy:

jos@Joss-MacBook-Pro.local:~/Dev/play-2.0-RC2$ ./play new Play2AndRabbitMQ

play! 2.0-RC2, http://www.playframework.org

The new application will be created in /Users/jos/Dev/play-2.0/PlayAndRabbitMQ

What is the application name?
> PlayAndRabbitMQ

Which template do you want to use for this new application?
1 - Create a simple Scala application
2 - Create a simple Java application
3 - Create an empty project
> 1

OK, application PlayAndRabbitMQ is created.
Have fun!

I'm used to working from Eclipse with the scala-ide plugin, so I execute play eclipsify and import the project in Eclipse. The next step is to set up the correct dependencies. Play uses sbt for this and allows you to configure your dependencies from the build.scala file in your project directory. The only dependency we'll add is the Java client library from RabbitMQ. Even though Lift provides a Scala-based AMQP library, I find using the RabbitMQ one directly just as easy. After adding the dependency my build.scala looks like this:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

  val appName = "PlayAndRabbitMQ"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "com.rabbitmq" % "amqp-client" % "2.8.1"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
  )
}

Add RabbitMQ configuration to the config file
For our examples we can configure a couple of things: the queue to send messages to, the exchange to use, and the host where RabbitMQ is running.
In a real-world scenario we would have more configuration options to set, but for this case we'll just have these three. Add the following to your application.conf so that we can reference it from our application:

#rabbit-mq configuration
rabbitmq.host=localhost
rabbitmq.queue=queue1
rabbitmq.exchange=exchange1

We can now access these configuration values using the ConfigFactory. To allow easy access, create the following object:

object Config {
  val RABBITMQ_HOST = ConfigFactory.load().getString("rabbitmq.host");
  val RABBITMQ_QUEUE = ConfigFactory.load().getString("rabbitmq.queue");
  val RABBITMQ_EXCHANGE = ConfigFactory.load().getString("rabbitmq.exchange");
}

Initialize the connection to RabbitMQ
We've got one more object to define before we look at how we can use RabbitMQ to send and receive messages. To work with RabbitMQ we require a connection, and we can get one from a ConnectionFactory. Look at the javadocs for more information on how to configure the connection.

object RabbitMQConnection {

  private var connection: Connection = null;

  /**
   * Return the existing connection if there is one;
   * else create and cache a new one.
   */
  def getConnection(): Connection = {
    connection match {
      case null => {
        val factory = new ConnectionFactory();
        factory.setHost(Config.RABBITMQ_HOST);
        connection = factory.newConnection();
        connection
      }
      case _ => connection
    }
  }
}

Start the listeners when the application starts
We need to do one more thing before we can look at the RabbitMQ code: we need to make sure our message listeners are registered on application startup and our senders start sending. Play 2 provides a GlobalSettings object for this, which you can extend to execute code when your application starts. For our example we'll use the following object (remember, this needs to be stored in the default package):

import play.api.mvc._
import play.api._
import rabbitmq.Sender

object Global extends GlobalSettings {
  override def onStart(app: Application) {
    Sender.startSending
  }
}

We'll look at this Sender.startSending operation, which initializes all the senders and receivers, in the following sections.

Setup the send and receive scenario
Let's look at the Sender.startSending code that sets up a sender that sends a message to a specific queue. For this we use the following piece of code:

object Sender {
  def startSending = {
    // create the connection
    val connection = RabbitMQConnection.getConnection();
    // create the channel we use to send
    val sendingChannel = connection.createChannel();
    // make sure the queue exists we want to send to
    sendingChannel.queueDeclare(Config.RABBITMQ_QUEUE, false, false, false, null);

    Akka.system.scheduler.schedule(2 seconds, 1 seconds,
      Akka.system.actorOf(Props(
        new SendingActor(channel = sendingChannel, queue = Config.RABBITMQ_QUEUE))),
      "MSG to Queue");
  }
}

class SendingActor(channel: Channel, queue: String) extends Actor {

  def receive = {
    case some: String => {
      val msg = (some + " : " + System.currentTimeMillis());
      channel.basicPublish("", queue, null, msg.getBytes());
      Logger.info(msg);
    }
    case _ => {}
  }
}

In this code we take the following steps:
- Use the factory to retrieve a connection to RabbitMQ
- Create a channel on this connection to use in communicating with RabbitMQ
- Use the channel to create the queue (if it doesn't exist yet)
- Schedule Akka to send a message to an actor every second

This should all be pretty straightforward. The only (somewhat) complex part is the scheduling. What this schedule operation does is this: we tell Akka to schedule a message to be sent to an actor.
We want a 2 second delay before it is fired, and we want to repeat the job every second. The actor to be used for this is the SendingActor you can also see in this listing. This actor needs access to a channel to send a message, and it also needs to know which queue to send the messages it receives to. So every second this Actor will receive a message, append a timestamp, and use the provided channel to send the result to the queue: channel.basicPublish("", queue, null, msg.getBytes());.

Now that we send a message each second, it would be nice to have listeners on this queue that can receive messages. For receiving messages we've also created an Actor that listens indefinitely on a specific queue.

class ListeningActor(channel: Channel, queue: String, f: (String) => Any) extends Actor {

  // called on the initial run
  def receive = {
    case _ => startReceiving
  }

  def startReceiving = {
    val consumer = new QueueingConsumer(channel);
    channel.basicConsume(queue, true, consumer);

    while (true) {
      // wait for the message
      val delivery = consumer.nextDelivery();
      val msg = new String(delivery.getBody());

      // send the message to the provided callback function
      // and execute this in a subactor
      context.actorOf(Props(new Actor {
        def receive = {
          case some: String => f(some);
        }
      })) ! msg
    }
  }
}

This actor is a little more complex than the one we used for sending. When this actor receives a message (the kind of message doesn't matter) it starts listening on the queue it was created with. It does this by creating a consumer with the supplied channel and telling the consumer to start listening on the specified queue. The consumer.nextDelivery() method will block until a message is waiting in the configured queue. Once a message is received, a new Actor is created and the message is sent to it; this new actor passes the message on to the supplied function, where you can put your business logic. To use this listener we need to supply the following arguments:
- Channel: Allows access to RabbitMQ
- Queue: The queue to listen to for messages
- f: The function that we'll execute when a message is received

The final step for this first example is gluing everything together. We do this by adding a couple of method calls to the Sender.startSending method.

def startSending = {
  ...
  // create an actor that starts listening on the specified queue and passes the
  // received message to the provided callback
  val callback1 = (x: String) => Logger.info("Received on queue callback 1: " + x);
  setupListener(connection.createChannel(), Config.RABBITMQ_QUEUE, callback1);

  // setup a second listener on the same queue
  val callback2 = (x: String) => Logger.info("Received on queue callback 2: " + x);
  setupListener(connection.createChannel(), Config.RABBITMQ_QUEUE, callback2);
  ...
}

private def setupListener(receivingChannel: Channel, queue: String, f: (String) => Any) {
  Akka.system.scheduler.scheduleOnce(2 seconds,
    Akka.system.actorOf(Props(new ListeningActor(receivingChannel, queue, f))), "");
}

In this code you can see that we define a callback function and use it, together with the queue and the channel, to create the ListeningActor. We use the scheduleOnce method to start this listener in a separate thread. With this code in place we can run the application (play run), open up localhost:9000 to start the application, and we should see something like the following output:

[info] play - Starting application default Akka system.
[info] play - Application started (Dev)
[info] application - MSG to Exchange : 1334324531424
[info] application - MSG to Queue : 1334324531424
[info] application - Received on queue callback 2: MSG to Queue : 1334324531424
[info] application - MSG to Exchange : 1334324532522
[info] application - MSG to Queue : 1334324532522
[info] application - Received on queue callback 1: MSG to Queue : 1334324532522
[info] application - MSG to Exchange : 1334324533622
[info] application - MSG to Queue : 1334324533622
[info] application - Received on queue callback 2: MSG to Queue : 1334324533622
[info] application - MSG to Exchange : 1334324534722
[info] application - MSG to Queue : 1334324534722
[info] application - Received on queue callback 1: MSG to Queue : 1334324534722
[info] application - MSG to Exchange : 1334324535822
[info] application - MSG to Queue : 1334324535822
[info] application - Received on queue callback 2: MSG to Queue : 1334324535822

Here you can clearly see the round-robin way messages are processed.

Setup the publish and subscribe scenario
Once we've got the above code running, adding publish / subscribe functionality is trivial. Instead of the SendingActor we now use a PublishingActor:

class PublishingActor(channel: Channel, exchange: String) extends Actor {

  /**
   * When we receive a message we send it using the configured channel
   */
  def receive = {
    case some: String => {
      val msg = (some + " : " + System.currentTimeMillis());
      channel.basicPublish(exchange, "", null, msg.getBytes());
      Logger.info(msg);
    }
    case _ => {}
  }
}

An exchange is used by RabbitMQ to allow multiple recipients to receive the same message (and for a whole lot of other advanced functionality). The only change in the code from the other actor is that this time we send the message to an exchange instead of to a queue. The listener code is exactly the same; the only thing we need to do is bind a queue to a specific exchange, so that listeners on that queue receive the messages sent to the exchange. We do this, once again, from the setup method we used earlier.

...
// create a new sending channel on which we declare the exchange
val sendingChannel2 = connection.createChannel();
sendingChannel2.exchangeDeclare(Config.RABBITMQ_EXCHANGE, "fanout");

// define the two callbacks for our listeners
val callback3 = (x: String) => Logger.info("Received on exchange callback 3: " + x);
val callback4 = (x: String) => Logger.info("Received on exchange callback 4: " + x);

// create a channel for the listener and setup the first listener
val listenChannel1 = connection.createChannel();
setupListener(listenChannel1, listenChannel1.queueDeclare().getQueue(), Config.RABBITMQ_EXCHANGE, callback3);

// create another channel for a listener and setup the second listener
val listenChannel2 = connection.createChannel();
setupListener(listenChannel2, listenChannel2.queueDeclare().getQueue(), Config.RABBITMQ_EXCHANGE, callback4);

// create an actor that is invoked every second, after an initial
// delay of two seconds, with the message "MSG to Exchange"
Akka.system.scheduler.schedule(2 seconds, 1 seconds, Akka.system.actorOf(Props(
  new PublishingActor(channel = sendingChannel2, exchange = Config.RABBITMQ_EXCHANGE))),
  "MSG to Exchange");
...

We also created an overloaded setupListener method which, as an extra parameter, accepts the name of the exchange to use:
private def setupListener(channel: Channel, queueName: String, exchange: String, f: (String) => Any) {
  channel.queueBind(queueName, exchange, "");

  Akka.system.scheduler.scheduleOnce(2 seconds,
    Akka.system.actorOf(Props(new ListeningActor(channel, queueName, f))), "");
}

In this small piece of code you can see that we bind the supplied queue (which has a random, server-generated name in our example) to the specified exchange. After that we create a new listener as we've seen before. Running this code now will result in the following output:

[info] play - Application started (Dev)
[info] application - MSG to Exchange : 1334325448907
[info] application - MSG to Queue : 1334325448907
[info] application - Received on exchange callback 3: MSG to Exchange : 1334325448907
[info] application - Received on exchange callback 4: MSG to Exchange : 1334325448907
[info] application - MSG to Exchange : 1334325450006
[info] application - MSG to Queue : 1334325450006
[info] application - Received on exchange callback 4: MSG to Exchange : 1334325450006
[info] application - Received on exchange callback 3: MSG to Exchange : 1334325450006

As you can see, in this scenario both listeners receive the same message. That pretty much wraps it up for this article. As you've seen, the Java-based client API for RabbitMQ is more than sufficient and easy to use from Scala. Note though that this example is not production-ready: you should take care to close connections and shut down listeners and actors cleanly. None of that shutdown code is shown here.
April 19, 2012
by Jos Dirksen
· 22,848 Views · 1 Like
article thumbnail
A Regular Expression HashMap Implementation in Java
Below is an implementation of a Regular Expression HashMap. It works with key-value pairs in which the key is a regular expression. It compiles the key (regular expression) while adding (i.e. putting), so there is no compile cost while getting. When getting an element you don't supply a regular expression; you supply any string that a stored regular expression might match. As a result, this behaviour maps the numerous strings matched by one regular expression onto the same value. The class does not depend on any external libraries and uses only the default java.util classes, so it can be used easily whenever behaviour like this is required.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.regex.Pattern;

/**
 * This class is an extended version of Java HashMap
 * and includes pattern-value lists which are used to
 * evaluate regular expression values. If a given item
 * is a regular expression, it is saved in the regexp lists.
 * If a requested item matches a regular expression,
 * its value is fetched from the regexp lists.
 *
 * @author cb
 *
 * @param <K> Key of the map item.
 * @param <V> Value of the map item.
 */
public class RegExHashMap<K, V> extends HashMap<K, V> {

    // list of regular expression patterns
    private ArrayList<Pattern> regExPatterns = new ArrayList<Pattern>();

    // list of regular expression values which match patterns
    private ArrayList<V> regExValues = new ArrayList<V>();

    /**
     * Compile the regular expression and add it to the regexp list as key.
     */
    @Override
    public V put(K key, V value) {
        regExPatterns.add(Pattern.compile(key.toString()));
        regExValues.add(value);
        return value;
    }

    /**
     * If the requested value matches a regular expression,
     * return it from the regexp lists.
     */
    @Override
    public V get(Object key) {
        String cs = key.toString();
        for (int i = 0; i < regExPatterns.size(); i++) {
            if (regExPatterns.get(i).matcher(cs).matches()) {
                return regExValues.get(i);
            }
        }
        return super.get(key);
    }
}
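A quick usage sketch (the keys and values below are invented for illustration):

RegExHashMap<String, String> map = new RegExHashMap<String, String>();
map.put("user-\\d+", "USER");   // matches user-1, user-42, ...
map.put("admin-\\w+", "ADMIN"); // matches admin-root, admin-x, ...

System.out.println(map.get("user-7"));      // USER
System.out.println(map.get("admin-root"));  // ADMIN
System.out.println(map.get("guest"));       // null - no pattern matches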
April 11, 2012
by Cagdas Basaraner
· 24,421 Views