Lieven Doclo

Java Developer at Sourced

Brugge, BE

Joined Apr 2008

https://www.insaneprogramming.be

About

Lieven Doclo is a Belgian developer specialized in Java development. He's an avid OSS supporter and has a healthy appetite for new and disruptive technologies.

Stats

Reputation: 1193
Pageviews: 872.4K
Articles: 13
Comments: 30

Articles

Is Java Remote Procedure Call Dead in the REST Age?
JSON-RPC excels at simple tasks when mapping business concepts to resources via REST is too involved.
October 12, 2015
· 34,198 Views · 12 Likes
JDBI, a Nice Spring JDBC Alternative
If you’re doing plain JDBC work and need something different from JdbcTemplate, have a look at JDBI. Here's how you can use it as a JDBC alternative.
September 6, 2015
· 27,316 Views · 7 Likes
Using JAX-RS With Spring Boot Instead of MVC
It’s easy to integrate JAX-RS into Spring applications, but why would you do this? Spring MVC should be enough, right?
September 4, 2015
· 101,665 Views · 13 Likes
The UUID Discussion
UUIDs really start coming in handy when you start synchronizing data across servers.
July 3, 2015
· 26,392 Views
Using HA-JDBC with Spring Boot
This is a really simple way to provide high availability with failover and load balancing to any Java backend using JDBC and Spring Boot.
July 2, 2015
· 22,045 Views · 1 Like
Using GeoJSON With Spring Data for MongoDB and Spring Boot
In my previous articles I compared four frameworks commonly used to communicate with MongoDB from the JVM and found that, for that use case, Spring Data for MongoDB was the easiest solution. However, I did remark that it doesn’t use the GeoJSON format to store geolocation coordinates and geometries. I had tried to add GeoJSON support before, but couldn’t get the conversion to work properly. After some extensive searching I found out that the reason it wasn’t working was my use of Spring Boot: its autoconfiguration for MongoDB does not support custom conversions out of the box.

Luckily, the solution was simple: provide an extra configuration class that extends AbstractMongoConfiguration and import it in the Boot application. In that configuration you can override customConversions() and add your converters.

Comparing the geo classes in Spring Data with GeoJSON, I noticed that only a subset of GeoJSON geometries can be mapped onto Spring Data geo classes: Point and Polygon. Spring Data does not support LineString, MultiLineString, MultiPolygon or MultiPoint. In your mapped domain classes, however, you normally won’t use these. Creating a converter that adheres to the GeoJSON format is quite straightforward.
```groovy
import com.mongodb.BasicDBObject
import com.mongodb.DBObject
import org.springframework.core.convert.converter.Converter
import org.springframework.data.convert.ReadingConverter
import org.springframework.data.convert.WritingConverter
import org.springframework.data.geo.Point
import org.springframework.data.geo.Polygon

final class GeoJsonConverters {
    static List<Converter<?, ?>> getConvertersToRegister() {
        return [
                GeoJsonDBObjectToPointConverter.INSTANCE,
                GeoJsonDBObjectToPolygonConverter.INSTANCE,
                GeoJsonPointToDBObjectConverter.INSTANCE,
                GeoJsonPolygonToDBObjectConverter.INSTANCE
        ]
    }

    @WritingConverter
    static enum GeoJsonPointToDBObjectConverter implements Converter<Point, DBObject> {
        INSTANCE;

        @Override
        DBObject convert(Point source) {
            return new BasicDBObject([type: 'Point', coordinates: [source.x, source.y]])
        }
    }

    @ReadingConverter
    static enum GeoJsonDBObjectToPointConverter implements Converter<DBObject, Point> {
        INSTANCE;

        @Override
        Point convert(DBObject source) {
            def coordinates = source.coordinates as double[]
            return new Point(coordinates[0], coordinates[1])
        }
    }

    @WritingConverter
    static enum GeoJsonPolygonToDBObjectConverter implements Converter<Polygon, DBObject> {
        INSTANCE;

        @Override
        DBObject convert(Polygon source) {
            def coordinates = source.points.collect { [it.x, it.y] }
            return new BasicDBObject([type: 'Polygon', coordinates: coordinates])
        }
    }

    @ReadingConverter
    static enum GeoJsonDBObjectToPolygonConverter implements Converter<DBObject, Polygon> {
        INSTANCE;

        @Override
        Polygon convert(DBObject source) {
            // rebuild the Polygon from the stored list of [x, y] pairs
            def coordinates = source.coordinates as List
            return new Polygon(coordinates.collect { new Point(it[0] as double, it[1] as double) })
        }
    }
}
```

To add these converters to the Spring context, you’ll have to override some methods in your MongoDB Spring configuration class.
```groovy
import com.mongodb.Mongo
import org.springframework.beans.factory.annotation.*
import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.EnableAutoConfiguration
import org.springframework.context.annotation.*
import org.springframework.data.mongodb.config.AbstractMongoConfiguration
import org.springframework.data.mongodb.core.convert.*

@EnableAutoConfiguration
@ComponentScan
@Configuration
@Import([MongoComparisonMongoConfiguration])
class MongoComparison {
    static void main(String[] args) {
        SpringApplication.run(MongoComparison, args);
    }
}

@Configuration
class MongoComparisonMongoConfiguration extends AbstractMongoConfiguration {
    @Autowired
    Mongo mongo;

    @Value("\${spring.data.mongodb.database}")
    String databaseName;

    @Override
    protected String getDatabaseName() {
        return databaseName
    }

    @Override
    Mongo mongo() throws Exception {
        return mongo
    }

    @Override
    CustomConversions customConversions() {
        def customConverters = []
        customConverters << GeoJsonConverters.convertersToRegister
        return new CustomConversions(customConverters.flatten())
    }
}
```

As Spring Boot already provides the Mongo instance and the name of the database, we can reuse both in the MongoDB configuration class. The custom conversions take precedence over the existing ones for Point and Polygon.

I’ll be writing a library this weekend to add support for all GeoJSON geometries in Spring Data for MongoDB. I’ve already noticed it’ll be very hard to support them in generated query methods on repositories, but since annotated queries are possible, I don’t think this will be a big issue. We’ll see.
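As an aside, the document shape those writing converters emit can be sketched in plain Java (an illustration only: BasicDBObject is replaced by standard collections, and the polygon helper follows the article's converter, which stores a flat list of [x, y] pairs rather than strict GeoJSON's nested rings):

```java
import java.util.List;
import java.util.Map;

public class GeoJsonShapes {
    // A GeoJSON Point document: {"type": "Point", "coordinates": [x, y]}
    static Map<String, Object> point(double x, double y) {
        return Map.of("type", "Point", "coordinates", List.of(x, y));
    }

    // A Polygon as the converter above writes it: a flat list of [x, y] pairs
    static Map<String, Object> polygon(List<List<Double>> points) {
        return Map.of("type", "Polygon", "coordinates", points);
    }

    public static void main(String[] args) {
        System.out.println(point(3.2247, 51.2093));
    }
}
```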
December 13, 2014
· 22,163 Views · 1 Like
Hystrix and Spring Boot's Health Endpoint
In an earlier post I showed how easy it is to integrate Hystrix into a Spring Boot application. Now I’m going to show you a neat trick that combines the health indicator endpoint in Spring Boot with the metrics provided by Hystrix.

Hystrix has a built-in system to query the metrics that drive the framework. For example, you can query the metrics of each command, such as the mean execution time or whether the circuit breaker for that command has tripped. It’s that last one that is very interesting for the health indicator of your application. Most production environments have a dashboard showing the health of an application’s instances. If a circuit breaker has tripped, your application is essentially in an unhealthy state. The circuit breaker mechanism will ensure that failures don’t cascade, but in a clustered environment you’d want that server removed from the pool, or at least a general indication that something is wrong.

Spring Boot’s health endpoint works by querying various indicators. Like most things in Spring Boot, indicators are only active if there are components that can be checked. For example, if you have a datasource, an indicator becomes active that checks the state of that datasource. The same happens with NoSQL or AMQP connections. A simple implementation with Hystrix, which I’ll show in a minute, could be that when there is a tripped circuit breaker in the system, the health of the application becomes ‘out of service’. This is actually very easy to do.
You just need to add a bean in your configuration returning an implementation of AbstractHealthIndicator:

```groovy
class HystrixMetricsHealthIndicator extends AbstractHealthIndicator {
    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        def breakers = []
        HystrixCommandMetrics.instances.each {
            def breaker = HystrixCircuitBreaker.Factory.getInstance(it.commandKey)
            def breakerOpen = breaker?.open ?: false
            if (breakerOpen) {
                breakers << it.commandGroup.name() + "::" + it.commandKey.name()
            }
        }
        breakers ? builder.outOfService().withDetail("openCircuitBreakers", breakers)
                 : builder.up()
    }
}
```

Whenever a circuit breaker trips, the health endpoint will return the state of the application as OUT_OF_SERVICE, along with the names of the open circuit breakers (the command key and the group it’s in).

Now, this implementation can go a whole lot further. For example, you can add a new state to the health indication, say UNSTABLE. This will however require you to change the order of the health aggregator, as Spring Boot aggregates all the indicators into a single application state. The new state needs to fit into the existing order of states (DOWN > OUT_OF_SERVICE > UP > UNKNOWN); in the case of UNSTABLE, it would probably sit between OUT_OF_SERVICE and UP. I can also think of a use case in which tripping certain circuit breakers is more critical than others, in which case the state of the application might really become OUT_OF_SERVICE. You might then decide to remove the instance from the pool of available instances (in a clustered environment) or restart the server. Or you can automate the process :). The last use case I’ll discuss is when your application is slow or is getting hammered by requests, which Hystrix can detect as well. In this case, you can introduce yet another state, STRUGGLING, which would logically sit between UNSTABLE and UP.
In that case you can automate a process that starts up another instance and adds it automatically to the pool. You can also look at this the other way around: add a state UNUSED on the same level as UP. This might indicate you have too many instances running and can possibly shut down a node (if it’s not the only one), or that you need to take a look at the load balancing. As you can see, with such mechanisms it becomes possible to create a self-regulating instance pool, creating and removing instances as it goes. The health indicators in Spring Boot are an invaluable tool for DevOps teams and show how versatile Spring Boot actually is.

UPDATE: Normally, if you want to alter the order in which statuses are aggregated, you can use a property in your application.properties like health.status.order = DOWN,OUT_OF_SERVICE,UNSTABLE,STRUGGLING,UP,UNKNOWN, as documented. However, if you’re using YAML-style properties, you’re out of luck: there’s an annoying bug that prevents you from using this feature. So if you’re using YAML properties, you’ll have to configure the HealthAggregator yourself. Luckily, this isn’t that hard: just add this bean to your application context:

```groovy
@Bean
HealthAggregator healthAggregator() {
    def healthAggregator = new OrderedHealthAggregator();
    healthAggregator.setStatusOrder(["DOWN", "OUT_OF_SERVICE", "UNSTABLE", "UP", "UNKNOWN"]);
    return healthAggregator;
}
```

Why they didn’t use @EnableConfigurationProperties in the HealthIndicatorAutoConfiguration is a mystery to me, as that would have solved the issue. Perhaps I’ll do it myself and make a pull request.
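To make the aggregation concrete, here is a toy sketch (not Spring Boot's actual OrderedHealthAggregator, just the idea) of how an ordered aggregator resolves a single application status: the first status in the configured order that any indicator reports wins.

```java
import java.util.Arrays;
import java.util.List;

public class StatusAggregation {
    // The configured priority order, highest severity first
    static final List<String> ORDER =
            Arrays.asList("DOWN", "OUT_OF_SERVICE", "UNSTABLE", "UP", "UNKNOWN");

    // Returns the most severe status any indicator reported
    static String aggregate(List<String> indicatorStatuses) {
        for (String status : ORDER) {
            if (indicatorStatuses.contains(status)) {
                return status;
            }
        }
        return "UNKNOWN";
    }

    public static void main(String[] args) {
        // One tripped circuit breaker among otherwise healthy indicators
        // dominates the aggregate status:
        System.out.println(aggregate(Arrays.asList("UP", "OUT_OF_SERVICE", "UP")));
    }
}
```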
September 6, 2014
· 13,429 Views
Hystrix and Spring Boot
Making your application resilient to failure can seem like a daunting task. Those who have read “Release It!” know how many aspects there are to making your application ready for the apocalypse. Luckily we live in a world where a lot of software needs such resilience, and where there are companies willing to share their solutions.

Enter what Netflix has created: Hystrix. Hystrix is a Java library aimed at making integration points less susceptible to failures and mitigating the impact a failure can have on your application. It provides the means to incorporate bulkheads, circuit breakers and metrics into your framework. Those not familiar with these concepts should read the book I mentioned earlier. A circuit breaker, for example, makes sure that if a certain integration point is having trouble, your application will not be affected. If an integration point takes 20 seconds to reply instead of the normal 50ms, you can configure a circuit breaker that trips if 10 calls within 10 seconds take longer than 5 seconds. When tripped, you can configure a quick fallback or fail fast.

Hystrix has an elegant solution for this. Every call to an external integration point gets wrapped in a HystrixCommand. HystrixCommand provides support for circuit breakers, timeouts, fallbacks and other disaster-recovery mechanisms. So instead of directly calling the integration point, you call a command that in turn calls the integration point. Hystrix also lets you choose whether to do this synchronously or asynchronously (returning a Future).

One of the really nice things about Hystrix is that it also has support for metrics and even has a nice dashboard to show them. I can almost imagine every development team having this on the dashboard next to the Hudson/Jenkins monitor in the near future, just because it’s so trivial to incorporate.
Now, creating a new subclass for each and every distinct call to an integration endpoint may seem like a lot of work. It is, but the reasoning behind it is that incorporating Hystrix in your application should be explicit. However, if you really don’t like this, Hystrix also supports Spring AOP and has an aspect that does most of the work for you, via a contributed module (javanica). The only thing you need to do is annotate the methods you want covered by Hystrix.

Whenever I see decent Spring integration, I now immediately look at Spring Boot support. Hystrix doesn’t have autoconfiguration for Spring Boot yet, but it’s really easy to implement. I used the annotation/aspect approach because I’m lazy and I like the transparency of going down this path. First you need to add a couple of dependencies. Here’s what you need in Gradle:

```groovy
compile("com.netflix.hystrix:hystrix-javanica:1.3.16")
compile("com.netflix.hystrix:hystrix-metrics-event-stream:1.3.16")
```

Then you need to create a configuration for Hystrix. I opted to create the configuration just like any other autoconfiguration module in Spring Boot (an @Configuration-annotated class and a class describing the configuration properties). I also used conditional beans so that the beans are only registered when the required classes are on the classpath and Hystrix is enabled.

```groovy
/**
 * {@link EnableAutoConfiguration Auto-configuration} for Hystrix.
 *
 * @author Lieven Doclo
 */
@Configuration
@EnableConfigurationProperties(HystrixProperties)
@ConditionalOnExpression("\${hystrix.enabled:true}")
class HystrixConfiguration {
    @Autowired
    HystrixProperties hystrixProperties;

    @Bean
    @ConditionalOnClass(HystrixCommandAspect)
    HystrixCommandAspect hystrixCommandAspect() {
        new HystrixCommandAspect();
    }

    @Bean
    @ConditionalOnClass(HystrixMetricsStreamServlet)
    @ConditionalOnExpression("\${hystrix.streamEnabled:false}")
    public ServletRegistrationBean hystrixStreamServlet() {
        new ServletRegistrationBean(new HystrixMetricsStreamServlet(), hystrixProperties.streamUrl);
    }
}

/**
 * Configuration properties for Hystrix.
 *
 * @author Lieven Doclo
 */
@ConfigurationProperties(prefix = "hystrix", ignoreUnknownFields = true)
class HystrixProperties {
    boolean enabled = true
    boolean streamEnabled = false
    String streamUrl = "/hystrix.stream"
}
```

In short, if you add this to your Spring Boot application, Hystrix will be integrated automatically. As you may have seen, I’ve also added some configuration properties. I added support for the event stream that powers the dashboard, which is only activated if you add hystrix.streamEnabled = true to your application.properties. The URL through which the stream is served is also configurable (but has a sensible default). If you want, you can disable Hystrix as a whole by adding hystrix.enabled = false to your application.properties. This code is actually ready to be put into Spring Boot’s autoconfigure module :). Two simple classes and two simple dependencies, and your code is ready for the apocalypse. Doesn’t seem like a bad deal to me.

Hystrix has a lot more to offer than I’ve touched on in this article (command aggregation, reactive calls through events, …). If your application has a lot of integration points, definitely have a look at this library. Your application may be stable, but that doesn’t mean all the REST services you’re calling are.
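For readers new to the pattern itself, the circuit-breaker behavior can be illustrated with a deliberately tiny, Hystrix-free sketch (a toy, not the real implementation: Hystrix adds rolling windows, half-open probing, thread pools and much more):

```java
import java.util.function.Supplier;

// Toy circuit breaker: after `threshold` consecutive failures the breaker
// opens and the fallback is returned immediately, without touching the
// integration point again.
public class TinyCircuitBreaker<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    TinyCircuitBreaker(int threshold) { this.threshold = threshold; }

    boolean isOpen() { return consecutiveFailures >= threshold; }

    T call(Supplier<T> integrationPoint, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get(); // fail fast: skip the integration point
        }
        try {
            T result = integrationPoint.get();
            consecutiveFailures = 0; // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        TinyCircuitBreaker<String> breaker = new TinyCircuitBreaker<>(3);
        for (int i = 0; i < 5; i++) {
            String answer = breaker.call(
                    () -> { throw new RuntimeException("integration point down"); },
                    () -> "fallback");
            System.out.println(answer + " (open: " + breaker.isOpen() + ")");
        }
    }
}
```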
September 3, 2014
· 27,909 Views
IntelliJ, Scala and Gradle: Revisiting Hell
So I finally decided to try to learn Scala. Little did I know I was in for another round of IntelliJ integration hell. Let me rephrase that: IntelliJ-with-Gradle hell. I love Gradle. I love IntelliJ. However, the combination of the two is sometimes enough to drive me utterly crazy. Take the Scala integration, for example. I made the simplest Gradle build possible that compiles a standard Hello World application.

```groovy
apply plugin: 'scala'
apply plugin: 'idea'

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.5'
    compile "org.scala-lang:scala-library:2.10.4"
    compile "org.scala-lang:scala-compiler:2.10.4"
    testCompile "junit:junit:4.11"
}

task run(type: JavaExec, dependsOn: classes) {
    main = 'Main'
    classpath sourceSets.main.runtimeClasspath
    classpath configurations.runtime
}
```

First I stumbled upon the first issue: the Scala Gradle plugin is incompatible with Java 8. Not a big deal, but it meant changing my Java environment for this build, so it is a nuisance. Once this was fixed, the Gradle build succeeded and Hello World was printed. I opened up IntelliJ and made sure the Scala plugin was installed. Then I imported the project using the Gradle build file. Everything looked okay: IntelliJ recognized the Scala source folder and provided the correct editor for the Scala source file. Then I tried to run the Main class. This resulted in a NoClassDefFoundError. IntelliJ didn’t want to compile my source classes. So I started digging. Apparently, the project was lacking a Scala facet. I’d expected IntelliJ to add this automatically once it saw I was using the Scala plugin, but it didn’t. So I tried manually adding the facet, and there I got stuck. See, the facet requires you to state which Scala compiler library you want to use. Luckily IntelliJ had correctly added the jars to the classpath, so I was able to choose the correct jar.
This, however, did not fix the issue, as IntelliJ now complained it could not locate the Scala runtime library (scala-library*.jar). That library was included in the build. And if you choose the runtime library as the Scala library instead, it complains it cannot find the compiler library. And this is where I am now: deadlocked. There is an issue in the bug tracker of IntelliJ here, but it’s been eerily quiet at JetBrains on this issue.

As it is, it’s impossible to use IntelliJ with Gradle and Scala unless you’re willing to execute every bit of code, including unit tests, with Gradle instead of the IDE (which in effect defeats the purpose of an IDE). And I’ll die before adopting yet another build framework (SBT) that is supposed to work. Honestly, I really don’t know whether I want to learn Scala anymore. Just the fact that you can’t compile Scala in the most popular IDE of the moment when using the most popular build tool of the moment is something I cannot comprehend. Forcing me to adopt a Scala-specific build tool is unacceptable to me. If I were Typesafe, I’d put an engineer on this and fix it, as that would seriously aid in promoting the language. If it were easy to adopt Scala in an existing build cycle, it would pop up on more radars than it does right now.

But it’s not just Scala and IntelliJ: most newer JVM languages struggle with IntelliJ. This is a real pity, as it either forces me to change my IDE (e.g. Ceylon has its own IDE based on Eclipse) or not consider the language. As it stands, the only viable options with IntelliJ are Java and Groovy (and Kotlin, but it’s nowhere near production-ready quality). Wouldn’t it be nice to need only one IDE for all development? I couldn’t care less if it cost $500, I just want things to work.

I’d love to be able to write my AngularJS front-end that consumes my Scala/Java hybrid backend reading data from a MongoDB that’s fed data by my Arduino sensors (for which I’ve written and uploaded the sketch from that same IDE).
April 1, 2014
· 23,041 Views · 2 Likes
When Reading Excel with POI, Beware of Floating Points
Our problem began when we tried to read a certain cell that contained the value 929 as a numeric field and store it into an integer.
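The underlying pitfall is plain double arithmetic: POI's getNumericCellValue() returns a double, and a cast truncates while rounding does what you expect. A minimal illustration, independent of POI itself:

```java
// A cell showing 929 can come back from POI as something like
// 928.9999999999999. A plain (int) cast truncates to 928; Math.round
// gives the value you actually see in the cell.
public class NumericCellValue {
    static int truncated(double cellValue) { return (int) cellValue; }
    static int rounded(double cellValue)   { return (int) Math.round(cellValue); }

    public static void main(String[] args) {
        double cellValue = 928.9999999999999; // what POI might hand you for "929"
        System.out.println(truncated(cellValue)); // 928, the surprising result
        System.out.println(rounded(cellValue));   // 929, what you expected
    }
}
```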
August 30, 2013
· 46,748 Views · 1 Like
Why I Never Use the Maven Release Plugin
Just about every six months an article appears cursing Maven, attracting both proponents and opponents of Maven and Ant. While it’s real fun to watch (I always get a laugh when people start advocating a return to Ant), most of the time it’s the same arguments: Maven lacks flexibility, the plugin system sucks (when will people learn to use plugin versions…), you can’t use scripting, and the all-time favorite: the release plugin sucks.

Well, I am a Maven addict and I’m happy to say: yes, I agree, the release plugin sucks. Big time. But here’s something you may have forgotten: you don’t need it! Even more: you shouldn’t use it. The Maven release plugin tries to make releasing software a breeze. That’s where the plugin authors got it wrong to start with. Releases are not something done on a whim. They are carefully planned and orchestrated actions, preceded by countless rules and followed by more rules. Assuming you can bundle all that in a simple mvn release:release is just plain naive. Even Maven’s most fierce supporters agree on this. The Maven release plugin just tries to do too much at once: build your software, tag it, build it again, deploy it, build the site (triggering yet another build in the process) and deploy the site, running the tests x times along the way. Most of the time you’re making candidate releases, so building the complete documentation is a complete waste of time.

Now, if you break the release plugin down into sensible steps, you’ll save yourself a whole lot of trouble. I use the following steps to release something. As a side note: I use git and git-flow conventions (as described here). Assume the POM’s version is currently 1.0-SNAPSHOT.

1. Announce the release process. Very important. As I said, you don’t release on a whim. Make sure everyone on your team knows a release is pending and has pushed everything to the development branch that needs to be included.
2. Branch the development branch into a release branch. Following git-flow rules, I make a release branch 1.0.
3. Update the POM version of the development branch to the next release version, for example mvn versions:set -DnewVersion=2.0-SNAPSHOT. Commit and push. Now you can put resources on developing towards the next release version.
4. Update the POM version of the release branch to the standard CR version, for example mvn versions:set -DnewVersion=1.0.CR-SNAPSHOT. Commit and push.
5. Run the tests on the release branch. If one or more fail, fix them first.
6. Create a candidate release from the release branch. Use the Maven versions plugin to update your POM versions, for example mvn versions:set -DnewVersion=1.0.CR1. Commit and push. Make a tag in git. Then set the POM versions back to the standard CR version, for example mvn versions:set -DnewVersion=1.0.CR-SNAPSHOT. Commit and push.
7. Check out the new tag and do a deployment build (mvn clean deploy). Since you’ve just run your tests and fixed any failing ones, this shouldn’t fail. Put the deployment on the QA environment.
8. Iterate until QA gives a green light on the candidate release. Fix bugs reported on the CR releases on the release branch and merge into the development branch at regular intervals (or even better, continuously). Run the tests continuously, making bug reports on failures and fixing them as you go. Create new candidate releases as above, for example mvn versions:set -DnewVersion=1.0.CRx, commit, push, tag, revert to 1.0.CR-SNAPSHOT, check out the tag, mvn clean deploy, and deploy to QA.
9. Once QA has signed off on the release, create a final release. Check that there are no new commits since the last release tag (if there are, slap developers, as they have done stuff that wasn’t needed or asked for). Set the POM versions with mvn versions:set -DnewVersion=1.0. Commit and push. Tag the release branch and merge it into the master branch.
10. Check out the master branch and do a deployment build (mvn clean deploy).
11. Start the production release and deployment process (in most companies, no small feat). This can involve building the site and doing other things, some not even Maven-related.

There’s no way in hell Maven can automate this process, and if you try, you’ll bump into the many pitfalls the release plugin has to offer. The release plugin is just a combination of the versions, scm, deploy and site plugins that seriously violates the single-responsibility principle. It is one of the reasons Maven has gotten a bad reputation with some people. It’s long overdue for an overhaul, but if you ask me, they should just remove it altogether. Releasing software is a process, not a single command on the command line. The process I just described isn’t perfect in any way, but it works, and I avoid the release plugin because it just does too much. Have fun bashing Maven, but please, keep it clean :).
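The candidate-release steps can be sketched as a shell script. This is a dry run that only echoes the commands (drop the `run` wrapper to execute them); the branch name, version and commit messages are illustrative, following the git-flow conventions used above.

```shell
#!/bin/sh
set -e
VERSION=1.0
CR=CR1

# Dry-run: print each command instead of executing it
run() { echo "+ $*"; }

run git checkout release/$VERSION
run mvn versions:set -DnewVersion=$VERSION.$CR      # pin the CR version
run git commit -am "Release $VERSION.$CR"
run git tag $VERSION.$CR
run git push --follow-tags
run mvn versions:set -DnewVersion=$VERSION.CR-SNAPSHOT  # back to the CR snapshot
run git commit -am "Back to CR snapshot"
run git push
run git checkout $VERSION.$CR                       # build from the tag
run mvn clean deploy
```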
July 26, 2013
· 115,778 Views · 13 Likes
Circular Dependencies With Jackson
Circular dependencies and JSON have always been a pain. And it’s not just JSON: the problem exists whenever you’re trying to serialize a graph which contains circular dependencies (parent/child with bidirectional relationships).

Some time ago, we were considering exposing our JPA data model through REST services. Of course, a lot of JPA model classes contain bidirectional relationships, which was a real pain to get working. We ended up with a separate data model consisting of DTOs (yuck!) and a mapping between the two models. After a while we had to abandon our REST quest because the JPA data model was getting too complicated. So we let go of the loose coupling between the client and the server, which made the issue go away completely. REST services were built when the need for external communication arose, but for client-server communication a more direct dependency was used (CDI/EJB or Spring injection).

Recently, I looked at Jackson once again. My reasons were a bit different this time. Our data model has grown to a point where finding out what exactly is in a graph is getting problematic. A simple SQL query doesn’t cut it anymore, and we’re forced to start debugging in order to see what an object actually contains. Knowing in advance how a complex JPA data model is populated through a JPQL query is a science of its own. So I thought: why not make it possible to send the same JPQL query and have the result returned to us as JSON?

The problem, I thought, would be those wretched circular dependencies. Luckily, the Jackson developers have since developed a solution: their JSON serializer now supports object references, and it’s usable out of the box for JPA data models. A JSON object reference requires an object to have a unique ID. Luckily, this is also the case for JPA entities. However, JSON id references need to be unique across the entire graph, whereas JPA ids only need to be unique within the same entity.
In our case this wasn’t really an issue, as we use UUIDs for JPA id fields, which are unique throughout the entire database. So how do you serialize an object graph? Well, assume you have two entities with a bidirectional relationship like this:

```java
@Entity
public class ParentEntity {
    @Id
    private String id;
    private String description;
    @OneToMany(mappedBy = "parent")
    private List<ChildEntity> children;
    // getters and setters omitted for brevity
}

@Entity
public class ChildEntity {
    @Id
    private String id;
    private String description;
    @ManyToOne
    private ParentEntity parent;
    // getters and setters omitted for brevity
}
```

Adding Jackson JSON identities is very simple:

```java
@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
public class ParentEntity { ... }

@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
public class ChildEntity { ... }
```

And that’s it! If you now serialize a parent object with two children, you’ll get something like this:

```json
{
    "id": "parent-id1",
    "description": "parent",
    "children": [
        {
            "id": "child-id1",
            "description": "child1",
            "parent": "parent-id1"
        },
        {
            "id": "child-id2",
            "description": "child2",
            "parent": "parent-id1"
        }
    ]
}
```
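The cycle-breaking substitution Jackson performs here can be sketched with plain maps (an illustration of the output shape only, not of Jackson's internals): the first occurrence of an object is written in full, and later occurrences in the cycle are replaced by its id.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IdentitySketch {
    // Builds the structure @JsonIdentityInfo produces: the parent is written
    // in full once, and the child's back-reference is collapsed to the
    // parent's id instead of recursing forever.
    static Map<String, Object> serializeParentWithChild() {
        Map<String, Object> parent = new LinkedHashMap<>();
        parent.put("id", "parent-id1");
        parent.put("description", "parent");

        Map<String, Object> child = new LinkedHashMap<>();
        child.put("id", "child-id1");
        child.put("description", "child1");
        child.put("parent", parent.get("id")); // the cycle-breaking id reference

        parent.put("children", List.of(child));
        return parent;
    }

    public static void main(String[] args) {
        System.out.println(serializeParentWithChild());
    }
}
```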
July 25, 2013
· 28,970 Views
Working With Gradle, Spring Aspects and Compile-time Weaving
As one of my previous posts mentioned, I’m in the process of migrating most if not all of my projects to Gradle. I’m very pleased that it’s going a lot more smoothly than I had anticipated. However, I came across an issue: how to get AspectJ compile-time weaving working with Gradle. My use case is quite simple: I’m using Spring’s @Configurable annotation, which requires either load-time or compile-time weaving to work correctly. Load-time weaving isn’t really an option, so compile-time weaving it was. There is a plugin, but it’s outdated: it doesn’t work with Gradle 1.6. If this were Maven, I’d be screwed, or I’d have to write my own plugin. Luckily, with Gradle it was not that difficult. The old plugin relied on the old Gradle 0.9 DSL, so I could use it as a starting point. The instructions required me to define some new configurations and add some things to the compileJava task. So I started out with this:

```groovy
configurations {
    ajc
    aspects
}

compileJava {
    sourceCompatibility = "1.6"
    targetCompatibility = "1.6"
    doLast {
        ant.taskdef(
                resource: "org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties",
                classpath: configurations.ajc.asPath)
        ant.iajc(source: "1.6", target: "1.6",
                destDir: sourceSets.main.output.classesDir.absolutePath,
                maxmem: "512m", fork: "true",
                aspectPath: configurations.aspects.asPath,
                sourceRootCopyFilter: "**/.svn/*,**/*.java",
                classpath: configurations.compile.asPath) {
            sourceroots {
                sourceSets.main.java.srcDirs.each {
                    pathelement(location: it.absolutePath)
                }
            }
        }
    }
}

dependencies {
    ajc "org.aspectj:aspectjtools:1.6.10"
    compile "org.aspectj:aspectjrt:1.6.10"
    aspects "org.springframework:spring-aspects:3.2.2.RELEASE"
}
```

This, however, did not work.
I was greeted by the following exception:

```
can't determine superclass of missing type org.springframework.transaction.interceptor.TransactionAspectSupport
```

It seems spring-aspects contains multiple aspects, each with its own dependencies, and those dependencies are required regardless of whether you want to use the aspects. This, by the way, was also the case when using the Maven AspectJ plugin. After some POM fishing, I found out I had to add a couple of libraries. So now we have the following dependencies:

```groovy
dependencies {
    ajc "org.aspectj:aspectjtools:1.6.10"
    compile "org.aspectj:aspectjrt:1.6.10"
    aspects "org.springframework:spring-aspects:3.2.2.RELEASE"
    aspects "javax.persistence:persistence-api:1.0"
    aspects "org.springframework:spring-tx:3.2.2.RELEASE"
    aspects "org.springframework:spring-orm:3.2.2.RELEASE"
}
```

My code compiled and my aspects were woven in! Unfortunately, that wasn’t the end of the story. My tests failed: the aspect library was missing at runtime, as the (now woven) classes referenced some classes contained in spring-aspects and I was subclassing them in my test code. The quick solution would be to duplicate the spring-aspects dependency in the compile configuration, but that was something I didn’t like. I did, however, come up with a cleaner solution: I had to make sure the aspect dependencies were added to the compile configuration. A small change in the configurations section was all I needed:

```groovy
configurations {
    ajc
    aspects
    compile {
        extendsFrom aspects
    }
}
```

Hurrah! Or so I wished. Unfortunately, the dependencies I had added to the build to make the weaving work were now also added to the compile configuration. I didn’t need or use them there, so I had to find a way to exclude them from the compile configuration but still include them in the aspect weaving. The solution was simple and clean: add a new configuration and include that configuration’s dependencies in the weaving classpath.
The new configuration is not added to the compile configuration, so your build does not depend on those dependencies. The final product looked like this:

```groovy
configurations {
    ajc
    aspects
    aspectCompile
    compile {
        extendsFrom aspects
    }
}

compileJava {
    sourceCompatibility = "1.6"
    targetCompatibility = "1.6"
    doLast {
        ant.taskdef(
            resource: "org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties",
            classpath: configurations.ajc.asPath)
        ant.iajc(
            source: "1.6", target: "1.6",
            destDir: sourceSets.main.output.classesDir.absolutePath,
            maxmem: "512m", fork: "true",
            aspectPath: configurations.aspects.asPath,
            sourceRootCopyFilter: "**/.svn/*,**/*.java",
            classpath: "${configurations.compile.asPath};${configurations.aspectCompile.asPath}") {
            sourceroots {
                sourceSets.main.java.srcDirs.each {
                    pathelement(location: it.absolutePath)
                }
            }
        }
    }
}

dependencies {
    ajc "org.aspectj:aspectjtools:1.6.10"
    compile "org.aspectj:aspectjrt:1.6.10"
    aspects "org.springframework:spring-aspects:3.2.2.RELEASE"
    aspectCompile "javax.persistence:persistence-api:1.0"
    aspectCompile "org.springframework:spring-tx:3.2.2.RELEASE"
    aspectCompile "org.springframework:spring-orm:3.2.2.RELEASE"
}
```

The entire process took me about an hour, although most of that was spent looking up stuff in the Gradle documentation. I’m also really impressed with Gradle’s error reporting: if something goes wrong, it’s quite accurate in pinpointing the problem. This was one of the more complex things in the builds I have, so anything else should be a breeze. One thing I really, really like about Gradle is that you can merge all the build configuration of a multi-module project into a single build.gradle file. For a build with 10 modules, this meant going from maintaining 11 Maven POM files to only one Gradle build script. Even better: the single build.gradle file is 3 times smaller than the project's root POM! If that’s not nice, I don’t know what is.
July 22, 2013
· 25,962 Views · 2 Likes

Comments

Hystrix and Spring Boot

Sep 25, 2019 · James Sugrue

Or you could have realized that it's actually Groovy code?

You might want to take a more polite approach instead of assuming people don't know what they are writing about. Mind you, this is code I wrote 5 years ago and I'd probably replace Hystrix with Resilience4j if I had the choice today.

Which Is the Right Java Abstraction for JSON

Dec 07, 2017 · Theodore Ravindranath

This means you have to separate your data structures as well. If you have infrastructure concerns (like JSON) seeping into your application or domain logic, you're setting yourself up for a world of trouble. Even with Spring REST I'd advise splitting them up, otherwise you'll end up with a @JsonFormat in your domain code because you wanted that date displayed differently in your JSON (and that's one of the cleaner approaches...)
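To sketch the kind of split I mean (hypothetical names, plain Java, no framework involved):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Domain class: no JSON concerns, enforces its own invariants.
class Invoice {
    private final String number;
    private final LocalDate issuedOn;

    Invoice(String number, LocalDate issuedOn) {
        if (number == null || number.isEmpty()) {
            throw new IllegalArgumentException("An invoice needs a number");
        }
        this.number = number;
        this.issuedOn = issuedOn;
    }

    String getNumber() { return number; }
    LocalDate getIssuedOn() { return issuedOn; }
}

// Separate representation for the JSON layer: formatting decisions
// live here, not in the domain class.
class InvoiceResource {
    final String number;
    final String issuedOn; // pre-formatted for the API

    InvoiceResource(Invoice invoice) {
        this.number = invoice.getNumber();
        this.issuedOn = invoice.getIssuedOn()
                .format(DateTimeFormatter.ofPattern("dd/MM/yyyy"));
    }
}
```

The day you want the date rendered differently, you touch InvoiceResource and the domain class stays untouched.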

Fixing 7 Common Java Exception Handling Mistakes

Sep 04, 2017 · Mike Gates

I'm not really sure about that, because each and every example in this article contains RuntimeException subtypes. You don't declare these in a throws clause or catch them unless there's something meaningful you can do with them. RuntimeException types especially shouldn't be handled directly; you let them bubble up to the surface and handle them there.

The core of the article is correct, but the implementation is very misleading.

Perhaps a mistake No. 8 could be added: catching RuntimeException or its subtypes. But then he'd need to rewrite his entire article.

Fixing 7 Common Java Exception Handling Mistakes

Sep 01, 2017 · Mike Gates

Interesting, but I don't see the point in declaring RuntimeException classes (like IllegalArgumentException) in the method signature. Say your method uses a third-party framework: are you going to declare all of its RuntimeException types if you use one of its APIs?

The gist of RuntimeExceptions is that there's no point in catching them, as there is nothing you as a coder can meaningfully do with that exception, except maybe logging and rethrowing it (and there are better ways to do that).
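A minimal sketch of what I mean (class names are hypothetical): the business code neither catches nor declares RuntimeExceptions, and a single boundary at the edge of the system translates them.

```java
class OrderService {
    int parseQuantity(String raw) {
        // May throw NumberFormatException (a RuntimeException) —
        // we let it bubble up instead of catching it here.
        return Integer.parseInt(raw);
    }
}

class RequestBoundary {
    private final OrderService service = new OrderService();

    // The one place where unexpected RuntimeExceptions are turned
    // into something the caller can work with.
    String handle(String raw) {
        try {
            return "quantity=" + service.parseQuantity(raw);
        } catch (RuntimeException e) {
            return "error=" + e.getClass().getSimpleName();
        }
    }
}
```

Everything between OrderService and RequestBoundary stays free of try/catch noise.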

The Bean Class for Java Programming

Aug 24, 2017 · Michael Tharrington

In any case, making a rule to create default constructors because some frameworks can't handle non-default constructors or immutable objects isn't the best way forward. If a framework requires a default constructor, you should create a specific wrapper class for that framework so that the concern of the default constructor is isolated and can be removed easily later. Modelling your domain through framework constraints is not good design.

The Bean Class for Java Programming

Aug 24, 2017 · Michael Tharrington

JPA can handle package private default constructors, as of Java 8 using Jackson with non-default constructors is a breeze... In most cases all frameworks have a way to handle non-default constructors. My point is: if you can avoid public default constructors, you should do so.

The Bean Class for Java Programming

Aug 22, 2017 · Michael Tharrington

The problem with your approach is that it encourages POOP programming and moves away from domain classes that enforce invariants. For example, one should never be able to change the ISBN of a book. But due to your default constructor rule, you're effectively forcing people to write a setter. If you want to mutate a field in your example, it needs to be prefixed with set. To get the value, it's get. So basically you've exposed a private field publicly, breaking encapsulation and the tell-don't-ask principle. I thought our ecosystem had finally moved away from the JavaBeans specification.
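To illustrate with the ISBN example (a hypothetical Book class, not from the article): the invariant is enforced by the type itself, with no default constructor and no setters.

```java
// The ISBN is fixed at construction; there is deliberately no setter,
// so "a book's ISBN never changes" is guaranteed by the class.
final class Book {
    private final String isbn;
    private String title; // a title may legitimately be corrected

    Book(String isbn, String title) {
        if (isbn == null || isbn.isEmpty()) {
            throw new IllegalArgumentException("ISBN is mandatory");
        }
        this.isbn = isbn;
        this.title = title;
    }

    String getIsbn() { return isbn; }
    String getTitle() { return title; }

    // Tell-don't-ask: an intention-revealing operation
    // instead of a generic setTitle.
    void correctTitle(String newTitle) {
        this.title = newTitle;
    }
}
```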

The Bean Class for Java Programming

Aug 22, 2017 · Michael Tharrington

Correct, but immutable objects don't have default constructors or setters, which violates your rules... But declaring method parameters as final is always a good idea, if only to avoid bad habits... For example, it's impossible in Kotlin to reassign parameter references (they are implicitly final).

Remembering Clean Architecture

May 23, 2017 · Mahan Hashemizadeh

Nice article, although I do have some concerns regarding dependency inversion. At the moment, your periphery can perfectly well inject domain services because of the transitive dependencies. You should separate your boundaries from your use cases on a module level in order to make sure architectural boundaries are enforced. I.e., CustomerAdapter should have an interface that makes it possible to decouple the domain from the rest of your system. Especially when using Spring Boot you want to do this, just because it's so easy to inject stuff.
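A rough sketch of the decoupling I mean (names are hypothetical, plain Java instead of Spring): the domain owns the gateway interface, the periphery implements it, so the domain module has no compile-time dependency on the adapter.

```java
// Owned by the domain/use-case layer.
interface CustomerGateway {
    String findNameById(String id);
}

// Also lives in the domain layer; depends only on the interface.
class GreetCustomerUseCase {
    private final CustomerGateway customers;

    GreetCustomerUseCase(CustomerGateway customers) {
        this.customers = customers;
    }

    String greet(String id) {
        return "Hello, " + customers.findNameById(id);
    }
}

// Lives in the periphery; could be backed by JPA, REST, anything.
class InMemoryCustomerGateway implements CustomerGateway {
    public String findNameById(String id) {
        return "customer-" + id;
    }
}
```

The module boundary then makes it impossible to inject a periphery class into the domain by accident.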

Clean Architecture Is Screaming

Mar 08, 2017 · Grzegorz Ziemoński

Well, actually you can do this, but you can't use @Component. You can use the more standard @Named annotation, which enables you to defer the choice of framework (Java EE or Spring). Same thing with @Transactional: you need to use the javax one and not the Spring one. If an annotation forces you to choose a framework in the application or domain layer, it's either a smell or a conscious decision to accept technical debt in your architectural design.

Clean Architecture Is Screaming

Mar 08, 2017 · Grzegorz Ziemoński

I'm fairly certain something is missing there. The use case is getting the details of a codecast by its permalink and the response model is actually missing the codecast :). But you *could* have a use-case called 'CheckExistenceOfCodecastUseCase' that actually does that and that would be perfectly acceptable. It takes a bit of time to get your head around the fact that you actually don't have any boundary services: it's a single use case per class instead of having all the use cases as methods on a service.

Clean Architecture Is Screaming

Mar 08, 2017 · Grzegorz Ziemoński

Nice write up. It's fairly easy to defer any framework choice to the edge of your system. It's perfectly possible to have your domain and application layer completely devoid of any large framework like Spring. Given the fact that large systems tend to outlive major releases, I can't say I agree with your second consideration when to use clean architecture. I've demonstrated this in a small example: https://github.com/lievendoclo/cleanarch and discussed some findings in an article (https://www.insaneprogramming.be/article/2017/02/14/thoughts-on-clean-architecture/).

Onion Architecture Is Interesting

Feb 28, 2017 · Grzegorz Ziemoński

Well, you can actually create a persistence gateway interface that is able to persist your domain. In the implementation of that interface you can map your domain onto another class that can then be annotated with JPA annotations. Okay, you'll add a mapping to your architecture, but it'll keep your domain clean.

That being said, adding JPA annotations to your domain can be a possibility if you accept the technical debt there (there is such a thing as acceptable technical debt on a design level) and as long as the existence of those JPA annotations does not interfere with how you would model your domain. If you need to change your domain model because a certain JPA construct wouldn't work otherwise, I'd be very careful and even hesitant to apply such changes.
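A minimal sketch of such a gateway (all names hypothetical; an in-memory map stands in for the JPA EntityManager):

```java
import java.util.HashMap;
import java.util.Map;

// Clean domain class, no JPA annotations.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// Gateway interface owned by the domain.
interface CustomerRepository {
    void save(Customer customer);
    Customer byId(String id);
}

// Persistence-side class; in a real implementation this is the class
// that would carry the @Entity/@Id annotations, not the domain class.
class CustomerRecord {
    String id;
    String name;
}

// The implementation maps domain <-> record both ways.
class MappingCustomerRepository implements CustomerRepository {
    private final Map<String, CustomerRecord> store = new HashMap<>();

    public void save(Customer customer) {
        CustomerRecord record = new CustomerRecord();
        record.id = customer.id;
        record.name = customer.name;
        store.put(record.id, record);
    }

    public Customer byId(String id) {
        CustomerRecord record = store.get(id);
        return record == null ? null : new Customer(record.id, record.name);
    }
}
```

The mapping is extra code, but the domain model never has to bend to a JPA constraint.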

Onion Architecture Is Interesting

Feb 27, 2017 · Grzegorz Ziemoński

Good article, though I'm not so sure I'd agree that the UI should be able to communicate directly with the domain. Too many possibilities for abuse. In other words, I'd prefer that the call stack goes outside-in, but only to the adjacent layer.

I wonder whether your next article is going to be on Clean Architecture :).

An Opinionless Comparison of Spring and Guice

Oct 05, 2016 · Nikhil Wanpal

While interesting, I fail to see any real value in such benchmarks. As far as I can tell, any DI container starts up in under 10ms and all fetches are in the microsecond and nanosecond range. If a one-off startup of 10ms is bothering you, you really have some serious issues. Using a 5 million multiplier you can prove just about anything, but the business value or practicality of such a multiplier is zip, because you'll never ever see such behavior.

An Opinionless Comparison of Spring and Guice

Oct 04, 2016 · Nikhil Wanpal

This article points out a lot of differences, but it's all but opinionless. It's clear the author has a preference for Guice, but it's only halfway through that this leaks into the article. Too bad, but I was probably expecting too much when hoping to read an objective comparison between 2 DI frameworks...

ORM Is an Offensive Anti-Pattern

Jan 07, 2015 · grspain

Christ... what a piece of FUD! Clearly a case of 'I don't get it, so I'm against it'.

If you really want your objects to be able to be self-persisting, take a look at GORM. A small issue might be... that it's using an ORM (Hibernate) in the background, but in most cases you won't even notice it's there.

And on writing your own transaction wrappers using SQL statements: have fun in a distributed environment using JMS or AMQP, or just 2 databases at the same time.

And if you're not using an ORM, then at least use something like Spring's JdbcTemplate, which handles those nasty things like transactions for you. Oh, and you don't need to write your own rowmapping system.

To finish: just because you CAN use an ORM doesn't mean you need to use it for everything! Read up on things like CQRS and polyglot persistence to see what I'm talking about.

Animation Basics for JavaFX Beginners

Oct 03, 2014 · Alla Redko

I updated the source post, the plugin has now been published to my bintray account (see source post for more details) and will be synced to jcenter shortly.

5 great code highlighting plugins for wordpress

Aug 30, 2014 · james anderson

Ron,

Thanks for your reply. It's great to hear that the issues I pointed out are being addressed. However, I still don't see how you can use swagger in a distributed microservice environment: you'll still have a single swagger spec file per deployment. What would really help is to have an aggregator that merges multiple swagger files into a single one (something like what Netflix does with Turbine for real-time streams like Hystrix).

That being said, I also wrote a follow-up on the article describing possible alternatives to Swagger. Swagger, as it currently stands, is a top-down approach. This sort of limits the use of Swagger to documentation only (like Javadoc). Frameworks like RAML work bottom up, enabling you to specify the API first and write the implementation afterwards (unit testing as you go against the specification). With 2.0, it looks like you're going the same way. In fact, reading your site, you're almost copying what RAML does already, except for some (albeit cool) minor additions. So why would I choose Swagger 2.0 over RAML (which actually has none of Swagger's current annoyances)? It's also dead-easy to use with Spring MVC. On that subject, it still baffles me that you don't include support for Spring MVC out of the box! You support Play, Dropwizard, CXF, Resteasy, ... but you don't support the most popular web framework (I'm not saying that, the statistics do) and leave it up to an external library to do the integration. Again, another dependency and another possibility of unwanted transitive dependencies.

In any case, I'll join the Google group :).

How To Build An iPhone App: A Guide

Dec 29, 2013 · Rich LaMarche

Indeed, I don't see the added value over QueryDSL or even the JPA2 metamodel criteria queries and it doesn't support things like aggregation. It's a nice idea, but they've reinvented the wheel and it's an incomplete reinvention at that.

HTML 5 Canvas Lessons: Zooming and Reusing

Jul 28, 2013 · Mr B Loid

Really? I have yet to see a project or a client where they are using both Ant and Maven at the same time, except maybe in a transitioning phase. What sense would maintaining 2 build systems within the same company make?

The polarization is continuing today (this post is based on quite an old post from me), only now it's Gradle vs Maven. And the rants and discussion haven't gotten smaller, that's for sure. I can easily dig up numerous Maven vs Ant rants, so it's really not a load of crock, it's the reality of clashing egos in IT.

My point was: religious wars like Maven vs Ant, or now Maven vs Gradle, are disrupting the one thing that actually counts: productivity. If you're happy with Ant (whoever you are), great. If you're fine with Maven, fantastic. Like Gradle? Super! Just stop writing 'X sucks, Y is a lot better'. It all depends on the context. Use whatever makes you the most productive, and if you're passionate about your chosen technology, stop ranting about the competing techs. You probably won't convince anyone anyhow, so live with the fact that other people make other choices. Chances are their context is a lot different from yours.

Maven 3.1.0 Released, What a Disappointment

Jul 17, 2013 · Lieven Doclo

Have you read the article written by the Hibernate developers? I think that article displays a lot of reasons why Maven is insufficient for certain builds. The fact is: Gradle has evolved to a point where it's as much effort to write a build in Gradle as it is in Maven. However, with Gradle, you can do stuff which is either impossible in Maven or takes several plugins to make it work (in which case you'll need to cross your fingers and hope they are compatible with each other). I'll even go further and dare to state that complex, multi-module projects are easier to do in Gradle than in Maven. The one thing that bothers me about Gradle is the size of the gun it provides you with. With Maven, you can lose a toe if you shoot yourself in the foot. Gradle has the potential to blow your entire leg off.

Multi-threading strategies in PHP

Apr 07, 2010 · Mr B Loid

Great post! Finally a git adoption story from the trenches I can actually use.

Live bookmarks in mainly health sites

Apr 06, 2010 · Peterson Mark

@Nicolas

I agree that the best decoupling is done through XML, but then again, how many projects go down the Spring or EJB path and change their mind after a while? Bigger projects won't.

While I use @Inject as much as I can, the new annotations in Spring 3.0 are too good not to be used, and when you've already made the choice to go with Spring, you might as well go all the way. If you don't want to be coupled to any framework, your development options will be severely limited. You'll be completely standards-compliant, but at what cost?

I think the main point of this article is that when you're using Spring, you're better off using annotations (and some XML for the corner cases).

Groovy 1.1-beta-1 with annotation support

Apr 06, 2010 · Mr B Loid

Although you bring up some really interesting ideas, what you are suggesting is impossible in real software development. Unless you're simply porting an old application to a new platform without adding functionality (why such projects exist is another discussion), you won't be able to find out all the behavior your userbase needs. You might be able to get about 30 to 40 percent, maybe.

Remember, software users are a special breed of humans. As programmers, it's hard for us to find out the specifics of a business, just because we're not confronted with its quirks every day. As users, it's hard to understand the complexity that goes into a software application. To this day, I believe this mismatch has caused the most project failures.

Your statement 'software projects should start with design, not programming', therefore, is incorrect. I believe that the best way to find out user needs is to use prototyping until the sun explodes. How do you find out what a user wants? By quickly showing him what you think he wants. You'll know quickly enough whether your assumptions were right. User interface design is not the responsibility of a developer. It's the responsibility of your key users. After all, they're the ones who'll end up working with the damn thing. We're only there to materialize their needs and provide guidance when their needs are technically impossible.

While I applaud any programmer that takes up usability study (every front-end developer should have read Designing Interfaces), what we really need are middlemen. People that can transfer knowledge between the business end and the technical user interface. Unfortunately, there aren't many of those around who excel at this.

Java Build Tools: Ant vs. Maven

Jan 02, 2010 · Joseph Bradington

Well, that gives us another 3 months until the next Maven rant. Glad we have this one out of the way.
