The Latest Coding Topics

Automating the build of MSI setup packages on Jenkins
a short "how-to" based on an issue one of my work mates recently faced when trying to automate the creation of an msi package on jenkins. normally, visual studio solutions can be build on jenkins by using the appropriate msbuild plugin . apparently though, for visual studio setup projects, msbuild cannot be used and one has to switch to using visual studio itself to execute the build. so the first approach was to use devenv.exe as follows devenv.exe visualstudiosolution.sln /build "release" while this works, the problem is that it is an "async call", meaning that the compilation goes on in the background while the console from which the build is executed, immediately returns. obviously this isn't suited for being used on jenkins. searching around for a while, it turned out that you have to use devenv.com instead of devenv.exe : "c:\program files (x86)\microsoft visual studio 10.0\common7\ide\devenv.com"visualstudiosolution.sln /build "release" once you got that, integrating everything into jenkins is quite straightforward: (obviously you may also simply set an environment variable pointing to devenv.com on your build server rather than indicating the entire path)
March 13, 2014
by Juri Strumpflohner
· 13,291 Views
Spring Boot & JavaConfig integration
Java EE in general, and Contexts and Dependency Injection in particular, have been part of the Vaadin ecosystem for ages. Spring Vaadin is a recent joint effort of the Vaadin and Spring teams to bring the Spring framework into the Vaadin ecosystem, led by Petter Holmström for Vaadin and Josh Long for Pivotal. The integration is based on the Spring Boot project and its sub-modules, which aim to ease creating new Spring web projects. This article assumes the reader is familiar enough with Spring Boot; if that's not the case, please take some time to understand the basic notions of the library. Note that at the time of this writing, there's no release of Spring Vaadin: you'll need to clone the project and build it yourself. The first step is to create the UI. In order to demonstrate Spring's Dependency Injection, it should use a service dependency. Let's inject the service into the UI through constructor injection to favor immutability. The only addition to a standard UI is to annotate it with org.vaadin.spring.@VaadinUI. @VaadinUI public class VaadinSpringExampleUi extends UI { private HelloService helloService; public VaadinSpringExampleUi(HelloService helloService) { this.helloService = helloService; } @Override protected void init(VaadinRequest vaadinRequest) { String hello = helloService.sayHello(); setContent(new Label(hello)); } } The second step is standard Spring Java configuration. Let's create two configuration classes, one for the main context and the other for the web one. Two things of note: the method instantiating the previous UI has to be annotated with org.vaadin.spring.@UIScope in addition to the standard Spring org.springframework.context.annotation.@Bean, to bind the bean lifecycle to the new scope provided by the Spring Vaadin library. Also, at the time of this writing, a RequestContextListener bean must be provided. In order to be compliant with future versions of the library, it's good practice to annotate the instantiating method with @ConditionalOnMissingBean(RequestContextListener.class). @Configuration public class MainConfig { @Bean public HelloService helloService() { return new HelloService(); } } @Configuration public class WebConfig extends MainConfig { @Bean @ConditionalOnMissingBean(RequestContextListener.class) public RequestContextListener requestContextListener() { return new RequestContextListener(); } @Bean @UIScope public VaadinSpringExampleUi exampleUi() { return new VaadinSpringExampleUi(helloService()); } } The final step is to create a dedicated WebApplicationInitializer. Spring Boot already offers a concrete implementation; we just need to reference our previous configuration classes as well as those provided by Spring Vaadin, namely VaadinAutoConfiguration and VaadinConfiguration. public class ApplicationInitializer extends SpringBootServletInitializer { @Override protected SpringApplicationBuilder configure(SpringApplicationBuilder application) { return application.showBanner(false) .sources(MainConfig.class) .sources(VaadinAutoConfiguration.class, VaadinConfiguration.class) .sources(WebConfig.class); } } At this point, we have a working Spring Vaadin sample application. Code for this article can be browsed and forked on GitHub.
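The HelloService used above is not shown in the article; a minimal sketch of what it might look like (the class name and sayHello() signature come from the usage above, the body is an assumption):

```java
// Minimal, assumed implementation of the service injected into the UI.
public class HelloService {
    public String sayHello() {
        return "Hello from Spring Boot and Vaadin!";
    }
}
```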
March 10, 2014
by Nicolas Fränkel
· 13,301 Views
Exporting Spring Data JPA Repositories as REST Services using Spring Data REST
Spring Data modules provides various modules to work with various types of datasources like RDBMS, NOSQL stores etc in unified way. In my previous article SpringMVC4 + Spring Data JPA + SpringSecurity configuration using JavaConfig I have explained how to configure Spring Data JPA using JavaConfig. Now in this post let us see how we can use Spring Data JPA repositories and export JPA entities as REST endpoints using Spring Data REST. First let us configure spring-data-jpa and spring-data-rest-webmvc dependencies in our pom.xml. org.springframework.data spring-data-jpa 1.5.0.RELEASE org.springframework.data spring-data-rest-webmvc 2.0.0.RELEASE Make sure you have latest released versions configured correctly, otherwise you will encounter the following error: java.lang.ClassNotFoundException: org.springframework.data.mapping.SimplePropertyHandler Create JPA entities. @Entity @Table(name = "USERS") public class User implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "user_id") private Integer id; @Column(name = "username", nullable = false, unique = true, length = 50) private String userName; @Column(name = "password", nullable = false, length = 50) private String password; @Column(name = "firstname", nullable = false, length = 50) private String firstName; @Column(name = "lastname", length = 50) private String lastName; @Column(name = "email", nullable = false, unique = true, length = 50) private String email; @Temporal(TemporalType.DATE) private Date dob; private boolean enabled=true; @OneToMany(fetch=FetchType.EAGER, cascade=CascadeType.ALL) @JoinColumn(name="user_id") private Set roles = new HashSet<>(); @OneToMany(mappedBy = "user") private List contacts = new ArrayList<>(); //setters and getters } @Entity @Table(name = "ROLES") public class Role implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "role_id") private Integer id; @Column(name="role_name",nullable=false) private String roleName; //setters and getters } @Entity @Table(name = "CONTACTS") public class Contact implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "contact_id") private Integer id; @Column(name = "firstname", nullable = false, length = 50) private String firstName; @Column(name = "lastname", length = 50) private String lastName; @Column(name = "email", nullable = false, unique = true, length = 50) private String email; @Temporal(TemporalType.DATE) private Date dob; @ManyToOne @JoinColumn(name = "user_id") private User user; //setters and getters } Configure DispatcherServlet using AbstractAnnotationConfigDispatcherServletInitializer. Observe that we have added RepositoryRestMvcConfiguration.class to getServletConfigClasses() method. RepositoryRestMvcConfiguration is the one which does the heavy lifting of looking for Spring Data Repositories and exporting them as REST endpoints. 
package com.sivalabs.springdatarest.web.config; import javax.servlet.Filter; import org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration; import org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter; import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer; import com.sivalabs.springdatarest.config.AppConfig; public class SpringWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer { @Override protected Class[] getRootConfigClasses() { return new Class[] { AppConfig.class}; } @Override protected Class[] getServletConfigClasses() { return new Class[] { WebMvcConfig.class, RepositoryRestMvcConfiguration.class }; } @Override protected String[] getServletMappings() { return new String[] { "/rest/*" }; } @Override protected Filter[] getServletFilters() { return new Filter[]{ new OpenEntityManagerInViewFilter() }; } } Create Spring Data JPA repositories for JPA entities. public interface UserRepository extends JpaRepository { } public interface RoleRepository extends JpaRepository { } public interface ContactRepository extends JpaRepository { } That's it. Spring Data REST will take care of rest of the things. You can use spring Rest Shell https://github.com/spring-projects/rest-shell or Chrome's Postman Addon to test the exported REST services. D:\rest-shell-1.2.1.RELEASE\bin>rest-shell http://localhost:8080:> Now we can change the baseUri using baseUri command as follows: http://localhost:8080:>baseUri http://localhost:8080/spring-data-rest-demo/rest/ http://localhost:8080/spring-data-rest-demo/rest/> http://localhost:8080/spring-data-rest-demo/rest/>list rel href ====================================================================================== users http://localhost:8080/spring-data-rest-demo/rest/users{?page,size,sort} roles http://localhost:8080/spring-data-rest-demo/rest/roles{?page,size,sort} contacts http://localhost:8080/spring-data-rest-demo/rest/contacts{?page,size,sort} Note: It seems there is an issue with rest-shell when the DispatcherServlet url mapped to "/" and issue list command it responds with "No resources found". 
http://localhost:8080/spring-data-rest-demo/rest/>get users/ { "_links": { "self": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/{?page,size,sort}", "templated": true }, "search": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/search" } }, "_embedded": { "users": [ { "userName": "admin", "password": "admin", "firstName": "Administrator", "lastName": null, "email": "admin@gmail.com", "dob": null, "enabled": true, "_links": { "self": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1" }, "roles": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1/roles" }, "contacts": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1/contacts" } } }, { "userName": "siva", "password": "siva", "firstName": "Siva", "lastName": null, "email": "sivaprasadreddy.k@gmail.com", "dob": null, "enabled": true, "_links": { "self": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2" }, "roles": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2/roles" }, "contacts": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2/contacts" } } } ] }, "page": { "size": 20, "totalElements": 2, "totalPages": 1, "number": 0 } } You can find the source code at https://github.com/sivaprasadreddy/sivalabs-blog-samples-code/tree/master/spring-data-rest-demo For more Info on Spring Rest Shell: https://github.com/spring-projects/rest-shell
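Note that the generic type parameters on the repository interfaces above appear to have been lost in formatting. Based on the entity classes, they were presumably declared along these lines (a reconstruction, not verbatim from the original post):

```java
// Reconstructed repository declarations with the stripped generics restored.
public interface UserRepository extends JpaRepository<User, Integer> { }
public interface RoleRepository extends JpaRepository<Role, Integer> { }
public interface ContactRepository extends JpaRepository<Contact, Integer> { }
```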
March 7, 2014
by Siva Prasad Reddy Katamreddy
· 29,688 Views
XML to Avro Conversion
We all know what XML is, right? Just in case not, no problem: here is what it is all about. 5 Now, what the computer really needs is the number five and some context around it. In XML you (human and computer) can see how it represents context to five. Now let's say instead you have a business XML document like FPML 32.00 150000 1.00 EUR 405000 2001-07-17Z NONE EUR 2.70 ISDA2002 ISDA2002Equity TODO GBEN Party A Party B That is a lot of extra, unnecessary data points. Now let's look at this using Apache Avro. With Avro, the context and the values are separated. This means the schema/structure of what the information is does not get stored or streamed over and over and over and over (and over) again. The Avro schema is hashed. So the data structure only holds the values, and the computer understands the fingerprint (the hash) of the schema and can retrieve the schema using the fingerprint. 0x d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592 This type of implementation is pretty typical in the data space. When you do this you can reduce your data by between 20% and 80%. When I tell folks this they immediately ask, "Why such a large gap of unknowns?" The answer is that not every XML document is created the same. But that is exactly the problem: you are duplicating the information the computer needs to understand the data. XML is nice for humans to read, sure ... but it is not optimized for the computer. Here is a converter we are working on, https://github.com/stealthly/xml-avro, to help get folks off of XML and onto lower-cost, open source systems. This allows you to keep parts of your systems (specifically the domain business code) using XML without having to change them (risk mitigation), while storing and streaming the data with less overhead (optimizing budget).
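To make the XML-versus-Avro contrast concrete, here is a small sketch using Avro's generic API. The schema and field names are invented for illustration and are not taken from the article or the FPML sample:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class AvroSketch {
    public static void main(String[] args) {
        // The structure is declared once, in the schema...
        String schemaJson = "{\"type\":\"record\",\"name\":\"Trade\",\"fields\":["
                + "{\"name\":\"notional\",\"type\":\"long\"},"
                + "{\"name\":\"currency\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        // ...and each record carries only the values, not the tags.
        GenericRecord trade = new GenericData.Record(schema);
        trade.put("notional", 150000L);
        trade.put("currency", "EUR");
        System.out.println(trade);
    }
}
```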
March 7, 2014
by Joe Stein
· 25,592 Views
How to Install R Packages with Ansible
Here is a short snippet of an Ansible playbook that installs R and any required packages on the nodes of a cluster: - name: Making sure R is installed apt: pkg=r-base state=installed - name: adding a few R packages command: /usr/bin/Rscript --slave --no-save --no-restore-history -e "if (! ('{{item}}' %in% installed.packages()[,'Package'])) install.packages(pkgs='{{item}}', repos=c('http://www.freestatistics.org/cran/'))" with_items: - rjson - rPython - plyr - psych - reshape2 You should replace the repos with one chosen from the list of CRAN mirrors. Note that the command above installs each package only if it is not already present, but it messes up the "changed" status of Ansible's PLAY RECAP by incorrectly reporting a change per R package at every run. Find more big data technical posts on my blog.
March 5, 2014
by Svend Vanderveken
· 5,983 Views
Lessons Learned: ActiveMQ, Apache Camel and Connection Pooling
Every once in a while, I run into an interesting problem related to connections and pooling with ActiveMQ, and today I’d like to discuss something that is not always very clear and could potentially cause you to drink heavily when using ActiveMQ and Camel JMS. Not to say that you won’t want to drink heavily when using ActiveMQ and Camel anyway… in celebration of how delightful integration and messaging become when using them of course. So first up. Connection pooling. Sure, you’ve always heard to pool your connections. What does that really mean, and why do you want to do it? Opening up a connection to an ActiveMQ broker is a relativley expensive operation when compared to other actions like creating a session or consumer. So when sending or receiving messages and generally interacting with the broker, you’d like to reuse existing connections if possible. What you don’t want to do is rely on a JMS library (like Spring JmsTemplate for example) that opens and closes connections for each send or receive of a message… unless you can pool/cache your connections. So if we can agree that pooling connections is a good idea, take a look at an example config: You may even want to use Apache Camel and its wonderful camel-jms component because doing otherwise would just be silly. So maybe you want to set up a JMS config similar to so: This config basically means for consumers, set up 15 concurrent consumers, use transactions (local), use PERSISTENT messages for producers, set a timeout for 10000 for request-reply etc, etc. Huge note: If you want a more thorough taste of the configs for the jms component, especially around caching consumers, transactions and more, please take a look at Torsten’s excellent blog on Camel JMS with transactions – lesson learned. Maybe you should also spend some time poking around his blog as he’s got lots of good Camel/ActiveMQ stuff too Awesome so far. We have a connection pool of 10 connections, we will expect 10 sessions per connection (for a total of 100 sessions if we needed that…), and 15 concurrent consumers. We should be able to deal with some serious load, right? Take a look at this route here. It’s simple enough, exposes the activemq component (which will use the jmsConfig from above, so 15 concurrent consumers) and just does some logging: from("activemq:test.queue") .routeId("test.queue.routeId") .to("log:org.apache.camel.blog?groupSize=100"); Try and run this. You will find your consumers blocked up right away and stack traces will show this beauty: "Camel (camel-1) thread #1 - JmsConsumer[test.queue]" daemon prio=5 tid=7f81eb4bc000 nid=0x10abbb000 in Object.wait() [10abba000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <7f40e9070> (a org.apache.commons.pool.impl.GenericKeyedObjectPool$Latch) at java.lang.Object.wait(Object.java:485) at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1151) - locked <7f40e9070> (a org.apache.commons.pool.impl.GenericKeyedObjectPool$Latch) at org.apache.activemq.pool.ConnectionPool.createSession(ConnectionPool.java:146) at org.apache.activemq.pool.PooledConnection.createSession(PooledConnection.java:173) at org.springframework.jms.support.JmsAccessor.createSession(JmsAccessor.java:196) .... How can that possibly be? We have connection pooling… we have sessions per connection set to 10 per connection, so how are we all blocked up on creating new sessions? 
The answer is that you're exhausting the number of sessions, as you can tell from the stack trace. But how? And how much do I need to drink to resolve this? Well hold on now. Grab a beer and hear me out. First, understand this: ActiveMQ's pooling implementation uses commons-pool, and the maxActiveSessionsPerConnection attribute is actually mapped to the maxActive property of the underlying pool. From the docs this means: maxActive controls the maximum number of objects (per key) that can be allocated by the pool (checked out to client threads, or idle in the pool) at one time. The key here is "key" (literally… the 'per key' clause of the documentation). So in the ActiveMQ implementation the key is an object that represents 1) whether the session mode is transacted and 2) what the acknowledgement mode is. So in plain terms, you'll end up with "maxActive" sessions for each key that's used on that connection. So if you have clients that use transactions, no transactions, client-ack, auto-ack, transacted-session, dups-okay, etc., you can start to see that you'd end up with "maxActive" sessions for each permutation. So if you have maxActiveSessionsPerConnection set to 10, you could really end up with 10 x 2 x 4 == 80 sessions. This is something to tuck away in the back of your mind. The second key here is that when the camel-jms component sets up consumers, it ends up sharing a single connection among all the consumers specified by the concurrentConsumers setting. This is an interesting point, because camel-jms uses the underlying Spring framework's DefaultMessageListenerContainer, and unfortunately this restriction comes from that library. So if you have 15 concurrent consumers, they will all share a single connection (even with pooling… it will grab one connection from the pool and hold it). So if you have 15 consumers that each share a connection, each share a transacted mode, and each share an ack mode, then you end up trying to create 15 sessions for that one connection. And you end up with the above. So my rules of thumb for avoiding these scenarios: Understand exactly what each of your producers and consumers is doing and what their TX and ACK modes are. Always tune the max sessions param when you NEED to (too many session threads? I dunno…), but use at least concurrentConsumers+1 as the value. If producers and consumers are producing/consuming the same destination, SPLIT UP THE CONNECTION POOL: one pool for consumers, one pool for producers. Dunno how valuable this info will be, but I wanted to jot it down for myself. If someone else finds it valuable, or has questions, let me know in the comments.
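The Spring XML snippets for the pooled connection factory and the Camel JMS configuration referenced above did not survive the formatting. A rough Java equivalent of that kind of setup might look like the following; the values mirror the prose, but the wiring and exact method names are my assumptions (based on the activemq-pool and camel-jms APIs), not the article's original config:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.component.jms.JmsConfiguration;

public class PoolingSetup {
    public static JmsConfiguration jmsConfig() {
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(amq);
        pooled.setMaxConnections(10);                    // pool of 10 connections
        pooled.setMaximumActiveSessionPerConnection(10); // sessions are pooled per key!

        JmsConfiguration config = new JmsConfiguration(pooled);
        config.setConcurrentConsumers(15);   // 15 consumers sharing one connection
        config.setTransacted(true);          // local transactions
        config.setDeliveryPersistent(true);  // PERSISTENT messages for producers
        config.setRequestTimeout(10000);     // request-reply timeout
        return config;
    }
}
```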
March 4, 2014
by Christian Posta
· 25,534 Views · 2 Likes
When to Use MongoDB Rather than MySQL (or Other RDBMS): The Billing Example
NoSQL has been a hot topic for a pretty long time (well, it's not just a buzzword anymore). However, when should we really use it instead of an RDBMS?
March 3, 2014
by Moshe Kaplan
· 378,063 Views · 12 Likes
Step-by-Step: Live Migrate Multiple (Clustered) VMs in One Line of PowerShell - Revisited
A while back, I wrote an article showing how to Live Migrate Your VMs in One Line of Powershell between non-clustered Windows Server 2012 Hyper-V hosts using Shared Nothing Live Migration. Since then, I’ve been asked a few times for how this type of parallel Live Migration would be performed for highly available virtual machines between Hyper-V hosts within a cluster. In this article, we’ll walk through the steps of doing exactly that … via Windows PowerShell on Windows Server 2012 or 2012 R2 or our FREE Hyper-V Server 2012 R2 bare-metal, enterprise-grade hypervisor in a clustered configuration. Wait! Do I need PowerShell to Live Migrate multiple VMs within a Cluster? Well, actually … No. You could certainly use the Failover Cluster Manager GUI tool to select multiple highly available virtual machines, right-click and select Move | Live Migration … Failover Cluster Manager – Performing Multi-VM Live Migration But, you may wish to script this process for other reasons … perhaps to efficiently drain all VM’s from a host as part of a maintenance script that will be performing other tasks. Can I use the same PowerShell cmdlets for Live Migrating within a Cluster? Well, actually … No again. When VMs are made highly available resources within a cluster, they’re managed as cluster group resources instead of being standalone VM resources. As a result, we have a different set of Cluster-aware PowerShell cmdlets that we use when managing these cluster groups. To perform a scripted multi-VM Live Migration, we’ll be leveraging three of these cmdlets: Get-ClusterNode, Get-ClusterGroup and Move-ClusterVirtualMachineRole Now, let’s see that one line of PowerShell! Before getting to the point of actually performing the multi-VM Live Migration in a single PowerShell command line, we first need to setup a few variables to handle the "what" and "where" of moving these VMs. First, let’s specify the name of the cluster with which we’ll be working. We’ll store it in a $clusterName variable. $clusterName = read-host -Prompt "Cluster name" Next, we’ll need to select the cluster node to which we’ll be Live Migrating the VMs. Lets use the Get-ClusterNode and Out-GridView cmdlets together to prompt for the cluster node and store the value in a $targetClusterNode variable. $targetClusterNode = Get-ClusterNode -Cluster $clusterName | Out-GridView -Title "Select Target Cluster Node" ` -OutputMode Single And then, we’ll need to create a list of all the VMs currently running in the cluster. We can use the Get-ClusterGroup cmdlet to retrieve this list. Below, we have an example where we are combining this cmdlet with a Where-Object cmdlet to return only the virtual machine cluster groups that are running on any node except the selected target cluster node. After all, it really doesn’t make any sense to Live Migrate a VM to the same node on which it’s currently running! $haVMs = Get-ClusterGroup -Cluster $clusterName | Where-Object {($_.GroupType -eq "VirtualMachine") ` -and ($_.OwnerNode -ne $targetClusterNode.Name)} We’ve stored the resulting list of VMs in a $haVMs variable. Ready to Live Migrate! OK … Now we have all of our variables defined for the cluster, the target cluster node and the list of VMs from which to choose. 
Here's our single line of PowerShell to do the magic: $haVMs | Out-GridView -Title "Select VMs to Move" -PassThru | Move-ClusterVirtualMachineRole -MigrationType Live ` -Node $targetClusterNode.Name -Wait 0 Proceed with care: Keep in mind that your target cluster node will need to have sufficient available resources to run the VMs that you select for Live Migration. Of course, it's best to test tasks like this in your lab environment first. Here's what is happening in this single PowerShell command line: We're passing the list of VMs stored in the $haVMs variable to the Out-GridView cmdlet. Out-GridView prompts for which VMs to Live Migrate and then passes the selected VMs down the PowerShell object pipeline to the Move-ClusterVirtualMachineRole cmdlet. This cmdlet initiates the Live Migration for each selected VM, and because it's using a -Wait 0 parameter, it initiates each Live Migration one after another without waiting for the prior task to finish. As a result, all of the selected VMs will Live Migrate in parallel, up to the maximum number of concurrent Live Migrations that you've configured on these cluster nodes. The VMs selected beyond this maximum will simply queue up and wait their turn. Unlike some competing hypervisors, Hyper-V doesn't impose an artificial hard-coded limit on how many VMs you can Live Migrate concurrently. Instead, it's up to you to set the maximum to a sensible value based on your hardware and network capacity. Do you have your own PowerShell automation ideas for Hyper-V? Feel free to share them in the Comments section below. See you in the Clouds! - Keith
March 3, 2014
by Keith Mayer
· 10,038 Views
Java 8: Lambda Expressions vs Auto Closeable
If you used earlier versions of Neo4j via its Java API with Java 6 you probably have code similar to the following to ensure write operations happen within a transaction: public class StylesOfTx { public static void main( String[] args ) throws IOException { String path = "/tmp/tx-style-test"; FileUtils.deleteRecursively(new File(path)); GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path ); Transaction tx = db.beginTx(); try { db.createNode(); tx.success(); } finally { tx.close(); } } } In Neo4j 2.0 Transaction started extending AutoCloseable which meant that you could use ‘try with resources’ and the ‘close’ method would be automatically called when the block finished: public class StylesOfTx { public static void main( String[] args ) throws IOException { String path = "/tmp/tx-style-test"; FileUtils.deleteRecursively(new File(path)); GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path ); try ( Transaction tx = db.beginTx() ) { Node node = db.createNode(); tx.success(); } } } This works quite well although it’s still possible to have transactions hanging around in an application when people don’t use this syntax – the old style is still permissible. In Venkat Subramaniam’s Java 8 book he suggests an alternative approach where we use a lambda based approach: public class StylesOfTx { public static void main( String[] args ) throws IOException { String path = "/tmp/tx-style-test"; FileUtils.deleteRecursively(new File(path)); GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path ); Db.withinTransaction(db, neo4jDb -> { Node node = neo4jDb.createNode(); }); } static class Db { public static void withinTransaction(GraphDatabaseService db, Consumer fn) { try ( Transaction tx = db.beginTx() ) { fn.accept(db); tx.success(); } } } } The ‘withinTransaction’ function would actually go on GraphDatabaseService or similar rather than being on that Db class but it was easier to put it on there for this example. A disadvantage of this style is that you don’t have explicit control over the transaction for handling the failure case – it’s assumed that if ‘tx.success()’ isn’t called then the transaction failed and it’s rolled back. I’m not sure what % of use cases actually need such fine grained control though. Brian Hurt refers to this as the ‘hole in the middle pattern‘ and I imagine we’ll start seeing more code of this ilk once Java 8 is released and becomes more widely used.
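If you do want explicit control over the failure case while keeping the lambda style, one possible variant (my sketch, not from the book or the article) is to mark the transaction as failed when the consumer throws:

```java
import java.util.function.Consumer;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

class Db {
    public static void withinTransaction(GraphDatabaseService db,
                                         Consumer<GraphDatabaseService> fn) {
        try (Transaction tx = db.beginTx()) {
            try {
                fn.accept(db);
                tx.success();
            } catch (RuntimeException e) {
                tx.failure(); // explicitly mark the transaction for rollback
                throw e;
            }
        }
    }
}
```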
March 3, 2014
by Mark Needham
· 8,035 Views
Python CSV Files: Reading and Writing
Learn to parse CSV (Comma Separated Values) files with Python examples using the csv module's reader function and DictReader class.
March 3, 2014
by Mike Driscoll
· 375,139 Views · 6 Likes
Generating a War File From a Plain IntelliJ Web Project
Sometimes you just want to create a quick web project in IntelliJ IDEA, and you would use its wizard with a web or Java EE module as a starter project. But these projects will have neither an Ant nor a Maven script generated for you automatically, and IDEA's Build would only compile your classes. So if you want a war file generated, try the following: 1) Menu: File > Project Structure > Artifacts 2) Click the green + icon and create a "Web Application: Archive", then OK 3) Menu: Build > Build Artifacts ... > Web: war By default it should be generated under your /out/artifacts/web_war.war Note that IntelliJ also allows you to set up a "Web Application: Exploded" artifact, which is great for development that runs and deploys to an application server within your IDE.
March 3, 2014
by Zemian Deng
· 75,091 Views · 2 Likes
Jersey: Ignoring SSL certificate – javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException
Last week Alistair and I were working on an internal application and we needed to make a HTTPS request directly to an AWS machine using a certificate signed to a different host. We use jersey-client so our code looked something like this: Client client = Client.create(); client.resource("https://some-aws-host.compute-1.amazonaws.com").post(); // and so on When we ran this we predictably ran into trouble: com.sun.jersey.api.client.ClientHandlerException: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative DNS name matching some-aws-host.compute-1.amazonaws.com found. at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149) at com.sun.jersey.api.client.Client.handle(Client.java:648) at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670) at com.sun.jersey.api.client.WebResource.post(WebResource.java:241) at com.neotechnology.testlab.manager.bootstrap.ManagerAdmin.takeBackup(ManagerAdmin.java:33) at com.neotechnology.testlab.manager.bootstrap.ManagerAdminTest.foo(ManagerAdminTest.java:11) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at org.junit.runners.ParentRunner.run(ParentRunner.java:300) at org.junit.runner.JUnitCore.run(JUnitCore.java:157) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:202) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:65) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) Caused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative DNS name matching some-aws-host.compute-1.amazonaws.com found. 
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1884) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:276) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:270) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1341) at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:153) at sun.security.ssl.Handshaker.processLoop(Handshaker.java:868) at sun.security.ssl.Handshaker.process_record(Handshaker.java:804) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1016) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1339) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1323) at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300) at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338) at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240) at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147) ... 31 more Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching some-aws-host.compute-1.amazonaws.com found. at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:191) at sun.security.util.HostnameChecker.match(HostnameChecker.java:93) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:347) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:203) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:126) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1323) ... 45 more We figured that we needed to get our client to ignore the certificate and came across this Stack Overflow thread which had some suggestions on how to do this. None of the suggestions worked on their own but we ended up with a combination of a couple of the suggestions which did the trick: public Client hostIgnoringClient() { try { SSLContext sslcontext = SSLContext.getInstance( "TLS" ); sslcontext.init( null, null, null ); DefaultClientConfig config = new DefaultClientConfig(); Map properties = config.getProperties(); HTTPSProperties httpsProperties = new HTTPSProperties( new HostnameVerifier() { @Override public boolean verify( String s, SSLSession sslSession ) { return true; } }, sslcontext ); properties.put( HTTPSProperties.PROPERTY_HTTPS_PROPERTIES, httpsProperties ); config.getClasses().add( JacksonJsonProvider.class ); return Client.create( config ); } catch ( KeyManagementException | NoSuchAlgorithmException e ) { throw new RuntimeException( e ); } } You’re welcome Future Mark.
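For completeness, the client returned by hostIgnoringClient() is then used just like a normal Jersey client; the host below is the same placeholder used earlier and the response handling is illustrative only:

```java
Client client = hostIgnoringClient();
ClientResponse response = client
        .resource("https://some-aws-host.compute-1.amazonaws.com")
        .post(ClientResponse.class);
System.out.println(response.getStatus());
```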
March 2, 2014
by Mark Needham
· 42,541 Views · 3 Likes
How to "Backcast" a Time Series in R
Sometimes it is useful to "backcast" a time series, that is, to forecast in reverse time. Although there are no built-in R functions to do this, it is very easy to implement. Suppose x is our time series and we want to backcast for h periods. Here is some code that should work for most univariate time series. The example is non-seasonal, but the code will also work with seasonal data. library(forecast) x <- WWWusage h <- 20 f <- frequency(x) # Reverse time revx <- ts(rev(x), frequency=f) # Forecast fc <- forecast(auto.arima(revx), h) plot(fc) # Reverse time again fc$mean <- ts(rev(fc$mean),end=tsp(x)[1] - 1/f, frequency=f) fc$upper <- fc$upper[h:1,] fc$lower <- fc$lower[h:1,] fc$x <- x # Plot result plot(fc, xlim=c(tsp(x)[1]-h/f, tsp(x)[2]))
February 28, 2014
by Rob J Hyndman
· 5,491 Views
Hibernate Query by Example (QBE)
What is It Query by example is an alternative querying technique supported by the main JPA vendors but not by the JPA specification itself. QBE returns a result set depending on the properties that were set on an instance of the queried class. So if I create an Address entity and fill in the city field then the query will select all the Address entities having the same city field as the given Address entity. The typical use case of QBE is evaluating a search form where the user can fill in any search fields and gets the results based on the given search fields. In this case QBE can reduce code size significantly. When to Use · Using many fields of an entity in a query · User selects which fields of an Entity to use in a query · We are refactoring the entities frequently and don’t want to worry about breaking the queries that rely on them Limitations · QBE is not available in JPA 1.0 or 2.0 · Version properties, identifiers and associations are ignored · The query object should be annotated with @Entity Test Data I used the following entities to test the QBE feature of Hibernate: · Address (long id, String city, String street, String countryISO2Code, AddressType addressType) · AddressType (Integer type, String description) Imports The examples will refer to the following classes: import org.hibernate.Criteria; import org.hibernate.Session; import org.hibernate.criterion.Example; import org.hibernate.criterion.Restrictions; import org.junit.Test; import java.util.List; Utility Methods I also made two utility methods to present a list of the two entity types: private void listAddresses(List addresses) { for (Address address : addresses) { System.out.println(address.getId() + ", " + address.getCountryISO2Code() + ", " + address.getCity() + ", " + address.getStreet() + ", " + address.getAddressType().getType() + ", " + address.getAddressType().getDescription()); } } private void listAddressTypes(List addressTypes) { for (AddressType addressType : addressTypes) { System.out.println(addressType.getType() + ", " + addressType.getDescription()); } } Example 1: Equals This example code returns the Address entities matching the given CountryISO2Code and City. Method: @Test public void testEquals() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); address.setCity("CHICAGO"); Example addressExample = Example.create(address); Criteria criteria = session.createCriteria(Address.class).add(addressExample); listAddresses(criteria.list()); } Result: 75, US, CHICAGO, Los Angeles Way2, 6, Customer 170, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 63, US, CHICAGO, Main Avenue 1, 5, Bill to 37, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 36, US, CHICAGO, Jackson Blvd 33a, 4, Delivery Example 2: Id Limitation This example presents that id fields in the query object are ignored. 
Method: @Test public void testIdLimitation() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); address.setCity("CHICAGO"); address.setId(100); // setting id is ignored Example addressExample = Example.create(address); Criteria criteria = session.createCriteria(Address.class).add(addressExample); listAddresses(criteria.list()); } Result: 75, US, CHICAGO, Los Angeles Way2, 6, Customer 170, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 63, US, CHICAGO, Main Avenue 1, 5, Bill to 37, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 36, US, CHICAGO, Jackson Blvd 33a, 4, Delivery Example 3: Association Limitation Associations of the query object are ignored, too. Method: @Test public void testAssociationLimitation() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); address.setCity("CHICAGO"); AddressType addressType = new AddressType(); addressType.setType(5); address.setAddressType(addressType); // setting an association is ignored Example addressExample = Example.create(address); Criteria criteria = session.createCriteria(Address.class).add(addressExample); listAddresses(criteria.list()); } Result: 75, US, CHICAGO, Los Angeles Way2, 6, Customer 170, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 63, US, CHICAGO, Main Avenue 1, 5, Bill to 37, US, CHICAGO, Jackson Blvd 33a, 4, Delivery 36, US, CHICAGO, Jackson Blvd 33a, 4, Delivery Example 4: Like QBE supports like in the query object if we enable it with Example.enableLike(). Method: @Test public void testLike() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); address.setCity("AT%"); Example addressExample = Example.create(address).enableLike(); Criteria criteria = session.createCriteria(Address.class).add(addressExample); listAddresses(criteria.list()); } Result: 83, US, ATLANTA, null, 6, Customer 184, US, ATLANTA, null, 1, Shipper 25, US, ATLANTA, null, 1, Shipper Example 5: ExcludeProperty We can exclude a property with Example.excludeProperty(String propertyName). Method: @Test public void testExcludeProperty() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); address.setCity("AT%"); Example addressExample = Example.create(address).enableLike() .excludeProperty("countryISO2Code"); // countryISO2Code is a property of Address Criteria criteria = session.createCriteria(Address.class).add(addressExample); listAddresses(criteria.list()); } Result: 154, GR, ATHENS, BETA ALPHA Street 5, 2, Consignee 83, US, ATLANTA, null, 6, Customer 25, US, ATLANTA, null, 1, Shipper 184, US, ATLANTA, null, 1, Shipper Example 6: IgnoreCase Case-insensitive search is supported by Example.ignoreCase(). Method: @Test public void testIgnoreCase() throws Exception { Session session = (Session) entityManager.getDelegate(); AddressType addressType = new AddressType(); addressType.setDescription("customer"); Example addressTypeExample = Example.create(addressType).ignoreCase(); Criteria criteria = session.createCriteria(AddressType.class) .add(addressTypeExample); listAddressTypes(criteria.list()); } Result: 6, Customer Example 7: ExcludeZeroes We can ignore 0 values of the query object by Example.excludeZeroes(). 
Method: @Test public void testExcludeZeroes() throws Exception { Session session = (Session) entityManager.getDelegate(); AddressType addressType = new AddressType(); addressType.setType(0); addressType.setDescription("Customer"); Example addressTypeExample = Example.create(addressType) .excludeZeroes(); Criteria criteria = session.createCriteria(AddressType.class) .add(addressTypeExample); listAddressTypes(criteria.list()); } Result: 6, Customer Example 8: Combining with Criteria QBE can be combined with criteria query. In this example we add further restriction to the query object using criteria query. Method: @Test public void testCombiningWithCriteria() throws Exception { Session session = (Session) entityManager.getDelegate(); AddressType addressType = new AddressType(); addressType.setDescription("Customer"); Example addressTypeExample = Example.create(addressType); Criteria criteria = session .createCriteria(AddressType.class).add(addressTypeExample) .add(Restrictions.eq("type", 6)); listAddressTypes(criteria.list()); } Result: 6, Customer Example 9: Association With criteria query we can filter both sides of an association, using two query objects. Method: @Test public void testAssociation() throws Exception { Session session = (Session) entityManager.getDelegate(); Address address = new Address(); address.setCountryISO2Code("US"); AddressType addressType = new AddressType(); addressType.setType(6); Example addressExample = Example.create(address); Example addressTypeExample = Example.create(addressType); Criteria criteria = session.createCriteria(Address.class).add(addressExample) .createCriteria("addressType").add(addressTypeExample); // addressType is a property of Address listAddresses(criteria.list()); } Result: 84, US, BOSTON, null, 6, Customer 83, US, ATLANTA, null, 6, Customer 82, US, SAN FRANCISCO, null, 6, Customer 75, US, CHICAGO, Los Angeles Way2, 6, Customer EclipseLink EclipseLink QBE uses QueryByExamplePolicy, ReadObjectQuery and JpaHelper: QueryByExamplePolicy qbePolicy =newQueryByExamplePolicy(); qbePolicy.excludeDefaultPrimitiveValues(); Address address =newAddress(); address.setCity("CHICAGO"); ReadObjectQuery roq =newReadObjectQuery(address, qbePolicy); Query query =JpaHelper.createQuery(roq, entityManager); OpenJPA OpenJPA uses OpenJPAQueryBuilder: CriteriaQuery cq = openJPAQueryBuilder.createQuery(Address.class); Address address =newAddress(); address.setCity("CHICAGO"); cq.where(openJPAQueryBuilder.qbe(cq.from(Address.class), address); References Hibernate: · Srinivas Guruzu and Gary Mak: Hibernate Recipes: A Problem-Solution Approach (Apress) · http://docs.jboss.org/hibernate/core/3.3/reference/en/html/querycriteria.html#querycriteria-examples · http://www.java2s.com/Code/Java/Hibernate/CriteriaQBEQueryByExampleCriteria.htm · http://www.dzone.com/snippets/hibernate-query-example · http://gal-levinsky.blogspot.de/2012/01/qbe-pattern.html Hibernate associations: · http://stackoverflow.com/questions/9309884/query-by-example-on-associations · http://stackoverflow.com/questions/8236596/hibernate-query-by-example-equivalent-of-association-criteria-query JPA: · http://stackoverflow.com/questions/2880209/jpa-findbyexample EclipseLink: · http://www.coderanch.com/t/486528/ORM/databases/findByExample-JPA-book OpenJPA: · http://www.ibm.com/developerworks/java/library/j-typesafejpa/#N10C18
February 27, 2014
by Donat Szilagyi
· 62,175 Views · 3 Likes
A Deeper Look into the Java 8 Date and Time API
Within this post we will have a deeper look into the new Date/Time API we get with Java 8 (JSR 310). Please note that this post is mainly driven by code examples that show the new API functionality. I think the examples are self-explanatory so I did not spent much time writing text around them :-) Let's get started! Working with Date and Time Objects All classes of the Java 8 Date/Time API are located within the java.time package. The first class we want to look at is java.time.LocalDate. A LocalDate represents a year-month-day date without time. We start with creating new LocalDate instances: // the current date LocalDate currentDate = LocalDate.now(); // 2014-02-10 LocalDate tenthFeb2014 = LocalDate.of(2014, Month.FEBRUARY, 10); // months values start at 1 (2014-08-01) LocalDate firstAug2014 = LocalDate.of(2014, 8, 1); // the 65th day of 2010 (2010-03-06) LocalDate sixtyFifthDayOf2010 = LocalDate.ofYearDay(2010, 65); LocalTime and LocalDateTime are the next classes we look at. Both work similar to LocalDate. ALocalTime works with time (without dates) while LocalDateTime combines date and time in one class: LocalTime currentTime = LocalTime.now(); // current time LocalTime midday = LocalTime.of(12, 0); // 12:00 LocalTime afterMidday = LocalTime.of(13, 30, 15); // 13:30:15 // 12345th second of day (03:25:45) LocalTime fromSecondsOfDay = LocalTime.ofSecondOfDay(12345); // dates with times, e.g. 2014-02-18 19:08:37.950 LocalDateTime currentDateTime = LocalDateTime.now(); // 2014-10-02 12:30 LocalDateTime secondAug2014 = LocalDateTime.of(2014, 10, 2, 12, 30); // 2014-12-24 12:00 LocalDateTime christmas2014 = LocalDateTime.of(2014, Month.DECEMBER, 24, 12, 0); By default LocalDate/Time classes will use the system clock in the default time zone. We can change this by providing a time zone or an alternative Clock implementation: // current (local) time in Los Angeles LocalTime currentTimeInLosAngeles = LocalTime.now(ZoneId.of("America/Los_Angeles")); // current time in UTC time zone LocalTime nowInUtc = LocalTime.now(Clock.systemUTC()); From LocalDate/Time objects we can get all sorts of useful information we might need. Some examples: LocalDate date = LocalDate.of(2014, 2, 15); // 2014-06-15 boolean isBefore = LocalDate.now().isBefore(date); // false // information about the month Month february = date.getMonth(); // FEBRUARY int februaryIntValue = february.getValue(); // 2 int minLength = february.minLength(); // 28 int maxLength = february.maxLength(); // 29 Month firstMonthOfQuarter = february.firstMonthOfQuarter(); // JANUARY // information about the year int year = date.getYear(); // 2014 int dayOfYear = date.getDayOfYear(); // 46 int lengthOfYear = date.lengthOfYear(); // 365 boolean isLeapYear = date.isLeapYear(); // false DayOfWeek dayOfWeek = date.getDayOfWeek(); int dayOfWeekIntValue = dayOfWeek.getValue(); // 6 String dayOfWeekName = dayOfWeek.name(); // SATURDAY int dayOfMonth = date.getDayOfMonth(); // 15 LocalDateTime startOfDay = date.atStartOfDay(); // 2014-02-15 00:00 // time information LocalTime time = LocalTime.of(15, 30); // 15:30:00 int hour = time.getHour(); // 15 int second = time.getSecond(); // 0 int minute = time.getMinute(); // 30 int secondOfDay = time.toSecondOfDay(); // 55800 Some information can be obtained without providing a specific date. 
For example, we can use the Year class if we need information about a specific year: Year currentYear = Year.now(); Year twoThousand = Year.of(2000); boolean isLeap = currentYear.isLeap(); // false int length = currentYear.length(); // 365 // sixtyFourth day of 2014 (2014-03-05) LocalDate date = Year.of(2014).atDay(64); We can use the plus and minus methods to add or subtract specific amounts of time. Note that these methods always return a new instance (Java 8 date/time classes are immutable). LocalDate tomorrow = LocalDate.now().plusDays(1); // before 5 houres and 30 minutes LocalDateTime dateTime = LocalDateTime.now().minusHours(5).minusMinutes(30); TemporalAdjusters are another nice way for date manipulation. TemporalAdjuster is a single method interface that is used to separate the process of adjustment from actual date/time objects. A set of common TemporalAdjusters can be accessed using static methods of the TemporalAdjusters class. LocalDate date = LocalDate.of(2014, Month.FEBRUARY, 25); // 2014-02-25 // first day of february 2014 (2014-02-01) LocalDate firstDayOfMonth = date.with(TemporalAdjusters.firstDayOfMonth()); // last day of february 2014 (2014-02-28) LocalDate lastDayOfMonth = date.with(TemporalAdjusters.lastDayOfMonth()); Static imports make this more fluent to read: import static java.time.temporal.TemporalAdjusters.*; ... // last day of 2014 (2014-12-31) LocalDate lastDayOfYear = date.with(lastDayOfYear()); // first day of next month (2014-03-01) LocalDate firstDayOfNextMonth = date.with(firstDayOfNextMonth()); // next sunday (2014-03-02) LocalDate nextSunday = date.with(next(DayOfWeek.SUNDAY)); Time Zones Working with time zones is another big topic that is simplified by the new API. The LocalDate/Time classes we have seen so far do not contain information about a time zone. If we want to work with a date/time in a certain time zone we can use ZonedDateTime or OffsetDateTime: ZoneId losAngeles = ZoneId.of("America/Los_Angeles"); ZoneId berlin = ZoneId.of("Europe/Berlin"); // 2014-02-20 12:00 LocalDateTime dateTime = LocalDateTime.of(2014, 02, 20, 12, 0); // 2014-02-20 12:00, Europe/Berlin (+01:00) ZonedDateTime berlinDateTime = ZonedDateTime.of(dateTime, berlin); // 2014-02-20 03:00, America/Los_Angeles (-08:00) ZonedDateTime losAngelesDateTime = berlinDateTime.withZoneSameInstant(losAngeles); int offsetInSeconds = losAngelesDateTime.getOffset().getTotalSeconds(); // -28800 // a collection of all available zones Set allZoneIds = ZoneId.getAvailableZoneIds(); // using offsets LocalDateTime date = LocalDateTime.of(2013, Month.JULY, 20, 3, 30); ZoneOffset offset = ZoneOffset.of("+05:00"); // 2013-07-20 03:30 +05:00 OffsetDateTime plusFive = OffsetDateTime.of(date, offset); // 2013-07-19 20:30 -02:00 OffsetDateTime minusTwo = plusFive.withOffsetSameInstant(ZoneOffset.ofHours(-2)); Timestamps Classes like LocalDate and ZonedDateTime provide a human view on time. However, often we need to work with time viewed from a machine perspective. For this we can use the Instant class which represents timestamps. An Instant counts the time beginning from the first second of January 1, 1970 (1970-01-01 00:00:00) also called the EPOCH. Instant values can be negative if they occured before the epoch. They followISO 8601 the standard for representing date and time. 
// current time Instant now = Instant.now(); // from unix timestamp, 2010-01-01 12:00:00 Instant fromUnixTimestamp = Instant.ofEpochSecond(1262347200); // same time in millis Instant fromEpochMilli = Instant.ofEpochMilli(1262347200000l); // parsing from ISO 8601 Instant fromIso8601 = Instant.parse("2010-01-01T12:00:00Z"); // toString() returns ISO 8601 format, e.g. 2014-02-15T01:02:03Z String toIso8601 = now.toString(); // as unix timestamp long toUnixTimestamp = now.getEpochSecond(); // in millis long toEpochMillis = now.toEpochMilli(); // plus/minus methods are available too Instant nowPlusTenSeconds = now.plusSeconds(10); Periods and Durations Period and Duration are two other important classes. Like the names suggest they represent a quantity or amount of time. A Period uses date based values (years, months, days) while a Duration uses seconds or nanoseconds to define an amount of time. Duration is most suitable when working with Instants and machine time. Periods and Durations can contain negative values if the end point occurs before the starting point. // periods LocalDate firstDate = LocalDate.of(2010, 5, 17); // 2010-05-17 LocalDate secondDate = LocalDate.of(2015, 3, 7); // 2015-03-07 Period period = Period.between(firstDate, secondDate); int days = period.getDays(); // 18 int months = period.getMonths(); // 9 int years = period.getYears(); // 4 boolean isNegative = period.isNegative(); // false Period twoMonthsAndFiveDays = Period.ofMonths(2).plusDays(5); LocalDate sixthOfJanuary = LocalDate.of(2014, 1, 6); // add two months and five days to 2014-01-06, result is 2014-03-11 LocalDate eleventhOfMarch = sixthOfJanuary.plus(twoMonthsAndFiveDays); // durations Instant firstInstant= Instant.ofEpochSecond( 1294881180 ); // 2011-01-13 01:13 Instant secondInstant = Instant.ofEpochSecond(1294708260); // 2011-01-11 01:11 Duration between = Duration.between(firstInstant, secondInstant); // negative because firstInstant is after secondInstant (-172920) long seconds = between.getSeconds(); // get absolute result in minutes (2882) long absoluteResult = between.abs().toMinutes(); // two hours in seconds (7200) long twoHoursInSeconds = Duration.ofHours(2).getSeconds(); Formatting and Parsing Formatting and parsing is another big topic when working with dates and times. In Java 8 this can be accomplished by using the format() and parse() methods: // 2014-04-01 10:45 LocalDateTime dateTime = LocalDateTime.of(2014, Month.APRIL, 1, 10, 45); // format as basic ISO date format (20140220) String asBasicIsoDate = dateTime.format(DateTimeFormatter.BASIC_ISO_DATE); // format as ISO week date (2014-W08-4) String asIsoWeekDate = dateTime.format(DateTimeFormatter.ISO_WEEK_DATE); // format ISO date time (2014-02-20T20:04:05.867) String asIsoDateTime = dateTime.format(DateTimeFormatter.ISO_DATE_TIME); // using a custom pattern (01/04/2014) String asCustomPattern = dateTime.format(DateTimeFormatter.ofPattern("dd/MM/yyyy")); // french date formatting (1. avril 2014) String frenchDate = dateTime.format(DateTimeFormatter.ofPattern("d. 
MMMM yyyy", new Locale("fr"))); // using short german date/time formatting (01.04.14 10:45) DateTimeFormatter formatter = DateTimeFormatter.ofLocalizedDateTime(FormatStyle.SHORT) .withLocale(new Locale("de")); String germanDateTime = dateTime.format(formatter); // parsing date strings LocalDate fromIsoDate = LocalDate.parse("2014-01-20"); LocalDate fromIsoWeekDate = LocalDate.parse("2014-W14-2", DateTimeFormatter.ISO_WEEK_DATE); LocalDate fromCustomPattern = LocalDate.parse("20.01.2014", DateTimeFormatter.ofPattern("dd.MM.yyyy")); Conversion Of course we do not always have objects of the type we need. Therefore, we need an option to convert different date/time related objects between each other. The following examples show some of the possible conversion options: // LocalDate/LocalTime <-> LocalDateTime LocalDate date = LocalDate.now(); LocalTime time = LocalTime.now(); LocalDateTime dateTimeFromDateAndTime = LocalDateTime.of(date, time); LocalDate dateFromDateTime = LocalDateTime.now().toLocalDate(); LocalTime timeFromDateTime = LocalDateTime.now().toLocalTime(); // Instant <-> LocalDateTime Instant instant = Instant.now(); LocalDateTime dateTimeFromInstant = LocalDateTime.ofInstant(instant, ZoneId.of("America/Los_Angeles")); Instant instantFromDateTime = LocalDateTime.now().toInstant(ZoneOffset.ofHours(-2)); // convert old date/calendar/timezone classes Instant instantFromDate = new Date().toInstant(); Instant instantFromCalendar = Calendar.getInstance().toInstant(); ZoneId zoneId = TimeZone.getDefault().toZoneId(); ZonedDateTime zonedDateTimeFromGregorianCalendar = new GregorianCalendar().toZonedDateTime(); // convert to old classes Date dateFromInstant = Date.from(Instant.now()); TimeZone timeZone = TimeZone.getTimeZone(ZoneId.of("America/Los_Angeles")); GregorianCalendar gregorianCalendar = GregorianCalendar.from(ZonedDateTime.now()); Conclusion With Java 8 we get a very rich API for working with date and time located in the java.time package. The API can completely replace old classes like java.util.Date or java.util.Calendar with newer, more flexible classes. Due to mostly immutable classes the new API helps in building thread safe systems. The source of the examples can be found on GitHub.
February 27, 2014
by Michael Scharhag
· 208,893 Views · 18 Likes
Custom Spring Namespaces Made Easier With JAXB
First of all, let me tell this out loud: Spring is no longer XML-heavy. As a matter of fact you can write Spring applications these days with minimal or no XML at all, using plenty of annotations, Java configuration and Spring Boot. Seriously, stop ranting about Spring and XML, it's a thing of the past. That being said you might still be using XML for a couple of reasons: you are stuck with a legacy code base, you chose XML for other reasons or you use Spring as a foundation for some framework/platform. The last case is actually quite common, for example Mule ESB and ActiveMQ use Spring underneath to wire up their dependencies. Moreover Spring XML is their way to configure the framework. However, configuring a message broker or enterprise service bus using plain Spring <bean> definitions would be cumbersome and verbose. Luckily Spring supports writing custom namespaces that can be embedded within standard Spring configuration files. These custom snippets of XML are preprocessed at runtime and can register many bean definitions at once in a concise and pleasant-looking (as far as XML allows) format. In a way custom namespaces are like macros that expand at runtime into multiple bean definitions. To give you a feeling of what we are aiming at, imagine a standard "enterprise" application that has several business entities. For each entity we define three, almost identical, beans: repository, service and controller. They are always wired in a similar way and only differ in small details. To begin with, our Spring XML looks like this (I am pasting a screenshot with a thumbnail to spare your eyes, it's huge and bloated): This is a "layered" architecture, thus we will call our custom namespace onion - because onions have layers - and also because systems designed this way make me cry. By the end of this article you will learn how to collapse this pile of XML into: Look closely, it's still a Spring XML file that is perfectly understandable by this framework - and you will learn how to achieve this. You can run arbitrary code for each top-level custom XML tag, e.g. a single occurrence of <onion:entity> registers repository, service and controller bean definitions all at once. The first thing to implement is writing a custom XML schema for our namespace. This is not that hard and will allow IntelliJ IDEA to show code completion in XML: Once the schema is completed we must register it in Spring using two files: /META-INF/spring.schemas: http\://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd=/com/blogspot/nurkiewicz/onion/ns/spring-onion.xsd /META-INF/spring.handlers: http\://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd=com.blogspot.nurkiewicz.onion.ns.OnionNamespaceHandler One maps the schema URL to a local schema location, the other points to a so-called namespace handler. This class is fairly straightforward - it tells what to do with every top-level custom XML tag from this namespace encountered in a Spring configuration file: import org.springframework.beans.factory.xml.NamespaceHandlerSupport; public class OnionNamespaceHandler extends NamespaceHandlerSupport { public void init() { registerBeanDefinitionParser("entity", new EntityBeanDefinitionParser()); registerBeanDefinitionParser("converter", new ConverterBeanDefinitionParser()); } } So, when a <onion:converter> tag is found by Spring, it knows that our ConverterBeanDefinitionParser needs to be used. Remember that if our custom tag has children (like in the case of <onion:entity>), the bean definition parser is called only for the top-level tag.
It is up to us how we parse and handle children. OK, so a single <onion:converter> tag is supposed to create the following two beans: The responsibility of a bean definition parser is to programmatically register bean definitions otherwise defined in XML. I won't go into details of the API, but compare it with the XML snippet above, they match each other closely: import org.w3c.dom.Element; public class ConverterBeanDefinitionParser extends AbstractBeanDefinitionParser { @Override protected AbstractBeanDefinition parseInternal(Element converterElement, ParserContext parserContext) { final String format = converterElement.getAttribute("format"); final String lenientStr = converterElement.getAttribute("lenient"); final boolean lenient = lenientStr != null? Boolean.valueOf(lenientStr) : true; final BeanDefinitionRegistry registry = parserContext.getRegistry(); final AbstractBeanDefinition converterBeanDef = converterBeanDef(format, lenient); registry.registerBeanDefinition(format + "Converter", converterBeanDef); final AbstractBeanDefinition readerBeanDef = readerBeanDef(format); registry.registerBeanDefinition(format + "Reader", readerBeanDef); return null; } private AbstractBeanDefinition readerBeanDef(String format) { return BeanDefinitionBuilder. rootBeanDefinition(ReaderFactoryBean.class). addPropertyValue("format", format). getBeanDefinition(); } private AbstractBeanDefinition converterBeanDef(String format, boolean lenient) { AbstractBeanDefinition converterBeanDef = BeanDefinitionBuilder. rootBeanDefinition(Converter.class.getName()). addConstructorArgValue(format + ".xml"). addConstructorArgValue(lenient). addPropertyReference("reader", format + "Reader"). getBeanDefinition(); converterBeanDef.setFactoryBeanName("convertersFactory"); converterBeanDef.setFactoryMethodName("build"); return converterBeanDef; } } Do you see how parseInternal() receives the XML Element representing the <onion:converter> tag, extracts attributes and registers bean definitions? It's up to you how many beans you define in an AbstractBeanDefinitionParser implementation. Just remember that we are merely constructing the configuration here, no instantiation has taken place yet. Once the XML file is fully parsed and all bean definition parsers triggered, Spring will start bootstrapping our application. One thing to keep in mind is returning null in the end. The API sort of expects you to return a single bean definition. However there is no need to restrict ourselves, null is fine. The second custom tag that we support is <onion:entity>, which registers three beans at once. It's similar and thus not that interesting, see the full source of EntityBeanDefinitionParser. One important implementation detail that can be found there is the usage of ManagedList. The documentation vaguely mentions it but it's quite valuable. If you want to define a list of beans to be injected knowing their IDs, a simple List is not sufficient, you must explicitly tell Spring you mean a list of bean references: List<RuntimeBeanReference> converterRefs = new ManagedList<>(); for (String converterName : converters) { converterRefs.add(new RuntimeBeanReference(converterName)); } return BeanDefinitionBuilder. rootBeanDefinition("com.blogspot.nurkiewicz.FooService"). addPropertyValue("converters", converterRefs). getBeanDefinition(); Using JAXB to simplify bean definition parsers OK, so by now you should be familiar with custom Spring namespaces and how they can help you. However, they are quite low level, requiring you to parse custom tags using the raw XML DOM API.
However, my teammate discovered that since we already have an XSD schema file, why not use JAXB to handle the XML parsing? First we ask Maven to generate Java beans representing the XML types and elements during build: org.jvnet.jaxb2.maven2 maven-jaxb22-plugin 0.8.3 xjc generate src/main/resources/com/blogspot/nurkiewicz/onion/ns com.blogspot.nurkiewicz.onion.ns.xml Under /target/generated-sources/xjc you will discover a couple of Java files. I like generated JAXB models to have some common prefix like Xml, which can easily be achieved with a custom bindings.xjb file placed next to spring-onion.xsd: How does it change our custom bean definition parser? Previously we had this: final String clazz = entityElement.getAttribute("class"); //... final NodeList pageNodes = entityElement.getElementsByTagNameNS(NS, "page"); for (int i = 0; i < pageNodes.getLength(); ++i) { //... Now we simply traverse Java beans: final XmlEntity entity = JaxbHelper.unmarshal(entityElement); final String clazz = entity.getClazz(); //... for (XmlPage page : entity.getPage()) { //... JaxbHelper is just a simple tool that hides checked exceptions and JAXB mechanics from the outside: public class JaxbHelper { private static final Unmarshaller unmarshaller = create(); private static Unmarshaller create() { try { return JAXBContext.newInstance("com.blogspot.nurkiewicz.onion.ns.xml").createUnmarshaller(); } catch (JAXBException e) { throw Throwables.propagate(e); } } public static <T> T unmarshal(Element elem) { try { return (T) unmarshaller.unmarshal(elem); } catch (JAXBException e) { throw Throwables.propagate(e); } } } A couple of words as a summary. First of all I don't encourage you to auto-generate repository/service/controller bean definitions for every entity. Actually it's a poor practice, but the domain is familiar to all of us so I thought it would be a good example. Secondly, and more importantly, custom XML namespaces are a powerful tool that should be used as a last resort when everything else fails, namely abstract beans, factory beans and Java configuration. Typically you'll want this kind of feature in frameworks or tools built on top of Spring. In that case check out the full source code on GitHub.
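For completeness, nothing special is needed to consume the beans registered by the namespace; it is still a regular Spring application context. A minimal sketch (the context file name onion-context.xml is illustrative; the bean names follow the format + "Converter" / format + "Reader" convention used by the parser above for a converter declared with format="xml"):

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class OnionNamespaceDemo {

    public static void main(String[] args) {
        // load a hypothetical Spring XML file that uses the onion namespace
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("onion-context.xml");
        try {
            // beans registered by ConverterBeanDefinitionParser for format="xml"
            System.out.println(ctx.getBean("xmlConverter"));
            System.out.println(ctx.getBean("xmlReader"));
        } finally {
            ctx.close();
        }
    }
}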
February 26, 2014
by Tomasz Nurkiewicz
· 11,566 Views
Getting Started with Mocking in Java using Mockito
We all write unit tests but the challenge we face at times is that the unit under test might be dependent on other components. And configuring other components for unit testing is definitely overkill. Instead we can make use of mocks in place of the other components and continue with the unit testing. To show how one can use mocks, I have a data access layer (DAL), basically a class which provides an API for the application to access and modify the data in the data repository. I then unit test the DAL without actually needing to connect to the data repository. The data repository can be a local database, a remote database, a file system or any place where we can store and retrieve data. The use of a DAL class helps us keep the data mappers separate from the application code. Let's create a Java project using Maven. mvn archetype:generate -DgroupId=info.sanaulla -DartifactId=MockitoDemo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false The above creates a folder MockitoDemo and then creates the entire directory structure for source and test files. Consider the below model class for this example: package info.sanaulla.models; import java.util.List; /** * Model class for the book details. */ public class Book { private String isbn; private String title; private List<String> authors; private String publication; private Integer yearOfPublication; private Integer numberOfPages; private String image; public Book(String isbn, String title, List<String> authors, String publication, Integer yearOfPublication, Integer numberOfPages, String image){ this.isbn = isbn; this.title = title; this.authors = authors; this.publication = publication; this.yearOfPublication = yearOfPublication; this.numberOfPages = numberOfPages; this.image = image; } public String getIsbn() { return isbn; } public String getTitle() { return title; } public List<String> getAuthors() { return authors; } public String getPublication() { return publication; } public Integer getYearOfPublication() { return yearOfPublication; } public Integer getNumberOfPages() { return numberOfPages; } public String getImage() { return image; } } The DAL class for operating on the Book model class is: package info.sanaulla.dal; import info.sanaulla.models.Book; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.List; /** * API layer for persisting and retrieving the Book objects. */ public class BookDAL { private static BookDAL bookDAL = new BookDAL(); public List<Book> getAllBooks(){ return Collections.EMPTY_LIST; } public Book getBook(String isbn){ return null; } public String addBook(Book book){ return book.getIsbn(); } public String updateBook(Book book){ return book.getIsbn(); } public static BookDAL getInstance(){ return bookDAL; } } The DAL layer above currently has no functionality and we are going to unit test that piece of code (TDD). The DAL layer might communicate with an ORM mapper or a database API, which we are not concerned with while designing this API. Test Driving the DAL Layer There are a lot of frameworks for unit testing and mocking in Java, but for this example I will be picking JUnit for unit testing and Mockito for mocking. We have to update the dependencies in Maven's pom.xml: 4.0.0 info.sanaulla MockitoDemo jar 1.0-SNAPSHOT MockitoDemo http://maven.apache.org junit junit 4.10 test org.mockito mockito-all 1.9.5 test Now let's unit test the BookDAL.
During the unit testing we will inject mock data into the BookDAL so that we can complete the testing of the API without depending on the data source. Initially we will have an empty test class: public class BookDALTest { public void setUp() throws Exception { } public void testGetAllBooks() throws Exception { } public void testGetBook() throws Exception { } public void testAddBook() throws Exception { } public void testUpdateBook() throws Exception { } } We will inject the mock BookDAL and mock data in the setUp() as shown below: public class BookDALTest { private static BookDAL mockedBookDAL; private static Book book1; private static Book book2; @BeforeClass public static void setUp(){ //Create mock object of BookDAL mockedBookDAL = mock(BookDAL.class); //Create a few instances of the Book class. book1 = new Book("8131721019","Compilers Principles", Arrays.asList("D. Jeffrey Ulman","Ravi Sethi", "Alfred V. Aho", "Monica S. Lam"), "Pearson Education Singapore Pte Ltd", 2008,1009,"BOOK_IMAGE"); book2 = new Book("9788183331630","Let Us C 13th Edition", Arrays.asList("Yashavant Kanetkar"),"BPB PUBLICATIONS", 2012,675,"BOOK_IMAGE"); //Stubbing the methods of the mocked BookDAL with mocked data. when(mockedBookDAL.getAllBooks()).thenReturn(Arrays.asList(book1, book2)); when(mockedBookDAL.getBook("8131721019")).thenReturn(book1); when(mockedBookDAL.addBook(book1)).thenReturn(book1.getIsbn()); when(mockedBookDAL.updateBook(book1)).thenReturn(book1.getIsbn()); } public void testGetAllBooks() throws Exception {} public void testGetBook() throws Exception {} public void testAddBook() throws Exception {} public void testUpdateBook() throws Exception {} } In the above setUp() method I have: Created a mock object of BookDAL: BookDAL mockedBookDAL = mock(BookDAL.class); Stubbed the API of BookDAL with mock data, such that whenever the API is invoked the mocked data is returned: //When getAllBooks() is invoked then return the given data, and so on for the other methods. when(mockedBookDAL.getAllBooks()).thenReturn(Arrays.asList(book1, book2)); when(mockedBookDAL.getBook("8131721019")).thenReturn(book1); when(mockedBookDAL.addBook(book1)).thenReturn(book1.getIsbn()); when(mockedBookDAL.updateBook(book1)).thenReturn(book1.getIsbn()); Populating the rest of the tests we get: package info.sanaulla.dal; import info.sanaulla.models.Book; import org.junit.BeforeClass; import org.junit.Test; import static org.junit.Assert.*; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; import java.util.Arrays; import java.util.List; public class BookDALTest { private static BookDAL mockedBookDAL; private static Book book1; private static Book book2; @BeforeClass public static void setUp(){ mockedBookDAL = mock(BookDAL.class); book1 = new Book("8131721019","Compilers Principles", Arrays.asList("D. Jeffrey Ulman","Ravi Sethi", "Alfred V. Aho", "Monica S. Lam"), "Pearson Education Singapore Pte Ltd", 2008,1009,"BOOK_IMAGE");
Lam"), "Pearson Education Singapore Pte Ltd", 2008,1009,"BOOK_IMAGE"); book2 = new Book("9788183331630","Let Us C 13th Edition", Arrays.asList("Yashavant Kanetkar"),"BPB PUBLICATIONS", 2012,675,"BOOK_IMAGE"); when(mockedBookDAL.getAllBooks()).thenReturn(Arrays.asList(book1, book2)); when(mockedBookDAL.getBook("8131721019")).thenReturn(book1); when(mockedBookDAL.addBook(book1)).thenReturn(book1.getIsbn()); when(mockedBookDAL.updateBook(book1)).thenReturn(book1.getIsbn()); } @Test public void testGetAllBooks() throws Exception { List allBooks = mockedBookDAL.getAllBooks(); assertEquals(2, allBooks.size()); Book myBook = allBooks.get(0); assertEquals("8131721019", myBook.getIsbn()); assertEquals("Compilers Principles", myBook.getTitle()); assertEquals(4, myBook.getAuthors().size()); assertEquals((Integer)2008, myBook.getYearOfPublication()); assertEquals((Integer) 1009, myBook.getNumberOfPages()); assertEquals("Pearson Education Singapore Pte Ltd", myBook.getPublication()); assertEquals("BOOK_IMAGE", myBook.getImage()); } @Test public void testGetBook(){ String isbn = "8131721019"; Book myBook = mockedBookDAL.getBook(isbn); assertNotNull(myBook); assertEquals(isbn, myBook.getIsbn()); assertEquals("Compilers Principles", myBook.getTitle()); assertEquals(4, myBook.getAuthors().size()); assertEquals("Pearson Education Singapore Pte Ltd", myBook.getPublication()); assertEquals((Integer)2008, myBook.getYearOfPublication()); assertEquals((Integer)1009, myBook.getNumberOfPages()); } @Test public void testAddBook(){ String isbn = mockedBookDAL.addBook(book1); assertNotNull(isbn); assertEquals(book1.getIsbn(), isbn); } @Test public void testUpdateBook(){ String isbn = mockedBookDAL.updateBook(book1); assertNotNull(isbn); assertEquals(book1.getIsbn(), isbn); } } One can run the test by using maven command: mvn test. The output is: ------------------------------------------------------- T E S T S ------------------------------------------------------- Running info.sanaulla.AppTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec Running info.sanaulla.dal.BookDALTest Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec Results : Tests run: 5, Failures: 0, Errors: 0, Skipped: 0 So we have been able to test the DAL class without actually configuring the data source by using mocks.
February 26, 2014
by Mohamed Sanaulla
· 233,099 Views · 18 Likes
Developing Java EE applications with Maven and WebLogic 12c
WebLogic Server 12c has very nice support for Maven now. The doc for this is kinda hidden though, so here is a direct link: http://docs.oracle.com/middleware/1212/core/MAVEN To summarize the doc, Oracle does not provide public Maven repository hosting for their server artifacts. However, they now provide a tool for you to create and populate your own. You can either set up your local repository (if you are working mostly on your own on a single computer), or you may deploy the artifacts into your own internal Maven repository manager such as Archiva or Nexus. Here I will show how the local repository approach is done. The first step is to use a Maven plugin provided by WLS to populate the repository. I am using Mac OS X for this demo and my WLS is installed in $HOME/apps/wls12120. If you are on Windows, you may install it under C:/apps/wls12120. $ cd $HOME/apps/wls12120/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.1.2/ $ mvn install:install-file -DpomFile=oracle-maven-sync.12.1.2.pom -Dfile=oracle-maven-sync.12.1.2.jar $ mvn com.oracle.maven:oracle-maven-sync:push -Doracle-maven-sync.oracleHome=$HOME/apps/wls12120 -Doracle-maven-sync.testingOnly=false The artifacts are placed under your local $HOME/.m2/repository/com/oracle. Now you may use Maven to build Java EE applications with these WebLogic artifacts as dependencies. Not only are these available, the push also populated some additional Maven plugins that make development easier. For example, you can generate a template project using their archetype plugin. $ cd $HOME $ mvn archetype:generate \ -DarchetypeGroupId=com.oracle.weblogic.archetype \ -DarchetypeArtifactId=basic-webapp \ -DarchetypeVersion=12.1.2-0-0 \ -DgroupId=org.mycompany \ -DartifactId=my-basic-webapp-project \ -Dversion=1.0-SNAPSHOT Type 'Y' to confirm and finish. Notice the pom.xml it generated; it is using the "javax:javaee-web-api:6.0:provided" dependency. This works because we set up the repository earlier. Now you may build it. $ cd my-basic-webapp-project $ mvn package After this build you should have the war file under the target directory. You may manually copy and deploy this into your WebLogic server domain, or you may continue and configure the Maven pom to do it all with Maven. Here is how I do it. Edit the my-basic-webapp-project/pom.xml file and replace the weblogic-maven-plugin plugin like this: com.oracle.weblogic weblogic-maven-plugin 12.1.2-0-0 ${oracleMiddlewareHome} ${oracleServerUrl} ${oracleUsername} ${oraclePassword} ${project.build.directory}/${project.build.finalName}.${project.packaging} ${oracleServerName} true ${project.build.finalName} With this change, you may deploy the webapp into the WebLogic server (well, assuming you have already started your "mydomain" domain with the "myserver" server running locally; see my previous blog for instructions). $ cd my-basic-webapp-project $ mvn weblogic:deploy -DoracleMiddlewareHome=$HOME/apps/wls12120 -DoracleServerName=myserver -DoracleUsername=admin -DoraclePassword=admin123 After the "BUILD SUCCESS" message, you may visit the http://localhost:7001/basicWebapp URL. Revisit the WLS doc and you will find that they also provide other project templates (Maven calls these archetypes) for building EJB, MDB, or WebService projects. These should help you get your EE projects started quickly.
February 26, 2014
by Zemian Deng
· 19,549 Views · 1 Like
How to Estimate Memory Consumption
This story goes back at least a decade, when I was first approached by a PHB with the question "How big servers are we going to need to buy for our production deployment?". The new and shiny system we had been building was nine months from production rollout and apparently the company had promised to deliver the whole solution, including hardware. Oh boy, was I in trouble. With just a few years of experience under my belt, I could have pretty much just rolled a die. Even though I am sure my complete lack of confidence was clearly visible, I still had to come up with the answer. Four hours of googling later I recall sitting there with the same question still hovering in front of my bedazzled face: "How to estimate the need for computing power?" In this post I start to open up the subject by giving you rough guidelines on how to estimate memory requirements for your brand new Java application. For the impatient ones – the answer will be to start with memory equal to approximately 5 x [amount of memory consumed by Live Data] and start the fine-tuning from there. For the ones more curious about the logic behind it, stay with me and I will walk you through the reasoning. First and foremost, I can only recommend that you avoid answering a question phrased like this without detailed information being available. Your answer has to be based upon the performance requirements, so do not even start without clarifying those first. And I do not mean the way-too-ambiguous "The system needs to support 700 concurrent users", but a lot more specific ones about latency and throughput, taking into account the amount of data and the usage patterns. One should not forget about the budget either – we can all dream about sub-millisecond latencies, but for those without HFT banking backbone budgets it will unfortunately remain only a dream. For now, let's assume you have those requirements in place. The next stop would be to create the load test scripts emulating user behaviour. If you are now able to launch those scripts concurrently you have built a foundation for the answer. As you might also have guessed, the next step involves our usual advice of measuring, not guessing. But with a caveat. Live Data Size Namely, our quest for the optimal memory configuration requires capturing the Live Data Size. Having captured this, we have the baseline configuration in place for the fine-tuning. How does one define live data size? Charlie Hunt and Binu John in their "Java Performance" book have given it the following definition: Live data size is the heap size consumed by the set of long-lived objects required to run the application in its steady state. Equipped with the definition, we are ready to run the load tests against the application with GC logging turned on (-XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log -XX:+PrintGCDetails) and visualize the logs (with the help of gcviewer for example) to determine the moment when the application has reached the steady state. What you are after looks similar to the following: We can see the GC doing its job with both minor and Full GC runs in a familiar double-saw-toothed graphic. This particular application seems to have achieved a steady state already after the first full GC run on the 21st second. In most cases however, it takes 10-20 Full GC runs to spot the change in trends. After four full GC runs we can estimate that the Live Data Size is equal to approximately 100MB.
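If you want a quick sanity check of the figure read from the GC logs, the used heap right after a forced full collection approximates the live data size. A rough sketch of such a hook (the method name is illustrative, and System.gc() is only a hint the JVM may ignore, so treat the result as an estimate complementing, not replacing, the GC log analysis):

// call this from inside the application once it has reached its steady state
public static long approximateLiveDataSizeInMb() {
    Runtime runtime = Runtime.getRuntime();
    System.gc(); // request a full collection; the JVM is free to ignore this
    long usedBytes = runtime.totalMemory() - runtime.freeMemory();
    return usedBytes / (1024 * 1024);
}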
The aforementioned Java Performance book indicates that there is a strong correlation between the Live Data Size and the optimal memory configuration parameters in a typical Java EE application. The evidence from the field is also backing up their recommendation: Set the maximum heap size to 3-4 x [Live Data Size] So, for our application at hand, we should set the -Xmx to be between 300m and 400m for the initial performance tests and take it from there. We have mixed feelings about the other recommendations given in the book, namely setting the maximum permanent generation size to 1.2-1.5 x [Live Data Size of the Permanent Generation] and -XX:NewRatio to 1-1.5 x [Live Data Size]. We are currently gathering more data to determine whether the positive correlation exists, but until then I recommend basing your survivor and eden configuration decisions on monitoring your allocation rate instead. Why should one bother, you might now ask. Indeed, two reasons for not caring immediately surface: an 8G memory chip is in sub-$100 territory at the time of writing this article, and virtualization, especially when using large providers such as Amazon AWS, makes adjusting the capacity easy. Both of these reasons are partially valid and have definitely reduced the need for provisioning to be precise. But both of them still put you in the danger zone. When tossing in huge amounts of memory "just in case", you are most likely going to significantly affect the latency – with heaps above 8G it is darn easy to introduce Full GC pauses spanning tens of seconds. And when over-provisioning with the mindset of "let's tune it later", the "later" part has a tendency of never arriving. I have faced numerous applications running on vastly over-provisioned environments just because of this. For example, the aforementioned application, which I discovered running on an Amazon EC2 m1.xlarge instance, was costing the company $4,200 per instance per year. Converting it to m1.small reduced the bill to just $520 per instance. An 8-fold cost reduction will be visible in your operations budget if your deployments are large, trust me on this. Summary Unfortunately I still see way too many decisions made exactly like I was forced to make a decade ago. This leads to under- and over-planning of capacity, both of which can be equally poor choices, especially if you cannot enjoy the benefits of virtualization. I got lucky with mine, but you might not get away with your guesstimate, so I can only recommend actually planning ahead using the simple framework described in this post. If you enjoyed the content, I can only recommend following our performance tuning advice on Twitter.
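In the spirit of measuring rather than guessing, the JVM also exposes per-collector statistics at runtime through JMX, which is handy when you cannot easily ship GC logs around. A minimal sketch (the class name is illustrative):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // print, per collector, how many collections ran and how long they took in total
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}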
February 25, 2014
by Nikita Salnikov-Tarnovski
· 11,205 Views
AngularJS IndexedDB Demo
Over the past few months I've had a series of articles (part 1, part 2, part 3) discussing IndexedDB. In the last article I built a full, if rather simple, application that let you write notes. (I'm a sucker for note taking applications.) When I built the application, I intentionally did not use a framework. I tried to write nice, clear code of course, but I wanted to avoid anything that wasn't 100% necessary to demonstrate the application and IndexedDB. In the perspective of an article, I think this was the right decision to make. I wanted my readers to focus on the feature and not anything else. But I thought this would be an excellent opportunity to try AngularJS again. For the most part, this conversion worked perfectly. This may sound lame, but I found myself grinning as I built this application. I'm a firm believer that if something makes you happy then it is probably good for you. ;) I still find myself a bit... not confused... but slowed down by the module system and dependency injection. These are both things I grasp in general, but in AngularJS they feel a bit awkward to me. It feels like something I'll never be able to code from memory, but will need to reference older applications to remind me. I'm not saying they are wrong of course, they just don't feel natural to me yet. On the flip side, the binding support is incredible. I love working with HTML templates and $scope. It feels incredibly powerful. Heck, being able to add an input field and use it as a filter in approximately 30 seconds was mind blowing. One issue I ran into, and I'm not convinced I created the best solution for, was the async nature of IndexedDB's database open logic. AngularJS has a promises library built in and it works incredibly well for my application in general. But I needed the entire application to be bootstrapped to an async call for database startup. I got around that with two things that felt a bit like a hack. First, my home view (get all notes) ran a call to an init function to ensure the db was already open. So consider this init(): function init() { var deferred = $q.defer(); if(setup) { deferred.resolve(true); return deferred.promise; } var openRequest = window.indexedDB.open("indexeddb_angular", 1); openRequest.onerror = function(e) { console.log("error opening db"); console.dir(e); deferred.reject(e.toString()); }; openRequest.onupgradeneeded = function(e) { var thisDB = e.target.result; var objectStore; //create note os if(!thisDB.objectStoreNames.contains("note")) { objectStore = thisDB.createObjectStore("note", { keyPath: "id", autoIncrement: true }); objectStore.createIndex("titlelc", "titlelc", { unique: false }); objectStore.createIndex("tags", "tags", { unique: false, multiEntry: true }); } }; openRequest.onsuccess = function(e) { db = e.target.result; db.onerror = function(event) { // generic error handler for all errors targeted at this database's requests! deferred.reject("Database error: " + event.target.errorCode); }; setup = true; deferred.resolve(true); }; return deferred.promise; } This logic is similar to what I had in the non-framework app, but I've made use of promises and a flag to remember when I've already opened the database. This lets me then tie into init() in my getNotes logic.
function getNotes() { var deferred = $q.defer(); init().then(function() { var result = []; var handleResult = function(event) { var cursor = event.target.result; if (cursor) { result.push({key: cursor.key, title: cursor.value.title, updated: cursor.value.updated}); cursor.continue(); } }; var transaction = db.transaction(["note"], "readonly"); var objectStore = transaction.objectStore("note"); objectStore.openCursor().onsuccess = handleResult; transaction.oncomplete = function(event) { deferred.resolve(result); }; }); return deferred.promise; } All of this worked OK - but I ran into an issue on the other pages of my application. If, for example, you bookmarked the edit link for a note, you would run into an error. I could have applied the same fix in my service layer (run init first), but it just felt wrong. So instead I did this in my app.js: $rootScope.$on("$routeChangeStart", function(event, currentRoute, previousRoute){ if(!persistanceService.ready() && $location.path() != '/home') { $location.path('/home'); }; }); The ready call was simply a wrapper around the flag variable. So yeah, this worked for me, but I still think there is (probably) a nicer solution. Anyway, if you want to check it out, just hit the demo link below. I want to give a shoutout to Sharon Diorio for giving me a lot of help/tips/support while I built this app. P.S. I assume this is obvious, but I'm not really offering this up as a "best practices" AngularJS application. I assume I could have done about every part better. ;)
February 25, 2014
by Raymond Camden
· 17,910 Views