The Latest Java Topics

Adding SLF4J to Your Maven Project
Learn how to add SLF4J, or Simple Logging Facade for Java, to your Maven project in this tutorial.
September 28, 2011
by Roger Hughes
· 260,917 Views · 4 Likes
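The linked tutorial covers the details; as a minimal hedged sketch, adding SLF4J comes down to one API dependency (plus a binding such as slf4j-log4j12 at runtime) and a logger per class. The coordinates below are standard, the version is illustrative, and the Example class is hypothetical:

    // Assumed Maven dependency (version is illustrative):
    // <dependency>
    //   <groupId>org.slf4j</groupId>
    //   <artifactId>slf4j-api</artifactId>
    //   <version>1.6.2</version>
    // </dependency>
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Example {
        private static final Logger LOG = LoggerFactory.getLogger(Example.class);

        public static void main(String[] args) {
            // Parameterized messages avoid string concatenation when the level is disabled
            LOG.info("SLF4J wired into the build, answer={}", 42);
        }
    }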
Hibernate by Example - Part 1 (Orphan removal)
I thought I'd do a series of Hibernate examples showing various features of Hibernate. In this first part I want to demonstrate the delete-orphan feature, with the help of a story line. So let us begin. :)

Prerequisites: In order to try out the following example, you will need the JAR files listed below:

    org.springframework.aop-3.0.6.RELEASE.jar
    org.springframework.asm-3.0.6.RELEASE.jar
    org.springframework.aspects-3.0.6.RELEASE.jar
    org.springframework.beans-3.0.6.RELEASE.jar
    org.springframework.context.support-3.0.6.RELEASE.jar
    org.springframework.context-3.0.6.RELEASE.jar
    org.springframework.core-3.0.6.RELEASE.jar
    org.springframework.jdbc-3.0.6.RELEASE.jar
    org.springframework.orm-3.0.6.RELEASE.jar
    org.springframework.transaction-3.0.6.RELEASE.jar
    org.springframework.expression-3.0.6.RELEASE.jar
    commons-logging-1.0.4.jar
    log4j.jar
    aopalliance-1.0.jar
    dom4j-1.1.jar
    hibernate-commons-annotations-3.2.0.Final.jar
    hibernate-core-3.6.4.Final.jar
    hibernate-jpa-2.0-api-1.0.0.Final.jar
    javax.persistence-2.0.0.jar
    jta-1.1.jar
    javassist-3.1.jar
    slf4j-api-1.6.2.jar
    mysql-connector-java-5.1.13-bin.jar
    commons-collections-3.0.jar

For anyone who wants the Eclipse project to try this out, you can download it with the above-mentioned JAR dependencies here.

Introduction: It's the year 2011, and the Justice League has grown out of proportion and is searching for a developer to help create a superhero registration system. A developer competent in Hibernate and ORM is ready to build the system and handle the persistence layer using Hibernate. For simplicity, he will use a simple standalone application to persist superheroes. This is how the example is laid out:

- Table design
- Domain classes and Hibernate mappings
- DAO and service classes
- Spring configuration for the application
- A simple main class to show how it all works

Let the journey begin...

Table Design: The design consists of three simple tables, as illustrated by the diagram below. As you can see, it is a simple one-to-many relationship linked by a join table. The join table will be used by Hibernate to fill the superhero list held in the domain class, which we will go on to see next.

Domain Classes and Hibernate Mappings: There are only two domain classes, as the join table is linked to the primary owning entity, which is the Justice League entity.
So let us go on to see how the domain classes are constructed with annotations:

    package com.justice.league.domain;

    import java.io.Serializable;

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    import org.hibernate.annotations.Type;

    @Entity
    @Table(name = "SuperHero")
    public class SuperHero implements Serializable {

        private static final long serialVersionUID = -6712720661371583351L;

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "super_hero_id")
        private Long superHeroId;

        @Column(name = "super_hero_name")
        private String name;

        @Column(name = "power_description")
        private String powerDescription;

        @Type(type = "yes_no")
        @Column(name = "isAwesome")
        private boolean isAwesome;

        public Long getSuperHeroId() { return superHeroId; }
        public void setSuperHeroId(Long superHeroId) { this.superHeroId = superHeroId; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getPowerDescription() { return powerDescription; }
        public void setPowerDescription(String powerDescription) { this.powerDescription = powerDescription; }

        public boolean isAwesome() { return isAwesome; }
        public void setAwesome(boolean isAwesome) { this.isAwesome = isAwesome; }
    }

As I am using MySQL as the primary database, I have used the GeneratedValue strategy GenerationType.AUTO, which auto-increments the ID whenever a new superhero is created. All the other mappings should be familiar, with the exception of the last field, where we map a boolean to a CHAR column in the database. We use Hibernate's @Type annotation to represent true and false as Y and N in the database field. Hibernate has many @Type implementations, which you can read about here; in this instance we have used the yes_no type. OK, now that we have our class to represent the superheroes, let's see what our Justice League domain class looks like; it keeps tabs on all superheroes who have pledged allegiance to the League.
    package com.justice.league.domain;

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    import javax.persistence.CascadeType;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.JoinTable;
    import javax.persistence.OneToMany;
    import javax.persistence.Table;

    @Entity
    @Table(name = "JusticeLeague")
    public class JusticeLeague implements Serializable {

        private static final long serialVersionUID = 763500275393020111L;

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "justice_league_id")
        private Long justiceLeagueId;

        @Column(name = "justice_league_moto")
        private String justiceLeagueMoto;

        @Column(name = "number_of_members")
        private Integer numberOfMembers;

        @OneToMany(cascade = { CascadeType.ALL }, fetch = FetchType.EAGER, orphanRemoval = true)
        @JoinTable(name = "JUSTICE_LEAGUE_SUPER_HERO",
                joinColumns = { @JoinColumn(name = "justice_league_id") },
                inverseJoinColumns = { @JoinColumn(name = "super_hero_id") })
        private List<SuperHero> superHeroList = new ArrayList<SuperHero>(0);

        public Long getJusticeLeagueId() { return justiceLeagueId; }
        public void setJusticeLeagueId(Long justiceLeagueId) { this.justiceLeagueId = justiceLeagueId; }

        public String getJusticeLeagueMoto() { return justiceLeagueMoto; }
        public void setJusticeLeagueMoto(String justiceLeagueMoto) { this.justiceLeagueMoto = justiceLeagueMoto; }

        public Integer getNumberOfMembers() { return numberOfMembers; }
        public void setNumberOfMembers(Integer numberOfMembers) { this.numberOfMembers = numberOfMembers; }

        public List<SuperHero> getSuperHeroList() { return superHeroList; }
        public void setSuperHeroList(List<SuperHero> superHeroList) { this.superHeroList = superHeroList; }
    }

The important thing to note here is the annotation @OneToMany(cascade = { CascadeType.ALL }, fetch = FetchType.EAGER, orphanRemoval = true), where we have set orphanRemoval = true. So what does that do, exactly? Say you have a group of superheroes in your League, and one superhero goes haywire, so we need to remove him or her from the League. With JPA cascading alone this is not possible, as cascades do not detect orphan records: the superhero you removed from the collection would remain in the database even though the collection no longer references it. Prior to JPA 2.0 there was no orphanRemoval support, and the only way to delete orphan records was the following Hibernate-specific (or ORM-specific) annotation, which is now deprecated:

    @org.hibernate.annotations.Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)

With the introduction of the orphanRemoval attribute, we can now handle the deletion of orphan records through JPA. Now that we have our domain classes, on to the...

DAO and Service Classes: To keep with good design standards, I have separated the DAO (data access object) layer from the service layer. So let us see the DAO interface and implementation. Note that I have used HibernateTemplate through HibernateDaoSupport so as to keep Hibernate-specific details out and access everything in a unified manner through Spring.
    package com.justice.league.dao;

    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    import com.justice.league.domain.JusticeLeague;

    @Transactional(propagation = Propagation.REQUIRED, readOnly = false)
    public interface JusticeLeagueDAO {

        public void createOrUpdateJuticeLeagure(JusticeLeague league);

        public JusticeLeague retrieveJusticeLeagueById(Long id);
    }

At the interface level I have defined transaction handling as REQUIRED. In most situations you will need a transaction, with the exception of data-retrieval methods, so whenever a specific method does not need one you can override this at the method level. According to the JPA spec, you need a valid transaction for insert/delete/update operations. So let's take a look at the DAO implementation:

    package com.justice.league.dao.hibernate;

    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.orm.hibernate3.support.HibernateDaoSupport;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    import com.justice.league.dao.JusticeLeagueDAO;
    import com.justice.league.domain.JusticeLeague;

    @Qualifier(value = "justiceLeagueHibernateDAO")
    public class JusticeLeagueHibernateDAOImpl extends HibernateDaoSupport
            implements JusticeLeagueDAO {

        @Override
        public void createOrUpdateJuticeLeagure(JusticeLeague league) {
            if (league.getJusticeLeagueId() == null) {
                getHibernateTemplate().persist(league);
            } else {
                getHibernateTemplate().update(league);
            }
        }

        @Transactional(propagation = Propagation.NOT_SUPPORTED, readOnly = false)
        public JusticeLeague retrieveJusticeLeagueById(Long id) {
            return getHibernateTemplate().get(JusticeLeague.class, id);
        }
    }

Here I have defined a @Qualifier to let Spring know that this is the Hibernate implementation of the DAO. Note the package name, which ends with hibernate. As I see it, this is a good design convention to follow: separate your implementation(s) into their own packages to keep the design clean. Now let's move on to the service layer. The service layer in this instance just acts as a mediation layer that calls the DAO methods, but in a real-world application you would probably also handle validation, security-related procedures, etc. within it.

    package com.justice.league.service;

    import com.justice.league.domain.JusticeLeague;

    public interface JusticeLeagureService {

        public void handleJusticeLeagureCreateUpdate(JusticeLeague justiceLeague);

        public JusticeLeague retrieveJusticeLeagueById(Long id);
    }

    package com.justice.league.service.impl;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.stereotype.Component;

    import com.justice.league.dao.JusticeLeagueDAO;
    import com.justice.league.domain.JusticeLeague;
    import com.justice.league.service.JusticeLeagureService;

    @Component("justiceLeagueService")
    public class JusticeLeagureServiceImpl implements JusticeLeagureService {

        @Autowired
        @Qualifier(value = "justiceLeagueHibernateDAO")
        private JusticeLeagueDAO justiceLeagueDAO;

        @Override
        public void handleJusticeLeagureCreateUpdate(JusticeLeague justiceLeague) {
            justiceLeagueDAO.createOrUpdateJuticeLeagure(justiceLeague);
        }

        public JusticeLeague retrieveJusticeLeagueById(Long id) {
            return justiceLeagueDAO.retrieveJusticeLeagueById(id);
        }
    }

A few things to note here.
First of all, the @Component annotation binds this service implementation to the name justiceLeagueService within the Spring context, so that we can refer to the bean by that id. We have also autowired the JusticeLeagueDAO and defined a @Qualifier so that it is bound to the Hibernate implementation; the value of the qualifier must match the class-level @Qualifier in the DAO implementation class. Lastly, let us look at the Spring configuration that wires all of this together.

Spring Configuration for the Application: The configuration wires the data source, the Hibernate session factory, and the transaction manager together. The key settings are:

    packagesToScan:      com.justice.league.**.*
    Hibernate dialect:   org.hibernate.dialect.MySQLDialect
    JDBC driver:         com.mysql.jdbc.Driver
    JDBC URL:            jdbc:mysql://localhost:3306/my_test
    username / password: root / password
    show_sql:            true

Note that I have used the HibernateTransactionManager in this instance, as I am running the example standalone; if you run it within an application server you will almost always use a JTA transaction manager. I have also let Hibernate create the tables automatically, for simplicity. The packagesToScan property instructs Spring to scan all sub-packages (including packages nested within them) under the root package com.justice.league for @Entity-annotated classes. We have also wired the session factory into the justiceLeagueDAO so that we can work with the HibernateTemplate. For testing purposes you can set the schema-generation property to create initially and let Hibernate create the tables for you.

Now that we have seen the building blocks of the application, let's see how it all works by first creating some superheroes within the Justice League.

A Simple Main Class to Show How It All Works: As the first example, let's persist the Justice League with a couple of superheroes:

    package com.test;

    import java.util.ArrayList;
    import java.util.List;

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    import com.justice.league.domain.JusticeLeague;
    import com.justice.league.domain.SuperHero;
    import com.justice.league.service.JusticeLeagureService;

    public class TestSpring {

        public static void main(String[] args) {
            ApplicationContext ctx = new ClassPathXmlApplicationContext("spring-context.xml");
            JusticeLeagureService service =
                    (JusticeLeagureService) ctx.getBean("justiceLeagueService");

            JusticeLeague league = new JusticeLeague();
            List<SuperHero> superHeroList = getSuperHeroList();
            league.setSuperHeroList(superHeroList);
            league.setJusticeLeagueMoto("Guardians of the Galaxy");
            league.setNumberOfMembers(superHeroList.size());
            service.handleJusticeLeagureCreateUpdate(league);
        }

        private static List<SuperHero> getSuperHeroList() {
            List<SuperHero> superHeroList = new ArrayList<SuperHero>();

            SuperHero superMan = new SuperHero();
            superMan.setAwesome(true);
            superMan.setName("Clark Kent");
            superMan.setPowerDescription("Faster than a speeding bullet");
            superHeroList.add(superMan);

            SuperHero batMan = new SuperHero();
            batMan.setAwesome(true);
            batMan.setName("Bruce Wayne");
            batMan.setPowerDescription("I just have some cool gadgets");
            superHeroList.add(batMan);

            return superHeroList;
        }
    }

And if we go to the database and check, we will see the following output:

    mysql> select * from superhero;
    +---------------+-----------+-----------------+-------------------------------+
    | super_hero_id | isAwesome | super_hero_name | power_description             |
    +---------------+-----------+-----------------+-------------------------------+
    |             1 | Y         | Clark Kent      | Faster than a speeding bullet |
    |             2 | Y         | Bruce Wayne     | I just have some cool gadgets |
    +---------------+-----------+-----------------+-------------------------------+

    mysql> select * from justiceleague;
    +-------------------+-------------------------+-------------------+
    | justice_league_id | justice_league_moto     | number_of_members |
    +-------------------+-------------------------+-------------------+
    |                 1 | Guardians of the Galaxy |                 2 |
    +-------------------+-------------------------+-------------------+

So, as you can see, we have persisted two superheroes and linked them to the Justice League. Now let us see how delete-orphan works, with the example below:

    package com.test;

    import java.util.List;

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    import com.justice.league.domain.JusticeLeague;
    import com.justice.league.domain.SuperHero;
    import com.justice.league.service.JusticeLeagureService;

    public class TestSpring {

        public static void main(String[] args) {
            ApplicationContext ctx = new ClassPathXmlApplicationContext("spring-context.xml");
            JusticeLeagureService service =
                    (JusticeLeagureService) ctx.getBean("justiceLeagueService");

            JusticeLeague league = service.retrieveJusticeLeagueById(1L);
            List<SuperHero> superHeroList = league.getSuperHeroList();

            /**
             * Here we remove Batman (a.k.a. Bruce Wayne) from the Justice League
             * 'cos he ain't cool no more.
             */
            for (int i = 0; i < superHeroList.size(); i++) {
                SuperHero superHero = superHeroList.get(i);
                if (superHero.getName().equalsIgnoreCase("Bruce Wayne")) {
                    superHeroList.remove(i);
                    break;
                }
            }
            service.handleJusticeLeagureCreateUpdate(league);
        }
    }

Here we first retrieve the Justice League record by its primary key, then loop through, remove Batman from the League, and again call the create-or-update method. Because we have orphan removal defined, any superhero in the database that is no longer in the list will be deleted. If we query the database again, we will see that Batman has now been removed:

    mysql> select * from superhero;
    +---------------+-----------+-----------------+-------------------------------+
    | super_hero_id | isAwesome | super_hero_name | power_description             |
    +---------------+-----------+-----------------+-------------------------------+
    |             1 | Y         | Clark Kent      | Faster than a speeding bullet |
    +---------------+-----------+-----------------+-------------------------------+

So that's it: the story of how the Justice League used Hibernate to remove Batman automatically, without being bothered to do it themselves. Next up, look forward to how Captain America used Hibernate Criteria to build flexible queries in order to locate possible enemies. Watch out!

Have a great day, and thank you for reading! If you have any suggestions or comments, please do leave them below.

From http://dinukaroshan.blogspot.com/2011/09/hibernate-by-example-part-1-orphan.html
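As a complement, here is a minimal hedged sketch of the same orphan-removal behaviour using plain JPA, without the Spring scaffolding. The "justice-league" persistence-unit name is hypothetical; the entity classes are the ones from the article:

    import java.util.Iterator;

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import com.justice.league.domain.JusticeLeague;
    import com.justice.league.domain.SuperHero;

    public class OrphanRemovalSketch {
        public static void main(String[] args) {
            // Assumes a "justice-league" persistence unit defined in persistence.xml
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("justice-league");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();

            JusticeLeague league = em.find(JusticeLeague.class, 1L);
            // Removing the entity from the collection is all it takes: with
            // orphanRemoval = true, the orphaned SuperHero row is deleted on commit.
            for (Iterator<SuperHero> it = league.getSuperHeroList().iterator(); it.hasNext();) {
                if ("Bruce Wayne".equalsIgnoreCase(it.next().getName())) {
                    it.remove();
                    break;
                }
            }

            em.getTransaction().commit();
            em.close();
            emf.close();
        }
    }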
September 24, 2011
by Dinuka Arseculeratne
· 47,385 Views
Lucene and Solr's CheckIndex to the Rescue!
While using Lucene and Solr we are used to very high reliability. However, there may come a day when Solr informs us that our index is corrupted and we need to do something about it. Is the only way to repair the index to restore it from a backup, or to do a full re-indexation? No – there is hope, in the form of the CheckIndex tool.

What is CheckIndex? CheckIndex is a tool available in the Lucene library which checks the index files and creates new segments that do not contain problematic entries. This means that this tool is able to repair a broken index with little loss of data, and thus save us from having to restore the index from a backup (if we even have one) or do a full indexing of all the documents that were stored in Solr.

Where do I start? Please note that, according to the Javadocs, this tool is experimental and may change in the future; before starting to work with it, we should create a copy of the index. In addition, it is worth knowing that the tool analyzes the index byte by byte, so for large indexes the analysis and repair may take a long time. It is important not to run the tool with the -fix option while the index is in use by Solr or another application based on the Lucene library. Finally, be aware that launching the tool in repair mode may result in the removal of some or all of the documents stored in the index.

How to run it: To run the utility, go to the directory where the Lucene library files are located and run the following command:

    java -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex index_path -fix

In my case, it looked as follows:

    java -cp lucene-core-2.9.3.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex e:\solr\solr\data\index\ -fix

After a while I got the following information:

    Opening index @ e:\solr\solr\data\index

    Segments file=segments_2 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene 2.9]
      1 of 1: name=_0 docCount=19
        compound=false
        hasProx=true
        numFiles=11
        size (MB)=0,018
        diagnostics = {os.version=6.1, os=Windows 7, lucene.version=2.9.3 951790 -
          2010-06-06 01:30:55, source=flush, os.arch=x86, java.version=1.6.0_23,
          java.vendor=Sun Microsystems Inc.}
        no deletions
        test: open reader.........OK
        test: fields..............OK [15 fields]
        test: field norms.........OK [15 fields]
        test: terms, freq, prox...OK [900 terms; 1517 terms/docs pairs; 1707 tokens]
        test: stored fields.......OK [232 total field count; avg 12,211 fields per doc]
        test: term vectors........OK [3 total vector count; avg 0,158 term/freq vector fields per doc]

    No problems were detected with this index.

It means that the index is correct and there was no need for any corrective action. Additionally, you can learn some interesting things about the index.

Broken index: But what happens in the case of a broken index? There is only one way to see – let's try. So, I broke one of the index files and ran the CheckIndex tool. The following appeared on the console after I ran it:

    Opening index @ e:\solr\solr\data\index

    Segments file=segments_2 numSegments=1 version=FORMAT_DIAGNOSTICS [Lucene 2.9]
      1 of 1: name=_0 docCount=19
        compound=false
        hasProx=true
        numFiles=11
        size (MB)=0,018
        diagnostics = {os.version=6.1, os=Windows 7, lucene.version=2.9.3 951790 -
          2010-06-06 01:30:55, source=flush, os.arch=x86, java.version=1.6.0_23,
          java.vendor=Sun Microsystems Inc.}
        no deletions
        test: open reader.........FAILED
        WARNING: fixIndex() would remove reference to this segment; full exception:
        org.apache.lucene.index.CorruptIndexException: did not read all bytes from file "_0.fnm": read 150 vs size 152
            at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:370)
            at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
            at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
            at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
            at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
            at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
            at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

    WARNING: 1 broken segments (containing 19 documents) detected
    WARNING: 19 documents will be lost

    NOTE: will write new segments file in 5 seconds; this will remove 19 docs from
    the index. THIS IS YOUR LAST CHANCE TO CTRL+C!
      5...
      4...
      3...
      2...
      1...
    Writing...
    OK
    Wrote new segments file "segments_3"

As you can see, all 19 documents that were in the index have been removed. This is an extreme case, but you should realize that this tool can behave like this.

The end: If you keep in mind the basic assumptions associated with the use of the CheckIndex tool, you may find yourself in a situation where it comes in handy and you will not have to ask yourself the question, "When was the last backup made?"
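If you prefer to drive the check from code rather than the command line, the same class can be used programmatically. A minimal hedged sketch against the Lucene 2.9/3.x API (the index path is illustrative; as with -fix, run it only on a copy, never on a live index):

    import java.io.File;

    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.FSDirectory;

    public class CheckIndexSketch {
        public static void main(String[] args) throws Exception {
            // Point this at a *copy* of your index directory
            FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
            CheckIndex checker = new CheckIndex(dir);
            CheckIndex.Status status = checker.checkIndex();
            if (!status.clean) {
                // Destructive: drops broken segments, losing the documents they contain
                checker.fixIndex(status);
            }
            dir.close();
        }
    }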
September 22, 2011
by Rafał Kuć
· 21,302 Views · 1 Like
Deploying Applications to Weblogic Using Maven
When developing JEE applications, it's important, as part of the development cycle, to deploy to a development server before running end-to-end or integration tests. This blog demonstrates how to deploy your applications to your WebLogic server using Maven and, as it transpires, there are at least two ways of doing this.

The first way is to use the weblogic-maven-plugin available from Oracle; you can find an article on how to install it on the Oracle website. The unfortunate thing about this is that Oracle don't deploy their JARs to a Maven repository, which means that you have to do some jiggery-pokery setting up your local repository. The Oracle article defines three steps: creating a plug-in JAR using the wljarbuilder tool, extracting the pom.xml, and then installing the results to your local repository using mvn install:install-file. In order to use this plug-in, you'll need to add the following to your program's POM file (the values below are from the original snippet; the element names follow Oracle's documented plug-in configuration):

    <plugin>
      <groupId>com.oracle.weblogic</groupId>
      <artifactId>weblogic-maven-plugin</artifactId>
      <version>10.3.4</version>
      <configuration>
        <adminurl>t3://localhost:7001</adminurl>
        <user>weblogic</user>
        <password>your-password</password>
        <upload>true</upload>
        <action>deploy</action>
        <remote>false</remote>
        <verbose>true</verbose>
        <source>../marin-tips-ear/target/marin-tips.ear</source>
        <name>${project.build.finalName}</name>
      </configuration>
      <executions>
        <execution>
          <phase>install</phase>
          <goals>
            <goal>deploy</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

The second method of deploying an application to your WebLogic server is to use a good ol' Ant script. This method is simpler, as you don't need to bother creating plug-in JARs or installing them in your local repository. The POM file additions are:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.6</version>
      <executions>
        <execution>
          <id>deploy-to-server</id>
          <phase>pre-integration-test</phase>
          <configuration>
            <target>
              <!-- the wldeploy Ant task goes here; see the sketch below -->
            </target>
          </configuration>
          <goals>
            <goal>run</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

I prefer the old-fashioned Ant way, purely because it's less hassle and adheres to the Keep It Simple, Stupid rule of programming: unless Oracle make their JAR files available in some repository, the extra set-up steps aren't really an improvement.

From http://www.captaindebug.com/2011/09/deploying-applications-to-weblogic.html
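The body of the Ant target is not reproduced above; as a hedged sketch (not the author's original), a wldeploy-based target typically looks like this. weblogic.jar must be on the taskdef classpath, and the "AdminServer" target name is an assumption:

    <target>
      <taskdef name="wldeploy"
               classname="weblogic.ant.taskdefs.management.WLDeploy"/>
      <wldeploy action="deploy"
                name="${project.build.finalName}"
                source="../marin-tips-ear/target/marin-tips.ear"
                user="weblogic"
                password="your-password"
                adminurl="t3://localhost:7001"
                targets="AdminServer"/> <!-- assumed server name -->
    </target>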
September 22, 2011
by Roger Hughes
· 29,270 Views · 3 Likes
Creating OSGi projects using Eclipse IDE and Maven
If you want to create any of the project types listed below using the Eclipse IDE:

- OSGi Application Project
- OSGi Bundle Project
- OSGi Composite Bundle Project
- OSGi Fragment Project
- Blueprint File

you need to have the IBM Rational Development Tools for OSGi Applications installed.

Why do we need these tools? They let you:

- Create and edit OSGi bundles, composite bundles, bundle fragments, and applications.
- Import and export OSGi bundles, composite bundles, bundle fragments, and applications.
- Convert existing Java Enterprise Edition (Java EE) web, Java Persistence API (JPA), plug-in, or simple Java projects into OSGi bundles.
- Edit OSGi bundle, application, and composite bundle manifest files.
- Create and edit OSGi blueprint configuration files.
- Edit OSGi blueprint binding configuration files.
- Diagnose and fix problems in your bundles and applications using validation and quick fixes.

Eclipse Plugin Installation: Before you install the tools, you must have the Eclipse Helios v3.6.2 package, Eclipse IDE for Java EE Developers, installed.

1. Click on Help > Install New Software.
2. Point to this update site – http://public.dhe.ibm.com/ibmdl/export/pub/software/rational/OSGiAppTools – and click on Add.
3. You'll see a list of tools – OSGi Application Development Tools, OSGi Application Development Tools UI, OSGi context-sensitive help, OSGi Help documentation, Rational Development Tools for OSGi Applications help documentation. Select all that are listed (you can ignore the help items) and go ahead with the installation. As of this writing, the development tools' version is 1.0.3.

Maven Integration: If you want to manage any of these OSGi projects using Maven, you can right-click on the project and select Maven > Enable Dependency Management. You need Maven Integration for Eclipse (m2e) installed for this, which you can find in the Eclipse Marketplace. If you use Maven, you can also try the plugins provided by the Apache Felix project for bundling (building an OSGi bundle); a sketch follows below. More on these plugins @ http://felix.apache.org/site/apache-felix-maven-bundle-plugin-bnd.html, http://felix.apache.org/site/apache-felix-maven-osgi-plugin.html

References: http://www.ibm.com/developerworks/rational/downloads/10/rationaldevtoolsforosgiapplications.html#download

From http://singztechmusings.wordpress.com/2011/09/12/creating-osgi-projects-using-eclipse-ide-and-maven/
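As a minimal hedged sketch of the Felix maven-bundle-plugin mentioned above (the version and package names are illustrative, not from the original post), the project's packaging must be set to bundle and the plugin configured roughly like this:

    <!-- In the POM: <packaging>bundle</packaging> -->
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <version>2.3.7</version> <!-- illustrative version -->
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
          <Export-Package>com.example.osgi.api</Export-Package> <!-- hypothetical package -->
          <Import-Package>*</Import-Package>
        </instructions>
      </configuration>
    </plugin>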
September 21, 2011
by Singaram Subramanian
· 25,920 Views
Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy
The ability to inject pieces of code into compiled classes and methods, either statically or at runtime, may be of immense help. This applies especially to troubleshooting problems in third-party libraries without source code, or in an environment where it isn't possible to use a debugger or a profiler. Code injection is also useful for dealing with concerns that cut across the whole application, such as performance monitoring. Using code injection in this way became popular under the name Aspect-Oriented Programming (AOP). Code injection isn't something used only rarely, as you might think; quite the contrary: every programmer will come into a situation where this ability can prevent a lot of pain and frustration. This post is aimed at giving you the knowledge that you may (or, I should rather say, "will") need, and at persuading you that learning the basics of code injection is really worth the little of your time it takes. I'll present three different real-world cases where code injection came to my rescue, solving each one with a different tool, fitting best the constraints at hand.

Why You Are Going to Need It

A lot has already been said about the advantages of AOP – and thus code injection – so I will concentrate only on a few main points from the troubleshooting point of view. The coolest thing is that it enables you to modify third-party, closed-source classes and actually even JVM classes. Most of us work with legacy code and code for which we don't have the sources, and inevitably we occasionally hit the limitations or bugs of these third-party binaries and need very much to change some small thing in there, or to gain more insight into the code's behavior. Without code injection you have no way to modify the code or to add support for increased observability. You also often need to deal with issues or collect information in the production environment, where you can't use a debugger and similar tools, while you usually can at least somehow manage your application's binaries and dependencies. Consider the following situations:

- You're passing a collection of data to a closed-source library for processing, and one method in the library fails for one of the elements, but the exception provides no information about which element it was. You'd need to modify it to either log the offending argument or include it in the exception. (And you can't use a debugger because it only happens on the production application server.)
- You need to collect performance statistics of important methods in your application, including some of its closed-source components, under typical production load. (In production you of course cannot use a profiler, and you want to incur minimal overhead.)
- You use JDBC to send a lot of data to a database in batches, and one of the batch updates fails. You need some nice way to find out which batch it was and what data it contained.

I have in fact encountered these three cases (among others), and you will see possible implementations later. You should keep the following advantages of code injection in mind while reading this post:

- Code injection enables you to modify binary classes for which you don't have the source code.
- The injected code can be used to collect various runtime information in environments where you cannot use traditional development tools such as profilers and debuggers.
- Don't Repeat Yourself: when you need the same piece of logic in multiple places, you can define it once and inject it into all of them.
- With code injection you do not modify the original source files, so it is great for (possibly large-scale) changes that you need only for a limited period of time, especially with tools that make it possible to easily switch the code injection on and off (such as AspectJ with its load-time weaving). A typical case is performance-metrics collection and increased logging during troubleshooting.
- You can inject the code either statically, at build time, or dynamically, when the target classes are being loaded by the JVM.

Mini Glossary

You might encounter the following terms in relation to code injection and AOP:

- Advice – The code to be injected. Typically we talk about before, after, and around advices, which are executed before, after, or instead of a target method. It's possible to make other changes than injecting code into methods, e.g. adding fields or interfaces to a class.
- AOP (Aspect-Oriented Programming) – A programming paradigm claiming that "cross-cutting concerns" – logic needed in many places, without a single class where it could be implemented – should be implemented once and injected into those places. Check Wikipedia for a better description.
- Aspect – A unit of modularity in AOP; corresponds roughly to a class. It can contain different advices and pointcuts.
- Join point – A particular point in a program that might be the target of code injection, e.g. a method call or method entry.
- Pointcut – Roughly spoken, a pointcut is an expression which tells a code injection tool where to inject a particular piece of code, i.e. to which join points to apply a particular advice. It could select only a single such point – e.g. the execution of a single method – or many similar points – e.g. executions of all methods marked with a custom annotation such as @MyBusinessMethod.
- Weaving – The process of injecting the code – advices – into the target places – join points.

The Tools

There are many very different tools that can do the job, so we will first have a look at the differences between them, and then we will get acquainted with three prominent representatives of different evolutionary branches of code injection tools.

Basic Classification of Code Injection Tools

I. Level of Abstraction

How difficult is it to express the logic to be injected and the pointcuts where it should be inserted?

Regarding the advice code:

- Direct bytecode manipulation (e.g. ASM) – to use these tools you need to understand the bytecode format of a class, because they abstract very little away from it; you work directly with opcodes, the operand stack, and individual instructions. An ASM example:

      methodVisitor.visitFieldInsn(Opcodes.GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");

  They are difficult to use due to being so low-level, but they are the most powerful. Usually they are used to implement higher-level tools, and only few people actually need to use them directly.
- Intermediate level – code in strings, some abstraction of the classfile structure (Javassist).
- Advices in Java (e.g. AspectJ) – the code to be injected is expressed as syntax-checked and statically compiled Java.

Regarding the specification of where to inject the code:

- Manual injection – you have to somehow get hold of the place where you want to inject the code (ASM, Javassist).
- Primitive pointcuts – you have rather limited possibilities for expressing where to inject the code, for example into a particular method, into all public methods of a class, or into all public methods of classes in a group (Java EE interceptors).
- Pattern-matching pointcut expressions – powerful expressions matching join points based on a number of criteria, with wildcards, awareness of the context (e.g. "called from a class in the package XY"), etc. (AspectJ).

II. When the Magic Happens

The code can be injected at different points in time:

- Manually at run-time – your code has to explicitly ask for the enhanced code, e.g. by manually instantiating a custom proxy wrapping the target object (this is arguably not true code injection).
- At load-time – the modifications are performed when the target classes are being loaded by the JVM.
- At build-time – you add an extra step to your build process to modify the compiled classes before packaging and deploying your application.

Each of these modes of injection can be more suitable in different situations.

III. What It Can Do

The code injection tools vary quite a bit in what they can and cannot do. Some of the possibilities are:

- Add code before/after/instead of a method – only member-level methods, or also static ones?
- Add fields to a class.
- Add a new method.
- Make a class implement an interface.
- Modify an instruction within the body of a method (e.g. a method call).
- Modify generics, annotations, access modifiers, change constant values, …
- Remove a method, field, etc.

Selected Code Injection Tools

The best-known code injection tools are:

- Dynamic Java proxy
- The bytecode manipulation library ASM
- JBoss Javassist
- AspectJ
- Spring AOP/proxies
- Java EE interceptors

Practical Introduction to Java Proxy, Javassist and AspectJ

I've selected three rather different, mature, and popular code injection tools and will present them on real-world examples I've personally experienced.

The Omnipresent Dynamic Java Proxy

java.lang.reflect.Proxy makes it possible to dynamically create a proxy for an interface, forwarding all calls to a target object. It is not a code injection tool, for you cannot inject it anywhere – you must manually instantiate and use the proxy instead of the original object, and you can do this only for interfaces – but it can still be very useful, as we will see.

Advantages:

- It's a part of the JVM and thus available everywhere.
- You can use the same proxy – more exactly, an InvocationHandler – for incompatible objects and thus reuse the code more than you normally could.
- You save effort, because you can easily forward all calls to a target object and only modify the ones interesting for you. If you were to implement a proxy manually, you would need to implement all the methods of the interface in question.

Disadvantages:

- You can create a dynamic proxy only for an interface; you can't use it if your code expects a concrete class.
- You have to instantiate and apply it manually; there is no magical auto-injection and no code injection step.
- It's a little too verbose.
- Its power is very limited: it can only execute some code before/after/around a method.
Example

I was using JDBC PreparedStatement batch updates to modify a lot of data in a database, and the processing was failing for one of the batch updates because of an integrity constraint violation. The exception didn't contain enough information to find out which data caused the failure, so I created a dynamic proxy for the PreparedStatement that remembered the values passed into each of the batch updates and, in the case of a failure, automatically printed the batch number and the data. With this information I was able to fix the data, and I kept the solution in place so that if a similar problem ever occurs again, I'll be able to find its cause and resolve it quickly. The crucial part of the code:

LoggingStatementDecorator.java – snippet 1

    class LoggingStatementDecorator implements InvocationHandler {

        private PreparedStatement target;
        ...

        private LoggingStatementDecorator(PreparedStatement target) {
            this.target = target;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            try {
                Object result = method.invoke(target, args);
                updateLog(method, args); // remember data, reset upon successful execution
                return result;
            } catch (InvocationTargetException e) {
                Throwable cause = e.getTargetException();
                tryLogFailure(cause);
                throw cause;
            }
        }

        private void tryLogFailure(Throwable cause) {
            if (cause instanceof BatchUpdateException) {
                int failedBatchNr = successfulBatchCounter + 1;
                Logger.getLogger("JavaProxy").warning(
                    "THE INJECTED CODE SAYS: "
                    + "Batch update failed for batch# " + failedBatchNr
                    + " (counting from 1) with values: ["
                    + getValuesAsCsv() + "]. Cause: " + cause.getMessage());
            }
        }
        ...

Notes:

- To create a proxy, you first need to implement an InvocationHandler and its invoke method, which is called whenever any of the interface's methods is invoked on the proxy.
- You can access the information about the call via the java.lang.reflect.* objects and, for example, delegate the call to the proxied object via method.invoke.

We also have a utility method for creating a proxy instance for a prepared statement:

LoggingStatementDecorator.java – snippet 2

    public static PreparedStatement createProxy(PreparedStatement target) {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class[] { PreparedStatement.class },
                new LoggingStatementDecorator(target));
    }

Notes:

- The newProxyInstance call takes a classloader, an array of interfaces that the proxy should implement, and the invocation handler that calls should be delegated to (the handler itself has to manage a reference to the proxied object, if it needs one).

It is then used like this:

Main.java

    ...
    PreparedStatement rawPrepStmt = connection.prepareStatement("...");
    PreparedStatement loggingPrepStmt = LoggingStatementDecorator.createProxy(rawPrepStmt);
    ...
    loggingPrepStmt.executeBatch();
    ...

Notes:

- You can see that we have to manually wrap a raw object with the proxy and use the proxy from then on.

Alternative Solutions

This problem could be solved in different ways, for example by creating a non-dynamic proxy implementing PreparedStatement and forwarding all calls to the real statement while remembering batch data, but that would be a lot of boring typing, for the interface has many methods. The caller could also manually keep track of the data it has sent to the prepared statement, but that would obscure its logic with an unrelated concern. Using the dynamic Java proxy, we get a rather clean and easy-to-implement solution.
The Independent Javassist

JBoss Javassist is an intermediate code injection tool, providing a higher-level abstraction than bytecode manipulation libraries and offering limited but still very useful manipulation capabilities. The code to be injected is represented as strings, and you have to manually navigate to the class and method where it should be injected. Its main advantage is that the modified code has no new run-time dependencies, on Javassist or anything else. This may be the decisive factor if you are working for a large corporation where the deployment of additional open-source libraries (or just about any additional libraries), such as AspectJ, is difficult for legal and other reasons.

Advantages:

- Code modified by Javassist doesn't require any new run-time dependencies; the injection happens at build time, and the injected advice code itself doesn't depend on any Javassist API.
- Higher-level than bytecode manipulation libraries: the injected code is written in Java syntax, though enclosed in strings.
- Can do most things that you may need, such as "advising" method calls and method executions.

Disadvantages:

- Still a little too low-level and thus harder to use – you have to deal with the structure of methods, and the injected code is not syntax-checked.
- The injection is done manually; there isn't support for injecting the code automatically based on a pattern (though I once implemented a custom Ant task to do execution/call advising with Javassist).
- Only build-time injection.

(See GluonJ below for a solution without most of the disadvantages of Javassist.)

With Javassist you create a class which uses the Javassist API to inject code into the targets, and you run it as part of your build process after compilation, for example via a custom Ant task, as I once did.

Example

We needed to add some simple performance monitoring to our Java EE application, and we were not allowed to deploy any non-approved open-source library (at least not without going through a time-consuming approval process). We therefore used Javassist to inject the performance-monitoring code into our important methods and into the places where important external methods were called. The code injector:

JavassistInstrumenter.java

    public class JavassistInstrumenter {

        public void insertTimingIntoMethod(String targetClass, String targetMethod)
                throws NotFoundException, CannotCompileException, IOException {
            Logger logger = Logger.getLogger("Javassist");
            final String targetFolder = "./target/javassist";

            try {
                final ClassPool pool = ClassPool.getDefault();
                // Tell Javassist where to look for classes - into our ClassLoader
                pool.appendClassPath(new LoaderClassPath(getClass().getClassLoader()));
                final CtClass compiledClass = pool.get(targetClass);
                final CtMethod method = compiledClass.getDeclaredMethod(targetMethod);

                // Add something to the beginning of the method:
                method.addLocalVariable("startMs", CtClass.longType);
                method.insertBefore("startMs = System.currentTimeMillis();");
                // And also to its very end:
                method.insertAfter("{final long endMs = System.currentTimeMillis();"
                    + "iterate.jz2011.codeinjection.javassist.PerformanceMonitor.logPerformance(\""
                    + targetMethod + "\",(endMs-startMs));}");

                compiledClass.writeFile(targetFolder);
                // Enjoy the new $targetFolder/iterate/jz2011/codeinjection/javassist/TargetClass.class
                logger.info(targetClass + "." + targetMethod
                    + " has been modified and saved under " + targetFolder);
            } catch (NotFoundException e) {
                logger.warning("Failed to find the target class to modify, " + targetClass
                    + ", verify that its ClassPool has been configured to look "
                    + "into the right location");
            }
        }

        public static void main(String[] args) throws Exception {
            final String defaultTargetClass = "iterate.jz2011.codeinjection.javassist.TargetClass";
            final String defaultTargetMethod = "myMethod";
            final boolean targetProvided = args.length == 2;

            new JavassistInstrumenter().insertTimingIntoMethod(
                    targetProvided ? args[0] : defaultTargetClass,
                    targetProvided ? args[1] : defaultTargetMethod);
        }
    }

Notes:

- You can see the "low-levelness" – you have to explicitly deal with objects like CtClass and CtMethod, explicitly add a local variable, etc.
- Javassist is rather flexible in where it can look for the classes to modify: it can search the classpath, a particular folder, a JAR file, or a folder of JAR files.
- You would compile this class and run its main during your build process.

Javassist on Steroids: GluonJ

GluonJ is an AOP tool built on top of Javassist. It can use either a custom syntax or Java 5 annotations, and it's built around the concept of "revisers". A reviser is a class – an aspect – that revises, i.e. modifies, a particular target class and overrides one or more of its methods (contrary to inheritance, the reviser's code is physically imposed over the original code inside the target class).

Advantages:

- No run-time dependencies if build-time weaving is used (load-time weaving requires the GluonJ agent library or gluonj.jar).
- Simple Java syntax using GluonJ's annotations – though the custom syntax is also trivial to understand and easy to use.
- Easy, automatic weaving into the target classes with GluonJ's JAR tool, an Ant task, or dynamically at load-time.
- Support for both build-time and load-time weaving.

Disadvantages:

- An aspect can modify only a single class; you cannot inject the same piece of code into multiple classes/methods.
- Limited power – only provides for field/method addition and execution of code instead of/around a target method, either upon any of its executions or only if the execution happens in a particular context, i.e. when called from a particular class/method.

If you don't need to inject the same piece of code into multiple methods, then GluonJ is an easier and better choice than Javassist; and if its simplicity isn't a problem for you, it also might be a better choice than AspectJ, thanks to this very simplicity.

The Almighty AspectJ

AspectJ is a full-blown AOP tool: it can do nearly anything you might want, including the modification of static methods, the addition of new fields, the addition of an interface to a class' list of implemented interfaces, etc. The syntax of AspectJ advices comes in two flavours. One is a superset of Java syntax with additional keywords like aspect and pointcut; the other – called @AspectJ – is standard Java 5 with annotations such as @Aspect, @Pointcut, @Around. The latter is perhaps easier to learn and use, but also a little less powerful, as it isn't as expressive as the custom AspectJ syntax. With AspectJ you can define which join points to advise with very powerful expressions, but it may be a little difficult to learn them and get them right. There is a useful Eclipse plugin for AspectJ development – the AspectJ Development Tools (AJDT) – but the last time I tried it, it wasn't as helpful as I'd have liked.
Advantages:

- Very powerful; can do nearly anything you might need.
- Powerful pointcut expressions for defining where to inject an advice and when to activate it (including some run-time checks) – fully enables DRY, i.e. write once and inject many times.
- Both build-time and load-time code injection (weaving).

Disadvantages:

- The modified code depends on the AspectJ runtime library.
- The pointcut expressions are very powerful, but it might be difficult to get them right, and there isn't much support for "debugging" them, though the AJDT plugin is partially able to visualize their effects.
- It will likely take some time to get started, though the basic usage is pretty simple (using @Aspect, @Around, and a simple pointcut expression, as we will see in the example).

Example

Once upon a time I was writing a plugin for a closed-source LMS J2EE application with such dependencies that it wasn't feasible to run it locally. During an API call, a method deep inside the application was failing, but the exception didn't contain enough information to track down the cause of the problem. I therefore needed to change the method to log the value of its argument when it fails. The AspectJ code is quite simple:

LoggingAspect.java

    @Aspect
    public class LoggingAspect {

        @Around("execution(private void TooQuiet3rdPartyClass.failingMethod(..))")
        public Object interceptAndLog(ProceedingJoinPoint invocation) throws Throwable {
            try {
                return invocation.proceed();
            } catch (Exception e) {
                Logger.getLogger("AspectJ").warning(
                    "THE INJECTED CODE SAYS: the method "
                    + invocation.getSignature().getName()
                    + " failed for the input '" + invocation.getArgs()[0]
                    + "'. Original exception: " + e);
                throw e;
            }
        }
    }

Notes:

- The aspect is a normal Java class with the @Aspect annotation, which is just a marker for AspectJ.
- The @Around annotation instructs AspectJ to execute the method instead of the one matched by the expression, i.e. instead of the failingMethod of the TooQuiet3rdPartyClass.
- The around advice method needs to be public, return an Object, and take a special AspectJ object carrying information about the invocation – a ProceedingJoinPoint – as its argument; it may have an arbitrary name. (Actually, this is the minimal form of the signature; it could be more complex.)
- We use the ProceedingJoinPoint to delegate the call to the original target (an instance of the TooQuiet3rdPartyClass) and, in the case of an exception, to get the argument's value.
- I used an @Around advice, though @AfterThrowing would be simpler and more appropriate; but this shows the capabilities of AspectJ better and can be nicely compared to the dynamic Java proxy example above.

Since I didn't have control over the application's environment, I couldn't enable load-time weaving and thus had to use AspectJ's Ant task to weave the code at build time, re-package the affected JAR, and re-deploy it to the server.

Alternative Solutions

Well, if you can't use a debugger, your options are quite limited. The only alternative solution I could think of is to decompile the class (illegal!), add the logging to the method (provided the decompilation succeeds), re-compile it, and replace the original .class with the modified one.

The Dark Side

Code injection and Aspect-Oriented Programming are very powerful and sometimes indispensable, both for troubleshooting and as a regular part of application architecture, as we can see e.g. in the case of Java EE's Enterprise Java Beans, where business concerns such as transaction management and security checks are injected into POJOs (though implementations are actually more likely to use proxies), or in Spring. However, there is a price to be paid in terms of possibly decreased understandability, as the runtime behavior and structure differ from what you'd expect based on the source code (unless you know to check the aspects' sources too, or unless the injection is made explicit by annotations on the target classes, such as Java EE's @Interceptors). Therefore you must carefully weigh the benefits and drawbacks of code injection/AOP – though when used reasonably, they do not obscure the program flow any more than interfaces, factories, etc. The argument about obscuring code is perhaps often over-estimated. If you want to see an example of AOP gone wild, check the source code of Glassbox, a Java EE performance monitoring tool (for that you might need a map, not to get too lost).

Fancy Uses of Code Injection and AOP

The main field of application of code injection in the process of troubleshooting is logging, more exactly gaining visibility into what an application is doing by extracting and somehow communicating interesting runtime information about it. However, AOP has many interesting uses beyond – simple or complex – logging, for example:

- Typical examples: caching et al. (e.g. AOP in JBoss Cache), transaction management, logging, enforcement of security, persistence, thread safety, error recovery, automatic implementation of methods (e.g. toString, equals, hashCode), remoting.
- Implementation of role-based programming (e.g. OT/J, using BCEL) or the Data, Context, and Interaction architecture.
- Testing:
  - Test coverage – inject code to record whether a line has been executed during the test run or not.
  - Mutation testing (µJava, Jumble) – inject "random" mutations into the application and verify that the tests fail.
  - Pattern Testing – automatic verification via AOP that architecture/design/best-practice recommendations are implemented correctly in the code.
  - Simulating hardware/external failures by injecting the throwing of an exception.
- Helping to achieve zero turnaround for Java applications – JRebel uses an AOP-like approach for its framework and server integration plugins; namely, its plugins use Javassist for "binary patching".
- Solving tough problems and avoiding monkey-coding with AOP patterns such as Worker Object Creation (turn direct calls into asynchronous ones with a Runnable and a ThreadPool/task queue) and Wormhole (make context information from a caller available to the callee without having to pass it through all the layers as parameters and without a ThreadLocal) – described in the book AspectJ in Action.
- Dealing with legacy code – overriding the class instantiated by a call to a constructor (this and similar tricks may be used to break tight coupling with a feasible amount of work), ensuring backwards compatibility, teaching components to react properly to environment changes.
- Preserving backwards compatibility of an API while not blocking its ability to evolve, e.g. by adding backwards-compatible methods when return types have been narrowed/widened (Bridge Method Injector – uses ASM) or by re-adding old methods and implementing them in terms of the new API.
- Turning POJOs into JMX beans.

Summary

We've learned that code injection can be indispensable for troubleshooting, especially when dealing with closed-source libraries and complex deployment environments.
We’ve seen three rather different code injection tools – dynamic Java proxies, Javassist, AspectJ – applied to real-world problems and discussed their advantages and disadvantages because different tools may be suitable for different cases. We’ve also mentioned that code injection/AOP shouldn’t be overused and looked at some examples of advanced applications of code injection/AOP. I hope that you now understand how code injection can help you and know how to use these three tools. Source Codes You can get the fully-documented source codes of the examples from GitHub including not only the code to be injected but also the target code and support for easy building. The easiest may be: git clone git://github.com/jakubholynet/JavaZone-Code-Injection.git cd JavaZone-Code-Injection/ cat README mvn -P javaproxy test mvn -P javassist test mvn -P aspectj test (It may take few minutes for Maven do download its dependencies, plugins, and the actual project’s dependencies.) Additional Resources Spring’s introduction into AOP dW: AOP@Work: AOP myths and realities Chapter 1 of AspectJ in Action, 2nd. ed. Acknowledgements I would like to thank all the people who helped me with this post and the presentation including my colleges, the JRebel folk, and GluonJ’s co-author prof. Shigeru Chiba. From http://theholyjava.wordpress.com/2011/09/07/practical-introduction-into-code-injection-with-aspectj-javassist-and-java-proxy/
September 18, 2011
by Jakub Holý
· 38,061 Views
article thumbnail
Apache CXF: How to add custom SOAP headers to the web service request?
Here’s how to do it in the CXF-proprietary way:

    // Create a list of SOAP headers
    List<Header> headersList = new ArrayList<Header>();
    Header testHeader = new Header(new QName("uri:singz.ws.sample", "test"),
            "A custom header",
            new JAXBDataBinding(String.class));
    headersList.add(testHeader);
    ((BindingProvider) proxy).getRequestContext().put(Header.HEADER_LIST, headersList);

The headers in the list are streamed to the wire at the appropriate time, according to the databinding object found in each Header object. This approach doesn’t require changes to the WSDL or to method signatures. It’s also much faster, as it doesn’t break streaming, and the memory overhead is lower.

More on this at http://cxf.apache.org/faq.html#FAQ-HowcanIaddsoapheaderstotherequest%2Fresponse%3F

From http://singztechmusings.wordpress.com/2011/09/08/apache-cxf-how-to-add-custom-soap-headers-to-the-web-service-request/
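For context, a minimal self-contained sketch of a client using the snippet above. The HelloService interface and the endpoint address are illustrative assumptions; the CXF classes (JaxWsProxyFactoryBean, Header, JAXBDataBinding) are the real ones:

    import java.util.ArrayList;
    import java.util.List;

    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.BindingProvider;

    import org.apache.cxf.headers.Header;
    import org.apache.cxf.jaxb.JAXBDataBinding;
    import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

    @WebService
    interface HelloService { // hypothetical SEI, for illustration only
        String sayHello(String name);
    }

    public class CustomHeaderClient {
        public static void main(String[] args) throws Exception {
            // Build a CXF client proxy for the hypothetical service.
            JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
            factory.setServiceClass(HelloService.class);
            factory.setAddress("http://localhost:8080/services/hello"); // assumed endpoint
            HelloService proxy = (HelloService) factory.create();

            // Attach the custom SOAP header exactly as in the snippet above.
            List<Header> headersList = new ArrayList<Header>();
            headersList.add(new Header(new QName("uri:singz.ws.sample", "test"),
                    "A custom header", new JAXBDataBinding(String.class)));
            ((BindingProvider) proxy).getRequestContext().put(Header.HEADER_LIST, headersList);

            proxy.sayHello("world"); // the header is written to the wire with this request
        }
    }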
September 17, 2011
by Singaram Subramanian
· 27,596 Views
article thumbnail
Memory Barriers/Fences
In this article I'll discuss the most fundamental technique in concurrent programming, known as memory barriers or fences, which make the memory state within a processor visible to other processors.

CPUs have employed many techniques to try and accommodate the fact that CPU execution unit performance has greatly outpaced main memory performance. In my “Write Combining” article I touched on just one of these techniques. The most common technique employed by CPUs to hide memory latency is to pipeline instructions and then spend significant effort, and resource, on trying to re-order these pipelines to minimise stalls related to cache misses.

When a program is executed, it does not matter if its instructions are re-ordered provided the same end result is achieved. For example, within a loop it does not matter when the loop counter is updated if no operation within the loop uses it. The compiler and CPU are free to re-order the instructions to best utilise the CPU, provided it is updated by the time the next iteration is about to commence. Also, over the execution of a loop, this variable may be stored in a register and never pushed out to cache or main memory, thus it is never visible to another CPU.

CPU cores contain multiple execution units. For example, a modern Intel CPU contains six execution units which can do a combination of arithmetic, conditional logic, and memory manipulation, each execution unit handling some combination of these tasks. These execution units operate in parallel, allowing instructions to be executed in parallel. This introduces another level of non-determinism to program order as observed from another CPU.

Finally, when a cache miss occurs, a modern CPU can make an assumption about the result of a memory load and continue executing based on this assumption until the load returns the actual data. Provided “program order” is preserved, the CPU, and compiler, are free to do whatever they see fit to improve performance.

(Figure 1: Loads and stores to the caches and main memory are buffered and re-ordered using the load, store, and write-combining buffers.)

These buffers are associative queues that allow fast lookup. This lookup is necessary when a later load needs to read the value of a previous store that has not yet reached the cache. Figure 1 above depicts a simplified view of a modern multi-core CPU. It shows how the execution units can use the local registers and buffers to manage memory while it is being transferred back and forth from the cache sub-system.

In a multi-threaded environment, techniques need to be employed for making program results visible in a timely manner. I will not cover cache coherence in this article. Just assume that once memory has been pushed to the cache, a protocol of messages will occur to ensure all caches are coherent for any shared data.

The techniques for making memory visible from a processor core are known as memory barriers or fences. Memory barriers provide two properties. Firstly, they preserve externally visible program order by ensuring all instructions on either side of the barrier appear in the correct program order if observed from another CPU; secondly, they make the memory visible by ensuring the data is propagated to the cache sub-system.

Memory barriers are a complex subject. They are implemented very differently across CPU architectures.
At one end of the spectrum there is the relatively strong memory model on Intel CPUs, which is simpler than, say, the weak and complex memory model on a DEC Alpha, with its partitioned caches in addition to cache layers. Since x86 CPUs are the most common for multi-threaded programming, I’ll try and simplify to this level.

Store Barrier

A store barrier, the “sfence” instruction on x86, forces all store instructions prior to the barrier to happen before the barrier and have the store buffers flushed to cache for the CPU on which it is issued. This will make the program state visible to other CPUs so they can act on it if necessary. A good example of this in action is the following simplified code from the BatchEventProcessor in the Disruptor. When the sequence is updated, other consumers and producers know how far this consumer has progressed and can thus take appropriate action. All previous updates to memory that happened before the barrier are now visible.

    private volatile long sequence = RingBuffer.INITIAL_CURSOR_VALUE;

    // from inside the run() method
    T event = null;
    long nextSequence = sequence + 1L;
    while (running)
    {
        try
        {
            final long availableSequence = dependencyBarrier.waitFor(nextSequence);
            while (nextSequence <= availableSequence)
            {
                event = dependencyBarrier.getEvent(nextSequence);
                eventHandler.onEvent(event, nextSequence == availableSequence);
                nextSequence++;
            }
            sequence = event.getSequence(); // store barrier inserted here !!!
        }
        catch (final Exception ex)
        {
            exceptionHandler.handle(ex, event);
            sequence = event.getSequence(); // store barrier inserted here !!!
            nextSequence = event.getSequence() + 1L;
        }
    }

Load Barrier

A load barrier, the “lfence” instruction on x86, forces all load instructions after the barrier to happen after the barrier and then waits on the load buffer to drain for that CPU. This makes program state exposed by other CPUs visible to this CPU before further progress is made. A good example of this is when the BatchEventProcessor sequence referenced above is read by producers, or consumers, in the corresponding barriers of the Disruptor.

Full Barrier

A full barrier, the “mfence” instruction on x86, is a composite of both load and store barriers happening on a CPU.

Java Memory Model

In the Java Memory Model, a volatile field has a store barrier inserted after a write to it and a load barrier inserted before a read of it. Qualified final fields of a class have a store barrier inserted after their initialisation to ensure these fields are visible once the constructor completes, when a reference to the object becomes available.

Atomic Instructions and Software Locks

Atomic instructions, such as the “lock ...” instructions on x86, are effectively a full barrier, as they lock the memory sub-system to perform an operation and have guaranteed total order, even across CPUs. Software locks usually employ memory barriers, or atomic instructions, to achieve visibility and preserve program order.

Performance Impact of Memory Barriers

Memory barriers prevent a CPU from employing many of its techniques to hide memory latency, and therefore they have a significant performance cost which must be considered. To achieve maximum performance, it is best to model the problem so the processor can do units of work, then have all the necessary memory barriers occur on the boundaries of these work units. Taking this approach allows the processor to optimise the units of work without restriction.
There is an advantage to grouping necessary memory barriers, in that buffers flushed after the first one will be less costly because no work will be under way to refill them.

From http://mechanical-sympathy.blogspot.com/2011/07/memory-barriersfences.html
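To make the Java Memory Model rules above concrete, here is a minimal sketch of my own (class and field names are not from the article): the write to the volatile field gets a store barrier after it, and each read gets a load barrier before it, so the worker thread is guaranteed to eventually see the stop request.

    public class StopFlag implements Runnable {
        // volatile: store barrier after each write, load barrier before each read
        private volatile boolean running = true;

        public void run() {
            while (running) {
                // do a unit of work; the barrier cost is paid once per loop check
            }
        }

        public void stop() {
            running = false; // made visible to the worker by the store barrier
        }

        public static void main(String[] args) throws InterruptedException {
            StopFlag task = new StopFlag();
            Thread worker = new Thread(task);
            worker.start();
            Thread.sleep(100);
            task.stop();
            worker.join(); // terminates promptly because 'running' is volatile
        }
    }

Without volatile, the JIT could legally keep 'running' in a register inside the loop, exactly the register-caching behaviour described at the start of the article.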
September 12, 2011
by Martin Thompson
· 25,337 Views · 8 Likes
article thumbnail
How to add information to a SOAP fault message with EJB 3 based web services
Are you building a Java web service based on EJB3? Do you need to return a more meaningful message to your web service clients than just the exception message – or, even worse, the recurring javax.transaction.TransactionRolledbackException? Well, if the answer is YES to the above questions, then keep reading...

The code in this article has been tested with JBoss 5.1.0, but it should (!?) work on other EJB containers as well.

Create a base application exception that will be extended by all the other exceptions; I will refer to it as MyApplicationBaseException. This exception contains a list of UserMessage, again a class I created, with some messages and locale information.

You need to create a javax.xml.ws.handler.soap.SOAPHandler<SOAPMessageContext> implementation. Mine looks like this:

    import java.util.Set;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.Marshaller;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPException;
    import javax.xml.soap.SOAPFault;
    import javax.xml.soap.SOAPMessage;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;
    import org.apache.commons.lang.exception.ExceptionUtils;

    public class SoapExceptionHandler implements SOAPHandler<SOAPMessageContext> {

        private transient Logger logger = ServiceLogFactory.getLogger(SoapExceptionHandler.class);

        @Override
        public void close(MessageContext context) {
        }

        @Override
        public boolean handleFault(SOAPMessageContext context) {
            try {
                boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (outbound) {
                    logger.info("Processing " + context + " for exceptions");
                    SOAPMessage msg = context.getMessage();
                    SOAPFault fault = msg.getSOAPBody().getFault();
                    // Retrieves the exception from the context
                    Exception ex = (Exception) context.get("exception");
                    if (ex != null) {
                        // Add a fault to the body if not there already
                        if (fault == null) {
                            fault = msg.getSOAPBody().addFault();
                        }
                        // Get my exception
                        int indexOfType = ExceptionUtils.indexOfType(ex, MyApplicationBaseException.class);
                        if (indexOfType != -1) {
                            MyApplicationBaseException myEx =
                                    (MyApplicationBaseException) ExceptionUtils.getThrowableList(ex).get(indexOfType);
                            fault.setFaultString(myEx.getMessage());
                            try {
                                JAXBContext jaxContext = JAXBContext.newInstance(UserMessages.class);
                                Marshaller marshaller = jaxContext.createMarshaller();
                                // Add the UserMessages xml as a fault detail. Detail interface extends Node
                                marshaller.marshal(myEx.getUserMessages(), fault.addDetail());
                            } catch (JAXBException e) {
                                throw new RuntimeException("Can't marshall the user messages", e);
                            }
                        } else {
                            logger.info("This is not a MyApplicationBaseException");
                        }
                    } else {
                        logger.warn("No exception found in the webServiceContext");
                    }
                }
            } catch (SOAPException e) {
                logger.warn("Error when trying to access the soap message", e);
            }
            return true;
        }

        @Override
        public boolean handleMessage(SOAPMessageContext context) {
            return true;
        }

        @Override
        public Set<QName> getHeaders() {
            return null;
        }
    }

Now that you have the exception handler, you need to register this SOAPHandler with the EJB. To do that you'll need to create an XML file in your classpath and add an annotation to the EJB implementation class.
The XML file (soapHandler.xml) declares the handler chain:

    <?xml version="1.0" encoding="UTF-8"?>
    <handler-chains xmlns="http://java.sun.com/xml/ns/javaee">
        <handler-chain>
            <handler>
                <handler-name>ExceptionHandler</handler-name>
                <handler-class>com.mycompany.utilities.ExceptionHandler</handler-class>
            </handler>
        </handler-chain>
    </handler-chains>

and the EJB with the annotation will be:

    import javax.jws.HandlerChain;

    @Local(MyService.class)
    @Stateless
    @HandlerChain(file = "soapHandler.xml")
    @Interceptors({ MyApplicationInterceptor.class })
    @SOAPBinding(style = SOAPBinding.Style.RPC)
    @WebService(endpointInterface = "com.mycompany.services.myservice",
                targetNamespace = "http://myservice.services.mycompany.com")
    public final class MyServiceImpl implements MyService {
        // service implementation
    }

To make sure all my exceptions have proper messages, and that the exception is set in the SOAPMessageContext, I use an interceptor to wrap all the service methods and transform any exception into an instance of MyApplicationException. The interceptor has a single method:

    @AroundInvoke
    private Object setException(InvocationContext ic) throws Exception {
        Object toReturn = null;
        try {
            toReturn = ic.proceed();
        } catch (Exception e) {
            logger.error("Exception during the request processing.", e);
            // converts any exception to MyApplicationException
            e = MyApplicationExceptionHandler.getMyApplicationException(e);
            if (context != null && context.getMessageContext() != null) {
                context.getMessageContext().put("exception", e);
            }
            throw e;
        }
        return toReturn;
    }

That's it! You're done.

From http://www.devinprogress.info/2011/02/how-to-add-information-to-soap-fault.html
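On the client side, the detail that the handler attached can be read back out of the fault. A hedged sketch follows (MyService and UserMessages are the article's types; someOperation is an illustrative method name):

    import java.util.Iterator;

    import javax.xml.bind.JAXBContext;
    import javax.xml.soap.Detail;
    import javax.xml.soap.DetailEntry;
    import javax.xml.ws.soap.SOAPFaultException;

    public class FaultDetailReader {
        // Calls the service and, on failure, unmarshals the UserMessages detail
        // that SoapExceptionHandler marshalled into the fault.
        public static void call(MyService proxy) throws Exception {
            try {
                proxy.someOperation();
            } catch (SOAPFaultException e) {
                Detail detail = e.getFault().getDetail();
                if (detail != null) {
                    Iterator<?> entries = detail.getDetailEntries();
                    while (entries.hasNext()) {
                        DetailEntry entry = (DetailEntry) entries.next();
                        UserMessages messages = (UserMessages) JAXBContext
                                .newInstance(UserMessages.class)
                                .createUnmarshaller()
                                .unmarshal(entry); // DetailEntry is a DOM Node
                        System.out.println("Server said: " + messages);
                    }
                }
            }
        }
    }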
September 10, 2011
by Andrew Salvadore
· 11,185 Views
article thumbnail
Testing Databases with JUnit and Hibernate Part 1: One to Rule them
There is little support for testing the database of your enterprise application. We will describe some problems and possible solutions based on Hibernate and JUnit.
September 6, 2011
by Jens Schauder
· 122,135 Views · 2 Likes
article thumbnail
NTLM Authentication in Java
In one of my previous lives, I used to work in Microsoft and there this word – NTLM (NT Lan Manager) was something that came to us whenever we used to work on applications. Microsoft OS have always provided us with an inbuilt security systems that can be effectively used to offer authentication (and even authorization to web applications). Many years back, I moved over into Java world and when I was asked to carry out my very first security implementation, I realized that there was no easy way to do this and many clients would actually want us to use LDAP for authentication and authorization. For many years, I continued to use that. And, then one day in a discussion with a client, we were asked to offer SSO implementation and client did not have an existing setup like SiteMinder. I started to think about if we can go about using NTLM based authentication. The reason that was possible was because the application we were asked to build was to be used within the organization itself and all the people were required to login into a domain. After some research, I was able to find out a way we could do this. We did a POC and showed it to the client and they were happy about it. What we did has been explained below: Wrote a Servlet which was the first one to be loaded (like Authentication Interceptor). This servlet was responsible for reading the header attributes and identify the user’s Domain and NTID Once we had the details; we sent a request to our Database to see if that user is registered under the same domain/NTID If the user was found in our user-database we allowed him to pass through And then roles and authorization for user was loaded Basically, we bypassed the “Login Screen” where the user was entering the password and used Domain information. Please note that it was possible for us because the Client guaranteed that there was this domain always and all users had unique NTIDs. Also, that it was their responsibility to shield the application from any external entry points where someone may impersonate the Domain/ID. If you are interested, you can refer to the code below: From http://scrtchpad.wordpress.com/2011/08/04/ntml-authentication-in-java/
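The original post's code is not reproduced above; the following is only a minimal sketch of the described flow, assuming the container (or a gateway in front of it) has already completed the NTLM negotiation and exposes the authenticated identity as DOMAIN\ntid via getRemoteUser(). The UserDirectory class is a hypothetical stand-in for the user-database check:

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DomainAuthFilter implements Filter {

        // Hypothetical stand-in for the user-database lookup described above.
        static class UserDirectory {
            boolean isRegistered(String domain, String ntid) { return true; }
            void loadRoles(String ntid) { }
        }

        private final UserDirectory userDirectory = new UserDirectory();

        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            // With NTLM handled in front of the application, the principal
            // typically arrives in the form DOMAIN\ntid.
            String principal = request.getRemoteUser();
            if (principal != null && principal.contains("\\")) {
                String domain = principal.substring(0, principal.indexOf('\\'));
                String ntid = principal.substring(principal.indexOf('\\') + 1);
                if (userDirectory.isRegistered(domain, ntid)) {
                    userDirectory.loadRoles(ntid); // authorization step from the post
                    chain.doFilter(req, res);
                    return;
                }
            }
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }

        public void destroy() {
        }
    }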
September 1, 2011
by Kapil Viren Ahuja
· 51,761 Views · 1 Like
article thumbnail
Java NIO vs. IO
When studying both the Java NIO and IO APIs, a question quickly pops into mind: when should I use IO and when should I use NIO? In this text I will try to shed some light on the differences between Java NIO and IO, their use cases, and how they affect the design of your code.

Main Differences Between Java NIO and IO

The table below summarizes the main differences between Java NIO and IO. I will get into more detail about each difference in the sections following the table.

    IO                 NIO
    Stream oriented    Buffer oriented
    Blocking IO        Non-blocking IO
                       Selectors

Stream Oriented vs. Buffer Oriented

The first big difference between Java NIO and IO is that IO is stream oriented, where NIO is buffer oriented. So, what does that mean?

Java IO being stream oriented means that you read one or more bytes at a time from a stream. What you do with the read bytes is up to you. They are not cached anywhere. Furthermore, you cannot move forth and back in the data in a stream. If you need to move forth and back in the data read from a stream, you will need to cache it in a buffer first.

Java NIO's buffer oriented approach is slightly different. Data is read into a buffer from which it is later processed. You can move forth and back in the buffer as you need to. This gives you a bit more flexibility during processing. However, you also need to check if the buffer contains all the data you need in order to fully process it. And you need to make sure that when reading more data into the buffer, you do not overwrite data in the buffer you have not yet processed.

Blocking vs. Non-blocking IO

Java IO's various streams are blocking. That means that when a thread invokes a read() or write(), that thread is blocked until there is some data to read, or the data is fully written. The thread can do nothing else in the meantime.

Java NIO's non-blocking mode enables a thread to request reading data from a channel and only get what is currently available, or nothing at all if no data is currently available. Rather than remain blocked until data becomes available for reading, the thread can go on with something else. The same is true for non-blocking writing: a thread can request that some data be written to a channel, but not wait for it to be fully written. What threads spend their idle time on when not blocked in IO calls is usually performing IO on other channels in the meantime. That is, a single thread can now manage multiple channels of input and output.

Selectors

Java NIO's selectors allow a single thread to monitor multiple channels of input. You can register multiple channels with a selector, then use a single thread to "select" the channels that have input available for processing, or select the channels that are ready for writing. This selector mechanism makes it easy for a single thread to manage multiple channels; a minimal sketch follows.
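The sketch below is my own illustration of the selector mechanism, not part of the original article; the port number is arbitrary. One thread accepts new connections and reads from all of them:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SingleThreadServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();

            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(9000)); // port is arbitrary
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(48);
            while (true) {
                selector.select(); // blocks until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // A new connection: register it with the same selector.
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int bytesRead = client.read(buffer);
                        if (bytesRead == -1) {
                            client.close(); // remote end closed the connection
                        }
                        // otherwise: inspect the buffer contents, as discussed below
                    }
                }
            }
        }
    }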
How NIO and IO Influence Application Design

Whether you choose NIO or IO as your IO toolkit may impact the following aspects of your application design:

1. The API calls to the NIO or IO classes.
2. The processing of data.
3. The number of threads used to process the data.

The API Calls

Of course the API calls when using NIO look different than when using IO. This is no surprise. Rather than just reading the data byte for byte from e.g. an InputStream, the data must first be read into a buffer, and then be processed from there.

The Processing of Data

The processing of the data is also affected when using a pure NIO design versus an IO design. In an IO design you read the data byte for byte from an InputStream or a Reader. Imagine you were processing a stream of line-based textual data. For instance:

    name: anna
    age: 25
    email: anna@mailserver.com
    phone: 1234567890

This stream of text lines could be processed like this:

    InputStream input = ...; // get the InputStream from the client socket
    BufferedReader reader = new BufferedReader(new InputStreamReader(input));

    String nameLine  = reader.readLine();
    String ageLine   = reader.readLine();
    String emailLine = reader.readLine();
    String phoneLine = reader.readLine();

Notice how the processing state is determined by how far the program has executed. In other words, once the first reader.readLine() method returns, you know for sure that a full line of text has been read; readLine() blocks until a full line is read, that's why. You also know that this line contains the name. Similarly, when the second readLine() call returns, you know that this line contains the age, etc. As you can see, the program progresses only when there is new data to read, and for each step you know what that data is. Once the executing thread has progressed past reading a certain piece of data in the code, the thread is not going backwards in the data (mostly not). This principle is also illustrated in this diagram:

(Figure: Java IO – reading data from a blocking stream.)

A NIO implementation would look different. Here is a simplified example:

    ByteBuffer buffer = ByteBuffer.allocate(48);
    int bytesRead = inChannel.read(buffer);

Notice the second line, which reads bytes from the channel into the ByteBuffer. When that method call returns, you don't know if all the data you need is inside the buffer. All you know is that the buffer contains some bytes. This makes processing somewhat harder.

Imagine if, after the first read(buffer) call, all that was read into the buffer was half a line – for instance, "name: an". Can you process that data? Not really. You need to wait until at least a full line of data is in the buffer before it makes sense to process any of it.

So how do you know if the buffer contains enough data for it to make sense to be processed? Well, you don't. The only way to find out is to look at the data in the buffer. The result is that you may have to inspect the data in the buffer several times before you know if all the data is in there. This is both inefficient and can become messy in terms of program design. For instance:

    ByteBuffer buffer = ByteBuffer.allocate(48);
    int bytesRead = inChannel.read(buffer);

    while (!bufferFull(bytesRead)) {
        bytesRead = inChannel.read(buffer);
    }

The bufferFull() method has to keep track of how much data has been read into the buffer, and return either true or false depending on whether the buffer is full. In other words, if the buffer is ready for processing, it is considered full. The bufferFull() method scans through the buffer, but must leave the buffer in the same state as before it was called. If not, the next data read into the buffer might not be read in at the correct location. This is not impossible, but it is yet another issue to watch out for.

If the buffer is full, it can be processed. If it is not full, you might be able to partially process whatever data is there, if that makes sense in your particular case. In many cases it doesn't. The is-data-in-buffer-ready loop is illustrated in this diagram:

(Figure: Java NIO – reading data from a channel until all needed data is in the buffer.)
Summary

NIO allows you to manage multiple channels (network connections or files) using only a single thread (or a few), but the cost is that parsing the data might be somewhat more complicated than when reading data from a blocking stream.

If you need to manage thousands of open connections simultaneously, each of which only sends a little data – for instance a chat server – implementing the server in NIO is probably an advantage. Similarly, if you need to keep a lot of open connections to other computers, e.g. in a P2P network, using a single thread to manage all of your outbound connections might be an advantage. This one-thread, multiple-connections design is illustrated in this diagram:

(Figure: Java NIO – a single thread managing multiple connections.)

If you have fewer connections with very high bandwidth, sending a lot of data at a time, perhaps a classic IO server implementation might be the best fit. This diagram illustrates a classic IO server design:

(Figure: Java IO – a classic IO server design: one connection handled by one thread.)

From http://tutorials.jenkov.com/java-nio/nio-vs-io.html
August 28, 2011
by Jakob Jenkov
· 132,924 Views · 19 Likes
article thumbnail
Clojure: partition-by, split-with, group-by, and juxt
Today I ran into a common situation: I needed to split a list into two sublists – elements that passed a predicate and elements that failed it. I'm sure I've run into this problem several times, but it's been a while and I'd forgotten what options were available to me. A quick look at http://clojure.github.com/clojure/ reveals several potential functions: partition-by, split-with, and group-by.

partition-by

From the docs:

    Usage: (partition-by f coll)
    Applies f to each value in coll, splitting it each time f returns a new value. Returns a lazy seq of partitions.

Let's assume we have a collection of ints and we want to split them into a list of evens and a list of odds. The following REPL session shows the result of calling partition-by with our list of ints.

    user=> (partition-by even? [1 2 4 3 5 6])
    ((1) (2 4) (3 5) (6))

The partition-by function works as described; unfortunately, it's not exactly what I'm looking for. I need a function that returns ((1 3 5) (2 4 6)).

split-with

From the docs:

    Usage: (split-with pred coll)
    Returns a vector of [(take-while pred coll) (drop-while pred coll)]

The split-with function sounds promising, but a quick REPL session shows it's not what we're looking for.

    user=> (split-with even? [1 2 4 3 5 6])
    [() (1 2 4 3 5 6)]

As the docs state, the collection is split on the first item that fails the predicate – (even? 1).

group-by

From the docs:

    Usage: (group-by f coll)
    Returns a map of the elements of coll keyed by the result of f on each element. The value at each key will be a vector of the corresponding elements, in the order they appeared in coll.

The group-by function works, but it gives us a bit more than we're looking for.

    user=> (group-by even? [1 2 4 3 5 6])
    {false [1 3 5], true [2 4 6]}

The result as a map isn't exactly what we desire, but using a bit of destructuring allows us to grab the values we're looking for.

    user=> (let [{evens true odds false} (group-by even? [1 2 4 3 5 6])] [evens odds])
    [[2 4 6] [1 3 5]]

The group-by results mixed with destructuring do the trick, but there's another option.

juxt

From the docs:

    Usage: (juxt f) (juxt f g) (juxt f g h) (juxt f g h & fs)
    Alpha - name subject to change. Takes a set of functions and returns a fn that is the juxtaposition of those fns. The returned fn takes a variable number of args, and returns a vector containing the result of applying each fn to the args (left-to-right). ((juxt a b c) x) => [(a x) (b x) (c x)]

The first time I ran into juxt I found it a bit intimidating. I couldn't tell you why, but if you feel the same way – don't feel bad. It turns out juxt is exactly what we're looking for. The following REPL session shows how to combine juxt with filter and remove to produce the desired results.

    user=> ((juxt filter remove) even? [1 2 4 3 5 6])
    [(2 4 6) (1 3 5)]

There's one catch to using juxt in this way: the entire list is processed twice, once with filter and once with remove. In general this is acceptable; however, it's something worth considering when writing performance-sensitive code.

From http://blog.jayfields.com/2011/08/clojure-partition-by-split-with-group.html
August 24, 2011
by Jay Fields
· 12,842 Views
article thumbnail
Avoiding Java Serialization to increase performance
Many frameworks for storing objects in an offline or cached manner use standard Java Serialization to encode the object as bytes which can be turned back into the original object. Java Serialization is generic and can serialise just about any type of object.

Why Avoid It

The main problem with Java Serialization is performance and efficiency. Java Serialization is much slower than using in-memory stores and tends to significantly expand the size of the object. Java Serialization also creates a lot of garbage.

Access Performance

Say you have a collection and you want to update a field of many elements. Something like:

    for (MutableTypes mt : mts) {
        mt.setInt(mt.getInt());
    }

If you update one million elements for about five seconds, how long does each one take?

- Huge Collection, update one field: took an average of 5.1 ns
- List, update one field: took an average of 6.5 ns
- List with Externalizable, update one field: took an average of 5,841 ns
- List with Serializable, update one field: took an average of 23,217 ns

If you update ten million elements for five seconds or more:

- Huge Collection, update one field: took an average of 5.4 ns
- List, update one field: took an average of 6.6 ns
- List with readObject/writeObject, update one field: took an average of 6,073 ns
- List with Serializable, update one field: took an average of 22,943 ns

Huge Collection stores information in a column-based layout, so accessing just one field is much more CPU-cache efficient than using JavaBeans; if you were to update every field, it would be about 2x or more slower. Using an optimised Externalizable is much faster than the default Serializable, however it is still about 400x slower than using a JavaBean.

Memory Efficiency

The per-object memory used is also important, as it impacts how many objects you can store and the performance of accessing those objects.

    Collection type             Heap used per million   Direct memory per million   Garbage produced per million
    Huge Collection             0.09 MB                 34 MB                       80 bytes
    List                        68 MB                   none                        30 bytes
    List using Externalizable   140 MB                  none                        5,941 MB
    List using Serializable     506 MB                  none                        16,746 MB

This test was performed on a collection of one million elements. To test the amount of garbage produced, I set the Eden size to greater than 15 GB so no GC would be performed:

    -mx22g -XX:NewSize=20g -XX:-UseTLAB -verbosegc

Conclusion

Having an optimised readExternal/writeExternal can improve performance and the size of a serialised object by 2-4 times; however, if you need to maximise performance and efficiency, you can gain much more by not using serialization at all.

From http://vanillajava.blogspot.com/2011/08/avoiding-java-serialization-to-increase.html
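As a hedged illustration of the optimisation discussed above (the class and its fields are my own example, not the article's benchmark code), a hand-written Externalizable writes exactly the fields that matter and avoids the reflective field discovery of default serialization:

    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInput;
    import java.io.ObjectOutput;

    public class MutablePoint implements Externalizable {
        private int x;
        private int y;

        public MutablePoint() { // public no-arg constructor required by Externalizable
        }

        public MutablePoint(int x, int y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            // Write exactly the two ints - no per-field metadata, no reflection.
            out.writeInt(x);
            out.writeInt(y);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
            x = in.readInt();
            y = in.readInt();
        }
    }

This is the "2-4 times" improvement from the conclusion; as the numbers above show, avoiding serialization entirely (e.g. a column-based store) gains far more.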
August 20, 2011
by Peter Lawrey
· 26,040 Views
article thumbnail
Attaching Java source with Eclipse IDE
In Eclipse, when you press the Ctrl button and click on any class name, the IDE will take you to the source file for that class. This is the normal behavior for the classes you have in your project. But in case you want the same behavior for Java's core classes too, you can have it by attaching the Java source to the Eclipse IDE. Once you attach the source, whenever you Ctrl+click any Java class name (String, for example), Eclipse will open the source code of that class.

To attach the Java source code to Eclipse:

1. When you install the JDK, you must have selected the option to install the Java source files too. This copies the src.zip file into the installation directory.
2. In Eclipse, go to Window -> Preferences -> Java -> Installed JREs -> Add and choose the JDK you have on your system.
3. Eclipse will now list the JARs found in the dialog box. There, select rt.jar and choose Source Attachment. By default, this will be pointing to the correct src.zip. If not, choose the src.zip file which you have in your Java installation directory.
4. Similarly, if you have the Javadoc downloaded on your machine, you can configure that too in this dialog box.

Done! Hereafter, for all the projects for which you are using the above JDK, you'll be able to browse Java's source code just like you browse your own code.

From http://veerasundar.com/blog/2011/08/attaching-java-source-with-eclipse-ide
August 18, 2011
by Veera Sundar
· 143,194 Views · 4 Likes
article thumbnail
REST JSON to SOAP conversion tutorial
I often get asked about 'REST to SOAP' transformation use cases these days. Using an SOA gateway like SecureSpan to perform this type of transformation at runtime is trivial to set up. With SecureSpan in front of any existing web service (in the DMZ, for example), you can virtualize a REST version of this same service. Using an example, here is a description of the steps to perform this conversion.

Imagine the geoloc web service for recording geographical locations. It has two methods, one for setting a location and one for getting a location. See below what this would look like in SOAP (the envelopes are omitted here; only the payload values survive):

    Get-location request:  tracker id 34802398402
    Response:              latitude 52.37706, longitude 4.889721

    Set-location request:  tracker id 34802398402, latitude 52.37706, longitude 4.889721
    Response:              ok

Here is the equivalent REST target that I want to support at the edge. Payloads could be XML, but let's use JSON to make it more interesting.

    GET /position/34802398402

    HTTP 200 OK
    Content-Type: text/json
    { "latitude": 52.37706, "longitude": 4.889721 }

    POST /position/34802398402
    Content-Type: text/json
    { "latitude": 52.37706, "longitude": 4.889721 }

    HTTP 200 OK
    ok

Now let's implement this REST version of the service using SecureSpan. I'm assuming that you already have a SecureSpan gateway deployed between the potential REST requesters and the existing SOAP web service.

First, I will create a new service endpoint on the gateway for this service and assign anything that comes in at the URI pattern /position/* to this service. I will also allow the HTTP verbs GET and POST for this service.

(Figure: REST geoloc service properties)

Next, let's isolate the resource ID from the URI and save it as a context variable named 'trackerid'. We can use a simple regex assertion to accomplish this. Also, I will branch on the incoming HTTP verb using an OR statement. I am just focusing on GET and POST for this example, but you could add additional logic for other HTTP verbs that you want to support for this REST service.

(Figure: Regex for REST service resource identification)
(Figure: Policy branching for GET vs POST)

For GET requests, the transformation is very simple: we just declare a message variable using a SOAP skeleton in which we refer to the trackerid variable.

(Figure: SOAP request template)

This SOAP message is routed to the existing web service, and the essential elements are isolated using XPath assertions.

(Figure: Processing the SOAP response)

The REST response is then constructed back using a template response.

(Figure: Template JSON response)

Similar logic is performed for the POST message. See below for the full policy logic.

(Figure: Complete policy)

You're done virtualizing the REST service. Setting this up with SecureSpan took less than an hour, did not require any change to the existing SOAP web service, and did not require the deployment of an additional component. From there, you would probably enrich the policy to perform some JSON schema validation, some URL and query parameter validation, perhaps some authentication, authorization, etc.
August 4, 2011
by Francois Lascelles
· 37,051 Views
article thumbnail
Java Tools for Source Code Optimization and Analysis
Below is a list of some tools that can help you examine your Java source code for potential problems:

1. PMD, from http://pmd.sourceforge.net/
License: PMD is licensed under a "BSD-style" license.
PMD scans Java source code and looks for potential problems like:
- Possible bugs – empty try/catch/finally/switch statements
- Dead code – unused local variables, parameters and private methods
- Suboptimal code – wasteful String/StringBuffer usage
- Overcomplicated expressions – unnecessary if statements, for loops that could be while loops
- Duplicate code – copied/pasted code means copied/pasted bugs
You can download everything from the site, and you can get an overview of all the rules at the rulesets index page. PMD is integrated with JDeveloper, Eclipse, JEdit, JBuilder, BlueJ, CodeGuide, NetBeans/Sun Java Studio Enterprise/Creator, IntelliJ IDEA, TextPad, Maven, Ant, Gel, JCreator, and Emacs.

2. FindBugs, from http://findbugs.sourceforge.net
License: LGPL
FindBugs is a program which uses static analysis to look for bugs in Java code. And since this is a project from my alma mater (IEEE – University of Maryland, College Park – Bill Pugh), I definitely have to add this contribution to the list.

3. Clover, from http://www.cenqua.com/clover/
License: Free for open source (more like a GPL)
Clover measures statement, method, and branch coverage, has XML, HTML, and GUI reporting, and offers comprehensive plug-ins for major IDEs. Its goals:
- Improve test quality
- Increase testing productivity
- Keep the team on track
Fully integrated plugins for NetBeans, Eclipse, IntelliJ IDEA, JBuilder and JDeveloper allow you to measure and inspect coverage results without leaving the IDE. There is seamless integration with projects using Apache Ant and Maven, and easy integration into legacy build systems with a command line interface and API. It provides fast, accurate, configurable, detailed coverage reporting of method, statement, and branch coverage; rich reporting in HTML, PDF, XML or a Swing GUI; precise control over the coverage gathering with source-level filtering; and historical charting of code coverage and other metrics. It is fully compatible with JUnit 3.x & 4.x, TestNG, JTiger and other testing frameworks, and can also be used with manual, functional or integration testing.

4. Macker, from http://innig.net/macker/
License: GPL
Macker is a build-time architectural rule checking utility for Java developers. It's meant to model the architectural ideals programmers always dream up for their projects, and then break – it helps keep code clean and consistent. You can tailor a rules file to suit a specific project's structure, or write some general "good practice" rules for your code. Macker doesn't try to shove anybody else's rules down your throat; it's flexible, and writing a rules file is part of the development process for each unique project.

5. EMMA, from http://emma.sourceforge.net/
License: EMMA is distributed under the terms of the Common Public License v1.0 and is thus free for both open-source and commercial development.
EMMA reports on class, method, basic block, and line coverage (text, HTML, and XML). It can instrument classes for coverage either offline (before they are loaded) or on the fly (using an instrumenting application classloader). Supported coverage types: class, method, line, basic block. EMMA can detect when a single source code line is covered only partially. Coverage stats are aggregated at method, class, package, and "all classes" levels. Output report types: plain text, HTML, XML. All report types support drill-down, to a user-controlled detail depth, and the HTML report supports source code linking. Output reports can highlight items with coverage levels below user-provided thresholds. Coverage data obtained in different instrumentation or test runs can be merged together. EMMA does not require access to the source code and degrades gracefully with a decreasing amount of debug information available in the input classes. EMMA can instrument individual .class files or entire .jars (in place, if desired); efficient coverage subset filtering is possible, too. Makefile and Ant build integration are supported on an equal footing. EMMA is quite fast: the runtime overhead of added instrumentation is small (5-20%) and the bytecode instrumentor itself is very fast (mostly limited by file I/O speed). Memory overhead is a few hundred bytes per Java class. EMMA is 100% pure Java, has no external library dependencies, and works in any Java 2 JVM (even 1.2.x).

6. XRadar, from http://xradar.sourceforge.net/
License: BSD (me thinks)
XRadar is an open, extensible code report tool currently supporting all Java-based systems. The batch-processing framework produces HTML/SVG reports of the system's current state and its development over time – all presented in sexy tables and graphs. XRadar gives measurements on standard software metrics such as package metrics and dependencies, code size and complexity, code duplications, coding violations and code-style violations.

7. Hammurapi, from the Hammurapi Group
License: (if anyone knows the license for this, email me: Venkatt.Guhesan at Y! dot com)
Hammurapi is a tool for executing automated inspection of Java program code. Following the example of the 282 rules of Hammurabi's code, we are offered over 120 Java classes, the so-called inspectors, which can, at three levels (source code, packages, repository of Java files), state whether the analysed source code contains violations of commonly accepted coding standards.
Relevant links:
- http://en.sdjournal.org/products/articleInfo/93
- http://wiki.hammurapi.biz/index.php?title=Hammurapi_4_Quick_Start

8. Relief, from http://www.workingfrog.org/
License: GPL
Relief is a design tool providing a new look at Java projects. Relying on our ability to deal with real objects by examining their shape, size or relative place in space, it gives a "physical" view of Java packages, types and fields and their relationships, making them easier to handle.

9. Hudson, from http://hudson-ci.org/
License: MIT
Hudson is a continuous integration (CI) tool written in Java, which runs in a servlet container such as Apache Tomcat or the GlassFish application server. It supports SCM tools including CVS, Subversion, Git and ClearCase, and can execute Apache Ant and Apache Maven based projects, as well as arbitrary shell scripts and Windows batch commands.

10. Cobertura, from http://cobertura.sourceforge.net/
License: GNU GPL
Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.

11. Sonar, from http://www.sonarsource.org/ (recommended by Vishwanath Krishnamurthi – thanks)
License: LGPL
Sonar is an open platform to manage code quality. As such, it covers the seven axes of code quality: architecture & design, duplications, unit tests, complexity, potential bugs, coding rules, and comments.

From http://mythinkpond.wordpress.com/2011/07/14/java-tools-for-source-code-optimization-and-analysis/
July 29, 2011
by Venkatt Guhesan
· 64,194 Views
article thumbnail
Java: What is the Limit to the Number of Threads You Can Create?
I have seen a number of tests where a JVM has 10K threads. However, what happens if you go beyond this? My recommendation is to consider having more servers once your total reaches 10K. You can get a decent server for $2K and a powerful one for $10K.

Creating Threads Gets Slower

The time it takes to create a thread increases as you create more threads. For the 32-bit JVM, the stack size appears to limit the number of threads you can create. This may be due to the limited address space. In any case, the memory used by each thread's stack adds up. If you have a stack of 128KB and you have 20K threads, it will use 2.5 GB of virtual memory.

    Bitness   Stack Size   Max threads
    32-bit    64K          32,073
    32-bit    128K         20,549
    32-bit    256K         11,216
    64-bit    64K          stack too small
    64-bit    128K         32,072
    64-bit    512K         32,072

Note: in the last case, the thread stacks total 16 GB of virtual memory.

Java 6 update 26 32-bit, -XX:ThreadStackSize=64

    4,000 threads: Time to create 4,000 threads was 0.522 seconds
    8,000 threads: Time to create 4,000 threads was 1.281 seconds
    12,000 threads: Time to create 4,000 threads was 1.874 seconds
    16,000 threads: Time to create 4,000 threads was 2.725 seconds
    20,000 threads: Time to create 4,000 threads was 3.333 seconds
    24,000 threads: Time to create 4,000 threads was 4.151 seconds
    28,000 threads: Time to create 4,000 threads was 5.293 seconds
    32,000 threads: Time to create 4,000 threads was 6.636 seconds

    After creating 32,073 threads, java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
        at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 32-bit, -XX:ThreadStackSize=128

    4,000 threads: Time to create 4,000 threads was 0.525 seconds
    8,000 threads: Time to create 4,000 threads was 1.239 seconds
    12,000 threads: Time to create 4,000 threads was 1.902 seconds
    16,000 threads: Time to create 4,000 threads was 2.529 seconds
    20,000 threads: Time to create 4,000 threads was 3.165 seconds

    After creating 20,549 threads, java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
        at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 32-bit, -XX:ThreadStackSize=256

    4,000 threads: Time to create 4,000 threads was 0.526 seconds
    8,000 threads: Time to create 4,000 threads was 1.212 seconds

    After creating 11,216 threads, java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
        at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 64-bit, -XX:ThreadStackSize=128

    4,000 threads: Time to create 4,000 threads was 0.577 seconds
    8,000 threads: Time to create 4,000 threads was 1.292 seconds
    12,000 threads: Time to create 4,000 threads was 1.995 seconds
    16,000 threads: Time to create 4,000 threads was 2.653 seconds
    20,000 threads: Time to create 4,000 threads was 3.456 seconds
    24,000 threads: Time to create 4,000 threads was 4.663 seconds
    28,000 threads: Time to create 4,000 threads was 5.818 seconds
    32,000 threads: Time to create 4,000 threads was 6.792 seconds
    After creating 32,072 threads, java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
        at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 64-bit, -XX:ThreadStackSize=512

    4,000 threads: Time to create 4,000 threads was 0.577 seconds
    8,000 threads: Time to create 4,000 threads was 1.292 seconds
    12,000 threads: Time to create 4,000 threads was 1.995 seconds
    16,000 threads: Time to create 4,000 threads was 2.653 seconds
    20,000 threads: Time to create 4,000 threads was 3.456 seconds
    24,000 threads: Time to create 4,000 threads was 4.663 seconds
    28,000 threads: Time to create 4,000 threads was 5.818 seconds
    32,000 threads: Time to create 4,000 threads was 6.792 seconds

    After creating 32,072 threads, java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
        at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

The Code: MaxThreadsMain.java

From http://vanillajava.blogspot.com/2011/07/java-what-is-limit-to-number-of-threads.html
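The original MaxThreadsMain source is not reproduced above; the following is only a minimal reconstruction of what such a test harness can look like, matching the output format of the logs:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;

    public class MaxThreadsSketch {
        public static void main(String[] args) {
            final CountDownLatch forever = new CountDownLatch(1);
            List<Thread> threads = new ArrayList<Thread>();
            long start = System.nanoTime();
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                forever.await(); // park the thread, keeping its stack alive
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                    t.setDaemon(true);
                    t.start();
                    threads.add(t);
                    if (threads.size() % 4000 == 0) {
                        long time = System.nanoTime() - start;
                        System.out.printf("%,d threads: Time to create 4,000 threads was %.3f seconds%n",
                                threads.size(), time / 1e9);
                        start = System.nanoTime();
                    }
                }
            } catch (OutOfMemoryError e) {
                System.out.println("After creating " + threads.size() + " threads, " + e);
            }
        }
    }

Run it with the -XX:ThreadStackSize settings from the tables above to reproduce the same kind of limits on your own hardware.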
July 26, 2011
by Peter Lawrey
· 147,875 Views · 1 Like
article thumbnail
Five reasons why you should rejoice about Kotlin
As you probably saw by now, JetBrains just announced that they are working on a brand new statically typed JVM language called Kotlin. I am planning to write a post to evaluate how Kotlin compares to the other existing languages, but first, I'd like to take a slightly different angle and try to answer a question I have already seen asked several times: what's the point? We already have quite a few JVM languages; do we need any more? Here are a few reasons that come to mind.

1) Coolness

New languages are exciting! They really are. I'm always looking forward to learning new languages. The more foreign they are, the more curious I am, but the languages I really look forward to discovering are the ones that are close to what I already know but not identical, just to find out what they did differently that I didn't think of.

Allow me to make a small digression in order to clarify my point. Some time ago, I started learning Japanese, and it turned out to be the hardest and, more importantly, the most foreign natural language I ever studied. Everything is different from what I'm used to in Japanese. It's not just that the grammar, the syntax and the alphabets are odd, it's that they came up with things that I didn't even think would make any sense. For example, in English (and many other languages), numbers are pretty straightforward and unique: one bag, two cars, three tickets, etc. Now, did it ever occur to you that a language could allow several words to mean "one", and "two", and "three", etc.? And that these words are actually not arbitrary – their usage follows some very specific rules? What could these rules be? Well, in Japanese, what governs the word that you pick is... the shape of the object that you are counting. That's right, you will use a different way to count if the object is long, flat, a liquid or a building. Mind-boggling, isn't it?

Here is another quick example: in Russian, each verb exists in two different forms, which I'll call A and B to simplify. Russian doesn't have a future tense, so when you want to speak in the present tense, you'll conjugate verb A in the present, and when you want the future, you will use the B form... in the present tense. It's not just that you need to learn two verbs per verb, you also need to know which one is which if you want to get your tenses right. Oh, and these forms also have different meanings when you conjugate them in past tenses. End of digression.

The reason why I am mentioning this is that this kind of construct bends your mind, and this goes for natural languages as much as programming languages. It's tremendously exciting to read new syntaxes this way. For that reason alone, the arrival of new languages should be applauded and welcomed. Kotlin comes with a few interesting syntactic innovations of its own, which I'll try to cover in a separate post, but for now, I'd like to come back to my original point, which was to give you reasons why you should be excited about Kotlin, so let's keep going down the list.

2) IDE support

None of the existing JVM languages (Groovy, Scala, Fantom, Gosu, Ceylon) have really focused much on the tooling aspect. IDE plug-ins exist for each of them, all of varying quality, but they are all an afterthought, and they suffer from this oversight. The plug-ins are very slow to mature, and they have to keep up with the internals of a compiler that's always evolving and which, very often, doesn't have much regard for the tools built on top of it.
It's a painful and frustrating process for tool creators and tool users alike. With Kotlin, we have good reasons to think that the IDE support will be top notch. JetBrains is basically announcing that they are building the compiler and the IDEA support in lockstep, which is a great way to guarantee that the language will work tremendously well inside IDEA, but also that other tools should integrate nicely with it as well (I'm rooting for a speedy Eclipse plug-in, obviously).

3) Reified generics

This is a pretty big deal. Not so much for the functionality (I'll get back to this in the next paragraph) but because this is probably the very first time that we see a JVM language with true support for reified generics. This innovation needs to be saluted. (Correction from the comments: Gosu has reified generics.) Having said that, I don't feel extremely excited by this feature because, overall, I think that reified generics come at too high a price. I'll try to dedicate a full blog post to this topic alone, because it deserves a more thorough treatment.

4) Commercial support

JetBrains has a very clear financial interest in seeing Kotlin succeed. It's not just that they are a commercial entity that can put money behind the development of the language; it's also that the success of the language would most likely mean they will sell more IDEA licenses, and this can also turn into an additional revenue stream derived from whatever other tools they might come up with that would be part of the Kotlin ecosystem. This kind of commercial support for a language was completely unheard of in the JVM world for fifteen years, and suddenly we have two instances of it (Typesafe and now JetBrains). This is a good sign for the JVM community.

5) Still no Java successor

Finally, the simple truth is that we still haven't found any credible Java successor. Java still reigns supreme and is showing no sign of giving away any mindshare. Pulling numbers out of thin air, I would say that of all the code currently running on the JVM today, maybe 94% of it is in Java, 3% is in Groovy and 1% is in Scala. This 94% figure needs to go down, but so far, no language has stepped up to whittle it down significantly. What will it take? Obviously, nothing that the current candidates are offering (closures, modularity, functional features, more concise syntax, etc.) has been enough to move that needle. Something is still missing. Could the missing piece be "stellar IDE support" or "reified generics"? We don't know yet, because no contender offers any of these features, but Kotlin will, so we will soon know.

Either way, I am predicting that we will keep seeing new JVM languages pop up at a regular pace until one finally claims the prize. And this should be cause for rejoicing for everyone interested in the JVM ecosystem. So let's cheer for Kotlin and wish JetBrains the best of luck with their endeavor. I can't wait to see what will come out of this. Oh, and secretly, I am rooting for Eclipse to start working on their own JVM language too, obviously.

From http://beust.com/weblog/2011/07/20/five-reasons-why-should-rejoice-about-kotlin/
July 22, 2011
by Cedric Beust
· 8,257 Views
article thumbnail
Java Collection Performance
Learn more about Java collection performance in this post.
July 18, 2011
by Leo Lewis
· 122,292 Views · 10 Likes