The Latest Java Topics

Using Maven to Generate Wrapped or Non-Wrapped SOAP Bindings
For a given WSDL, there are several different ways to generate Java web service code (CXF, Axis2, etc..). And depending on certain settings within the WSDL file and settings used by the relevant build tool, there are different ways of exposing those services described in the WSDL. This post will briefly document the generating of Java code for a WSDL using Maven and the jaxws wsimport plugin. It will also show the difference in the services exposed when using wrapped and non-wrapped bindings. Below is an extract from a pom.xml to generate the Java code: org.codehaus.mojo jaxws-maven-plugin 1.10 wsimport City81SOAPService.wsdl ${basedir}/src/wsdl generate-sources generate-sources ......... ${project.build.directory}/generated-sources/jaxws-wsimport true true true ${basedir}/src/jax-ws-catalog.xml For the below WSDL file, the wsimport plugin will generate the following classes: com\city81\soap\Balance.java com\city81\soap\City81SOAP.java com\city81\soap\City81SOAPImplService.java com\city81\soap\CreateCustomer.java com\city81\soap\CreateCustomerResponse.java com\city81\soap\CreateCustomerResponseType.java com\city81\soap\CreateStatus.java com\city81\soap\ObjectFactory.java com\city81\soap\package-info.java For the above settings, the generated City81SOAP class will be as below: @WebService(name = "City81SOAP", targetNamespace = "http://soap.city81.com/") @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @XmlSeeAlso({ ObjectFactory.class }) public interface City81SOAP { @WebMethod(action = "http://soap.city81.com/createCustomer") @WebResult(name = "createCustomerResponse", targetNamespace = "http://soap.city81.com/", partName = "params") public CreateCustomerResponse createCustomer(@WebParam(name = "createCustomer", targetNamespace = "http://soap.city81.com/", partName = "params") CreateCustomer params); } The binding style as can be seen from the @SOAPBinding annotation at the head of the class is BARE ie non-wrapped. The method's args and return parameters are in each case represented as a single Java object. CreateCustomer and CreateCustomerResponse. This has happened because in the pom.xml file, there is a bindingDirectory tag which points to a folder containing a binding.xml file. This file, shown below, has an enableWrapperStyle tag and the boolean value of false. false If the boolean was true, or if there was no bindingDirectory tag in the pom.xml file, then the default SOAP binding style would be used ie WRAPPED. 
This would then result in the below generated City81SOAP class: @WebService(name = "City81SOAP", targetNamespace = "http://soap.city81.com/") @XmlSeeAlso({ ObjectFactory.class }) public interface City81SOAP { @WebMethod(action = "http://soap.city81.com/createCustomer") @RequestWrapper(localName = "createCustomer", targetNamespace = "http://soap.city81.com/", className = "com.city81.soap.CreateCustomer") @ResponseWrapper(localName = "createCustomerResponse", targetNamespace = "http://soap.city81.com/", className = "com.city81.soap.CreateCustomerResponse") public void createCustomer( @WebParam(name = "surname", targetNamespace = "") String surname, @WebParam(name = "firstName", targetNamespace = "") String firstName, @WebParam(name = "balance", targetNamespace = "") Balance balance, @WebParam(name = "customerId", targetNamespace = "", mode = WebParam.Mode.OUT) Holder customerId, @WebParam(name = "status", targetNamespace = "", mode = WebParam.Mode.OUT) Holder status); } The method's args are now individual Java objects and the return parameters are each represented as Holder objects with a WebParam.Mode.OUT value denoting they are return objects. This means that return objects are set as opposed to actually being returned in the method's signature. Another way to specify bindings other than using the binding.xml file is to embed the enableWrapperStyle as a child of the portType but if a WSDL is from a third party, then having to change it every time a new version of the WSDL is released is open to errors. false ... Back to the generated interfaces, and these of course need to be implemented. For an interface with a binding type of BARE, the implemented class would look like below: @WebService(targetNamespace = "http://soap.city81.com/", name = "City81SOAP", portName = "City81SOAPImplPort", serviceName = "City81SOAPImplService") @SOAPBinding(style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL, parameterStyle = SOAPBinding.ParameterStyle.BARE) public class City81SOAPImpl implements City81SOAP { @Override public CreateCustomerResponse createCustomer(CreateCustomer createCustomer) { CreateCustomerResponse createCustomerResponse = new CreateCustomerResponse(); ..... return createCustomerResponse; } } In the case of WRAPPED binding style, the SOAPBinding annotation would include parameterStyle = SOAPBinding.ParameterStyle.WRAPPED and the createCustomer method would be as below: public void createCustomer( String surname, String firstName, Balance balance, Holder customerId, Holder status) { customerId= new Holder("1"); status = new Holder(CreateStatus.CREATE_PENDING); } This post shows that there are different ways to ultimately achieve the same result.
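To make the practical difference concrete, below is a hypothetical client-side sketch (not part of the original post) against the WRAPPED variant of the generated City81SOAP interface; the City81SOAPClientSketch class name and the parameter values are illustrative. With the BARE variant the equivalent call is simply a single request object in and a single response object out.

import javax.xml.ws.Holder;

public class City81SOAPClientSketch {

    // BARE (non-wrapped) interface:
    //   CreateCustomerResponse response = port.createCustomer(request);
    //
    // WRAPPED interface: individual parameters go in, and the OUT parts of the
    // response come back through Holder objects set by the call.
    static String createCustomer(City81SOAP port, Balance balance) {
        Holder<String> customerId = new Holder<String>();
        Holder<CreateStatus> status = new Holder<CreateStatus>();
        port.createCustomer("Jones", "Geraint", balance, customerId, status);
        return customerId.value; // populated by the service, not returned by the method
    }
}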
October 1, 2012
by Geraint Jones
· 35,121 Views
Customizing Spring Data JPA Repository
Spring Data is a very convenient library. However, as the project as quite new, it is not well featured. By default, Spring Data JPA will provide implementation of the DAO based on SimpleJpaRepository. In recent project, I have developed a customize repository base class so that I could add more features on it. You could add vendor specific features to this repository base class as you like. Configuration You have to add the following configuration to you spring beans configuration file. You have to specified a new repository factory class. We will develop the class later. extends SimpleJpaRepository implements GenericRepository , Serializable{ private static final long serialVersionUID = 1L; static Logger logger = Logger.getLogger(GenericRepositoryImpl.class); private final JpaEntityInformation entityInformation; private final EntityManager em; private final DefaultPersistenceProvider provider; private Class springDataRepositoryInterface; public Class getSpringDataRepositoryInterface() { return springDataRepositoryInterface; } public void setSpringDataRepositoryInterface( Class springDataRepositoryInterface) { this.springDataRepositoryInterface = springDataRepositoryInterface; } /** * Creates a new {@link SimpleJpaRepository} to manage objects of the given * {@link JpaEntityInformation}. * * @param entityInformation * @param entityManager */ public GenericRepositoryImpl (JpaEntityInformation entityInformation, EntityManager entityManager , Class springDataRepositoryInterface) { super(entityInformation, entityManager); this.entityInformation = entityInformation; this.em = entityManager; this.provider = DefaultPersistenceProvider.fromEntityManager(entityManager); this.springDataRepositoryInterface = springDataRepositoryInterface; } /** * Creates a new {@link SimpleJpaRepository} to manage objects of the given * domain type. * * @param domainClass * @param em */ public GenericRepositoryImpl(Class domainClass, EntityManager em) { this(JpaEntityInformationSupport.getMetadata(domainClass, em), em, null); } public S save(S entity) { if (this.entityInformation.isNew(entity)) { this.em.persist(entity); flush(); return entity; } entity = this.em.merge(entity); flush(); return entity; } public T saveWithoutFlush(T entity) { return super.save(entity); } public List saveWithoutFlush(Iterable entities) { List result = new ArrayList(); if (entities == null) { return result; } for (T entity : entities) { result.add(saveWithoutFlush(entity)); } return result; } } As a simple example here, I just override the default save method of the SimpleJPARepository. The default behaviour of the save method will not flush after persist. I modified to make it flush after persist. On the other hand, I add another method called saveWithoutFlush() to allow developer to call save the entity without flush. Define Custom repository factory bean The last step is to create a factory bean class and factory class to produce repository based on your customized base repository class. public class DefaultRepositoryFactoryBean , S, ID extends Serializable> extends JpaRepositoryFactoryBean { /** * Returns a {@link RepositoryFactorySupport}. * * @param entityManager * @return */ protected RepositoryFactorySupport createRepositoryFactory( EntityManager entityManager) { return new DefaultRepositoryFactory(entityManager); } } /** * * The purpose of this class is to override the default behaviour of the spring JpaRepositoryFactory class. * It will produce a GenericRepositoryImpl object instead of SimpleJpaRepository. 
* */ public class DefaultRepositoryFactory extends JpaRepositoryFactory{ private final EntityManager entityManager; private final QueryExtractor extractor; public DefaultRepositoryFactory(EntityManager entityManager) { super(entityManager); Assert.notNull(entityManager); this.entityManager = entityManager; this.extractor = DefaultPersistenceProvider.fromEntityManager(entityManager); } @SuppressWarnings({ "unchecked", "rawtypes" }) protected JpaRepository getTargetRepository( RepositoryMetadata metadata, EntityManager entityManager) { Class repositoryInterface = metadata.getRepositoryInterface(); JpaEntityInformation entityInformation = getEntityInformation(metadata.getDomainType()); if (isQueryDslExecutor(repositoryInterface)) { return new QueryDslJpaRepository(entityInformation, entityManager); } else { return new GenericRepositoryImpl(entityInformation, entityManager, repositoryInterface); //custom implementation } } @Override protected Class getRepositoryBaseClass(RepositoryMetadata metadata) { if (isQueryDslExecutor(metadata.getRepositoryInterface())) { return QueryDslJpaRepository.class; } else { return GenericRepositoryImpl.class; } } /** * Returns whether the given repository interface requires a QueryDsl * specific implementation to be chosen. * * @param repositoryInterface * @return */ private boolean isQueryDslExecutor(Class repositoryInterface) { return QUERY_DSL_PRESENT && QueryDslPredicateExecutor.class .isAssignableFrom(repositoryInterface); } } Conclusion You could now add more features to base repository class. In your program, you could now create your own repository interface extending GenericRepository instead of JpaRepository. public interface MyRepository extends GenericRepository { void someCustomMethod(ID id); } In next post, I will show you how to add hibernate filter features to this GenericRepository.
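As a usage illustration, here is a minimal sketch (hypothetical Customer entity and service, separate files in practice) of a repository built on the GenericRepository described above, assuming its type parameters are the entity type and its Serializable id:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

public interface CustomerRepository extends GenericRepository<Customer, Long> {
}

@Service
public class CustomerService {

    @Autowired
    private CustomerRepository customerRepository;

    @Transactional
    public void importCustomers(List<Customer> customers) {
        // Batch insert without a flush after every persist.
        customerRepository.saveWithoutFlush(customers);
    }

    @Transactional
    public Customer register(Customer customer) {
        // The overridden save() flushes immediately after persist/merge.
        return customerRepository.save(customer);
    }
}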
September 27, 2012
by Boris Lam
· 97,675 Views · 4 Likes
Caching with Guava
Guava's cache module provides flexible and powerful in-memory caching behind a simple API.
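Since the summary above is brief, here is a minimal sketch of the kind of cache Guava provides, using the CacheBuilder/CacheLoader API; the size, expiry, and loader body are placeholder choices.

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class GuavaCacheSketch {

    public static void main(String[] args) {
        // Build a loading cache: bounded size, entries expire 10 minutes after write.
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // Hypothetical loader; in practice this would hit a database or service.
                        return "value-for-" + key;
                    }
                });

        // The first call loads and caches the value; the second is served from the cache.
        System.out.println(cache.getUnchecked("answer"));
        System.out.println(cache.getUnchecked("answer"));
    }
}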
September 27, 2012
by Yusuf Aytaş
· 69,207 Views · 11 Likes
Fixing Common Java Security Code Violations in Sonar
This article aims to show you how to quickly fix the most common java security code violations. It assumes that you are familiar with the concept of code rules and violations and how Sonar reports on them. However, if you haven’t heard these terms before then you might take a look at Sonar Concepts or the forthcoming book about Sonar for a more detailed explanation. To get an idea, during Sonar analysis, your project is scanned by many tools to ensure that the source code conforms with the rules you’ve created in your quality profile. Whenever a rule is violated… well a violation is raised. With Sonar you can track these violations with violations drilldown view or in the source code editor. There are hundreds of rules, categorized based on their importance. Ill try, in future posts, to cover as many as I can but for now let’s take a look at some common security rules / violations. There are two pairs of rules (all of them are ranked as critical in Sonar ) we are going to examine right now. 1. Array is Stored Directly ( PMD ) and Method returns internal array ( PMD ) These violations appear in the cases when an internal Array is stored or returned directly from a method. The following example illustrates a simple class that violates these rules. public class CalendarYear { private String[] months; public String[] getMonths() { return months; } public void setMonths(String[] months) { this.months = months; } } To eliminate them you have to clone the Array before storing / returning it as shown in the following class implementation, so noone can modify or get the original data of your class but only a copy of them. public class CalendarYear { private String[] months; public String[] getMonths() { return months.clone(); } public void setMonths(String[] months) { this.months = months.clone(); } } 2. Nonconstant string passed to execute method on an SQL statement (findbugs) and A prepared statement is generated from a nonconstant String (findbugs) Both rules are related to database access when using JDBC libraries. Generally there are two ways to execute an SQL Commants via JDBC connection : Statement and PreparedStatement. There is a lot of discussion about pros and cons but it’s out of the scope of this post. Let’s see how the first violation is raised based on the following source code snippet. Statement stmt = conn.createStatement(); String sqlCommand = "Select * FROM customers WHERE name = '" + custName + "'"; stmt.execute(sqlCommand); You’ve already noticed that the sqlcommand parameter passed to execute method is dynamically created during run-time which is not acceptable by this rule. Similar situations causes the second violation. String sqlCommand = "insert into customers (id, name) values (?, ?)"; Statement stmt = conn.prepareStatement(sqlCommand); You can overcome this problems with three different ways. You can either use StringBuilder or String.format method to create the values of the string variables. If applicable you can define the SQL Commands as Constant in class declaration, but it’s only for the case where the SQL command is not required to be changed in runtime. Let’s re-write the first code snippet using StringBuilder Statement stmt = conn.createStatement(); stmt.execute(new StringBuilder("Select FROM customers WHERE name = '"). append(custName). 
append("'").toString()); and using String.format: Statement stmt = conn.createStatement(); String sqlCommand = String.format("Select * from customers where name = '%s'", custName); stmt.execute(sqlCommand); For the second example, you can simply declare the sqlCommand as a constant: private static final String SQLCOMMAND = "insert into customers (id, name) values (?, ?)"; There are more security rules, such as the blocker Hardcoded constant database password, but I assume that nobody still hardcodes passwords in source code files… In the following articles I'm going to show you how to adhere to performance and bad practice rules. Until then I'm waiting for your comments or suggestions.
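A minimal sketch of the constant-plus-PreparedStatement approach that satisfies both FindBugs rules; the CustomerDao class, column names, and types are assumptions for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerDao {

    private static final String SELECT_BY_NAME = "SELECT * FROM customers WHERE name = ?";
    private static final String INSERT_CUSTOMER = "INSERT INTO customers (id, name) VALUES (?, ?)";

    private final Connection conn;

    public CustomerDao(Connection conn) {
        this.conn = conn;
    }

    public boolean customerExists(String custName) throws SQLException {
        // Constant SQL plus bind parameters: no runtime concatenation, no violation.
        try (PreparedStatement stmt = conn.prepareStatement(SELECT_BY_NAME)) {
            stmt.setString(1, custName);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }

    public void insert(long id, String name) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(INSERT_CUSTOMER)) {
            stmt.setLong(1, id);
            stmt.setString(2, name);
            stmt.executeUpdate();
        }
    }
}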
September 26, 2012
by Patroklos Papapetrou
· 26,732 Views
Choosing Static vs. Dynamic Languages for Your Startup
Everyone is thinking why in the world would anyone pick static, when you can be dynamic? Usually the thought process is, "what language am I most proficient in, that can do the job." Totally not a bad way to go about it. Now does this choice affect anything else? Testing? Speed of development? Robustness? Dynamic vs. Static Dynamic languages are languages that don’t necessarily need variables to be declared before they are used. Examples of dynamic languages are Python, Ruby, and PHP. So in dynamic languages the following is possible: num = 10 We have successfully assigned a value to variable without declaring it before hand. Simple enough, try doing this in Java (you can’t). This can *increase* development speed, without having to write boilerplate code. This can somewhat be a double edge sword, since dynamic languages types are checked during runtime, there is no way to tell if there is a bug in code until it is run. I know you can test, but you can’t test for everything. You can’t test for everything. Here is an example albeit trivial. def get_first_problem(problems): for problem in problems: problam = problem + 1 return problam Now if you are raging to some serious dubstep, its easy enough to miss that small typo, you go screw it and do it live, and deploy to production. Python will simply create the new variable and not a single thing will be said. Only you can stop bugs in production! Static languages are languages that variables need to be declared before use and type checking is done at compile time. Examples of static languages include Java, C, and C++. So in static languages the following is enforced static int awesomeNumber; awesomeNumber = 10; Many argue this increases robustness as well as decrease chances of Runtime Errors. Since the compiler will catch those horrible horrible mistakes you made throughout your code. Your methods contracts are tighter, downside to this is crap ton of boilerplate code. Weak and Strong Typing can be often be confused with dynamic and static languages. Weak typed languages can lead to philosophical questions like what does the number 2 added to the word ‘two’ give you? Things like this are possible with a weak typed language. a = 2 b = "2" concatenate(a, b) // Returns "22" add(a, b) // Returns 4 Traditionally languages may place restriction on what transaction may occur for example in a strong typed language adding a string and integer will result in a type error as shown below. >>> a = 10 >>> b = 'ten' >>> a + b Traceback (most recent call last): File "", line 1, in TypeError: unsupported operand type(s) for +: 'int' and 'str' >>> Conclusion Regardless of where you land on this discussion, claiming one is better than the other would lead to flame war, but there are places where each is strong. Dynamic languages are good for fast quick development cycles and prototyping, while static languages are better suited to longer development cycles where trivial bugs could be extremely costly (telecommunication systems, air traffic control). For example if some giant company called Moo Corp. spent millions of dollars on QA and Testing and a bug somehow gets into the field, to fix it would mean another round of testing. When sitting in that chair the choice is clear static languages FTW, its a hard job but someone has to milk the cows. Test, test, and test. Just a little food for thought, for when you are starting your next project. You never know what limitations you maybe placing on yourself and your team. 
What do you consider when selecting a programming language for a project?
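As a quick illustration of the compile-time argument above, here is a hypothetical Java counterpart of the Python typo example; the misspelled variable never makes it past the compiler.

public class FirstProblem {

    public static int getFirstProblem(final int[] problems) {
        int result = 0;
        for (int problem : problems) {
            // result = problam + 1;  // does not compile: cannot find symbol 'problam'
            result = problem + 1;
        }
        return result;
    }
}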
September 25, 2012
by Mahdi Yusuf
· 24,560 Views
Introducing the New Date and Time API for JDK 8
Date and time handling in Java is a somewhat tricky part when you are new to the language. Time can be accessed via the static method System.currentTimeMillis() which returns the current time in milliseconds from January 1st 1970. If you prefer to work with Objects instead you can use java.util.Date, a class whose methods are mostly deprecated in recent versions of Java. To work with time offsets, say add one month to a date, there is java.util.GregorianCalendar. All in all, those methods described here are not very convenient to work with. Java 7 and below are lacking a good date and time API. The Joda Time library is a common drop-in if you need to work with date/time. With JSR 310 (Java Specification Request) this is about to change. JSR 310 adds a new date, time and calendar API to Java 8. The ThreeTen project provides a reference implementation to this new API and can already be utilized in current Java projects (I however recommend not to do this for production). As the README states: The API is currently considered usable and accurate, yet incomplete and subject to change. If you use this API you must be able to handle incompatible changes in later versions. Building ThreeTen Building the ThreeTen project is relatively easy. It requires both Git and Ant to be installed on your system. git clone git://github.com/ThreeTen/threeten.git cd threeten ant This will first fetch the most recent version of ThreeTen and then start the build process using ant. Note that building the library also requires either OpenJDK 1.6 or Oracle JDK 1.6. JSR 310 The new API specifies a number of new classes which are divided into the categories of continuous and human time. Continuous time is based on Unix time and is represented as a single incrementing number. Class Description Instant A point in time in nanoseconds from January 1st 1970 Duration An amount of time measured in nanoseconds Human time is based on fields that we use in our daily lifes such as day, hour, minute and second. It is represented by a group of classes, some of which we will discuss in this article. Class Description LocalDate a date, without time of day, offset or zone LocalTime the time of day, without date, offset or zone LocalDateTime the date and time, without offset or zone OffsetDate a date with an offset such as +02:00, without time of day or zone OffsetTime the time of day with an offset such as +02:00, without date or zone OffsetDateTime the date and time with an offset such as +02:00, without a zone ZonedDateTime the date and time with a time zone and offset YearMonth a year and month MonthDay month and day Year/MonthOfDay/DayOfWeek/... classes for the important fields DateTimeFields stores a map of field-value pairs which may be invalid Calendrical access to the low-level API Period a descriptive amount of time, such as "2 months and 3 days" In addition to the above classes three support classes have been implemented. The Clock class wraps the current time and date, ZoneOffset is a time offset from UTC and ZoneId defines a time zone such as 'Australia/Brisbane'. Using the API Getting the current time The current time is represented by the Clock class. The class is abstract, so you can not create instances of it. The systemUTC() static method will return the current time based on your system clock and set to UTC. import javax.time.Clock; Clock clock = Clock.systemUTC(); To use the default time zone on your system there also is systemDefaultZone(). 
Clock clock = Clock.systemDefaultZone(); The millis() method can then be used to access the current time in milliseconds from January 1st, 1970. This shows, that the Clock class and all subclasses are wrapped around System.currentTimeMillis(). Clock clock = Clock.systemDefaultZone(); long time = clock.millis(); Working with time zones To work with time zones you need to import the ZoneId class. The class provides a method to get the default system time zone: import javax.time.ZoneId; import javax.time.Clock; ZoneId zone = ZoneId.systemDefault(); Clock clock = Clock.system(zone); As seen above, the ZoneId can then be used to get an instance of a Clock with that time zone. Other time zones can be accessed by their name, e.g.: ZoneId zone = ZoneId.of("Europe/Berlin"); Clock clock = Clock.system(zone); Getting human date and time Working with a time represented in a single long variable is not what we wanted. We want to work with objects that represent human readable time. The LocalDate, LocalTime and LocalDateTime classes do just that. import javax.time.LocalDate; // The now() method returns the current DateTime LocalDate date = LocalDate.now(); System.out.printf("%s-%s-%s", date.getYear(), date.getMonthValue(), date.getDayOfMonth() ); Using LocalDate to print the current date Doing calculations with times and dates One of the most important functionalities of JSR-310 is that you can do calculations with dates and times. The API makes it very easy to do that. import javax.time.LocalTime; import javax.time.Period; import static javax.time.calendrical.LocalPeriodUnit.HOURS; Period p = Period.of(5, HOURS); LocalTime time = LocalTime.now(); LocalTime newTime; newTime = time.plus(5, HOURS); // or newTime = time.plusHours(5); // or newTime = time.plus(p); Three ways of adding 5 hours to the current time Each class that represents human time implements the AdjustableDateTime interface. The interface requires the plus and the minus method that take a value and a PeriodUnit as argument. Conclusion This article gave a (very) brief introduction into the new date and time API that will ship with Java 8. The API seems to be very consistent and well thought through and provides many ways to interact with dates and times. Upon release of Java 8 the API will be moved from the javax.time package over to java.time, so there will be no conflict if you start using the current implementation.
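Pulling the snippets above together, a small runnable sketch against the ThreeTen javax.time API might look like the following (under Java 8 proper the imports move to java.time); the printed labels are illustrative.

import javax.time.Clock;
import javax.time.LocalDate;
import javax.time.LocalTime;
import javax.time.ZoneId;

public class ThreeTenSketch {

    public static void main(String[] args) {
        // A clock bound to an explicit time zone.
        Clock berlin = Clock.system(ZoneId.of("Europe/Berlin"));
        System.out.println("millis since epoch: " + berlin.millis());

        // Human-readable date and time.
        LocalDate today = LocalDate.now();
        LocalTime now = LocalTime.now();
        System.out.printf("%s-%s-%s%n", today.getYear(), today.getMonthValue(), today.getDayOfMonth());

        // Simple arithmetic: a meeting five hours from now.
        LocalTime meeting = now.plusHours(5);
        System.out.println("meeting at: " + meeting);
    }
}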
September 25, 2012
by Fabian Becker
· 78,232 Views
Allowing JUnit Tests to Pass Test Case on Failures
Why create a mechanism to expect a test failure? There comes a time when one would want and expect a JUnit @Test case fail. Though this is pretty rare, it happens. I had the need to detect when a JUnit Test fails and then, if expected, to pass instead of fail. The specific case was that I was testing a piece of code that could throw an Assert error inside of a call of the object. The code was written to be an enhancement to the popular new Fest Assertions framework, so in order to test the functionality, one would expect test cases to fail on purpose. A Solution One possible solution is to utilize the functionality provided by a JUnit @Rule in conjunction with a custom marker in the form of an annotation. . Why use a @Rule? @Rule objects provide an AOP-like interface to a test class and each test cases. Rules are reset prior to each test case being run and they expose the workings of the test case in the style of an @Around AspectJ advice would. Required code elements @Rule object to check the status of each @Test case @ExpectedFailure custom marker annotation Test cases proving code works! Optional specific exception to be thrown if annotated test case does not fail NOTE: working code is available on my github page and will soon be in Maven Central. Feel free to Fork the project and submit a pull request Example Usage In this example, the "exception" object is a Fest assertion enhanced ExpectedException (look for my next post to expose this functionality). The expected exception will make assertions and in order to test those, the test case must be marked as @ExpectedFailure public class ExceptionAssertTest { @Rule public ExpectedException exception = ExpectedException.none(); @Rule public ExpectedTestFailureWatcher watcher = ExpectedTestFailureWatcher.instance(); @Test @ExpectedFailure("The matcher should fail becasue exception is not a SimpleException") public void assertSimpleExceptionAssert_exceptionIsOfType() { // expected exception will be of type "SimpleException" exception.instanceOf(SimpleException.class); // throw something other than SimpleException...expect failure throw new RuntimeException("this is an exception"); } } Implementation of Solution Reminder, the latest code is available on my github page. @Rule code (ExpectedTestFailureWatcher.java) import org.junit.rules.TestRule; import org.junit.runner.Description; import org.junit.runners.model.Statement; // YEAH Guava!! 
import static com.google.common.base.Strings.isNullOrEmpty; public class ExpectedTestFailureWatcher implements TestRule { /** * Static factory to an instance of this watcher * * @return New instance of this watcher */ public static ExpectedTestFailureWatcher instance() { return new ExpectedTestFailureWatcher(); } @Override public Statement apply(final Statement base, final Description description) { return new Statement() { @Override public void evaluate() throws Throwable { boolean expectedToFail = description.getAnnotation(ExpectedFailure.class) != null; boolean failed = false; try { // allow test case to execute base.evaluate(); } catch (Throwable exception) { failed = true; if (!expectedToFail) { throw exception; // did not expect to fail and failed...fail } } // placed outside of catch if (expectedToFail && !failed) { throw new ExpectedTestFailureException(getUnFulfilledFailedMessage(description)); } } /** * Extracts detailed message about why test failed * @param description * @return */ private String getUnFulfilledFailedMessage(Description description) { String reason = null; if (description.getAnnotation(ExpectedFailure.class) != null) { reason = description.getAnnotation(ExpectedFailure.class).reason(); } if (isNullOrEmpty(reason)) { reason = "Should have failed but didn't"; } return reason; } }; } } @ExpectedFailure custom annotation (ExpectedFailure.java) import java.lang.annotation.*; /** * Initially this is just a marker annotation to be used by a JUnit4 Test case in conjunction * with ExpectedTestFailure @Rule to indicate that a test is supposed to be failing */ @Documented @Retention(RetentionPolicy.RUNTIME) @Target(value = ElementType.METHOD) public @interface ExpectedFailure { // TODO: enhance by adding specific information about what type of failure expected //Class assertType() default Throwable.class; /** * Text based reason for marking test as ExpectedFailure * @return String */ String reason() default ""; } Custom Exception (Optional, you can easily just throw RuntimeException or existing custom exception) public class ExpectedTestFailureException extends Throwable { public ExpectedTestFailureException(String message) { super(message); } } Can't one exploit the ability to mark a failure as expected? With great power comes great responsibility, it is advised that you do not mark a test as being @ExpectedFailure if you do not understand exactly why the test if failing. It is recommended that this testing method be implemented with care. DO NOT use the @ExpectedFailure annotation as an alternative to @Ignore Possible future enhancements could include ways to specify the specific assertion or the specific message asserted during the test case execution. Known issues In this current state, the @ExpectedFailure annotation can cover up additional assertions and until the future enhancements have been put into place, it is advised to use this methodology wisely.
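For completeness, a minimal test class exercising the rule might look like the sketch below (test names are made up, and the watcher and annotation from this post are assumed to be on the classpath); the first test fails an ordinary assertion on purpose and is reported as passing, while the second behaves normally.

import static org.junit.Assert.assertEquals;

import org.junit.Rule;
import org.junit.Test;

public class ExpectedFailureUsageTest {

    @Rule
    public ExpectedTestFailureWatcher watcher = ExpectedTestFailureWatcher.instance();

    @Test
    @ExpectedFailure(reason = "Deliberately asserts 1 == 2; the watcher turns the failure into a pass")
    public void failingAssertionIsReportedAsPassing() {
        assertEquals(1, 2);
    }

    @Test
    public void ordinaryTestsAreUnaffected() {
        assertEquals(2, 1 + 1);
    }
}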
September 17, 2012
by Mike Ensor
· 36,185 Views
8 Common Code Violations in Java
At work, recently I did a code cleanup of an existing Java project. After that exercise, I could see a common set of code violations that occur again and again in the code. So, I came up with a list of such common violations and shared it with my peers so that an awareness would help to improve the code quality and maintainability. I’m sharing the list here to a bigger audience. The list is not in any particular order and all derived from the rules enforced by code quality tools such as CheckStyle, FindBugs and PMD. Here we go! Format source code and Organize imports in Eclipse: Eclipse provides the option to auto-format the source code and organize the imports (thereby removing unused ones). You can use the following shortcut keys to invoke these functions. Ctrl + Shift + F – Formats the source code. Ctrl + Shift + O – Organizes the imports and removes the unused ones. Instead of you manually invoking these two functions, you can tell Eclipse to auto-format and auto-organize whenever you save a file. To do this, in Eclipse, go to Window -> Preferences -> Java -> Editor -> Save Actions and then enable Perform the selected actions on save and check Format source code + Organize imports. Avoid multiple returns (exit points) in methods: In your methods, make sure that you have only one exit point. Do not use returns in more than one places in a method body. For example, the below code is NOT RECOMMENDED because it has more then one exit points (return statements). private boolean isEligible(int age){ if(age > 18){ return true; }else{ return false; } } The above code can be rewritten like this (of course, the below code can be still improved, but that’ll be later). private boolean isEligible(int age){ boolean result; if(age > 18){ result = true; }else{ result = false; } return result; } Simplify if-else methods: We write several utility methods that takes a parameter, checks for some conditions and returns a value based on the condition. For example, consider the isEligible method that you just saw in the previous point. private boolean isEligible(int age){ boolean result; if(age > 18){ result = true; }else{ result = false; } return result; } The entire method can be re-written as a single return statement as below. private boolean isEligible(int age){ return age > 18; } Do not create new instances of Boolean, Integer or String: Avoid creating new instances of Boolean, Integer, String etc. For example, instead of using new Boolean(true), use Boolean.valueOf(true). The later statement has the same effect of the former one but it has improved performance. Use curly braces around block statements. Never forget to use curly braces around block level statements such as if, for, while. This reduces the ambiguity of your code and avoids the chances of introducing a new bug when you modify the block level statement. NOT RECOMMENDED if(age > 18) return true; else return false; RECOMMENDED if(age > 18){ return true; }else{ return false; } Mark method parameters as final, wherever applicable: Always mark the method parameters as final wherever applicable. If you do so, when you accidentally modify the value of the parameter, you’ll get a compiler warning. Also, it makes the compiler to optimize the byte code in a better way. RECOMMENDED private boolean isEligible(final int age){ ... } Name public static final fields in UPPERCASE: Always name the public static final fields (also known as Constants) in UPPERCASE. This lets you to easily differentiate constant fields from the local variables. 
NOT RECOMMENDED public static final String testAccountNo = "12345678"; RECOMMENDED public static final String TEST_ACCOUNT_NO = "12345678";, Combine multiple if statements into one: Wherever possible, try to combine multiple if statements into single one. For example, the below code; if(age > 18){ if( voted == false){ // eligible to vote. } } can be combined into single if statements, as: if(age > 18 && !voted){ // eligible to vote } switch should have default: Always add a default case for the switch statements. Avoid duplicate string literals, instead create a constant: If you have to use a string in several places, avoid using it as a literal. Instead create a String constant and use it. For example, from the below code, private void someMethod(){ logger.log("My Application" + e); .... .... logger.log("My Application" + f); } The string literal “My Application” can be made as an Constant and used in the code. public static final String MY_APP = "My Application"; private void someMethod(){ logger.log(MY_APP + e); .... .... logger.log(MY_APP + f); } Additional Resources: A collection of Java best practices. List of available Checkstyle checks. List of PMD Rule sets
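A small consolidated sketch (hypothetical class) that applies several of the rules above in one place: single exit point, final parameters, a combined condition, an UPPERCASE constant instead of a repeated literal, and Boolean.valueOf() instead of a new instance.

public class EligibilityChecker {

    // UPPERCASE constant instead of repeating the string literal.
    public static final String MY_APP = "My Application";

    // Single exit point, final parameters, combined condition, braces everywhere.
    private boolean isEligibleToVote(final int age, final boolean voted) {
        return age > 18 && !voted;
    }

    // Boolean.valueOf() reuses cached instances instead of new Boolean(...).
    private Boolean eligibilityFlag(final int age, final boolean voted) {
        return Boolean.valueOf(isEligibleToVote(age, voted));
    }
}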
September 14, 2012
by Veera Sundar
· 45,624 Views · 1 Like
A Better Java Shell Script Wrapper
In many Java projects, you often see wrapper shell script to invoke the java command with its custom application parameters. For example, $ANT_HOME/bin/ant, $GROOVY_HOME/bin/groovy, or even in our TimeMachine Scheduler you will see $TIMEMACHINE_HOME/bin/scheduler.sh. Writing these wrapper script is boring and error prone. Most of the problems come from setting the correct classpath for the application. If you're working on an in-house project for a company, then you can get away with hardcoding paths and your environment vars. But for open source projects, folks have to make the wrapper more flexible and generic. Most of them even provide a .bat version of it. Windows DOS is really a brutal and limited terminal to script away your project need. For this reason, I often encourage others to use Cygwin as much as they can. It at least has a real bash shell to work with. Another common problem with these wrappers is it can quickly get out of hand and have too many duplication of similar scripts liter every where in your project. In this post, I will show you a Java wrapper script that I've written. It's simple to use and very flexible for running just about any Java program. Let's see how it's used first, and then I will print its content at the bottom of the post. Introducing the run-java wrapper script If you take a look at $TIMEMACHINE_HOME/bin/scheduler.sh, you will see that it in turns calls a run-java script that comes in the same directory. DIR=$(dirname $0) SCHEDULER_HOME=$DIR/.. $DIR/run-java -Dscheduler.home="$SCHEDULER_HOME" timemachine.scheduler.tool.SchedulerServer "$@" As you can see, our run-java can take -D options. Not only this, it can also take -cp option as well! What's more is that you can specify these options even after the main class! This makes the run-java re-wrappable by other script, and still be able to add additional system properties and classpath. For examples, the TimeMachine comes with Groovy library, so instead of downloading it's full distribution again, you can simply invoke the groovy like this $TIMEMACHINE_HOME/bin/run-java groovy.ui.GroovyMain test.groovy You can use run-java in any directory you're in, so it's convenient. It will resolve it's own directory and load any jars in the lib directory automatically. Now if you want Groovy to run with more additional jars, you can use the -cp option like this: $TIMEMACHINE_HOME/bin/run-java -cp "$HOME/apps/my-app/lib/*" groovy.ui.GroovyMain test.groovy Often times things will go wrong if you are not careful with Java classpath, but with run-java script you can perform a dry run first: RUN_JAVA_DRY=1 $TIMEMACHINE_HOME/bin/run-java -cp "$HOME/apps/my-app/lib/*" groovy.ui.GroovyMain test.groovy You would run the above all in single line on a command prompt. It should print out your full java command with all options and arguments for you to inspect. There are many more options to the script, which you can find out more by reading the comments in it. The current script will work on any Linux bash or on a Windows Cygwin terminal. Using run-java during development with Maven Above examples are assuming you are in a released project structure such as this $TIMEMACHINE_HOME +- bin/run-java +- lib/*.jar But what about during development? A frequent use case is that you want to be able to run your latest compiled classes under target/classes without have to package up or release the entire project. You can use our run-java in these scenario as well. 
First, simply add bin/run-java in your project, then you run mvn compile dependency:copy-dependencies that will generate all the jar files into target/dependency. That's all. The run-java will automatically detect these directories and create the correct classpath to run your main class. If you use Eclipse IDE for development, then your target/classes will be always up-to-date, and the run-java can be a great gem to have in your project even for development. Get the run-java wrapper script now #!/usr/bin/env bash # # Copyright 2012 Zemian Deng # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # A wrapper script that run any Java6 application in unix/cygwin env. # # This script is assumed to be located in an application's "bin" directory. It will # auto resolve any symbolic link and always run in relative to this application # directory (which is one parent up from the script.) Therefore, this script can be # run any where in the file system and it will still reference this application # directory. # # This script will by default auto setup a Java classpath that picks up any "config" # and "lib" directories under the application directory. It also will also add a # any typical Maven project output directories such as "target/test-classes", # "target/classes", and "target/dependency" into classpath. This can be disable by # setting RUN_JAVA_NO_PARSE=1. # # If the "Default parameters" section bellow doesn't match to user's env, then user # may override these variables in their terminal session or preset them in shell's # profile startup script. The values of all path should be in cygwin/unix path, # and this script will auto convert them into Windows path where is needed. # # User may customize the Java classpath by setting RUN_JAVA_CP, which will prefix to existing # classpath, or use the "-cp" option, which will postfix to existing classpath. # # Usage: # run-java [java_opts] [-cp /more/classpath] [-Dsysprop=value] # # Example: # run-java example.Hello # run-java example.Hello -Dname=World # run-java org.junit.runner.JUnitCore example.HelloTest -cp "C:\apps\lib\junit4.8.2\*" # # Created by: Zemian Deng 03/09/2012 # This run script dir (resolve to absolute path) SCRIPT_DIR=$(cd $(dirname $0) && pwd) # This dir is where this script live. APP_DIR=$(cd $SCRIPT_DIR/.. && pwd) # Assume the application dir is one level up from script dir. # Default parameters JAVA_HOME=${JAVA_HOME:=/apps/jdk} # This is the home directory of Java development kit. RUN_JAVA_CP=${RUN_JAVA_CP:=$CLASSPATH} # A classpath prefix before -classpath option, default to $CLASSPATH RUN_JAVA_OPTS=${RUN_JAVA_OPTS:=} # Java options (-Xmx512m -XX:MaxPermSize=128m etc) RUN_JAVA_DEBUG=${RUN_JAVA_DEBUG:=} # If not empty, print the full java command line before executing it. RUN_JAVA_NO_PARSE=${RUN_JAVA_NO_PARSE:=} # If not empty, skip the auto parsing of -D and -cp options from script arguments. 
RUN_JAVA_NO_AUTOCP=${RUN_JAVA_NO_AUTOCP:=} # If not empty, do not auto setup Java classpath RUN_JAVA_DRY=${RUN_JAVA_DRY:=} # If not empty, do not exec Java command, but just print # OS specific support. $var _must_ be set to either true or false. CYGWIN=false; case "`uname`" in CYGWIN*) CYGWIN=true ;; esac # Define where is the java executable is JAVA_CMD=java if [ -d "$JAVA_HOME" ]; then JAVA_CMD="$JAVA_HOME/bin/java" fi # Auto setup applciation's Java Classpath (only if they exists) if [ -z "$RUN_JAVA_NO_AUTOCP" ]; then if $CYGWIN; then # Provide Windows directory conversion JAVA_HOME_WIN=$(cygpath -aw "$JAVA_HOME") APP_DIR_WIN=$(cygpath -aw "$APP_DIR") if [ -d "$APP_DIR_WIN\config" ]; then RUN_JAVA_CP="$RUN_JAVA_CP;$APP_DIR_WIN\config" ; fi if [ -d "$APP_DIR_WIN\target\test-classes" ]; then RUN_JAVA_CP="$RUN_JAVA_CP;$APP_DIR_WIN\target\test-classes" ; fi if [ -d "$APP_DIR_WIN\target\classes" ]; then RUN_JAVA_CP="$RUN_JAVA_CP;$APP_DIR_WIN\target\classes" ; fi if [ -d "$APP_DIR_WIN\target\dependency" ]; then RUN_JAVA_CP="$RUN_JAVA_CP;$APP_DIR_WIN\target\dependency\*" ; fi if [ -d "$APP_DIR_WIN\lib" ]; then RUN_JAVA_CP="$RUN_JAVA_CP;$APP_DIR_WIN\lib\*" ; fi else if [ -d "$APP_DIR/config" ]; then RUN_JAVA_CP="$RUN_JAVA_CP:$APP_DIR/config" ; fi if [ -d "$APP_DIR/target/test-classes" ]; then RUN_JAVA_CP="$RUN_JAVA_CP:$APP_DIR/target/test-classes" ; fi if [ -d "$APP_DIR/target/classes" ]; then RUN_JAVA_CP="$RUN_JAVA_CP:$APP_DIR/target/classes" ; fi if [ -d "$APP_DIR/target/dependency" ]; then RUN_JAVA_CP="$RUN_JAVA_CP:$APP_DIR/target/dependency/*" ; fi if [ -d "$APP_DIR/lib" ]; then RUN_JAVA_CP="$RUN_JAVA_CP:$APP_DIR/lib/*" ; fi fi fi # Parse addition "-cp" and "-D" after the Java main class from script arguments # This is done for convenient sake so users do not have to export RUN_JAVA_CP and RUN_JAVA_OPTS # saparately, but now they can pass into end of this run-java script instead. # This can be disable by setting RUN_JAVA_NO_PARSE=1. if [ -z "$RUN_JAVA_NO_PARSE" ]; then # Prepare variables for parsing FOUND_CP= declare -a NEW_ARGS IDX=0 # Parse all arguments and look for "-cp" and "-D" for ARG in "$@"; do if [[ -n $FOUND_CP ]]; then if [ "$OS" = "Windows_NT" ]; then # Can't use cygpath here, because cygpath will auto expand "*", which we do not # want. User will just have to use OS path when specifying "-cp" option. #ARG=$(cygpath -w -a $ARG) RUN_JAVA_CP="$RUN_JAVA_CP;$ARG" else RUN_JAVA_CP="$RUN_JAVA_CP:$ARG" fi FOUND_CP= else case $ARG in '-cp') FOUND_CP=1 ;; '-D'*) RUN_JAVA_OPTS="$RUN_JAVA_OPTS $ARG" ;; *) NEW_ARGS[$IDX]="$ARG" let IDX=$IDX+1 ;; esac fi done # Display full Java command. if [ -n "$RUN_JAVA_DEBUG" ] || [ -n "$RUN_JAVA_DRY" ]; then echo "$JAVA_CMD" $RUN_JAVA_OPTS -cp "$RUN_JAVA_CP" "${NEW_ARGS[@]}" fi # Run Java Main class using parsed variables if [ -z "$RUN_JAVA_DRY" ]; then "$JAVA_CMD" $RUN_JAVA_OPTS -cp "$RUN_JAVA_CP" "${NEW_ARGS[@]}" fi else # Display full Java command. if [ -n "$RUN_JAVA_DEBUG" ] || [ -n "$RUN_JAVA_DRY" ]; then echo "$JAVA_CMD" $RUN_JAVA_OPTS -cp "$RUN_JAVA_CP" "$@" fi # Run Java Main class if [ -z "$RUN_JAVA_DRY" ]; then "$JAVA_CMD" $RUN_JAVA_OPTS -cp "$RUN_JAVA_CP" "$@" fi fi
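The example.Hello class referenced in the script's usage comments is not included in the post; a hypothetical version that demonstrates the -D handling could be as simple as this.

package example;

public class Hello {

    public static void main(String[] args) {
        // run-java parses trailing -D arguments into JVM system properties,
        // so `run-java example.Hello -Dname=World` prints "Hello, World!".
        String name = System.getProperty("name", "world");
        System.out.println("Hello, " + name + "!");
    }
}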
September 11, 2012
by Zemian Deng
· 14,456 Views
Java 7: HashMap vs ConcurrentHashMap
As you may have seen from my past performance related articles and HashMap case studies, Java thread safety problems can bring down your Java EE application and the Java EE container fairly easily. One of most common problems I have observed when troubleshooting Java EE performance problems is infinite looping triggered from the non-thread safe HashMap get() and put() operations. This problem is known since several years but recent production problems have forced me to revisit this issue one more time. This article will revisit this classic thread safety problem and demonstrate, using a simple Java program, the risk associated with a wrong usage of the plain old java.util.HashMap data structure involved in a concurrent threads context. This proof of concept exercise will attempt to achieve the following 3 goals: Revisit and compare the Java program performance level between the non-thread safe and thread safe Map data structure implementations (HashMap, Hashtable, synchronized HashMap, ConcurrentHashMap) Replicate and demonstrate the HashMap infinite looping problem using a simple Java program that everybody can compile, run and understand Review the usage of the above Map data structures in a real-life and modern Java EE container implementation such as JBoss AS7 For more detail on the ConcurrentHashMap implementation strategy, I highly recommend the great article from Brian Goetz on this subject. Tools and server specifications As a starting point, find below the different tools and software’s used for the exercise: Sun/Oracle JDK & JRE 1.7 64-bit Eclipse Java EE IDE Windows Process Explorer (CPU per Java Thread correlation) JVM Thread Dump (stuck thread analysis and CPU per Thread correlation) The following local computer was used for the problem replication process and performance measurements: Intel(R) Core(TM) i5-2520M CPU @ 2.50Ghz (2 CPU cores, 4 logical cores) 8 GB RAM Windows 7 64-bit * Results and performance of the Java program may vary depending of your workstation or server specifications. Java program In order to help us achieve the above goals, a simple Java program was created as per below: The main Java program is HashMapInfiniteLoopSimulator.java A worker Thread class WorkerThread.java was also created The program is performing the following: Initialize different static Map data structures with initial size of 2 Assign the chosen Map to the worker threads (you can chose between 4 Map implementations) Create a certain number of worker threads (as per the header configuration). 3 worker threads were created for this proof of concept NB_THREADS = 3; Each of these worker threads has the same task: lookup and insert a new element in the assigned Map data structure using a random Integer element between 1 – 1 000 000. 
Each worker thread perform this task for a total of 500K iterations The overall program performs 50 iterations in order to allow enough ramp up time for the HotSpot JVM The concurrent threads context is achieved using the JDK ExecutorService As you can see, the Java program task is fairly simple but complex enough to generate the following critical criteria’s: Generate concurrency against a shared / static Map data structure Use a mix of get() and put() operations in order to attempt to trigger internal locks and / or internal corruption (for the non-thread safe implementation) Use a small Map initial size of 2, forcing the internal HashMap to trigger an internal rehash/resize Finally, the following parameters can be modified at your convenience: ## Number of worker threads private static final int NB_THREADS = 3; ## Number of Java program iterations private static final int NB_TEST_ITERATIONS = 50; ## Map data structure assignment. You can choose between 4 structures // Plain old HashMap (since JDK 1.2) nonThreadSafeMap = new HashMap(2); // Plain old Hashtable (since JDK 1.0) threadSafeMap1 = new Hashtable(2); // Fully synchronized HashMap threadSafeMap2 = new HashMap(2); threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2); // ConcurrentHashMap (since JDK 1.5) threadSafeMap3 = new ConcurrentHashMap(2); /*** Assign map at your convenience ****/ assignedMapForTest = threadSafeMap3; Now find below the source code of our sample program. #### HashMapInfiniteLoopSimulator.java package org.ph.javaee.training4; import java.util.Collections; import java.util.Map; import java.util.HashMap; import java.util.Hashtable; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; /** * HashMapInfiniteLoopSimulator * @author Pierre-Hugues Charbonneau * */ public class HashMapInfiniteLoopSimulator { private static final int NB_THREADS = 3; private static final int NB_TEST_ITERATIONS = 50; private static Map assignedMapForTest = null; private static Map nonThreadSafeMap = null; private static Map threadSafeMap1 = null; private static Map threadSafeMap2 = null; private static Map threadSafeMap3 = null; /** * Main program * @param args */ public static void main(String[] args) { System.out.println("Infinite Looping HashMap Simulator"); System.out.println("Author: Pierre-Hugues Charbonneau"); System.out.println("http://javaeesupportpatterns.blogspot.com"); for (int i=0; i(2); // Plain old Hashtable (since JDK 1.0) threadSafeMap1 = new Hashtable(2); // Fully synchronized HashMap threadSafeMap2 = new HashMap(2); threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2); // ConcurrentHashMap (since JDK 1.5) threadSafeMap3 = new ConcurrentHashMap(2); // ConcurrentHashMap /*** Assign map at your convenience ****/ assignedMapForTest = threadSafeMap3; long timeBefore = System.currentTimeMillis(); long timeAfter = 0; Float totalProcessingTime = null; ExecutorService executor = Executors.newFixedThreadPool(NB_THREADS); for (int j = 0; j < NB_THREADS; j++) { /** Assign the Map at your convenience **/ Runnable worker = new WorkerThread(assignedMapForTest); executor.execute(worker); } // This will make the executor accept no new threads // and finish all existing threads in the queue executor.shutdown(); // Wait until all threads are finish while (!executor.isTerminated()) { } timeAfter = System.currentTimeMillis(); totalProcessingTime = new Float( (float) (timeAfter - timeBefore) / (float) 1000); System.out.println("All threads completed 
in "+totalProcessingTime+" seconds"); } } } #### WorkerThread.java package org.ph.javaee.training4; import java.util.Map; /** * WorkerThread * * @author Pierre-Hugues Charbonneau * */ public class WorkerThread implements Runnable { private Map map = null; public WorkerThread(Map assignedMap) { this.map = assignedMap; } @Override public void run() { for (int i=0; i<500000; i++) { // Return 2 integers between 1-1000000 inclusive Integer newInteger1 = (int) Math.ceil(Math.random() * 1000000); Integer newInteger2 = (int) Math.ceil(Math.random() * 1000000); // 1. Attempt to retrieve a random Integer element Integer retrievedInteger = map.get(String.valueOf(newInteger1)); // 2. Attempt to insert a random Integer element map.put(String.valueOf(newInteger2), newInteger2); } } } Performance comparison between thread safe Map implementations The first goal is to compare the performance level of our program when using different thread safe Map implementations: Plain old Hashtable (since JDK 1.0) Fully synchronized HashMap (via Collections.synchronizedMap()) ConcurrentHashMap (since JDK 1.5) Find below the graphical results of the execution of the Java program for each iteration along with a sample of the program console output. # Output when using ConcurrentHashMap Infinite Looping HashMap Simulator Author: Pierre-Hugues Charbonneau http://javaeesupportpatterns.blogspot.com All threads completed in 0.984 seconds All threads completed in 0.908 seconds All threads completed in 0.706 seconds All threads completed in 1.068 seconds All threads completed in 0.621 seconds All threads completed in 0.594 seconds All threads completed in 0.569 seconds All threads completed in 0.599 seconds ……………… As you can see, the ConcurrentHashMap is the clear winner here, taking in average only half a second (after an initial ramp-up) for all 3 worker threads to concurrently read and insert data within a 500K looping statement against the assigned shared Map. Please note that no problem was found with the program execution e.g. no hang situation. The performance boost is definitely due to the improved ConcurrentHashMap performance such as the non-blocking get() operation. The 2 other Map implementations performance level was fairly similar with a small advantage for the synchronized HashMap. HashMap infinite looping problem replication The next objective is to replicate the HashMap infinite looping problem observed so often from Java EE production environments. In order to do that, you simply need to assign the non-thread safe HashMap implementation as per code snippet below: /*** Assign map at your convenience ****/ assignedMapForTest = nonThreadSafeMap; Running the program as is using the non-thread safe HashMap should lead to: No output other than the program header Significant CPU increase observed from the system At some point the Java program will hang and you will be forced to kill the Java process What happened? In order to understand this situation and confirm the problem, we will perform a CPU per Thread analysis from the Windows OS using Process Explorer and JVM Thread Dump. 1 - Run the program again then quickly capture the thread per CPU data from Process Explorer as per below. Under explore.exe you will need to right click over the javaw.exe and select properties. The threads tab will be displayed. We can see overall 4 threads using almost all the CPU of our system. 2 – Now you have to quickly capture a JVM Thread Dump using the JDK 1.7 jstack utility. 
For our example, we can see our 3 worker threads which seems busy/stuck performing get() and put() operations. ..\jdk1.7.0\bin>jstack 272 2012-08-29 14:07:26 Full thread dump Java HotSpot(TM) 64-Bit Server VM (21.0-b17 mixed mode): "pool-1-thread-3" prio=6 tid=0x0000000006a3c000 nid=0x18a0 runnable [0x0000000007ebe000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.put(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) "pool-1-thread-2" prio=6 tid=0x0000000006a3b800 nid=0x6d4 runnable [0x000000000805f000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.get(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) "pool-1-thread-1" prio=6 tid=0x0000000006a3a800 nid=0x2bc runnable [0x0000000007d9e000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.put(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) .............. 3 – CPU per thread correlation It is now time to convert the Process Explorer thread ID DECIMAL format to HEXA format as per below. The HEXA value allows us to map and identify each thread as per below: ## TID: 1748 (nid=0X6D4) Thread name: pool-1-thread-2 CPU @25.71% Task: Worker thread executing a HashMap.get() operation at java.util.HashMap.get(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) ## TID: 700 (nid=0X2BC) Thread name: pool-1-thread-1 CPU @23.55% Task: Worker thread executing a HashMap.put() operation at java.util.HashMap.put(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) ## TID: 6304 (nid=0X18A0) Thread name: pool-1-thread-3 CPU @12.02% Task: Worker thread executing a HashMap.put() operation at java.util.HashMap.put(Unknown Source) at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) ## TID: 5944 (nid=0X1738) Thread name: pool-1-thread-1 CPU @20.88% Task: Main Java program execution "main" prio=6 tid=0x0000000001e2b000 nid=0x1738 runnable [0x00000000029df000] java.lang.Thread.State: RUNNABLE at org.ph.javaee.training4.HashMapInfiniteLoopSimulator.main(HashMapInfiniteLoopSimulator.java:75) As you can see, the above correlation and analysis is quite revealing. Our main Java program is in a hang state because our 3 worker threads are using lot of CPU and not going anywhere. They may appear "stuck" performing HashMap get() & put() but in fact they are all involved in an infinite loop condition. 
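If you prefer not to convert the thread IDs by hand, the decimal-to-HEXA mapping can be done in a couple of lines of Java. The sketch below is a small helper of my own (not part of the test program) that prints the nid values for the sample TIDs above.

public class ThreadIdCorrelator {

    public static void main(String[] args) {
        // Decimal thread IDs as reported by Process Explorer (sample values from the analysis above)
        int[] decimalTids = { 1748, 700, 6304, 5944 };

        for (int tid : decimalTids) {
            // jstack prints the native thread id (nid) as a lowercase hexadecimal value prefixed with 0x
            System.out.println("TID " + tid + " -> nid=0x" + Integer.toHexString(tid));
        }
    }
}

Running it prints nid=0x6d4, nid=0x2bc, nid=0x18a0 and nid=0x1738, which is exactly the mapping used in the correlation above.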
This is exactly what we wanted to replicate. HashMap infinite looping deep dive Now let’s push the analysis one step further to better understand this looping condition. For this purpose, we added tracing code within the JDK 1.7 HashMap Java class itself in order to understand what is happening. Similar logging was added for the put() operation and also a trace indicating that the internal & automatic rehash/resize got triggered. The tracing added in get() and put() operations allows us to determine if the for() loop is dealing with circular dependency which would explain the infinite looping condition. #### HashMap.java get() operation public V get(Object key) { if (key == null) return getForNullKey(); int hash = hash(key.hashCode()); /*** P-H add-on- iteration counter ***/ int iterations = 1; for (Entry e = table[indexFor(hash, table.length)]; e != null; e = e.next) { /*** Circular dependency check ***/ Entry currentEntry = e; Entry nextEntry = e.next; Entry nextNextEntry = e.next != null?e.next.next:null; K currentKey = currentEntry.key; K nextNextKey = nextNextEntry != null?(nextNextEntry.key != null?nextNextEntry.key:null):null; System.out.println("HashMap.get() #Iterations : "+iterations++); if (currentKey != null && nextNextKey != null ) { if (currentKey == nextNextKey || currentKey.equals(nextNextKey)) System.out.println(" ** Circular Dependency detected! ["+currentEntry+"]["+nextEntry+"]"+"]["+nextNextEntry+"]"); } /***** END ***/ Object k; if (e.hash == hash && ((k = e.key) == key || key.equals(k))) return e.value; } return null; } HashMap.get() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.resize() in progress... HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 2 HashMap.resize() in progress... HashMap.resize() in progress... HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 2 HashMap.put() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.put() #Iterations : 1 ** Circular Dependency detected! [362565=362565][333326=333326]][362565=362565] HashMap.put() #Iterations : 2 ** Circular Dependency detected! [333326=333326][362565=362565]][333326=333326] HashMap.put() #Iterations : 1 HashMap.put() #Iterations : 1 HashMap.get() #Iterations : 1 HashMap.put() #Iterations : 1 ............................. HashMap.put() #Iterations : 56823 Again, the added logging was quite revealing. We can see that following a few internal HashMap.resize() the internal structure became affected, creating circular dependency conditions and triggering this infinite looping condition (#iterations increasing and increasing...) with no exit condition. It is also showing that the resize() / rehash operation is the most at risk of internal corruption, especially when using the default HashMap size of 16. This means that the initial size of the HashMap appears to be a big factor in the risk & problem replication. Finally, it is interesting to note that we were able to successfully run the test case with the non-thread safe HashMap by assigning an initial size setting at 1000000, preventing any resize at all. Find below the merged graph results: The HashMap was our top performer but only when preventing an internal resize. 
Again, this is definitely not a solution to the thread safe risk but just a way to demonstrate that the resize operation is the most at risk given the entire manipulation of the HashMap performed at that time. The ConcurrentHashMap, by far, is our overall winner by providing both fast performance and thread safety against that test case. JBoss AS7 Map data structures usage We will now conclude this article by looking at the different Map implementations within a modern Java EE container implementation such as JBoss AS 7.1.2. You can obtain the latest source code from the github master branch. Find below the report: Total JBoss AS7.1.2 Java files (August 28, 2012 snapshot): 7302 Total Java classes using java.util.Hashtable: 72 Total Java classes using java.util.HashMap: 512 Total Java classes using synchronized HashMap: 18 Total Java classes using ConcurrentHashMap: 46 Hashtable references were found mainly within the test suite components and from naming and JNDI related implementations. This low usage is not a surprise here. References to the java.util.HashMap were found from 512 Java classes. Again not a surprise given how common this implementation is since the last several years. However, it is important to mention that a good ratio was found either from local variables (not shared across threads), synchronized HashMap or manual synchronization safeguard so “technically” thread safe and not exposed to the above infinite looping condition (pending/hidden bugs is still a reality given the complexity with Java concurrency programming…this case study involving Oracle Service Bus 11g is a perfect example). A low usage of synchronized HashMap was found with only 18 Java classes from packages such as JMS, EJB3, RMI and clustering. Finally, find below a breakdown of the ConcurrentHashMap usage which was our main interest here. As you will see below, this Map implementation is used by critical JBoss components layers such as the Web container, EJB3 implementation etc. ## JBoss Single Sign On Used to manage internal SSO ID's involving concurrent Thread access Total: 1 ## JBoss Java EE & Web Container Not surprising here since lot of internal Map data structures are used to manage the http sessions objects, deployment registry, clustering & replication, statistics etc. with heavy concurrent Thread access. Total: 11 ## JBoss JNDI & Security Layer Used by highly concurrent structures such as internal JNDI security management. Total: 4 ## JBoss domain & managed server management, rollout plans... Total: 7 ## JBoss EJB3 Used by data structures such as File Timer persistence store, application Exception, Entity Bean cache, serialization, passivation... Total: 8 ## JBoss kernel, Thread Pools & protocol management Used by high concurrent Threads Map data structures involved in handling and dispatching/processing incoming requests such as HTTP. Total: 3 ## JBoss connectors such as JDBC/XA DataSources... Total: 2 ## Weld (reference implementation of JSR-299: Contexts and Dependency Injection for the JavaTM EE platform) Used in the context of ClassLoader and concurrent static Map data structures involving concurrent Threads access. Total: 3 ## JBoss Test Suite Used in some integration testing test cases such as an internal Data Store, ClassLoader testing etc. Total: 3 Final words I hope this article has helped you revisit this classic problem and understand one of the common problems and risks associated with a wrong usage of the non-thread safe HashMap implementation. 
My main recommendation to you is to be careful when using a HashMap in a concurrent thread context. Unless you are a Java concurrency expert, I recommend that you use ConcurrentHashMap instead, which offers a very good balance between performance and thread safety. As usual, extra due diligence is always recommended, such as performing cycles of load and performance testing. This will allow you to detect thread safety and/or performance problems before you promote the solution to your client's production environment. Please provide any comments and share your experience with ConcurrentHashMap or HashMap implementations and troubleshooting.
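As a final illustration of that recommendation, here is a minimal sketch (my own example, not taken from the test program above) showing a shared ConcurrentHashMap used without any external synchronization; putIfAbsent() keeps the check-then-act insertion atomic.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentCacheExample {

    // Shared thread safe Map: no external synchronization required for get() / put()
    private static final Map<String, Integer> CACHE = new ConcurrentHashMap<String, Integer>();

    public static Integer lookupOrStore(String key, Integer value) {
        // putIfAbsent() is atomic, so two racing threads cannot both insert the same key
        Integer existing = CACHE.putIfAbsent(key, value);
        return (existing != null) ? existing : value;
    }

    public static void main(String[] args) {
        System.out.println(lookupOrStore("42", 42));   // first call stores and returns 42
        System.out.println(lookupOrStore("42", 100));  // second call returns the existing 42
    }
}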
September 7, 2012
by Pierre-Hugues Charbonneau
· 153,741 Views · 5 Likes
article thumbnail
OCA Java 7: The if and if-else Constructs
Editor's Note: This post is a free chapter from the book from Manning Publications "In the OCA Java SE 7 programmer certification guide" by Mala Gupta In this article, I'll cover if and if-else constructs. We'll examine what happens when these constructs are used with and without curly braces {}. We'll also cover nested if and if-else constructs. The if construct and its flavors An if construct enables you to execute a set of statements in your code based on the result of a condition. This condition must always evaluate to a boolean or a Boolean value. You can specify a set of statements to execute when this condition evaluates to true or false. (In many Java books, you'll notice that the terms constructs and statements are used interchangeably.) Figure 1 shows multiple flavors of the if statement with their corresponding representations. if if-else if-else-if-else Figure 1 Multiple flavors of if statement: if, if-else, and if-else-if In figure 1, condition1 and condition2 refer to a variable or an expression that must evaluate to boolean or Boolean value. statement1, statement2, and statement3 refer to a single line of code or a code block. Because the Boolean wrapper class isn't covered in the OCA Java SE 7 Programmer I exam, we won't cover it here. We'll work with only the boolean data type. Exam Tip: then isn't a keyword in Java and isn't supposed to be used with the if statement. Let's look at the use of some flavors by first defining a set of variables: score, result, name, and file, as follows: int score = 100; String result = ""; String name = "Lion"; java.io.File file = new java.io.File("F"); Figure 2 shows the use of if, if-else, and if-else-if-else constructs and compares them by showing the code side by side. Figure 2 Multiple flavors of if statements implemented using code Let's quickly go through the code used in above if, if-else, and if-else-if-else statements. In the following example code, if condition name.equals("Lion") evaluates to true, a value of 200 is assigned to the variable score: if (name.equals("Lion")) #A score = 200; #A #A Example of if construct In the following example, if condition name.equals("Lion") evaluates to true, a value of 200 is assigned to the variable score. If this condition were to evaluate to false, a value of 300 is assigned to the variable score: if (name.equals("Lion")) #A score = 200; #A else #A score = 300; #A #A Example of if else construct In the following example, if score is equal to 100, the variable result is assigned a value of A. If score is equal to 50, the variable result is assigned a value of B. If the score is equal to 10, the variable result is assigned a value of C. If score doesn't match either of 100, 50, or 10, a value of F is assigned to the variable result. An if-else-if-else construct may use different conditions for all its if constructs: if (score == 100) #A result = "A"; else if (score == 50) #B result = "B"; else if (score == 10) #C result = "C"; else #D result = "F"; #A Condition 1 -> score == 100 #B Condition 2 -> score == 50 #C Condition 3 -> score == 10 #D If none of previous conditions evaluate to true, execute this else Figure 3 shows the previous code. Figure 3 The execution of the if-else-if-else code Figure 3 makes clear multiple points: The last else statement is part of the last if construct and not any of the if constructs before it. The if-else-if-else is an if-else construct, where its else part defines another if construct. 
A few other programming languages, such as VB and C#, use if-elsif and if-elseif (without space) constructs to define if-else-if constructs. If you've programmed with any of these languages, note the difference is with respect to Java. The following code is equal to the previous code: if (score == 100) result = "A"; else if (score == 50) result = "B"; else if (score == 10) result = "C"; else result="F"; Again, note that none of the previous if constructs use then to define the code to execute if a condition evaluates to true. As mentioned previously, unlike other programming languages, then isn't a keyword in Java and isn't used with the if construct. Exam Tip The if-else-if-else is an if-else construct, where else part defines another if construct. The boolean expression used as a condition for if construct can also include assignment operation. Missing else blocks What happens if you don't define the else statements for an if construct? It's acceptable to define one course of action for an if construct, as follows (omitting the else part): boolean testValue = false; if (testValue == true) System.out.println("value is true"); But you can't define the else part for an if construct, skipping the if code block. The following code won't compile: boolean testValue = false; if (testValue == true) else #A System.out.println("value is false"); #A This won't compile What follows is another interesting and bizarre piece of code: int score = 100; if((score=score+10) > 110); #1 #1 Missing then or else part Line #1 is a valid line of code, even if it doesn't define both the then and else part of the if statement. In this case, if condition evaluates and that's it. The if construct doesn't define any code that should execute based on the result of this condition. Note if(testValue==true) is same as using if(testValue). Similarly, if(testValue==false) is same as using if(!testValue). Implications of presence and absence of {} in if-else constructs You can execute a single statement or a block of statements, when if condition evaluates to true or false values. A block of statement is marked by enclosing single or multiple statements within a pair of curly braces ({}). Examine the following code: String name = "Lion"; int score = 100; if (name.equals("Lion")) score = 200; What happens if you want to execute another line of code, if value of variable name is equal to Lion? Is the following code correct? String name = "Lion"; int score = 100; if (name.equals("Lion")) score = 200; name = "Larry"; #1 #1 Set name to Larry Exam Tip In the exam, watch out for code similar to the above mentioned if construct that uses misleading indentation. In the absence of a code block definition (marked with a pair of {}), only the statement following the if construct forms its part. What happens to the same code if you define an else part for your if construct as follows: String name = "Lion"; int score = 100; if (name.equals("Lion")) score = 200; name = "Larry"; #A else score = 129; #A This statement isn't part of the if construct In this case, the previous code won't compile. The compiler will report that the else part is defined without an if statement. 
If this leaves you confused, examine the following code, which is indented in order to emphasize the fact that line name = "Larry" isn't part of the else construct: String name = "Lion"; int score = 100; if (name.equals ("Lion")) score = 200; name = "Larry"; #A else #B score = 129; #A Right indentation to emphasize that this statement isn't part of the if construct #B else seems to be defined without a preceding if construct If you want to execute multiple statements for if construct, you should define them within a block of code. You can do so by defining all this code within curly braces ({}). To follow is an example: String name = "Lion"; int score = 100; if (name.equals("Lion")) { #A score = 200; #B name = "Larry"; #B } #C else score = 129; #A Start of code block #B Statements to execute if (name.equals("Lion")) evaluates to true #C End of code block Similarly, you may define multiple lines of code for the else part (incorrectly) as follows: String name = "Lion"; if (name.equals("Lion")) System.out.println("Lion"); else System.out.println("Not a Lion"); System.out.println("Again, not a Lion"); #1 #1 Not part of else construct. Will execute irrespective of the value of variable name The output of the above code is as follows: Lion Again, not a Lion Though code on line #1 seems to execute only if value of variable name matches with value Lion, this is not the case. It is indented incorrectly to trick you into believing that it is a part of the else block. The above code is same as the following code (with correct indentation): String name = "Lion"; if (name.equals("Lion")) System.out.println("Lion"); else System.out.println("Not a Lion"); System.out.println("Again, not a Lion"); #1 #1 Not part of else construct. Will execute irrespective of the value of variable name If you wish to execute the last two statements in the previous code, only if the if condition evaluates to false, you can do so by using {}: String name = "Lion"; if (name.equals("Lion")) System.out.println("Lion"); else { System.out.println("Not a Lion"); System.out.println("Again, not a Lion"); #1 } #1 Now part of else construct. Will execute only when if condition evaluates to false You can define another statement, construct or loop, to execute for an if condition, without using {}, as follows: String name = "Lion"; if (name.equals("Lion")) #A for (int i = 0; i < 3; ++i) #B System.out.println(i); #C #A if condition #B for loop is a single construct that will execute if name.equals("Lion") evaluates to true #C This code is part of the for loop defined at previous line System.out.println(i) is part of the for loop, and not an unrelated statement that follows the for loop. So this code is correct and gives the following output: 0 1 2 Appropriate vs. inappropriate expressions passed as arguments to an if statement The result of an expression used in an if construct must evaluate to a boolean or Boolean value. Given the following definition of variables: int score = 100; boolean allow = false; String name = "Lion"; Up next are examples of some of the valid expressions that can be passed on to an if construct. Note that using == is not a good practice to compare two String objects for equality. The correct way to compare two String objects is to use equals method from the String class. 
However, comparing two String values using == is a valid expression that returns a boolean value and may also be used in the exam: (score == 100) #A (name == "Lio") #B (score <= 100 || allow) #C (allow) #D #A Evaluates to true #B Evaluates to false #C Evaluates to true #D Evaluates to false Now comes the tricky part of passing an assignment operation to an if construct. What do you think is the output of the following code? boolean allow = false; if (allow = true) #A System.out.println("value is true"); else System.out.println("value is false"); #A This is assignment, not comparison You may think that because the value of the boolean variable allow is set to false, the previous code output's value is false. Revisit the code and notice that assignment operation allow = true assigns the value true to the boolean variable allow. Further, its result is also a boolean value, which makes it eligible to be passed on as an argument to the if construct, Although the previous code has no syntactical errors, it's a logical error-an error in the program logic. The correct code to compare a boolean variable with a boolean literal value should be defined as follows: boolean allow = false; if (allow == true) #A System.out.println("value is true"); else System.out.println("value is false"); #A This is comparison Exam Tip Watch out for the code in the exam that uses the assignment operator (=) to compare a boolean value in the if condition. It won't compare the boolean value; it'll assign a value to it. The correct operator to compare a boolean value is equality operator (==). Nested if constructs A nested if construct is an if construct defined within another if construct. Theoretically, you don't have a limit on the levels of nested if and if-else constructs. Whenever you come across nested if and if-else constructs, you need to be careful about determining the else part of an if statement. If this statement doesn't make a lot of sense, take a look at the following code and determine its output: int score = 110; if (score > 200) #1 if (score <400) #2 if (score > 300) System.out.println(1); else System.out.println(2); else #3 System.out.println(3); #3 #1 if (score>200) #2 if (score<400) #3 To which if does this else belongs? Based on the way the code is indented, you may believe that else at #3 belongs to the if defined at #1. But it belongs to the if defined at #2. To follow is the code with the correct indentation: int score = 110; if (score > 200) if (score <400) if (score > 300) System.out.println(1); else System.out.println(2); else #A System.out.println(3); #A #A This else belongs to the if with condition (score<400) Next, you need to understand how to do the following: How to define an else for an outer if, other than the one that it'll be assigned to by default How to determine to which if does an else belong in nested if constructs Both of these tasks are simple. Let's start with the first one. How to define an else for an outer if other than the one that it'll be assigned to by default The key point is to use curly braces, as follows: int score = 110; if (score > 200) { #1 if (score <400) if (score > 300) System.out.println(1); else System.out.println(2); } #2 else #3 System.out.println(3); #3 #1 Start if construct for score > 200 #2 End if construct for score > 200 #3 else for score > 200 The curly braces at #1 and #2 mark the start and the end of the if condition (score>200) defined at #1. Hence, the else at #3 that follows #2 belongs to the if defined at #1. 
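To see the difference at runtime, here is a small, self-contained class of my own (not from the book) that puts the two variants side by side; with score = 110 the version without braces prints nothing, while the version with braces prints 3.

public class DanglingElseDemo {

    public static void main(String[] args) {
        withoutBraces(110);  // prints nothing: the else belongs to if (score < 400), which is never reached
        withBraces(110);     // prints 3: the braces bind the else to if (score > 200)
    }

    static void withoutBraces(int score) {
        if (score > 200)
            if (score < 400)
                if (score > 300)
                    System.out.println(1);
                else
                    System.out.println(2);
            else
                System.out.println(3);
    }

    static void withBraces(int score) {
        if (score > 200) {
            if (score < 400)
                if (score > 300)
                    System.out.println(1);
                else
                    System.out.println(2);
        } else
            System.out.println(3);
    }
}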
How to determine to which if an else belongs in nested if constructs If code uses curly braces to mark the start and end of the territory of an if or else construct, it can be simple, as mentioned in the previous section, "How to define an else for an outer if other than the one that it'll be assigned to by default." When the if constructs don't use curly braces, don't get confused by the code indentation. Try to match each if with its corresponding else in the following poorly indented code: if (score > 200) if (score <400) if (score > 300) System.out.println(1); else System.out.println(2); else System.out.println(3); Start working inside out, with the innermost if-else statement, matching each else with its nearest unmatched if statement. Figure 4 shows how to match the if-else pairs for the previous code, marked with 1, 2, and 3. Figure 4 Matching if-else pairs for poorly indented code Summary We covered the different flavors of the if construct. You saw what happens when these constructs are used with and without curly braces {}. We also covered nested if and if-else constructs. The humble if-else construct can virtually define any set of simple or complicated conditions. OCA Java SE 7 Programmer I Certification Guide By Mala Gupta In the OCA Java SE 7 programmer exam, you'll be asked how to define and control the flow in your code. In this article, based on chapter 4 of OCA Java SE 7 Programmer I Certification Guide, author Mala Gupta shows you how to use if, if-else, if-else-if-else, and nested if constructs, and the difference when these constructs are used with and without curly braces {}. Here are some other Manning titles you might be interested in: Unit Testing in Java Lasse Koskela Making Java Groovy Kenneth Kousen Play for Java Nicolas Leroux and Sietse de Kaper
September 6, 2012
by Allen Coin
· 15,328 Views
article thumbnail
Using Spring Profiles and Java Configuration
My last blog introduced Spring 3.1’s profiles and explained both the business case for using them and demonstrated their use with Spring XML configuration files. It seems, however, that a good number of developers prefer using Spring’s Java based application configuration, so Spring have designed a way of using profiles with their existing @Configuration annotation. I’m going to demonstrate profiles and the @Configuration annotation using the Person class from my previous blog. This is a simple bean class whose properties vary depending upon which profile is active. public class Person { private final String firstName; private final String lastName; private final int age; public Person(String firstName, String lastName, int age) { this.firstName = firstName; this.lastName = lastName; this.age = age; } public String getFirstName() { return firstName; } public String getLastName() { return lastName; } public int getAge() { return age; } } Remember that the Guys at Spring recommend that Spring profiles should only be used when you need to load different types or sets of classes and that for setting properties you should continue using the PropertyPlaceholderConfigurer. The reason I’m breaking the rules is that I want to try to write the simplest code possible to demonstrate profiles and Java configuration. At the heart of using Spring profiles with Java configuration is Spring’s new @Profile annotation. The @Profile annotation is used attach a profile name to an @Configuration annotation. It takes a single parameter that can be used in two ways. Firstly to attach a single profile to an @Configuration annotation: @Profile("test1") and secondly, to attach multiple profiles: @Profile({ "test1", "test2" }) Again, I’m going to define two profiles “test1” and “test2” and associate each with a configuration file. Firstly “test1”: @Configuration @Profile("test1") public class Test1ProfileConfig { @Bean public Person employee() { return new Person("John", "Smith", 55); } } ...and then “test2”: @Configuration @Profile("test2") public class Test2ProfileConfig { @Bean public Person employee() { return new Person("Fred", "Williams", 22); } } In the code above, you can see that I'm creating a Person bean with an effective id of employee (this is from the method name) that returns differing property values in each profile. Also note that the @Profile is marked as: @Target(value=TYPE) ...which means that is can only be placed next to the @Configuration annotation. Having attached an @Profile to an @Configuration, the next thing to do is to activate your selected @Profile. This uses exactly the same principles and techniques that I described in my last blog and again, to my mind, the most useful activation technique is to use the "spring.profiles.active" system property. @Test public void testProfileActiveUsingSystemProperties() { System.setProperty("spring.profiles.active", "test1"); ApplicationContext ctx = new ClassPathXmlApplicationContext("profiles-config.xml"); Person person = ctx.getBean("employee", Person.class); String firstName = person.getFirstName(); assertEquals("John", firstName); } Obviously, you wouldn’t want to hard code things as I’ve done above and best practice usually means keeping the system properties configuration separate from your application. 
This gives you the option of using either a simple command line argument such as: -Dspring.profiles.active="test1" ...or by adding # Setting a property value spring.profiles.active=test1 to Tomcat’s catalina.properties So, that’s all there is to it: you create your Spring profiles by annotating an @Configuration with an @Profile annotation and then switching on the profile you want to use by setting the spring.profiles.active system property to your profile’s name. As usual, the Guys at Spring don’t just confine you to using system properties to activate profiles, you can do things programatically. For example, the following code creates an AnnotationConfigApplicationContext and then uses an Environment object to activate the “test1” profile, before registering our @Configuration classes. @Test public void testAnnotationConfigApplicationContextThatWorks() { // Can register a list of config classes AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(); ctx.getEnvironment().setActiveProfiles("test1"); ctx.register(Test1ProfileConfig.class, Test2ProfileConfig.class); ctx.refresh(); Person person = ctx.getBean("employee", Person.class); String firstName = person.getFirstName(); assertEquals("John", firstName); } This is all fine and good, but beware, you need to call AnnotationConfigApplicationContext’s methods in the right order. For example, if you register your @Configuration classes before you specify your profile, then you’ll get an IllegalStateException. @Test(expected = IllegalStateException.class) public void testAnnotationConfigApplicationContextThatFails() { // Can register a list of config classes AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext( Test1ProfileConfig.class, Test2ProfileConfig.class); ctx.getEnvironment().setActiveProfiles("test1"); ctx.refresh(); Person person = ctx.getBean("employee", Person.class); String firstName = person.getFirstName(); assertEquals("John", firstName); } Before closing today’s blog, the code below demonstrates the ability to attach multiple @Profiles to an @Configuration annotation. @Configuration @Profile({ "test1", "test2" }) public class MulitpleProfileConfig { @Bean public Person tourDeFranceWinner() { return new Person("Bradley", "Wiggins", 32); } } @Test public void testMulipleAssignedProfilesUsingSystemProperties() { System.setProperty("spring.profiles.active", "test1"); ApplicationContext ctx = new ClassPathXmlApplicationContext("profiles-config.xml"); Person person = ctx.getBean("tourDeFranceWinner", Person.class); String firstName = person.getFirstName(); assertEquals("Bradley", firstName); System.setProperty("spring.profiles.active", "test2"); ctx = new ClassPathXmlApplicationContext("profiles-config.xml"); person = ctx.getBean("tourDeFranceWinner", Person.class); firstName = person.getFirstName(); assertEquals("Bradley", firstName); } In the code above, 2012 Tour De France winner Bradley Wiggins appears in both the “test1” and “test2” profiles.
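If you are writing integration tests, the same profiles can also be activated declaratively with the Spring TestContext framework. The sketch below assumes the spring-test module (3.1 or later) is on the classpath and that it sits in the same package as the Person, Test1ProfileConfig and Test2ProfileConfig classes from above; the test class name is mine.

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { Test1ProfileConfig.class, Test2ProfileConfig.class })
@ActiveProfiles("test1") // activates the "test1" profile for this test class only
public class ProfileActivationTest {

    @Autowired
    private Person employee; // resolved from the @Configuration class attached to "test1"

    @Test
    public void employeeComesFromTest1Profile() {
        assertEquals("John", employee.getFirstName());
    }
}

The nice thing about this approach is that the active profile is part of the test class itself, so no system property or environment setup is needed to run the test.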
August 30, 2012
by Roger Hughes
· 129,087 Views · 6 Likes
article thumbnail
Performance Test: Groovy 2.0 vs. Java
At the end of July 2012, Groovy 2.0 was released with support for static type checking and some performance improvements through the use of JDK7 invokedynamic and type inference as a result of type information now available through static typing. I was interested in seeing some estimate as to how significant the performance improvements in Groovy 2.0 have turned out and how Groovy 2.0 would now compare to Java in terms of performance. In case the performance gap had become minor, or at least acceptable, in the meantime, it would certainly be time to take a serious look at Groovy. Groovy has been ready for production for a long time. So, let's see whether it can compare with Java in terms of performance. The only performance measurement I could find on the Internet was this little benchmark measurement on jlabgroovy. The measurement only consists of calculating Fibonacci numbers with and without the @CompileStatic annotation. That's it; i.e., it's certainly not very meaningful in striving to get an overall impression. I was only interested in obtaining some rough estimate of how Groovy compares to Java as far as performance is concerned. Java performance measurement included Alas, no measurement was included in this little benchmark as to how much time Java takes to calculate Fibonacci numbers. So I "ported" the Groovy code to Java (here it is) and repeated the measurements. All measurements were done on an Intel Core2 Duo CPU E8400 3.00 GHz using JDK7u6 running on Windows 7 with Service Pack 1. I used Eclipse Juno with the Groovy plugin using the Groovy compiler version 2.0.0.xx-20120703-1400-e42-RELEASE. These are the figures I obtained without having a warm-up phase:

Benchmark        | Groovy 2.0 without @CompileStatic | Groovy/Java performance factor | Groovy 2.0 with @CompileStatic | Groovy/Java performance factor | Kotlin 0.1.2580 | Java
static ternary   | 4352ms                            | 4.7                            | 926ms                          | 1.0                            | 1005ms          | 924ms
static if        | 4267ms                            | 4.7                            | 911ms                          | 0.9                            | 1828ms          | 917ms
instance ternary | 4577ms                            | 2.7                            | 1681ms                         | 1.8                            | 994ms           | 917ms
instance if      | 4592ms                            | 2.9                            | 1604ms                         | 1.7                            | 1611ms          | 969ms

I also did measurements with a warm-up phase of various lengths, with the conclusion that there is no benefit for either language, with or without @CompileStatic. Since the Fibonacci algorithm is so recursive, the warm-up phase seems to be "included" for any Fibonacci number that is not very small. We can see that the performance improvements due to static typing have made quite a difference. This little comparison does little justice, though. To me, static typing in Groovy, in conjunction with type inference, has led to significant performance improvements, in the same way it led to Groovy++ becoming very strong. With @CompileStatic, the performance of Groovy is about 1-2 times slower than Java, and without it, about 3-5 times slower. Unhappily, the measurements of "instance ternary" and "instance if" are the slowest. Unless we want to create masterpieces of programming with static functions, the measurements for "static ternary" and "static if" are not that relevant for most of the code with the ambition to be object-oriented (based on instances). Conclusion When Groovy was about 10-20 times slower than Java (see the benchmark table almost at the end of this article), it is questionable whether @CompileStatic was used or not. This means to me that Groovy is ready for applications where performance has to be somewhat comparable to Java. Earlier, Groovy (or Ruby, Clojure, etc.) could only serve as a plus on your CV because of the performance impediment (at least here in Europe). New JVM kid on the block: Kotlin I added the figures for Kotlin as well (here is the code). Kotlin is a relatively new statically typed JVM-based Java-compatible programming language. Kotlin is more concise than Java by supporting variable type inference, higher-order functions (closures), extension functions, mixins and first-class delegation, etc. Contrary to Groovy, it is more geared towards Scala, but it also integrates well with Java. Kotlin is still under development and has yet to be officially released. So the figures have to be taken with caution as the guys at JetBrains are still working on code optimization. Ideally, Kotlin should be as fast as Java. The measurements were done with the current "official" release 0.1.2580. And what about future performance improvements? At the time when JDK 1.3 was the most recent JDK, I still earned my pay with Smalltalk development. At that time the performance of VisualWorks Smalltalk (now Cincom Smalltalk) and IBM VA for Smalltalk (now owned by Instantiations) was very good and comparable to Java. And Smalltalk is a dynamically typed language, like pre-2.0 Groovy and Ruby, where the compiler cannot make use of type inference to do optimizations. Because of this, it always appeared strange to me that Groovy, Ruby and other JVM-based dynamic languages had such a big performance penalty compared to Java when Smalltalk did not. From that point of view I think there is still room for Groovy performance improvements beyond @CompileStatic.
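The Java port used for the measurements is only linked above, so here is a rough sketch of what such a recursive Fibonacci micro-benchmark typically looks like (my own reconstruction, not the author's exact code; the method names mirror the "static ternary" and "static if" variants).

public class FibonacciBenchmark {

    // Recursive Fibonacci using the ternary operator, analogous to the "static ternary" variant
    static long fibStaticTernary(int n) {
        return n >= 2 ? fibStaticTernary(n - 1) + fibStaticTernary(n - 2) : 1;
    }

    // Same algorithm expressed with an if statement, analogous to the "static if" variant
    static long fibStaticIf(int n) {
        if (n >= 2) {
            return fibStaticIf(n - 1) + fibStaticIf(n - 2);
        }
        return 1;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long result = fibStaticTernary(40);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fib(40) = " + result + " in " + elapsed + " ms");
    }
}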
August 28, 2012
by Oliver Plohmann
· 49,080 Views · 1 Like
article thumbnail
Adding Hibernate Entity Level Filtering feature to Spring Data JPA Repository
Original Article: http://borislam.blogspot.hk/2012/07/adding-hibernate-entity-level-filter.html Those who have used the data filtering features of Hibernate should know that they are very powerful. You could define a set of filtering criteria on an entity class or a collection. Spring Data JPA is a very handy library but it does not have filtering features. In this post, I will demonstrate how to add the Hibernate filter feature at the entity level. You can use this feature when you are using the Hibernate Entity Manager. We can just define an annotation in your repository interface to enable this feature. Step 1. Define the filter at the entity level as usual. Just use the Hibernate @FilterDef annotation @Entity @Table(name = "STUDENT") @FilterDef(name="filterBySchoolAndClass", parameters={@ParamDef(name="school", type="string"),@ParamDef(name="class", type="integer")}) public class Student extends GenericEntity implements Serializable { // add your properties ... } Step 2. Define two custom annotations. These two annotations are to be used in your repository interfaces. You could apply the Hibernate filter defined in step 1 to a specific query through these annotations. @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) public @interface EntityFilter { FilterQuery[] filterQueries() default {}; } @Retention(RetentionPolicy.RUNTIME) public @interface FilterQuery { String name() default ""; String jpql() default ""; } Step 3. Add a method to your Spring Data JPA base repository. This method will read the annotation you defined (i.e. @FilterQuery) and apply the Hibernate filter to the query by simply unwrapping the EntityManager. You can specify the parameters for your Hibernate filter and also the parameters for your query in this method. If you do not know how to add a custom method to your Spring Data JPA base repository, please see my previous article on how to customize your Spring Data JPA base repository for details. You can see in the previous article that I intentionally exposed the repository interface (i.e. the springDataRepositoryInterface property) in GenericRepositoryImpl. This small trick enables me to access the annotations on the repository interface easily. public List doQueryWithFilter( String filterName, String filterQueryName, Map inFilterParams, Map inQueryParams){ if (GenericRepository.class.isAssignableFrom(getSpringDataRepositoryInterface())) { Annotation entityFilterAnn = getSpringDataRepositoryInterface().getAnnotation(EntityFilter.class); if(entityFilterAnn != null){ EntityFilter entityFilter = (EntityFilter)entityFilterAnn; FilterQuery[] filterQuerys = entityFilter.filterQueries() ; for (FilterQuery fQuery : filterQuerys) { if (StringUtils.equals(filterQueryName, fQuery.name())) { String jpql = fQuery.jpql(); Filter filter = em.unwrap(Session.class).enableFilter(filterName); // set filter parameters for (Object key: inFilterParams.keySet()) { String filterParamName = key.toString(); Object filterParamValue = inFilterParams.get(key); filter.setParameter(filterParamName, filterParamValue); } // set query parameters Query query = em.createQuery(jpql); for (Object key: inQueryParams.keySet()) { String queryParamName = key.toString(); Object queryParamValue = inQueryParams.get(key); query.setParameter(queryParamName, queryParamValue); } return query.getResultList(); } } } } return null; } Last Step: example usage In your repository, define which queries you would like to apply the Hibernate filter to through the @EntityFilter and @FilterQuery annotations. 
@EntityFilter ( filterQueries = { @FilterQuery(name="query1", jpql="SELECT s FROM Student LEFT JOIN FETCH s.Subject where s.subject = :subject" ), @FilterQuery(name="query2", jpql="SELECT s FROM Student LEFT JOIN s.TeacherSubject where s.teacher = :teacher") } ) public interface StudentRepository extends GenericRepository { } In your service or business class that inject your repository, you could just simply call the doQueryWithFilter() method to enable the filtering function. @Service public class StudentService { @Inject private StudentRepository studentRepository; public List searchStudent( String subject, String school, String class) { List studentList; // Prepare parameters for query filter HashMap inFilterParams = new HashMap(); inFilterParams.put("school", "Hong Kong Secondary School"); inFilterParams.put("class", "S5"); // Prepare parameters for query HashMap inParams = new HashMap(); inParams.put("subject", "Physics"); studentList = studentRepository.doQueryWithFilter( "filterBySchoolAndClass", "query1", inFilterParams, inParams); return studentList; } }
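One caveat worth adding: enableFilter() is scoped to the underlying Hibernate Session, so a filter enabled in doQueryWithFilter() stays active for later queries on the same persistence context unless it is explicitly disabled. Below is a minimal sketch of that clean-up using the same unwrap technique as above; the helper class is mine and is not part of the original repository code.

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.Query;

import org.hibernate.Session;

public class FilteredQueryHelper {

    private final EntityManager em;

    public FilteredQueryHelper(EntityManager em) {
        this.em = em;
    }

    public List<?> queryWithFilter(String filterName, String jpql) {
        Session session = em.unwrap(Session.class);
        // enable the named filter; setParameter(...) calls would go here, exactly as in doQueryWithFilter()
        session.enableFilter(filterName);
        try {
            Query query = em.createQuery(jpql);
            return query.getResultList();
        } finally {
            // make sure the session-scoped filter does not leak into later queries
            session.disableFilter(filterName);
        }
    }
}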
August 24, 2012
by Boris Lam
· 56,337 Views · 1 Like
article thumbnail
Installing Maven 3.0.4 on Ubuntu 12.04
To install Apache Maven 3.0.4 on Ubuntu 12.04, take the following steps.
August 24, 2012
by Pavithra Gunasekara
· 43,246 Views
article thumbnail
Obfuscate Your JavaFX Application
Introduction JavaFX currently has a high momentum and enjoys good adoption in the community. With its rich set of controls, CSS styling, good and free tool chain and, last but not least, its multi-platform availability (with Version 2.2 that was released just a couple of days ago, it is available for Windows, Mac OS and Linux) it is just natural to take it into consideration when thinking about an implementation technology for commercial desktop applications. Java is compiled into a bytecode which can be just as easily decompiled back into human readable source code. If you are thinking about writing a commercial application you might want to protect your intellectual property by implementing some functionality that checks if the user has a proper license - for instance a valid serial number - or something alike. This functionality or the whole application should not be easy to decompile and understand by a third party. A common measure to achieve this, or at least make it harder, is obfuscation. Since the reinvention of Version 2.0, JavaFX is 100% Java, which means you can use any Java obfuscator and just mind some minor differences. There is a whole bunch of free and commercial obfuscation tools out there. One of them you will encounter if you Google java obfuscation, and its proguard (http://proguard.sourceforge.net/). It is free and, although it takes some time to get into it, it's well-documented and ships with an ant task. The following text describes the process of obfuscating a JavaFX application with proguard. It is a complete example that uses the latest tools and features in JavaFX, including FXML. The following tools were used: Netbeans 7.2 Scene Builder 1.0 JDK 7 update 6 with JavaFX 2.2 Proguard version 4.8 The complete example is attached as a netbeans project. The Application The example application is a very simple one. It shows one screen with a textfield for a message and a button. If the user presses the button, the message is encrypted and shown in another non-editable textfield. The screen is implemented using fxml (Sample.fxml) and a controller class (SampleController.java). The code to encrypt the message is implemented in a separate class (EncryptionService.java) and the finally there is an application class with a main method (ObfuscationExample.java) that starts up the whole application. The following screenshot shows the application. Obfuscation Proguard is highly customizable and ships with a gui that let's you edit the configuration file that is specific for your application. I don't want to get too much into details here. As mentioned the proguard documentation is pretty exhaustive. You have to specify the jars that you want to get obfuscated (injars) , the resulting jar (outjars) and all libraries that are referenced from your injars (library jars), in any case the Java runtime (rt.jar) and the JavaFX runtime (jfxrt.jar). Until there it is just like any other java application that you obfuscate. In the case of JavaFX, there are some more things that need to be considered: In the controller class for the fxml file action handler methods and controls are annotated with the @FXML annotation. You want to keep these annotations and achieve this with the -keepattributes option Both controls and action handler methods are connected via the annotation AND their name. 
Therefore you also want to keep the names of those, which can be achieved with the -keepclassmembernames option that is applied to everything that is annotated with @javafx.fxml.FXML Last but not least, you want to keep your main method(s) that are the entry point to your application. In the case of JavaFX you have to configure this always for two classes: com.javafx.main.Main and your JavaFX application class, in our case: obfuscationexample.ObfuscationExample -injars dist\ObfuscationExample.jar -outjars dist\o_ObfuscationExample.jar -libraryjars /lib/rt.jar -libraryjars /lib/jfxrt.jar -dontshrink -dontoptimize -flattenpackagehierarchy '' -keepattributes Exceptions,InnerClasses,Signature,Deprecated,SourceFile,LineNumberTable,LocalVariable*Table,*Annotation*,Synthetic,EnclosingMethod -adaptresourcefilecontents **.fxml,**.properties,META-INF/MANIFEST.MF -keepclassmembernames class * { @javafx.fxml.FXML *; } # Keep - Applications. Keep all application classes, along with their 'main' # methods. -keepclasseswithmembers public class com.javafx.main.Main, obfuscationexample.ObfuscationExample { public static void main(java.lang.String[]); } The result of the obfuscation can be viewed in a decompiler. I was using JD-GUI (http://java.decompiler.free.fr) which is also free and quite easy to use. Automatically obfuscate during build After we have the obfuscation setup to our needs, we finally want to integrate it in our build-process. The build.xml that is created automatically in Netbeans offers some hooks to call additional tasks during the build-process -post-jfx-jar seems to the right step, as this is called after the jar file was created. As mentioned above proguard ships with an ant task that allows - besides other things - to just simply execute a proguard configuration file. The target below is called during the build process and does the following: Define the proguard ant task Call proguard with our configuration that we have setup before Rename the resulting obfuscated jar to the original name to make for example the original JNLP-file still work. Conclusion Although it took me some time to get everything working especially when using FXML, it is after all not much code to get your JavaFX application at least basically obfuscated in a seamless and automated way. Proguard has much more options to obfuscate in a more sophisticated way and to even shrink and optimize your code. Anyway I leave it up to you to configure it to your special needs.
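For reference, the members that the -keepclassmembernames rule above must preserve look roughly like this. The sketch below is modelled on the SampleController described earlier, but the field names, fx:id values and the reverse-based "encryption" are placeholders of my own, not the code from the attached project.

import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.scene.control.TextField;

public class SampleController {

    @FXML
    private TextField messageField;    // injected by the FXMLLoader via its fx:id

    @FXML
    private TextField encryptedField;  // name must survive obfuscation so the FXML lookup still works

    @FXML
    private void handleEncrypt(ActionEvent event) {
        // an onAction="#handleEncrypt" reference in Sample.fxml is resolved by method name
        encryptedField.setText(new StringBuilder(messageField.getText()).reverse().toString());
    }
}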
August 21, 2012
by Thomas Bolz
· 15,659 Views · 2 Likes
article thumbnail
Spring Data, Spring Security and Envers integration
Learn about pros, cons, and basics of Spring security and data, plus Envers integration.
August 20, 2012
by Nicolas Fränkel
· 24,801 Views · 1 Like
article thumbnail
8 Ways to Improve Your Java EE Production Support Skills
This article will provide you with 8 ways to improve your production support skills which may help you better enjoy your IT support job.
August 15, 2012
by Pierre-Hugues Charbonneau
· 32,289 Views · 2 Likes
article thumbnail
JaCoCo in Maven Multi-Module Projects
Code coverage is an important measurement used during our development that describes the degree to which source code is tested. In this post I am going to explain how to run code coverage using Maven and JaCoCo plugins in multi-module projects. JaCoCo is a code coverage library for Java, which has been created by the EclEmma team. It has a plugin for Eclipse, and can be run with Ant and Maven too. Now we will focus only on a Maven approach. In a project with only one module is as easy as registering a build plugin: org.jacoco jacoco-maven-plugin 0.5.7.201204190339 prepare-agent report prepare-package report And now running mvn package in site/jacoco directory, a coverage report will be present in different formats. But with multimodule projects a new problem arises. How do we merge the metrics of all subprojects into only one file so we can have a quick overview of all subprojects? For now the Maven JaCoCo Plugin does not support it. There are many alternatives and I am going to cite the most common: Sonar. It has the disadvantage that you need to install Sonar (maybe you are already using it, but maybe not). Jenkins. The plugin for JaCoCo is still under development. Moreover you need to run a build job to inspect your coverage. This is good in terms of continuous integration but could be a problem if you are trying to "catch" some piece of code that have not been covered with previously implemented tests. Arquillian JaCoCo Extension. Arquillian is a container test framework that has an extension which during test execution can capture the coverage. It's a good option if you are using Arquillian. The disadvantage is that maybe your project does not require a container. Ant. You can use an Ant task with Maven. JaCoCo Ant tasks can merge results from multiple JaCoCo file results. Note that this is the most generic solution, and this is the chosen approach that we are going to use. The first thing to do is add a JaCoCo plugin to the parent pom so all projects could generate a coverage report. Of course, if there are modules which do not require coverage, the plugin definition should be changed from parent pom to specific projects. org.jacoco jacoco-maven-plugin 0.5.7.201204190339 prepare-agent report prepare-package report The next step is creating a specific submodule for appending all results of the JaCoCo plugin by using an Ant task. I suggest using something like project-name-coverage. Then let's open generated pom.xml and we are going to insert the required plugins to join all coverage information. To append them. As we have already written we are going to use a JaCoCo Ant task which has the ability to open all JaCoCo output files and append all their content into one. So the first thing to do is download the jar which contains the JaCoCo Ant task. To automate the download process, we are going to use maven dependency plugin: org.apache.maven.plugins maven-dependency-plugin jacoco-dependency-ant copy process-test-resources false org.jacoco org.jacoco.ant ${jacoco.version} true ${basedir}/target/jacoco-jars During process-test-resources phase Jacoco Ant artifact will be downloaded and copied to the target directory so it can be registered into the pom without worrying about the jar location. We also need a way to handle Ant tasks from Maven. And this is as simple as using maven antrun plugin, which you can specify any ant command in its configuration section. 
See next simple example: org.apache.maven.plugins maven-antrun-plugin 1.6 compile run Notice that we can specify any Ant task in the target tag. And now we are ready to start configuring the JaCoCo Ant task. The JaCoCo report plugin requires you set the location of the build directory, class directory, source directory or generated-source directory. For this purpose we are going set them as properties. ../projectA/target ../projectB/target ../projectA/target/classes ../projectB/target/classes ../projectA/src/main/java ../projectB/src/main/java ../projectA/target/generated-sources/annotations ../projectB/target/generated-sources/annotations And now the Ant task part which will go into target tag of the antrun plugin. First we need to define report task. Do you see that org.jacoco.ant.jar file is downloaded by the dependency plugin? You don't need to worry about copying it manually. Then we are going to call report task as defined in taskdef section. Within the executiondata element, we specify locations where JaCoCo execution data files are stored. By default this is the target directory, and for each project we need to add one entry for each submodule. The next element is structure. This element defines the report structure, and can be defined with a hierarchy of group elements. Each group should contain class files and source files of all projects that belongs to that group. In our example only one group is used. And finally we are setting output format using html, xml and csv tags. Complete Code: org.apache.maven.plugins maven-antrun-plugin 1.6 post-integration-test run org.jacoco org.jacoco.ant ${jacoco.version} And now simply run mvn clean verify and in my-project-coverage/target/coverage-report, a report with code coverage of all projects will be presented. Hope you find this post useful. We Keep Learning, Alex.
August 13, 2012
by Alex Soto
· 80,954 Views · 4 Likes
article thumbnail
Installing Oracle Java 6 on Ubuntu
If you have already installed Ubuntu 12.04, you have probably realized that Sun Java (Oracle Java) does not come prepackaged with Ubuntu like it used to; instead, OpenJDK comes with it. Here is how you can install Oracle Java on Ubuntu 12.04 manually. Download jdk-6u32-linux-x64.bin from this link. If you have a 32-bit Ubuntu installation, download jdk-6u32-linux-x32.bin instead. To make the downloaded bin file executable, use the following command chmod +x jdk-6u32-linux-x64.bin To extract the bin file, use the following command ./jdk-6u32-linux-x64.bin Using the following command, create a folder called "jvm" inside /usr/lib if it does not already exist sudo mkdir /usr/lib/jvm Move the extracted folder into the newly created jvm folder sudo mv jdk1.6.0_32 /usr/lib/jvm/ To register the new Java binaries with update-alternatives, use the following commands sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_32/bin/javac 1 sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_32/bin/java 1 sudo update-alternatives --install /usr/bin/javaws javaws /usr/lib/jvm/jdk1.6.0_32/bin/javaws 1 To make this the default Java sudo update-alternatives --config javac sudo update-alternatives --config java sudo update-alternatives --config javaws To verify that the symlinks point to the new Java location, use the following command ls -la /etc/alternatives/java* To verify Java has installed correctly, use this command java -version
August 13, 2012
by Pavithra Gunasekara
· 59,793 Views · 1 Like