The Latest Coding Topics

Understanding sun.misc.Unsafe
The biggest competitor to the Java virtual machine might be Microsoft's CLR, which hosts languages such as C#. The CLR allows writing unsafe code as an entry gate for low-level programming, something that is hard to achieve on the JVM. If you need such advanced functionality in Java, you might be forced to use the JNI, which requires you to know some C and will quickly lead to code that is tightly coupled to a specific platform. With sun.misc.Unsafe, there is however another alternative for low-level programming on the Java platform using a Java API, even though this alternative is discouraged. Nevertheless, several applications rely on sun.misc.Unsafe, such as Objenesis and therewith all libraries that build on it, such as Kryo, which is in turn used in, for example, Twitter's Storm. It is therefore time to have a look, especially since the functionality of sun.misc.Unsafe is considered to become part of Java's public API in Java 9.

Getting Hold of an Instance of sun.misc.Unsafe

The sun.misc.Unsafe class is intended to be used only by core Java classes, which is why its authors made its only constructor private and only added an equally private singleton instance. The public getter for this instance performs a security check in order to prevent its public use:

```java
public static Unsafe getUnsafe() {
  Class cc = sun.reflect.Reflection.getCallerClass(2);
  if (cc.getClassLoader() != null)
    throw new SecurityException("Unsafe");
  return theUnsafe;
}
```

This method first looks up the calling class from the current thread's method stack. The lookup is implemented by another internal class named sun.reflect.Reflection, which basically browses down the given number of call stack frames and then returns that frame's defining class. (This security check is however likely to change in future versions.) When browsing the stack, the first found class (index 0) will obviously be the Reflection class itself, the second class (index 1) will be the Unsafe class, and index 2 will hold your application class that called Unsafe#getUnsafe(). This looked-up class is then checked for its ClassLoader, where a null reference is used to represent the bootstrap class loader on a HotSpot virtual machine. (This is documented in Class#getClassLoader(), where it says that "some implementations may use null to represent the bootstrap class loader".) Since no non-core Java class is normally ever loaded with this class loader, you will never be able to call this method directly but will instead receive a SecurityException. (Technically, you could force the VM to load your application classes with the bootstrap class loader by adding them to the -Xbootclasspath, but this requires some setup outside of your application code which you might want to avoid.) Thus, the following test will succeed:

```java
@Test(expected = SecurityException.class)
public void testSingletonGetter() throws Exception {
  Unsafe.getUnsafe();
}
```

However, the security check is poorly designed and should be seen as a warning against the singleton anti-pattern. As long as the use of reflection is not prohibited (which is hard, since it is so widely used by many frameworks), you can always get hold of an instance by inspecting the private members of the class. From the Unsafe class's source code, you can learn that the singleton instance is stored in a private static field called theUnsafe. This is at least true for the HotSpot virtual machine.
Unfortunately for us, other virtual machine implementations sometimes use other names for this field. Android's Unsafe class, for example, stores its singleton instance in a field called THE_ONE. This makes it hard to provide a "compatible" way of obtaining the instance. However, since we already left the safe territory of compatibility by using the Unsafe class at all, we should not worry about this more than we should worry about using the class in the first place. For getting hold of the singleton instance, you simply read the singleton field's value:

```java
Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
theUnsafe.setAccessible(true);
Unsafe unsafe = (Unsafe) theUnsafe.get(null);
```

Alternatively, you can invoke the private constructor. I personally prefer this way, since it works on Android, for example, while extracting the field does not:

```java
Constructor<Unsafe> unsafeConstructor = Unsafe.class.getDeclaredConstructor();
unsafeConstructor.setAccessible(true);
Unsafe unsafe = unsafeConstructor.newInstance();
```

The price you pay for this minor compatibility advantage is a minimal amount of heap space. The security checks performed when using reflection on fields or constructors are however similar.

Create an Instance of a Class Without Calling a Constructor

The first time I made use of the Unsafe class was for creating an instance of a class without calling any of the class's constructors. I needed to proxy an entire class which only had a rather noisy constructor, but I only wanted to delegate all method invocations to a real instance which I did not yet know at the time of construction. Creating a subclass was easy, and if the class had been represented by an interface, creating a proxy would have been a straightforward task. With the expensive constructor, I was however stuck. By using the Unsafe class, I was able to work my way around it. Consider a class with an artificially expensive constructor:

```java
class ClassWithExpensiveConstructor {

  private final int value;

  private ClassWithExpensiveConstructor() {
    value = doExpensiveLookup();
  }

  private int doExpensiveLookup() {
    try {
      Thread.sleep(2000);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    return 1;
  }

  public int getValue() {
    return value;
  }
}
```

Using Unsafe, we can create an instance of ClassWithExpensiveConstructor (or any of its subclasses) without having to invoke the above constructor, simply by allocating an instance directly on the heap:

```java
@Test
public void testObjectCreation() throws Exception {
  ClassWithExpensiveConstructor instance =
      (ClassWithExpensiveConstructor) unsafe.allocateInstance(ClassWithExpensiveConstructor.class);
  assertEquals(0, instance.getValue());
}
```

Note that the final field remained uninitialized by the constructor and is set to its type's default value. Other than that, the constructed instance behaves like a normal Java object. It will, for example, be garbage collected when it becomes unreachable. The Java runtime itself creates objects without calling a constructor, for example when creating objects for deserialization.
Therefore, the ReflectionFactory offers even more access to individual object creation:

```java
@Test
public void testReflectionFactory() throws Exception {
  @SuppressWarnings("unchecked")
  Constructor<ClassWithExpensiveConstructor> silentConstructor = ReflectionFactory.getReflectionFactory()
      .newConstructorForSerialization(ClassWithExpensiveConstructor.class, Object.class.getConstructor());
  silentConstructor.setAccessible(true);
  assertEquals(0, silentConstructor.newInstance().getValue());
}
```

Note that the ReflectionFactory class only requires a RuntimePermission called reflectionFactoryAccess for receiving its singleton instance; no reflection is therefore required here. (Since Object's constructor never touches the value field, the field keeps its default value of 0.) The received instance of ReflectionFactory allows you to define any constructor to become a constructor for the given type. In the example above, I used the default constructor of java.lang.Object for this purpose. You can however use any constructor:

```java
class OtherClass {

  private final int value;
  private final int unknownValue;

  private OtherClass() {
    System.out.println("test");
    this.value = 10;
    this.unknownValue = 20;
  }
}

@Test
public void testStrangeReflectionFactory() throws Exception {
  @SuppressWarnings("unchecked")
  Constructor<ClassWithExpensiveConstructor> silentConstructor = ReflectionFactory.getReflectionFactory()
      .newConstructorForSerialization(ClassWithExpensiveConstructor.class, OtherClass.class.getDeclaredConstructor());
  silentConstructor.setAccessible(true);
  ClassWithExpensiveConstructor instance = silentConstructor.newInstance();
  assertEquals(10, instance.getValue());
  assertEquals(ClassWithExpensiveConstructor.class, instance.getClass());
  assertEquals(Object.class, instance.getClass().getSuperclass());
}
```

Note that value was set in this constructor even though the constructor of a completely different class was invoked. Fields that do not exist in the target class (such as unknownValue) are simply ignored, as is also obvious from the above example. Note that OtherClass does not become part of the constructed instance's type hierarchy; OtherClass's constructor is simply borrowed for the "serialized" type. Not mentioned in this blog entry are other methods such as Unsafe#defineClass, Unsafe#defineAnonymousClass or Unsafe#ensureClassInitialized. Similar functionality is however also available through the public API's ClassLoader.

Native Memory Allocation

Did you ever want to allocate an array in Java with more than Integer.MAX_VALUE entries? Probably not, because this is not a common task, but if you ever need this functionality, it is possible. You can create such an array by allocating native memory. Native memory allocation is used, for example, by the direct byte buffers offered in Java's NIO packages. Unlike heap memory, native memory is not part of the heap area and can be used non-exclusively, for example for communicating with other processes. As a result, Java's heap space is in competition with the native space: the more memory you assign to the JVM, the less native memory is left.
Let us look at an example of using native (off-heap) memory in Java to create the mentioned oversized array:

```java
class DirectIntArray {

  private final static long INT_SIZE_IN_BYTES = 4;

  private final long startIndex;

  public DirectIntArray(long size) {
    startIndex = unsafe.allocateMemory(size * INT_SIZE_IN_BYTES);
    unsafe.setMemory(startIndex, size * INT_SIZE_IN_BYTES, (byte) 0);
  }

  public void setValue(long index, int value) {
    unsafe.putInt(index(index), value);
  }

  public int getValue(long index) {
    return unsafe.getInt(index(index));
  }

  private long index(long offset) {
    return startIndex + offset * INT_SIZE_IN_BYTES;
  }

  public void destroy() {
    unsafe.freeMemory(startIndex);
  }
}

@Test
public void testDirectIntArray() throws Exception {
  long maximum = Integer.MAX_VALUE + 1L;
  DirectIntArray directIntArray = new DirectIntArray(maximum);
  directIntArray.setValue(0L, 10);
  directIntArray.setValue(maximum - 1, 20); // an index beyond the reach of an ordinary Java array
  assertEquals(10, directIntArray.getValue(0L));
  assertEquals(20, directIntArray.getValue(maximum - 1));
  directIntArray.destroy();
}
```

First, make sure that your machine has sufficient memory for running this example! You need at least (2147483647 + 1) * 4 bytes = 8192 MB of native memory to run the code. If you have worked with other programming languages such as C, direct memory allocation is something you do every day. By calling Unsafe#allocateMemory(long), the virtual machine allocates the requested amount of native memory for you. After that, it is your responsibility to handle this memory correctly.

The amount of memory required for storing a specific value depends on the type's size. In the above example, I used an int type, which represents a 32-bit integer; consequently, a single int value consumes 4 bytes. For primitive types, the sizes are well-documented. It is however more complex to compute the size of object types, since their size depends on the number of non-static fields declared anywhere in the type hierarchy. The most canonical way of computing an object's size is using the Instrumentation class from Java's attach API, which offers a dedicated method for this purpose called getObjectSize. I will however evaluate another (hacky) way of dealing with objects at the end of this section.

Be aware that directly allocated memory is always native memory and is therefore not garbage collected. You have to free it explicitly, as demonstrated in the above example by the call to Unsafe#freeMemory(long). Otherwise you have reserved memory that can never be used for anything else as long as the JVM instance is running, which is a memory leak and a common problem in non-garbage-collected languages. Alternatively, you can reallocate memory at a certain address directly by calling Unsafe#reallocateMemory(long, long), where the second argument describes the new number of bytes to be reserved by the JVM at the given address. Also note that directly allocated memory is not initialized to a certain value. In general, you will find garbage from old usages of this memory area, so you have to explicitly initialize your allocated memory if you require a default value. This is something that is normally done for you when you let the Java runtime allocate the memory. In the above example, the entire area is overwritten with zeros with the help of the Unsafe#setMemory method. When using directly allocated memory, the JVM will not perform range checks for you either.
It is therefore possible to corrupt your memory, as this example shows:

```java
@Test
public void testMaliciousAllocation() throws Exception {
  long address = unsafe.allocateMemory(2L * 4);
  unsafe.setMemory(address, 8L, (byte) 0);
  assertEquals(0, unsafe.getInt(address));
  assertEquals(0, unsafe.getInt(address + 4));
  unsafe.putInt(address + 1, 0xffffffff);
  assertEquals(0xffffff00, unsafe.getInt(address));
  assertEquals(0x000000ff, unsafe.getInt(address + 4));
}
```

Note that we wrote a value into space that was partly reserved for the first and partly for the second number. Be aware that the values in memory run from "right to left" (though this might be machine dependent): the initial state after writing zeros is an entirely zeroed memory area; we then overwrite 4 bytes at an offset of a single byte with 32 one-bits, and the final state shows how the written value straddles both ints.

Finally, we want to write an entire object into native memory. As mentioned above, this is a difficult task, since we first need to compute the size of the object in order to know how much space to reserve. The Unsafe class does not offer such functionality directly, but we can at least use it to find the offset of an instance's fields, which the JVM uses when it allocates objects on the heap itself. This allows us to find the approximate size of an object:

```java
public long sizeOf(Class clazz) {
  long maximumOffset = 0;
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        maximumOffset = Math.max(maximumOffset, unsafe.objectFieldOffset(f));
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
  return maximumOffset + 8;
}
```

This might at first look cryptic, but there is no big secret behind this code. We simply iterate over all non-static fields that are declared in the class itself or in any of its super classes. We do not have to worry about interfaces, since those cannot define fields and will therefore never alter an object's memory layout. Each of these fields has an offset representing the first byte occupied by this field's value when the JVM stores an instance of this type in memory, relative to the first byte used for the object. We simply have to find the maximum offset in order to find the space required for all fields but the last. Since a field will never occupy more than 64 bit (8 bytes), as for a long or double value or for an object reference on a 64-bit machine, we have at least found an upper bound for the space used to store an object. Therefore, we simply add these 8 bytes to the maximum offset, and we will not run the risk of having reserved too little space. This idea of course wastes some bytes, and a better algorithm should be used for production code.

In this context, it is best to think of a class definition as a form of heterogeneous array. Note that the minimum field offset is not 0 but a positive value: the first few bytes contain meta information. (The original post illustrates this with a graphic of an example object with an int and a long field, both fields having such an offset.) Note also that we would normally not write this meta information when writing a copy of an object into native memory, so we could further reduce the amount of native memory used. Finally, this memory layout may be highly dependent on the particular implementation of the Java virtual machine.
With this overly careful estimate, we can now implement some stub methods for writing shallow copies of objects directly into native memory. Note that native memory does not really know the concept of an object. We are basically just setting a given number of bytes to values that reflect the object's current field values. As long as we remember the memory layout for the type, these bytes contain enough information to reconstruct the object.

```java
public void place(Object o, long address) throws Exception {
  Class clazz = o.getClass();
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        long offset = unsafe.objectFieldOffset(f);
        if (f.getType() == long.class) {
          unsafe.putLong(address + offset, unsafe.getLong(o, offset));
        } else if (f.getType() == int.class) {
          unsafe.putInt(address + offset, unsafe.getInt(o, offset));
        } else {
          throw new UnsupportedOperationException();
        }
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
}

public Object read(Class clazz, long address) throws Exception {
  Object instance = unsafe.allocateInstance(clazz);
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        long offset = unsafe.objectFieldOffset(f);
        if (f.getType() == long.class) {
          unsafe.putLong(instance, offset, unsafe.getLong(address + offset));
        } else if (f.getType() == int.class) {
          unsafe.putInt(instance, offset, unsafe.getInt(address + offset));
        } else {
          throw new UnsupportedOperationException();
        }
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
  return instance;
}

// Container is a simple value class with an int and a long field plus equals/hashCode;
// its definition was omitted in the original post.
@Test
public void testObjectAllocation() throws Exception {
  long containerSize = sizeOf(Container.class);
  long address = unsafe.allocateMemory(containerSize * 2);
  Container c1 = new Container(10, 1000L);
  Container c2 = new Container(5, -10L);
  place(c1, address);
  place(c2, address + containerSize);
  Container newC1 = (Container) read(Container.class, address);
  Container newC2 = (Container) read(Container.class, address + containerSize);
  assertEquals(c1, newC1);
  assertEquals(c2, newC2);
}
```

Note that these stub methods for writing and reading objects in native memory only support int and long field values. Of course, Unsafe supports all primitive values and can even write values without hitting thread-local caches by using the volatile forms of its methods. The stubs were kept minimal only to keep the examples concise. Be aware that these "instances" would never be garbage collected, since their memory was allocated directly. (But maybe this is what you want.) Also, be careful when precalculating sizes: an object's memory layout may be VM dependent and may differ between a 64-bit and a 32-bit machine. The offsets might even change between JVM restarts.

For reading and writing primitives or object references, Unsafe provides the following type-dependent methods:

- getXXX(Object target, long offset): Reads a value of type XXX from target's address at the specified offset.
- putXXX(Object target, long offset, XXX value): Places value at target's address at the specified offset.
- getXXXVolatile(Object target, long offset): Like getXXX, but without hitting any thread-local caches.
- putXXXVolatile(Object target, long offset, XXX value): Like putXXX, but without hitting any thread-local caches.
- putOrderedXXX(Object target, long offset, XXX value): Like putXXX, but possibly without hitting all thread-local caches.
- putXXX(long address, XXX value): Places the specified value of type XXX directly at the specified address.
- getXXX(long address): Reads a value of type XXX from the specified address.
- compareAndSwapXXX(Object target, long offset, XXX expectedValue, XXX value): Atomically reads the value of type XXX at target's address at the specified offset and sets the given value if the current value equals the expected value.
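This compare-and-swap family is the primitive behind the atomic classes in java.util.concurrent. As an illustration, here is a minimal, hedged sketch of a lock-free counter built on compareAndSwapInt (the CasCounter class and its count field are made up for this example):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

class CasCounter {

  private volatile int count; // updated only through CAS below

  private static final Unsafe UNSAFE = getUnsafePrivileged();
  private static final long COUNT_OFFSET;

  static {
    try {
      COUNT_OFFSET = UNSAFE.objectFieldOffset(CasCounter.class.getDeclaredField("count"));
    } catch (NoSuchFieldException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  // Retry until our expected value was still current when the swap happened.
  int increment() {
    int current;
    do {
      current = UNSAFE.getIntVolatile(this, COUNT_OFFSET);
    } while (!UNSAFE.compareAndSwapInt(this, COUNT_OFFSET, current, current + 1));
    return current + 1;
  }

  // Same reflective extraction as shown earlier in this article.
  private static Unsafe getUnsafePrivileged() {
    try {
      Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
      theUnsafe.setAccessible(true);
      return (Unsafe) theUnsafe.get(null);
    } catch (Exception e) {
      throw new ExceptionInInitializerError(e);
    }
  }
}
```

The retry loop is the standard CAS idiom: read the current value, attempt the swap, and start over if another thread won the race in between.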
Returning to the object-copying stubs: be aware that you are copying references when writing or reading object copies in native memory using the getObject(Object, long) method family. You are therefore only creating shallow copies of instances when applying the above methods. You could however always read object sizes and offsets recursively and create deep copies. Watch out, however, for cyclic object references, which would cause infinite loops when this principle is applied carelessly. Not mentioned here are existing utilities in the Unsafe class that allow the manipulation of static field values, such as staticFieldOffset, and utilities for handling array types. Finally, both methods named Unsafe#copyMemory allow direct copying of memory, either relative to a specific object offset or at an absolute address, as the following example shows:

```java
@Test
public void testCopy() throws Exception {
  long address = unsafe.allocateMemory(4L);
  unsafe.putInt(address, 100);
  long otherAddress = unsafe.allocateMemory(4L);
  unsafe.copyMemory(address, otherAddress, 4L);
  assertEquals(100, unsafe.getInt(otherAddress));
}
```

Throwing Checked Exceptions Without Declaration

There are some other interesting methods to be found in Unsafe. Did you ever want to throw a specific exception to be handled in a lower layer, but your higher-layer interface type did not declare this checked exception? Unsafe#throwException allows you to do so:

```java
@Test(expected = Exception.class)
public void testThrowChecked() throws Exception {
  throwChecked();
}

public void throwChecked() {
  unsafe.throwException(new Exception());
}
```

Native Concurrency

The park and unpark methods allow you to pause a thread for a certain amount of time and to resume it:

```java
@Test
public void testPark() throws Exception {
  final boolean[] run = new boolean[1];
  Thread thread = new Thread() {
    @Override
    public void run() {
      unsafe.park(true, 100000L);
      run[0] = true;
    }
  };
  thread.start();
  unsafe.unpark(thread);
  thread.join(100L);
  assertTrue(run[0]);
}
```

Also, monitors can be acquired directly through Unsafe using monitorEnter(Object), monitorExit(Object) and tryMonitorEnter(Object). A file containing all the examples of this blog entry is available as a gist.
January 14, 2014
by Rafael Winterhalter
· 151,705 Views · 39 Likes
Splitting Large XML Files in Java
Our best option is to create a pre-processing tool that first splits the big file into multiple smaller chunks before they are processed by the middleware.
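Such a pre-processing splitter can be sketched with the standard StAX API. The following is a minimal, hedged example (the input file name, chunk file names and the <record> element are illustrative, not taken from the article) that copies each record of a large input file into its own chunk file:

```java
import java.io.FileInputStream;
import java.io.FileWriter;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

public class XmlSplitter {

  public static void main(String[] args) throws Exception {
    XMLInputFactory inFactory = XMLInputFactory.newInstance();
    XMLOutputFactory outFactory = XMLOutputFactory.newInstance();
    XMLEventFactory eventFactory = XMLEventFactory.newInstance();
    // Streaming reader: the big file is never held in memory as a whole.
    XMLEventReader reader = inFactory.createXMLEventReader(new FileInputStream("big.xml"));

    XMLEventWriter writer = null;
    int chunk = 0;
    while (reader.hasNext()) {
      XMLEvent event = reader.nextEvent();
      if (event.isStartElement()
          && "record".equals(event.asStartElement().getName().getLocalPart())) {
        // Start a new chunk file for every <record> element.
        writer = outFactory.createXMLEventWriter(new FileWriter("chunk-" + chunk++ + ".xml"));
        writer.add(eventFactory.createStartDocument());
      }
      if (writer != null) {
        writer.add(event); // copy everything between <record> and </record> verbatim
      }
      if (event.isEndElement()
          && "record".equals(event.asEndElement().getName().getLocalPart())) {
        writer.add(eventFactory.createEndDocument());
        writer.close();
        writer = null; // events outside of records (whitespace, the root element) are skipped
      }
    }
    reader.close();
  }
}
```

Batching several records per chunk instead of one is a straightforward extension of the same loop.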
January 14, 2014
by Koen Serneels
· 43,082 Views · 4 Likes
Spring IDE and the Spring Tool Suite - Using Spring in Eclipse
Get started with Spring IDE and the Spring Tool Suite – a set of plugins to simplify the development of Spring-based applications in Eclipse.
January 10, 2014
by James Sugrue
· 699,021 Views · 9 Likes
JBoss 5 to 7 in 11 steps
Introduction

Some time ago we decided to upgrade our application from JBoss 5 to 7 (technically, 7.2). In this article I am going to describe several things which we found problematic. At the end I have also provided a short list of benefits we gained, in retrospect.

First, some general information about our application. It was built using EJB 3.0 technology. We have two interfaces for communicating with other components: JMS and JAX-WS. We use JBoss AS 5 as our messaging broker, which is started as a separate JVM process; this part of the system we were not allowed to change. Finally, we use JPA to store processing results in an Oracle DB.

Step #1 – Convince your Product Owner

Although our application was rather small and built on the JEE5 standard, it took us 4 weeks to migrate it to JEE6 and JBoss 7. So you can't do it as a maintenance ticket – it's simply too big. There is always a problem with demonstrating the business value of such a migration to Product Owners as well as to key stakeholders. There are several aspects which might help you convince them. One of the biggest benefits is processing time: JBoss 7 is simply faster and has better caching (Infinispan over Ehcache). Another one is startup time (our server is ready to go in 5-6 seconds, as opposed to 1 minute on JBoss 5). Finally, development is much faster (EJB 3.1 is much better than 3.0), which can be translated to "time to market". With the above arguments I'm pretty sure you'll convince them.

Step #2 – Do some reading

Here is a list of interesting links which are worth reading before the migration:

- JBoss 5 -> 7 migration guide: https://docs.jboss.org/author/display/AS7/How+do+I+migrate+my+application+from+AS5+or+AS6+to+AS7
- JBoss 7 vs. EAP libraries: https://access.redhat.com/site/articles/112673
- JBoss EAP FAQ: http://www.jboss.org/jbossas/faq
- Cache implementation benchmarks: http://sourceforge.net/p/nitrocache/blog/2012/05/performance-benchmark-nitrocache--ehcache--infinispan--jcs--cach4j/
- JBoss 7 performance tuning: http://www.mastertheboss.com/jboss-performance/jboss-as-7-performance-tuning
- JBoss caching: http://www.mastertheboss.com/hibernate-howto/using-hibernate-second-level-cache-with-jboss-as-5-6-7

Step #3 – Off you go – change Maven dependencies

JBoss 5 isn't packaged very well, so I suppose you have many dependencies included in your classpath (either directly or through transitive dependencies). This is the first big change in JBoss 7. I strongly advise you to use this artifact in your dependency management section:

```xml
<dependency>
    <groupId>org.jboss.as</groupId>
    <artifactId>jboss-as-parent</artifactId>
    <version>7.2.0.Final</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
```

We also decided to stick to the JEE6 spec only and configure all additional JBoss 7 options with the proper XML files. If that sounds good for your project too, just add this dependency and you're done with this step:

```xml
<dependency>
    <groupId>org.jboss.spec</groupId>
    <artifactId>jboss-javaee-6.0</artifactId>
    <version>1.0.0.Final</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
```

After cleaning up the dependencies, your code probably won't compile for a couple of days or even weeks. It takes time to clean this up.

Step #4 – EJB 3.0 to 3.1 migration

Dependency injection is at the heart of the application, so it is worth starting with it. Almost all of your code should work, but you'll have some problems with beans annotated with @Service (singletons from the JBoss 5 EJB extended API). You just need to replace them with @Singleton annotations and put a @PostConstruct annotation on your init method, as sketched below. One last thing: remember to use a proper concurrency strategy. We decided to use @ConcurrencyManagement(BEAN) and leave the implementation as is.
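A minimal, hedged sketch of this Step #4 change (the CacheWarmer class and its init method are illustrative, not from the original application):

```java
// Before (JBoss 5 proprietary EJB extension):
//
// @Service
// public class CacheWarmer {
//   public void create() { ... } // lifecycle callback by naming convention
// }

// After (standard EJB 3.1), with bean-managed concurrency as chosen in the article:
import javax.annotation.PostConstruct;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.Singleton;
import static javax.ejb.ConcurrencyManagementType.BEAN;

@Singleton
@ConcurrencyManagement(BEAN)
public class CacheWarmer {

  @PostConstruct
  public void init() {
    // populate caches on startup, exactly as the old @Service bean did
  }
}
```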
Step #5 – Upgrade to JPA 2.0

If you used JPA 1.0 with Hibernate, I'm pretty sure you have a lot of non-standard annotations defining caching or cascading. All of them can be successfully replaced with JPA 2.0 annotations, so you can finally get rid of Hibernate on the compile classpath and depend only on JPA 2.0. Here are several standard things to do:

- Get rid of Hibernate's Session.evict and switch to EntityManager.detach
- Get rid of Hibernate's @Cache annotation and replace it with @Cacheable
- Fix cascades (orphan removal is now part of the @XXXToYYY annotations)
- Remove the Hibernate dependency and stick with the JEE6 spec

Step #6 – Fix Hibernate's sequencer

Migrating Hibernate 3 to 4 is a bit tricky because of the way it uses sequences (for fields annotated with @Id). Hibernate by default uses a pool of IDs instead of incrementing the sequence on every insert. An example will be more descriptive:

Some_DB_Sequence.nextval -> 1
Hibernate 3: 1 * 50 = 50; IDs to be used = 50, 51, 52, ..., 99
Some_DB_Sequence.nextval -> 2
Hibernate 3: 2 * 50 = 100; IDs to be used = 100, 101, 102, ..., 149

In Hibernate 4.x there is a new sequence generator that produces IDs related 1:1 to the DB sequence. Typically it's disabled by default... but not in JBoss 7.1. So after the migration, Hibernate tries to insert entities using IDs read from the sequence (using the new sequence generator) that were already used, which causes constraint violations. The fastest solution is to switch Hibernate back to the old method of sequence generation (described in the example above), which requires setting the hibernate.id.new_generator_mappings property to false in persistence.xml.

Step #7 – Caching

Infinispan is shipped with JBoss 7 and does not require much configuration. Only the setting enabling the second-level cache needs to remain in persistence.xml; the other cache-related properties can be removed. Infinispan itself might require some extra configuration – just use standalone-full-ha.xml as a guide.

Step #8 – RMI with JBoss 5

If you're using a lot of RMI to communicate with other JBoss 5 servers, I have bad news for you: JBoss 5 and 7 are totally different, and this kind of communication will not work. I strongly recommend switching to some other technology like JAX-WS. In retrospect, we are very glad we decided to do it.

Step #9 – JMS migration

We thought it would be really hard to connect to a JMS server based on JBoss 5. It turned out that you have two options, and both work fine:

- Start a HornetQ server on your own instance and create a bridge to the JBoss 5 instance
- Use the generic JMS adapter: https://github.com/jms-ra/generic-jms-ra

Step #10 – Fix EAR layout

In JBoss 5 it did not matter where the jars were placed; all EJBs were started. This does not work with JBoss 7 anymore. All EJBs which should start must be added as modules.

Step #11 – JMX console

Bad news – it's not present in JBoss 7. We liked it very much, but we had to switch to jvisualvm to invoke our JMX operations. There is a ticket open for this in the WildFly JIRA: https://issues.jboss.org/browse/WFLY-1197. Unfortunately, at the moment of writing this article it is not resolved.

Some thoughts in retrospect

Migrating from JBoss 5 to 7 is a really time-consuming task, although in my opinion it is worth it. Now we have better caching for cluster solutions (Infinispan), better DI (EJB 3.1) and better web services (CXF instead of JBoss WS). Processing time decreased by 25% without any code change. Development speed increased, in my opinion, by 50% (it is really hard to measure), and we are much more productive (faster server restarts). The memory footprint dropped from 1 GB to 512 MB.
Finally, automatic application redeployment works! However, there is always a price to pay: the migration took us 4 weeks (2 sprints), and we didn't write any code for the business in that period. So make sure you prepare well for such a migration. My last advice: invest some time in writing good automated functional tests (we use Arquillian for that). Once they're green again, you're almost across the finish line.
January 9, 2014
by Sebastian Laskawiec
· 46,661 Views
Spring Cache Abstraction
The Spring cache abstraction applies caching to Java methods. It provides an environment where we can cache the results of the methods we choose; by doing so, it improves performance by avoiding repeated execution of a method for the same input. Note that this type of caching can only be applied to methods which return the same result for the same input. In this post, we will dive into the Spring cache abstraction and give code samples for the related parts.

Spring provides annotations for caching. The first and most basic way of caching is done with the @Cacheable annotation. When we make a method @Cacheable, on each invocation the cache is checked to see whether a result for the invocation already exists. Let's see an example of the basic use of @Cacheable:

```java
@Cacheable("customers")
public Customer findCustomer(long customerId) {...}
```

When the method has a complex input, you can generate the key by specifying which attribute will be the key for the cache. Let's see an example:

```java
@Cacheable(value = "customer", key = "#identity.customerId")
public Customer findCustomer(Identity identity) {...}
```

Spring also provides conditional caching for the @Cacheable annotation. You can specify the condition under which items should be cached via the condition parameter:

```java
@Cacheable(value = "customer", condition = "#identity.loginFrequency > 3")
public Customer findCustomer(Identity identity) {...}
```

Eviction is an important issue: one should evict entries from the cache, since there can be stale items in it. While @Cacheable populates items into the cache, @CacheEvict removes stale items from it:

```java
@CacheEvict(value = "customer", allEntries = true)
public void removeAllCustomers() {...}
```

By default, Spring backs the caches with a ConcurrentHashMap, set up by declaring a corresponding cache manager; other cache managers like ImcacheCacheManager can be plugged in the same way (see the sketch below). For an example project, you can have a look at the imcache-examples project on GitHub. The example class is SpringCacheExample.java and the example configuration is in exampleContext.xml.
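A cache-manager declaration of this kind can be sketched in Java config as well; this is a minimal, hedged example using Spring's own ConcurrentMapCacheManager (the configuration class and cache names are illustrative):

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // switches on @Cacheable/@CacheEvict processing
public class CacheConfig {

  @Bean
  public CacheManager cacheManager() {
    // Backs each named cache with a ConcurrentHashMap, matching Spring's default behavior.
    // Swap this bean for another CacheManager (e.g. one backed by Imcache) to change providers.
    return new ConcurrentMapCacheManager("customers", "customer");
  }
}
```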
January 8, 2014
by Yusuf Aytaş
· 31,390 Views · 1 Like
Building a Sample Java WebSocket Client
Learn more about creating Java-based WebSocket clients, including code for the server-side WebSocket application and the corresponding JavaScript/HTML client.
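For a taste of what such a client looks like, here is a minimal, hedged sketch using the standard javax.websocket API (JSR 356); the endpoint URI and message text are illustrative, not taken from the article:

```java
import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class SampleWebSocketClient {

  @OnOpen
  public void onOpen(Session session) throws Exception {
    // Send a first message as soon as the handshake completes.
    session.getBasicRemote().sendText("Hello from the client!");
  }

  @OnMessage
  public void onMessage(String message) {
    System.out.println("Received: " + message);
  }

  public static void main(String[] args) throws Exception {
    WebSocketContainer container = ContainerProvider.getWebSocketContainer();
    // ws://localhost:8080/echo is a placeholder for the actual server endpoint.
    container.connectToServer(SampleWebSocketClient.class, URI.create("ws://localhost:8080/echo"));
    Thread.sleep(5000); // keep the JVM alive long enough to receive the echo
  }
}
```

A JSR 356 implementation such as Tyrus must be on the classpath for ContainerProvider to find a container at run time.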
January 8, 2014
by Ulas Ergin
· 223,825 Views · 10 Likes
CGLib: The Missing Manual
The byte code instrumentation library cglib is a popular choice among many well-known Java frameworks, such as Hibernate (not anymore) or Spring, for doing their dirty work. Byte code instrumentation allows classes to be manipulated or created after the compilation phase of a Java application. Since Java classes are linked dynamically at run time, it is possible to add new classes to an already running Java program. Hibernate, for example, uses cglib for its generation of dynamic proxies: instead of returning the full object that you stored in a database, Hibernate returns an instrumented version of your stored class that lazily loads some values from the database only when they are requested. Spring uses cglib, for example, when adding security constraints to your method calls: instead of calling your method directly, Spring Security will first check if a specified security check passes and only delegate to your actual method after this verification. Another popular use of cglib is within mocking frameworks such as Mockito, where mocks are nothing more than instrumented classes whose methods were replaced with empty implementations (plus some tracking logic).

Unlike ASM - the rather low-level byte code manipulation library on top of which cglib is built - cglib offers higher-level byte code transformers that can be used without even knowing about the details of a compiled Java class. Unfortunately, the documentation of cglib is rather short, not to say that there is basically none. Besides a single blog article from 2005 that demonstrates the Enhancer class, there is not much to find. This blog article is an attempt to demonstrate cglib and its unfortunately often awkward API.

Enhancer

Let's start with the Enhancer class, probably the most used class of the cglib library. An enhancer allows the creation of Java proxies for non-interface types. The Enhancer can be compared with the Java standard library's Proxy class, which was introduced in Java 1.3. The Enhancer dynamically creates a subclass of a given type but intercepts all method calls. Unlike with the Proxy class, this works for both class and interface types. The following example and some of the examples after it are based on this simple Java POJO:

```java
public class SampleClass {
  public String test(String input) {
    return "Hello world!";
  }
}
```

Using cglib, the return value of the test(String) method can easily be replaced by another value using an Enhancer and a FixedValue callback:

```java
@Test
public void testFixedValue() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new FixedValue() {
    @Override
    public Object loadObject() throws Exception {
      return "Hello cglib!";
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
}
```

In the above example, the enhancer will return an instance of an instrumented subclass of SampleClass where all method calls return a fixed value generated by the anonymous FixedValue implementation above. The object is created by Enhancer#create(Object...), where the method takes any number of arguments which are used to pick a constructor of the enhanced class. (Even though constructors are only methods on the Java byte code level, the Enhancer class cannot instrument constructors. Neither can it instrument static or final classes.) If you only want to create a class, but no instance, Enhancer#createClass will create a Class instance which can be used to create instances dynamically.
All constructors of the enhanced class will be available as delegation constructors in this dynamically generated class. Be aware that any method call will be delegated in the above example, including calls to the methods defined in java.lang.Object. As a result, a call to proxy.toString() will also return "Hello cglib!". In contrast, a call to proxy.hashCode() will result in a ClassCastException, since the FixedValue interceptor always returns a String even though the Object#hashCode signature requires a primitive integer.

Another observation that can be made is that final methods are not intercepted. An example of such a method is Object#getClass, which will return something like "SampleClass$$EnhancerByCGLIB$$e277c63c" when it is invoked. This class name is generated randomly by cglib in order to avoid naming conflicts. Be aware of this different class of the enhanced instance when you are making use of explicit types in your program code. The class generated by cglib will however be in the same package as the enhanced class (and will therefore be able to override package-private methods). In the same way that final methods cannot be intercepted, the subclassing approach also makes it impossible to enhance final classes. This is why frameworks such as Hibernate cannot persist final classes.

Next, let us look at a more powerful callback class, the InvocationHandler, which can also be used with an Enhancer:

```java
@Test
public void testInvocationHandler() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new InvocationHandler() {
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
      if (method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return "Hello cglib!";
      } else {
        throw new RuntimeException("Do not know what to do.");
      }
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
}
```

This callback allows us to answer depending on the invoked method. However, you should be careful when calling a method on the proxy object that comes with the InvocationHandler#invoke method. All calls on this method will be dispatched with the same InvocationHandler and might therefore result in an endless loop. In order to avoid this, we can use yet another callback type:

```java
@Test
public void testMethodInterceptor() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new MethodInterceptor() {
    @Override
    public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy) throws Throwable {
      if (method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return "Hello cglib!";
      } else {
        return proxy.invokeSuper(obj, args);
      }
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
  proxy.hashCode(); // Does not throw an exception or result in an endless loop.
}
```

The MethodInterceptor allows full control over the intercepted method and offers some utilities for calling the methods of the enhanced class in their original state. But why would one want to use the other callback types at all? Because they are more efficient, and cglib is often used in edge-case frameworks where efficiency plays a significant role.
The creation and linkage of the MethodInterceptor requires, for example, the generation of a different type of byte code and the creation of some runtime objects that are not required with the InvocationHandler. Because of that, there are other classes that can be used with the Enhancer:

- LazyLoader: Even though the LazyLoader's only method has the same signature as FixedValue, the LazyLoader is fundamentally different from the FixedValue interceptor. The LazyLoader is actually supposed to return an instance of a subclass of the enhanced class. This instance is requested only when a method is called on the enhanced object and is then stored for future invocations of the generated proxy. This makes sense if your object is expensive to create and you do not know whether it will ever be used. Be aware that some constructor of the enhanced class must be called both for the proxy object and for the lazily loaded object. Thus, make sure that another cheap (maybe protected) constructor is available, or use an interface type for the proxy. You can choose the invoked constructor by supplying arguments to Enhancer#create(Object...).
- Dispatcher: The Dispatcher is like the LazyLoader but will be invoked on every method call without storing the loaded object. This allows changing the implementation of a class without changing the reference to it. Again, be aware that some constructor must be called for both the proxy and the generated objects.
- ProxyRefDispatcher: This class carries a reference to the proxy object it is invoked from in its signature. This allows, for example, delegating method calls to another method of this proxy. Be aware that this can easily cause an endless loop and will always cause an endless loop if the same method is called from within ProxyRefDispatcher#loadObject(Object).
- NoOp: The NoOp class does not do what its name suggests. Instead of doing nothing, it delegates each method call to the enhanced class's own method implementation.

At this point, the last two interceptors might not make sense to you. Why would you even want to enhance a class when you always delegate method calls to the enhanced class anyway? And you are right. These interceptors should only be used together with a CallbackFilter, as demonstrated in the following code snippet:

```java
@Test
public void testCallbackFilter() throws Exception {
  Enhancer enhancer = new Enhancer();
  CallbackHelper callbackHelper = new CallbackHelper(SampleClass.class, new Class[0]) {
    @Override
    protected Object getCallback(Method method) {
      if (method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return new FixedValue() {
          @Override
          public Object loadObject() throws Exception {
            return "Hello cglib!";
          }
        };
      } else {
        return NoOp.INSTANCE; // A singleton provided by NoOp.
      }
    }
  };
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallbackFilter(callbackHelper);
  enhancer.setCallbacks(callbackHelper.getCallbacks());
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
  proxy.hashCode(); // Does not throw an exception or result in an endless loop.
}
```

The Enhancer instance accepts a CallbackFilter in its Enhancer#setCallbackFilter(CallbackFilter) method, where it expects methods of the enhanced class to be mapped to array indices of an array of Callback instances.
When a method is invoked on the created proxy, the Enhancer will then choose the according interceptor and dispatch the called method to the corresponding Callback (which is a marker interface for all the interceptors that were introduced so far). To make this API less awkward, cglib offers a CallbackHelper, which can represent a CallbackFilter and create an array of Callbacks for you. The enhanced object above is functionally equivalent to the one in the MethodInterceptor example, but it allows you to write specialized interceptors while keeping the dispatching logic to these interceptors separate.

How does it work? When the Enhancer creates a class, it will create a private static field for each interceptor that was registered as a Callback for the enhanced class after its creation. This also means that class definitions created with cglib cannot be reused after their creation, since the registration of callbacks does not become a part of the generated class's initialization phase but is instead performed manually by cglib after the class was already initialized by the JVM. This also means that classes created with cglib are not technically ready after their initialization and, for example, cannot be sent over the wire, since the callbacks would not exist for the class loaded on the target machine.

Depending on the registered interceptors, cglib might register additional fields, for example for the MethodInterceptor, where two private static fields (one holding a reflective Method and the other holding a MethodProxy) are registered per method that is intercepted in the enhanced class or any of its subclasses. Be aware that the MethodProxy makes excessive use of the FastClass, which triggers the creation of additional classes and is described in further detail below.

For all these reasons, be careful when using the Enhancer. And always register callback types defensively, since the MethodInterceptor will, for example, trigger the creation of additional classes and register additional static fields in the enhanced class. This is specifically dangerous since the callback variables are also stored as static variables in the enhanced class: this implies that the callback instances are never garbage collected (unless their ClassLoader is, which is unusual). This is particularly dangerous when using anonymous classes, which silently carry a reference to their outer class. Recall the example above:

```java
@Test
public void testFixedValue() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new FixedValue() {
    @Override
    public Object loadObject() throws Exception {
      return "Hello cglib!";
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
}
```

The anonymous subclass of FixedValue would become hard-referenced from the enhanced SampleClass, such that neither the anonymous FixedValue instance nor the class holding the @Test method would ever be garbage collected. This can introduce nasty memory leaks in your applications. Therefore, do not use non-static inner classes with cglib. (I only use them in this blog entry to keep the examples short.)

Finally, you should never intercept Object#finalize(). Due to the subclassing approach of cglib, intercepting finalize is implemented by overriding it, which is in general a bad idea.
Enhanced instances that intercept finalize will be treated differently by the garbage collector and will also be queued in the JVM's finalization queue. Also, if you (accidentally) create a hard reference to the enhanced class in your intercepted call to finalize, you have effectively created a noncollectable instance. This is in general nothing you want. Note that final methods are never intercepted by cglib. Thus, Object#wait, Object#notify and Object#notifyAll do not impose the same problems. Be however aware that Object#clone can be intercepted, which is something you might not want to do.

Immutable Bean

cglib's ImmutableBean allows you to create an immutability wrapper similar to, for example, Collections#unmodifiableSet. All changes of the underlying bean will be prevented by an IllegalStateException (however, not by an UnsupportedOperationException as recommended by the Java API). Looking at some bean

```java
public class SampleBean {

  private String value;

  public String getValue() {
    return value;
  }

  public void setValue(String value) {
    this.value = value;
  }
}
```

we can make this bean immutable:

```java
@Test(expected = IllegalStateException.class)
public void testImmutableBean() throws Exception {
  SampleBean bean = new SampleBean();
  bean.setValue("Hello world!");
  SampleBean immutableBean = (SampleBean) ImmutableBean.create(bean);
  assertEquals("Hello world!", immutableBean.getValue());
  bean.setValue("Hello world, again!");
  assertEquals("Hello world, again!", immutableBean.getValue());
  immutableBean.setValue("Hello cglib!"); // Causes exception.
}
```

As is obvious from the example, the immutable bean prevents all state changes by throwing an IllegalStateException. However, the state of the bean can still be changed by changing the original object. All such changes will be reflected by the ImmutableBean.

Bean Generator

The BeanGenerator is another bean utility of cglib. It will create a bean for you at run time:

```java
@Test
public void testBeanGenerator() throws Exception {
  BeanGenerator beanGenerator = new BeanGenerator();
  beanGenerator.addProperty("value", String.class);
  Object myBean = beanGenerator.create();
  Method setter = myBean.getClass().getMethod("setValue", String.class);
  setter.invoke(myBean, "Hello cglib!");
  Method getter = myBean.getClass().getMethod("getValue");
  assertEquals("Hello cglib!", getter.invoke(myBean));
}
```

As is obvious from the example, the BeanGenerator first takes some properties as name-value pairs. On creation, the BeanGenerator creates the corresponding getters and setters for you. This might be useful when another library expects beans which it resolves by reflection but you do not know these beans at run time. (An example would be Apache Wicket, which works a lot with beans.)

Bean Copier

The BeanCopier is another bean utility that copies beans by their property values. Consider another bean with properties similar to SampleBean:

```java
public class OtherSampleBean {

  private String value;

  public String getValue() {
    return value;
  }

  public void setValue(String value) {
    this.value = value;
  }
}
```

Now you can copy properties from one bean to another:

```java
@Test
public void testBeanCopier() throws Exception {
  BeanCopier copier = BeanCopier.create(SampleBean.class, OtherSampleBean.class, false);
  SampleBean bean = new SampleBean();
  bean.setValue("Hello cglib!");
  OtherSampleBean otherBean = new OtherSampleBean();
  copier.copy(bean, otherBean, null);
  assertEquals("Hello cglib!", otherBean.getValue());
}
```

without being restrained to a specific type.
The BeanCopier#copy method takes an (optionally used) Converter, which allows some further manipulation of each bean property. If the BeanCopier is created with false as the third argument, the Converter is ignored and can therefore be null.

Bulk Bean

A BulkBean allows using a specified set of a bean's accessors through arrays instead of method calls:

```java
@Test
public void testBulkBean() throws Exception {
  BulkBean bulkBean = BulkBean.create(SampleBean.class,
      new String[]{"getValue"},
      new String[]{"setValue"},
      new Class[]{String.class});
  SampleBean bean = new SampleBean();
  bean.setValue("Hello world!");
  assertEquals(1, bulkBean.getPropertyValues(bean).length);
  assertEquals("Hello world!", bulkBean.getPropertyValues(bean)[0]);
  bulkBean.setPropertyValues(bean, new Object[]{"Hello cglib!"});
  assertEquals("Hello cglib!", bean.getValue());
}
```

The BulkBean takes an array of getter names, an array of setter names and an array of property types as its arguments. The property values of an instance can then be extracted as an array by BulkBean#getPropertyValues(Object). Similarly, a bean's properties can be set by BulkBean#setPropertyValues(Object, Object[]).

Bean Map

This is the last bean utility within the cglib library. The BeanMap converts all properties of a bean to a String-to-Object Java Map:

```java
@Test
public void testBeanMap() throws Exception {
  SampleBean bean = new SampleBean();
  BeanMap map = BeanMap.create(bean);
  bean.setValue("Hello cglib!");
  assertEquals("Hello cglib!", map.get("value"));
}
```

Additionally, the BeanMap#newInstance(Object) method allows creating maps for other beans by reusing the same Class.

Key Factory

The KeyFactory allows the dynamic creation of keys that are composed of multiple values and can be used, for example, in Map implementations. For doing so, the KeyFactory requires some interface that defines the values to be used in such a key. This interface must contain a single method named newInstance that returns an Object. For example:

```java
public interface SampleKeyFactory {
  Object newInstance(String first, int second);
}
```

Now an instance of a key can be created by:

```java
@Test
public void testKeyFactory() throws Exception {
  SampleKeyFactory keyFactory = (SampleKeyFactory) KeyFactory.create(SampleKeyFactory.class);
  Object key = keyFactory.newInstance("foo", 42);
  Map<Object, String> map = new HashMap<Object, String>();
  map.put(key, "Hello cglib!");
  assertEquals("Hello cglib!", map.get(keyFactory.newInstance("foo", 42)));
}
```

The KeyFactory ensures a correct implementation of the Object#equals(Object) and Object#hashCode methods, such that the resulting key objects can be used in a Map or a Set. The KeyFactory is also used quite a lot internally in the cglib library.

Mixin

Some might already know the concept of the Mixin class from other programming languages such as Ruby or Scala (where mixins are called traits). cglib Mixins allow the combination of several objects into a single object.
However, in order to do so, those objects must be backed by interfaces:

```java
public interface Interface1 {
  String first();
}

public interface Interface2 {
  String second();
}

public class Class1 implements Interface1 {
  @Override
  public String first() {
    return "first";
  }
}

public class Class2 implements Interface2 {
  @Override
  public String second() {
    return "second";
  }
}
```

Now the classes Class1 and Class2 can be combined into a single class by an additional interface:

```java
public interface MixinInterface extends Interface1, Interface2 { /* empty */ }

@Test
public void testMixin() throws Exception {
  Mixin mixin = Mixin.create(
      new Class[]{Interface1.class, Interface2.class, MixinInterface.class},
      new Object[]{new Class1(), new Class2()});
  MixinInterface mixinDelegate = (MixinInterface) mixin;
  assertEquals("first", mixinDelegate.first());
  assertEquals("second", mixinDelegate.second());
}
```

Admittedly, the Mixin API is rather awkward, since it requires the classes used for a mixin to implement some interface, such that the problem could also be solved by non-instrumented Java.

String Switcher

The StringSwitcher emulates a String-to-int Java Map:

```java
@Test
public void testStringSwitcher() throws Exception {
  String[] strings = new String[]{"one", "two"};
  int[] values = new int[]{10, 20};
  StringSwitcher stringSwitcher = StringSwitcher.create(strings, values, true);
  assertEquals(10, stringSwitcher.intValue("one"));
  assertEquals(20, stringSwitcher.intValue("two"));
  assertEquals(-1, stringSwitcher.intValue("three"));
}
```

The StringSwitcher allows emulating a switch statement on Strings, as is possible with the built-in Java switch statement since Java 7. Whether using the StringSwitcher in Java 6 or earlier really adds a benefit to your code remains doubtful, however, and I would personally not recommend its use.

Interface Maker

The InterfaceMaker does what its name suggests: it dynamically creates a new interface.

```java
@Test
public void testInterfaceMaker() throws Exception {
  Signature signature = new Signature("foo", Type.DOUBLE_TYPE, new Type[]{Type.INT_TYPE});
  InterfaceMaker interfaceMaker = new InterfaceMaker();
  interfaceMaker.add(signature, new Type[0]);
  Class iface = interfaceMaker.create();
  assertEquals(1, iface.getMethods().length);
  assertEquals("foo", iface.getMethods()[0].getName());
  assertEquals(double.class, iface.getMethods()[0].getReturnType());
}
```

Unlike any other class of cglib's public API, the InterfaceMaker relies on ASM types. The creation of an interface in a running application will hardly make sense, since an interface only represents a type which can be used by a compiler to check types. It can however make sense when you are generating code that is to be used in later development.

Method Delegate

A MethodDelegate allows emulating a C#-like delegate to a specific method by binding a method call to some interface. For example, the following code would bind the SampleBean#getValue method to a delegate:

```java
public interface BeanDelegate {
  String getValueFromDelegate();
}

@Test
public void testMethodDelegate() throws Exception {
  SampleBean bean = new SampleBean();
  bean.setValue("Hello cglib!");
  BeanDelegate delegate = (BeanDelegate) MethodDelegate.create(bean, "getValue", BeanDelegate.class);
  assertEquals("Hello cglib!", delegate.getValueFromDelegate());
}
```

There are however some things to note: the factory method MethodDelegate#create takes exactly one method name as its second argument. This is the method the MethodDelegate will proxy for you.
There must be a method without arguments defined for the object which is given to the factory method as its first argument. Thus, the MethodDelegate is not as strong as it could be. The third argument must be an interface with exactly one method. The MethodDelegate implements this interface and can be cast to it. When the method is invoked, it will call the proxied method on the object that is the first argument. Furthermore, consider these drawbacks: cglib creates a new class for each proxy. Over time, this will litter up your permanent generation heap space. You cannot proxy methods that take arguments. If your interface takes arguments, the method delegation will simply not work without an exception being thrown (the return value will always be null). If your interface requires another return type (even if that is more general), you will get an IllegalArgumentException. Multicast Delegate The MulticastDelegate works a little differently from the MethodDelegate even though it aims at similar functionality. For using the MulticastDelegate, we require an object that implements an interface: public interface DelegatationProvider { void setValue(String value); } public class SimpleMulticastBean implements DelegatationProvider { private String value; public String getValue() { return value; } public void setValue(String value) { this.value = value; } } Based on this interface-backed bean we can create a MulticastDelegate that dispatches all calls to setValue(String) to several classes that implement the DelegatationProvider interface: @Test public void testMulticastDelegate() throws Exception { MulticastDelegate multicastDelegate = MulticastDelegate.create( DelegatationProvider.class); SimpleMulticastBean first = new SimpleMulticastBean(); SimpleMulticastBean second = new SimpleMulticastBean(); multicastDelegate = multicastDelegate.add(first); multicastDelegate = multicastDelegate.add(second); DelegatationProvider provider = (DelegatationProvider)multicastDelegate; provider.setValue("Hello world!"); assertEquals("Hello world!", first.getValue()); assertEquals("Hello world!", second.getValue()); } Again, there are some drawbacks: The objects need to implement a single-method interface. This sucks for third-party libraries and is awkward when you use cglib to do some magic where this magic gets exposed to the normal code. Also, you could easily implement your own delegate (without byte code though, but I doubt you win much over manual delegation). When your delegates return a value, you will receive only that of the last delegate you added. All other return values are lost (but retrieved at some point by the multicast delegate). Constructor Delegate A ConstructorDelegate allows you to create a byte-instrumented factory method. For that, we first require an interface with a single method newInstance which returns an Object and takes any number of parameters to be used for a constructor call of the specified class. 
For example, in order to create a ConstructorDelegate for the SampleBean, we require the following to call SampleBean's default (no-argument) constructor: public interface SampleBeanConstructorDelegate { Object newInstance(); } @Test public void testConstructorDelegate() throws Exception { SampleBeanConstructorDelegate constructorDelegate = (SampleBeanConstructorDelegate) ConstructorDelegate.create( SampleBean.class, SampleBeanConstructorDelegate.class); SampleBean bean = (SampleBean) constructorDelegate.newInstance(); assertTrue(SampleBean.class.isAssignableFrom(bean.getClass())); } Parallel Sorter The ParallelSorter claims to be a faster alternative to the Java standard library's array sorters when sorting arrays of arrays: @Test public void testParallelSorter() throws Exception { Integer[][] value = { {4, 3, 9, 0}, {2, 1, 6, 0} }; ParallelSorter.create(value).mergeSort(0); for(Integer[] row : value) { int former = -1; for(int val : row) { assertTrue(former < val); former = val; } } } The ParallelSorter takes an array of arrays and allows you to apply either a merge sort or a quick sort on every row of the array. Be careful when you use it, however: When using arrays of primitives, you have to call merge sort with explicit sorting ranges (e.g. ParallelSorter.create(value).mergeSort(0, 0, 3) in the example). Otherwise, the ParallelSorter has a pretty obvious bug where it tries to cast the primitive array to an Object[] array, which will cause a ClassCastException. If the array rows are uneven, the first argument will determine the length of which row to consider. Uneven rows will either lead to the extra values not being considered for sorting or to an ArrayIndexOutOfBoundsException. Personally, I doubt that the ParallelSorter really offers a time advantage. Admittedly, I have however not yet tried to benchmark it. If you tried it, I'd be happy to hear about it in the comments. Fast Class and Fast Members The FastClass promises a faster invocation of methods than the Java reflection API by wrapping a Java class and offering methods similar to the reflection API: @Test public void testFastClass() throws Exception { FastClass fastClass = FastClass.create(SampleBean.class); FastMethod fastMethod = fastClass.getMethod(SampleBean.class.getMethod("getValue")); SampleBean bean = new SampleBean(); bean.setValue("Hello cglib!"); assertEquals("Hello cglib!", fastMethod.invoke(bean, new Object[0])); } Besides the demonstrated FastMethod, the FastClass can also create FastConstructors but no fast fields. But how can the FastClass be faster than normal reflection? Java reflection is executed via JNI, where method invocations are carried out by some C code. The FastClass, on the other hand, creates byte code that calls the method directly from within the JVM. However, newer versions of the HotSpot JVM (and probably many other modern JVMs) know a concept called inflation: when a reflective method is executed often enough, the JVM replaces the JNI-based invocation with generated byte code, much like FastClass does. You can even control this behavior (at least on a HotSpot JVM) by setting the sun.reflect.inflationThreshold property to a lower value. (The default is 15.) This property determines after how many reflective invocations a JNI call should be substituted by a byte code instrumented version. I would therefore recommend against using FastClass on modern JVMs; it can however fine-tune performance on older Java virtual machines. 
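To see inflation at work, consider the following minimal sketch. This is an illustration only, assuming a HotSpot JVM (the sun.reflect.inflationThreshold property is HotSpot-specific) and reusing the SampleBean from the earlier examples:

import java.lang.reflect.Method;

public class InflationDemo {
    public static void main(String[] args) throws Exception {
        // Run with -Dsun.reflect.inflationThreshold=5 to trigger the substitution earlier.
        Method getValue = SampleBean.class.getMethod("getValue");
        SampleBean bean = new SampleBean();
        bean.setValue("Hello cglib!");
        for (int i = 0; i < 20; i++) {
            // Once the threshold is crossed, HotSpot silently swaps the JNI-based
            // accessor for a generated byte code accessor.
            getValue.invoke(bean);
        }
    }
}

With the threshold lowered like this, the later iterations of the loop already run through the inflated byte code accessor instead of JNI.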
cglib Proxy The cglib Proxy is a reimplementation of the Java Proxy class mentioned in the beginning of this article. It is intended to allow using the Java library's proxy in Java versions before Java 1.3 and differs only in minor details. The better documentation of the cglib Proxy can however be found in the Java standard library's Proxy javadoc, where an example of its use is provided. For this reason, I will skip a more detailed discussion of cglib's Proxy here. A Final Word of Warning After this overview of cglib's functionality, I want to speak a final word of warning. All cglib classes generate byte code which results in additional classes being stored in a special section of the JVM's memory: the so-called perm space. This permanent space is, as the name suggests, used for permanent objects that do not usually get garbage collected. This is however not completely true: Once a Class is loaded, it cannot be unloaded until the loading ClassLoader becomes available for garbage collection. This is only the case if the Class was loaded with a custom ClassLoader which is not a native JVM system ClassLoader. This ClassLoader can be garbage collected if itself, all Classes it ever loaded and all instances of all Classes it ever loaded become available for garbage collection. This means: If you create more and more classes throughout the life of a Java application and if you do not take care of the removal of these classes, you will sooner or later run out of perm space, which will result in your application's death at the hands of an OutOfMemoryError. Therefore, use cglib sparingly. However, if you use cglib wisely and carefully, you can really do amazing things with it that go beyond what you can do with non-instrumented Java applications. Lastly, when creating projects that depend on cglib, you should be aware of the fact that the cglib project is not as well maintained and active as it should be, considering its popularity. The missing documentation is a first hint; the often messy public API is a second. But then there are also broken deploys of cglib to Maven central. The mailing list reads like an archive of spam messages. And the release cycles are rather unstable. You might therefore want to have a look at javassist, the only real low-level alternative to cglib. Javassist comes bundled with a pseudo-Java compiler which allows you to create quite amazing byte code instrumentations without even understanding Java byte code. If you like to get your hands dirty, you might also like ASM, on top of which cglib is built. ASM comes with a great documentation of both the library and Java class files and their byte code. Note that these examples only run with cglib 2.2.2 and are not compatible with the newest release 3 of cglib. Unfortunately, I found that the newest cglib version occasionally produces invalid byte code, which is why I settled on an older version and also use this version in production. Also, note that most projects using cglib move the library to their own namespace in order to avoid version conflicts with other dependencies, as demonstrated for example by the Spring project. You should do the same with your project when making use of cglib. Tools like jarjar can help you with the automation of this good practice.
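To make the warning about class generation concrete, the following sketch will sooner or later die with exactly such an OutOfMemoryError. It is a deliberately pathological illustration, assuming the SampleBean from the earlier examples; it disables cglib's generated-class cache, which would normally prevent the duplicate generation:

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.NoOp;

public class PermSpaceExhaustion {
    public static void main(String[] args) {
        while (true) {
            Enhancer enhancer = new Enhancer();
            enhancer.setSuperclass(SampleBean.class);
            enhancer.setCallback(NoOp.INSTANCE);
            // Without the cache, every iteration generates and loads a brand new
            // class; none of them can ever be unloaded, so the perm space fills up.
            enhancer.setUseCache(false);
            enhancer.create();
        }
    }
}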
January 7, 2014
by Rafael Winterhalter
· 75,971 Views · 18 Likes
article thumbnail
Hands-on Angularjs: Building Single-Page Applications with Javascript
Welcome, dear reader. In this article, we'll talk about a web framework that has become popular in the construction of single-page web applications: Angularjs. But after all, what is a single-page application? Single-page applications Single-page applications, as the name implies, consist of applications where only a single "page" – or, as we also call it, HTML document – is sent to the client; after this initial load, only fragments of the page are reloaded, through Ajax requests, without ever making a full page reload. The main advantage of this web application model is that, by minimizing the data traffic between the client and the server, it provides the user with a highly dynamic application, with low "latency" between user actions on the interface. A point of attention, however, is that in this application model much of the "weight" of the application processing falls on the client side, so the capabilities of the device on which the user accesses the application can be a problem for the adoption of the model, especially if we are talking about applications accessed on mobile devices. Developed by Google, Angularjs brought features that allow you to build single-page applications in a well-structured way, through the use of javascript as an extension built on top of HTML pages. One of the advantages of the framework is that it integrates seamlessly into HTML, allowing the development team to, for example, reuse the pages of a prototype made by a web designer. Architecture The framework's architecture works on the concept of being an html extension of the page to which it is linked. It is a javascript library, imported within the pages, which – through the use of directives, a kind of attribute embedded within the html tags themselves, usually with the prefix "ng" – compiles all the code belonging to the framework, generating dynamic code – html, css and javascript – that the user consumes through the browser. For readers who are interested in knowing the architecture behind the framework more deeply, the presentation below speaks in detail about this topic: Angularjs architecture overview Layout Although the framework has high flexibility in the construction of layouts due to the use of pure html, it lacks ready-made layout options. To give the application a more pleasant graphical interface, Bootstrap enters the picture, providing CSS styles – plus pre-built behavior in javascript and dynamic html – that enable an even richer layout for the application. At the end of this article, besides the official angularjs site, you can also access the link to the official site of Bootstrap. The MVC model In the web world, the MVC (Model – View – Controller) pattern is well known. In this pattern, the web application is divided into 3 layers with different responsibilities: View: In this layer, we have the code related to the presentation to the end user, with pages and rules related to navigation, such as controlling the screens of a wizard-style registration, for example. Controller: This layer holds the code that bridges the navigation and the source of application data, represented by the model layer. Processing and business rules typically belong to this layer. Model: In this layer is the code responsible for performing the persistence of the application data. In a traditional web application, we commonly have a DBMS as a data source. Speaking in the context of angularjs, as we see below, we have the view layer represented by the html pages. 
These pages communicate with the controller layer through the directives explained at the beginning of our article, invoking the framework's controllers, which consist of a series of javascript functions coded by the developer. These functions use a binding layer, provided by the framework, called $scope, which is populated by the controller and displayed by the view, representing the model layer. Typically, in the architecture of an angularjs system, we have the model layer being filled through REST services. The figure below illustrates the interaction between these layers: Hands-on For this hands-on, we will use the Wildfly 8.1.0 server, angularjs and Bootstrap. At the time of this article, the latest stable version of angularjs is 1.2.26 and of Bootstrap is 3.2.0. As IDE, I am using Eclipse Luna. First, the reader must create a project of type "Dynamic Web Project" by selecting the wizard image below. In fact, it would be perfectly possible to use a static web project, but we'll use the dynamic one only to facilitate deployment on the server. Following the wizard, remember to select the Wildfly runtime, as indicated below, and select the checkbox that asks for the creation of the "web.xml" descriptor on the last page of the wizard. At the end, we will have a project structure like the one below: As a last step of project configuration, we will add the project to the automatically deployed application list on our Wildfly server. To do this, select the server in the servers perspective, select "add or remove" and move the application to the list of configured applications, as follows: Including angularjs and Bootstrap in the application To include angularjs and Bootstrap in our application, simply include their contents – consisting of js files, css, etc. – inside the "WebContent" folder, generating the project structure below: PS: This structure aims only to provide a quick and easy way to set up the environment for our learning. In a real application, the developer has the freedom not to include all angular files, including only those whose resources will be used in the project. Starting our development, we will create a simple page containing a single-page application that consists of a list of tasks, with options to add new tasks and to filter tasks already completed. Keep in mind that the goal of this small application is only to show the basic structure of angularjs. In a real application, the js code should be structured in directories, ensuring readability and maintainability of the code. Thus, we come to the implementation. To deploy it, we will create an html page called "index.html", and put the code below: Tarefas de {{todo.user}} Add DescriptionDone{{item.action}} Show Complete As we can see, we have a simple html page. First, we insert the "ng-app" directive, which initializes the framework on the page, and import the files needed for the operation of angularjs and Bootstrap: Following is the creation of a module. An angularjs module groups other components, such as filters and controllers, constituting a unit similar to a package that can be imported into other application modules. For the initial data source, we put a JSON structure in a JavaScript variable, which is inserted into the binding layer later. In addition, we also have to create a filter. Filters, as the name implies, are created to provide a means of filtering the data of a listing. 
Later on we will see its use in practice: Once we have our populated binding, it is easy for us to display its values, as in the example below, where we show the value of "user". We also define a form for our inclusion of tasks: Tarefas de {{todo.user}} Add Finally, we have the proper view of the tasks. To do this, we use the "ng-repeat" directive. The reader will notice that we have made use of our filter, and we ask angularjs to order our list by the "action" value. In addition, we can also see the binding for changing the filter, through the checkboxes to enable/disable filtering, in addition to the marking of completed tasks: DescriptionDone{{item.action}} Show Complete The final screen in operation can be seen below: Conclusion And so we conclude our article on angularjs. With its design flexibility and easy-to-learn structure, angularjs – together with the single-page application model – is a powerful tool that every web developer should have in his toolbox. Thanks to everyone who supported me in this article, until next time. Links Source-code Angularjs Bootstrap Wildfly
January 7, 2014
by Alexandre Lourenco
· 3,608 Views
article thumbnail
Sonar Installation and Eclipse Plugin
This document tries to help you install Sonar, analyze your project with your Sonar installation, integrate with Eclipse, clean up violations dynamically, and practice better coding. Table of Contents Sonar Installation Download Sonar Unzip and Install Run Sonar Sonar Console Access your Sonar installation Generate Sonar Report Update your POM with SONAR configurations Example Access your project in Sonar Integrate SONAR with Eclipse Eclipse Sonar Plug-In Installation Eclipse Integration (To install this plugin in the Eclipse IDE) - With Eclipse Market Place Eclipse Integration (To install this plugin in the Eclipse IDE) - With Eclipse Software Update Configure Sonar in your Eclipse Link your project for the first time Analyze and clean up the code violations Run Sonar Analysis in Local Sonar Installation Download Sonar Download Sonar here http://dist.sonar.codehaus.org/sonar-3.5.1.zip and unzip the download to your favorite folder. Unzip and Install After unzipping, the folder structure will look something like the following: Figure 1 – Sonar Dir Structure Run Sonar Depending on your OS, you need to run the corresponding executable; for instance, if you are running 64-bit Linux x86, you need to run start.sh Figure 2 – Run Sonar Sonar Console After you start Sonar, you will see some info as follows: Figure 3 - Sonar Console Access your Sonar installation Now you can browse your Sonar installation at http://localhost:9000 Generate Sonar Report Update your POM with SONAR configurations After we have Sonar installed, we can generate the reports for any Maven project by adding the following lines in your project pom.xml (the Sonar host goes in your properties section): Figure 4 - POM XML for Sonar Generation Example Let's take the example of project-common; do the following steps: · Checkout the latest code from the repository to your workspace · Do mvn clean install · Modify your pom.xml to have the following under the properties section · http://localhost:9000/ · Save the file · Do mvn sonar:sonar in your command line / terminal · You will see some messages as follows. Figure 5 - Sonar report Generation - I Note: after a few minutes (depending on the size of the modules, the Sonar report can take even longer) Figure 6 - Sonar Report Generation - II Finally you will see the following, which indicates the Sonar reporting is complete: Figure 7 - Sonar Report Generation Successful Access your project in Sonar Now go to http://localhost:9000 and you will see the report for the project you ran: Figure 8 - Sonar Project Report at your Local Integrate SONAR with Eclipse Eclipse Sonar Plug-In Installation Eclipse Integration (To install this plugin in the Eclipse IDE) - With Eclipse Market Place Figure 9 - Sonar Eclipse Plug-in Install (Market Place) Figure 10 - Sonar Eclipse Plug-in Install (Market Place) II Eclipse Integration (To install this plugin in the Eclipse IDE) - With Eclipse Software Update Go to Help > Install New Software... This should display the Install dialog box. Paste the Update Site URL (http://dist.sonar-ide.codehaus.org/eclipse/) into the field Work with and press Enter. 
This should display the list of available plugins and components: Figure 11- Sonar Eclipse Plug-in Install (With Install New Software Menu) Choose Sonar Java, follow the steps and install the plugin. Note: Please make sure the project that you want to associate with Sonar has already been analyzed in your Sonar installation. Configure Sonar in your Eclipse Configure your local/remote Sonar in your Eclipse: Go to Window > Preferences > Sonar > Servers. Sonar Eclipse is pre-configured to access a local Sonar server listening on http://localhost:9000/. You can edit this server, delete it or add a new one. Figure 12 - Configure Sonar Server in Eclipse Link your project for the first time Once the Sonar server is defined, the next step is to link your Eclipse projects with projects defined and analyzed on this Sonar server. To do so, right-click on the project in the Project Explorer, and then Configure > Associate with Sonar...: Figure 13 - Configure / Associate your Eclipse Project with Sonar In the Sonar project text field, start typing the name of the project and select it in the list box: Figure 14 - Associate your Eclipse Project with Sonar II Click Finish. Your project is now associated with one analyzed on your Sonar server. Analyze and clean up the code violations Do local analysis and clean up the violations: Figure 15 - Configure Modules Figure 16 - configure sonar modules from Eclipse Note: Please make sure you have started your local Sonar server (as described in the Run Sonar section), otherwise you would not be able to see the right Sonar project that you intend to configure. Run Sonar Analysis in Local Figure 17.a – Set Sonar Analysis on Local Mode Figure 17:b - Run Sonar Analysis on Local Figure 18 - sonar violation analysis console Figure 19 - Sonar violation analysis console II Figure 20 - Sonar violations Markers
January 7, 2014
by Hari Subramanian
· 226,433 Views · 3 Likes
article thumbnail
Hunting for an SWT Test Framework? Say Hello to Red Deer
This is the first in a series of posts on the new “Red Deer” (https://github.com/jboss-reddeer/reddeer) open source testing framework for Eclipse. In this post, we’ll introduce Red Deer and take a look at some of the advantages that it offers by building a sample test program from scratch. Some of the features that Red Deer offers are: An easy to use, high-level API for testing standard Eclipse components Support for creating custom extensions for your own applications A requirements validation mechanism to assist you in configuring complex tests Eclipse tooling to assist in creating new projects A record and playback tool to enable you to quickly create automated tests An integration with Selenium for testing web based applications Support for running tests in a Jenkins CI environment Note that as of this writing, Red Deer is in an incubation stage. The current release is at level 0.5. The target date for the 1.0 release of Red Deer is late 2014. But, as a community-based, open source project, now is a great time to try Red Deer and make suggestions or even contribute code! A Look at Red Deer’s Architecture The Red Deer project itself comprises utilities and the API that supports the development and execution of automated tests. The API (the parts of the above diagram that are enclosed in dashed line boxes) can be thought of as having three layers: The top layer consists of extensions to Red Deer’s abstract classes or implementations for Eclipse components such as Views, Editors, Wizards, or Shells. For example, if you are writing tests for a feature that uses a custom Eclipse View, you can extend Red Deer’s View class by adding support for the specific functions of the feature. The advantage that this API layer gives you is that your test programs do not have to focus on manipulating the individual UI elements directly to perform operations. Your programs can instead instantiate an instance of an Eclipse component such as a View, and then use that instance’s methods to perform operations on the View. This layer of abstraction makes your test programs easier to write, understand, and maintain. The middle layer consists of the Red Deer implementations for SWT UI elements such as: Button, Combo, Label, Menu, Shell, TabItem, Table, ToolBar, Tree. This API layer supports the API’s higher level by providing the building blocks for the API’s Views, Editors, Shells, and Wizards. This middle layer of the API also provides Red Deer packages that enable your tests to enforce requirements, so that necessary setup tasks are performed before a test is run. The bottom layer consists of Red Deer packages that support the execution of tests such as: Conditions, Matchers, Widgets, Workbench, and Red Deer extensions to JUnit. What Makes Red Deer Different from other Tools? A Layer of Abstraction The top-most layer of the API enables you to instantiate Eclipse UI elements as objects, and then manipulate them through their methods. The resulting code is easier to read and maintain, instead of being brittle and subject to failures when the UI changes. For example, for a test that has to open a view and press a button, without Red Deer, the test would have to navigate the top level menu, find the view menu, then the view type in that menu, then find the view open dialog, then locate the “OK” button, etc. Your test would have to spend a lot of time navigating through the UI elements before it could even begin to perform the test’s steps. 
With Red Deer, the code to open a view (in this case, the servers view) is simply: ServersView view = new ServersView(); view.open(); Furthermore, within that ServersView, your test program can perform operations on the View through methods which are defined in the view (and are incidentally also well debugged by the Red Deer team), instead of having to explicitly locate and manipulate the UI elements directly. For example, to obtain a list of all the servers, instead of locating the UI tree that contains the server list and extracting that list of servers into an array, your Red Deer program can simply call the “getServers()” method. Likewise, the code to open a PackageExplorer, and then select a project within that PackageExplorer is as follows: PackageExplorer packageExplorer = new PackageExplorer(); packageExplorer.open(); packageExplorer.getProject("myTestProject").select(); And, the code to retrieve all the projects within that PackageExplorer is simply: packageExplorer.getProjects(); The result is that your tests are easier to write and maintain, and you can focus on testing your application’s logic instead of writing brittle code to navigate through the application. Installing Red Deer The only prerequisites to using Red Deer are Eclipse and Java. In this post, we’ll use Eclipse Kepler and OpenJDK 1.7, running on Red Hat Enterprise Linux (RHEL) 6. To install Red Deer 0.4 (the latest stable milestone version as of this writing) follow these steps: Open up Eclipse Navigate to: Help->Install New Software Define a new download site using the Red Deer update site URL: http://download.jboss.org/jbosstools/updates/stable/kepler/core/reddeer/0.4.0/ Select Red Deer, click on the Finish button and Red Deer will install Now that you have Red Deer installed, let’s move on to building a new Red Deer test. Building your First Red Deer Test To create a new Red Deer test project, you make use of the Red Deer UI tooling and select New->Project->Other->Red Deer Test: Before we move on, let’s take a look at the META-INF/MANIFEST.MF file that is created in the project: Manifest-Version: 1.0 Bundle-ManifestVersion: 2 Bundle-Name: com.example.reddeer.sample Bundle-SymbolicName: com.example.reddeer.sample;singleton:=true Bundle-Version: 1.0.0.qualifier Bundle-ActivationPolicy: lazy Bundle-Vendor: Sample Co Bundle-RequiredExecutionEnvironment: JavaSE-1.6 Require-Bundle: org.junit, org.jboss.reddeer.junit, org.jboss.reddeer.swt, org.jboss.reddeer.eclipse The line we’re interested in is the final line in the file. These are the bundles that are required by Red Deer. After the empty project is created by the wizard, you can define a package and create a test class. Here's the code for a minimal functional test. The test will verify that the Eclipse configuration is not empty. 
package com.example.reddeer.sample; import static org.junit.Assert.assertFalse; import java.util.List; import org.jboss.reddeer.swt.api.TreeItem; import org.jboss.reddeer.swt.impl.button.PushButton; import org.jboss.reddeer.swt.impl.menu.ShellMenu; import org.jboss.reddeer.swt.impl.tree.DefaultTree; import org.junit.Test; import org.junit.runner.RunWith; import org.jboss.reddeer.junit.runner.RedDeerSuite; @RunWith(RedDeerSuite.class) public class SimpleTest { @Test public void TestIt() { new ShellMenu("Help", "About Eclipse Platform").select(); new PushButton("Installation Details").click(); DefaultTree ConfigTree = new DefaultTree(); List<TreeItem> ConfigItems = ConfigTree.getAllItems(); assertFalse ("The list is empty!", ConfigItems.isEmpty()); for (TreeItem item : ConfigItems) { System.out.println ("Found: " + item.getText()); } } } After you save the test's source file, you can run the test. To run the test, select the Run As->Red Deer Test option: And - there's the green bar! Simplifying Tests with Requirements Red Deer requirements enable you to define actions that you want to happen before a test is executed. The advantage of using requirements is that you define the actions with annotations instead of using a @BeforeClass method. The result is that your test code is easier to read and maintain. The biggest difference between a Red Deer requirement and the @BeforeClass annotation from the JUnit framework is that if a requirement cannot be fulfilled, the test is not executed. Like everything else in Red Deer, you can make use of predefined requirements, or you can extend the feature by adding your own custom requirements. These custom requirements can be made complex and, for convenience, can be stored in external properties files. (We’ll take a look at defining custom requirements in a later post in this series when we examine how to create and contribute extensions to Red Deer.) The current milestone release of Red Deer provides predefined requirements that enable you to clean out your current workspace and open a perspective. Let’s add these to our example. 
To do this, we need to add these import statements: import org.jboss.reddeer.eclipse.ui.perspectives.JavaBrowsingPerspective; import org.jboss.reddeer.requirements.cleanworkspace.CleanWorkspaceRequirement.CleanWorkspace; import org.jboss.reddeer.requirements.openperspective.OpenPerspectiveRequirement.OpenPerspective; And these annotations: @CleanWorkspace @OpenPerspective(JavaBrowsingPerspective.class) And, we also have to add a reference to org.jboss.reddeer.requirements to the required bundle list in our example’s MANIFEST.MF file: Require-Bundle: org.junit, org.jboss.reddeer.junit, org.jboss.reddeer.swt, org.jboss.reddeer.eclipse, org.jboss.reddeer.requirements When we’re done, our example looks like this: package com.example.reddeer.sample; import static org.junit.Assert.assertFalse; import java.util.List; import org.jboss.reddeer.swt.api.TreeItem; import org.jboss.reddeer.swt.impl.button.PushButton; import org.jboss.reddeer.swt.impl.menu.ShellMenu; import org.jboss.reddeer.swt.impl.tree.DefaultTree; import org.junit.Test; import org.junit.runner.RunWith; import org.jboss.reddeer.junit.runner.RedDeerSuite; import org.jboss.reddeer.eclipse.ui.perspectives.JavaBrowsingPerspective; import org.jboss.reddeer.requirements.cleanworkspace.CleanWorkspaceRequirement.CleanWorkspace; import org.jboss.reddeer.requirements.openperspective.OpenPerspectiveRequirement.OpenPerspective; @RunWith(RedDeerSuite.class) @CleanWorkspace @OpenPerspective(JavaBrowsingPerspective.class) public class SimpleTest { @Test public void TestIt() { new ShellMenu("Help", "About Eclipse Platform").select(); new PushButton("Installation Details").click(); DefaultTree ConfigTree = new DefaultTree(); List<TreeItem> ConfigItems = ConfigTree.getAllItems(); assertFalse ("The list is empty!", ConfigItems.isEmpty()); for (TreeItem item : ConfigItems) { System.out.println ("Found: " + item.getText()); } } } Notice how we were able to add those functions to the test code, while only adding a very small amount of actual new code? Yes, it can pay to be a lazy programmer. ;-) What’s Next? What’s next for Red Deer is its continued development as it progresses through its incubation stage until its 1.0 release. What’s next for this series of posts will be discussions about: The Red Deer Recorder - To enable you to capture manual actions and convert them into test programs How you can Extend Red Deer - To provide test coverage for your plugins’ specific functions. And How you can Contribute these extensions to the Red Deer project. How you can Define Complex Requirements - To enable you to perform setup tasks for your tests. Red Deer’s Integration with Selenium - To enable you to test web interfaces provided by your plugins. Running Red Deer tests with Jenkins - To enable you to take advantage of Jenkins’ Continuous Integration (CI) test framework. Author’s Acknowledgements I’d like to thank all the contributors to Red Deer for their vision and contributions. It’s a new project, but it is growing fast! The contributors (in alphabetic order) are: Stefan Bunciak, Radim Hopp, Jaroslav Jankovic, Lucia Jelinkova, Marian Labuda, Martin Malina, Jan Niederman, Vlado Pakan, Jiri Peterka, Andrej Podhradsky, Milos Prchlik, Radoslav Rabara, Petr Suchy, and Rastislav Wagner.
January 7, 2014
by Len DiMaggio
· 7,473 Views
article thumbnail
Bulk Fetching with Hibernate
If you need to process large database result sets from Java, you can opt for JDBC to give you the low-level control required. On the other hand, if you are already using an ORM in your application, falling back to JDBC might imply some extra pain. You would be losing features such as optimistic locking, caching, automatic fetching when navigating the domain model and so forth. Fortunately most ORMs, like Hibernate, have some options to help you with that. While these techniques are not new, there are a couple of possibilities to choose from. A simplified example: let's assume we have a table (mapped to class "DemoEntity") with 100.000 records. Each record consists of a single column (mapped to the property "property" in DemoEntity) holding some random alphanumerical data of about ~2KB. The JVM is run with -Xmx250m. Let's assume that 250MB is the overall maximum memory that can be assigned to the JVM on our system. Your job is to read all records currently in the table, do some not further specified processing, and finally store the result. We'll assume that the entities resulting from our bulk operation are not modified. To start we'll try the obvious first, performing a query to simply retrieve all data: new TransactionTemplate(txManager).execute(new TransactionCallback<Void>() { @Override public Void doInTransaction(TransactionStatus status) { Session session = sessionFactory.getCurrentSession(); List<DemoEntity> demoEntities = (List<DemoEntity>) session.createQuery("from DemoEntity").list(); for(DemoEntity demoEntity : demoEntities){ //Process and write result } return null; } }); After a couple of seconds: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded Clearly this won't cut it. To fix this we will switch to Hibernate scrollable result sets, which most developers are probably aware of. The above example instructs Hibernate to execute the query, map the entire results to entities and return them. When using scrollable result sets, records are transformed to entities one at a time: new TransactionTemplate(txManager).execute(new TransactionCallback<Void>() { @Override public Void doInTransaction(TransactionStatus status) { Session session = sessionFactory.getCurrentSession(); ScrollableResults scrollableResults = session.createQuery("from DemoEntity").scroll(ScrollMode.FORWARD_ONLY); int count = 0; while (scrollableResults.next()) { if (++count > 0 && count % 100 == 0) { System.out.println("Fetched " + count + " entities"); } DemoEntity demoEntity = (DemoEntity) scrollableResults.get()[0]; //Process and write result } return null; } }); After running this we get: ... Fetched 49800 entities Fetched 49900 entities Fetched 50000 entities Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded Although we are using a scrollable result set, every returned object is an attached object and becomes part of the persistence context (aka session). The result is actually the same as our first example in which we used "session.createQuery("from DemoEntity").list()". However, with that approach we had no control; everything happens behind the scenes and you get a list back with all the data once Hibernate has done its job. Using a scrollable result set, on the other hand, gives us a hook into the retrieval process and allows us to free up memory when needed. As we have seen, it does not free up memory automatically: you have to instruct Hibernate to actually do it. 
The following options exist: Evicting the object from the persistence context after processing it Clearing the entire session every now and then We will opt for the first. In the above example, under line 13 (//Process and write result) we'll add: session.evict(demoEntity); Important: If you were to perform any modification to the entity (or entities it has associations with that are cascade-evicted alongside), make sure to flush the session PRIOR to evicting or clearing, otherwise statements held back because of Hibernate's write-behind will not be sent to the database. Evicting or clearing does not remove the entities from the second level cache. If you enabled the second level cache and are using it, and you want to remove them as well, use the desired sessionFactory.getCache().evictXxx() method. From the moment you evict an entity, it will no longer be attached (no longer associated with a session). Any modification done to the entity at that stage will no longer be reflected to the database automatically. If you are using lazy loading, accessing any property that was not loaded prior to the eviction will yield the famous org.hibernate.LazyInitializationException. So basically, make sure the processing for that entity is done (or that it is at least initialized for further needs) before you evict or clear. After we run the application again, we see that it now executes successfully: ... Fetched 99800 entities Fetched 99900 entities Fetched 100000 entities Btw, you can also set the query read-only, allowing Hibernate to perform some extra optimizations: ScrollableResults scrollableResults = session.createQuery("from DemoEntity").setReadOnly(true).scroll(ScrollMode.FORWARD_ONLY); Doing this only gives a very marginal difference in memory usage; in this specific test setup it enabled us to read about 300 extra entities with the given amount of memory. Personally I would not use this feature merely for memory optimizations alone but only if it suits your overall immutability strategy. With Hibernate you have different options to make entities read-only: on the entity itself, the overall session read-only and so forth. Setting read-only on the query individually is probably the least preferred approach (e.g. entities loaded in the session before will remain unaffected and possibly modifiable; lazy associations will be loaded modifiable even if the root objects returned by the query are read-only). Ok, we were able to process our 100.000 records, life is good. But as it turns out, Hibernate has another option for bulk operations: the stateless session. You can obtain a scrollable result set from a stateless session the same way as from a normal session. A stateless session lies directly above JDBC. Hibernate will run in nearly "all features disabled" mode. This means no persistence context, no 2nd level caching, no dirty detection, no lazy loading, basically no nothing. From the javadoc: /** * A command-oriented API for performing bulk operations against a database. * A stateless session does not implement a first-level cache nor interact with any * second-level cache, nor does it implement transactional write-behind or automatic * dirty checking, nor do operations cascade to associated instances. Collections are * ignored by a stateless session. Operations performed via a stateless session bypass * Hibernate's event model and interceptors. Stateless sessions are vulnerable to data * aliasing effects, due to the lack of a first-level cache. 
For certain kinds of * transactions, a stateless session may perform slightly faster than a stateful session. * * @author Gavin King */ The only thing it does is transform records to objects. This might be an appealing alternative because it helps you get rid of that manual evicting/flushing: new TransactionTemplate(txManager).execute(new TransactionCallback<Void>() { @Override public Void doInTransaction(TransactionStatus status) { sessionFactory.getCurrentSession().doWork(new Work() { @Override public void execute(Connection connection) throws SQLException { StatelessSession statelessSession = sessionFactory.openStatelessSession(connection); try { ScrollableResults scrollableResults = statelessSession.createQuery("from DemoEntity").scroll(ScrollMode.FORWARD_ONLY); int count = 0; while (scrollableResults.next()) { if (++count > 0 && count % 100 == 0) { System.out.println("Fetched " + count + " entities"); } DemoEntity demoEntity = (DemoEntity) scrollableResults.get()[0]; //Process and write result } } finally { statelessSession.close(); } } }); return null; } }); Besides the fact that the stateless session has the most optimal memory usage, using it has some side effects. You might have noticed that we are opening a stateless session and closing it explicitly: there is no sessionFactory.getCurrentStatelessSession() nor (at the time of writing) any Spring integration for managing the stateless session. Opening a stateless session allocates a new java.sql.Connection by default (if you use openStatelessSession()) to perform its work and therefore indirectly spawns a second transaction. You can mitigate these side effects by using the Hibernate Work API as in the example, which supplies the current Connection and passes it along to openStatelessSession(Connection connection). Closing the session in the finally block has no impact on the physical connection since that is captured by the Spring infrastructure: only the logical connection handle is closed, and a new logical connection handle was created when opening the stateless session. Also note that you have to deal with closing the stateless session yourself and that the above example is only good for read-only operations. From the moment you are going to modify using the stateless session, there are some more caveats. As said before, Hibernate runs in "all features disabled" mode and as a direct consequence entities are returned in detached state. For each entity you modify, you'll have to call statelessSession.update(entity) explicitly. First I tried this for modifying an entity: new TransactionTemplate(txManager).execute(new TransactionCallback<Void>() { @Override public Void doInTransaction(TransactionStatus status) { sessionFactory.getCurrentSession().doWork(new Work() { @Override public void execute(Connection connection) throws SQLException { StatelessSession statelessSession = sessionFactory.openStatelessSession(connection); try { DemoEntity demoEntity = (DemoEntity) statelessSession.createQuery("from DemoEntity where id = 1").uniqueResult(); demoEntity.setProperty("test"); statelessSession.update(demoEntity); } finally { statelessSession.close(); } } }); return null; } }); The idea is that we open a stateless session with the existing database Connection. As the StatelessSession javadoc indicates that no write-behind occurs, I was convinced that each statement performed by the stateless session would be sent directly to the database. 
Eventually, when the transaction (started by the TransactionTemplate) was committed, the results would become visible in the database. However, Hibernate does BATCH statements using a stateless session. I'm not 100% sure what the difference is between batching and write-behind, but the result is the same and thus contradictory to the javadoc, as statements are queued and flushed at a later time. So, if you don't do anything special, statements that are batched will not be flushed, and this is what happened in my case: the "statelessSession.update(demoEntity);" was batched and never flushed. One way to force the flush is to use the Hibernate transaction API: StatelessSession statelessSession = sessionFactory.openStatelessSession(); statelessSession.beginTransaction(); ... statelessSession.getTransaction().commit(); ... While this works, you probably don't want to start controlling your transactions programmatically just because you are using a stateless session. Also, doing this we are again running our stateless session work in a second-transaction scenario, since we didn't pass along our Connection and thus a new database connection will be acquired. The reason we can't pass along the outer Connection is that if we committed the inner transaction (the "stateless session transaction") and it used the same connection as the outer transaction (started by the TransactionTemplate), it would break the outer transaction's atomicity, as statements from the outer transaction sent to the database would be committed along with the inner transaction. So not passing along the connection means opening a new connection and thus creating a second transaction. A better alternative would be to just trigger Hibernate to flush the stateless session. However, StatelessSession has no "flush" method to manually trigger a flush. A solution here is to depend a bit on the Hibernate internal API. This solution makes the manual transaction handling and the second transaction obsolete: all statements become part of our (one and only) outer transaction: StatelessSession statelessSession = sessionFactory.openStatelessSession(connection); try { DemoEntity demoEntity = (DemoEntity) statelessSession.createQuery("from DemoEntity where id = 1").uniqueResult(); demoEntity.setProperty("test"); statelessSession.update(demoEntity); ((TransactionContext) statelessSession).managedFlush(); } finally { statelessSession.close(); } Fortunately there is an even better solution, very recently posted on the Spring JIRA: https://jira.springsource.org/browse/SPR-2495 This is not yet part of Spring, but the factory bean implementation is pretty straightforward: StatelessSessionFactoryBean.java When using this you can simply inject the StatelessSession: @Autowired private StatelessSession statelessSession; It will inject a stateless session proxy which is equivalent to the way the normal "current" session works (with the minor difference that you inject a SessionFactory and need to obtain the currentSession each time). When the proxy is invoked, it will look up the stateless session bound to the running transaction. If none exists already, it will create one with the same connection as the normal session (like we did in the example) and register a custom transaction synchronization for the stateless session. When the transaction is committed, the stateless session is flushed thanks to the synchronization and finally closed. 
Using this you can inject the stateless session directly and use it as a current session (or the same way as you would inject a JPA PersistenceContext, for that matter). This relieves you from dealing with the opening and closing of the stateless session and from having to find one way or another to make it flush. The implementation is JPA-aimed, but the JPA part is limited to obtaining the physical connection in obtainPhysicalConnection(). You can easily leave out the EntityManagerFactory and get the physical connection directly from the Hibernate session. Very careful conclusion: it is clear that the best approach will depend on your situation. If you use the normal session, you will have to deal with eviction yourself when reading or persisting entities. Besides the fact you have to do this manually, it might also impact further use of the session if you have a mixed transaction: you perform both 'bulk' and 'normal' operations in the same transaction. If you continue with the normal operations, you will have detached entities in your session which might lead to unexpected results (as dirty detection will no longer work and so forth). On the other hand you will still have the major Hibernate benefits (as long as the entity isn't evicted) such as lazy loading, caching, dirty detection and the like. Using the stateless session at the time of writing requires some extra attention on managing it (opening, closing and flushing), which can also be error-prone. Assuming you can proceed with the proposed factory bean, you have a very bare-bones session which is separate from your normal session but still participates in the same transaction. With this you have a powerful tool to perform bulk operations without having to think about memory management. The downside is that you don't have any other Hibernate functionality available.
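As a final illustration, here is a minimal sketch of the second option mentioned earlier (clearing the entire session every now and then), under the same read-only assumptions as the examples above; the batch size of 100 is an arbitrary choice:

new TransactionTemplate(txManager).execute(new TransactionCallback<Void>() {
    @Override
    public Void doInTransaction(TransactionStatus status) {
        Session session = sessionFactory.getCurrentSession();
        ScrollableResults scrollableResults = session.createQuery("from DemoEntity").scroll(ScrollMode.FORWARD_ONLY);
        int count = 0;
        while (scrollableResults.next()) {
            DemoEntity demoEntity = (DemoEntity) scrollableResults.get()[0];
            //Process and write result
            if (++count % 100 == 0) {
                // If anything was modified, flush PRIOR to clearing,
                // otherwise pending statements are lost (see the warning above).
                session.flush();
                session.clear();
            }
        }
        return null;
    }
});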
January 6, 2014
by Koen Serneels
· 90,179 Views · 14 Likes
article thumbnail
Introduction to Codenvy
what is codenvy exactly? well, their website states: codenvy is a cloud environment for coding, building, and debugging apps. basically, it’s an ide in the cloud (“ide as a service?”) accessible by all the major browsers. it started out as an additional feature to the exo platform in early 2009 and gained a lot of traction after the first paas (openshift) and git integration was added mid-2011. codenvy targets me as a (java) software developer to run and debug applications in their hosted cloud ide, while being able to share and collaborate during development and finally publish to a repository – e.g. git – or a number of deployment platforms – e.g. amazon, openshift or google app engine. i first encountered their booth at javaone last september, but they couldn’t demo their product right there on the spot over the wifi, because their on-line demo workspace never finished loading. well, i got the t-shirt instead then, but now’s the time to see what codenvy has in store as a cloud ide. signing up signing up took 3 seconds. all you have to do is go to codenvy.com, use the “sign up” button, choose an email address and a name for your workspace, confirm the email they’ll send you and you’re done. the “workspace” holds all your projects and is part of the url codenvy will create for you, like “https://codenvy.com/ide/”. although not very clear during the registration process – which of course nowadays is usually as minimalistic as can be – it seems that i’ve signed up for codenvy’s free community plan, which gives me an unlimited number of public projects. you can even start coding without registration. after confirming the registration mail, i’m in. finally i end up in the browser where my (empty) workspace has been opened. empty workspace a few options are possible from here on, as seen in the figure above: create a new project from scratch – generate an empty project from predefined project types import from github – import projects from your github account clone a git repository – create a new project from any public git repository browse documentation invite people – get team members on board support – questions, feedback and troubleshooting let’s… create a new project from scratch this option allows you to name the new project – e.g. “myproject”, choose a technology and a paas. the technology is a defined set of languages or frameworks to develop with. available technologies at the moment the technologies are: java jar java war java spring javascript ruby on rails python php node.js android maven multi-module at the time of writing java 1.6 is supported. available paas at the moment the available platforms are: amazon webservices (aws) elastic beanstalk savvis cloud appfrog cloudbees google app engine (gae) heroku manymo android emulator red hat’s openshift none depending on the choice of technology, one or more paas options become available. a single jar can not be deployed onto any of the platforms, leaving only the option “none” available. a java web application (war) can be deployed onto any number of platforms, except heroku and manymo. node.js can only be deployed to openshift. creating a simple jar project after having selected a jar (and no platform) one can select a project template, e.g. 
if web application (war) had been selected, codenvy would present project templates, such as google app engine java project illustrating simple examples that use the search api, java web project with datasource usage or a demonstration of accessing amazon s3 buckets using the java sdk. the jar technology has only one project: simple jar project. after having finished the wizard, our jar project has been created in our workspace. we’ll see two views of our project: a project explorer and a package explorer. project- and package explorer what we can see is that our jar project has been given a maven pom.xml with the following content:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.codenvy.workspaceyug8g52wjwb5im13</groupId>
  <artifactId>testjarproject</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>sample-lib</name>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

we have a generated group id com.codenvy.workspaceyug8g52wjwb5im13, our own artifact id and the junit dependency, which is a decent choice as many java developers use it as a testing framework. the source encoding has already been set to utf-8, which is also a sensible choice. as a convenience we’ve also been given a hello.sayhello class, so we know we’re actually in a java project say hello file & project management so what about the browser-based editor we’re working in? on top we’re seeing a few menus, like file, project, edit, view, run, git, paas, window, share and help. i’ll be highlighting a few. file- and project menu the file menu allows creating folders, packages and various kinds of file types, such as text, xml (1.0 at time of writing), html (4.1), css (2.0), java classes and jsp’s (2.1). although i’m in a jar project, i am still also able to create here e.g. ruby, php or python files. a very convenient feature is to upload existing files to the workspace, either separately or in zip archives. i’ve tried dropping a file onto the package explorer from the file system, but the browser (in this case, chrome) tries to open it instead. the project menu allows creating new projects, either by launching the create project wizard again, or by importing from github. in order to clone a repository, you’ll have to authorize codenvy to access github.com to be able to import a project. after having authorized github, codenvy presents me with a list of projects to choose from. after having imported all necessary stuff, it somehow needs to know what kind of project i’m importing. selecting a file type after importing a project from github the project i imported didn’t give codenvy any clues as to what kind of project it is (which is right since i only had a readme.md in it), so it lists a few options to choose from. i chose the maven multi-module type after which the output window shows: git@github.com:tvinke/examples.git was successfully cloned. [info] project type updated. if you’d have a pom.xml in the root of your project, it would immediately recognize it as a maven project. 
Apart from going through the Project > Import from GitHub option, you can also go directly to the Git menu and choose Clone Repository. This allows you to manually enter the remote repository URI, the wanted project name and the remote name (e.g. "origin").

Cloning a repository

Once you have pulled in a Git project, the Git menu allows all kinds of common operations, such as adding and removing files, committing, pushing, pulling and much more.

Git menu

The SSH keys can be found under the menu Window > Preferences, where you can view the github.com entry, view its details or delete it. A new key can also be either generated or uploaded here.

Sharing the project

One of the unique selling points of Codenvy is the collaboration possibilities that come along with any project. You can:

- Invite other developers with read-only rights or full read-write rights to your workspace and every project in it. When you're pair-programming like this, or co-editing a file with a colleague, you can also send each other code pointers – small shortcuts to code lines.
- Use factories to create temporary workspaces, through cloning, off one source project (the "factory"), and represent the cloning mechanism as a URL which can be given to other developers. A use case might be to get a colleague quickly started on a project by providing a fully working development environment. There's a lot more about creating factories in the docs (such as through REST), but the nice thing is that once you have a factory URL, you can embed it as a button, send it through email or publish it somewhere for others!

A factory URL to load up e.g. their Twitter Bootstrap sample – as they use on their website themselves – looks like:

https://codenvy.com/factory?v=1.0&pname=sample-twitterbootstrap&wname=codenvy-factories&vcs=git&vcsurl=http%3a%2f%2fcodenvy.com%2fgit%2f04%2f0f%2f7f%2fworkspacegcpv6cdxy1q34n1i%2fsample-twitterbootstrap&idcommit=c1443ecea63471f5797f172c081cd802bac6e6b0&action=openproject&ptype=javascript

Conclusion

Applications are run in the cloud nowadays, so why not create them there too? Codenvy brings some interesting features, such as being able to instantly provision workspaces (through factory URLs) and share projects in real time. It supports common operations on projects, files and version control. With a slew of languages and platforms, and as an IDE always accessible through the internet, it could lower the barrier to actually code anytime and anywhere. In a future post I will try and see whether or not it can actually replace my conventional desktop IDE for Java development.
January 4, 2014
by Ted Vinke
· 7,556 Views
Why is Tomcat a Webserver and not an Application Server?
Is Tomcat a web server or an application server? Let me tell you how I convinced myself regarding this.
January 4, 2014
by Manu Pk
· 175,246 Views · 6 Likes
RxJava: From Future to Observable
I first came across Reactive Extensions about 4 years ago on Matthew Podwysocki’s blog, but then didn't hear much about it until I saw Matthew give a talk at Code Mesh a few weeks ago. It seems to have grown in popularity recently and I noticed that there’s now a Java version called RxJava written by Netflix. I thought I’d give it a try by changing some code I wrote while exploring Cypher’s MERGE function to expose an Observable instead of Futures. To recap, we have 50 threads and we do 100 iterations where we create random (user, event) pairs. We create a maximum of 10 users and 50 events, and the goal is to concurrently send requests for the same pairs. In the example of my other post I was throwing away the result of each query, whereas here I returned the result back so I had something to subscribe to. The outline of the code looks like this:

public class MergeTimeRx
{
    public static void main( final String[] args ) throws InterruptedException, IOException
    {
        String pathToDb = "/tmp/foo";
        FileUtils.deleteRecursively( new File( pathToDb ) );

        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( pathToDb );
        final ExecutionEngine engine = new ExecutionEngine( db );

        int numberOfThreads = 50;
        int numberOfUsers = 10;
        int numberOfEvents = 50;
        int iterations = 100;

        Observable<ExecutionResult> events = processEvents( engine, numberOfUsers, numberOfEvents, numberOfThreads, iterations );

        events.subscribe( new Action1<ExecutionResult>()
        {
            @Override
            public void call( ExecutionResult result )
            {
                for ( Map<String, Object> row : result )
                {
                }
            }
        } );

        ....
    }
}

The nice thing about using RxJava is that there’s no mention of how we got our collection of ExecutionResults; it’s not important. We just have a stream of them, and by calling the subscribe function on the Observable we’ll be informed whenever another one is made available. Most of the examples I found show how to generate events from a single thread, but I wanted to use a thread pool so that I could fire off lots of requests at the same time. The processEvents method ended up looking like this:

private static Observable<ExecutionResult> processEvents( final ExecutionEngine engine, final int numberOfUsers, final int numberOfEvents, final int numberOfThreads, final int iterations )
{
    final Random random = new Random();
    final List<Integer> userIds = generateIds( numberOfUsers );
    final List<Integer> eventIds = generateIds( numberOfEvents );

    return Observable.create( new Observable.OnSubscribeFunc<ExecutionResult>()
    {
        @Override
        public Subscription onSubscribe( final Observer<? super ExecutionResult> observer )
        {
            final ExecutorService executor = Executors.newFixedThreadPool( numberOfThreads );

            List<Future<ExecutionResult>> jobs = new ArrayList<>();
            for ( int i = 0; i < iterations; i++ )
            {
                Future<ExecutionResult> job = executor.submit( new Callable<ExecutionResult>()
                {
                    @Override
                    public ExecutionResult call()
                    {
                        Integer userId = userIds.get( random.nextInt( numberOfUsers ) );
                        Integer eventId = eventIds.get( random.nextInt( numberOfEvents ) );

                        return engine.execute(
                                "MERGE (u:User {id: {userId}})\n" +
                                "MERGE (e:Event {id: {eventId}})\n" +
                                "MERGE (u)-[:HAS_EVENT]->(e)\n" +
                                "RETURN u, e",
                                MapUtil.map( "userId", userId, "eventId", eventId ) );
                    }
                } );
                jobs.add( job );
            }

            for ( Future<ExecutionResult> future : jobs )
            {
                try
                {
                    observer.onNext( future.get() );
                }
                catch ( InterruptedException | ExecutionException ignored )
                {
                }
            }

            observer.onCompleted();
            executor.shutdown();

            return Subscriptions.empty();
        }
    } );
}

I’m not sure if that’s the correct way of using Observables, so please let me know in the comments if I’ve got it wrong. I also wasn’t sure what the proper way of handling errors was.
I initially had a call to observer#onError in the catch block, but that means that no further events are produced, which wasn’t what I wanted. The code is available as a gist if you want to play around with it. I added the following dependency to get the RxJava library:

<dependency>
    <groupId>com.netflix.rxjava</groupId>
    <artifactId>rxjava-core</artifactId>
    <version>0.15.1</version>
</dependency>
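To make that trade-off concrete, here is a minimal sketch of my own (not from the original post) showing why the onError call was dropped: in the RxJava 0.x API used above, onError is a terminal notification, so any results still sitting in unfinished futures would never reach the subscriber.

import rx.Observable;
import rx.Observer;
import rx.Subscription;
import rx.subscriptions.Subscriptions;

public class OnErrorTerminatesStream
{
    public static void main( String[] args )
    {
        Observable<Integer> results = Observable.create( new Observable.OnSubscribeFunc<Integer>()
        {
            @Override
            public Subscription onSubscribe( Observer<? super Integer> observer )
            {
                observer.onNext( 1 );
                // Rx contract: after onError nothing else may be emitted, so
                // values we would have produced next are simply lost.
                observer.onError( new RuntimeException( "job failed" ) );
                return Subscriptions.empty();
            }
        } );

        results.subscribe( new Observer<Integer>()
        {
            @Override
            public void onNext( Integer value ) { System.out.println( "got " + value ); }

            @Override
            public void onError( Throwable e ) { System.out.println( "terminated: " + e.getMessage() ); }

            @Override
            public void onCompleted() { System.out.println( "completed" ); }
        } );
        // Prints "got 1" then "terminated: job failed"; onCompleted never fires.
    }
}

Swallowing the exception, as the final version above does, keeps the stream alive at the cost of silently dropping failed queries.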
December 31, 2013
by Mark Needham
· 18,852 Views · 1 Like
Top Posts of 2013: Google's Big Data Papers
I’ll review Google’s most important Big Data publications and discuss where they are (as far as they’ve disclosed).
December 30, 2013
by Mikio Braun
· 116,629 Views
Java: Using the Specification Pattern With JPA
This article is an introduction to using the specification pattern in Java. We will also see how we can combine classic specifications with JPA Criteria queries to retrieve objects from a relational database.

Within this post we will use the following Poll class as an example entity for creating specifications. It represents a poll that has a start and an end date. In the time between those two dates users can vote among different choices. A poll can also be locked by an administrator before the end date has been reached; in this case, a lock date will be set.

@Entity
public class Poll {

    @Id
    @GeneratedValue
    private long id;

    private DateTime startDate;
    private DateTime endDate;
    private DateTime lockDate;

    @OneToMany(cascade = CascadeType.ALL)
    private List<Vote> votes = new ArrayList<>();
}

For better readability I skipped getters, setters, the JPA annotations for mapping Joda DateTime instances, and fields that aren't needed in this example (like the question being asked in the poll).

Now assume we have two constraints we want to implement:

- A poll is currently running if it is not locked and if startDate < now < endDate
- A poll is popular if it contains more than 100 votes and is not locked

We could start by adding appropriate methods to Poll, like poll.isCurrentlyRunning(). Alternatively we could use a service method like pollService.isCurrentlyRunning(poll). However, we also want to be able to query the database to get all currently running polls. So we might add a DAO or repository method like pollRepository.findAllCurrentlyRunningPolls(). If we follow this way, we implement the isCurrentlyRunning constraint twice in two different locations. Things become worse if we want to combine constraints. What if we want to query the database for a list of all popular polls that are currently running?

This is where the specification pattern comes in handy. When using the specification pattern we move business rules into extra classes called specifications. To get started with specifications we create a simple interface and an abstract class:

public interface Specification<T> {
    boolean isSatisfiedBy(T t);
    Predicate toPredicate(Root<T> root, CriteriaBuilder cb);
    Class<T> getType();
}

abstract public class AbstractSpecification<T> implements Specification<T> {

    @Override
    public boolean isSatisfiedBy(T t) {
        throw new NotImplementedException();
    }

    @Override
    public Predicate toPredicate(Root<T> root, CriteriaBuilder cb) {
        throw new NotImplementedException();
    }

    @Override
    public Class<T> getType() {
        ParameterizedType type = (ParameterizedType) this.getClass().getGenericSuperclass();
        return (Class<T>) type.getActualTypeArguments()[0];
    }
}

Please ignore the AbstractSpecification class with the mysterious getType() method for a moment (we will come back to it later). The central part of a specification is the isSatisfiedBy() method, which is used to check if an object satisfies the specification. toPredicate() is an additional method we use in this example to return the constraint as a javax.persistence.criteria.Predicate instance, which can be used to query a database. For each constraint we create a new specification class that extends AbstractSpecification and implements isSatisfiedBy() and toPredicate().
The specification implementation to check if a poll is currently running looks like this:

public class IsCurrentlyRunning extends AbstractSpecification<Poll> {

    @Override
    public boolean isSatisfiedBy(Poll poll) {
        return poll.getStartDate().isBeforeNow()
                && poll.getEndDate().isAfterNow()
                && poll.getLockDate() == null;
    }

    @Override
    public Predicate toPredicate(Root<Poll> poll, CriteriaBuilder cb) {
        DateTime now = new DateTime();
        return cb.and(
            cb.lessThan(poll.get(Poll_.startDate), now),
            cb.greaterThan(poll.get(Poll_.endDate), now),
            cb.isNull(poll.get(Poll_.lockDate))
        );
    }
}

Within isSatisfiedBy() we check if the passed object matches the constraint. In toPredicate() we construct a Predicate using JPA's CriteriaBuilder. We will use the resulting Predicate instance later to build a CriteriaQuery for querying the database.

The specification for checking if a poll is popular looks similar:

public class IsPopular extends AbstractSpecification<Poll> {

    @Override
    public boolean isSatisfiedBy(Poll poll) {
        return poll.getLockDate() == null && poll.getVotes().size() > 100;
    }

    @Override
    public Predicate toPredicate(Root<Poll> poll, CriteriaBuilder cb) {
        return cb.and(
            cb.isNull(poll.get(Poll_.lockDate)),
            cb.greaterThan(cb.size(poll.get(Poll_.votes)), 100)
        );
    }
}

If we now want to test whether a Poll instance matches one of these constraints, we can use our newly created specifications:

boolean isPopular = new IsPopular().isSatisfiedBy(poll);
boolean isCurrentlyRunning = new IsCurrentlyRunning().isSatisfiedBy(poll);

For querying the database we need to extend our DAO / repository to support specifications. This can look like the following:

public class PollRepository {

    private EntityManager entityManager = ...

    public <T> List<T> findAllBySpecification(Specification<T> specification) {
        CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();

        // use specification.getType() to create CriteriaQuery and Root instances
        CriteriaQuery<T> criteriaQuery = criteriaBuilder.createQuery(specification.getType());
        Root<T> root = criteriaQuery.from(specification.getType());

        // get the predicate from the specification
        Predicate predicate = specification.toPredicate(root, criteriaBuilder);

        // set the predicate and execute the query
        criteriaQuery.where(predicate);
        return entityManager.createQuery(criteriaQuery).getResultList();
    }
}

Here we finally use the getType() method implemented in AbstractSpecification to create the CriteriaQuery and Root instances. getType() returns the generic type of the AbstractSpecification instance defined by the subclass. For IsPopular and IsCurrentlyRunning it returns the Poll class. Without getType() we would have to create the CriteriaQuery and Root instances inside toPredicate() of every specification we create. So it is just a small helper to reduce boilerplate code inside specifications. Feel free to replace this with your own implementation if you come up with a better approach.

Now we can use our repository to query the database for polls that match a certain specification:

List<Poll> popularPolls = pollRepository.findAllBySpecification(new IsPopular());
List<Poll> currentlyRunningPolls = pollRepository.findAllBySpecification(new IsCurrentlyRunning());

At this point the specifications are the only components that contain the constraint definitions. We can use them to query the database or to check if an object fulfills the required rules. However, one question remains: how do we combine two or more constraints? For example, we would like to query the database for all popular polls that are still running.
The answer to this is a variation of the composite design pattern called composite specifications. Using composite specifications we can combine specifications in different ways. To query the database for all running and popular polls we need to combine the IsCurrentlyRunning specification with the IsPopular specification using the logical and operation. Let's create another specification for this and name it AndSpecification:

public class AndSpecification<T> extends AbstractSpecification<T> {

    private Specification<T> first;
    private Specification<T> second;

    public AndSpecification(Specification<T> first, Specification<T> second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public boolean isSatisfiedBy(T t) {
        return first.isSatisfiedBy(t) && second.isSatisfiedBy(t);
    }

    @Override
    public Predicate toPredicate(Root<T> root, CriteriaBuilder cb) {
        return cb.and(
            first.toPredicate(root, cb),
            second.toPredicate(root, cb)
        );
    }

    @Override
    public Class<T> getType() {
        return first.getType();
    }
}

An AndSpecification is created out of two other specifications. In isSatisfiedBy() and toPredicate() we return the result of both specifications combined by a logical and operation. We can use our new specification like this:

Specification<Poll> popularAndRunning = new AndSpecification<>(new IsPopular(), new IsCurrentlyRunning());
List<Poll> polls = myRepository.findAllBySpecification(popularAndRunning);

To improve readability we can add an and() method to the Specification interface:

public interface Specification<T> {
    Specification<T> and(Specification<T> other);
    // other methods
}

and implement it within our abstract implementation:

abstract public class AbstractSpecification<T> implements Specification<T> {

    @Override
    public Specification<T> and(Specification<T> other) {
        return new AndSpecification<>(this, other);
    }

    // other methods
}

Now we can chain multiple specifications by using the and() method:

Specification<Poll> popularAndRunning = new IsPopular().and(new IsCurrentlyRunning());
boolean isPopularAndRunning = popularAndRunning.isSatisfiedBy(poll);
List<Poll> polls = myRepository.findAllBySpecification(popularAndRunning);

When needed we can easily extend this further with other composite specifications, for example an OrSpecification or a NotSpecification (see the sketch below).

Conclusion

When using the specification pattern we move business rules into separate specification classes. These specification classes can easily be combined by using composite specifications. In general, specifications improve reusability and maintainability. Additionally, specifications can easily be unit tested. For more detailed information about the specification pattern I recommend this article by Eric Evans and Martin Fowler. You can find the source of this example project on GitHub.
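For illustration, here is what such an OrSpecification might look like – a minimal sketch of my own (not from the original article), mirroring the AndSpecification shown earlier:

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

public class OrSpecification<T> extends AbstractSpecification<T> {

    private final Specification<T> first;
    private final Specification<T> second;

    public OrSpecification(Specification<T> first, Specification<T> second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public boolean isSatisfiedBy(T t) {
        // satisfied if either wrapped specification is satisfied
        return first.isSatisfiedBy(t) || second.isSatisfiedBy(t);
    }

    @Override
    public Predicate toPredicate(Root<T> root, CriteriaBuilder cb) {
        // combine both predicates with a logical OR for database queries
        return cb.or(first.toPredicate(root, cb), second.toPredicate(root, cb));
    }

    @Override
    public Class<T> getType() {
        return first.getType();
    }
}

An or() method on the interface, analogous to and(), would then allow chaining such as new IsPopular().or(new IsCurrentlyRunning()).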
December 30, 2013
by Michael Scharhag
· 115,950 Views · 6 Likes
Spring 4 with Groovy
Finally the wait is over: Spring 4 is here, and one of the best features is that it now has full support for Groovy. The Spring team did a fantastic job bringing the same concept from Grails into the Spring Framework. In the past, if you needed to incorporate Spring into your Groovy applications or scripts, you normally did something like this:

@Grab('org.springframework:spring-context:4.0.0.RELEASE')
import org.springframework.context.support.GenericApplicationContext
import org.springframework.context.annotation.ClassPathBeanDefinitionScanner
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.stereotype.Component

@Component
class Login {
    def authorize(User user) {
        if (user.credentials.username == "guest" && user.credentials.password == "guest") {
            "${user.greetings} ${user.credentials.username}"
        } else "You are not ${user.greetings}"
    }
}

@Component
class Credentials {
    String username = "guest"
    String password = "guest"
}

@Component
class User {
    @Autowired
    Credentials credentials
    String greetings = "Welcome"
}

def ctx = new GenericApplicationContext()
new ClassPathBeanDefinitionScanner(ctx).scan('') // scan root package for components
ctx.refresh()

def login = ctx.getBean("login")
def user = ctx.getBean("user")
println login.authorize(user)

One of the benefits that I see is that Groovy removes all the unnecessary boilerplate from Java. Then, if you wanted to add more flavor to your Groovy apps using Grails' BeanBuilder, you needed to do something like this:

@Grab(group='org.slf4j', module='slf4j-simple', version='1.7.5')
@Grab(group='org.grails', module='grails-web', version='2.3.4')
import grails.spring.BeanBuilder
import groovy.transform.ToString

class Login {
    def authorize(User user) {
        if (user.credentials.username == "John" && user.credentials.password == "Doe") {
            "${user.greetings} ${user.credentials.username}"
        } else "You are not ${user.greetings}"
    }
}

@ToString(includeNames=true)
class Credentials {
    String username
    String password
}

@ToString(includeNames=true)
class User {
    Credentials credentials
    String greetings
}

def bb = new BeanBuilder()
bb.beans {
    login(Login)
    user(User) {
        credentials = new Credentials(username: "John", password: "Doe")
        greetings = 'Welcome!!'
    }
}

def ctx = bb.createApplicationContext()
def u = ctx.getBean("user")
println u
def l = ctx.getBean("login")
println l.authorize(u)

And now with Spring 4 you can add all the Groovy flavor to your apps (without all the Grails dependencies) using the new GroovyBeanDefinitionReader:

@Grab('org.springframework:spring-context:4.0.0.RELEASE')
import org.springframework.context.support.GenericApplicationContext
import org.springframework.beans.factory.groovy.GroovyBeanDefinitionReader
import groovy.transform.ToString

class Login {
    def authorize(User user) {
        if (user.credentials.username == "John" && user.credentials.password == "Doe") {
            "${user.greetings} ${user.credentials.username}"
        } else "You are not ${user.greetings}"
    }
}

@ToString(includeNames=true)
class Credentials {
    String username
    String password
}

@ToString(includeNames=true)
class User {
    Credentials credentials
    String greetings
}

def ctx = new GenericApplicationContext()
def reader = new GroovyBeanDefinitionReader(ctx)
reader.beans {
    login(Login)
    user(User) {
        credentials = new Credentials(username: "John", password: "Doe")
        greetings = 'Welcome!!'
    }
}
ctx.refresh()

def u = ctx.getBean("user")
println u
def l = ctx.getBean("login")
println l.authorize(u)

And that's not all: Spring Boot gives you even more power with Groovy:

//File: app.groovy
import org.springframework.boot.*
import org.springframework.boot.autoconfigure.*
import org.springframework.stereotype.*
import org.springframework.web.bind.annotation.*
import org.springframework.beans.factory.annotation.*

@Component
class Login {
    def authorize(User user) {
        if (user.credentials.username == "John" && user.credentials.password == "Doe") {
            "${user.greetings} ${user.credentials.username}"
        } else "You are not ${user.greetings}"
    }
}

@Component
class Credentials {
    String username
    String password
}

@Component
class User {
    Credentials credentials = new Credentials(username: "John", password: "Doe")
    String greetings = "Welcome!!"
}

@Controller
@EnableAutoConfiguration
class SampleController {

    @Autowired
    def login

    @Autowired
    def user

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return login.authorize(user)
    }
}

To run the above code, install Spring Boot and then just execute:

$ spring run app.groovy

Then go to your browser, open http://localhost:8080, and you see:

Welcome!! John

Congratulations to the Spring team!! Keep grooving!!

References:
[1] https://spring.io/blog/2013/12/12/announcing-spring-framework-4-0-ga-release
[2] http://groovy.codehaus.org/Using+Spring+Factories+with+Groovy
[3] http://grails.org/doc/latest/guide/spring.html
[4] http://projects.spring.io/spring-boot/
[5] https://spring.io/guides
December 26, 2013
by Felipe Gutierrez
· 27,499 Views · 4 Likes
Extracting Tables from PDFs in Javascript with PDF.js
A common and difficult problem in acquiring data is extracting tables from a PDF. Previously, I described how to extract the text from a PDF with PDF.js, a PDF rendering library made by Mozilla Labs. The rendering process requires an HTML canvas object, and then draws each object (character, line, rectangle, etc.) on it. The easiest way to get a list of these is to intercept all the calls PDF.js makes to drawing functions on the canvas object (see "Self Modifying JavaScripts" for a similar technique). The replace method below adds a wrapper closure to each function, which logs the call.

function replace(ctx, key) {
  var val = ctx[key];
  if (typeof(val) == "function") {
    ctx[key] = function() {
      var args = Array.prototype.slice.call(arguments);
      console.log("called " + key + "(" + args.join(",") + ")");
      return val.apply(ctx, args);
    }
  }
}

for (var k in context) {
  replace(context, k);
}

var renderContext = {
  canvasContext: context,
  viewport: viewport
};

page.render(renderContext);

This lets us see a series of calls:

called transform(1,0,0,1,150.42,539.67)
called translate(0,0)
called scale(1,-1)
called scale(0.752625,0.752625)
called measureText(c)
called save()
called scale(0.9701818181818181,1)
called fillText(c,0,0)
called restore()
called restore()
called save()
called transform(1,0,0,1,150.42,539.6

We can easily retrieve the text by noting the first argument to each fillText call:

"congregations ranked by growth and decline in membership and worship attendance, 2006 to 2011philadelphia presbytery - table 16net membership changenet worship changepercent changepercent changeworship 2006worship 2011membership 2006membership 2011abington, abington- 143(74)-13.18%(57)0(15)0.00%(22)numberrank3003001,085942anchor, wrightstown0(23)0.00%(27)-12(25)-21.43%(52)numberrank56449797arch street, philadelphia-117(71)-68.42%(117)27(5)90.00% (2)numberrank305717154aston, aston3(21)3.53%(22)-5(19)-9.43% (31)numberrank53488588beaconno reportboth yearsno reportboth yearsnumberrankbensalem, bensalem-23(39)-13.94%(62)-28(36)-28.57% (64)numberrank9870165142berean, philadelphia106(4)44.92%(4)no reportboth yearsnumberrank00236342bethany collegiate, havertown- 188(76)-42.44%(110)43(3)21.29%(7)numberrank202245443255bethel, philadelphia-13(33)-13.68%(60)-27(35)-35.06% (71)numberrank77509582bethesda, philadelphia9(18)5.56%(18)no reportboth yearsnumberrank1150162171beverly hills, upper darby-3(26)-3.03% (32)-11(24)-20.00%(48)numberrank55449996bridesburg, philadelphia0(23)0.00%(27)no reportboth yearsnumberrank004444bristol, bristolno reportboth yearsno reportboth yearsnumberrankpage 1 of 10report prepared by research services, presbyterian church (u.s.a.)1- 800-728-7228, ext #204006-oct-12"

Notably, this doesn't track line endings, and not all the characters are recorded in the expected order (the first line is rendered after the second). The calls to transform, translate, and scale control where text is placed. The fillText method also takes an (x, y) parameter set that moves the individual letters between words. The exact position is a combination of successive operations, which are modeled as a stack of matrix operations. Thankfully, PDF.js tracks the output of these operations as it renders, so we don't have to recalculate it. Thus, we can make a method that records the letters and their real positions. This method takes the internal context object, the type of state transition, and the arguments to the transition, and is called from the wrapper function listed above.
var chars = [];
var cur = {};

function record(ctx, state, args) {
  if (state === 'fillText') {
    var c = args[0];
    cur.c = c;
    cur.x = ctx._transformMatrix[4] + args[1];
    cur.y = ctx._transformMatrix[5] + args[2];
    chars[chars.length] = cur;
    cur = {};
  }
}

These results can be sorted by position (x and y). The sort method arranges letters by position – if they are shifted up or down by only a small amount, they are considered to be on one line.

chars.sort(
  function(a, b) {
    var dx = b.x - a.x;
    var dy = b.y - a.y;
    if (Math.abs(dy) < 0.5) {
      return dx * -1;
    } else {
      return dy * -1;
    }
  }
);

This presents several difficulties: it doesn't detect right-to-left text, and it's becoming clear that we're going to have a hard time knowing when we're in a table and when we aren't. To deal with this, we define a function which can transform the array of letters and positions into CSV-style output. It tracks from letter to letter – if it sees a "large" change in y, it starts a new line; if it sees a "large" change in x, it treats it as a new column. The real challenge is defining "large", which for my test PDF was around 15 and 20, for dx and dy.

function getText(marks, ex, ey, v) {
  var x = marks[0].x;
  var y = marks[0].y;
  var txt = '';
  for (var i = 0; i < marks.length; i++) {
    var c = marks[i];
    var dx = c.x - x;
    var dy = c.y - y;
    if (Math.abs(dy) > ey) {
      txt += "\"\n\"";
      if (marks[i+1]) {
        // line feed - start from the position of the next line
        x = marks[i+1].x;
      }
    }
    if (Math.abs(dx) > ex) {
      txt += "\",\"";
    }
    if (v) {
      console.log(dx + ", " + dy);
    }
    txt += c.c;
    x = c.x;
    y = c.y;
  }
  return txt;
}

This algorithm doesn't handle newlines in rows, and oddly, the columns don't come out in the right order, but they appear to be consistently out of order. Lines with large spaces (e.g. around an em dash) are detected as having multiple columns, but this can be cleaned up later. You can see some sample output below; the final source is available on GitHub.
congregations ranked by growth and decline in m","embership and w","orship attendance, 2006 to 2011" "","philadelphia presbytery"," - table 16" "","net ","membership ","change" "","net worship ","change","percent ","change","percent ","change","worship"," 2006","worship"," 2011","membership"," 2006","membership"," 2011" "","abington, abington","-143","(74)","-13.18%(57)","0","(15)","0.00%(22)","number","rank","300","300","1,085","942" "","anchor, wrightstown","0","(23)","0.00%(27)","-12","(25)","-21.43%(52)","number","rank","56","44","97","97" "","arch street, philadelphia","-117","(71)","-68.42%","(117)","27(5)","90.00%(2)","number","rank","30","57","171","54" "","aston, aston","3","(21)","3.53%(22)","-5","(19)","-9.43%(31)","number","rank","53","48","85","88" "","beacon","no report","both years","no report","both years","number","rank" "","bensalem, bensalem","-23","(39)","-13.94%(62)","-28","(36)","-28.57%(64)","number","rank","98","70","165","142" "","berean, philadelphia","106(4)","44.92%(4)","no report","both years","number","rank","0","0","236","342" "","bethany collegiate, havertown","-188","(76)","-42.44%","(110)","43(3)","21.29%(7)","number","rank","202","245","443","255" "","bethel, philadelphia","-13","(33)","-13.68%(60)","-27","(35)","-35.06%(71)","number","rank","77","50","95","82" "","bethesda, philadelphia","9","(18)","5.56%(18)","no report","both years","number","rank","115","0","162","171" "","beverly hills, upper darby","-3","(26)","-3.03%(32)","-11","(24)","-20.00%(48)","number","rank","55","44","99","96" "","bridesburg, philadelphia","0","(23)","0.00%(27)","no report","both years","number","rank","0","0","44","44" "","bristol, bristol","no report","both years","no report","both years","number","rank" "","page 1 of 10","report prepared by research services, presbyterian church (u.s.a.)","1-800-728-7228, ext #2040","06-oct-12"
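Putting the pieces together, a hypothetical end-to-end invocation of my own (not from the original post) would look like the following; it assumes the replace() wrapper has routed fillText calls through record(), and uses the 15/20 thresholds mentioned above:

// chars was filled by record() while page.render() drew the PDF page
chars.sort(function(a, b) {
  var dx = b.x - a.x;
  var dy = b.y - a.y;
  // letters within half a unit vertically are treated as one line
  return Math.abs(dy) < 0.5 ? dx * -1 : dy * -1;
});

// 15 and 20 are the dx/dy thresholds that worked for the author's test PDF
var csv = '"' + getText(chars, 15, 20, false) + '"';
console.log(csv); // one quoted row per detected line, columns split on large x jumps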
December 26, 2013
by Gary Sieling
· 20,243 Views
Eclipse Build Variables
This post is not about variables in my application code (which I debug). It is about using variables in Eclipse for building projects. Eclipse variables allow me to make my projects 'position independent' whenever I cannot use a path relative to my projects or workspace.

Eclipse variables

Which variables are used where in Eclipse is sometimes not very clear. Depending on the context in which variables are used, not everything might be available. This link, for example, gives a list of variables which can be used to invoke an external tool.

Build variables

Eclipse comes with many built-in variables, especially for the build system. If I want to see which variables are already defined, I can show them in the project properties, under C/C++ Build > Build Variables with the option 'Show system variables' enabled. With the 'Add…' button I can define and add my own variables, available for that project. If the above operation is done on a project, then the setting is for that project only. If I want to add a variable for the whole workspace, I can do this using the menu Window > Preferences.

Global system variables

Eclipse automatically includes the system (e.g. Windows) environment variables. Many dialogs have the 'Variables…' button where I can use my variables, including the variables defined at system level.

System variables: one way or the other

So if I want to have a variable available in every workspace, one way is to define it at the system level. However, this is not a good way, as it clutters the variables for every application.

Batch file

A solution to this is to create a custom batch file where I define my variables, and at the end of this batch file I launch Eclipse. That way the extra variables exist only for this Eclipse session (a minimal sketch of such a launcher follows at the end of this post).

cwide-env file

Another very nice way CodeWarrior Eclipse offers is the cwide-env file located in the eclipse sub-folder of the installation. I can define variables here, or extend existing ones:

- -add: add a string to the variable at the end
- -prepend: add a string to the variable at the beginning

That way I can easily manipulate existing system variables or create new ones which are then used by Eclipse.

Summary

Variables in Eclipse help me define paths to source files and folders outside of a project or workspace. With variables I avoid using absolute paths, which would make porting projects from one machine to another difficult. I can define variables for projects or for the workspace, or use system variables. With CodeWarrior I have a cwide-env file which can be used to extend the system variables.

Happy variabling
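As an illustration of the batch-file approach mentioned above, a minimal Windows launcher might look like this – a sketch of my own with made-up variable names and paths, not from the original post:

@echo off
rem Define session-only variables, then launch Eclipse so it inherits them.
set MY_SRC_ROOT=C:\work\shared_sources
set MY_LIB_PATH=C:\work\libs

rem -data selects the workspace; adjust both paths to your installation.
start "" "C:\eclipse\eclipse.exe" -data "C:\work\workspace"

Because environment variables set in a batch file live only in that process tree, the variables disappear when the session ends, instead of cluttering the system-wide environment.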
December 25, 2013
by Erich Styger
· 28,508 Views
Building and Testing a WebSocket Server with Undertow
The upcoming version of JBoss Application Server will no longer use Tomcat as the integrated web server, but will instead replace it with Undertow. The architecture of Undertow is based on handlers that can be added dynamically to the server via a builder API. This approach is similar to the way a web server is constructed in Node.js. It allows developers to embed the Undertow web server easily into their applications. As the addition of features is done via the builder API, one can add only the features that are really required in one's application. Beyond that, Undertow supports WebSockets and the Servlet API in version 3.1. It can be run as a blocking or non-blocking server, and it is reported that first benchmarks have shown Undertow to be the fastest web server written in Java.

All of this sounds very promising, so let's try to set up a simple WebSocket server. As usual we start by creating a simple Java project and adding the Undertow Maven dependency:

<dependency>
    <groupId>io.undertow</groupId>
    <artifactId>undertow-core</artifactId>
    <version>1.0.0.Beta20</version>
</dependency>

With Undertow's builder API our buildAndStartServer() method looks like this:

public void buildAndStartServer(int port, String host) {
    server = Undertow.builder()
            .addListener(port, host)
            .setHandler(getWebSocketHandler())
            .build();
    server.start();
}

We just add a listener that specifies the port and host to listen on for incoming connections, and afterwards add a WebSocket handler. As the WebSocket handler code is a little more comprehensive, I have put it into its own method:

private PathHandler getWebSocketHandler() {
    return path().addPath("/websocket", websocket(new WebSocketConnectionCallback() {
        @Override
        public void onConnect(WebSocketHttpExchange exchange, WebSocketChannel channel) {
            channel.getReceiveSetter().set(new AbstractReceiveListener() {
                @Override
                protected void onFullTextMessage(WebSocketChannel channel, BufferedTextMessage message) {
                    String data = message.getData();
                    lastReceivedMessage = data;
                    logger.info("Received data: " + data);
                    WebSockets.sendText(data, channel, null);
                }
            });
            channel.resumeReceives();
        }
    }))
    .addPath("/", resource(new ClassPathResourceManager(WebSocketServer.class.getClassLoader(), WebSocketServer.class.getPackage()))
            .addWelcomeFiles("index.html"));
}

Let's go through this code snippet line by line. First of all, we add a new path: /websocket. The second argument of the addPath() method lets us specify what kind of protocol we want to use for this path. In our case we create a new WebSocket. The anonymous implementation has an onConnect() method in which we set an implementation of AbstractReceiveListener. Here we have a convenient method, onFullTextMessage(), that is called when a client has sent us a text message. A call of getData() fetches the actual message we have received. In this simple example we just echo this string back to the client to validate that the round trip from the client to the server and back works.

To perform some simple manual tests we also add a second resource under the path / which serves some static HTML and JavaScript files. The directory that contains these files is given as an instance of ClassPathResourceManager. The call of addWelcomeFiles() tells Undertow which file to serve when the client asks for the path /. Our JavaScript code is swapped out from index.html into the websocket.js file.
We use jQuery and the jQuery plugin gracefulWebSocket to ease the client-side development:

var ws = $.gracefulWebSocket("ws://127.0.0.1:8080/websocket");

ws.onmessage = function(event) {
    var messageFromServer = event.data;
    $('#output').append('Received: ' + messageFromServer + '');
};

function send(message) {
    ws.send(message);
}

After having created a WebSocket object by calling $.gracefulWebSocket(), we can register a callback function for incoming messages. In this method we only append the message string to the DOM of the page. The send() method is just a call to gracefulWebSocket's send() method.

When we now start our application and open the URL http://127.0.0.1:8080/ in our web browser, we see the test page. Entering some string and hitting the "Send web socket data" button sends the message to the server, which in response echoes it back to the client.

Now that we know that everything works as expected, we want to protect our code against regression with a JUnit test case. As a WebSocket client I have chosen the library jetty-websocket:

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-websocket</artifactId>
    <version>8.1.0.RC5</version>
    <scope>test</scope>
</dependency>

In the test case we build and start the WebSocket server and open a new connection to the WebSocket port. The WebSocket implementation of jetty-websocket allows us to implement two callback methods for the open and close events. Within the open callback we send the test message to the server. The rest of the code waits for the connection to be established, closes it, and asserts that the server has received the message:

@Test
public void testStartAndBuild() throws Exception {
    subject = new WebSocketServer();
    subject.buildAndStartServer(8080, "127.0.0.1");

    WebSocketClient client = new WebSocketClient();
    Future<WebSocket.Connection> connectionFuture = client.open(new URI("ws://localhost:8080/websocket"), new WebSocket() {
        @Override
        public void onOpen(Connection connection) {
            logger.info("onOpen");
            try {
                connection.sendMessage("testmessage");
            } catch (IOException e) {
                logger.error("Failed to send message: " + e.getMessage(), e);
            }
        }

        @Override
        public void onClose(int i, String s) {
            logger.info("onClose");
        }
    });

    WebSocket.Connection connection = connectionFuture.get(2, TimeUnit.SECONDS);
    assertThat(connection, is(notNullValue()));
    connection.close();

    subject.stopServer();
    Thread.sleep(1000);
    assertThat(subject.lastReceivedMessage, is("testmessage"));
}

As usual, you can find the source code on GitHub.

Conclusion: Undertow's builder API makes it easy to construct a WebSocket server, and in general an embedded web server that fits your needs. This also eases automated testing, as you do not need any specific Maven plugin that starts and stops your server before and after your integration tests. Beyond that, the jQuery plugin jquery-graceful-websocket lets you send and receive messages over WebSockets with only a few lines of code.
December 23, 2013
by Martin Mois
· 27,042 Views