The Latest Java Topics

How to Generate a Random String in Java using Apache Commons Lang
In a previous post, we shared a small function that generated a random string in Java. It turns out that similar functionality is available in a class in the extremely useful Apache Commons Lang library. If you are using Maven, download the jar using the following dependency:

    <dependency>
        <groupId>commons-lang</groupId>
        <artifactId>commons-lang</artifactId>
        <version>20030203.000129</version>
    </dependency>

The class we are interested in is RandomStringUtils. Listed below are some functions you may find useful.

Generate and print a random string of length 5 from all available characters:

    System.out.println(RandomStringUtils.random(5));

Generate and print a random string of length 10 from upper- and lower-case letters:

    System.out.println(RandomStringUtils.randomAlphabetic(10));

Generate and print a random number of length 12:

    System.out.println(RandomStringUtils.randomNumeric(12));

Generate and print a random string of length 10 using only the characters a, b, c and d:

    System.out.println(RandomStringUtils.random(10, new char[]{'a','b','c','d'}));
May 6, 2014
by Faheem Sohail
· 19,967 Views
Pitfalls of the Hibernate Second-Level / Query Caches
This post will go through how to set up the Hibernate second-level and query caches, how they work, and what their most common pitfalls are.
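As a taste of what that setup involves, here is a minimal sketch, assuming Hibernate 4.x with the EhCache provider on the classpath; the Country entity is made up for illustration:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Requires these properties in hibernate.cfg.xml / persistence.xml:
    //   hibernate.cache.use_second_level_cache = true
    //   hibernate.cache.use_query_cache        = true
    //   hibernate.cache.region.factory_class   = org.hibernate.cache.ehcache.EhCacheRegionFactory
    @Entity
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Country {
        @Id
        private Long id;
        private String name;
        // getters and setters omitted
    }

Entities are only cached when annotated (or mapped) as cacheable; query results additionally require setCacheable(true) on each query.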
May 6, 2014
by Vasco Cavalheiro
· 77,384 Views · 8 Likes
Java 8 Elvis Operator
So when I heard about a new feature in Java 8 that was supposed to help mitigate bugs, I was excited.
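For context: the "Elvis operator" (?:) of Groovy returns its left operand when that operand is non-null. Java 8 did not adopt it; the closest standard idiom is Optional. A hedged sketch of the equivalence:

    import java.util.Optional;

    public class ElvisDemo {
        public static void main(String[] args) {
            String nickname = null;

            // Groovy-style Elvis: name = nickname ?: "anonymous"
            // Closest Java 8 equivalent:
            String name = Optional.ofNullable(nickname).orElse("anonymous");
            System.out.println(name); // anonymous
        }
    }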
May 6, 2014
by Robert Greathouse
· 77,672 Views · 5 Likes
What's Wrong in Java 8, Part II: Functions & Primitives
Tony Hoare called the invention of the null reference his “billion-dollar mistake”. Maybe the use of primitives in Java could be called the million-dollar mistake. Primitives were created for one reason: performance. Primitives have no place in an object language. The introduction of auto boxing/unboxing was a good thing, but much more should have been done. It probably will be done (it is sometimes said to be on the Java 10 road map). In the meantime, we have to deal with primitives, and this is a hassle, especially when using functions.

Functions in Java 5/6/7

Before Java 8, one could create functions like this:

    public interface Function<T, U> {
        U apply(T t);
    }

    Function<Integer, Integer> addTax = new Function<Integer, Integer>() {
        @Override
        public Integer apply(Integer x) {
            return x / 100 * (100 + 10);
        }
    };

    System.out.println(addTax.apply(100));

This code produces the following result:

    110

What Java 8 gives us is the Function interface and the lambda syntax. We no longer need to define our own functional interface, and we may use the following syntax:

    Function<Integer, Integer> addTax = x -> x / 100 * (100 + 10);
    System.out.println(addTax.apply(100));

Note that in the first example, we used an anonymous class to create a named function. In the second example, using the lambda syntax does not change anything about this. There is still an anonymous class, and a named function. One interesting question is “What is the type of x?” The type was manifest in the first example. Here, it is inferred from the type of the function. Java knows the function argument type is an Integer because the type of the function is explicitly Function<Integer, Integer>. The first Integer is the type of the argument, and the second Integer is the return type. Boxing is automatically used to convert int to Integer and back as needed. More on this later.

Could we use an anonymous function? Yes, but we would have a problem with the type. This does not work:

    System.out.println((x -> x / 100 * (100 + 10)).apply(100));

which means we can't substitute the identifier addTax with its value (the addTax function). We have to restore the type information that is now missing, because Java 8 is simply not able to infer the type in this case. The most visible thing which has no explicit type here is the identifier x. So we might try:

    System.out.println(((Integer x) -> x / 100 * (100 + 10)).apply(100));

After all, in the first example, we could have written:

    Function<Integer, Integer> addTax = (Integer x) -> x / 100 * (100 + 10);

so it should be enough for Java to infer the type. But this does not work. What we have to do is specify the type of the function. Specifying the type of its argument is not enough, even if the return type may be inferred. And there is a serious reason for this: Java 8 does not know anything about functions. Functions are ordinary objects with ordinary methods that we may call. Nothing more. So we have to specify the type like this:

    System.out.println(((Function<Integer, Integer>) x -> x / 100 * (100 + 10)).apply(100));

Otherwise, it could translate to:

    System.out.println(((Whatever) x -> x / 100 * (100 + 10)).whatever(100));

So the lambda is only syntactic sugar to simplify the implementation of the Function (or Whatever) interface by an anonymous class. It has in fact absolutely nothing to do with functions. Had Java only the Function interface with its apply method, this would not be a big deal. But what about primitives? The Function interface would be fine if Java were an object language. But it is not. It is only vaguely oriented toward the use of objects (hence the name Object Oriented).
The most important types in Java are the primitives, and primitives do not fit well in OOP. Auto boxing was introduced in Java 5 to help us deal with this problem, but auto boxing has severe limitations in terms of performance, and this is related to how things are evaluated in Java. Java is a strict language, so eager evaluation is the rule. The consequence is that each time we have a primitive and need an object, the primitive has to be boxed. And each time we have an object and need a primitive, it has to be unboxed. If we rely upon automatic boxing and unboxing, we may end up with a lot of overhead from repeated boxing and unboxing.

Other languages have solved this problem differently, allowing only objects and dealing with the conversion in the background. They may have “value classes”, which are objects that are backed by primitives. With this functionality, programmers only use objects and the compiler only uses primitives (this is oversimplified, but it gives an idea of the principle). By allowing programmers to explicitly manipulate primitives, Java makes things much more difficult and much less safe, because programmers are encouraged to use primitives as business types, which is total nonsense in either OOP or FP. (I will come back to this in another article.)

Let's say it abruptly: we should not care about the overhead of boxing and unboxing. If Java programs using this feature are too slow, the language should be fixed. We should not use bad programming techniques to work around language weaknesses. By using primitives, we make the language work against us, and not for us. If this problem is not solved by fixing the language, we should just use another language. But we probably can't, for a lot of bad reasons, the most important being that we are paid to program in Java and not in any other language. The result is that instead of solving business problems, we find ourselves solving Java problems. And using primitives is a Java problem, and a big one.

Let's rewrite our example using primitives instead of objects. Our function takes an argument of type Integer and returns an Integer. To replace this, Java has the type IntUnaryOperator. Wow, this smells! And guess what, it is defined as:

    public interface IntUnaryOperator {
        int applyAsInt(int operand);
        ...
    }

It would probably have been too simple to call the method apply. So, our example using primitives may be rewritten as:

    IntUnaryOperator addTax = x -> x / 100 * (100 + 10);
    System.out.println(addTax.applyAsInt(100));

or, using an anonymous function:

    System.out.println(((IntUnaryOperator) x -> x / 100 * (100 + 10)).applyAsInt(100));

If there were only functions of int returning int, this would be simple. But it is much more complex. Java 8 has 43 (functional) interfaces in the java.util.function package. In reality, they do not all represent functions. They can be grouped as follows:

- 21 one-argument functions, among which 2 are functions of object returning object and 19 are various cases of object-to-primitive and primitive-to-object functions. One of the two object-to-object functions is for the specific case when both the argument and the return value are of the same type.
- 9 two-argument functions, among which 2 are functions of (object, object) to object, and 7 are various cases of (object, object) to primitive or (primitive, primitive) to primitive.
- 7 are effects, and not functions, since they do not return any value and are supposed to be used only for their side effects. (It's somewhat strange to call these “functional interfaces”.)
- 5 are “suppliers”, which means functions that do not take an argument but return a value. These could be functions. In the functional world, these are special functions called nullary functions (to indicate that their arity, or number of arguments, is zero). As functions, their return value may never change, so they allow treating constants as functions. In Java 8, their role is to depend upon a mutable context to return variable values. So, they are not functions. What a mess!

And furthermore, the methods of these interfaces have different names. Object functions have a method named apply, while methods returning numeric primitives are named applyAsInt, applyAsLong, or applyAsDouble. Functions returning boolean have a method called test, and suppliers have methods called get, or getAsInt, getAsLong, getAsDouble, or getAsBoolean. (They did not dare call BooleanSupplier “Predicate” with a test method taking no argument. I really wonder why!) One thing to note is that there are no functions for byte, char, short and float. Nor are there functions of arity greater than two. Needless to say, this is totally ridiculous. But we have to stick with it. As long as Java can infer the type, we may think we have no problem. However, if you want to manipulate functions in a functional way, you will soon face the problem of Java being unable to infer a type. Worse, Java will sometimes infer the type and stay silent while using a type which is not the one you intended.

How to help discover the right type

Let's say we want to use a three-argument function. As there are no such functional interfaces in Java 8, you are left with a choice: create your own functional interface, or use currying, as we have seen in a previous article (What's Wrong with Java 8, Part I). Creating a three-object-argument functional interface returning an object is straightforward:

    interface TriFunction<T, U, V, R> {
        R apply(T t, U u, V v);
    }

However, we may face two problems. The first one is that we may need to process primitives. Parametric types will not help us with this. You may create special versions of the function using primitives instead of objects. After all, with eight types of primitives, three arguments and one return value, there are only 6,561 different versions of this function. Why do you think Oracle did not put TriFunction in Java 8? (To be precise, they only put in a very limited number of BiFunctions, where the arguments are objects and the return type is int, long or double, or where the arguments and return type are all of the same type int, long or Object, leading to a total of 9 out of 729 possible.)

A much better solution is to use autoboxing. Just use Integer, Long, Boolean and so on and let Java handle this. Doing anything else would be the root of all evil, i.e. premature optimization (see http://c2.com/cgi/wiki?PrematureOptimization). Another way to go (besides creating a three-argument functional interface) is to use currying. This is mandatory if the arguments may not be evaluated at the same time. Furthermore, it allows using only functions of one argument, which limits the number of possible functions to 81. If we restrict ourselves to boolean, int, long and double, the number falls to 25 (four primitive types plus Object in two places equals 5 x 5). The problem is that it may be somewhat difficult to use currying with functions returning primitives or taking primitives as their arguments.
As an example, here is the same example used in our previous article (What's Wrong with Java 8, Part I), but using primitives:

    IntFunction<IntFunction<IntUnaryOperator>> intToIntCalculation = x -> y -> z -> x + y * z;

    private IntStream calculate(IntStream stream, int a) {
        // b is an int field defined elsewhere in the class (2 in the original example)
        return stream.map(intToIntCalculation.apply(b).apply(a));
    }

    IntStream stream = IntStream.of(1, 2, 3, 4, 5);
    IntStream newStream = calculate(stream, 3);

Note that the result is not “a stream containing the values 5, 8, 11, 14 and 17”, any more than the initial stream would have contained the values 1, 2, 3, 4 and 5. newStream is not evaluated at this stage, so it does not contain values. (We'll talk about this in a later article.) To see the result, we have to evaluate the stream, which may be forced by binding it to a terminal operation. This may be done through a call to the collect method. But before doing this, we will bind the result to one more non-terminal function using the method boxed. The boxed method binds to the stream a function converting primitives to the corresponding objects. This will simplify evaluation:

    System.out.println(newStream.boxed().collect(toList()));

This prints:

    [5, 8, 11, 14, 17]

We could as well use an anonymous function. However, Java is not able to infer the type, so we must help it:

    private IntStream calculate(IntStream stream, int a) {
        return stream.map(((IntFunction<IntFunction<IntUnaryOperator>>) x -> y -> z -> x + y * z).apply(b).apply(a));
    }

    IntStream stream = IntStream.of(1, 2, 3, 4, 5);
    IntStream newStream = calculate(stream, 3);

Currying in itself is very easy. Just remember, as I said in a previous article, that:

    (x, y, z) -> w

translates to:

    x -> y -> z -> w

Finding the right type is slightly more complicated. You have to remember that each time you apply an argument, you are returning a function, so you need a function from the type of the argument to an object type (because functions are objects). Here, each argument is of type int, so we need to use IntFunction parameterized with the type of the returned function. As the final type is IntUnaryOperator (as required by the map method of the IntStream class), and we are applying two of the three parameters with all parameters of type int, the result is:

    IntFunction<IntFunction<IntUnaryOperator>>

This may be compared to the version using autoboxing:

    Function<Integer, Function<Integer, IntUnaryOperator>>

If you have problems determining the right type, start with the version using autoboxing, just replacing the final type you know you need (since it is the type of the argument of map). Note that you may perfectly well use this type in your program:

    private IntStream calculate(IntStream stream, int a) {
        return stream.map(((Function<Integer, Function<Integer, IntUnaryOperator>>) x -> y -> z -> x + y * z).apply(b).apply(a));
    }

    IntStream stream = IntStream.of(1, 2, 3, 4, 5);
    IntStream newStream = calculate(stream, 3);

You may then replace each Function with the corresponding IntFunction, one at a time, first changing the program to:

    private IntStream calculate(IntStream stream, int a) {
        return stream.map(((IntFunction<Function<Integer, IntUnaryOperator>>) x -> y -> z -> x + y * z).apply(b).apply(a));
    }

and then to:

    private IntStream calculate(IntStream stream, int a) {
        return stream.map(((IntFunction<IntFunction<IntUnaryOperator>>) x -> y -> z -> x + y * z).apply(b).apply(a));
    }

Note that all three versions compile and run. The only difference is whether autoboxing is used or not.

When to be anonymous

So, as we saw in the examples above, lambdas are very good at simplifying anonymous class creation, but there is rarely a good reason not to name the instance that is created.
Naming functions allows:

- function reuse
- function testing
- function replacement
- program maintenance
- program documentation

Naming functions plus currying will make your functions completely independent from the environment (“referential transparency”), making your programs safer and more modular. There is, however, a difficulty. Using primitives makes it difficult to figure out the type of a curried function. And worse, primitives are not the right business types to use, so the compiler will not be able to help you in this area. To see why, look at this example:

    double tax = 10.24;
    double limit = 500.0;
    double delivery = 35.50;
    DoubleStream stream = DoubleStream.of(234.23, 567.45, 344.12, 765.00);
    DoubleStream stream2 = stream.map(x -> {
        double total = x / 100 * (100 + tax);
        if (total > limit) {
            total = total + delivery;
        }
        return total;
    });

To replace the anonymous “capturing” function with a named curried one, determining the correct type is not so difficult. There will be four arguments, and applying the first three will return a DoubleUnaryOperator, so the type will be DoubleFunction<DoubleFunction<DoubleFunction<DoubleUnaryOperator>>>. However, it is very easy to misplace the arguments:

    DoubleFunction<DoubleFunction<DoubleFunction<DoubleUnaryOperator>>> computeTotal = x -> y -> z -> w -> {
        double total = w / 100 * (100 + x);
        if (total > y) {
            total = total + z;
        }
        return total;
    };

    DoubleStream stream2 = stream.map(computeTotal.apply(tax).apply(limit).apply(delivery));

How can you be sure what x, y, z and w are? There is in fact a simple rule: the arguments that are evaluated through the explicit use of the apply method come first, in the order they are applied, i.e. tax, limit, delivery, corresponding to x, y and z. The argument coming from the stream is applied last, so it corresponds to w. However, we still have a problem: once the function is tested, we know that it is correct, but there is no way to be sure it will be used correctly. For example, if we apply the parameters in the wrong order:

    DoubleStream stream2 = stream.map(computeTotal.apply(limit).apply(tax).apply(delivery));

we get:

    [1440.8799999999999, 3440.2000000000003, 2100.2200000000003, 4625.5]

instead of:

    [258.215152, 661.05688, 379.357888, 878.836]

This means we have to test not only the function, but each use of it. Wouldn't it be nice if we could be sure that using the parameters in the wrong order would not compile? This is what using the right type system is about. Using primitives for business types is not good. It never has been. But now, with functions, we have one more reason not to do it. This will be the subject of another article.

What's next

We have seen how using primitives is somewhat more complicated than using objects. Functions using primitives are a real mess in Java 8. But the worst is yet to come. In a later article, we will talk about using primitives with streams.
May 5, 2014
by Pierre-Yves Saumont
· 51,917 Views · 10 Likes
Open Session In View Design Tradeoffs
The Open Session in View (OSIV) pattern gives rise to different opinions in the Java development community. Let's go over OSIV and some of the pros and cons of this pattern.

The problem

The problem that OSIV solves is a mismatch between the Hibernate concept of a session and its lifecycle and the way that many server-side view technologies work. In a typical Java frontend application, the service layer starts by querying some of the data needed to build the view. The remaining data can be lazy-loaded later, on the condition that the Hibernate session remains open - and there lies the problem. Between the moment the service layer method finishes its execution and the moment the view is rendered, Hibernate has already committed the transaction and closed the session. When the view tries to lazy-load the extra data it needs, it finds the Hibernate session closed, causing a LazyInitializationException.

The OSIV solution

OSIV tackles this problem by ensuring that the Hibernate session is kept open all the way up to the rendering of the view - hence the name of the pattern. Because the session is kept open, no more LazyInitializationExceptions occur. The session or entity manager is kept open by means of a filter that is added to the request-processing chain. In the case of JPA, the OpenEntityManagerInViewFilter will create an entity manager at the beginning of the request and then bind it to the request thread. The service layer will then be executed and the business transaction committed or rolled back, but the transaction manager will not remove the entity manager from the thread after the commit. When the view rendering starts, the transaction manager will check if there is already an entity manager bound to the thread, and if so, use it instead of creating a new one. After the request is processed, the filter will unbind the entity manager from the thread. The end result is that the same entity manager used to commit the business transaction is kept around in the request thread, allowing the view-rendering code to lazy-load the needed data.

Going back to the original problem

Let's step back a moment and go back to the initial problem: the LazyInitializationException. Is this exception really a problem? This exception can also be seen as a warning sign of a wrongly written query in the service layer. When building a view and its backing services, the developer knows upfront what data is needed, and can make sure that the needed data is loaded before the rendering starts. Several relation types such as one-to-many use lazy loading by default, but that default setting can be overridden if needed at query time using the following syntax:

    select p from Person p left join fetch p.invoices

This means that lazy loading can be turned off on a case-by-case basis depending on the data needed by the view.

OSIV in projects I've worked on

In projects I have worked on that used OSIV, we could see via query logging that the database was getting hit with a high number of SQL queries, sometimes to the point that developers had to turn off the Hibernate SQL logging. The performance of these applications was impacted, but it was kept manageable using second-level caches, and due to the fact that these were intranet-based applications with a limited number of users.
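For concreteness, the filter described above is typically just a web.xml registration. A minimal sketch for a Spring/JPA application (the filter name and URL pattern are arbitrary choices here):

    <filter>
        <filter-name>openEntityManagerInViewFilter</filter-name>
        <filter-class>org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>openEntityManagerInViewFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>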
Pros of OSIV

The main advantage of OSIV is that it makes working with ORM and the database more transparent:

- Fewer queries need to be manually written.
- Less awareness is required about the Hibernate session and how to solve LazyInitializationExceptions.

Cons of OSIV

OSIV seems to be easy to misuse and can accidentally introduce N+1 performance problems into the application. On projects I've worked on, OSIV did not work out well in the long term. The alternative of writing custom queries that eagerly fetch data depending on the use case is manageable, and it turned out well in other projects I've worked on.

Alternatives to OSIV

Besides the application-level solution of writing custom queries to pre-fetch the needed data, there are other framework-level approaches to OSIV. The Seam Framework was built by some of the same developers as Hibernate, and it solves the problem by introducing the notion of a conversation. Let me know in the comments below your thoughts on and experiences with OSIV. Thanks for reading.
April 30, 2014
by Vasco Cavalheiro
· 18,703 Views · 3 Likes
Reading data from a Google spreadsheet using Java
System Requirements: Eclipse Kepler Service Release 2; JDK 1.5 or above; the Google App Engine SDK installed in Eclipse - this is required for the second version of this example. Create a Google spreadsheet - log in to your Google account and create a new spreadsheet; if you want to read an existing one, put that URL in SPREADSHEET_URL. Once you create a new spreadsheet, its URL will look like https://docs.google.com/spreadsheets/d/1L8xtAJfOObsXL-XemliUV10wkDHQNxjn6jKS4XwzYZ8/ - but don't put this URL in SPREADSHEET_URL. Use the URL below, simply changing the key part:

    public static final String SPREADSHEET_URL = "https://spreadsheets.google.com/feeds/spreadsheets/1L8xtAJfOObsXL-XemliUV10wkDHQNxjn6jKS4XwzYZ8"; // Fill in google spreadsheet URI

    package org.gopaldas.readsps;

    import java.io.IOException;
    import java.net.URL;

    import com.google.gdata.client.spreadsheet.SpreadsheetService;
    import com.google.gdata.data.spreadsheet.ListEntry;
    import com.google.gdata.data.spreadsheet.ListFeed;
    import com.google.gdata.data.spreadsheet.SpreadsheetEntry;
    import com.google.gdata.data.spreadsheet.WorksheetEntry;
    import com.google.gdata.util.ServiceException;

    public class ReadSpreadsheet {

        public static final String GOOGLE_ACCOUNT_USERNAME = "xxxx@gmail.com"; // Fill in google account username
        public static final String GOOGLE_ACCOUNT_PASSWORD = "xxxx"; // Fill in google account password
        public static final String SPREADSHEET_URL = "https://spreadsheets.google.com/feeds/spreadsheets/1L8xtAJfOObsXL-XemliUV10wkDHQNxjn6jKS4XwzYZ8"; // Fill in google spreadsheet URI

        public static void main(String[] args) throws IOException, ServiceException {
            // Our view of Google Spreadsheets as an authenticated Google user.
            SpreadsheetService service = new SpreadsheetService("Print Google Spreadsheet Demo");

            // Login and prompt the user to pick a sheet to use.
            service.setUserCredentials(GOOGLE_ACCOUNT_USERNAME, GOOGLE_ACCOUNT_PASSWORD);

            // Load sheet
            URL metafeedUrl = new URL(SPREADSHEET_URL);
            SpreadsheetEntry spreadsheet = service.getEntry(metafeedUrl, SpreadsheetEntry.class);
            URL listFeedUrl = ((WorksheetEntry) spreadsheet.getWorksheets().get(0)).getListFeedUrl();

            // Print entries
            ListFeed feed = (ListFeed) service.getFeed(listFeedUrl, ListFeed.class);
            for (ListEntry entry : feed.getEntries()) {
                System.out.println("new row");
                for (String tag : entry.getCustomElements().getTags()) {
                    System.out.println(" " + tag + ": " + entry.getCustomElements().getValue(tag));
                }
            }
        }
    }
April 29, 2014
by Gopal Das
· 39,631 Views
Java EE: The Basics
Wanted to go through some of the basic tenets and the technical terminology related to Java EE. For many people, Java EE/J2EE still means servlets, JSPs, or maybe Struts at best. No offence or pun intended! This is not a Java EE 'bible' by any means - I am not capable enough of writing such a thing! So let us line up the 'keywords' related to Java EE and then look at them one by one:

- Java EE
- Java EE APIs (specifications)
- Containers
- Services
- Multitiered applications
- Components

Let's try to elaborate on the above-mentioned points. OK, so what is Java EE? 'EE' stands for Enterprise Edition. That essentially makes Java EE - Java Enterprise Edition. If I had to summarize Java EE in a couple of sentences, it would go something like this: "Java EE is a platform which defines 'standard specifications/APIs' which are then implemented by vendors and used for the development of enterprise (distributed, 'multi-tiered', robust) 'applications'. These applications are composed of modules or 'components' which use Java EE 'containers' as their run-time infrastructure."

What is this 'standardized platform' based upon? What does it constitute? The platform revolves around 'standard' specifications or APIs. Think of these as contracts defined by a standards body, e.g. Enterprise JavaBeans (EJB), the Java Persistence API (JPA), the Java Message Service (JMS), etc. These contracts/specifications/APIs are implemented by different vendors, e.g. GlassFish, Oracle WebLogic, Apache TomEE, etc.

Alright. What about containers? Containers can be visualized as 'virtual/logical partitions'. Each container supports a subset of the APIs/specifications defined by the Java EE platform, and they provide run-time 'services' to the 'applications' which they host. The Java EE specification lists 4 types of containers:

- EJB container
- Web container
- Application client container
- Applet container

(Image: Java EE containers)

I am not going to dwell on the details of these containers in this post.

Services?? Well, 'services' are nothing but the result of the vendor implementations of the standard 'specifications' (mentioned above). Examples of specification implementations are Jersey for JAX-RS (RESTful services), Tyrus (WebSockets), EclipseLink (JPA), Weld (CDI), etc. The 'container' is the interface between the deployed application (the 'service' consumer) and the application server. Here is a list of 'services' which are rendered by the 'container' to the underlying 'components' (this is not an exhaustive list):

- Persistence - offered by the Java Persistence API (JPA), which drives object-relational mapping (ORM) and provides an abstraction for database operations.
- Messaging - the Java Message Service (JMS) provides asynchronous messaging between disparate parts of your applications.
- Contexts & Dependency Injection - CDI provides loosely coupled and type-safe injection of resources.
- Web Services - JAX-RS and JAX-WS provide support for REST and SOAP style services respectively.
- Transactions - provided by the Java Transaction API (JTA) implementation.

What is a typical Java EE 'application'? What does it comprise? Applications are composed of different 'components' which in turn are supported by their corresponding 'container'. Supported 'component' types are:

- Enterprise applications - make use of specifications like EJB, JMS, JPA, etc. and are executed within an EJB container.
- Web applications - leverage the Servlet API, JSP, JSF, etc. and are supported by a web container.
- Application clients - executed on the client side.
They need an application client container, which has a set of supported libraries and executes in a Java SE environment.

- Applets - these are GUI applications which execute in a web browser.

How are Java EE applications structured? As far as Java EE 'application' architecture is concerned, they generally tend to follow the n-tier model consisting of a client tier, a server tier, and of course the database (back-end) tier:

- Client tier - consists of web browsers or GUI (Swing, JavaFX) based clients. Web browsers tend to talk to the 'web components' in the server tier, while the GUI clients interact directly with the 'business' layer within the server tier.
- Server tier - this tier comprises the dynamic web components (JSP, JSF, servlets) and the business layer driven by the EJB, JMS, JPA, and JTA specifications.
- Database tier - contains 'enterprise information systems' backed by databases or even legacy data repositories.

(Image: generic 3-tier Java EE application architecture)

Java EE - bare bones and basics, as quickly and briefly as I possibly could. That's all for now! :-) Stay tuned for more Java EE content, specifically around the latest and greatest version of the Java EE platform --> Java EE 7. Happy reading!
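To make the "specification vs. implementation" idea concrete, here is a minimal sketch of a component a web container would host - a JAX-RS resource. The class and path are made up; any Java EE server bundling a JAX-RS implementation (GlassFish, TomEE, etc.) should run something like it:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;

    // The container discovers this resource and maps HTTP requests to it;
    // the JAX-RS API is the specification, Jersey (for example) the implementation.
    @Path("/greetings")
    public class GreetingResource {

        @GET
        @Produces("text/plain")
        public String greet() {
            return "Hello from a Java EE container";
        }
    }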
April 29, 2014
by Abhishek Gupta
· 39,884 Views · 3 Likes
The 7 Log Management Tools Java Developers Should Know
Splunk vs. Sumo Logic vs. Logstash vs. Graylog vs. Loggly vs. Papertrail vs. Splunk>Storm. Splunk, Sumo Logic, Logstash, Graylog, Loggly, Papertrail - did I miss someone? I'm pretty sure I did. Logs are like fossil fuels - we've been wanting to get rid of them for the past 20 years, but we're not quite there yet. Well, if that's the case, I want a BMW! To deal with the growth of log data, a host of log management and analysis tools have been built over the last few years to help developers and operations make sense of the growing data. I thought it'd be interesting to look at our options and at each tool's selling point, from a developer's standpoint.

Splunk

As the biggest tool in this space, I decided to put Splunk in a category of its own. That's not to say it's the best tool for what you need, but more to give credit to a product which essentially created a new category.

Pros: Splunk is probably the most feature-rich solution in the space. It's got hundreds of apps (I counted 537) to make sense of almost every format of log data, from security to business analytics to infrastructure monitoring. Splunk's search and charting tools are feature-rich to the point that there's probably no set of data you can't get to through its UI or APIs.

Cons: Splunk has two major cons. The first, which is more subjective, is that it's an on-premise solution, which means that setup costs in terms of money and complexity are high. To deploy in a high-scale environment you will need to install and configure a dedicated cluster. As a developer, it's usually something you can't or don't want to do as your first choice. Splunk's second con is that it's expensive. To support a real-world application you're looking at tens of thousands of dollars, which most likely means you'll need sign-offs from higher-ups in your organization, and the process is going to be slow. If you've got a new app and you want something fast that you can quickly spin up and ramp as things progress - keep reading. Some more enterprise log analyzers can be found here.

SaaS Log Analyzers

Sumo Logic

Sumo was founded as a SaaS version of Splunk, going so far as to imitate some of Splunk's features and visuals early on. Having said that, SL has developed into a full-fledged enterprise-class log management solution.

Pros: SL is chock-full of features to reduce, search, and chart mass amounts of data. Out of all the SaaS log analyzers, it's probably the most feature-rich. Also, being a SaaS offering inherently means setup and ongoing operation are easier. One of Sumo Logic's main points of attraction is the ability to establish baselines and to actively notify you when key metrics change after an event such as a new version rollout or a breach attempt.

Cons: This one is shared across all SaaS log analyzers: you need to get the data to the service to actually do something with it. This means you'll be looking at possibly GBs (or more) uploaded from your servers. This can create issues on multiple fronts - as a developer, if you're logging sensitive or PII data you need to make sure it's redacted. There may be a lag between the time data is logged and the time it's visible to the service. There's also additional overhead on your machines transmitting GBs of data, which really depends on your logging throughput. Sumo's pricing is also not transparent, which means you might be looking at a buying process which is more complex than swiping your team's credit card to get going.
Loggly

Loggly is also a robust log analyzer, focusing on simplicity and ease of use for a DevOps audience.

Pros: Whereas Sumo Logic has a strong enterprise and security focus, Loggly is geared more towards helping DevOps find and fix operational problems. This makes it very developer-friendly. Things like creating custom performance and DevOps dashboards are super-easy to do. Pricing is also transparent, which makes getting started easier.

Cons: Don't expect Loggly to scale into a full-blown infrastructure, security, or analytics solution. If you need forensics or infrastructure monitoring, you're in the wrong place. This is a tool mainly for DevOps to parse data coming from your app servers. Anything beyond that you'll have to build yourself.

Papertrail

Papertrail is a simple way to look and search through logs from multiple machines, in one consolidated, easy-to-use interface. Think of it like tailing your log in the cloud, and you won't be too far off.

Pros: PT is what it is: a simple way to look at log files from multiple machines in a singular view in the cloud. The UX itself is very similar to looking at a log on your machine, and so are the search commands. It aims to do something simple and useful, and does it elegantly. It's also very affordable.

Cons: PT is mostly text-based. Looking for any advanced integrations, predictive or reporting capabilities? You're barking up the wrong tree.

Splunk>Storm

This is Splunk's little (some may say step-) SaaS brother. It's a pretty similar offering that's hosted on Splunk's servers.

Pros: Storm lets you experiment with Splunk without having to install the actual software on-premise, and it contains many of the features available in the full version.

Cons: This isn't really a commercial offering, and you're limited in the amount of data you can send. It seems to be more of an online limited version of Splunk meant to help people test out the product without having to deploy first. A new service called Splunk Cloud is aimed at providing a full-blown Splunk SaaS experience.

Open-Source Analyzers

Logstash

Logstash is an open-source tool for collecting and managing log files. It's part of an open-source stack which includes Elasticsearch for indexing and searching through data and Kibana for charting and visualizing data. Together they form a powerful log management solution.

Pros: Being an open-source solution means you're inherently getting a lot of control and a very good price. Logstash uses three mature and powerful components, all heavily maintained, to create a very robust and extensible package. For an open-source solution, it's also very easy to install and start using. We use Logstash and love it.

Cons: As Logstash is essentially a stack, you're dealing with three different products. That means extensibility also becomes complex: Logstash filters are written in Ruby, Kibana is pure JavaScript, and Elasticsearch has its own REST API as well as JSON templates. When you move to production, you'll also need to separate the three onto different machines, which adds to the complexity.

Graylog2

A fairly new player in the space, GL2 is an open-source log analyzer backed by MongoDB as well as Elasticsearch (similar to Logstash) for storing and searching through log errors. It's mainly focused on helping developers detect and fix errors in their apps. Also in this category you can find Fluentd and Kafka, one of whose main use cases is also storing log data. Phew, so many choices!
Takipi for Logs

While this post is not about Takipi, I thought there's one feature it has which you might find relevant to all of this. The biggest disadvantage of all log analyzers, and of log files in general, is that the right data has to be put there by you first. From a dev perspective, it means that if an exception isn't logged, or the variable data you need to understand why it happened isn't there, no log file or analyzer in the world can help you. Production debugging sucks. One of the things we've added to Takipi is the ability to jump into a recorded debugging session straight from a log file error. This means that for every log error you can see the actual source code and variable values at the moment of error. You can learn more about it here. This is one post where I would love to hear from you about your experiences with some of the tools mentioned (and some that I didn't). I'm sure there are things you would disagree with or would like to correct me on - so go ahead, the comment section is below and I would love to hear from you. Originally posted on the Takipi blog.
April 29, 2014
by Chen Harel
· 37,214 Views
HashMap Performance Improvements in Java 8
See how HashMap performance has been improved with new features of Java 8.
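The headline change in JDK 8 is that a bucket whose keys all collide is converted from a linked list into a balanced tree once it grows past a small threshold, turning worst-case lookups from O(n) into O(log n). Here is a minimal sketch of the degenerate case the change addresses; the BadKey class with a deliberately constant hashCode is made up for illustration (implementing Comparable lets the tree order the colliding keys):

    import java.util.HashMap;
    import java.util.Map;

    public class CollisionDemo {

        // Hypothetical key type whose hashCode always collides.
        static final class BadKey implements Comparable<BadKey> {
            final int id;
            BadKey(int id) { this.id = id; }
            @Override public int hashCode() { return 42; } // every key lands in one bucket
            @Override public boolean equals(Object o) {
                return o instanceof BadKey && ((BadKey) o).id == id;
            }
            @Override public int compareTo(BadKey other) { return Integer.compare(id, other.id); }
        }

        public static void main(String[] args) {
            Map<BadKey, Integer> map = new HashMap<>();
            for (int i = 0; i < 100_000; i++) {
                map.put(new BadKey(i), i);
            }
            // On JDK 8 the colliding bucket is a balanced tree, so this lookup is
            // O(log n); on JDK 7 it walks a 100,000-element linked list.
            System.out.println(map.get(new BadKey(99_999)));
        }
    }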
April 23, 2014
by Tomasz Nurkiewicz
· 135,677 Views · 60 Likes
Java 7 vs. Java 8: Performance Benchmarking of Fork/Join
With the recent release of Java 8, developers are still just beginning to assess the strengths and weaknesses of the new platform. The most pressing question is: does Java 8 have the fastest JVM so far? A good way to assess the progress of Java 8 is to test its ability to work with something that was new to Java 7: fork/join. Oleg Shelajev uses the "infamous" Java Microbenchmark Harness project (JMH) to create a benchmark test for the two most recent versions of Java. But before implementing the benchmark, he takes the time to give a brief overview of fork/join and how it changes between Java 7 and 8. A graph of the results is available in the full article. Based on these results, Oleg recommends taking a chance and upgrading to Java 8, especially if you are working with MapReduce or fork/join. This is his interpretation of the data that led him to that conclusion: "One can see that the baseline results, which show the throughput of running the math directly in a single thread, do not differ between JDK 7 and 8. However, when we include the overhead of managing recursive tasks and going through a ForkJoin execution, then Java 8 is much faster. The numbers for this simple benchmark suggest that the overhead of managing ForkJoin tasks is around 35% more performant in the latest release." Check out the full article! It is very informative, has great visuals, and tackles complexity with clarity.
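For readers who have not tried fork/join yet, here is a minimal, self-contained sketch of the kind of RecursiveTask a benchmark like this exercises; the array size and split threshold are arbitrary choices:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000;
        private final long[] values;
        private final int from, to;

        SumTask(long[] values, int from, int to) {
            this.values = values;
            this.from = from;
            this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {
                long sum = 0;
                for (int i = from; i < to; i++) sum += values[i];
                return sum;
            }
            int mid = (from + to) >>> 1;
            SumTask left = new SumTask(values, from, mid);
            SumTask right = new SumTask(values, mid, to);
            left.fork();                          // run the left half asynchronously
            return right.compute() + left.join(); // compute the right half, then join
        }

        public static void main(String[] args) {
            long[] values = new long[1_000_000];
            for (int i = 0; i < values.length; i++) values[i] = i;
            long sum = new ForkJoinPool().invoke(new SumTask(values, 0, values.length));
            System.out.println(sum); // 499999500000
        }
    }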
April 22, 2014
by Sarah Ervin
· 60,121 Views · 1 Like
Java Regular Expressions to Validate Credit Cards
Visa Card: ^4[0-9]{12}(?:[0-9]{3})?$
Master Card: ^5[1-5][0-9]{14}$
Amex Card: ^3[47][0-9]{13}$
Carte Blanche Card: ^389[0-9]{11}$
Diners Club Card: ^3(?:0[0-5]|[68][0-9])[0-9]{11}$
Discover Card: ^65[4-9][0-9]{13}|64[4-9][0-9]{13}|6011[0-9]{12}|(622(?:12[6-9]|1[3-9][0-9]|[2-8][0-9][0-9]|9[01][0-9]|92[0-5])[0-9]{10})$
JCB Card: ^(?:2131|1800|35\d{3})\d{11}$
Visa Master Card: ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14})$
Insta Payment Card: ^63[7-9][0-9]{13}$
Laser Card: ^(6304|6706|6709|6771)[0-9]{12,15}$
Maestro Card: ^(5018|5020|5038|6304|6759|6761|6763)[0-9]{8,15}$
Solo Card: ^(6334|6767)[0-9]{12}|(6334|6767)[0-9]{14}|(6334|6767)[0-9]{15}$
Switch Card: ^(4903|4905|4911|4936|6333|6759)[0-9]{12}|(4903|4905|4911|4936|6333|6759)[0-9]{14}|(4903|4905|4911|4936|6333|6759)[0-9]{15}|564182[0-9]{10}|564182[0-9]{12}|564182[0-9]{13}|633110[0-9]{10}|633110[0-9]{12}|633110[0-9]{13}$
Union Pay Card: ^(62[0-9]{14,17})$
Korean Local Card: ^9[0-9]{15}$
BCGlobal: ^(6541|6556)[0-9]{12}$

As you can see, regular expressions are incredibly powerful, and the above examples are very basic. Regular expressions are essential for all sorts of applications, from web scraping to form validation and pattern matching. Hopefully you will find this resource useful and will come back to refer to it again and again.
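To use one of these patterns in Java, compile it once and reuse the Pattern instance. A minimal sketch using the Visa expression above; note that these expressions validate format only and do not perform a Luhn checksum:

    import java.util.regex.Pattern;

    public class CardValidator {

        private static final Pattern VISA = Pattern.compile("^4[0-9]{12}(?:[0-9]{3})?$");

        public static boolean isVisa(String number) {
            // Strip spaces and dashes before matching.
            return VISA.matcher(number.replaceAll("[\\s-]", "")).matches();
        }

        public static void main(String[] args) {
            System.out.println(isVisa("4111 1111 1111 1111")); // true
            System.out.println(isVisa("5500000000000004"));    // false (MasterCard range)
        }
    }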
April 22, 2014
by Jagadeesh Motamarri
· 12,301 Views · 1 Like
Java String length confusion
Facts and Terminology

As you probably know, Java uses UTF-16 to represent Strings. In order to understand the confusion about String.length(), you need to be familiar with some encoding/Unicode terms:

Code Point: a unique integer value which represents a character in the code space.
Code Unit: a bit sequence used to encode characters (code points). One or more code units may be required to represent a code point.

UTF-16

Unicode code points are logically divided into 17 planes. The first plane, the Basic Multilingual Plane (BMP), contains the “classic” characters (from U+0000 to U+FFFF). The other planes contain the supplementary characters (from U+10000 to U+10FFFF). Characters (code points) from the first plane are encoded in one 16-bit code unit with the same value. Supplementary characters (code points) are encoded in two code units (encoding-specific, see the wiki for the explanation).

Example

Character: A
Unicode Code Point: U+0041
UTF-16 Code Unit(s): 0041

Character: Mathematical double-struck capital A
Unicode Code Point: U+1D538
UTF-16 Code Unit(s): D835 DD38

As you can see here, there are characters which are encoded in two code units.

String.length()

Let's take a look at the Javadoc of the length() method:

    public int length()
    Returns the length of this string. The length is equal to the number of Unicode code units in the string.

So if you have one supplementary character which consists of two code units, the length of that single character is two.

    // Mathematical double-struck capital A
    String str = "\uD835\uDD38";
    System.out.println(str);
    System.out.println(str.length()); // prints 2

Which is correct according to the documentation, but maybe it's not what you expected.

Solution

You need to count the code points, not the code units:

    String str = "\uD835\uDD38";
    System.out.println(str);
    System.out.println(str.codePointCount(0, str.length())); // prints 1

See: codePointCount(int beginIndex, int endIndex)

References/Sources: The Java Language Specification; Unicode Glossary: Code Point; Wiki: Code Point; Unicode Glossary: Code Unit; Wiki: Code Unit; Wiki: Unicode; Wiki: UTF-16; Supplementary Characters in the Java Platform; Wiki: Unicode Planes
April 21, 2014
by Jonatan Ivanov
· 17,664 Views · 7 Likes
Handy New Map Default Methods in JDK 8
The Map interface provides some handy new methods in JDK 8. Because the Map methods I cover in this post are implemented as default methods, all existing implementations of the Map interface enjoy the default behaviors defined in the default methods without any new code. The JDK 8-introduced Map methods covered in this post are getOrDefault(Object, V), putIfAbsent(K, V), remove(Object, Object), replace(K, V), and replace(K, V, V).

Example Map for Demonstrations

I will be using the Map declared and initialized as shown in the following code throughout the examples in this blog post. The statesAndCapitals field is a class-level static field. I have intentionally included only a small subset of the fifty states in the United States for reading clarity and to allow easier demonstration of some of the new JDK 8 Map default methods.

    private final static Map<String, String> statesAndCapitals;

    static {
        statesAndCapitals = new HashMap<>();
        statesAndCapitals.put("Alaska", "Anchorage");
        statesAndCapitals.put("California", "Sacramento");
        statesAndCapitals.put("Colorado", "Denver");
        statesAndCapitals.put("Florida", "Tallahassee");
        statesAndCapitals.put("Nevada", "Las Vegas");
        statesAndCapitals.put("New Mexico", "Sante Fe");
        statesAndCapitals.put("Utah", "Salt Lake City");
        statesAndCapitals.put("Wyoming", "Cheyenne");
    }

Map.getOrDefault(Object, V)

Map's new method getOrDefault(Object, V) allows the caller to specify in a single statement to get the value of the map that corresponds to the provided key or else return a provided "default value" if no match is found for the provided key. The next code listing compares how checking for a value matching a provided key in a map, or else using a default if no match is found, was implemented before JDK 8 and how it can now be implemented with JDK 8.

    /*
     * Demonstrate Map.getOrDefault and compare to pre-JDK 8 approach. The JDK 8
     * addition of Map.getOrDefault requires fewer lines of code than the
     * traditional approach and allows the returned value to be assigned to a
     * "final" variable.
     */
    // pre-JDK 8 approach
    String capitalGeorgia = statesAndCapitals.get("Georgia");
    if (capitalGeorgia == null) {
        capitalGeorgia = "Unknown";
    }
    // JDK 8 approach
    final String capitalWisconsin = statesAndCapitals.getOrDefault("Wisconsin", "Unknown");

The Apache Commons class DefaultedMap provides functionality similar to the new Map.getOrDefault(Object, V) method. The Groovy GDK includes a similar method for Groovy, Map.get(Object, Object), but that one's behavior is a bit different because it not only returns the provided default if the "key" is not found, but also adds the key with the default value to the underlying map.

Map.putIfAbsent(K, V)

Map's new method putIfAbsent(K, V) has Javadoc advertising its default implementation equivalent:

    The default implementation is equivalent to, for this map:
    V v = map.get(key);
    if (v == null)
        v = map.put(key, value);
    return v;

This is illustrated with another code sample that compares the pre-JDK 8 approach to the JDK 8 approach.

    /*
     * Demonstrate Map.putIfAbsent and compare to pre-JDK 8 approach. The JDK 8
     * addition of Map.putIfAbsent requires fewer lines of code than the
     * traditional approach and allows the returned value to be assigned to a
     * "final" variable.
     */
    // pre-JDK 8 approach
    String capitalMississippi = statesAndCapitals.get("Mississippi");
    if (capitalMississippi == null) {
        capitalMississippi = statesAndCapitals.put("Mississippi", "Jackson");
    }
    // JDK 8 approach
    final String capitalNewYork = statesAndCapitals.putIfAbsent("New York", "Albany");

Alternate solutions in the Java space before the addition of this putIfAbsent method are discussed in the StackOverflow thread "Java map.get(key) - automatically do put(key) and return if key doesn't exist?". It's worth noting that before JDK 8, the ConcurrentMap interface (which extends Map) already provided a putIfAbsent(K, V) method.

Map.remove(Object, Object)

Map's new remove(Object, Object) method goes beyond the long-available Map.remove(Object) method by removing a map entry only if both the provided key and provided value match an entry in the map (the previously available version only looked for a "key" match to remove). The Javadoc comment for this method explains how the default method's implementation works in terms of equivalent pre-JDK 8 Java code:

    The default implementation is equivalent to, for this map:
    if (map.containsKey(key) && Objects.equals(map.get(key), value)) {
        map.remove(key);
        return true;
    } else
        return false;

A concrete comparison of the new approach to the pre-JDK 8 approach is shown in the next code listing.

    /*
     * Demonstrate Map.remove(Object, Object) and compare to pre-JDK 8 approach.
     * The JDK 8 addition of Map.remove(Object, Object) requires fewer lines of
     * code than the traditional approach and allows the returned value to be
     * assigned to a "final" variable.
     */
    // pre-JDK 8 approach
    boolean removed = false;
    if (statesAndCapitals.containsKey("New Mexico")
            && Objects.equals(statesAndCapitals.get("New Mexico"), "Sante Fe")) {
        statesAndCapitals.remove("New Mexico", "Sante Fe");
        removed = true;
    }
    // JDK 8 approach
    final boolean removedJdk8 = statesAndCapitals.remove("California", "Sacramento");

Map.replace(K, V)

The first of the two new Map "replace" methods sets the specified value to be mapped to the specified key only if the specified key already exists with some mapped value. The Javadoc comment explains the Java equivalent of this default method implementation:

    The default implementation is equivalent to, for this map:
    if (map.containsKey(key)) {
        return map.put(key, value);
    } else
        return null;

The comparison of this new approach to the pre-JDK 8 approach is shown next.

    /*
     * Demonstrate Map.replace(K, V) and compare to pre-JDK 8 approach. The JDK 8
     * addition of replace(K, V) requires fewer lines of code than the traditional
     * approach and allows the returned value to be assigned to a "final"
     * variable.
     */
    // pre-JDK 8 approach
    String replacedCapitalCity;
    if (statesAndCapitals.containsKey("Alaska")) {
        replacedCapitalCity = statesAndCapitals.put("Alaska", "Juneau");
    }
    // JDK 8 approach
    final String replacedJdk8City = statesAndCapitals.replace("Alaska", "Juneau");

Map.replace(K, V, V)

The second newly added Map "replace" method is narrower in its interpretation of which existing values are replaced. While the method just covered replaces the value mapped to the specified key whenever the key is present, this "replace" method that accepts an additional (third) argument will only replace the value of a mapped entry that has both a matching key and a matching value.
The Javadoc comment shows the default method's implementation:

    The default implementation is equivalent to, for this map:
    if (map.containsKey(key) && Objects.equals(map.get(key), value)) {
        map.put(key, newValue);
        return true;
    } else
        return false;

My comparison of this approach to the pre-JDK 8 approach is shown in the next code listing.

    /*
     * Demonstrate Map.replace(K, V, V) and compare to pre-JDK 8 approach. The
     * JDK 8 addition of replace(K, V, V) requires fewer lines of code than the
     * traditional approach and allows the returned value to be assigned to a
     * "final" variable.
     */
    // pre-JDK 8 approach
    boolean replaced = false;
    if (statesAndCapitals.containsKey("Nevada")
            && Objects.equals(statesAndCapitals.get("Nevada"), "Las Vegas")) {
        statesAndCapitals.put("Nevada", "Carson City");
        replaced = true;
    }
    // JDK 8 approach
    final boolean replacedJdk8 = statesAndCapitals.replace("Nevada", "Las Vegas", "Carson City");

Observations and Conclusion

There are several observations to make from this post:

- The Javadoc comments for these new JDK 8 Map methods are very useful, especially in terms of describing how the new methods behave in terms of pre-JDK 8 code. I discussed these methods' Javadoc in a more general discussion of JDK 8 Javadoc-based API documentation.
- As the equivalent Java code in these methods' Javadoc comments indicates, these new methods do not generally check for null before accessing map keys and values. Therefore, one can expect the same issues with nulls using these methods as one would find when using "equivalent" code as shown in the Javadoc comments. In fact, the Javadoc comments generally warn about the potential for NullPointerException and issues related to some Map implementations allowing null keys and values and some not.
- The new Map methods discussed in this post are "default methods," meaning that implementations of Map "inherit" these implementations automatically.
- The new Map methods discussed in this post allow for cleaner and more concise code. In most of my examples, they allowed the client code to be converted from multiple state-impacting statements to a single statement that can set a local variable once and for all.

The new Map methods covered in this post are not ground-breaking or earth-shattering, but they are conveniences that many Java developers previously implemented with more verbose code, wrote their own similar methods for, or used a third-party library for. JDK 8 brings these standardized methods to the Java masses without the need for custom implementation or third-party frameworks. Because default methods are the implementation mechanism, even Map implementations that have been around for quite a while suddenly and automatically have access to these new methods without any code changes to the implementations.
April 21, 2014
by Dustin Marx
· 18,341 Views · 1 Like
Writing effective custom queries in Hibernate
There are many instances where we will have to write custom queries with Hibernate. I always hated writing custom queries, for reasons like the following: Hibernate returns a List of Object arrays (List<Object[]>), which you then have to convert to your own objects by hand, casting each column to the expected type. For example:

    public List<SalaryByDepartment> getSalaryByDepartment() {
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            // Same aggregate query as the 'select new' version below
            List<?> contactList = session.createQuery(
                    "select department.id, department.departmentName, sum(employee.salary) "
                    + "from Employee employee, Department department "
                    + "where employee.department.id = department.id group by department.id").list();
            List<SalaryByDepartment> salaryByDepartments = new ArrayList<>();
            for (Object object : contactList) {
                Object[] result = (Object[]) object;
                SalaryByDepartment salaryByDepartment = new SalaryByDepartment();
                salaryByDepartment.setDeptId((Integer) result[0]);
                salaryByDepartment.setDepartmentName((String) result[1]);
                salaryByDepartment.setSalary((Double) result[2]);
                salaryByDepartments.add(salaryByDepartment);
            }
            return salaryByDepartments;
        } catch (HibernateException e) {
            e.printStackTrace();
            return null;
        } finally {
            session.close();
        }
    }

There is a better mechanism for writing custom queries, called 'select new', but it is not widely used for some reason. It might be overkill to write all custom queries using this mechanism, but it is good for queries which return more than 2 columns.

    public List<SalaryByDepartment> getNewSalaryByDepartment() {
        Session session = HibernateUtil.getSessionFactory().openSession();
        try {
            List<SalaryByDepartment> salaryByDept = session.createQuery("select "
                    + "new hsqldb.results.SalaryByDepartment(department.id, department.departmentName, sum(employee.salary)) "
                    + "from Employee employee, Department department "
                    + "where employee.department.id = department.id group by department.id").list();
            return salaryByDept;
        } catch (HibernateException e) {
            e.printStackTrace();
            return null;
        } finally {
            session.close();
        }
    }
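One detail worth calling out: 'select new' requires the target class to have a constructor whose signature matches the selected columns. A sketch of what SalaryByDepartment would need to look like; the field names here are inferred from the setters used above:

    public class SalaryByDepartment {

        private Integer deptId;
        private String departmentName;
        private Double salary;

        // Used by the manual-mapping version.
        public SalaryByDepartment() {
        }

        // Constructor matching the 'select new' column list: (id, name, sum(salary)).
        public SalaryByDepartment(Integer deptId, String departmentName, Double salary) {
            this.deptId = deptId;
            this.departmentName = departmentName;
            this.salary = salary;
        }

        // getters and setters omitted for brevity
    }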
April 21, 2014
by Amar Mattey
· 26,909 Views
Be Careful with Java Path.endsWith(String) Usage
If you need to compare java.nio.file.Path objects, be aware that Path.endsWith(String) will ONLY match another sub-element of the Path in your original path, not the string portion of the file name! If you want to match the name as a string, you need to call Path.toString() first. For example:

    // Match all jar files.
    Files.walk(dir).forEach(path -> {
        if (path.toString().endsWith(".jar"))
            System.out.println(path);
    });

Without the toString(), you will spend many fruitless hours wondering why your program didn't work.
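To make the element-wise semantics concrete, here is a minimal sketch (the path is made up):

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PathEndsWithDemo {
        public static void main(String[] args) {
            Path path = Paths.get("/opt/app/lib/tools.jar");
            // endsWith(String) compares whole path elements, not characters:
            System.out.println(path.endsWith("tools.jar"));       // true  (last element matches)
            System.out.println(path.endsWith(".jar"));            // false (no element named ".jar")
            System.out.println(path.endsWith("lib/tools.jar"));   // true  (last two elements match)
            System.out.println(path.toString().endsWith(".jar")); // true  (plain string comparison)
        }
    }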
April 19, 2014
by Zemian Deng
· 10,184 Views · 1 Like
article thumbnail
What's Wrong with Java 8: Currying vs Closures
There are many false ideas floating around about Java 8. Among these is the idea that Java 8 brings closures to Java.
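For readers who want the flavor of the argument, here is a minimal sketch of currying with Java 8's Function type (the example is illustrative, not taken from the article):

import java.util.function.Function;

public class CurryingDemo {
    public static void main(String[] args) {
        // A curried addition: a function that returns a function.
        Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;

        // Partial application fixes the first argument.
        Function<Integer, Integer> add5 = add.apply(5);

        System.out.println(add5.apply(3)); // 8
    }
}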
April 19, 2014
by Pierre-Yves Saumont
· 135,622 Views · 31 Likes
article thumbnail
Insert Embedded or Linked OLE Object in Word Files & EUDC Fonts Support in .NET & Java Apps
What's New in this Release? The Aspose development team is happy to announce the monthly release of Aspose.Words for Java & .NET 14.3.0. Aspose.Words now supports insertion of OLE objects such as another Microsoft Word document or a Microsoft Excel chart. A new public method, InsertOleObject, has been introduced in the DocumentBuilder class. This method can be used to insert an embedded or linked OLE object from a file into a Word document. Aspose.Words' rendering engine now partially supports EUDC (End-User-Defined Characters) fonts, mirroring how EUDC fonts work on Windows. In this first implementation, Aspose.Words uses a single EUDC font. When rendering a document to fixed-page formats, this font is searched among the specified font sources by the "EUDC" family name. Starting from Aspose.Words 14.3.0, the Best Fit position of data labels in pie charts is partially supported. In previous versions, labels with the best fit position were rendered as if they had the inside-end position. Currently a modified Open Office algorithm is used to set the best fit position of data labels. The new and improved features in this release are listed below:
  • Public API for insertion of OLE objects, both linked and embedded
  • Outline, Shadow, Reflection, Glow and Fill text effects for rendering text inside DrawingML shapes
  • EUDC fonts rendering partially supported
  • "PDF Logical Structure" export reworked, significantly improving memory usage
  • Support OLE embedding of documents and files
  • Write an article about how to work with Table of Contents in Aspose.Words
  • Add tag support
  • Support Best Fit position of pie chart data labels
  • Support style:text-decoration attribute of Paragraph tag
  • Preserve private characters in EUDC.TTE during rendering to PDF
  • Support HTML table row borders in HTML import
  • Support text outline effect
  • Support text fill effect
  • Support text shadow effect
  • Support text reflection effect
  • Support text glow effect
  • Support text effects applied to text in DML Shape or in SmartArt
  • Implement reading text effects from rPr in DML
  • Support inheriting styles from parent objects
  • Add image compression options for different image types
  • Add a link to the online documentation in the DLLs-only release
  • Default run properties lose lang attribute value on DOCX to DOC conversions
  • Padding for image in table is lost when rendering to PDF
  • Consider using "Don't vertically align cells containing floating objects" compatibility option when exporting to HTML
  • Shape in a table's cell improperly horizontally aligned to center after export to HTML is now fixed
  • Floating shape is now properly vertically positioned after export to HTML
  • Warning: Unknown ProgId value 'Visio.Drawing.11'. This might cause inaccessible OLE embedding
  • Charts (DrawingML) issue fixed; they now render correctly in the PDF file
  • Aspose.Words now works properly in the JDeveloper IDE
  • OLE object cannot be edited after re-saving the document
  • Diagram connectors are inverted/flipped after conversion from DOCX to PDF
  • WordArt letter-condensing issue is resolved
  • Loss of condensed character spacing is fixed
  • Relative hyperlink with Unicode is now properly saved to PDF
  • Add pre-built Document Explorer JAR to Java release
  • Aspose.Words does not take into account style set in .
  • Table now looks correct when converting HTML to DOC
  • Hyperlinks split into multiple fragments/links in output PDF
  • Horizontal table position is corrected
  • Document.UpdateFields does not update TOC in DOCX
  • Content in the output HTML is overlapped at many places
  • Different table justify alignment for different compatibilityMode values
  • CSS selectors now work for , , and elements
  • Chart Legend/Series now render correctly in the output PDF file
  • Data labels with best fit position are rendered at correct places in the chart
  • Offset of the hyperlink text line being off by a few pixels in output PDF
  • Hyperlinks split into multiple fragments/links in output PDF is fixed
  • Document.UpdateFields is enhanced and updates SUM formula fields
  • Paragraph's first-line indent increase when exported to HTML is fixed
  • Space before a paragraph following a floating element being too large after HTML to DOCX conversion is now fixed
  • Block-level SVG image becoming inline after HTML to DOCX conversion
  • Problem with vertical paragraph spacing when importing HTML using the InsertHtml method is resolved
  • Shape rotation is fixed after conversion from DOC to PDF
  • Text color changed after conversion from DOCX to WordML/DOC
  • Word table indentation is now corrected when is placed inside
  • Arrows on lines getting distorted when converting to PDF is now fixed
  • A space character exported to PDF output between Japanese and numeric characters is fixed
  • Line shape from header is merged with the top border of table in body
  • DOCX to WordML conversion issue with content control is resolved
  • Junk text rendered in fixed-page formats
  • Positions of some DrawingML circles are now preserved during rendering
  • A row and some content rendering at the bottom of the previous page is fixed
  • Header table rows and images are now preserved in PDF
  • List items issue fixed; they now line up correctly after conversion from RTF to HTML
  • Conversion from DOCM to DOC creating a corrupted document is now fixed
  • Hyperlink for an icon is now preserved during HTML to PDF conversion
  • Text overlapping is fixed after conversion from DOCX to PDF
  • Text position change is fixed after conversion from DOCX to PDF
  • Other recent bug fixes are also included in this release
Newly added documentation pages and articles: some new tips and articles have been added to the Aspose.Words for .NET documentation that briefly show how to use Aspose.Words for different tasks, such as the following:
  • How Aspose.Words Uses True Type Fonts
  • How to Extract Images from a Document
Overview: Aspose.Words is a word processing component that enables .NET, Java & Android applications to read, write and modify Word documents without using Microsoft Word. Other useful features include document creation, content and formatting manipulation, mail merge abilities, reporting features, TOC update/rebuild, embedded OOXML, footnote rendering and support of DOCX, DOC, WordprocessingML, HTML, XHTML, TXT and PDF formats (PDF requires Aspose.Pdf). It supports both 32-bit and 64-bit operating systems. You can even use Aspose.Words for .NET to build applications with Mono.
More about Aspose.Words
  • Aspose.Words for .NET Homepage
  • Java Word Library Homepage
  • Download Aspose.Words for .NET
  • Download Aspose.Words for Java
  • Demos for Aspose.Words
Contact Information: Aspose Pty Ltd, Suite 163, 79 Longueville Road, Lane Cove, NSW, 2066, Australia. Aspose - Your File Format Experts. sales@aspose.com, Phone: 888.277.6734, Fax: 866.810.9465
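The release notes don't include a code sample, but based on the description of the new DocumentBuilder method, usage presumably looks something like the sketch below. The method name comes from the announcement; the parameter list and file names are assumptions, so check the Aspose.Words API reference before relying on it.

import com.aspose.words.Document;
import com.aspose.words.DocumentBuilder;

public class InsertOleObjectDemo {
    public static void main(String[] args) throws Exception {
        Document doc = new Document();
        DocumentBuilder builder = new DocumentBuilder(doc);

        // Assumed signature: insertOleObject(fileName, isLinked, asIcon, presentationImage).
        // false/false here would embed the spreadsheet rather than link it or show an icon.
        builder.insertOleObject("spreadsheet.xlsx", false, false, null);

        doc.save("embedded-ole.docx");
    }
}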
April 18, 2014
by David Zondray
· 2,934 Views
article thumbnail
Creating Object Pool in Java
In this post, we will take a look at how we can create an object pool in Java. In recent times JVM performance has improved manifold, so object creation is no longer considered as expensive as it once was. But there are a few objects whose creation still seems somewhat costly, as they are not lightweight objects, e.g. database connection objects, parser objects, thread creation, etc. In any application we need to create multiple such objects. Since creation of such objects is costly, it's a sure hit to the performance of any application. It would be great if we could reuse the same object again and again; object pools are used for this purpose. Basically, an object pool can be visualized as a storage where such objects can be kept so that they can be used and reused dynamically. Object pools also control the life-cycle of pooled objects.

Now that we understand the requirement, let's get to the real stuff. Fortunately, there are various open source object pooling frameworks available, so we do not need to reinvent the wheel. In this post we will be using Apache Commons Pool to create our own object pool. At the time of writing this post, version 2.2 is the latest, so let us use this. The basic things we need to create are:
1. A pool to store heavyweight objects (pooled objects).
2. A simple interface, so that a client can
a.) borrow a pooled object for its use, and
b.) return the borrowed object after its use.

Let's start with parser objects. Parsers are normally designed to parse some document like XML files, HTML files or something else. Creating a new XML parser for each XML file (having the same structure) is really costly. One would like to reuse the same (or a few, in a concurrent environment) parser object(s) for XML parsing. In such a scenario, we can put some parser objects into a pool so that they can be reused as and when needed. Below is a simple parser declaration:

package blog.techcypher.parser;

/**
 * Abstract definition of Parser.
 *
 * @author abhishek
 */
public interface Parser<E, T> {
    /**
     * Parse the element E and set the result back into target object T.
     *
     * @param elementToBeParsed
     * @param result
     * @throws Exception
     */
    public void parse(E elementToBeParsed, T result) throws Exception;

    /**
     * Tells whether this parser is valid or not. This will ensure that we
     * will never be using an invalid/corrupt parser.
     *
     * @return
     */
    public boolean isValid();

    /**
     * Reset parser state back to the original, so that it will be as
     * good as a new parser.
     */
    public void reset();
}

Let's implement a simple XML parser over this as below:

package blog.techcypher.parser.impl;

import blog.techcypher.parser.Parser;

/**
 * Parser for parsing xml documents.
 *
 * @author abhishek
 *
 * @param <E>
 * @param <T>
 */
public class XmlParser<E, T> implements Parser<E, T> {
    private Exception exception;

    @Override
    public void parse(E elementToBeParsed, T result) throws Exception {
        try {
            System.out.println("[" + Thread.currentThread().getName()
                + "]: Parser Instance:" + this);
            // Do some real parsing stuff.
        } catch (Exception e) {
            this.exception = e;
            e.printStackTrace(System.err);
            throw e;
        }
    }

    @Override
    public boolean isValid() {
        return this.exception == null;
    }

    @Override
    public void reset() {
        this.exception = null;
    }
}

At this point, as we have a parser object, we should create a pool to store these objects. Here we will be using GenericObjectPool to store the parser objects. Apache Commons Pool already has built-in classes for pool implementation, and GenericObjectPool can be used to store any object.
Each pool can contain the same kind of object, and each has a factory associated with it. GenericObjectPool provides a wide variety of configuration options, including the ability to cap the number of idle or active instances and to evict instances as they sit idle in the pool. If you want to create multiple pools for different kinds of objects (e.g. parsers, converters, device connections, etc.) then you should use GenericKeyedObjectPool.

package blog.techcypher.parser.pool;

import org.apache.commons.pool2.PooledObjectFactory;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

import blog.techcypher.parser.Parser;

/**
 * Pool implementation for Parser objects.
 * It is an implementation of ObjectPool.
 *
 * It can be visualized as-
 * +-------------------------------------------------------------+
 * | ParserPool                                                  |
 * +-------------------------------------------------------------+
 * | [Parser@1, Parser@2,...., Parser@N]                         |
 * +-------------------------------------------------------------+
 *
 * @author abhishek
 *
 * @param <E>
 * @param <T>
 */
public class ParserPool<E, T> extends GenericObjectPool<Parser<E, T>> {

    /**
     * Constructor.
     *
     * It uses the default pool configuration provided by
     * apache-commons-pool2.
     *
     * @param factory
     */
    public ParserPool(PooledObjectFactory<Parser<E, T>> factory) {
        super(factory);
    }

    /**
     * Constructor.
     *
     * This can be used to have full control over the pool using a
     * configuration object.
     *
     * @param factory
     * @param config
     */
    public ParserPool(PooledObjectFactory<Parser<E, T>> factory, GenericObjectPoolConfig config) {
        super(factory, config);
    }
}

As we can see, the constructor of the pool requires a factory to manage the life-cycle of the pooled objects, so we need to create a parser factory which can create parser objects. Commons Pool provides a generic interface for defining a factory (PooledObjectFactory). A PooledObjectFactory creates and manages PooledObjects. These object wrappers maintain object-pooling state, enabling PooledObjectFactory methods to have access to data such as instance creation time or time of last use. A DefaultPooledObject is provided, with natural implementations of the pooling state methods.

The simplest way to implement a PooledObjectFactory is to have it extend BasePooledObjectFactory. This factory provides a makeObject() that returns wrap(create()), where create and wrap are abstract. We provide an implementation of create to create the underlying objects that we want to manage in the pool, and of wrap to wrap created instances in PooledObjects. So here is our factory implementation for parser objects:

package blog.techcypher.parser.pool;

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;

import blog.techcypher.parser.Parser;
import blog.techcypher.parser.impl.XmlParser;

/**
 * Factory to create parser object(s).
 *
 * @author abhishek
 *
 * @param <E>
 * @param <T>
 */
public class ParserFactory<E, T> extends BasePooledObjectFactory<Parser<E, T>> {

    @Override
    public Parser<E, T> create() throws Exception {
        return new XmlParser<E, T>();
    }

    @Override
    public PooledObject<Parser<E, T>> wrap(Parser<E, T> parser) {
        return new DefaultPooledObject<Parser<E, T>>(parser);
    }

    @Override
    public void passivateObject(PooledObject<Parser<E, T>> parser) throws Exception {
        parser.getObject().reset();
    }

    @Override
    public boolean validateObject(PooledObject<Parser<E, T>> parser) {
        return parser.getObject().isValid();
    }
}

Now, at this point we have successfully created our pool to store parser objects, and we have a factory as well to manage the life-cycle of the parser objects. You should notice that we have implemented a couple of extra methods:

1. boolean validateObject(PooledObject obj): This is used to validate an object borrowed from the pool or returned to the pool, depending on configuration. By default, validation is off. Implementing this ensures that the client will always get a valid object from the pool.

2. void passivateObject(PooledObject obj): This is called when returning an object back to the pool. In the implementation we can reset the object state, so that the object behaves as good as new on another borrow.

Since we have everything in place, let's create a test for this pool. Pool clients can:
1. Get an object by calling pool.borrowObject().
2. Return the object back to the pool by calling pool.returnObject(object).

Below is our code to test the parser pool:

package blog.techcypher.parser;

import static org.junit.Assert.fail;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import junit.framework.Assert;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.junit.Before;
import org.junit.Test;

import blog.techcypher.parser.pool.ParserFactory;
import blog.techcypher.parser.pool.ParserPool;

/**
 * Test case to test-
 * 1. object creation by factory
 * 2. object borrow from pool.
 * 3. returning object back to pool.
 *
 * @author abhishek
 */
public class ParserFactoryTest {
    private ParserPool<Object, Object> pool;
    private AtomicInteger count = new AtomicInteger(0);

    @Before
    public void setUp() throws Exception {
        GenericObjectPoolConfig config = new GenericObjectPoolConfig();
        config.setMaxIdle(1);
        config.setMaxTotal(1);
        /*---------------------------------------------------------------------+
        |TestOnBorrow=true --> To ensure that we get a valid object from pool  |
        |TestOnReturn=true --> To ensure that valid object is returned to pool |
        +---------------------------------------------------------------------*/
        config.setTestOnBorrow(true);
        config.setTestOnReturn(true);
        pool = new ParserPool<Object, Object>(new ParserFactory<Object, Object>(), config);
    }

    @Test
    public void test() {
        try {
            int limit = 10;
            ExecutorService es = new ThreadPoolExecutor(10, 10, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(limit));
            for (int i = 0; i < limit; i++) {
                Runnable r = new Runnable() {
                    @Override
                    public void run() {
                        Parser<Object, Object> parser = null;
                        try {
                            parser = pool.borrowObject();
                            count.getAndIncrement();
                            parser.parse(null, null);
                        } catch (Exception e) {
                            e.printStackTrace(System.err);
                        } finally {
                            if (parser != null) {
                                pool.returnObject(parser);
                            }
                        }
                    }
                };
                es.submit(r);
            }
            es.shutdown();
            try {
                es.awaitTermination(1, TimeUnit.MINUTES);
            } catch (InterruptedException ignored) {}
            System.out.println("Pool Stats:\n Created:[" + pool.getCreatedCount()
                + "], Borrowed:[" + pool.getBorrowedCount() + "]");
            Assert.assertEquals(limit, count.get());
            Assert.assertEquals(count.get(), pool.getBorrowedCount());
            Assert.assertEquals(1, pool.getCreatedCount());
        } catch (Exception ex) {
            fail("Exception:" + ex);
        }
    }
}

Result:

[pool-1-thread-1]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-2]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-3]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-4]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-5]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-8]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-7]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-9]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-6]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
[pool-1-thread-10]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52
Pool Stats:
 Created:[1], Borrowed:[10]

You can easily see that a single parser object was created and reused dynamically. Commons Pool 2 stands far better in terms of performance and scalability than Commons Pool 1, and version 2 also includes robust instance tracking and pool monitoring. Commons Pool 2 requires JDK 1.6 or above. There are lots of configuration options to control and manage the life-cycle of pooled objects. And so ends our long post… :-) Hope this article helped. Keep learning!
April 14, 2014
by Abhishek Kumar
· 100,666 Views · 9 Likes
article thumbnail
Compiling and Running Java Without an IDE
I’m going to discuss how to compile and run Java applications from the command line, without an IDE.
April 4, 2014
by Dustin Marx
· 58,541 Views · 9 Likes
article thumbnail
Common Misconceptions About Java
Java is the most widely used language in the world ([citation needed]), and everyone has an opinion about it. Because it is mainstream, it is usually mocked, sometimes rightly so, but sometimes the criticism just doesn't touch reality. I'll try to explain my favorite 5 misconceptions about Java.

Java is slow – that might have been true for Java 1.0, and it may initially sound logical, since Java is compiled not to binary but to bytecode, which is in turn interpreted. However, modern versions of the JVM are very, very optimized (JVM optimizations are a topic worth not just an article but a whole book) and this is no longer remotely true. As noted here, Java is even on par with C++ in some cases. And it is certainly not a good idea to joke about Java being slow if you are a Ruby or PHP developer.

Java is too verbose – here we need to separate the language from the JDK and from other libraries. There is some verbosity in the JDK (e.g. java.io), which is: 1. easily overcome with de-facto standard libraries like Guava, and 2. a good thing. As for language verbosity, the only reasonable point was anonymous classes, which are no longer an issue in Java 8 with the functional additions. Getters and setters, or Foo foo = new Foo() instead of using val – that is (possibly) boilerplate, but it's not verbose: it doesn't add conceptual weight to the code, and it doesn't take more time to write, read or understand. As for other libraries – it is indeed pretty scary to see a class like AbstractCommonAsyncFacadeFactoryManagerImpl, but that has nothing to do with Java. It can be argued that sometimes these long names make sense; it can also be argued that they are as complex because the underlying abstraction is unnecessarily complicated. Either way, it is a design decision taken per library, and nothing that the language or the SDK imposes per se. It is common to see overengineered stuff, but Java in no way pushes you in that direction – things can be done in a simple way in any language. You could certainly have AbstractCommonAsyncFacadeFactoryManagerImpl in Ruby; there just wasn't a stupid architect who thought it a good idea and who uses Ruby. If "big, serious, heavy" companies were using Ruby, I bet we'd see the same.

Enterprise Java frameworks are bloatware – that was certainly true back in 2002 when EJB 2 was in use (or "has been" – I'm too young to remember). And there are still some overengineered and bloated application servers that you don't really need. The fact that people are using them is their own problem. You can have a perfectly nice, readable, easy-to-configure-and-deploy web application with a framework like Spring, Guice or even CDI; with a web framework like Spring MVC, Play, Wicket, or even the latest JSF. Or even without any framework, if you feel you don't want to reuse the evolved-through-real-world-use frameworks. You can have an application using a message queue, a NoSQL and a SQL database, Amazon S3 file storage, and whatnot, without any accidental complexity. It's true that people still like to overengineer things and add a couple of layers where they are not needed, but the fact that frameworks give you this ability doesn't mean they make you use it. For example, here's an application that crawls government documents, indexes them, and provides a UI for searching and subscribing. Sounds sort of simple, and it is. It is written in Scala (in a very Java way), but uses only Java frameworks – Spring, Spring MVC, Lucene, Jackson, Guava. I guess you could start maintaining it pretty quickly, because it is straightforward.

You can't prototype quickly with Java – this is sort of related to the previous point: it is assumed that working with Java is slow, and that's why if you are a startup, or a weekend/hackathon project, you should use Ruby (with Rails), Python, Node.js or anything else that allows you to prototype quickly, to save & refresh, to iterate painlessly. Well, that is simply not true, and I don't even know where it comes from. Maybe from the fact that big companies with heavy processes use Java, so making a Java app is assumed to take more time. And save-and-refresh might look daunting to a beginner, but anyone who has programmed in Java (for the web) for a while has to know a way to automate that (otherwise he's a n00b, right?). I've summarized the possible approaches, and all of them are mostly OK. Another example (which may serve as an example for the above point as well) – I did this project for verifying secure password storage of websites within a weekend, plus one day to fix things in the evening, including the security research. Spring MVC, JSP templates, MongoDB. Again – quick and easy.

You can do nothing in Java without an IDE – of course you can: you can use Notepad++, vim, emacs. You will just lack refactoring, compile-on-save, call hierarchies. It would be just like programming in PHP or Python or JavaScript. The IDE vs. editor debate is a long one, but you can use Java without an IDE; it just doesn't make sense to do so, because you get so much more from the IDE than from a text editor plus command-line tools.

You may argue that I'm able to write nice and simple Java applications quickly because I have a lot of experience, I know precisely which tools to use (and which not), and I'm of some rare breed of developers with common sense. And while I'd be flattered by that, I am no different from the good Ruby developer or the Python guru you may be. It's just that Java is too widespread to have only good developers and tools. If as many people were using another language, probably the same amount of crappy code would be generated (and PHP is already way ahead, even with less usage). I'm the last person not to laugh at jokes about Java, and it certainly isn't the silver-bullet language, but I'd be happier if people had fewer misconceptions, whether from anecdotal evidence or from previous bad experience a la "I hate Java since my previous company, where the project was very bloated". Not only because I don't like people being biased, but because you may start your next project with a language that will not work, just because you've heard "Java is bad".
April 4, 2014
by Bozhidar Bozhanov
· 21,613 Views · 1 Like