The Latest Coding Topics

Scala: Pattern matching a pair inside map/filter
More than a few times recently we’ve wanted to use pattern matching on a collection of pairs/tuples and have run into trouble doing so. It’s easy enough if you don’t try and pattern match: > List(("Mark", 4), ("Charles", 5)).filter(pair => pair._2 == 4) res6: List[(java.lang.String, Int)] = List((Mark,4)) But if we try to use pattern matching: List(("Mark", 4), ("Charles", 5)).filter(case(name, number) => number == 4) We end up with this error: :1: error: illegal start of simple expression List(("Mark", 4), ("Charles", 5)).filter(case(name, number) => number == 4) It turns out that we can only use this if we pass the function to filter using {} instead of (): > List(("Mark", 4), ("Charles", 5)).filter { case(name, number) => number == 4 } res7: List[(java.lang.String, Int)] = List((Mark,4)) It was pointed out to me on the Scala IRC channel that the reason for the compilation failure has nothing to do with trying to do a pattern match inside a higher order function but that it’s not actually possible to use a case token without the {}. [23:16] mneedham: hey – trying to understand how pattern matching works inside higher order functions. Don’t quite get this code -> https://gist.github.com/1079110 any ideas? [23:17] dwins: mneedham: scala requires that “case” statements be inside curly braces. nothing to do with higher-order functions [23:17] mneedham: is there anywhere that’s documented or is that just a known thing? [23:18] mneedham: I expected it to work in normal parentheses [23:21] amacleod: mneedham, it’s documented. Whether it’s documented simply as “case statements need to be in curly braces” is another question The first line of Section 8.5 ‘Pattern Matching Anonymous Functions’ of the Scala language spec proves what I was told: Syntax: BlockExpr ::= ‘{’ CaseClauses ‘} It then goes into further detail about how the anonymous function gets converted into a pattern matching statement which is quite interesting reading. From http://www.markhneedham.com/blog/2011/07/12/scala-pattern-matching-a-pair-inside-mapfilter/
July 15, 2011
by Mark Needham
· 19,740 Views
Updated NATO Air Defence Solution Based on the NetBeans Platform
I am Angelo D'Agnano, and I currently work at the NATO Programming Centre as a Software Architect.
July 12, 2011
by Angelo D'Agnano
· 44,539 Views · 1 Like
Lucene's near-real-time search is fast!
Lucene's near-real-time (NRT) search feature, available since 2.9, enables an application to make index changes visible to a new searcher with fast turnaround time. In some cases, such as modern social/news sites (e.g., LinkedIn, Twitter, Facebook, Stack Overflow, Hacker News, DZone, etc.), fast turnaround time is a hard requirement. Fortunately, it's trivial to use. Just open your initial NRT reader, like this: // w is your IndexWriter IndexReader r = IndexReader.open(w, true); (That's the 3.1+ API; prior to that use w.getReader() instead). The returned reader behaves just like one opened with IndexReader.open: it exposes the point-in-time snapshot of the index as of when it was opened. Wrap it in an IndexSearcher and search away! Once you've made changes to the index, call r.reopen() and you'll get another NRT reader; just be sure to close the old one. What's special about the NRT reader is that it searches uncommitted changes from IndexWriter, enabling your application to decouple fast turnaround time from index durability on crash (i.e., how often commit is called), something not previously possible. Under the hood, when an NRT reader is opened, Lucene flushes indexed documents as a new segment, applies any buffered deletions to in-memory bit-sets, and then opens a new reader showing the changes. The reopen time is in proportion to how many changes you made since last reopening that reader. Lucene's approach is a nice compromise between immediate consistency, where changes are visible after each index change, and eventual consistency, where changes are visible "later" but you don't usually know exactly when. With NRT, your application has controlled consistency: you decide exactly when changes must become visible. Recently there have been some good improvements related to NRT: New default merge policy, TieredMergePolicy, which is able to select more efficient non-contiguous merges, and favors segments with more deletions. NRTCachingDirectory takes load off the IO system by caching small segments in RAM (LUCENE-3092). When you open an NRT reader you can now optionally specify that deletions do not need to be applied, making reopen faster for those cases that can tolerate temporarily seeing deleted documents returned, or have some other means of filtering them out (LUCENE-2900). Segments that are 100% deleted are now dropped instead of inefficiently merged (LUCENE-2010). How fast is NRT search? I created a simple performance test to answer this. I first built a starting index by indexing all of Wikipedia's content (25 GB plain text), broken into 1 KB sized documents. Using this index, the test then reindexes all the documents again, this time at a fixed rate of 1 MB/second plain text. This is a very fast rate compared to the typical NRT application; for example, it's almost twice as fast as Twitter's recent peak during this year's superbowl (4,064 tweets/second), assuming every tweet is 140 bytes, and assuming Twitter indexed all tweets on a single shard. The test uses updateDocument, replacing documents by randomly selected ID, so that Lucene is forced to apply deletes across all segments. In addition, 8 search threads run a fixed TermQuery at the same time. Finally, the NRT reader is reopened once per second. I ran the test on modern hardware, a 24 core machine (dual x5680 Xeon CPUs) with an OCZ Vertex 3 240 GB SSD, using Oracle's 64 bit Java 1.6.0_21 and Linux Fedora 13. I gave Java a 2 GB max heap, and used MMapDirectory. 
The test ran for 6 hours 25 minutes, since that's how long it takes to re-index all of Wikipedia at a limited rate of 1 MB/sec; here's the resulting QPS and NRT reopen delay (milliseconds) over that time: The search QPS is green and the time to reopen each reader (NRT reopen delay in milliseconds) is blue; the graph is an interactive Dygraph, so if you click through above, you can then zoom in to any interesting region by clicking and dragging. You can also apply smoothing by entering the size of the window into the text box in the bottom left part of the graph. Search QPS dropped substantially with time. While annoying, this is expected, because of how deletions work in Lucene: documents are merely marked as deleted and thus are still visited but then filtered out, during searching. They are only truly deleted when the segments are merged. TermQuery is a worst-case query; harder queries, such as BooleanQuery, should see less slowdown from deleted, but not reclaimed, documents. Since the starting index had no deletions, and then picked up deletions over time, the QPS dropped. It looks like TieredMergePolicy should perhaps be even more aggressive in targeting segments with deletions; however, finally around 5:40 a very large merge (reclaiming many deletions) was kicked off. Once it finished the QPS recovered somewhat. Note that a real NRT application with deletions would see a more stable QPS since the index in "steady state" would always have some number of deletions in it; starting from a fresh index with no deletions is not typical. Reopen delay during merging The reopen delay is mostly around 55-60 milliseconds (mean is 57.0), which is very fast (i.e., only 5.7% "duty cycle" of the every 1.0 second reopen rate). There are random single spikes, which is caused by Java running a full GC cycle. However, large merges can slow down the reopen delay (once around 1:14, again at 3:34, and then the very large merge starting at 5:40). Many small merges (up to a few 100s of MB) were done but don't seem to impact reopen delay. Large merges have been a challenge in Lucene for some time, also causing trouble for ongoing searching. I'm not yet sure why large merges so adversely impact reopen time; there are several possibilities. It could be simple IO contention: a merge keeps the IO system very busy reading and writing many bytes, thus interfering with any IO required during reopen. However, if that were the case, NRTCachingDirectory (used by the test) should have prevented it, but didn't. It's also possible that the OS is [poorly] choosing to evict important process pages, such as the terms index, in favor of IO caching, causing the term lookups required when applying deletes to hit page faults; however, this also shouldn't be happening in my test since I've set Linux's swappiness to 0. Yet another possibility is Linux's write cache becomes temporarily too full, thus stalling all IO in the process until it clears; in this case perhaps tuning some of Linux's pdflush tunables could help, although I'd much rather find a Lucene-only solution so this problem can be fixed without users having to tweak such advanced OS tunables, even swappiness. Fortunately, we have an active Google Summer of Code student, Varun Thacker, working on enabling Directory implementations to pass appropriate flags to the OS when opening files for merging (LUCENE-2793 and LUCENE-2795). 
From past testing I know that passing O_DIRECT can prevent merges from evicting hot pages, so it's possible this will fix our slow reopen time as well since it bypasses the write cache. Finally, it's always possible other OSs do a better job managing the buffer cache, and wouldn't see such reopen delays during large merges. This issue is still a mystery, as there are many possibilities, but we'll eventually get to the bottom of it. It could be we should simply add our own IO throttling, so we can control net MB/sec read and written by merging activity. This would make a nice addition to Lucene! Except for the slowdown during merging, the performance of NRT is impressive. Most applications will have a required indexing rate far below 1 MB/sec per shard, and for most applications reopening once per second is fast enough. While there are exciting ideas to bring true real-time search to Lucene, by directly searching IndexWriter's RAM buffer as Michael Busch has implemented at Twitter with some cool custom extensions to Lucene, I doubt even the most demanding social apps actually truly need better performance than we see today with NRT. NIOFSDirectory vs MMapDirectory Out of curiosity, I ran the exact same test as above, but this time with NIOFSDirectory instead of MMapDirectory: There are some interesting differences. The search QPS is substantially slower -- starting at 107 QPS vs 151, though part of this could easily be from getting different compilation out of hotspot. For some reason TermQuery, in particular, has high variance from one JVM instance to another. The mean reopen time is slower: 67.7 milliseconds vs 57.0, and the reopen time seems more affected by the number of segments in the index (this is the saw-tooth pattern in the graph, matching when minor merges occur). The takeaway message seems clear: on Linux, use MMapDirectory not NIOFSDirectory! Optimizing your NRT turnaround time My test was just one datapoint, at a fixed fast reopen period (once per second) and at a high indexing rate (1 MB/sec plain text). You should test specifically for your use-case what reopen rate works best. Generally, the more frequently you reopen the faster the turnaround time will be, since fewer changes need to be applied; however, frequent reopening will reduce the maximum indexing rate. Most apps have relatively low required indexing rates compared to what Lucene can handle and can thus pick a reopen rate to suit the application's turnaround time requirements. There are also some simple steps you can take to reduce the turnaround time: Store the index on a fast IO system, ideally a modern SSD. Install a merged segment warmer (see IndexWriter.setMergedSegmentWarmer). This warmer is invoked by IndexWriter to warm up a newly merged segment without blocking the reopen of a new NRT reader. If your application uses Lucene's FieldCache or has its own caches, this is important as otherwise that warming cost will be spent on the first query to hit the new reader. Use only as many indexing threads as needed to achieve your required indexing rate; often 1 thread suffices. The fewer threads used for indexing, the faster the flushing, and the less merging (on trunk). If you are using Lucene's trunk, and your changes include deleting or updating prior documents, then use the Pulsing codec for your id field since this gives faster lookup performance which will make your reopen faster. Use the new NRTCachingDirectory, which buffers small segments in RAM to take load off the IO system (LUCENE-3092). 
Pass false for applyDeletes when opening an NRT reader, if your application can tolerate seeing deleted docs from the returned reader. While it's not clear that thread priorities actually work correctly (see this Google Tech Talk), you should still set your thread priorities properly: the thread reopening your readers should be highest; next should be your indexing threads; and finally lowest should be all searching threads. If the machine becomes saturated, ideally only the search threads should take the hit. Happy near-real-time searching!
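For readers who want the skeleton in one place, here is a minimal sketch of the open/reopen cycle described above, against the Lucene 3.x API the article targets. The writer, the "body" field and the query term are illustrative assumptions, not code from the article.

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

public class NrtSketch {

    // Open the first NRT reader from an existing writer, search, then reopen once.
    static void searchOnce(IndexWriter writer) throws Exception {
        IndexReader reader = IndexReader.open(writer, true);   // 3.1+ API; true = apply deletes
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs hits = searcher.search(new TermQuery(new Term("body", "lucene")), 10);

        // ... index or update more documents with the writer ...

        // Reopen to see the uncommitted changes; always close the reader you replace.
        IndexReader newReader = reader.reopen();
        if (newReader != reader) {
            reader.close();
            reader = newReader;
            searcher = new IndexSearcher(reader);
        }
    }
}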
July 11, 2011
by Michael McCandless
· 18,658 Views
Embedded Tomcat, The Minimal Version
Tomcat 7 has been improved a lot and along with everything else that it brings, a very nice feature is provided - an API for embedding Tomcat into your application. The API was provided in earlier versions of Tomcat but it was quite cumbersome to use. To to start the embedded version of Tomcat one may need to build the required JARs. svn co https://svn.apache.org/repos/asf/tomcat/trunk tomcat cd tomcat ant embed-jars ls -l output/embed total 5092 -rw-r--r-- 1 anton None 56802 2011-03-06 17:09 LICENSE -rw-r--r-- 1 anton None 1194 2011-03-06 17:09 NOTICE -rw-r--r-- 1 anton None 1690519 2011-03-06 17:09 ecj-3.6.jar -rw-r--r-- 1 anton None 234625 2011-03-06 17:09 tomcat-dbcp.jar -rw-r--r-- 1 anton None 2402517 2011-03-06 17:09 tomcat-embed-core.jar -rw-r--r-- 1 anton None 781989 2011-03-06 17:09 tomcat-embed-jasper.jar -rw-r--r-- 1 anton None 34106 2011-03-06 17:09 tomcat-embed-logging-juli.jar The following snippet demonstrates the embedded Tomcat usage with a deployed servlet instance. import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.File; import java.io.IOException; import java.io.Writer; public class Main { public static void main(String[] args) throws LifecycleException, InterruptedException, ServletException { Tomcat tomcat = new Tomcat(); tomcat.setPort(8080); Context ctx = tomcat.addContext("/", new File(".").getAbsolutePath()); Tomcat.addServlet(ctx, "hello", new HttpServlet() { protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { Writer w = resp.getWriter(); w.write("Hello, World!"); w.flush(); } }); ctx.addServletMapping("/*", "hello"); tomcat.start(); tomcat.getServer().await(); } } The only two JARs required are tomcat-embed-core.jar and tomcat-embed-logging-juli.jar. It means that there will be no JSP support and pooling will also be disabled. But that's enough to start a servlet and in most cases that is what you probably need. From http://arhipov.blogspot.com/2011/03/embedded-tomcat-minimal-version.html
July 9, 2011
by Anton Arhipov
· 67,559 Views · 1 Like
SEVERE: Error in xpath:java.lang.RuntimeException: solrconfig.xml missing luceneMatchVersion
One of the things that changed from Solr 1.4.1 to 1.5+ was the introduction of a parameter that tells Solr / Lucene which compatibility version its index files should be created and used in. Solr now refuses to start if you do not provide this setting (if you're upgrading a previous installation from 1.4.1 or earlier). The fix isn't entirely straightforward, and you'll probably have to recreate your index files if you're just arriving at the scene with Solr / Lucene 3.2 and 4.0. Solr 3.0 (1.5) might be able to upgrade the files from the 2.9 version, but if you're jumping from Lucene 2.9 to 4.0, the easiest solution seems to be to delete the current index and reindex (set up replication, disable replication from the master, query the slave while reindexing the master, etc., and you'll have no downtime while doing this!). You'll also need to add a <luceneMatchVersion>LUCENE_CURRENT</luceneMatchVersion> element to the <config> section of your solrconfig.xml. Other valid values are LUCENE_30, LUCENE_31, LUCENE_32 and LUCENE_40. These values represent specific versions of the index structure, while LUCENE_CURRENT uses the version that matches the particular release of Lucene you're running. The index format will be upgraded automagically between most releases, so you'll probably be fine using LUCENE_CURRENT. If, however, you are trying to load index files that are more than one version older, you may have to use one of the other values.
July 9, 2011
by Mats Lindh
· 10,073 Views
Is Object Serialization Evil?
In my daily work, I use both an RDBMS and MarkLogic, an XML database. MarkLogic can be considered akin to the newer NoSQL databases, but it has the added structure of XML and standard languages in XQuery and XPath. The NoSQL databases are typically storing documents or key-value pairs, and some other things in between. Given that any datastore will be searched at some point, you will always care how the data is actually stored or whether there is some way to query it easily. Once you start thinking about the problem, you quickly generalize to the “how do I persist any type of data” question. However, my focus is not going to be the comparison of the various data stores, but the comparison of how data is stored. More specifically, I want to show the object serialization, mainly the Java built in method, as a data persistence format is evil. Given what you normally read on this blog, this may seem like an oddly timed post, but I have run into serialization issues lately in some production code and Mark Needham recently wrote an interesting post about this as well. Coincidentally, Mark is also working with MarkLogic, and there is an interesting item in his post: The advantage of doing things this way [using lightweight wrappers] is that it means we have less code to write than we would with the serialisation/deserialisation approach although it does mean that we’re strongly coupled to the data format that our storage mechanism uses. However, since this is one bit of the architecture which is not going to change it seems to makes sense to accept the leakage of that layer. The interesting part of this is that he has accepted using the data format of the storage mechanism, XML in MarkLogic in this case. Why is this interesting? First, it is a move away from the ORM technologies that try to hide the complexities of converting data into objects in the RDBMS world. Also, this is a glimpse into the types of issues that could arise from non-RDBMS storage choices as well as how to persist objects in general. So, an RDBMS is typically used to map object attributes to a table and columns. The mapping is mostly straightforward with some defined relationship for child objects and collections. This is a well-known area, called Object-Relational Mapping (ORM), and several open source and commercial options exist. In this scenario, object attributes are stored in a similar datatype, meaning a String is stored as a varchar and an int is stored as an integer. But, what happens when you move away from an RDBMS for data persistence? If you look at Java and its session objects, pure object serialization is used. Assuming that an application session is fairly short-lived, meaning at most a few hours, object serialization is simple, well supported and built into the Java concept of a session. However, when the data persistence is over a longer period of time, possibly days or weeks, and you have to worry about new releases of the application, serialization quickly becomes evil. As any good Java developer knows, if you plan to serialize an object, even in a session, you need a real serialization ID (serialVersionUID), not just a 1L, and you need to implement the Serializable interface. However, most developers do not know the real rules behind the Java deserialization process. If your object has changed, more than just adding simple fields to the object, it is possible that Java cannot deserialize the object correctly even if the serialization ID has not changed. 
Suddenly, you cannot retrieve your data any longer, which is inherently bad. Now, may developers reading this may say that they would never write code that would have this problem. That may be true, but what about a library that you use or some other developer no longer employed by your company? Can you guarantee that this problem will never happen? The only way to guarantee that is to use a different serialization method. What options do we have? Obviously, there are the NoSQL datastores but the actual object format is the relevant question not which solution to choose. Besides the obvious serialized object, some NoSQL datastores use JSON to store objects, MarkLogic uses XML and there are others that store just key-value pairs. Key-value pairs are typically a mapping of a text key to a value that is a serialized object, either a binary or textual format. So, that leaves us with XML, JSON and other textual formats. One of the benefits of a structured format like XML or JSON is that they can be made searchable and provide some level of context. I have talked about data formats before, so I won’t go into a comparison again. However, do these types of formats avoid the issues that native Java object serialization has? This is really dependent upon what library you are using for serialization. Some libraries will deserialize an object without any issues regardless of whether the object field list has changed. Other libraries could have problems depending upon whether a serialized field exists in the target object, or there might not be solid support for collections (though that is doubtful at this point). Given that even structured formats could have serialization issues, is the only safe path hand-coded mappings like those used by ORM tools? Some JSON and XML serialization tools use the same mapping methods as the ORM tools in order to avoid these problems. However, once you define these mappings, you are explicitly stating how an object gets translated. This explicit definition will require maintenance, but that is definitely cleaner than trying to trace down a serialization defect in some random stack trace. So is implicit object serialization really worth the potential headaches? Or should we just consider it evil and never speak of it again? From http://regulargeek.com/2011/07/06/is-object-serialization-evil/
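As a concrete illustration of the failure mode described above (a hypothetical class, not code from the article), here is the kind of change that breaks long-lived serialized data even though the serialization ID never moves:

import java.io.Serializable;

// Hypothetical class showing the problem described above: version 2 changes a
// field's type, so bytes written by version 1 can no longer be read back
// (java.io.InvalidClassException: incompatible types for field loginCount),
// even though serialVersionUID is identical in both versions.
public class UserProfile implements Serializable {

    // A real, stable ID (not just 1L) at least documents the intended format.
    private static final long serialVersionUID = 20110707L;

    private String name;

    // Version 1 declared this field as:  private int loginCount;
    // Version 2 "just" widens it, silently breaking previously stored data:
    private long loginCount;
}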
July 7, 2011
by Robert Diana
· 19,641 Views · 1 Like
Programmatically Restart a Java Application
Today I'll talk about a famous problem : restarting a Java application. It is especially useful when changing the language of a GUI application, so that we need to restart it to reload the internationalized messages in the new language. Some look and feel also require to relaunch the application to be properly applied. A quick Google search give plenty answers using a simple : Runtime.getRuntime().exec("java -jar myApp.jar"); System.exit(0); This indeed basically works, but this answer that does not convince me for several reasons : 1) What about VM and program arguments ? (this one is secondary in fact, because can be solve it quite easily). 2) What if the main is not a jar (which is usually the case when launching from an IDE) ? 3) Most of all, what about the cleaning of the closing application ? For example if the application save some properties when closing, commit some stuffs etc. 4) We need to change the command line in the code source every time we change a parameter, the name of the jar, etc. Overall, something that works fine for some test, sandbox use, but not a generic and elegant way in my humble opinion. Ok, so my purpose here is to implement a method : public static void restartApplication(Runnable runBeforeRestart) throws IOException { ... } that could be wrapped in some jar library, and could be called, without any code modification, by any Java program, and by solving the 4 points raised previously. Let's start by looking at each point and find a way to answer them in an elegant way (let's say the most elegant way that I found). 1) How to get the program and VM arguments ? Pretty simple, by calling a : ManagementFactory.getRuntimeMXBean().getInputArguments(); Concerning the program arguments, the Java property sun.java.command we'll give us both the main class (or jar) and the program arguments, and both will be useful. String[] mainCommand = System.getProperty("sun.java.command").split(" "); 2) First retrieve the java bin executable given by the java.home property : String java = System.getProperty("java.home") + "/bin/java"; The simple case is when the application is launched from a jar. The jar name is given by a mainCommand[0], and it is in the current path, so we just have to append the application parameters mainCommand[1..n] with a -jar to get the command to execute : String cmd = java + vmArgsOneLine + "-jar " + new File(mainCommand[0]).getPath() + mainCommand[1..n]; We'll suppose here that the Manifest of the jar is well done, and we don't need to specify the main nor the classpath. Second case : when the application is launched from a class. In this case, we'll specify the class path and the main class : String cmd = java + vmArgsOneLine + -cp \"" + System.getProperty("java.class.path") + "\" " + mainCommand[0] + mainCommand[1..n]; 3) Third point, cleaning the old application before launching the new one. To do such a trick, we'll just execute the Runtime.getRuntime().exec(cmd) in a shutdown hook. This way, we'll be sure that everything will be properly clean up before creating the new application instance. Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { ... } }); Run the runBeforeRestart that contains some custom code that we want to be executed before restarting the application : if(runBeforeRestart != null) { runBeforeRestart.run(); } And finally, call the System.exit(0);. And we're done. Here is our generic method : /** * Sun property pointing the main class and its arguments. 
* Might not be defined on non Hotspot VM implementations. */ public static final String SUN_JAVA_COMMAND = "sun.java.command"; /** * Restart the current Java application * @param runBeforeRestart some custom code to be run before restarting * @throws IOException */ public static void restartApplication(Runnable runBeforeRestart) throws IOException { try { // java binary String java = System.getProperty("java.home") + "/bin/java"; // vm arguments List vmArguments = ManagementFactory.getRuntimeMXBean().getInputArguments(); StringBuffer vmArgsOneLine = new StringBuffer(); for (String arg : vmArguments) { // if it's the agent argument : we ignore it otherwise the // address of the old application and the new one will be in conflict if (!arg.contains("-agentlib")) { vmArgsOneLine.append(arg); vmArgsOneLine.append(" "); } } // init the command to execute, add the vm args final StringBuffer cmd = new StringBuffer("\"" + java + "\" " + vmArgsOneLine); // program main and program arguments String[] mainCommand = System.getProperty(SUN_JAVA_COMMAND).split(" "); // program main is a jar if (mainCommand[0].endsWith(".jar")) { // if it's a jar, add -jar mainJar cmd.append("-jar " + new File(mainCommand[0]).getPath()); } else { // else it's a .class, add the classpath and mainClass cmd.append("-cp \"" + System.getProperty("java.class.path") + "\" " + mainCommand[0]); } // finally add program arguments for (int i = 1; i < mainCommand.length; i++) { cmd.append(" "); cmd.append(mainCommand[i]); } // execute the command in a shutdown hook, to be sure that all the // resources have been disposed before restarting the application Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { try { Runtime.getRuntime().exec(cmd.toString()); } catch (IOException e) { e.printStackTrace(); } } }); // execute some custom code before restarting if (runBeforeRestart!= null) { runBeforeRestart.run(); } // exit System.exit(0); } catch (Exception e) { // something went wrong throw new IOException("Error while trying to restart the application", e); } } From : http://lewisleo.blogspot.jp/2012/08/programmatically-restart-java.html
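A possible call site for the method above, restarting after the user switches the UI language; saveUserPreferences() is a stand-in for whatever must be persisted before exiting and is not part of the article's code.

// Hypothetical caller of the restartApplication(Runnable) method shown above.
restartApplication(new Runnable() {
    @Override
    public void run() {
        saveUserPreferences();   // assumed helper, not defined in the article
    }
});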
July 6, 2011
by Leo Lewis
· 134,197 Views · 2 Likes
CDI 1.0 vs. Spring 3.1 Feature Comparison
This blog article provides a comparison matrix between Spring IoC 3.1 and CDI implementation JBoss Weld 1.1.
July 6, 2011
by Niklas Schlimm
· 31,892 Views
SSL your Tomcat 7
One thing I’m doing very often and always searching on the Internet is how to obtain a self-signed SSL certificate and install it in both my client browsers and my local Tomcat. Sure enough there are enough resources available online, but since it’s a bore to go looking for the right one (yes, some do not work), I figured let’s do it right once and document it so that it will always be there. Create the keystore Keystores are, guess what, files where your store your keys. In our case, we need to create one that will be used by both Tomcat and for the certificat generation. The command-line is: keytool -genkey -keyalg RSA -alias blog.frankel.ch -keystore keystore.jks -validity 999 -keysize 2048 The parameters are as follow: Parameter Value Description -genkey Requests the keytool to generate a key. For all provided features, type keytool -help -keyalg RSA Wanted algorithm. The specified algorithm must be made available by one of the registered cryptographic service providers -keysize 2048 Key size -validity 999 Validity in days -alias blog.frankel.ch Entry in the keystore -keystore keystore.jks Keystore. If the keystore doesn’t exist yet, it will be created and you’ll be prompted for a new password; otherwise, you’ll prompted for the current store’s password Configure Tomcat Tomcat’s SSL configuration is done in the ${TOMCAT_HOME}/conf/server.xml file. Locate the following snippet: Now, uncomment it and add the following attributes: keystoreFile="/path/to/your/keystore.jks" keystorePass="Your password" Note: if the store only contains a single entry, fine; otherwise, you’ll need to configure the entry’s name with keyAlias="blog.frankel.ch" Starting Tomcat and browsing to https://localhost:8443/ will show you Tomcat’s friendly face. Additionnaly, the logs will display: 28 june 2011 20:25:14 org.apache.coyote.AbstractProtocolHandler init INFO: Initializing ProtocolHandler ["http-bio-8443"] Export the certificate The certificate is created from our previous entry in the keystore. The command-line is: keytool -export -alias blog.frankel.ch -file blog.frankel.ch.crt -keystore keystore.jks Even simpler, we are challenged for the keystore’s password and that’s all. The newly created certificate is now available in the filesystem. We just have to distribute it to all browsers that will connect to Tomcat in order to bypass security warnings (since it’s a self-signed certificate). Spread the word The last step is to put the self-signed certificate in the list of trusted certificates in Firefox. For a quick and dirty way, import it in your own Firefox (Options -> Advanced -> Show certificates -> Import…) and distribute the %USER_HOME%"/Application Data/Mozilla/Firefox/Profiles/xzy.default/cert8.db file. It has to be copied to the %FIREFOX_HOME%/defaults/profile folder so that every single profile on the target machine is updated. Note that this way of doing will lose previously individually accepted certificates (in short, we’re overwriting the whole certificate database). For a more industrial process, look at the next section. To go further: The Most Common Java Keytool Keystore Commands Tomcat 7 SSL Configuration HOW-TO Where can I download certutil.exe for Windows From http://blog.frankel.ch/ssl-your-tomcat-7
July 4, 2011
by Nicolas Fränkel
· 42,834 Views
Creating a WebSocket-Chat-Application with Jetty and Glassfish
This article describes how to create a simple HTML5 chat application using WebSockets to connect to a Java back-end.
July 1, 2011
by Andy Moncsek
· 153,842 Views · 2 Likes
Setting Up SSL on Tomcat in 5 minutes
This tutorial will walk you through how to configure SSL (https://localhost:8443 access) on Tomcat in 5 minutes. For this tutorial you will need: Java SDK (used version 6 for this tutorial) Tomcat (used version 7 for this tutorial) The set up consists in 3 basic steps: Create a keystore file using Java Configure Tomcat to use the keystore Test it (Bonus ) Configure your app to work with SSL (access through https://localhost:8443/yourApp) 1 – Creating a Keystore file using Java Fisrt, open the terminal on your computer and type: Windows: cd %JAVA_HOME%/bin Linux or Mac OS: cd $JAVA_HOME/bin The $JAVA_HOME on Mac is located on “/System/Library/Frameworks/JavaVM.framework/Versions/{your java version}/Home/” You will change the current directory to the directory Java is installed on your computer. Inside the Java Home directory, cd to the bin folder. Inside the bin folder there is a file named keytool. This guy is responsible for generating the keystore file for us. Next, type on the terminal: keytool -genkey -alias tomcat -keyalg RSA When you type the command above, it will ask you some questions. First, it will ask you to create a password (My password is “password“): loiane:bin loiane$ keytool -genkey -alias tomcat -keyalg RSA Enter keystore password: password Re-enter new password: password What is your first and last name? [Unknown]: Loiane Groner What is the name of your organizational unit? [Unknown]: home What is the name of your organization? [Unknown]: home What is the name of your City or Locality? [Unknown]: Sao Paulo What is the name of your State or Province? [Unknown]: SP What is the two-letter country code for this unit? [Unknown]: BR Is CN=Loiane Groner, OU=home, O=home, L=Sao Paulo, ST=SP, C=BR correct? [no]: yes Enter key password for (RETURN if same as keystore password): password Re-enter new password: password It will create a .keystore file on your user home directory. On Windows, it will be on: C:Documents and Settings[username]; on Mac it will be on /Users/[username] and on Linux will be on /home/[username]. 2 – Configuring Tomcat for using the keystore file – SSL config Open your Tomcat installation directory and open the conf folder. Inside this folder, you will find the server.xml file. Open it. Find the following declaration: Uncomment it and modify it to look like the following: Connector SSLEnabled="true" acceptCount="100" clientAuth="false" disableUploadTimeout="true" enableLookups="false" maxThreads="25" port="8443" keystoreFile="/Users/loiane/.keystore" keystorePass="password" protocol="org.apache.coyote.http11.Http11NioProtocol" scheme="https" secure="true" sslProtocol="TLS" /> Note we add the keystoreFile, keystorePass and changed the protocol declarations. 3 – Let’s test it! Start tomcat service and try to access https://localhost:8443. You will see Tomcat’s local home page. Note if you try to access the default 8080 port it will be working too: http://localhost:8080 4 – BONUS - Configuring your app to work with SSL (access through https://localhost:8443/yourApp) To force your web application to work with SSL, you simply need to add the following code to your web.xml file (before web-app tag ends): securedapp /* CONFIDENTIAL The url pattern is set to /* so any page/resource from your application is secure (it can be only accessed with https). The transport-guarantee tag is set to CONFIDENTIAL to make sure your app will work on SSL. If you want to turn off the SSL, you don’t need to delete the code above from web.xml, simply change CONFIDENTIAL to NONE. 
Reference: http://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html (this tutorial is a little confusing, that is why I decided to write another one my own). Happy Coding! From http://loianegroner.com/2011/06/setting-up-ssl-on-tomcat-in-5-minutes-httpslocalhost8443/
July 1, 2011
by Loiane Groner
· 368,849 Views · 13 Likes
How to POST and GET JSON between EXTJS and SpringMVC3
After a month of evaluating frameworks and tools, I chose ExtJS for the UI and Spring/Spring MVC for the business layer of my pet project. With ExtJS we can send data to the server through form submits, as request parameters, or in JSON format through Ajax requests. ExtJS uses the JSON format in many situations to hold data, so using JSON as the data exchange format between ExtJS and Spring keeps things consistent. The following code snippets explain how we can use ExtJS and Spring MVC 3 to exchange data in JSON format. 1. Register MappingJacksonHttpMessageConverter in dispatcher-servlet.xml and add the jackson-json jar(s) to WEB-INF/lib. 2. Trigger the POST request from the ExtJS script as follows: Ext.Ajax.request({ url : 'doSomething.htm', method: 'POST', headers: { 'Content-Type': 'application/json' }, params : { "test" : "testParam" }, jsonData: { "username" : "admin", "emailId" : "admin@sivalabs.com" }, success: function (response) { var jsonResp = Ext.util.JSON.decode(response.responseText); Ext.Msg.alert("Info","UserName from Server : "+jsonResp.username); }, failure: function (response) { var jsonResp = Ext.util.JSON.decode(response.responseText); Ext.Msg.alert("Error",jsonResp.error); } }); 3. Write a Spring controller to handle the "/doSomething.htm" request. @Controller public class DataController { @RequestMapping(value = "/doSomething", method = RequestMethod.POST) @ResponseBody public User handle(@RequestBody User user) throws IOException { System.out.println("Username From Client : "+user.getUsername()); System.out.println("EmailId from Client : "+user.getEmailId()); user.setUsername("SivaPrasadReddy"); user.setEmailId("siva@sivalabs.com"); return user; } } Any other better approaches? From http://sivalabs.blogspot.com/2011/06/how-to-post-and-get-json-between-extjs.html
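The controller above assumes a User bean with username and emailId properties; a minimal sketch of that class (it is not shown in the original post) might look like this:

// Minimal User bean assumed by the controller above. Jackson needs the implicit
// no-arg constructor and the getters/setters to bind the incoming JSON.
public class User {

    private String username;
    private String emailId;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getEmailId() { return emailId; }
    public void setEmailId(String emailId) { this.emailId = emailId; }
}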
June 29, 2011
by Siva Prasad Reddy Katamreddy
· 55,447 Views
GET/POST Parameters in Node.js
We look at how to handle GET and POST parameters in your Node.js-based application. It's so easy you won't believe it!
June 28, 2011
by Snippets Manager
· 61,849 Views · 1 Like
How to Tame Java GC Pauses? Surviving 16GiB Heap and Greater
Learn how to survive 16GiB and greater heaps, and control Java GC pauses.
June 28, 2011
by Alexey Ragozin
· 161,258 Views
Convert XML To JSON Using Ruby And ActiveSupport
#!/usr/bin/env ruby
# Convert XML to JSON using Ruby and ActiveSupport
require 'rubygems'
require 'active_support/core_ext'
require 'json'

xml = File.open(ARGV.first).read
json = Hash.from_xml(xml).to_json
File.open(ARGV.last, 'w+').write json
June 27, 2011
by Snippets Manager
· 8,258 Views · 1 Like
TechTip: Use of setLenient method on SimpleDateFormat
Sometimes when you are parsing a date string against a pattern(such as MM/dd/yyyy) using java.text.SimpleDateFormat, strange things might happen (for unknown developers) if your date string is dynamic content entered by a user in some input field on the user interface and if it is not entered in the specified format. The parse method in the SimpleDateFormat parses the date string that is in the incorrect format and returns your date object instead of throwing a java.text.ParseException. However, the date returned is not what you expect. The below code-snippet shows you this behaviour. package com.starwood.system.util; import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.Date; public class DateSample { public static void main(String args[]){ SimpleDateFormat sdf = new SimpleDateFormat () ; sdf.applyPattern("MM/dd/yyyy") ; try { Date d = sdf.parse("2011/02/06") ; System.out.println(d) ; } catch (ParseException e) { e.printStackTrace(); } } } Output: Thu Jul 02 00:00:00 MST 173 See the output, that is a date back in the year 173. To avoid this problem, call the setLenient (false) on SimpleDateFormat instance. That will make the parse method throw ParseException when the given input string is not in the specified format. Here is the modified code-snippet. package com.starwood.system.util; import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.Date; public class DateSample { public static void main(String args[]){ SimpleDateFormat sdf = new SimpleDateFormat () ; sdf.applyPattern("MM/dd/yyyy") ; sdf.setLenient(false) ; try { Date d = sdf.parse("2011/02/06") ; System.out.println(d) ; } catch (ParseException e) { System.out.println (e.getMessage()) ; } } } Output: Unparseable date: "2011/02/06" http://accordess.com/wpblog/2011/06/02/techtip-use-of-setlenient-method-on-simpledateformat
June 27, 2011
by Upendra Chintala
· 46,857 Views · 5 Likes
XML unmarshalling benchmark in Java: JAXB vs StAX vs Woodstox
towards the end of last week i started thinking how to deal with large amounts of xml data in a resource-friendly way.the main problem that i wanted to solve was how to process large xml files in chunks while at the same time providing upstream/downstream systems with some data to process. of course i've been using jaxb technology for few years now; the main advantage of using jaxb is the quick time-to-market; if one possesses an xml schema, there are tools out there to auto-generate the corresponding java domain model classes automatically (eclipse indigo, maven jaxb plugins in various sauces, ant tasks, to name a few). the jaxb api then offers a marshaller and an unmarshaller to write/read xml data, mapping the java domain model. when thinking of jaxb as solution for my problem i suddendlly realised that jaxb keeps the whole objectification of the xml schema in memory, so the obvious question was: "how would our infrastructure cope with large xml files (e.g. in my case with a number of elements > 100,000) if we were to use jaxb?". i could have simply produced a large xml file, then a client for it and find out about memory consumption. as one probably knows there are mainly two approaches to processing xml data in java: dom and sax. with dom, the xml document is represented into memory as a tree; dom is useful if one needs cherry-pick access to the tree nodes or if one needs to write brief xml documents. on the other side of the spectrum there is sax, an event-driven technology, where the whole document is parsed one xml element at the time, and for each xml significative event, callbacks are "pushed" to a java client which then deals with them (such as start_document, start_element, end_element, etc). since sax does not bring the whole document into memory but it applies a cursor like approach to xml processing it does not consume huge amounts of memory. the drawback with sax is that it processes the whole document start to finish; this might not be necessarily what one wants for large xml documents. in my scenario, for instance, i'd like to be able to pass to downstream systems xml elements as they are available, but at the same time maybe i'd like to pass only 100 elements at the time, implementing some sort of pagination solution. dom seems too demanding from a memory-consumption point of view, whereas sax seems to coarse-grained for my needs. i remembered reading something about stax, a java technology which offered a middle ground between the capability to pull xml elements (as opposed to pushing xml elements, e.g. sax) while being ram-friendly. i then looked into the technology and decided that stax was probably the compromise i was looking for; however i wanted to keep the easy programming model offered by jaxb, so i really needed a combination of the two. while investigating stax, i came across woodstox; this open source project promises to be a faster xml parser than many othrers, so i decided to include it in my benchmark as well. i now had all elements to create a benchmark to give me memory consumption and processing speed metrics when processing large xml documents. the benchmark plan in order to create a benchmark i needed to do the following: create an xml schema which defined my domain model. 
this would be the input for jaxb to create the java domain model create three large xml files representing the model, with 10,000 / 100,000 / 1,000,000 elements respectively have a pure jaxb client which would unmarshall the large xml files completely in memory have a stax/jaxb client which would combine the low-memory consumption of sax technologies with the ease of programming model offered by jaxb have a woodstox/jaxb client with the same characteristics of the stax/jaxb client (in few words i just wanted to change the underlying parser and see if i could obtain any performance boost) record both memory consumption and speed of processing (e.g. how quickly would each solution make xml chunks available in memory as jaxb domain model classes) make the results available graphically, since, as we know, one picture tells one thousands words. the domain model xml schema i decided for a relatively easy domain model, with xml elements representing people, with their names and addresses. i also wanted to record whether a person was active. using jaxb to create the java model i am a fan of maven and use it as my default tool to build systems. this is the pom i defined for this little benchmark: 4.0.0 uk.co.jemos.tests.xml large-xml-parser 1.0.0-snapshot jar large-xml-parser http://www.jemos.co.uk utf-8 org.apache.maven.plugins maven-compiler-plugin 2.3.2 1.6 1.6 org.jvnet.jaxb2.maven2 maven-jaxb2-plugin 0.7.5 generate ${basedir}/src/main/resources **/*.xsd true -enableintrospection -xtostring -xequals -xhashcode true true org.jvnet.jaxb2_commons jaxb2-basics 0.6.1 org.apache.maven.plugins maven-jar-plugin 2.3.1 true uk.co.jemos.tests.xml.xmlpullbenchmarker org.apache.maven.plugins maven-assembly-plugin 2.2 ${project.build.directory}/site/downloads src/main/assembly/project.xml src/main/assembly/bin.xml junit junit 4.5 test uk.co.jemos.podam podam 2.3.11.release commons-io commons-io 2.0.1 com.sun.xml.bind jaxb-impl 2.1.3 org.jvnet.jaxb2_commons jaxb2-basics-runtime 0.6.0 org.codehaus.woodstox stax2-api 3.0.3 just few things to notice about this pom.xml. i use java 6, since starting from version 6, java contains all the xml libraries for jaxb, dom, sax and stax. to auto-generate the domain model classes from the xsd schema, i used the excellent maven-jaxb2-plugin, which allows, amongst other things, to obtain pojos with tostring, equals and hashcode support. i have also declared the jar plugin, to create an executable jar for the benchmark and the assembly plugin to distribute an executable version of the benchmark. the code for the benchmark is attached to this post, so if you want to build it and run it yourself, just unzip the project file, open a command line and run: $ mvn clean install assembly:assembly this command will place *-bin.* files into the folder target/site/downloads. unzip the one of your preference and to run the benchmark use (-dcreate.xml=true will generate the xml files. don't pass it if you have these files already, e.g. after the first run): $ java -jar -dcreate.xml=true large-xml-parser-1.0.0-snapshot.jar creating the test data to create the test data, i used podam , a java tool to auto-fill pojos and javabeans with data. 
the code is as simple as: jaxbcontext context = jaxbcontext .newinstance("xml.integration.jemos.co.uk.large_file"); marshaller marshaller = context.createmarshaller(); marshaller.setproperty(marshaller.jaxb_formatted_output, boolean.true); marshaller.setproperty(marshaller.jaxb_encoding, "utf-8"); personstype personstype = new objectfactory().createpersonstype(); list persons = personstype.getperson(); podamfactory factory = new podamfactoryimpl(); for (int i = 0; i < nbrelements; i++) { persons.add(factory.manufacturepojo(persontype.class)); } jaxbelement towrite = new objectfactory() .createpersons(personstype); file file = new file(filename); bufferedoutputstream bos = new bufferedoutputstream( new fileoutputstream(file), 4096); try { marshaller.marshal(towrite, bos); bos.flush(); } finally { ioutils.closequietly(bos); } the xmlpullbenchmarker generates three large xml files under ~/xml-benchmark: large-person-10000.xml (approx 3m) large-person-100000.xml (approx 30m) large-person-1000000.xml (approx 300m) each file looks like the following: ult6yn0d7l u8djoutlk2 dxwlpow6x3 o4ggvximo7 io7kuz0xmz lmiy1uqkxs zhtukbtwti gbc7kex9tn kxmwnlprep 9bibs1m5gr hmtqpxjcpw bhpf1rrldm ydjjillyrw xgstdjcfjc [..etc] each file contains 10,000 / 100,000 / 1,000,000 elements. the running environments i tried the benchmarker on three different environments: ubuntu 10, 64-bit running as virtual machine on a windows 7 ultimate, with cpu i5, 750 @2.67ghz and 2.66ghz, 8gb ram of which 4gb dedicated to the vm. jvm: 1.6.0_25, hotspot windows 7 ultimate , hosting the above vm, therefore with same processor. jvm, 1.6.0_24, hotspot ubuntu 10, 32-bit , 3gb ram, dual core. jvm, 1.6.0_24, openjdk the xml unmarshalling to unmarshall the code i used three different strategies: pure jaxb stax + jaxb woodstox + jaxb pure jaxb unmarshalling the code which i used to unmarshall the large xml files using jaxb follows: private void readlargefilewithjaxb(file file, int nbrrecords) throws exception { jaxbcontext ucontext = jaxbcontext .newinstance("xml.integration.jemos.co.uk.large_file"); unmarshaller unmarshaller = ucontext.createunmarshaller(); bufferedinputstream bis = new bufferedinputstream(new fileinputstream( file)); long start = system.currenttimemillis(); long memstart = runtime.getruntime().freememory(); long memend = 0l; try { jaxbelement root = (jaxbelement) unmarshaller .unmarshal(bis); root.getvalue().getperson().size(); memend = runtime.getruntime().freememory(); long end = system.currenttimemillis(); log.info("jaxb (" + nbrrecords + "): - total memory used: " + (memstart - memend)); log.info("jaxb (" + nbrrecords + "): time taken in ms: " + (end - start)); } finally { ioutils.closequietly(bis); } } the code uses a one-liner to unmarshall each xml file: jaxbelement root = (jaxbelement) unmarshaller .unmarshal(bis); i also accessed the size of the underlying persontype collection to "touch" in memory data. btw, debugging the application showed that all 10,000 elements were indeed available in memory after this line of code. jaxb + stax with stax, i just had to use an xmlstreamreader, iterate through all elements, and pass each in turn to jaxb to unmarshall it into a persontype domain model object. 
the code follows: // set up a stax reader xmlinputfactory xmlif = xmlinputfactory.newinstance(); xmlstreamreader xmlr = xmlif .createxmlstreamreader(new filereader(file)); jaxbcontext ucontext = jaxbcontext.newinstance(persontype.class); unmarshaller unmarshaller = ucontext.createunmarshaller(); long start = system.currenttimemillis(); long memstart = runtime.getruntime().freememory(); long memend = 0l; try { xmlr.nexttag(); xmlr.require(xmlstreamconstants.start_element, null, "persons"); xmlr.nexttag(); while (xmlr.geteventtype() == xmlstreamconstants.start_element) { jaxbelement pt = unmarshaller.unmarshal(xmlr, persontype.class); if (xmlr.geteventtype() == xmlstreamconstants.characters) { xmlr.next(); } } memend = runtime.getruntime().freememory(); long end = system.currenttimemillis(); log.info("stax - (" + nbrrecords + "): - total memory used: " + (memstart - memend)); log.info("stax - (" + nbrrecords + "): time taken in ms: " + (end - start)); } finally { xmlr.close(); } } note that this time when creating the context, i had to specify that it was for the persontype object, and when invoking the jaxb unmarshalling i had to pass also the desired returned class type, with: jaxbelement pt = unmarshaller.unmarshal(xmlr, persontype.class); note that i don't to anything with the object, just create it, to keep the benchmark as truthful and possible by not introducing any unnecessary steps. jaxb + woodstox with woodstox, the approach is very similar to the one used with stax. in fact woodstox provides a stax2 compatible api, so all i had to do was to provide the correct factory and...bang! i had woodstox under the cover working. private void readlargexmlwithfasterstax(file file, int nbrrecords) throws factoryconfigurationerror, xmlstreamexception, filenotfoundexception, jaxbexception { // set up a woodstox reader xmlinputfactory xmlif = xmlinputfactory2.newinstance(); xmlstreamreader xmlr = xmlif .createxmlstreamreader(new filereader(file)); jaxbcontext ucontext = jaxbcontext.newinstance(persontype.class); unmarshaller unmarshaller = ucontext.createunmarshaller(); long start = system.currenttimemillis(); long memstart = runtime.getruntime().freememory(); long memend = 0l; try { xmlr.nexttag(); xmlr.require(xmlstreamconstants.start_element, null, "persons"); xmlr.nexttag(); while (xmlr.geteventtype() == xmlstreamconstants.start_element) { jaxbelement pt = unmarshaller.unmarshal(xmlr, persontype.class); if (xmlr.geteventtype() == xmlstreamconstants.characters) { xmlr.next(); } } memend = runtime.getruntime().freememory(); long end = system.currenttimemillis(); log.info("woodstox - (" + nbrrecords + "): total memory used: " + (memstart - memend)); log.info("woodstox - (" + nbrrecords + "): time taken in ms: " + (end - start)); } finally { xmlr.close(); } } note the following line: xmlinputfactory xmlif = xmlinputfactory2.newinstance(); where i pass in a stax2 xmlinputfactory. this uses the woodstox implementation. 
the main loop once the files are in place (you obtain this by passing -dcreate.xml=true), the main performs the following: system.gc(); system.gc(); for (int i = 0; i < 10; i++) { main.readlargefilewithjaxb(new file(output_folder + file.separatorchar + "large-person-10000.xml"), 10000); main.readlargefilewithjaxb(new file(output_folder + file.separatorchar + "large-person-100000.xml"), 100000); main.readlargefilewithjaxb(new file(output_folder + file.separatorchar + "large-person-1000000.xml"), 1000000); main.readlargexmlwithstax(new file(output_folder + file.separatorchar + "large-person-10000.xml"), 10000); main.readlargexmlwithstax(new file(output_folder + file.separatorchar + "large-person-100000.xml"), 100000); main.readlargexmlwithstax(new file(output_folder + file.separatorchar + "large-person-1000000.xml"), 1000000); main.readlargexmlwithfasterstax(new file(output_folder + file.separatorchar + "large-person-10000.xml"), 10000); main.readlargexmlwithfasterstax(new file(output_folder + file.separatorchar + "large-person-100000.xml"), 100000); main.readlargexmlwithfasterstax(new file(output_folder + file.separatorchar + "large-person-1000000.xml"), 1000000); } it invites the gc to run, although as we know this is at the gc thread discretion. it then executes each strategy 10 times, to normalise ram and cpu consumption. the final data are then collected by running an average on the ten runs. the benchmark results for memory consumption here follow some diagrams which show memory consumption across the different running environments, when unmarshalling 10,000 / 100,000 / 1,000,000 files. you will probably notice that memory consumption for stax-related strategies often shows a negative value. this means that there was more free memory after unmarshalling all elements than there was at the beginning of the unmarshalling loop; this, in turn, suggests that the gc ran a lot more with stax than with jaxb. this is logical if one thinks about it; since with stax we don't keep all objects into memory there are more objects available for garbage collection. in this particular case i believe the persontype object created in the while loop gets eligible for gc and enters the young generation area and then it gets reclamed by the gc. this, however, should have a minimum impact on performance, since we know that claiming objects from the young generation space is done very efficiently. summary for 10,000 xml elements summary for 100,000 xml elements summary for 1,000,000 xml elements the benchmark results for processing speed results for 10,000 elements results for 100,000 elements results for 1,000,000 elements conclusions the results on all three different environments, although with some differences, all tell us the same story: if you are looking for performance (e.g. xml unmarshalling speed), choose jaxb if you are looking for low-memory usage (and are ready to sacrifice some performance speed), then use stax. my personal opinion is also that i wouldn't go for woodstox, but i'd choose either jaxb (if i needed processing power and could afford the ram) or stax (if i didn't need top speed and was low on infrastructure resources). both these technologies are java standards and part of the jdk starting from java 6. 
Resources

Benchmarker source code:
  • ZIP version: download large-xml-parser-1.0.0-snapshot-project
  • tar.gz version: download large-xml-parser-1.0.0-snapshot-project.tar
  • tar.bz2 version: download large-xml-parser-1.0.0-snapshot-project.tar

Benchmarker executables:
  • ZIP version: download large-xml-parser-1.0.0-snapshot-bin
  • tar.gz version: download large-xml-parser-1.0.0-snapshot-bin.tar
  • tar.bz2 version: download large-xml-parser-1.0.0-snapshot-bin.tar

Data files:
  • Ubuntu 64-bit VM running environment: download stax-vs-jaxb-ubuntu-64-vm
  • Ubuntu 32-bit running environment: download stax-vs-jaxb-ubuntu-32-bit
  • Windows 7 Ultimate running environment: download stax-vs-jaxb-windows7

From http://tedone.typepad.com/blog/2011/06/unmarshalling-benchmark-in-java-jaxb-vs-stax-vs-woodstox.html
June 27, 2011
by Marco Tedone
· 68,857 Views · 6 Likes
article thumbnail
Java EE6 CDI, Named Components and Qualifiers
One of the biggest promises Java EE 6 made was to ease the use of dependency injection. They did, using CDI. CDI, which stands for Contexts and Dependency Injection for Java EE, offers a basic set of features for applying dependency injection in your enterprise application. Before CDI, EJB 3 also introduced dependency injection, but this was a bit basic. You could inject an EJB (stateful or stateless) into another EJB or Servlet (if your container supported this). Of course not every application needs EJBs, which is why CDI is gaining so much popularity. To start, I have made this example. There is a Payment interface and two implementations: a cash payment and a visa payment. I want to be able to choose which type of payment I inject, while still using the same interface.

    public interface Payment {
        void pay(BigDecimal amount);
    }

and the two implementations

    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

To inject the interface we use the @Inject annotation. The annotation does basically what it says: it injects a component that is available in your application.

    @Inject
    private Payment payment;

Of course, you saw this coming from a mile away: this won't work. The container has two implementations of our Payment interface, so it does not know which one to inject.

    Unsatisfied dependencies for type [Payment] with qualifiers [@Default] at injection point [[field] @Inject private be.styledideas.blog.qualifier.web.PaymentBackingAction.payment]

So we need some sort of qualifier to point out which implementation we want. CDI offers the @Named annotation, allowing you to give a name to an implementation.

    @Named("cash")
    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    @Named("visa")
    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

When we now change our injection code, we can specify which implementation we need.

    @Inject
    private @Named("visa") Payment payment;

This works, but the flexibility is limited. When we want to rename our @Named parameter, we have to change it in every place where it is used. There is also no refactoring support. There is a better alternative: custom annotations using the @Qualifier annotation. Let us change the code a little bit. First of all, we create new annotation types.

    @java.lang.annotation.Documented
    @java.lang.annotation.Retention(RetentionPolicy.RUNTIME)
    @javax.inject.Qualifier
    public @interface CashPayment {}

    @java.lang.annotation.Documented
    @java.lang.annotation.Retention(RetentionPolicy.RUNTIME)
    @javax.inject.Qualifier
    public @interface VisaPayment {}

The @Qualifier annotation that is added to the annotation makes it discoverable by the container. We can now simply add these annotations to our implementations.
    @CashPayment
    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    @VisaPayment
    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

The only thing we now need to do is change our injection code to

    @Inject
    private @VisaPayment Payment payment;

When we now change something about our qualifier, we get nice compiler and refactoring support. This also adds extra flexibility for API or domain-specific language design. From http://styledideas.be/blog/2011/06/16/java-ee6-cdi-named-components-and-qualifiers/
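As a side note on the flexibility point above: qualifier annotations also combine well with CDI's programmatic lookup, so the choice between implementations can be deferred to runtime. The following is only a minimal sketch, not from the original article; it assumes the Payment interface and the CashPayment and VisaPayment qualifiers defined there, and the PaymentProcessor class name is illustrative.

    import java.math.BigDecimal;
    import javax.enterprise.inject.Instance;
    import javax.enterprise.util.AnnotationLiteral;
    import javax.inject.Inject;

    public class PaymentProcessor {

        // Instance<T> gives access to all beans implementing Payment;
        // select(...) narrows the choice by qualifier at runtime.
        @Inject
        private Instance<Payment> payments;

        public void pay(BigDecimal amount, boolean useVisa) {
            Payment payment = useVisa
                    ? payments.select(new AnnotationLiteral<VisaPayment>() {}).get()
                    : payments.select(new AnnotationLiteral<CashPayment>() {}).get();
            payment.pay(amount);
        }
    }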
June 24, 2011
by Jelle Victoor
· 72,305 Views · 5 Likes
article thumbnail
PHP: Fatal error: Can’t use method return value in write context
Just a quick post to help anyone struggling with this error message, as this issue gets raised from time to time on support forums. The reason for the error is usually that you're attempting to use empty or isset on a function call instead of a variable. While it may be obvious that this doesn't make sense for isset(), the same cannot be said for empty(). You simply meant to check whether the value returned from the function was empty; why shouldn't you be able to do just that? The reason is that empty($foo) is more or less syntactic sugar for !(isset($foo) && $foo). When written this way you can see that the isset() part of the statement doesn't make sense for functions. This leaves us with simply the $foo part. The solution is to just drop the empty() part.

Instead of:

    if (empty($obj->method())) {
    }

simply drop the empty construct:

    if ($obj->method()) {
    }
June 23, 2011
by Mats Lindh
· 36,568 Views
article thumbnail
Validating JSF EL Expressions in JSF Pages with static-jsfexpression-validator
Update: version 0.9.3 with a new groupId/artifactId was released on 7/25, including native support for JSF 1.2 (reflected below in the pom snippet). Update: version 0.9.4 with function tolerance for JSF 1.2 was released on 7/28 (it doesn't check that functions are OK, but it checks their parameters etc.).

static-jsfexpression-validator is a utility for verifying that EL expressions in JSF pages, such as #{bean.property}, are correct, meaning that they don't reference undefined managed beans or nonexistent getters and action methods. The purpose is to make JSF-based web applications safer to refactor, as the change of a method name will lead to the detection of an invalid expression without the need for extensive manual UI tests. It can be run statically, for example from a test. Currently it builds on the JSF implementation v. 1.1 but can be modified in a few hours (or days) to support a newer version of JSF.

How does it work?

  • Defined managed beans (name + type) are extracted from faces-config files and/or Spring application context files.
  • JSP pages are parsed by Jasper, Tomcat's JSP parser.
  • For each JSF tag: if it defines local variables, they are recorded (such as var in h:dataTable); all JSF EL expressions in the tag's attributes are validated by a real EL resolver using two magic classes, namely a custom VariableResolver and PropertyResolver, that – instead of looking up managed bean instances and invoking their getters – fabricate "fake values" of the expected types so that the "resolution" of an expression can proceed. The effect is that the existence of the referenced properties and action methods is verified against the target classes.

Sometimes it is not possible to determine the type of a JSF variable or property (e.g. when it's a collection element), in which case it is necessary to declare it beforehand. You can also manually declare extra variables (managed beans) and override the detected type of properties.

Minimal setup

Add this dependency to your Maven/Ivy/…:

    <dependency>
        <groupId>net.jakubholy.jeeutils.jsfelcheck</groupId>
        <artifactId>static-jsfexpression-validator-jsf11</artifactId>
        <version>0.9.3</version>
        <scope>test</scope>
    </dependency>

Alternatively, you can fetch static-jsfexpression-validator-jsf11-0.9.3.jar (or -jsf12- or -jsf20-) and its dependencies yourself; see Appendix A.

Run it:

    java -cp static-jsfexpression-validator-jsf11-0.9.3.jar:... net.jakubholy.jeeutils.jsfelcheck.JsfStaticAnalyzer --jsproot /path/to/jsp/files/dir

Alternatively, run it from a Java class to be able to configure everything:

    public class JsfElValidityTest {

        @Test
        public void should_have_only_defined_beans_and_valid_properties_in_jsf_el_expressions() throws Exception {
            JsfStaticAnalyzer jsfStaticAnalyzer = new JsfStaticAnalyzer();
            jsfStaticAnalyzer.setFacesConfigFiles(Collections.singleton(new File("web/WEB-INF/faces-config.xml")));

            Map<String, Class<?>> none = Collections.emptyMap();
            CollectedValidationResults results = jsfStaticAnalyzer.validateElExpressions("web", none, none, none);

            assertEquals("There shall be no invalid JSF EL expressions; check System.err/.out for details. Failure "
                    + results.failures(), 0, results.failures().size());
        }
    }

Run it and check the standard error and output for results, which should ideally look something like this:

    INFO: >>> STARTED for '/somefile.jsp
    #############################################
    ...
    >>> LOCAL VARIABLES THAT YOU MUST DECLARE TYPE FOR [0]
    #########################################
    >>> FAILED JSF EL EXPRESSIONS [0]
    #########################################
    (Set logging to FINE for the class net.jakubholy.jeeutils.jsfelcheck.validator.ValidatingJsfElResolver to see failure details and stack traces)
    >>> TOTAL EXCLUDED EXPRESSIONS: 0 by filters: []
    >>> TOTAL EXPRESSIONS CHECKED: 5872 (failed: 0, ignored expressions: 0) in 0min 25s

Standard usage

Normally you will need to configure the validator, because you will have cases where a property type etc. cannot be derived automatically.

Declaring local variable types, extra variables, property type overrides

Local variables – h:dataTable etc.

If your JSP includes a JSF tag that declares a new local variable (typically h:dataTable), like vegetable in the example below, where favouriteVegetable is a collection of vegetables, then you must tell the validator what type of objects the collection contains:

    Map<String, Class<?>> localVariableTypes = new Hashtable<String, Class<?>>();
    localVariableTypes.put("vegetarion.favouriteVegetable", Vegetable.class);
    jsfStaticAnalyzer.validateElExpressions("web", localVariableTypes, extraVariables, propertyTypeOverrides);

The failure to do so would be indicated by a number of failed expression validations and a suggestion to register a type for this variable:

    >>> LOCAL VARIABLES THAT YOU MUST DECLARE TYPE FOR [6]
    #########################################
    Declare component type of 'vegetarion.favouriteVegetable' assigned to the variable vegetable (file /favourites.jsp, tag line 109)
    >>> FAILED JSF EL EXPRESSIONS [38]
    #########################################
    (Set logging to FINE for the class net.jakubholy.jeeutils.jsfelcheck.validator.ValidatingJsfElResolver to see failure details and stack traces)
    FailedValidationResult [failure=InvalidExpressionException [Invalid EL expression '#{vegetable.name}': PropertyNotFoundException - Property 'name' not found on class net.jakubholy.jeeutils.jsfelcheck.expressionfinder.impl.jasper.variables.ContextVariableRegistry$Error_YouMustDelcareTypeForThisVariable$$EnhancerByMockitoWithCGLIB$$3c8d0e8f]; expression=#{vegetable.name}, file=/favourites.jsp, tagLine=118]

Defining variables not in faces-config

Variable: the first element of an EL expression. If you happen to be using a variable that is not a managed bean defined in faces-config (or a Spring config file), for example because you create it manually, you need to declare it and its type:

    Map<String, Class<?>> extraVariables = new Hashtable<String, Class<?>>();
    extraVariables.put("myMessages", Map.class);
    jsfStaticAnalyzer.validateElExpressions("web", localVariableTypes, extraVariables, propertyTypeOverrides);

Expressions like #{myMessages['whatever.key']} would now be OK.

Overriding the detected type of properties, especially for collection elements

Property: any but the first segment of an EL expression (#{variable.property1.property2['property3']…}). Sometimes you need to explicitly tell the validator the type of a property. This is necessary if the property is an object taken from a collection, where the type is unknown at runtime, but it may be useful at other times as well.
If you had, say, a map of vegetables referenced as vegetableMap in your JSP, then you'd need to declare the element type like this:

    Map<String, Class<?>> propertyTypeOverrides = new Hashtable<String, Class<?>>();
    propertyTypeOverrides.put("vegetableMap.*", Vegetable.class);
    // or just for one key:
    propertyTypeOverrides.put("vegetableMap.carrot", Vegetable.class);
    jsfStaticAnalyzer.validateElExpressions("web", localVariableTypes, extraVariables, propertyTypeOverrides);

Using the .* syntax you indicate that all elements contained in the collection/map are of the given type. You can also override the type of a single property, whether it is contained in a collection or not, as shown on the third line.

Excluding/including selected expressions for validation

You may supply the validator with filters that determine which expressions should be checked or ignored. This is useful mainly if it is not possible to check them, for example because a variable iterates over a collection with incompatible objects. The ignored expressions are added to a separate report, and the number of ignored expressions together with the filters responsible for them is printed.

Example: ignore all expressions for the variable evilCollection:

    jsfStaticAnalyzer.addElExpressionFilter(new ElExpressionFilter() {

        @Override
        public boolean accept(ParsedElExpression expression) {
            if (expression.size() == 1
                    && expression.iterator().next().equals("evilCollection")) {
                return false;
            }
            return true;
        }

        @Override
        public String toString() {
            return "excludeEvilCollectionWithIncompatibleObjects";
        }
    });

(I admit that the interface should be simplified.)

Other configuration in JsfStaticAnalyzer:

  • setFacesConfigFiles(Collection<File>): faces-config files where to look for defined managed beans; null/empty not to read any
  • setSpringConfigFiles(Collection<File>): Spring applicationContext files where to look for defined managed beans; null/empty not to read any
  • setSuppressOutput(boolean): do not print to System.err/.out – used if you want to process the produced CollectedValidationResults on your own
  • setJspsToIncludeCommaSeparated(String): normally all JSPs under the jspDir are processed; you can force processing of only the ones you want by supplying their names here (a JspC setting)
  • setPrintCorrectExpressions(boolean): set to true to print all the correctly validated JSF EL expressions

Understanding the results

JsfStaticAnalyzer.validateElExpressions prints the results to the standard output and error and also returns them in a CollectedValidationResults with the following content:

  • ResultsIterable failures() – expressions whose validation wasn't successful
  • ResultsIterable goodResults() – expressions validated successfully
  • ResultsIterable excluded() – expressions ignored due to a filter
  • a Collection of local variables (h:dataTable's var) for which you need to declare their type

The ResultsIterables have size(), and the individual *Result classes contain enough information to describe the problem (the expression, exception, location, …). Now we will look at how the results appear in the output.

Unknown managed bean (variable)

    FailedValidationResult [failure=InvalidExpressionException [Invalid EL expression '#{messages['message.res.ok']}': VariableNotFoundException - No variable 'messages' among the predefined ones.]; expression=#{messages['message.res.ok']}, file=/sample_failures.jsp, tagLine=20]

Solution: fix it or add the variable to the extraVariables map parameter.

Invalid property (no corresponding getter found on the variable/previous property)

a) Invalid property on a correct target object class

This kind of failure is the raison d'être of this tool.
    FailedValidationResult [failure=InvalidExpressionException [Invalid EL expression '#{segment.departureDateXxx}': PropertyNotFoundException - Property 'departureDateXxx' not found on class example.Segment$$EnhancerByMockitoWithCGLIB$$5eeba04]; expression=#{segment.departureDateXxx}, file=/sample_failures.jsp, tagLine=92]

Solution: fix it, i.e. correct the expression to reference an existing property of the class. If the validator is using a different class than it should, you might need to define a property type override.

b) Invalid property on an unknown target object class – MockObjectOfUnknownType

    FailedValidationResult [failure=InvalidExpressionException [Invalid EL expression '#{carList[1].price}': PropertyNotFoundException - Property 'price' not found on class net.jakubholy.jeeutils.jsfelcheck.validator.MockObjectOfUnknownType$$EnhancerByMockitoWithCGLIB$$9fa876d1]; expression=#{carList[1].price}, file=/cars.jsp, tagLine=46]

Solution: carList is clearly a list whose element type cannot be determined, and you must therefore declare it via the propertyTypeOverrides map property.

Local variable without a defined type

    FailedValidationResult [failure=InvalidExpressionException [Invalid EL expression '#{traveler.name}': PropertyNotFoundException - Property 'name' not found on class net.jakubholy.jeeutils.jsfelcheck.expressionfinder.impl.jasper.variables.ContextVariableRegistry$Error_YouMustDelcareTypeForThisVariable$$EnhancerByMockitoWithCGLIB$$b8a846b2]; expression=#{traveler.name}, file=/worldtravels.jsp, tagLine=118]

Solution: declare the type via the localVariableTypes map parameter.

More documentation

Check the Javadoc, especially in JsfStaticAnalyzer.

Limitations

  • Currently only local variables defined by h:dataTable's var are recognized. To add support for others you'd need to create and register a class similar to DataTableVariableResolver.
  • Handling of included files isn't perfect: they don't know about local variables defined in the including file. But we have all the info needed to implement this.
  • Static includes are handled by the Jasper parser (though it likely parses the included files also as top-level files, if they are on its search path).

Future

It depends on my project's needs, your feedback and your contributions.

Where to get it

From the project's GitHub or from the project's Maven Central repository; snapshots may also appear in the Sonatype snapshots repo.

Appendices

A. Dependencies of v.0.9.0 (also mostly similar for later versions):

(Note: Spring is not really needed if you haven't Spring-managed JSF beans.)
    aopalliance:aopalliance:jar:1.0:compile
    commons-beanutils:commons-beanutils:jar:1.6:compile
    commons-collections:commons-collections:jar:2.1:compile
    commons-digester:commons-digester:jar:1.5:compile
    commons-io:commons-io:jar:1.4:compile
    commons-logging:commons-logging:jar:1.0:compile
    javax.faces:jsf-api:jar:1.1_02:compile
    javax.faces:jsf-impl:jar:1.1_02:compile
    org.apache.tomcat:annotations-api:jar:6.0.29:compile
    org.apache.tomcat:catalina:jar:6.0.29:compile
    org.apache.tomcat:el-api:jar:6.0.29:compile
    org.apache.tomcat:jasper:jar:6.0.29:compile
    org.apache.tomcat:jasper-el:jar:6.0.29:compile
    org.apache.tomcat:jasper-jdt:jar:6.0.29:compile
    org.apache.tomcat:jsp-api:jar:6.0.29:compile
    org.apache.tomcat:juli:jar:6.0.29:compile
    org.apache.tomcat:servlet-api:jar:6.0.29:compile
    org.mockito:mockito-all:jar:1.8.5:compile
    org.springframework:spring-beans:jar:2.5.6:compile
    org.springframework:spring-context:jar:2.5.6:compile
    org.springframework:spring-core:jar:2.5.6:compile
    xml-apis:xml-apis:jar:1.0.b2:compile

From http://theholyjava.wordpress.com/2011/06/22/validating-jsf-el-expressions-in-jsf-pages-with-static-jsfexpression-validator/
June 23, 2011
by Jakub Holý
· 12,490 Views
