The Latest Coding Topics

Better Application Events in Spring Framework 4.2
Application events have been available since the very beginning of the Spring Framework as a means for loosely coupled components to exchange information.
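As a quick illustration of the annotation-driven model Spring 4.2 introduces, here is a minimal sketch; the event and component names are made up for the example. Since 4.2, any object can be published as an event, and a plain @EventListener method can consume it:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// A plain POJO event; since Spring 4.2 it no longer needs to extend ApplicationEvent
class OrderCreatedEvent {
    private final String orderId;
    OrderCreatedEvent(String orderId) { this.orderId = orderId; }
    String getOrderId() { return orderId; }
}

@Component
class OrderService {
    private final ApplicationEventPublisher publisher;

    @Autowired
    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void createOrder(String orderId) {
        // publishEvent(Object) accepts arbitrary payloads as of Spring 4.2
        publisher.publishEvent(new OrderCreatedEvent(orderId));
    }
}

@Component
class OrderAuditListener {
    // An annotated method replaces implementing the ApplicationListener interface
    @EventListener
    public void onOrderCreated(OrderCreatedEvent event) {
        System.out.println("Order created: " + event.getOrderId());
    }
}
```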
February 27, 2015
by Pieter Humphrey
· 20,856 Views · 2 Likes
Writing Groovy's groovy.util.slurpersupport.GPathResult (XmlSlurper) Content as XML
In a previous blog post, I described using XmlNodePrinter to present XML parsed with XmlParser in a nice format to standard output, as a Java String, and in a new file. Because XmlNodePrinter works with groovy.util.Node instances, it works well with XmlParser, but doesn't work so well with XmlSlurper because XmlSlurper deals with instances of groovy.util.slurpersupport.GPathResult rather than instances of groovy.util.Node. This post looks at how groovy.xml.XmlUtil can be used to present GPathResult objects that result from slurping XML to standard output, as a Java String, and to a new file.

This post's first code listing demonstrates slurping XML with XmlSlurper and writing the slurped GPathResult to standard output using XmlUtil's serialize(GPathResult, OutputStream) method and the System.out handle.

slurpAndPrintXml.groovy : Writing XML to Standard Output

```groovy
#!/usr/bin/env groovy

// slurpAndPrintXml.groovy
//
// Use Groovy's XmlSlurper to "slurp" provided XML file and use XmlUtil to write
// XML content out to standard output.

import groovy.xml.XmlUtil

if (args.length < 1)
{
   println "USAGE: groovy slurpAndPrintXml.groovy <xmlFile>"
}
String xmlFileName = args[0]
xml = new XmlSlurper().parse(xmlFileName)
XmlUtil xmlUtil = new XmlUtil()
xmlUtil.serialize(xml, System.out)
```

The next code listing demonstrates use of XmlUtil's serialize(GPathResult) method to serialize the GPathResult to a Java String.

slurpXmlToString.groovy : Writing XML to Java String

```groovy
#!/usr/bin/env groovy

// slurpXmlToString.groovy
//
// Use Groovy's XmlSlurper to "slurp" provided XML file and use XmlUtil to
// write the XML content to a String.

import groovy.xml.XmlUtil

if (args.length < 1)
{
   println "USAGE: groovy slurpXmlToString.groovy <xmlFile>"
}
String xmlFileName = args[0]
xml = new XmlSlurper().parse(xmlFileName)
XmlUtil xmlUtil = new XmlUtil()
String xmlString = xmlUtil.serialize(xml)
println "String:\n${xmlString}"
```

The third code listing demonstrates use of XmlUtil's serialize(GPathResult, Writer) method to write the GPathResult representing the slurped XML to a file via a FileWriter instance.

slurpAndSaveXml.groovy : Writing XML to File

```groovy
#!/usr/bin/env groovy

// slurpAndSaveXml.groovy
//
// Uses Groovy's XmlSlurper to "slurp" XML and then uses Groovy's XmlUtil
// to write slurped XML back out to a file with the provided name. The
// first argument this script expects is the path/name of the XML file to be
// slurped and the second argument expected by this script is the path/name of
// the file to which the XML should be saved/written.

import groovy.xml.XmlUtil

if (args.length < 2)
{
   println "USAGE: groovy slurpAndSaveXml.groovy <inputXmlFile> <outputXmlFile>"
}
String xmlFileName = args[0]
String outputFileName = args[1]
xml = new XmlSlurper().parse(xmlFileName)
XmlUtil xmlUtil = new XmlUtil()
xmlUtil.serialize(xml, new FileWriter(new File(outputFileName)))
```

The examples in this post have demonstrated writing/serializing XML slurped into GPathResult objects through the use of XmlUtil methods. The XmlUtil class also provides methods for serializing the groovy.util.Node instances that XmlParser provides. The three methods accepting an instance of Node are similar to the three methods above for instances of GPathResult. Similarly, other methods of XmlUtil provide similar support for XML represented as instances of org.w3c.dom.Element, Java String, and groovy.lang.Writable. The XmlNodePrinter class covered in my previous blog post can be used to serialize XmlParser's parsed XML represented as a Node. The XmlUtil class can also be used to serialize XmlParser's parsed XML represented as a Node, but offers the additional advantage of being able to serialize XmlSlurper's slurped XML represented as a GPathResult.
February 27, 2015
by Dustin Marx
· 8,457 Views
Standing Up a Local Netflix Eureka
Here I will consider two different ways of standing up a local instance of Netflix Eureka. If you are not familiar with Eureka, it provides a central registry where (micro)services can register themselves and client applications can use this registry to look up specific instances hosting a service and to make the service calls.

Approach 1: Native Eureka Library

The first way is to simply use the archive file generated by the Netflix Eureka build process:

1. Clone the Eureka source repository here: https://github.com/Netflix/eureka
2. Run "./gradlew build" at the root of the repository. This should build cleanly, generating a war file in the eureka-server/build/libs folder.
3. Grab this file, rename it to "eureka.war", and place it in the webapps folder of either Tomcat or Jetty. For this exercise I have used Jetty.
4. Start Jetty. By default Jetty will boot up at port 8080; however, I wanted to instead bring it up at port 8761, so you can start it up this way: "java -jar start.jar -Djetty.port=8761"

The server should start up cleanly and can be verified at this endpoint: "http://localhost:8761/eureka/v2/apps"

Approach 2: Spring-Cloud-Netflix

Spring-Cloud-Netflix provides a very neat way to bootstrap Eureka. To bring up a Eureka server using Spring-Cloud-Netflix, the approach that I followed was to clone the sample Eureka server application available here: https://github.com/spring-cloud-samples/eureka

1. Clone this repository.
2. From the root of the repository run "mvn spring-boot:run", and that is it!

The server should boot up cleanly and the REST endpoint should come up here: "http://localhost:8761/eureka/apps". As a bonus, Spring-Cloud-Netflix provides a neat UI showing the various applications that have registered with Eureka at the root of the webapp at "http://localhost:8761/".

Just a small issue to be aware of: the context URLs are a little different in the two cases, "eureka/v2/apps" vs "eureka/apps". This can be adjusted in the configuration of the services which register with Eureka.

Conclusion

Your mileage with these approaches may vary. I have found Spring-Cloud-Netflix a little unstable at times, but it has mostly worked out well for me. The documentation at the Spring-Cloud site is also far more exhaustive than the one provided at the Netflix Eureka site.
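For reference, the sample application cloned in Approach 2 boils down to very little code. A minimal sketch of a Spring-Cloud-Netflix Eureka server looks roughly like this (the class name is illustrative, and it assumes the spring-cloud-starter-eureka-server dependency is on the classpath):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// @EnableEurekaServer pulls in and auto-configures the embedded Eureka server
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}
```

With server.port set to 8761 in the application configuration, this serves the same registry endpoint and dashboard described above.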
February 26, 2015
by Biju Kunjummen
· 13,045 Views
A JAXB Nuance: String Versus Enum from Enumerated Restricted XSD String
Although Java Architecture for XML Binding (JAXB) is fairly easy to use in nominal cases (especially since Java SE 6), it also presents numerous nuances. Some of the common nuances are due to the inability to exactly match (bind) XML Schema Definition (XSD) types to Java types. This post looks at one specific example of this that also demonstrates how different XSD constructs that enforce the same XML structure can lead to different Java types when the JAXB compiler generates the Java classes.

The next code listing, for Food.xsd, defines a schema for food types. The XSD mandates that valid XML will have a root element called "Food" with three nested elements "Vegetable", "Fruit", and "Dessert". Although the approach used to specify the "Vegetable" and "Dessert" elements is different than the approach used to specify the "Fruit" element, both approaches result in similar "valid XML." The "Vegetable" and "Dessert" elements are declared directly as elements of the prescribed simpleTypes defined later in the XSD. The "Fruit" element is defined via reference (ref=) to another defined element that consists of a simpleType.

Food.xsd

Although Vegetable and Dessert elements are defined in the schema differently than Fruit, the resulting valid XML is the same. A valid XML file is shown next in the code listing for food1.xml (element markup restored around the original values; namespace attributes omitted).

food1.xml

```xml
<Food>
   <Vegetable>Spinach</Vegetable>
   <Fruit>Watermelon</Fruit>
   <Dessert>Pie</Dessert>
</Food>
```

At this point, I'll use a simple Groovy script to validate the above XML against the above XSD. The code for this Groovy XML validation script (validateXmlAgainstXsd.groovy) is shown next.

validateXmlAgainstXsd.groovy

```groovy
#!/usr/bin/env groovy

// validateXmlAgainstXsd.groovy
//
// Accepts paths/names of two files. The first is the XML file to be validated
// and the second is the XSD against which to validate that XML.

import javax.xml.validation.Schema
import javax.xml.validation.SchemaFactory
import javax.xml.validation.Validator

if (args.length < 2)
{
   println "USAGE: groovy validateXmlAgainstXsd.groovy <xmlFile> <xsdFile>"
   System.exit(-1)
}

String xml = args[0]
String xsd = args[1]

try
{
   SchemaFactory schemaFactory = SchemaFactory.newInstance(javax.xml.XMLConstants.W3C_XML_SCHEMA_NS_URI)
   Schema schema = schemaFactory.newSchema(new File(xsd))
   Validator validator = schema.newValidator()
   validator.validate(new javax.xml.transform.stream.StreamSource(xml))
}
catch (Exception exception)
{
   println "\nERROR: Unable to validate ${xml} against ${xsd} due to '${exception}'\n"
   System.exit(-1)
}
println "\nXML file ${xml} validated successfully against ${xsd}.\n"
```

The next screen snapshot demonstrates running the above Groovy XML validation script against food1.xml and Food.xsd.

The objective of this post so far has been to show how different approaches in an XSD can lead to the same XML being valid. Although these different XSD approaches prescribe the same valid XML, they lead to different Java class behavior when JAXB is used to generate classes based on the XSD. The next screen snapshot demonstrates running the JDK-provided JAXB xjc compiler against Food.xsd to generate the Java classes.

The output from the JAXB generation shown above indicates that Java classes were created for the "Vegetable" and "Dessert" elements but not for the "Fruit" element. This is because "Vegetable" and "Dessert" were defined differently than "Fruit" in the XSD. The next code listing is for the Food.java class generated by the xjc compiler.
From this we can see that the generated Food.java class references specific generated Java types for Vegetable and Dessert, but references simply a generic Java String for Fruit.

Food.java (generated by JAXB xjc compiler)

```java
//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSchemaType;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for anonymous complex type.
 *
 * The following schema fragment specifies the expected content contained within this class.
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {
    "vegetable",
    "fruit",
    "dessert"
})
@XmlRootElement(name = "Food")
public class Food {

    @XmlElement(name = "Vegetable", required = true)
    @XmlSchemaType(name = "string")
    protected Vegetable vegetable;
    @XmlElement(name = "Fruit", required = true)
    protected String fruit;
    @XmlElement(name = "Dessert", required = true)
    @XmlSchemaType(name = "string")
    protected Dessert dessert;

    /**
     * Gets the value of the vegetable property.
     *
     * @return
     *     possible object is
     *     {@link Vegetable }
     */
    public Vegetable getVegetable() {
        return vegetable;
    }

    /**
     * Sets the value of the vegetable property.
     *
     * @param value
     *     allowed object is
     *     {@link Vegetable }
     */
    public void setVegetable(Vegetable value) {
        this.vegetable = value;
    }

    /**
     * Gets the value of the fruit property.
     *
     * @return
     *     possible object is
     *     {@link String }
     */
    public String getFruit() {
        return fruit;
    }

    /**
     * Sets the value of the fruit property.
     *
     * @param value
     *     allowed object is
     *     {@link String }
     */
    public void setFruit(String value) {
        this.fruit = value;
    }

    /**
     * Gets the value of the dessert property.
     *
     * @return
     *     possible object is
     *     {@link Dessert }
     */
    public Dessert getDessert() {
        return dessert;
    }

    /**
     * Sets the value of the dessert property.
     *
     * @param value
     *     allowed object is
     *     {@link Dessert }
     */
    public void setDessert(Dessert value) {
        this.dessert = value;
    }

}
```

The advantage of having specific Vegetable and Dessert classes is the additional type safety they bring as compared to a general Java String. Both Vegetable.java and Dessert.java are actually enums because they come from enumerated values in the XSD. The two generated enums are shown in the next two code listings.

Vegetable.java (generated with JAXB xjc compiler)

```java
//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlEnumValue;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for Vegetable.
 *
 * The following schema fragment specifies the expected content contained within this class.
 */
@XmlType(name = "Vegetable")
@XmlEnum
public enum Vegetable {

    @XmlEnumValue("Carrot")
    CARROT("Carrot"),
    @XmlEnumValue("Squash")
    SQUASH("Squash"),
    @XmlEnumValue("Spinach")
    SPINACH("Spinach"),
    @XmlEnumValue("Celery")
    CELERY("Celery");

    private final String value;

    Vegetable(String v) {
        value = v;
    }

    public String value() {
        return value;
    }

    public static Vegetable fromValue(String v) {
        for (Vegetable c: Vegetable.values()) {
            if (c.value.equals(v)) {
                return c;
            }
        }
        throw new IllegalArgumentException(v);
    }

}
```

Dessert.java (generated with JAXB xjc compiler)

```java
//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlEnumValue;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for Dessert.
 *
 * The following schema fragment specifies the expected content contained within this class.
 */
@XmlType(name = "Dessert")
@XmlEnum
public enum Dessert {

    @XmlEnumValue("Pie")
    PIE("Pie"),
    @XmlEnumValue("Cake")
    CAKE("Cake"),
    @XmlEnumValue("Ice Cream")
    ICE_CREAM("Ice Cream");

    private final String value;

    Dessert(String v) {
        value = v;
    }

    public String value() {
        return value;
    }

    public static Dessert fromValue(String v) {
        for (Dessert c: Dessert.values()) {
            if (c.value.equals(v)) {
                return c;
            }
        }
        throw new IllegalArgumentException(v);
    }

}
```

Having enums generated for the XML elements ensures that only valid values for those elements can be represented in Java.

Conclusion

JAXB makes it relatively easy to map Java to XML, but because there is not a one-to-one mapping between Java and XML types, there can be some cases where the generated Java type for a particular XSD-prescribed element is not obvious. This post has shown how two different approaches to building an XSD to enforce the same basic XML structure can lead to very different results in the Java classes generated with the JAXB xjc compiler. In the example shown in this post, declaring elements in the XSD directly on simpleTypes restricting XSD's string to a specific set of enumerated values is preferable to declaring elements as references to other elements wrapping a simpleType of restricted string enumerated values, because of the type safety that is achieved when enums are generated rather than general Java Strings.
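For reference, the two declaration styles contrasted above look roughly like the following sketch. It is reconstructed from the description in the text rather than from the original listing, so element order, namespace handling, and the exact Fruit enumeration values are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

   <xs:element name="Food">
      <xs:complexType>
         <xs:sequence>
            <!-- Declared directly against named simpleTypes: xjc generates enums -->
            <xs:element name="Vegetable" type="Vegetable"/>
            <!-- Declared via reference to another element: xjc falls back to String -->
            <xs:element ref="Fruit"/>
            <xs:element name="Dessert" type="Dessert"/>
         </xs:sequence>
      </xs:complexType>
   </xs:element>

   <!-- Element wrapping an anonymous simpleType of restricted string values -->
   <xs:element name="Fruit">
      <xs:simpleType>
         <xs:restriction base="xs:string">
            <xs:enumeration value="Watermelon"/>
            <xs:enumeration value="Apple"/>
         </xs:restriction>
      </xs:simpleType>
   </xs:element>

   <xs:simpleType name="Vegetable">
      <xs:restriction base="xs:string">
         <xs:enumeration value="Carrot"/>
         <xs:enumeration value="Squash"/>
         <xs:enumeration value="Spinach"/>
         <xs:enumeration value="Celery"/>
      </xs:restriction>
   </xs:simpleType>

   <xs:simpleType name="Dessert">
      <xs:restriction base="xs:string">
         <xs:enumeration value="Pie"/>
         <xs:enumeration value="Cake"/>
         <xs:enumeration value="Ice Cream"/>
      </xs:restriction>
   </xs:simpleType>

</xs:schema>
```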
February 25, 2015
by Dustin Marx
· 21,012 Views · 1 Like
How to Detect Java Deadlocks Programmatically
Deadlocks are situations in which two or more actions are waiting for the others to finish, leaving all actions blocked forever. They can be very hard to detect during development, and they usually require a restart of the application in order to recover. To make things worse, deadlocks usually manifest in production under the heaviest load, and are very hard to spot during testing. The reason for this is that it's not practical to test all possible interleavings of a program's threads. Although some static analysis libraries exist that can help us detect possible deadlocks, it is still necessary to be able to detect them during runtime and get some information which can help us fix the issue, or alert us so we can restart our application.

Detect deadlocks programmatically using the ThreadMXBean class

Java 5 introduced ThreadMXBean, an interface that provides various monitoring methods for threads. I recommend checking all of its methods, as there are many useful operations for monitoring the performance of your application in case you are not using an external tool. The method of interest here is findMonitorDeadlockedThreads or, if you are using Java 6, findDeadlockedThreads. The difference is that findDeadlockedThreads can also detect deadlocks caused by owner locks (java.util.concurrent), while findMonitorDeadlockedThreads can only detect monitor locks (i.e. synchronized blocks). Since the old version is kept for compatibility purposes only, I am going to use the newer version.

The idea is to encapsulate periodic checking for deadlocks into a reusable component so we can just fire and forget about it. One way to implement scheduling is through the executors framework, a set of well-abstracted and very easy to use multithreading classes:

```java
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
this.scheduler.scheduleAtFixedRate(deadlockCheck, period, period, unit);
```

Simple as that: we have a runnable called periodically after a certain amount of time determined by period and time unit. Next, we want to make our utility extensible and allow clients to supply the behaviour that gets triggered after a deadlock is detected. We need a method that receives a list of objects describing threads that are in a deadlock:

```java
void handleDeadlock(final ThreadInfo[] deadlockedThreads);
```

Now we have everything we need to implement our deadlock detector class.

```java
public interface DeadlockHandler {
  void handleDeadlock(final ThreadInfo[] deadlockedThreads);
}

public class DeadlockDetector {

  private final DeadlockHandler deadlockHandler;
  private final long period;
  private final TimeUnit unit;
  private final ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
  private final ScheduledExecutorService scheduler =
      Executors.newScheduledThreadPool(1);

  final Runnable deadlockCheck = new Runnable() {
    @Override
    public void run() {
      long[] deadlockedThreadIds = DeadlockDetector.this.mbean.findDeadlockedThreads();
      if (deadlockedThreadIds != null) {
        ThreadInfo[] threadInfos =
            DeadlockDetector.this.mbean.getThreadInfo(deadlockedThreadIds);
        DeadlockDetector.this.deadlockHandler.handleDeadlock(threadInfos);
      }
    }
  };

  public DeadlockDetector(final DeadlockHandler deadlockHandler,
      final long period, final TimeUnit unit) {
    this.deadlockHandler = deadlockHandler;
    this.period = period;
    this.unit = unit;
  }

  public void start() {
    this.scheduler.scheduleAtFixedRate(
        this.deadlockCheck, this.period, this.period, this.unit);
  }
}
```

Let's test this in practice. First, we will create a handler to output deadlocked thread information to System.err. We could use this to send an email in a real-world scenario, for example:

```java
public class DeadlockConsoleHandler implements DeadlockHandler {

  @Override
  public void handleDeadlock(final ThreadInfo[] deadlockedThreads) {
    if (deadlockedThreads != null) {
      System.err.println("Deadlock detected!");

      for (ThreadInfo threadInfo : deadlockedThreads) {
        if (threadInfo != null) {
          for (Thread thread : Thread.getAllStackTraces().keySet()) {
            if (thread.getId() == threadInfo.getThreadId()) {
              System.err.println(threadInfo.toString().trim());

              for (StackTraceElement ste : thread.getStackTrace()) {
                System.err.println("\t" + ste.toString().trim());
              }
            }
          }
        }
      }
    }
  }
}
```

This iterates through all stack traces and prints the stack trace for each thread info. This way we can know exactly on which line each thread is waiting, and for which lock. This approach has one downside: it can give false alarms if one of the threads is waiting with a timeout, which can actually be seen as a temporary deadlock. Because of that, the original thread could no longer exist by the time we handle our deadlock, and getThreadInfo will return null for such threads. To avoid possible NullPointerExceptions, we need to guard against such situations.

Finally, let's force a simple deadlock and see our system in action:

```java
DeadlockDetector deadlockDetector = new DeadlockDetector(new DeadlockConsoleHandler(),
    5, TimeUnit.SECONDS);
deadlockDetector.start();

final Object lock1 = new Object();
final Object lock2 = new Object();

Thread thread1 = new Thread(new Runnable() {
  @Override
  public void run() {
    synchronized (lock1) {
      System.out.println("Thread1 acquired lock1");
      try {
        TimeUnit.MILLISECONDS.sleep(500);
      } catch (InterruptedException ignore) {
      }
      synchronized (lock2) {
        System.out.println("Thread1 acquired lock2");
      }
    }
  }
});
thread1.start();

Thread thread2 = new Thread(new Runnable() {
  @Override
  public void run() {
    synchronized (lock2) {
      System.out.println("Thread2 acquired lock2");
      synchronized (lock1) {
        System.out.println("Thread2 acquired lock1");
      }
    }
  }
});
thread2.start();
```

Output:

Thread1 acquired lock1
Thread2 acquired lock2
Deadlock detected!
"Thread-1" Id=11 BLOCKED on java.lang.Object@68ab95e6 owned by "Thread-0" Id=10
	deadlock.DeadlockTester$2.run(DeadlockTester.java:42)
	java.lang.Thread.run(Thread.java:662)
"Thread-0" Id=10 BLOCKED on java.lang.Object@58fe64b9 owned by "Thread-1" Id=11
	deadlock.DeadlockTester$1.run(DeadlockTester.java:28)
	java.lang.Thread.run(Thread.java:662)

Keep in mind that deadlock detection can be an expensive operation, and you should test it with your application to determine if you even need to use it and how frequently you will check. I suggest an interval of at least several minutes, as more frequent detection is not crucial: we don't have a recovery plan anyway - we can only debug and fix the error, or restart the application and hope it won't happen again. If you have any suggestions about dealing with deadlocks, or a question about this solution, drop a comment below.
February 25, 2015
by Ivan Korhner
· 52,151 Views · 4 Likes
Per Client Cookie Handling With Jersey
A lot of REST services will use cookies as part of the authentication / authorisation scheme. This is a problem because by default the old Jersey client will use the singleton CookieHandler.getDefault, which in most cases will be null, and if not null will not likely work in a multithreaded server environment. (This is because in the background the default Jersey client will use URL.openConnection.) Now you can work around this by using the Apache HTTP Client adapter for Jersey, but this is not always available. So if you want to use the Jersey client with cookies in a server environment, you need to do a little bit of reflection to ensure you use your own private cookie jar.

```java
final CookieHandler ch = new CookieManager();
Client client = new Client(new URLConnectionClientHandler(
    new HttpURLConnectionFactory() {
      @Override
      public HttpURLConnection getHttpURLConnection(URL uRL) throws IOException {
        HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
        try {
          Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
          cookieField.setAccessible(true);
          MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
          mh.bindTo(connect).invoke(ch);
        } catch (Throwable e) {
          e.printStackTrace();
        }
        return connect;
      }
    }));
```

This will only work if your environment is using the internal implementation of sun.net.www.protocol.http.HttpURLConnection that comes with the JDK. This appears to be the case for modern versions of WLS.

For JAX-RS 2.0 you can make a similar change using the Jersey 2.x specific ClientConfig class and HttpUrlConnectorProvider.

```java
final CookieHandler ch = new CookieManager();
Client client = ClientBuilder.newClient(new ClientConfig()
    .connectorProvider(new HttpUrlConnectorProvider().connectionFactory(
        new HttpUrlConnectorProvider.ConnectionFactory() {
          @Override
          public HttpURLConnection getConnection(URL uRL) throws IOException {
            HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
            try {
              Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
              cookieField.setAccessible(true);
              MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
              mh.bindTo(connect).invoke(ch);
            } catch (Throwable e) {
              e.printStackTrace();
            }
            return connect;
          }
        })));
```
February 24, 2015
by Gerard Davison
· 7,977 Views
Redirecting All Kinds of stdout in Python
A common task in Python (especially while testing or debugging) is to redirect sys.stdout to a stream or a file while executing some piece of code. However, simply "redirecting stdout" is sometimes not as easy as one would expect; hence the slightly strange title of this post. In particular, things become interesting when you want C code running within your Python process (including, but not limited to, Python modules implemented as C extensions) to also have its stdout redirected according to your wish. This turns out to be tricky and leads us into the interesting world of file descriptors, buffers and system calls. But let's start with the basics.

Pure Python

The simplest case arises when the underlying Python code writes to stdout, whether by calling print, sys.stdout.write or some equivalent method. If the code you have does all its printing from Python, redirection is very easy. With Python 3.4 we even have a built-in tool in the standard library for this purpose - contextlib.redirect_stdout. Here's how to use it:

```python
import io
from contextlib import redirect_stdout

f = io.StringIO()
with redirect_stdout(f):
    print('foobar')
    print(12)
print('Got stdout: "{0}"'.format(f.getvalue()))
```

When this code runs, the actual print calls within the with block don't emit anything to the screen, and you'll see their output captured in the stream f. Incidentally, note how perfect the with statement is for this goal - everything within the block gets redirected; once the block is done, things are cleaned up for you and redirection stops.

If you're stuck on an older and uncool Python, prior to 3.4 [1], what then? Well, redirect_stdout is really easy to implement on your own. I'll change its name slightly to avoid confusion:

```python
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirector(stream):
    old_stdout = sys.stdout
    sys.stdout = stream
    try:
        yield
    finally:
        sys.stdout = old_stdout
```

So we're back in the game:

```python
f = io.StringIO()
with stdout_redirector(f):
    print('foobar')
    print(12)
print('Got stdout: "{0}"'.format(f.getvalue()))
```

Redirecting C-level streams

Now, let's take our shiny redirector for a more challenging ride:

```python
import ctypes
import os

libc = ctypes.CDLL(None)

f = io.StringIO()
with stdout_redirector(f):
    print('foobar')
    print(12)
    libc.puts(b'this comes from C')
    os.system('echo and this is from echo')
print('Got stdout: "{0}"'.format(f.getvalue()))
```

I'm using ctypes to directly invoke the C library's puts function [2]. This simulates what happens when C code called from within our Python code prints to stdout - the same would apply to a Python module using a C extension. Another addition is the os.system call to invoke a subprocess that also prints to stdout. What we get from this is:

this comes from C
and this is from echo
Got stdout: "foobar
12
"

Err... no good. The prints got redirected as expected, but the output from puts and echo flew right past our redirector and ended up in the terminal without being caught. What gives? To grasp why this didn't work, we have to first understand what sys.stdout actually is in Python.

Detour - on file descriptors and streams

This section dives into some internals of the operating system, the C library, and Python [3]. If you just want to know how to properly redirect printouts from C in Python, you can safely skip to the next section (though understanding how the redirection works will be difficult).
Files are opened by the OS, which keeps a system-wide table of open files, some of which may point to the same underlying disk data (two processes can have the same file open at the same time, each reading from a different place, etc.). File descriptors are another abstraction, which is managed per-process. Each process has its own table of open file descriptors that point into the system-wide table. Here's a schematic, taken from The Linux Programming Interface:

File descriptors allow sharing open files between processes (for example when creating child processes with fork). They're also useful for redirecting from one entry to another, which is relevant to this post. Suppose that we make file descriptor 5 a copy of file descriptor 4. Then all writes to 5 will behave in the same way as writes to 4. Coupled with the fact that the standard output is just another file descriptor on Unix (usually index 1), you can see where this is going. The full code is given in the next section.

File descriptors are not the end of the story, however. You can read and write to them with the read and write system calls, but this is not the way things are typically done. The C runtime library provides a convenient abstraction around file descriptors - streams. These are exposed to the programmer as the opaque FILE structure with a set of functions that act on it (for example fprintf and fgets).

FILE is a fairly complex structure, but the most important things to know about it are that it holds a file descriptor to which the actual system calls are directed, and that it provides buffering, to ensure that the system call (which is expensive) is not invoked too often. Suppose you emit stuff to a binary file, a byte or two at a time. Unbuffered writes to the file descriptor with write would be quite expensive because each write invokes a system call. On the other hand, using fwrite is much cheaper because the typical call to this function just copies your data into its internal buffer and advances a pointer. Only occasionally (depending on the buffer size and flags) will an actual write system call be issued.

With this information in hand, it should be easy to understand what stdout actually is for a C program. stdout is a global FILE object kept for us by the C library, and it buffers output to file descriptor number 1. Calls to functions like printf and puts add data into this buffer. fflush forces its flushing to the file descriptor, and so on.

But we're talking about Python here, not C. So how does Python translate calls to sys.stdout.write to actual output? Python uses its own abstraction over the underlying file descriptor - a file object. Moreover, in Python 3 this file object is further wrapped in an io.TextIOWrapper, because what we pass to print is a Unicode string, but the underlying write system calls accept binary data, so encoding has to happen en route.

The important take-away from this is: Python and a C extension loaded by it (this is similarly relevant to C code invoked via ctypes) run in the same process, and share the underlying file descriptor for standard output. However, while Python has its own high-level wrapper around it - sys.stdout - the C code uses its own FILE object. Therefore, simply replacing sys.stdout cannot, in principle, affect output from C code. To make the replacement deeper, we have to touch something shared by the Python and C runtimes - the file descriptor.
Redirecting with file descriptor duplication

Without further ado, here is an improved stdout_redirector that also redirects output from C code [4]:

```python
from contextlib import contextmanager
import ctypes
import io
import os
import sys
import tempfile

libc = ctypes.CDLL(None)
c_stdout = ctypes.c_void_p.in_dll(libc, 'stdout')

@contextmanager
def stdout_redirector(stream):
    # The original fd stdout points to. Usually 1 on POSIX systems.
    original_stdout_fd = sys.stdout.fileno()

    def _redirect_stdout(to_fd):
        """Redirect stdout to the given file descriptor."""
        # Flush the C-level buffer stdout
        libc.fflush(c_stdout)
        # Flush and close sys.stdout - also closes the file descriptor (fd)
        sys.stdout.close()
        # Make original_stdout_fd point to the same file as to_fd
        os.dup2(to_fd, original_stdout_fd)
        # Create a new sys.stdout that points to the redirected fd
        sys.stdout = io.TextIOWrapper(os.fdopen(original_stdout_fd, 'wb'))

    # Save a copy of the original stdout fd in saved_stdout_fd
    saved_stdout_fd = os.dup(original_stdout_fd)
    try:
        # Create a temporary file and redirect stdout to it
        tfile = tempfile.TemporaryFile(mode='w+b')
        _redirect_stdout(tfile.fileno())
        # Yield to caller, then redirect stdout back to the saved fd
        yield
        _redirect_stdout(saved_stdout_fd)
        # Copy contents of temporary file to the given stream
        tfile.flush()
        tfile.seek(0, io.SEEK_SET)
        stream.write(tfile.read())
    finally:
        tfile.close()
        os.close(saved_stdout_fd)
```

There are a lot of details here (such as managing the temporary file into which output is redirected) that may obscure the key approach: using dup and dup2 to manipulate file descriptors. These functions let us duplicate file descriptors and make any descriptor point at any file. I won't spend more time on them - go ahead and read their documentation, if you're interested. The detour section should provide enough background to understand them. Let's try this:

```python
f = io.BytesIO()

with stdout_redirector(f):
    print('foobar')
    print(12)
    libc.puts(b'this comes from C')
    os.system('echo and this is from echo')
print('Got stdout: "{0}"'.format(f.getvalue().decode('utf-8')))
```

Gives us:

Got stdout: "and this is from echo
this comes from C
foobar
12
"

Success! A few things to note:

- The output order may not be what we expected. This is due to buffering. If it's important to preserve order between different kinds of output (i.e. between C and Python), further work is required to disable buffering on all relevant streams.
- You may wonder why the output of echo was redirected at all. The answer is that file descriptors are inherited by subprocesses. Since we rigged fd 1 to point to our file instead of the standard output prior to forking to echo, this is where its output went.
- We use a BytesIO here. This is because on the lowest level, the file descriptors are binary. It may be possible to do the decoding when copying from the temporary file into the given stream, but that can hide problems. Python has its in-memory understanding of Unicode, but who knows what is the right encoding for data printed out from underlying C code? This is why this particular redirection approach leaves the decoding to the caller.
- The above also makes this code specific to Python 3. There's no magic involved, and porting to Python 2 is trivial, but some assumptions made here don't hold (such as sys.stdout being an io.TextIOWrapper).

Redirecting the stdout of a child process

We've just seen that the file descriptor duplication approach lets us grab the output from child processes as well.
But it may not always be the most convenient way to achieve this task. In the general case, you typically use the subprocess module to launch child processes, and you may launch several such processes either in a pipe or separately. Some programs will even juggle multiple subprocesses launched this way in different threads. Moreover, while these subprocesses are running you may want to emit something to stdout and you don't want this output to be captured.

So, managing the stdout file descriptor in the general case can be messy; it is also unnecessary, because there's a much simpler way. The subprocess module's swiss knife Popen class (which serves as the basis for much of the rest of the module) accepts a stdout parameter, which we can use to ask it for access to the child's stdout:

```python
import subprocess

echo_cmd = ['echo', 'this', 'comes', 'from', 'echo']
proc = subprocess.Popen(echo_cmd, stdout=subprocess.PIPE)
output = proc.communicate()[0]
print('Got stdout:', output)
```

The subprocess.PIPE argument can be used to set up actual child process pipes (a la the shell), but in its simplest incarnation it captures the process's output. If you only launch a single child process at a time and are interested in its output, there's an even simpler way:

```python
output = subprocess.check_output(echo_cmd)
print('Got stdout:', output)
```

check_output will capture and return the child's standard output to you; it will also raise an exception if the child exits with a non-zero return code.

Conclusion

I hope I covered most of the common cases where "stdout redirection" is needed in Python. Naturally, all of the same applies to the other standard output stream - stderr. Also, I hope the background on file descriptors was sufficiently clear to explain the redirection code; squeezing this topic into such a short space is challenging. Let me know if any questions remain or if there's something I could have explained better. Finally, while it is conceptually simple, the code for the redirector is quite long; I'll be happy to hear if you find a shorter way to achieve the same effect.

[1] Do not despair. As of February 2015, a sizable chunk of the worldwide Python programmers are in the same boat.
[2] Note the bytes passed to puts. This being Python 3, we have to be careful since libc doesn't understand Python's unicode strings.
[3] The following description focuses on Unix/POSIX systems; also, it's necessarily partial. Large book chapters have been written on this topic - I'm just trying to present some key concepts relevant to stream redirection.
[4] The approach taken here is inspired by this Stack Overflow answer.
February 23, 2015
by Eli Bendersky
· 18,753 Views
Sneak Peek into the JCache API (JSR 107)
This post covers the JCache API at a high level and provides a teaser - just enough for you to (hopefully) start itching about it ;-)

In this post:

- JCache overview
- JCache API, implementations
- Supported (Java) platforms for the JCache API
- Quick look at Oracle Coherence
- Fun stuff - Project Headlands (RESTified JCache by Adam Bien), JCache related talks at JavaOne 2014, links to resources for learning more about JCache

What is JCache?

JCache (JSR 107) is a standard caching API for Java. It provides an API for applications to be able to create and work with an in-memory cache of objects. The benefits are obvious - one does not need to concentrate on the finer details of implementing caching, and time is better spent on the core business logic of the application.

JCache components

The specification itself is very compact and surprisingly intuitive. The API defines high-level components (interfaces), some of which are listed below:

- Caching Provider - used to control Caching Managers and can deal with several of them
- Cache Manager - deals with create, read, destroy operations on a Cache
- Cache - stores entries (the actual data) and exposes CRUD interfaces to deal with the entries
- Entry - abstraction on top of a key-value pair akin to a java.util.Map

JCache implementations

JCache defines the interfaces, which of course are implemented by different vendors a.k.a. Providers:

- Oracle Coherence
- Hazelcast
- Infinispan
- ehcache
- Reference Implementation - this is more for reference purposes rather than a production quality implementation. It is per the specification though, and you can rest assured of the fact that it does in fact pass the TCK as well.

From the application point of view, all that's required is for the implementation to be present in the classpath. The API also provides a way to further fine-tune the properties specific to your provider via standard mechanisms. You should be able to track the list of JCache reference implementations from the JCP website link.

```java
public class JCacheUsage {
    public static void main(String[] args) {
        // bootstrap the JCache Provider
        CachingProvider jcacheProvider = Caching.getCachingProvider();
        CacheManager jcacheManager = jcacheProvider.getCacheManager();

        // configure cache
        MutableConfiguration<String, MyPreciousObject> jcacheConfig = new MutableConfiguration<>();
        jcacheConfig.setTypes(String.class, MyPreciousObject.class);

        // create cache
        Cache<String, MyPreciousObject> cache =
            jcacheManager.createCache("PreciousObjectCache", jcacheConfig);

        // play around
        String key = UUID.randomUUID().toString();
        cache.put(key, new MyPreciousObject());
        MyPreciousObject inserted = cache.get(key);

        cache.remove(key);
        cache.get(key); // returns null since the key no longer exists
    }
}
```

JCache provider detection

JCache provider detection happens automatically when you only have a single JCache provider on the class path. You can choose from the below options as well:

```java
// set JVM-level system property
-Djavax.cache.spi.cachingprovider=org.ehcache.jcache.JCacheCachingProvider

// code-level config
System.setProperty("javax.cache.spi.cachingprovider", "org.ehcache.jcache.JCacheCachingProvider");

// you want to choose from multiple JCache providers at runtime
CachingProvider ehcacheJCacheProvider =
    Caching.getCachingProvider("org.ehcache.jcache.JCacheCachingProvider");

// which JCache providers do I have on the classpath?
Iterable<CachingProvider> jcacheProviders = Caching.getCachingProviders();
```

Java platform support

- Compliant with Java SE 6 and above
- Does not define any details in terms of Java EE integration. This does not mean that it cannot be used in a Java EE environment - it's just not standardized yet.
- Could not be plugged into Java EE 7 as a tried and tested standard
- Candidate for Java EE 8

Project Headlands: Java EE and JCache in tandem

- By none other than Adam Bien himself!
- Java EE 7, Java SE 8 and JCache in action
- Exposes the JCache API via JAX-RS (REST)
- Uses Hazelcast as the JCache provider
- Highly recommended!

Oracle Coherence

This post deals with high-level stuff w.r.t. JCache in general. However, a few lines about Oracle Coherence would help put things in perspective:

- Oracle Coherence is a part of Oracle's Cloud Application Foundation stack
- It is primarily an in-memory data grid solution
- Geared towards making applications more scalable in general
- What's important to know is that from version 12.1.3 onwards, Oracle Coherence includes a reference implementation for JCache (more in the next section)

JCache support in Oracle Coherence

- Support for JCache implies that applications can now use a standard API to access the capabilities of Oracle Coherence
- This is made possible by Coherence simply providing an abstraction over its existing interfaces (NamedCache etc). The application deals with a standard interface (the JCache API) and the calls to the API are delegated to the existing Coherence core library implementation
- Support for the JCache API also means that one does not need to use Coherence-specific APIs in the application, resulting in vendor-neutral code, which equals portability
- How ironic - supporting a standard API and always keeping your competitors in the hunt ;-) But hey! That's what healthy competition and quality software is all about!
- Talking of healthy competition - Oracle Coherence does support a host of other features in addition to the standard JCache related capabilities
- The Oracle Coherence distribution contains all the libraries for working with the JCache implementation
- The service definition file in the coherence-jcache.jar qualifies it as a valid JCache provider implementation

Curious about Oracle Coherence?

- Quick Starter page
- Documentation
- Installation
- Further reading about the Coherence and JCache combo - Oracle Coherence documentation

JCache at JavaOne 2014

A couple of great talks revolving around JCache at JavaOne 2014:

- Come, Code, Cache, Compute! by Steve Millidge
- Using the New JCache by Brian Oliver and Greg Luck

Hope this was fun :-)

Cheers!
February 23, 2015
by Abhishek Gupta
· 5,879 Views · 1 Like
Java Message Service (JMS)—Explained
Java Message Service enables loosely coupled communication between two or more systems. It provides a reliable and asynchronous form of communication. There are two types of messaging models in JMS.

Point-to-Point Messaging Domain

Applications are built on the concept of message queues, senders, and receivers. Each message is sent to a specific queue, and receiving systems consume messages from the queues established to hold their messages. Queues retain all messages sent to them until the messages are consumed by the receiver or expire. Here there is only one consumer for a message. If the receiver is not available at any point, the message will remain in the message broker (queue) and will be delivered to the consumer when it is available or free to process the message. Also, the receiver acknowledges the consumption of each message.

Publish/Subscribe Messaging Domain

Applications send messages to a message broker called a Topic. This topic publishes the message to all the subscribers. The topic retains the messages until they are delivered to the systems at the receiving end. Applications are loosely coupled and do not need to be on the same server. Message communications are handled by the message broker; in this case it is called a topic. A message can have multiple consumers, consumers will get the messages only after they subscribe, and consumers need to remain active in order to get new messages.

Message Sender

The Message Sender object is created by a session and used for sending messages to a destination queue. It implements the MessageProducer interface. First we need to create a connection object using the ActiveMQConnectionFactory factory object. Then we create a session object. Using the session object we set the message broker (queue) and create the message sender object. Here we are sending a map message object. Please see the code snippet for the message sender.

```java
public class MessageSender {

    public static void main(String[] args) {
        Connection connection = null;
        try {
            Context ctx = new InitialContext();
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination destination = session.createQueue("test.message.queue");
            MessageProducer messageProducer = session.createProducer(destination);
            MapMessage message = session.createMapMessage();
            message.setString("Name", "Tim");
            message.setString("Role", "Developer");
            message.setDouble("Salary", 850000);
            messageProducer.send(message);
        } catch (Exception e) {
            System.out.println(e);
        } finally {
            if (connection != null) {
                try {
                    connection.close();
                } catch (JMSException e) {
                    System.out.println(e);
                }
            }
            System.exit(0);
        }
    }
}
```

Message Receiver

The Message Receiver object is created by a session and used for receiving messages from a queue. It implements the MessageConsumer interface. The process remains the same for the message sender and receiver. In the case of the receiver, we use a message listener. The listener remains active and gets invoked when the receiver consumes any message from the broker. Please see the code snippets below.

```java
public class MessageReceiver {

    public static void main(String[] args) {
        try {
            InitialContext ctx = new InitialContext();
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination destination = session.createQueue("test.message.queue");
            MessageConsumer consumer = session.createConsumer(destination);
            consumer.setMessageListener(new MapMessageListener());
            connection.start();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
```

Please see the code snippet for a message listener receiving a map message object.

```java
public class MapMessageListener implements MessageListener {

    public void onMessage(Message message) {
        if (message instanceof MapMessage) {
            MapMessage mapMessage = (MapMessage) message;
            try {
                String name = mapMessage.getString("Name");
                System.out.println("Name : " + name);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        } else {
            System.out.println("Invalid Message Received");
        }
    }
}
```

Hope this will help you to understand the basics of JMS and write production-ready message sender and receiver programs.
February 23, 2015
by Roshan Thomas
· 81,259 Views · 12 Likes
Retry-After HTTP Header in Practice
Retry-After is a lesser-known HTTP response header.
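To make the idea concrete, here is a minimal client-side sketch; the URL is illustrative, and only the delay-seconds form of the header is handled (per the HTTP spec the header may instead carry an HTTP-date):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RetryAfterAwareClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/api/resource");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // 503 Service Unavailable responses commonly carry Retry-After
        if (conn.getResponseCode() == HttpURLConnection.HTTP_UNAVAILABLE) {
            String retryAfter = conn.getHeaderField("Retry-After");
            if (retryAfter != null) {
                // Assume the delay-seconds form, e.g. "Retry-After: 120"
                long delaySeconds = Long.parseLong(retryAfter.trim());
                Thread.sleep(delaySeconds * 1000L);
                // ...then retry the request
            }
        }
    }
}
```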
February 20, 2015
by Tomasz Nurkiewicz
· 15,859 Views
Writing Groovy's groovy.util.Node (XmlParser) Content as XML
Groovy's XmlParser makes it easy to parse an XML file, XML input stream, or XML string using one of its overloaded parse methods (or parseText in the case of the String). The XML content parsed with any of these methods is made available as a groovy.util.Node instance. This blog post describes how to make the return trip and write the content of the Node back to XML in alternative formats such as a File or a String.

Groovy's MarkupBuilder provides a convenient approach for generating XML from Groovy code. For example, I demonstrated writing XML based on SQL query results in the post GroovySql and MarkupBuilder: SQL-to-XML. However, when one wishes to write/serialize XML from a Groovy Node, an easy and appropriate approach is to use XmlNodePrinter as demonstrated in Updating XML with XmlParser.

The next code listing, parseAndPrintXml.groovy, demonstrates use of XmlParser to parse XML from a provided file and use of XmlNodePrinter to write that Node parsed from the file to standard output as XML.

parseAndPrintXml.groovy : Writing XML to Standard Output

```groovy
#!/usr/bin/env groovy

// parseAndPrintXml.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to print the contents of the Node parsed from the XML with
// XmlParser to standard output.

if (args.length < 1)
{
   println "USAGE: groovy parseAndPrintXml.groovy <xmlFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
XmlNodePrinter nodePrinter = new XmlNodePrinter(preserveWhitespace:true)
nodePrinter.print(xml)
```

Putting aside the comments and code for checking command-line arguments, there are really 4 lines (lines 15-18) in the above code listing of significance to this discussion. These four lines demonstrate instantiating an XmlParser (line 15), using the instance of XmlParser to "parse" a File instance based on a provided argument file name (line 16), instantiating an XmlNodePrinter (line 17), and using that XmlNodePrinter instance to "print" the parsed XML to standard output (line 18).

Although writing XML to standard output can be useful for a user to review or to redirect output to another script or tool, there are times when it is more useful to have access to the parsed XML as a String. The next code listing is just a bit more involved than the last one and demonstrates use of XmlNodePrinter to write the parsed XML contained in a Node instance as a Java String.

parseXmlToString.groovy : Writing XML to Java String

```groovy
#!/usr/bin/env groovy

// parseXmlToString.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to write the contents of the Node parsed from the XML with
// XmlParser into a Java String.

if (args.length < 1)
{
   println "USAGE: groovy parseXmlToString.groovy <xmlFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
StringWriter stringWriter = new StringWriter()
XmlNodePrinter nodePrinter = new XmlNodePrinter(new PrintWriter(stringWriter))
nodePrinter.setPreserveWhitespace(true)
nodePrinter.print(xml)
String xmlString = stringWriter.toString()
println "XML as String:\n${xmlString}"
```

As the just-shown code listing demonstrates, one can instantiate an instance of XmlNodePrinter that writes to a PrintWriter that was instantiated with a StringWriter. This StringWriter ultimately makes the XML available as a Java String.

Writing XML from a groovy.util.Node to a File is very similar to writing it to a String, with a FileWriter used instead of a StringWriter.
This is demonstrated in the next code listing.

parseAndSaveXml.groovy : Writing XML to File

```groovy
#!/usr/bin/env groovy

// parseAndSaveXml.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to write the contents of the Node parsed from the XML with
// XmlParser to file with provided name.

if (args.length < 2)
{
   println "USAGE: groovy parseAndSaveXml.groovy <inputXmlFile> <outputXmlFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
FileWriter fileWriter = new FileWriter(args[1])
XmlNodePrinter nodePrinter = new XmlNodePrinter(new PrintWriter(fileWriter))
nodePrinter.setPreserveWhitespace(true)
nodePrinter.print(xml)
```

I don't show it in this post, but the value of being able to write a Node back out as XML often comes after modifying that Node instance. Updating XML with XmlParser demonstrates the type of functionality that can be performed on a Node before serializing the modified instance back out.
February 20, 2015
by Dustin Marx
· 22,103 Views · 6 Likes
Exception Handling in Spring REST Web Service
Learn how to handle exceptions in a Spring controller using ResponseEntity and HttpStatus, @ResponseStatus on a custom exception class, and more custom methods.
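As a quick sketch of the two techniques named in the summary (the controller, exception, and handler names here are illustrative, not taken from the article):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

// Approach 1: map a custom exception to an HTTP status with @ResponseStatus;
// throwing it from any handler method yields a 404 response
@ResponseStatus(HttpStatus.NOT_FOUND)
class ResourceNotFoundException extends RuntimeException {
    ResourceNotFoundException(String message) {
        super(message);
    }
}

@RestController
class CustomerController {

    // Approach 2: build the response explicitly with ResponseEntity and HttpStatus
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> handleBadInput(IllegalArgumentException ex) {
        return new ResponseEntity<>(ex.getMessage(), HttpStatus.BAD_REQUEST);
    }
}
```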
February 18, 2015
by Roshan Thomas
· 284,593 Views · 13 Likes
Avoid Sequences of if…else Statements
Adding a feature to legacy code while trying to improve it can be quite challenging, but also quite straightforward. Nothing angers me more (ok, I might be exaggerating a little) than stumbling upon such a pattern:

```java
public Foo getFoo(Bar bar) {
    if (bar instanceof BarA) {
        return new FooA();
    } else if (bar instanceof BarB) {
        return new FooB();
    } else if (bar instanceof BarC) {
        return new FooC();
    } else if (bar instanceof BarD) {
        return new FooD();
    }
    throw new BarNotFoundException();
}
```

Apply Object-Oriented Programming

The first reflex when writing such a thing - yes, please don't wait for the poor guy coming after you to clean your mess - should be to ask yourself whether applying basic Object-Oriented Programming couldn't help you. In this case, you would have multiple children classes of:

```java
public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R>
```

For example:

```java
public class FooBarAFunction implements FooBarFunction<BarA, FooA> {

    public FooA apply(BarA bar) {
        return new FooA();
    }
}
```

Note: not enjoying the benefits of Java 8 is no reason not to use this: just create your own Function interface or use Guava's.

Use a Map

I must admit that this approach not only scatters tightly related code across multiple files (this is Java...); it's also unfortunately not always possible to easily apply OOP. In that case, it's quite easy to initialise a map that returns the correct type:

```java
public class FooBarFunction {

    private static final Map<Class<? extends Bar>, Foo> MAPPINGS = new HashMap<>();

    static {
        MAPPINGS.put(BarA.class, new FooA());
        MAPPINGS.put(BarB.class, new FooB());
        MAPPINGS.put(BarC.class, new FooC());
        MAPPINGS.put(BarD.class, new FooD());
    }

    public Foo getFoo(Bar bar) {
        Foo foo = MAPPINGS.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}
```

Note this is only a basic example, and users of Dependency Injection can easily pass the map in the object constructor instead.

More than a return

The previous Map trick works quite well with return statements but not with code snippets. In this case, you need to use the map to return an enum and associate it with a switch-case:

```java
public class FooBarFunction {

    private enum BarEnum {
        A, B, C, D
    }

    private static final Map<Class<? extends Bar>, BarEnum> MAPPINGS = new HashMap<>();

    static {
        MAPPINGS.put(BarA.class, BarEnum.A);
        MAPPINGS.put(BarB.class, BarEnum.B);
        MAPPINGS.put(BarC.class, BarEnum.C);
        MAPPINGS.put(BarD.class, BarEnum.D);
    }

    public void getFoo(Bar bar) {
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            case A:
                // Do something
                break;
            case B:
                // Do something
                break;
            case C:
                // Do something
                break;
            case D:
                // Do something
                break;
        }
    }
}
```

Note that not only do I believe this code is more readable, it also performs better: the switch is evaluated once, as opposed to each if...else being evaluated in sequence until one returns true. Note it's expected not to put the code directly in the case statements but to use dedicated method calls.

Conclusion

I hope that at this point, you're convinced there are other ways than sequences of if...else. The rest is in your hands.

See more at: http://blog.frankel.ch/avoid-sequences-else-statements
February 16, 2015
by Nicolas Fränkel DZone Core CORE
· 26,106 Views
article thumbnail
R: ggplot2 - When to Adjust the Group Aesthetic
Each group consists of only one observation. Do you need to adjust the group aesthetic?

I've been playing around with some weather data over the last couple of days, which I aggregated down to the average temperature per month over the last 4 years and stored in a CSV file. This is what the file looks like:

$ cat /tmp/averageTemperatureByMonth.csv
"month","aveTemperature"
"January",6.02684563758389
"February",5.89380530973451
"March",7.54838709677419
"April",10.875
"May",13.3064516129032
"June",15.9666666666667
"July",18.8387096774194
"August",18.3709677419355
"September",16.2583333333333
"October",13.4596774193548
"November",9.19166666666667
"December",7.01612903225806

I wanted to create a simple line chart showing the months of the year in ascending order with the corresponding temperature. My first attempt was the following:

library(ggplot2)

df = read.csv("/tmp/averageTemperatureByMonth.csv")
df$month = factor(df$month, month.name)

ggplot(aes(x = month, y = aveTemperature), data = df) +
  geom_line() +
  ggtitle("Temperature by month")

which resulted in the following error:

geom_path: Each group consists of only one observation. Do you need to adjust the group aesthetic?

My understanding is that the points don't get joined up by default because the variable on the x axis is not a continuous one but rather a factor variable. One way to work around this problem is to make it numeric, like so:

ggplot(aes(x = as.numeric(month), y = aveTemperature), data = df) +
  geom_line() +
  ggtitle("Temperature by month")

which results in the following chart:

This isn't bad, but it'd be much nicer if we could have the month names along the bottom instead. It turns out we can, but we need to specify a group that each point belongs to; ggplot will then connect points which belong to the same group. In this case we don't really have one, so we'll define a dummy one instead:

ggplot(aes(x = month, y = aveTemperature, group = 1), data = df) +
  geom_line() +
  ggtitle("Temperature by month")

And now we get the visualisation we want:
February 14, 2015
by Mark Needham
· 15,870 Views
article thumbnail
Converting an Application to JHipster
I've been intrigued by JHipster ever since I first tried it last September. I'd worked with AngularJS and Spring Boot quite a bit, and I liked the idea that someone had combined them, adding some nifty features along the way. When I spoke about AngularJS earlier this month, I included a few slides on JHipster near the end of the presentation.

This week, I received an email from someone who attended that presentation.

Hey Matt,

We met a few weeks back when you presented at DOSUG. You were talking about JHipster, which I had been eyeing for a few months, and wanted your quick .02 cents. I have built a pretty heavy application over the last 6 months that is using mostly the same tech as JHipster: Java, Spring, JPA, AngularJS, Compass, Grunt. It's ridiculously close for most of the tech stack. So, I was debating rolling it over into a JHipster app to make it a more familiar stack for folks. My concern is that I will spend months trying to shoehorn it in for not much ROI. Any thoughts on going down this path? What are the biggest issues you've seen in using JHipster? It seems pretty straightforward except for the entity generators. I'm concerned they are totally different than what I am using. The main difference in what I'm doing compared to JHipster is my almost complete use of Groovy instead of old school Java in the app. I would have to be forced into going back to regular Java beans...

Thoughts?

I replied with the following advice:

JHipster is great for starting a project, but I don't know that it buys you much value after the first few months. I would stick with your current setup and consider JHipster for your next project. I've only prototyped with it; I haven't created any client apps or put anything in production. I have with Spring Boot and AngularJS though, so I like that JHipster combines them for me. JHipster doesn't generate Scala or Groovy code, but you could still use them in a project as long as you had Maven/Gradle configured properly. You might try generating a new app with JHipster and examining how they're doing things. At the very least, it can be a good learning tool, even if you're not using it directly.

Java Hipsters: Do you agree with this advice? Have you tried migrating an existing app to JHipster? Are any of you using Scala or Groovy in your JHipster projects?
February 13, 2015
by Matt Raible
· 8,287 Views · 2 Likes
article thumbnail
Getting Started with Dropwizard: Authentication, Configuration and HTTPS
Basic Authentication is the simplest way to secure access to a resource.
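As a quick illustration of what Basic Authentication amounts to on the wire (my own sketch, not code from the article), here is how a client builds the Authorization header in plain Java 8; the user name and password are made up.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String user = "alice";     // hypothetical credentials
        String password = "secret";
        // Basic auth is just "user:password" Base64-encoded; there is no encryption,
        // which is why it should only travel over HTTPS.
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: Basic " + token);
    }
}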
February 10, 2015
by Dmitry Noranovich
· 45,919 Views · 1 Like
article thumbnail
The API Gateway Pattern: Angular JS and Spring Security Part IV
Written by Dave Syer in the Spring blog

In this article we continue our discussion of how to use Spring Security with Angular JS in a "single page application". Here we show how to build an API Gateway to control the authentication and access to the backend resources using Spring Cloud. This is the fourth in a series of articles; you can catch up on the basic building blocks of the application or build it from scratch by reading the first article, or you can just go straight to the source code in Github. In the last article we built a simple distributed application that used Spring Session to authenticate the backend resources. In this one we make the UI server into a reverse proxy to the backend resource server, fixing the issues with the last implementation (technical complexity introduced by custom token authentication) and giving us a lot of new options for controlling access from the browser client.

Reminder: if you are working through this article with the sample application, be sure to clear your browser cache of cookies and HTTP Basic credentials. In Chrome the best way to do that for a single server is to open a new incognito window.

Creating an API Gateway

An API Gateway is a single point of entry (and control) for front-end clients, which could be browser-based (like the examples in this article) or mobile. The client only has to know the URL of one server, and the backend can be refactored at will with no change, which is a significant advantage. There are other advantages in terms of centralization and control: rate limiting, authentication, auditing and logging. And implementing a simple reverse proxy is really simple with Spring Cloud.

If you were following along in the code, you will know that the application implementation at the end of the last article was a bit complicated, so it's not a great place to iterate away from. There was, however, a halfway point which we could start from more easily, where the backend resource wasn't yet secured with Spring Security. The source code for this is a separate project in Github, so we are going to start from there. It has a UI server and a resource server and they are talking to each other. The resource server doesn't have Spring Security yet, so we can get the system working first and then add that layer.

Declarative Reverse Proxy in One Line

To turn it into an API Gateway, the UI server needs one small tweak. Somewhere in the Spring configuration we need to add an @EnableZuulProxy annotation, e.g. in the main (only) application class:

@SpringBootApplication
@RestController
@EnableZuulProxy
public class UiApplication {
    ...
}

and in an external configuration file ("application.yml") we need to map a local resource in the UI server to a remote one:

security:
  ...
zuul:
  routes:
    resource:
      path: /resource/**
      url: http://localhost:9000

This says "map paths with the pattern /resource/** in this server to the same paths in the remote server at localhost:9000". Simple and yet effective (OK, so it's 6 lines including the YAML, but you don't always need that)!

All we need to make this work is the right stuff on the classpath. For that purpose we have a few new lines in our Maven POM:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-parent</artifactId>
      <version>1.0.0.BUILD-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
  </dependency>
  ...
</dependencies>

Note the use of "spring-cloud-starter-zuul" - it's a starter POM just like the Spring Boot ones, but it governs the dependencies we need for this Zuul proxy.
We are also importing the spring-cloud-starter-parent BOM (shown above) because we want to be able to depend on all the versions of transitive dependencies being correct.

Consuming the Proxy in the Client

With those changes in place our application still works, but we haven't actually used the new proxy until we modify the client. Fortunately that's trivial. We just need to go from this implementation of the "home" controller:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('http://localhost:9000/').success(function(data) {
    $scope.greeting = data;
  })
});

to a local resource:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('resource/').success(function(data) {
    $scope.greeting = data;
  })
});

Now when we fire up the servers everything is working and the requests are being proxied through the UI (API Gateway) to the resource server.

Further Simplifications

Even better: we don't need the CORS filter any more in the resource server. We threw that one together pretty quickly anyway, and it should have been a red light that we had to do anything so technically focused by hand (especially where it concerns security). Fortunately it is now redundant, so we can just throw it away, and go back to sleeping at night!

Securing the Resource Server

You might remember that in the intermediate state we started from, there is no security in place for the resource server.

Aside: Lack of software security might not even be a problem if your network architecture mirrors the application architecture (you can just make the resource server physically inaccessible to anyone but the UI server). As a simple demonstration of that, we can make the resource server only accessible on localhost. Just add this to application.properties in the resource server:

server.address: 127.0.0.1

Wow, that was easy! Do that with a network address that's only visible in your data center and you have a security solution that works for all resource servers and all user desktops.

Suppose that we decide we do need security at the software level (quite likely for a number of reasons). That's not going to be a problem, because all we need to do is add Spring Security as a dependency (in the resource server POM):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

That's enough to get us a secure resource server, but it won't get us a working application yet, for the same reason that it didn't in Part III: there is no shared authentication state between the two servers.

Sharing Authentication State

We can use the same mechanism to share authentication (and CSRF) state as we did in the last article, i.e. Spring Session. We add the dependencies to both servers as before:

<dependency>
  <groupId>org.springframework.session</groupId>
  <artifactId>spring-session</artifactId>
  <version>1.0.0.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-redis</artifactId>
</dependency>

but this time the configuration is much simpler because we can just add the same Filter declaration to both. First the UI server (adding @EnableRedisHttpSession):

@SpringBootApplication
@RestController
@EnableZuulProxy
@EnableRedisHttpSession
public class UiApplication {
    ...
}

and then the resource server. There are two changes to make: one is adding @EnableRedisHttpSession and a HeaderHttpSessionStrategy bean to the ResourceApplication:

@SpringBootApplication
@RestController
@EnableRedisHttpSession
class ResourceApplication {
    ...
    @Bean
    HeaderHttpSessionStrategy sessionStrategy() {
        return new HeaderHttpSessionStrategy();
    }
}

and the other is to explicitly ask for a non-stateless session creation policy in application.properties:

security.sessions: NEVER

As long as Redis is still running in the background (use the fig.yml if you like to start it) then the system will work. Load the homepage for the UI at http://localhost:8080, log in, and you will see the message from the backend rendered on the homepage.

How Does it Work?

What is going on behind the scenes now? First we can look at the HTTP requests in the UI server (and API Gateway):

VERB  PATH                        STATUS  RESPONSE
GET   /                           200     index.html
GET   /css/angular-bootstrap.css  200     Twitter bootstrap CSS
GET   /js/angular-bootstrap.js    200     Bootstrap and Angular JS
GET   /js/hello.js                200     Application logic
GET   /user                       302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /resource                   302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /login.html                 200     Angular login form partial
POST  /login                      302     Redirect to home page (ignored)
GET   /user                       200     JSON authenticated user
GET   /resource                   200     (Proxied) JSON greeting

That's identical to the sequence at the end of Part II except for the fact that the cookie names are slightly different ("SESSION" instead of "JSESSIONID") because we are using Spring Session. But the architecture is different, and that last request to "/resource" is special because it was proxied to the resource server.

We can see the reverse proxy in action by looking at the "/trace" endpoint in the UI server (from Spring Boot Actuator, which we added with the Spring Cloud dependencies). Go to http://localhost:8080/trace in a browser and scroll to the end (if you don't have one already, get a JSON plugin for your browser to make it nice and readable). You will need to authenticate with HTTP Basic (browser popup), but the same credentials are valid as for your login form. At or near the end you should see a pair of requests something like this:

{
  "timestamp": 1420558194546,
  "info": {
    "method": "GET",
    "path": "/",
    "query": "",
    "remote": true,
    "proxy": "resource",
    "headers": {
      "request": {
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260",
        "x-forwarded-prefix": "/resource",
        "x-forwarded-host": "localhost:8080"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
},
{
  "timestamp": 1420558200232,
  "info": {
    "method": "GET",
    "path": "/resource/",
    "headers": {
      "request": {
        "host": "localhost:8080",
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
},

The second entry there is the request from the client to the gateway on "/resource", and you can see the cookies (added by the browser) and the CSRF header (added by Angular as discussed in Part II). The first entry has remote: true, which means it's tracing the call to the resource server. You can see it went out to a URI path "/" and you can see that (crucially) the cookies and CSRF headers have been sent too.
Without Spring Session these headers would be meaningless to the resource server, but the way we have set it up it can now use those headers to re-constitute a session with authentication and CSRF token data. So the request is permitted and we are in business!

Conclusion

We covered quite a lot in this article, but we got to a really nice place where there is a minimal amount of boilerplate code in our two servers, they are both nicely secure, and the user experience isn't compromised. That alone would be a reason to use the API Gateway pattern, but really we have only scratched the surface of what it might be used for (Netflix uses it for a lot of things). Read up on Spring Cloud to find out more on how to make it easy to add more features to the gateway. The next article in this series will extend the application architecture a bit by extracting the authentication responsibilities to a separate server (the Single Sign On pattern).
February 9, 2015
by Pieter Humphrey DZone Core CORE
· 16,018 Views
article thumbnail
The State of Java
I think living in a beautiful city in a fantastic climate has its advantages. Not just the obvious ones, but we find people unusually keen to come and visit us on the pretence of presenting at the Sevilla Java User Group (and please, DO come and present at our JUG, we love visitors). This week we were really lucky: we had Georges Saab and Aurelio Garcia-Ribeyro giving us an update on where Java is now and where it looks like it's going in the future. I'm just starting to use Java 8 in real life, so this could not have been better timed - I got to ask the guys a bunch of questions about the intentions behind some of the Java 8 features, and the current vision for the future.

2015 Java update and roadmap, JUG Sevilla, from Georges Saab and Aurelio Garcia-Ribeyro

My notes from the session:

  • Lambdas could be just a syntax change, but they could be more than that - they could impact the language, the libraries and the JVM. They could have a positive impact on performance, and this work can continue to go on through small updates to Java that don't impact the syntax.
  • Streams are a pipeline of operations, made possible/easier/more readable by lambdas. The aim is to make operations on collections easier and more readable. In the Old World Order, you had to care about how to perform certain operations. With streams, you don't need to tell the computer exactly how it's done; you can simply say what operations you want performed. This makes life easier for developers.
  • Streams will take all the operations you pass in and perform them in a single pass of the data, so you don't have to write multiple loops to perform multiple operations on the same data structure, or tie your brain in knots figuring out how to do it in one loop. There are also no intermediate data structures when you use streams (see the sketch at the end of this article).
  • The implementation can be optimised under the covers (e.g. not performing the sort operation if the data is already ordered correctly), and the developer doesn't have to worry about it. Java can introduce further optimisations in later releases without changing the API or impacting the code a developer has already written.
  • These new features in Java have a focus on readability, since code is much more often read than written.
  • The operations are easier to parallelise, because the developer is no longer dictating the how - multiple for loops might not be easy to parallelise, but a series of operations can be.
  • Default methods and new support for static methods on interfaces are interesting. I'd forgotten you could put static methods on interfaces and I'm going to sneak them into my latest project.
  • Nashorn is here to replace Rhino. Personally I haven't worked in the sort of environment that would need server-side JavaScript, so this whole area has passed me by somewhat, but it seems it might be interesting for Node.js or creating a REPL in JavaScript that you want to run on the JVM.
  • The additional annotation support in Java 8 will be useful for Java EE. As this is something I'm currently playing with (specifically web sockets) I'm interested in this, but it seems like it will be a while before this filters into the Java EE that's used on a daily basis.
  • Mission Control and Flight Recorder look interesting. Feel like I should play with them.
  • Many people are skipping straight from Java 6 to 8 - the new language features and improved performance are major driving factors.
  • End of public updates of Java 7 in April.
  • Having libraries that, um… encourage… adoption of the latest version of Java makes life a lot easier for those who develop the Java language, as they can concentrate on moving the language forward and not be tied down supporting old versions.
  • Either this is the first time I've heard of JavaDB, or my memory has completely discarded it. I had no idea what it was.
  • Java 9 is well under way; check out the JEPs. (This is the JEP process.)
  • Jigsaw was explained, and actually I could see myself using it for the project I'm working on right now. I had a look to see if I could use it via the OpenJDK, but it looks like the groundwork is there, but not the actual modules themselves.
  • The G1 Garbage Collector is "…the go forward GC"; it's the one that's being actively worked on.
  • This is the first I've heard of Enhanced Volatiles - I'm so behind the times!
  • Access to internal packages is going away in Java 9, so don't use any sun.* packages. Use jdeps to identify any dependencies in your code that need to change.
  • …and, looking further ahead than Java 9, we have value types and Project Valhalla
  • …a REPL for Java
  • …possibly a lightweight JSON API
  • …and VarHandles were also mentioned.

Finally, the guys mentioned a talk by Brian Goetz called "Stewardship: the Sobering Parts", which has gone onto my to-watch list.

Ideas

It became clear throughout the talk that there are plenty of ideas we could explore in later presentations. If you want to see any of the following, add a comment or ping me or IsraKaos on Twitter or Meetup and we'll try to schedule it. Similarly, if you can present on any of these topics and want to come to a beautiful, sunny city with amazing food to do so, drop me a line:

  • The OpenJDK
  • The JCP: the purpose, the processes, the people
  • Adopt a JSR, Adopt OpenJDK
  • New Date/Time (JSR 310)
  • JavaFX
  • Code optimisation vs Data optimisation (I honestly don't know what this means, but I wrote it down in my notes)
  • Java EE

Further Reading

  • I really liked Richard Warburton's Lambdas and Streams book
  • Java 8 in Action has details on other Java 8 features like Date and Time
  • Java 8 for the Really Impatient covers JavaFX too, and highlights some Java 7 features you might like
  • Brian Goetz did a great talk last year, "Lambdas in Java: A peek under the hood". I had to watch it twice before even half of the info sank in, but it's really interesting.
  • Stephen Colebourne, the guy behind Joda-Time and the new Date and Time API, has this talk about the new API.
  • Java 9 OpenJDK page
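To make the streams notes above concrete, here is a minimal sketch of my own (not code from the talk) showing a single-pass stream pipeline replacing a hand-written loop; the data is made up.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsSketch {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Georges", "Aurelio", "Trisha", "Brian");

        // One pass over the data: filter, transform and collect, with no explicit
        // loop and no intermediate collections in our own code.
        List<String> shortNamesUpper = names.stream()
                .filter(name -> name.length() <= 6)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(shortNamesUpper); // prints [TRISHA, BRIAN]
    }
}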
February 9, 2015
by Trisha Gee
· 8,189 Views · 2 Likes
article thumbnail
NetBeans in the Classroom: MySQL JDBC Connection Pool & JDBC Resource for GlassFish
This tutorial assumes that you have installed the Java EE version of NetBeans 8.0.2. It is further assumed that you have replaced the default GlassFish server instance in NetBeans with a new instance with its own folder for the domain. This is required for all platforms. See my previous article "Creating a New Instance of GlassFish in NetBeans IDE".

The other day I presented to my students the steps necessary to make GlassFish responsible for JDBC connections. As I went through the steps I realized that I needed to record them for my students to reference. Here, then, are those steps. Steps 1 through 4 are required; Step 5 is optional.

Step 1a: Manually Adding the MySQL Driver to the GlassFish Domain

Before we even start NetBeans we must do a bit of preliminary work. With the exception of Derby, GlassFish does not include the MySQL driver or any other driver in its distribution. Go to the MySQL Connector/J download site at http://dev.mysql.com/downloads/connector/j/ and download the latest version. I recommend downloading the Platform Independent version. If your OS is Windows, download the ZIP archive; otherwise download the TAR archive. You are looking for the driver file named mysql-connector-java-5.1.34-bin.jar in the archive. Copy the driver file to the lib folder in the directory where you placed your domain. On my system the folder is located at C:\Users\Ken\personal_domain\lib. If GlassFish is already running then you will have to restart it so that it picks up the new library.

Step 1b: Automatically Adding the MySQL Driver to the GlassFish Domain

NetBeans has a feature that deploys the database driver to the domain's lib folder if that driver is in NetBeans' folder of drivers. On my Windows 8.1 system the MySQL driver can be found in C:\Program Files\NetBeans 8.0.2\ide\modules\ext. Start NetBeans, go to the Services tab, expand Servers and right mouse click on your GlassFish Server. Click on Properties and the Servers dialog will appear. On this dialog you will see a check box labelled Enable JDBC Driver Deployment. By default it is checked. NetBeans determines the driver to copy to GlassFish from the file glassfish-resources.xml that we will create in Step 4 of this tutorial. Without this file, and if you have not copied the driver into GlassFish manually, GlassFish will not be able to connect to the database. Any code in your web application will not work and all you will likely see are blank pages.

Step 1a or Step 1b?

I recommend Step 1a: manually add the driver. The reason I prefer this approach is that I can be certain that the most recent driver is in use. As of this writing NetBeans contains version 5.1.23 of the connector but the current version is 5.1.34. If you copy a driver into the lib folder then NetBeans will not replace it with an older driver even if the check box on the Server dialog is checked. NetBeans does not replace a driver if one is already in place. If you need a driver that NetBeans does have a copy of then Step 1b is your only choice.

Step 2: Create a Database Connection in NetBeans

One feature I have always liked in NetBeans is that it has an interface for working with databases. All that is required is that you create a connection to the database. It also has additional features for managing a MySQL server, but we won't need those. If you have not already started your MySQL DBMS then do that now. I assume that the database you wish to connect to already exists. Go to the Services tab, right mouse click on Databases and select New Connection.
In the next dialog you must choose the database driver you wish to use. It defaults to Java DB (Embedded). Pull down the combo box labeled Driver: and select MySQL (Connector/J driver). Click on Next and you will now see the Customize Connection dialog. Here you can enter the details of the connection. On my system the server is localhost and the database name is Aquarium. Here is what my dialog looks like. Notice the Test Connection button. I have clicked on mine and so I have the message Connection Succeeded.

Click on Next. There is nothing to do on this dialog so click on Next. On this last dialog you have the option of assigning a name to the connection. By default it uses the URL, but I prefer a more meaningful name; I have used AquariumMySQL. Click on Finish and the connection will appear under Databases.

If the icon next to AquariumMySQL has what looks like a crack in it, similar to the jdbc:derby connection, then this means that a connection to the database could not be made. Verify that the database is running and is accessible. If it is, then delete the connection and start over.

Having a connection to the database in NetBeans is invaluable. You can interact with the database directly and issue SQL commands. As a MySQL user this means that I do not need to run the MySQL command line program to interact with the database.

Step 3: Create a Web Application Project in NetBeans

If you have not already done so, create a New Project in NetBeans. I require my students to create a New Project in the Maven category of a Web Application project. Click on Next. In this dialog you can give the project a name and a location in your file system. The Artifact Id, Group Id and Version are used by Maven. The final dialog lets you select the application server that your application will use and the version of Java EE that your code must be compliant with. Here is my project ready for the next step.

Step 4: Create the GlassFish JDBC Resource

For GlassFish to manage your database connection you need to set up two resources: a JDBC Connection Pool and a JDBC Resource. You can create both in one step by creating a GlassFish JDBC Resource, because you can create the Connection Pool as part of the same operation. Right mouse click on the project name and select New and then Other… Scroll down the Categories list and select GlassFish. In the File Types list select JDBC Resource. Click on Next.

The next dialog is the General Attributes. Click on the radio button for Create New JDBC Connection Pool. In the text field JNDI Name enter a name that is unique for the project. JNDI names for connection resources always begin with jdbc/ followed by a name that starts with a lower case letter. I have used jdbc/myAquarium. Do not prefix the name with java:app/ as some tutorials suggest; an upcoming article will explain why. Click on Next.

There is nothing for us to enter on the Properties dialog. Click on Next. On the Choose Database Connection dialog we will give our connection pool a name and select the database connection we created in Step 2. Notice that in the list of available connections you are shown the connection URL and not the name you assigned to it back in Step 2. Click on Next.

On the Add Connection Pool Properties dialog you will see the connection URL and the user name and password. We do need to make one change: the resource type shows javax.sql.DataSource and we must change it to javax.sql.ConnectionPoolDataSource. Click on Next.
There is nothing we need to change on Add Connection Pool Optional Properties, so click on Finish. A new folder has appeared in the Projects view named Other Sources. It contains a sub folder named setup, and in this folder is the file glassfish-resources.xml, which contains the resource and pool definitions (a representative sketch of this file appears at the end of this article).

OPTIONAL Step 5: Configure GlassFish with glassfish-resources.xml

The glassfish-resources.xml file, when included in the application's WAR file in the WEB-INF folder, can configure the resource and pool for the application when it is deployed in GlassFish. When the application is un-deployed, the resource and pool are removed. If you want to set up the resource and pool permanently in GlassFish, then follow these steps.

Go to the Services tab, select Servers, and right mouse click on GlassFish. If GlassFish is not running, click on Start. With the server started, click on View Domain Admin Console. Your web browser will now open and show you the GlassFish console. If you assigned a user name and password to the server, you will have to enter this information before you see the console.

In the Common Tasks tree select Resources, then click on Add Resources. In the Location section, click on Choose File and locate your glassfish-resources.xml file. Mine is found at D:\NetBeansProjects\GlassFishTutorial\src\main\setup. Click on OK and you should see a confirmation that the resources were created.

The final task in this step is to test whether the connection works. In the Common Tasks tree select Resources, JDBC, JDBC Connection Pools and aquariumPool. Click on Ping. The most common reason for the Ping to fail is that the database driver is not in the domain's lib folder; go back to Step 1a and manually add the driver. The resources are now visible in NetBeans.

Having the resource and pool added to GlassFish permanently will allow other applications to share this same resource and pool. You are now ready to code!
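For reference, here is a representative glassfish-resources.xml matching the names used in this tutorial (jdbc/myAquarium, aquariumPool). This is a hedged sketch, not the exact file NetBeans generates; the property names and values will reflect your own connection details.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Resource Definitions//EN"
    "http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
    <!-- Pool name and resource JNDI name follow the tutorial; the URL, user
         and password are placeholders for your own connection details. -->
    <jdbc-connection-pool name="aquariumPool"
            res-type="javax.sql.ConnectionPoolDataSource"
            datasource-classname="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
        <property name="URL" value="jdbc:mysql://localhost:3306/Aquarium"/>
        <property name="User" value="..."/>
        <property name="Password" value="..."/>
    </jdbc-connection-pool>
    <jdbc-resource jndi-name="jdbc/myAquarium" pool-name="aquariumPool"/>
</resources>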
February 9, 2015
by Ken Fogel
· 51,954 Views · 3 Likes
article thumbnail
Groovy Goodness: Getting All But the Last Element in a Collection with Init Method
Groovy has long had head and tail methods on Collection objects: with head we get the first element, and with tail the remaining elements of a collection. Since Groovy 2.4 we have a new method, init, which returns all elements but the last in a collection. In the following example we have a simple list and apply the different methods:

def gr8Tech = ['Groovy', 'Grails', 'Spock', 'Gradle', 'Griffon']

// Since Groovy 2.4 we can use the init method.
assert gr8Tech.init() == ['Groovy', 'Grails', 'Spock', 'Gradle']
assert gr8Tech.last() == 'Griffon'
assert gr8Tech.head() == 'Groovy'
assert gr8Tech.tail() == ['Grails', 'Spock', 'Gradle', 'Griffon']

Code written with Groovy 2.4.
February 8, 2015
by Hubert Klein Ikkink
· 7,559 Views