DZone
Zemian Deng

Software Developer at Java Programmer

US

Joined May 2010

https://zemian.github.io

About

My passion is in practical software engineering. See my thoughts at https://zemian.github.io. NOTE: The views expressed on my blog and social networks are my own and do not necessarily reflect the views of my employer.

Stats

Reputation: 55
Pageviews: 2.1M
Articles: 38
Comments: 11

Articles

Python: Simple HTTP Server With CGI Scripts Enabled
This article takes us through how to get Python code running as a CGI script on a web server. Check it out!
July 20, 2016
· 35,351 Views · 4 Likes
How to Import an Oracle Database Dump File
Here is an example of how to import an Oracle database dump file (a binary file exported from an Oracle database using the Oracle Data Pump utility).
August 24, 2015
· 49,628 Views · 1 Like
How to Check Oracle Database Tablespace
When creating a new user (a new schema) in an Oracle database, you need to verify the available tablespace first. This query will show you what's there and how much space is free to use.
August 22, 2015
· 203,273 Views · 3 Likes
How to Store and Manage SQL Statements More Effectively With Java
Struggling with managing SQL statements in your Java code? Let Zemian show you a way out!
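The teaser above doesn't show the approach, so here is a minimal sketch of one common way to keep SQL out of Java string concatenation: store named statements in a properties-style resource and look them up by key. The class name, keys, and inlined resource below are my own illustration, not code from the article (in a real project the SQL would live in a file on the classpath):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class SqlStatements {
    // Illustrative inlined resource; normally loaded from e.g. sql.properties.
    private static final String SQL_RESOURCE =
        "findUser=SELECT id, name FROM users WHERE id = ?\n" +
        "countUsers=SELECT COUNT(*) FROM users\n";

    private final Properties statements = new Properties();

    public SqlStatements() {
        try {
            statements.load(new StringReader(SQL_RESOURCE));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Look up a named SQL statement; fail fast on unknown names.
    public String get(String name) {
        String sql = statements.getProperty(name);
        if (sql == null) throw new IllegalArgumentException("No SQL named " + name);
        return sql;
    }

    public static void main(String[] args) {
        SqlStatements sql = new SqlStatements();
        System.out.println(sql.get("findUser"));
    }
}
```

The properties format keeps each statement on one line and named, so the Java code only ever references keys.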
July 28, 2015
· 18,928 Views
Java RegEx: How to Replace All With Pre-processing on a Captured Group
Need to replace all occurrences of a pattern and substitute a captured group? Something like this in Java works nicely:

String html = "myurl\n" + "myurl2\n" + "myurl3";
html = html.replaceAll("id=(\\w+)'?", "productId=$1'");

Here I swapped the query parameter name from "id" to "productId" on all the links that matched my criteria. But what happens if I need to pre-process the captured ID value before replacing it? Let's say I now want to do a lookup and transform the ID value into something else. This extra requirement leads us to dig deeper into the Java RegEx package. Here is what I came up with:

import java.util.regex.*;
...
public String replaceAndLookupIds(String html) {
    StringBuffer newHtml = new StringBuffer();
    Pattern p = Pattern.compile("id=(\\w+)'?");
    Matcher m = p.matcher(html);
    while (m.find()) {
        String id = m.group(1);
        String newId = lookup(id);
        String rep = "productId=" + newId + "'";
        m.appendReplacement(newHtml, rep);
    }
    m.appendTail(newHtml);
    return newHtml.toString();
}
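To make the appendReplacement/appendTail pattern above concrete, here is a small runnable version. The lookup function and the input string are hypothetical, purely for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReplaceWithLookup {
    // Hypothetical lookup, for illustration only: map an id to a product SKU.
    static String lookup(String id) {
        return "SKU-" + id.toUpperCase();
    }

    static String replaceAndLookupIds(String html) {
        StringBuffer out = new StringBuffer();
        Matcher m = Pattern.compile("id=(\\w+)").matcher(html);
        while (m.find()) {
            // appendReplacement lets us transform the captured group
            // before substituting it back into the result buffer.
            m.appendReplacement(out, "productId=" + lookup(m.group(1)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(replaceAndLookupIds("page?id=abc and page?id=xyz"));
        // → page?productId=SKU-ABC and page?productId=SKU-XYZ
    }
}
```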
June 17, 2015
· 12,838 Views · 1 Like
How to Setup IntelliJ IDEA War Exploded Artifact with Multiple CDI-Dependent Projects
I have a large Java project with many submodules that have simple top-down dependencies like this:

ProjectX
+- ModuleLibA
+- ModuleLibB
+- ModuleCdiLibC
+- ModuleCdiLibC2
+- ModuleLibD
+- ModuleCdiLibE
+- ModuleCdiLibE2
+- ModuleCdiLibE3
+- ModuleLibF
+- ModuleLibG
+- ModuleWebAppX

Each of these modules has its own third-party dependency jars. "Top down" simply means that each module toward the bottom depends on all the submodules above it, along with their third-party dependencies. The project is large, with many files, but the structure is straightforward. It does have a large number of third-party jars, though: in the end, the webapp packages over 100 jars in its WEB-INF/lib folder!

When you create this project structure in IntelliJ IDEA (no, I do not have the luxury of using Maven in this case), all the third-party dependencies are nicely exported and managed from one module to another as I create my project from existing sources and third-party jars. I do not need to re-define any redundant jar library definitions between modules. When it comes to defining ModuleWebAppX at the end, all I have to do is add ModuleLibG as a project dependency, and it brings in all the other "transitive" dependent jars. This is all done by IntelliJ IDEA, which is very nice!

IntelliJ IDEA also lets you set up artifacts from your project to prepare for packaging and deployment, which can then run inside your IDE's servers. By default, any web application will have an option to create a war:exploded artifact definition, and the IDE will automatically copy and update your project's build output into this artifact folder, which can be deployed and redeployed into any EE server in exploded mode. All of this works really smoothly, until one problem hits hard!

The way IntelliJ IDEA packages the default war:exploded artifact is to copy all the .class files generated from each module into a single "out/artifact/ProjectX_war_exploded" output folder. This works most of the time, when our Java packages and classes are unique, but not with resource files that are not unique! My project uses several dependent CDI-based modules. As you might know, each CDI module is supposed to define its own single META-INF/beans.xml to enable CDI and customize its behavior. But because IntelliJ IDEA flattens everything into a single output directory, I've lost the unique beans.xml file for each module! This problem is hard to troubleshoot since it doesn't produce any error at first, nor does it stop the web app from running. It just fails to load certain CDI beans that you customized in beans.xml!

To resolve this, I have to make the artifact's dependent modules generate their own JARs instead of all being copied into a single output, while still having the IDE automatically copy generated build files into the JAR archives when I make a change. Luckily, IntelliJ has this feature. This is how I do it:

1. Open your project settings, then select Artifacts on the left.
2. Choose your war:exploded artifact and look to the right.
3. Under the Output Layout tab, expand WEB-INF/lib, then right-click and choose "Create Archive" > enter your moduleX name.jar.
4. Right-click this newly created moduleX.jar archive, then "Add Copy of" > "Module Output" and select one of your dependent modules.
5. Repeat for each of the CDI-based modules!

I wish there were an easier way to do this across all modules, but at least this manual solution works!
June 2, 2015
· 15,209 Views
How to Create Multiple Workspaces with NetBeans
Examples:

Windows:
netbeans.exe --userdir C:\MyOtherUserdir --cachedir "%HOME%\Locale Settings\Application Data\NetBeans\7.1\cache"

Unix:
./netbeans --userdir ~/my-other-userdir

Mac OS:
/Applications/NetBeans.app/Contents/MacOS/executable --userdir ~/my-other-userdir

Ref: http://wiki.netbeans.org/FaqAlternateUserdir
May 3, 2015
· 7,997 Views
Getting Started with Servlet 3
The web application module is probably the most common type of Java EE application module that a developer will encounter and work on. That's because not only can it provide users with a UI, but it also supports many common web application patterns: Model View Controller, filters, sessions, context listeners, HTTP request, parameter, query, and form handling, HTTP response writing, redirects, errors, etc. You can do all of this with the Servlet spec alone, so getting to know it well is an important part of learning to write good web applications. Servlets have been around for a long time, and many developers are already familiar with them. Many other web frameworks, such as Tapestry or Spring MVC, are built on top of Servlets. These frameworks provide separate programming models that are supposed to ease the development process, but nonetheless the core concept is still based on Servlet technology (or is at least tightly integrated with it, if it is to run in any web container server). In this post, I will highlight how to get a web application module started and how to configure a typical need: a default landing page.

Hello World

Like many things in an EE environment, you write small components as Java classes, deploy them onto a server, and let the server manage their lifecycle and execution. So with a Servlet, you write a simple Java class that implements the Servlet interface, package and deploy it, and the server does its magic. Before Servlet 3.0, your servlet component was configured and mapped in a web.xml file, but now you can just add an annotation directly on your servlet class and the app server should be able to automatically deploy and run it. Here is a classic hello world example:

package zemian.servlet3example.web;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        PrintWriter writer = resp.getWriter();
        writer.println("<html>");
        writer.println("<body>");
        writer.println("Hello World!");
        writer.println("</body>");
        writer.println("</html>");
    }
}

In this servlet, I simply extend the existing HttpServlet base class that is available in any Servlet implementation, and in response to an HTTP GET request, I write a Hello World message out as the HTML response. You may find the above code in my servlet3-example project. Build and deploy it, and then you can access it at http://localhost/servlet3-example/hello. (I have many other servlet examples in the project, but you may just concentrate on this class for now.)

How to configure a default landing page in Servlet 3

A typical application server will likely default the landing page to "index.html" or "index.jsp" if one exists. But if, for example, I have written an IndexServlet class mapped to "/index" instead, then I need to tell the server to default to that. This matters when users type only http://localhost/servlet3-example, with just the context path in the URL. Although with Servlet 3.0 you can do just about everything in Java annotations that you could in the web.xml file, the welcome file is the exception. To configure it, you still need to create the good old web.xml and declare a welcome-file entry pointing at the "/index" servlet mapping.
TIP: Do NOT use a "/" prefix when defining the welcome-file element, or you will get a page-not-found error, and your server likely won't even print an error message in the log! An alternative to overriding the welcome file is simply to add an "index.jsp" file at the root of the webapp folder and have it do a redirect.
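The web.xml example was stripped from this scrape; the welcome-file declaration it showed would look roughly like this (a sketch of a standard Servlet 3.0 descriptor, with the "index" value following the article's "/index" servlet mapping and its no-leading-slash tip):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <welcome-file-list>
    <!-- No leading "/" here, per the article's tip -->
    <welcome-file>index</welcome-file>
  </welcome-file-list>
</web-app>
```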
January 7, 2015
· 20,635 Views
Writing Your Own Logging Service?
Application logging is one of those things like the favorite-editor war: everyone has their own opinions, and there are endless implementations and flavors out there. Nowadays you would likely want to use something already available, such as Log4j or Logback; even the JDK has a built-in "java.util.logging" implementation. To avoid coupling to a specific logger, many projects opt to use a facade interface, and there are already a couple of good ones out there, such as SLF4J or Apache Commons Logging. Despite all this, many project owners still want to write their own logging service! I wondered: if I were to write one myself, what would it look like? So I played around and came up with this simple facade that wraps one logger provider (the JDK logger in this case); you can check it out here. With my logger, you can use it like this in your application:

import zemian.service.logging.*;

class MyService {
    Log log = LogFactory.createLog(MyService.class);

    public void run() {
        log.info(Message.msg("%s service is running now.", this));
    }
}

Some principles I followed when trying this out:

- Use simple names for the different message levels: error, warn, info, debug, and trace (no crazy fine, finer, and finest level names).
- Separate the Log service from its implementation so you can swap providers.
- Use a Message logging POJO as data encapsulation; it simplifies the log service interface.
- Use log parameters and lazy format binding to construct log messages, for performance.
- Do not go crazy with the logging service implementation and make it complex. For example, I recommend NOT mixing business logic or data into your logging if possible! If you need custom error codes to be logged, write your own Exception class and encapsulate them there, then let the logging service do its job: just logging.

Here are some general rules I recommend for using a logger in your application:

- Use ERROR messages when there is really an error! Try not to log an "acceptable" error in your application. Treat an ERROR as a critical problem: if it happens in production, someone should be paged to take care of it immediately. Each message should have a full Java stack trace. Some applications may want to assign a unique error code to messages at this level for easier identification and troubleshooting.
- Use WARN messages for problems that are ignorable during production operation but not a good idea to suppress. These likely point to potential problems in your application or environment. Each message should have a full Java stack trace, if one is available.
- Use INFO messages for admin operators or application monitors to see how your application is doing: high-level application status or important and meaningful business indicators. Do not litter your log with developers' messages or unnecessary, verbose, unclear text. Each message should be written as a clear sentence so operators know it's meaningful.
- Use DEBUG messages for developers to see and troubleshoot the application. Use this at critical application junctions and operations to show object and service states. Try not to add repeated loop messages here that litter your log content.
- Use TRACE messages for developers to troubleshoot tight loops and high-traffic code paths.

You should select a logger provider that lets you configure and turn these logging levels ON or OFF (preferably at runtime as well). Each level should automatically suppress all levels below it. And of course you want a logger provider that can direct log output to STDOUT and/or to a FILE destination as well.
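To make the "separate service from implementation" and "lazy binding" principles concrete, here is a minimal sketch. The interface and class names are my own illustration, NOT the author's actual zemian.service.logging API; it simply defers message construction until the level is enabled, backed by java.util.logging:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

// Thin facade: callers depend only on this interface, so the provider can be swapped.
interface Log {
    void info(Supplier<String> msg);
    void error(Supplier<String> msg);
}

// One possible provider, backed by java.util.logging.
class JdkLog implements Log {
    private final Logger logger;

    JdkLog(Class<?> cls) {
        this.logger = Logger.getLogger(cls.getName());
    }

    // The Supplier overload of Logger.log only invokes the supplier
    // when the level is enabled — lazy message construction.
    public void info(Supplier<String> msg)  { logger.log(Level.INFO, msg); }
    public void error(Supplier<String> msg) { logger.log(Level.SEVERE, msg); }
}
```

Because the facade takes a Supplier, an expensive message is never built when its level is turned off.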
December 19, 2014
· 22,036 Views
How to Setup Custom SSLSocketFactory's TrustManager per Each URL Connection
We can see from the Javadoc that javax.net.ssl.HttpsURLConnection provides a static method, setDefaultSSLSocketFactory(), to override the default socket factory. This allows you to supply a custom javax.net.ssl.TrustManager that can verify your own CA cert handshake and validation, etc. But this overrides the default for all "https" URLs in your JVM! So how can we override just a single https URL? Looking at javax.net.ssl.HttpsURLConnection again, we see the instance method setSSLSocketFactory(), but we can't instantiate an HttpsURLConnection object directly! It took me some digging to realize that java.net.URL is actually a factory class for its implementations! You can get an instance like this: new URL("https://localhost").openConnection(). To complete this article, here is a simple working example that demonstrates this:

package zemian;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class WGetText {
    public static void main(String[] args) throws Exception {
        String urlString = System.getProperty("url", "https://google.com");
        URL url = new URL(urlString);
        URLConnection urlConnection = url.openConnection();
        HttpsURLConnection httpsUrlConnection = (HttpsURLConnection) urlConnection;
        SSLSocketFactory sslSocketFactory = createSslSocketFactory();
        httpsUrlConnection.setSSLSocketFactory(sslSocketFactory);
        try (InputStream inputStream = httpsUrlConnection.getInputStream()) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
            String line = null;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }

    private static SSLSocketFactory createSslSocketFactory() throws Exception {
        // WARNING: this trust manager accepts any certificate; use it for testing only.
        TrustManager[] byPassTrustManagers = new TrustManager[] {
            new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            }
        };
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, byPassTrustManagers, new SecureRandom());
        return sslContext.getSocketFactory();
    }
}
October 31, 2014
· 37,485 Views
Deploying Applications or Libraries to WebLogic Server Using Command Line
Here is how you can automate deployment to WebLogic Server using the command line. First, source the environment settings from the server:

$ source $WL_HOME/server/bin/setWLSEnv.sh

Deploy a library:

$ java weblogic.Deployer -nostage -deploy -library \
    -adminurl localhost:7001 \
    -username weblogic -password my_secret \
    -targets myserver \
    my_shared_lib.war

Deploy an application:

$ java weblogic.Deployer -nostage -deploy \
    -adminurl localhost:7001 \
    -username weblogic -password my_secret \
    -targets myserver \
    -name myapp.war myapp.war

For development, you likely want to use "-nostage", which deploys the app or library directly from the file system. This means any change to that file location, followed by a reload from WLS, takes effect immediately. For undeploy, the command-line options are the same for a library or an app, just with the matching name:

$ java weblogic.Deployer -undeploy \
    -adminurl localhost:7001 \
    -username weblogic -password my_secret \
    -targets myserver \
    -name myapp_or_lib.war
September 3, 2014
· 11,192 Views
WebLogic Shared Library Deployment
When deploying a large WAR file application, it is easier to manage if we separate the dependency jars from the rest of the web content, or at least those third-party jars that do not change often. In this case, we usually call the jar content a "shared library" and the web content the "skinny WAR". With WebLogic Server, you can easily deploy these as two artifacts. Just separate and package your WAR application into two parts: the shared library is simply another WAR with only the WEB-INF/lib content in it, while the skinny WAR is the rest of your application without the jar dependencies. In the shared-library WAR file, ensure you have a META-INF/MANIFEST.MF that specifies the name and version, like the following:

Implementation-Title: my_shared_lib
Implementation-Version: 1.0
Specification-Title: my_shared_lib
Specification-Version: 1.0
Extension-Name: my_shared_lib-1.0

Your skinny WAR then needs a WEB-INF/weblogic.xml extension file that references the library by that name ("my_shared_lib"), the 1.0 specification and implementation versions, and an exact-match flag of true. With these two packaged, turn to your WLS admin console: you will find the "Deployments" menu link on the left, and on the right, click the "Install" button. The next screen will prompt you to choose which type of deployment to install: "Library" (shared-lib WAR) or "Application" (skinny WAR). Run this twice, once for each of the two separate WAR files you just built. WLS will combine the two when running your WAR application. This comes in handy if you deploy multiple instances of your skinny WAR application, since you then need only one shared library. NOTE: Ensure you select at least one Target server, and the same Target servers, when you deploy the Library and the Application; otherwise your application will not be deployed and run.
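The weblogic.xml snippet was flattened in this scrape; based on the values visible in the text ("my_shared_lib 1.0 1.0 true") and the standard WebLogic descriptor layout, the library reference would look roughly like this:

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <library-ref>
    <library-name>my_shared_lib</library-name>
    <specification-version>1.0</specification-version>
    <implementation-version>1.0</implementation-version>
    <exact-match>true</exact-match>
  </library-ref>
</weblogic-web-app>
```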
September 2, 2014
· 20,127 Views · 1 Like
A Simple Cron Wrapper Script With Logging
When working with the crontab service, one thing I often need is to capture the output of the job. Making each job script aware of its own output and logging is tedious, and often makes the script harder to read. So I wrote a shell wrapper that redirects the job script's STDOUT into a log file. This way I can inspect the log after a job has run, and the job script can focus on the task itself.

# file: runcmd.sh
# Helper/wrapper script to run any command in the crontab env. This script will ensure
# the user profile script is loaded and log any command output into log files. It also
# ensures nothing is printed to STDOUT, to avoid crontab system mail alerts.
#
# NOTE: be sure to pass in the absolute path of the command to be run so it can be found.
#
# Usage:
#   ./runcmd.sh find $HOME/crontab/test.sh              # Simple use case
#   LOG_NAME=mytest ./runcmd.sh $HOME/crontab/test.sh   # Change the log name to something specific
#
# Options
DIR=`dirname $0`
CMD="$@"
CMD_NAME=`basename $1`
LOG_NAME=${LOG_NAME:=$CMD_NAME}
LOG="$DIR/logs/$LOG_NAME.log`date +%s`"

# Ensure logs dir exists
if [[ ! -e $DIR/logs ]]; then
    mkdir -p $DIR/logs
fi

# Run cron command
source $HOME/.bash_profile
echo "`date` Started cron cmd=$CMD, logname=$LOG_NAME" >> $LOG 2>&1
$CMD >> $LOG 2>&1
echo "`date` Cron cmd is done." >> $LOG 2>&1

With this wrapper, you can run any shell script and its output will be recorded. For example, the job script below cleans up the logs accumulated in our logs folder. Note that the wrapper also auto-sources ".bash_profile"; this is often needed if your job script expects the env variables you have set up in your login shell scripts.

# file: remove-crontab-logs.sh
DIR=`dirname $0`/logs
echo "Checking and removing logs in $DIR"
find $DIR -type f -mtime +31 -print -delete
echo "Done"

Now in the crontab file, you may run the job script like this:

# Clean up crontab logs
@monthly $HOME/crontab/runcmd.sh $HOME/crontab/remove-crontab-logs.sh
June 9, 2014
· 7,491 Views
Be Careful with Java Path.endsWith(String) Usage
If you need to compare java.nio.file.Path objects, be aware that Path.endsWith(String) will ONLY match whole sub-elements of the path, not the tail of the path name as a string! If you want to match the string portion, you need to call Path.toString() first. For example:

// Match all jar files.
Files.walk(dir).forEach(path -> {
    if (path.toString().endsWith(".jar"))
        System.out.println(path);
});

Without the "toString()" you may spend many fruitless hours wondering why your program doesn't work.
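To make the distinction concrete, here is a small runnable demonstration (the path is illustrative):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class EndsWithDemo {
    public static void main(String[] args) {
        Path path = Paths.get("lib/tools.jar");
        // Path.endsWith matches whole path elements, not string suffixes:
        System.out.println(path.endsWith("tools.jar"));       // true  (matches the last element)
        System.out.println(path.endsWith(".jar"));            // false (".jar" is not an element)
        // To match a file-name suffix, compare the string form instead:
        System.out.println(path.toString().endsWith(".jar")); // true
    }
}
```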
April 19, 2014
· 9,678 Views · 1 Like
How to Setup Remote Debug with WebLogic Server and Eclipse
Here is how I enable remote debugging with WebLogic Server (11g) and the Eclipse IDE. (The Java option itself applies to any JVM; only the instructions here are WLS-specific.)

1. Edit the domain's bin/setDomainEnv.sh file and add this at the top:

JAVA_OPTIONS="$JAVA_OPTIONS -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y"

With suspend=y, your server will start and then wait for you to connect with the IDE before continuing. If you don't want this, set suspend=n instead.

2. Start/restart your WLS with the domain's bin/startWebLogic.sh.

3. Once WLS is running, connect to it from Eclipse. Go to Menu: Run > Debug Configurations... > Remote Java Application and create a new entry. Ensure the port number matches what you used above.

Read more about Java debugging options here: http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#DebuggingOptions
April 12, 2014
· 72,427 Views
How to Use NodeManager to Control WebLogic Servers
In my previous post, you saw how to start a WebLogic admin server and multiple managed servers. One downside with those instructions is that the processes start in the foreground and their STDOUT is printed on the terminal. If you intend to run these servers as background services, you might want to try the WebLogic Node Manager and its wlscontrol.sh tool. I will show you how to get Node Manager started here. The easiest way is still to create the domain directory with the admin server running temporarily, and then create all your servers through the /console application as described in the last post. Once you have these created, you may shut down all those processes and start them with Node Manager.

1. cd $WL_HOME/server/bin && startNodeManager.sh &
2. $WL_HOME/common/bin/wlscontrol.sh -d mydomain -r $HOME/domains/mydomain -c -f startWebLogic.sh -s myserver START
3. $WL_HOME/common/bin/wlscontrol.sh -d mydomain -r $HOME/domains/mydomain -c -f startManagedWebLogic.sh -s appserver1 START

The first step above starts your Node Manager. It is recommended that you run this as a full daemon service, so that even an OS reboot will restart it; but for this demo, you can just run it and send it to the background. Using Node Manager, we then start the admin server in step 2, and the managed server in step 3. Node Manager can not only start WebLogic servers for you; it can also monitor them and automatically restart them if they terminate for any reason. If you want to shut down a server manually, you may use this command through Node Manager as well:

$WL_HOME/common/bin/wlscontrol.sh -d mydomain -s appserver1 KILL

Node Manager can also be used to start servers remotely over SSH on multiple machines. Using this tool effectively can help you manage servers across your network. You may read more details here: http://docs.oracle.com/cd/E23943_01/web.1111/e13740/toc.htm

TIP 1: If there is a problem starting a server, you may want to look into the log files. One is the <domain>/servers/<server>/logs/<server>.out file of the server you are trying to start. Or you can look at the Node Manager log itself at $WL_HOME/common/nodemanager/nodemanager.log

TIP 2: You can add startup JVM arguments to each server started with Node Manager. Create a file under <domain>/servers/<server>/data/nodemanager/startup.properties and add this key-value pair: Arguments=-Dmyapp=/foo/bar

TIP 3: If you want to explore the Windows version of Node Manager, you may want to start it without the native library to save yourself some trouble. Try adding NativeVersionEnabled=false to the $WL_HOME/common/nodemanager/nodemanager.properties file.
March 24, 2014
· 13,732 Views
WebLogic Classloader Analysis Tool
WebLogic Server has a built-in webapp called the Classloader Analysis Tool (CAT), which you may access at http://localhost:7001/wls-cat. You need to log in with the same user you configured for the /console webapp. With CAT, you can check which classes are loaded by your application in the server. This is extremely handy if your app is loading a jar that's already loaded by the server. For example, if you include your own Apache commons-lang.jar in a webapp and deploy it, you will see that org.apache.commons.lang.time.DateUtils is not from your webapp! If you ever get an error saying DateUtils#addDay() doesn't exist or its signature doesn't match, then you are likely using a different version than the one that comes with WLS. In this case, you will need to add a "WEB-INF/weblogic.xml" file that changes the classloading behavior so your webapp's own classes are preferred. Another cool thing you can check with this webapp is the resources packaged inside any jar. For a resource file, you must use a # prefix: for example, try looking up #log4j.properties and you will see where it's loaded from. You may read more about this tool and related material here: http://docs.oracle.com/cd/E24329_01/web.1211/e24368/classloading.htm
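The weblogic.xml fragment was lost in this scrape (only its "true" value survived); the setting it showed is the standard prefer-web-inf-classes switch, roughly:

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>
```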
March 10, 2014
· 28,710 Views · 2 Likes
Generating a War File From a Plain IntelliJ Web Project
Sometimes you just want to create a quick web project in IntelliJ IDEA, using the wizard with a Web or Java EE module as a starter project. But these projects will not have an Ant or Maven script generated automatically, and IDEA's Build will only compile your classes. So if you want a war file generated, try the following:

1) Menu: File > Project Structure > Artifacts
2) Click the green + icon and create a "Web Application: Archive", then OK
3) Menu: Build > Build Artifacts... > Web: war

By default it should be generated under your /out/artifacts/web_war.war. Note that IntelliJ also allows you to set up a "Web Application: Exploded" artifact, which is great for development: it runs and deploys to an application server within your IDE.
March 3, 2014
· 73,703 Views · 2 Likes
Developing Java EE applications with Maven and WebLogic 12c
The WebLogic Server 12c has very nice support for Maven now. The doc for this is kinda hidden though, so here is a direct link http://docs.oracle.com/middleware/1212/core/MAVEN To summarize the doc, Oracle did not provide a public Maven repository manager hosting for their server artifacts. However they do now provide a tool for you to create and populate your own. You can setup either your local repository (if you are working mostly on your own in a single computer), or you may deploy them into your own internal Maven repository manager such as Archiva or Nexus. Here I would show how the local repository is done. First step is use a maven plugin provided by WLS to populate the repository. I am using a MacOSX for this demo and my WLS is installed in $HOME/apps/wls12120. If you are on Windows, you may install it under C:/apps/wls12120. $ cd $HOME/apps/wls12120/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.1.2/ $ mvn install:install-file -DpomFile=oracle-maven-sync.12.1.2.pom -Dfile=oracle-maven-sync.12.1.2.jar $ mvn com.oracle.maven:oracle-maven-sync:push -Doracle-maven-sync.oracleHome=$HOME/apps/wls12120 -Doracle-maven-sync.testingOnly=false The artifacts are placed under your local $HOME/.m2/repository/com/oracle. Now you may use Maven to build Java EE application with these WebLogic artifact as dependencies. Not only these are available, the push also populated some additional maven plugins that helps development more easy. For example, you can generate a template project using their archetype plugin. $ cd $HOME $ mvn archetype:generate \ -DarchetypeGroupId=com.oracle.weblogic.archetype \ -DarchetypeArtifactId=basic-webapp \ -DarchetypeVersion=12.1.2-0-0 \ -DgroupId=org.mycompany \ -DartifactId=my-basic-webapp-project \ -Dversion=1.0-SNAPSHOT Type 'Y' to confirm to finish. Notice that pom.xml it generated; it is using the "javax:javaee-web-api:6.0:provided" dependency. This is working because we setup the repository earlier. 
Now you may build it.

$ cd my-basic-webapp-project
$ mvn package

After this build you should have the war file under the target directory. You may manually copy and deploy it into your WebLogic server domain, or you may configure the Maven pom to do it all with Maven. Here is how I do it. Edit the my-basic-webapp-project/pom.xml file and replace the weblogic-maven-plugin plugin like this:

<plugin>
    <groupId>com.oracle.weblogic</groupId>
    <artifactId>weblogic-maven-plugin</artifactId>
    <version>12.1.2-0-0</version>
    <configuration>
        <middlewareHome>${oracleMiddlewareHome}</middlewareHome>
        <adminurl>${oracleServerUrl}</adminurl>
        <user>${oracleUsername}</user>
        <password>${oraclePassword}</password>
        <source>${project.build.directory}/${project.build.finalName}.${project.packaging}</source>
        <targets>${oracleServerName}</targets>
        <verbose>true</verbose>
        <name>${project.build.finalName}</name>
    </configuration>
</plugin>

With this change, you may deploy the webapp into the WebLogic server (assuming you already started your "mydomain" with the "myserver" server running locally; see my previous blog for instructions).

$ cd my-basic-webapp-project
$ mvn weblogic:deploy -DoracleMiddlewareHome=$HOME/apps/wls12120 -DoracleServerName=myserver -DoracleUsername=admin -DoraclePassword=admin123

After the "BUILD SUCCESS" message, you may visit the http://localhost:7001/basicWebapp URL. Revisit the WLS doc and you will find that they also provide other project templates (Maven calls these archetypes) for building EJB, MDB, or WebService projects. These should help you get your EE projects started quickly.
February 26, 2014
· 19,211 Views · 1 Like
article thumbnail
Getting Started with Intellij IDEA and WebLogic Server
Before starting, you will need the Ultimate edition of IDEA to run WebLogic Server (yes, the paid version, or the 30-day trial); the Community edition of IDEA does not support Application Server deployment. I also assume you have already set up WebLogic Server and a user domain as per my previous blog instructions. So now let's set up the IDE to boost your development.

Create a simple HelloWorld web application in IDEA. For your HelloWorld, go into Project Settings > Artifacts and add a "web:war exploded" entry for your application. You will add this into your app server later.

Ensure you have added the Application Server Views plugin with WebLogic Server. (It's under Settings > IDE Settings > Application Server.)
  Click + and enter
    Name: WebLogic 12.1.2
    WebLogic Home: C:\apps\wls12120

Back in your editor, select Menu: Run > Edit Configuration
  Click + and add "WebLogic Server" > Local
    Name: WLS
    On the Server tab, ensure DomainPath is set: C:\apps\wls12120\mydomain
    On the Deployment tab, select "web:war exploded" for your HelloWorld project.
    Click OK

Now Menu: Run > "Run WLS". Your WebLogic Server should start up and run your web application inside it. You may visit it in the browser at http://localhost:7001/web_war_exploded

Some goodies with IntelliJ IDEA and WLS:
  Redeploy the WAR only, without restarting the server
  Deploy the application in exploded mode and have the IDE auto-make and sync
  Debug the application with the server running within the IDE
  Full control over server settings

NOTE: As noted in the previous blog, if you do not set MW_HOME as a system variable, then you must add it in IDEA's Run Configuration, or edit your "mydomain/bin/startWebLogic.cmd" and "stopWebLogic.cmd" scripts directly.
February 5, 2014
· 45,143 Views
article thumbnail
Exploring Apache Camel Core - Seda Component
The seda component in Apache Camel is very similar to the direct component that I presented in a previous blog, but it hands messages off in an asynchronous manner.
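To see what "asynchronous" buys you, here is a rough plain-JDK sketch of the idea behind seda (not Camel's actual implementation; the class and method names are mine): the producer drops messages on an in-memory queue and returns immediately, while a separate consumer thread processes them.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Plain-Java sketch of the seda idea: an in-memory queue decouples the
// producing side from a consumer running on its own thread.
public class SedaSketch {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // The "from(seda:...)" side: consume messages on a dedicated thread.
    static Thread startConsumer(ConcurrentLinkedQueue<String> processed) {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String body = queue.take();       // blocks until a message arrives
                    if (body.equals("POISON")) break; // simple shutdown signal
                    processed.add("Processed: " + body);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        return consumer;
    }

    public static void main(String[] args) throws Exception {
        ConcurrentLinkedQueue<String> processed = new ConcurrentLinkedQueue<>();
        Thread consumer = startConsumer(processed);
        queue.put("Hello 1");  // the "to(seda:...)" side returns immediately
        queue.put("Hello 2");
        queue.put("POISON");
        consumer.join();
        System.out.println(processed);
    }
}
```

With direct, by contrast, the producer's own thread would run the processing step synchronously before returning.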
September 15, 2013
· 26,391 Views
article thumbnail
Exploring Apache Camel Core - File Component
A file poller is a very useful mechanism for solving common IT problems. Camel's built-in file component is extremely flexible, and there are many options available for configuration. Let's cover a few common usages here.

Polling a Directory for Input Files

Here is a typical Camel Route used to poll a directory for input files every second.

package camelcoredemo;

import org.slf4j.*;
import org.apache.camel.*;
import org.apache.camel.builder.*;
import java.io.*;

public class FileRouteBuilder extends RouteBuilder {
    static Logger LOG = LoggerFactory.getLogger(FileRouteBuilder.class);
    public void configure() {
        from("file://target/input?delay=1000")
            .process(new Processor() {
                public void process(Exchange msg) {
                    File file = msg.getIn().getBody(File.class);
                    LOG.info("Processing file: " + file);
                }
            });
    }
}

Run this with the following:

mvn compile exec:java -Dexec.mainClass=org.apache.camel.main.Main -Dexec.args='-r camelcoredemo.FileRouteBuilder'

The program will begin polling the target/input folder under your current directory and wait for incoming files. To test it, open another terminal and create some input files like this:

echo 'Hello 1' > target/input/test1.txt
echo 'Hello 2' > target/input/test2.txt

You should now see the first window start picking up the files and passing them to the next Processor step. In the Processor, we obtain the File object from the message body and simply log its file name. You may hit CTRL+C when you are done.

There are many configurable options on the file component. You may set them in the URL, but most of the default settings are enough to get you going, as the simple case above shows. Among the default behaviors: if the input folder doesn't exist, it will be created, and when a file is done processing by the Route, it is moved into a .camel folder. If you don't want the file at all after processing, set delete=true in the URL.
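To make the poll/process/move cycle concrete, here is a rough plain-JDK sketch of what the file consumer does for you on each pass (again, the class and method names are mine, not Camel's; Camel adds locking, error handling, and much more on top):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Plain-Java sketch of one poll pass: list a directory, hand each file to a
// processing step, then move it to a "done" folder (like Camel's .camel folder).
public class FilePollerSketch {

    static int pollOnce(Path inputDir, Path doneDir) throws IOException {
        Files.createDirectories(inputDir);
        Files.createDirectories(doneDir);
        int count = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(inputDir)) {
            for (Path file : files) {
                if (!Files.isRegularFile(file)) continue;   // skip the done folder itself
                System.out.println("Processing file: " + file); // the "Processor" step
                Files.move(file, doneDir.resolve(file.getFileName()));
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        Path input = Files.createTempDirectory("input");
        Files.write(input.resolve("test1.txt"), "Hello 1".getBytes());
        int handled = pollOnce(input, input.resolve(".camel"));
        System.out.println("Handled " + handled + " file(s)");
    }
}
```

A real poller would run pollOnce in a loop with a delay, which is exactly what the delay=1000 option controls in the Route above.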
Reading in the File Content and Converting to Different Types

By default, the file component creates an org.apache.camel.component.file.GenericFile object for each file found and passes it down your Route as the message body. You may retrieve all your file information through this object. Alternatively, you may use the Exchange API to auto-convert the message body to a type you expect to receive (e.g., as with msg.getIn().getBody(File.class)). In the example above, File is the type you expect to get from the message body, and Camel will try to convert it for you. Camel uses the context's registry to pre-register many TypeConverters that can handle the conversion of most common data types (like Java primitives). These TypeConverters are a powerful way to make your Route and Processor more flexible and portable.

Camel can not only convert your File object from the message body, it can also read the file content. If your files are character-text based, then you can simply do this:

from("file://target/input?charset=UTF-8")
    .process(new Processor() {
        public void process(Exchange msg) {
            String text = msg.getIn().getBody(String.class);
            LOG.info("Processing text: " + text);
        }
    });

That's it! Simply ask for the String type, and Camel will read your file and pass the entire file text content in as the message body. You may even use the charset option to change the encoding. If you are dealing with a binary file, then try the byte[] bytes = msg.getIn().getBody(byte[].class); conversion instead. Pretty cool huh?

Polling and Processing Large Files

When working with large files, there are a few options in the file component that you might want to use to ensure proper handling. For example, you might want to move the input file into a staging folder before the Route starts processing, and when it's done, move it to a .completed folder.
from("file://target/input?preMove=staging&move=.completed")
    .process(new Processor() {
        public void process(Exchange msg) {
            File file = msg.getIn().getBody(File.class);
            LOG.info("Processing file: " + file);
        }
    });

To feed input files properly into the polling folder, it's best if the sender generates each input file in a temporary folder first, and only moves it into the polling folder when it's ready. This minimizes the chance of the Route reading an incomplete file when the input file takes time to generate. Another solution is to configure the file endpoint to read the polling folder only when there is a signal, i.e. when a ready-marker file exists. For example:

from("file://target/input?preMove=staging&move=.completed&doneFileName=ReadyFile.txt")
    .process(new Processor() {
        public void process(Exchange msg) {
            File file = msg.getIn().getBody(File.class);
            LOG.info("Processing file: " + file);
        }
    });

The code above will only read the target/input folder when a ReadyFile.txt file exists. The marker file can just be an empty file, and it will be removed by Camel after polling. This solution allows the sender to take as long as it needs to generate the input files.

Another concern with large file processing is avoiding loading a file's entire content into memory. More practically, you want to split the file into records (e.g., per line) and process them one by one (this is called "streaming"). Here is how you would do that with Camel:

from("file://target/input?preMove=staging&move=.completed")
    .split(body().tokenize("\n"))
    .streaming()
    .process(new Processor() {
        public void process(Exchange msg) {
            String line = msg.getIn().getBody(String.class);
            LOG.info("Processing line: " + line);
        }
    });

This Route will let you process large files without consuming too much memory, handling them line by line very efficiently.

Writing Messages to Files

The file component can also be used to write messages into files.
Recall that we may use the dataset component to generate sample messages. We will use that to feed the Route and send the messages to the file component, so you can see that each message generated is saved into a file.

package camelcoredemo;

import org.slf4j.*;
import org.apache.camel.*;
import org.apache.camel.builder.*;
import org.apache.camel.main.Main;
import org.apache.camel.component.dataset.*;

public class FileDemoCamel extends Main {
    static Logger LOG = LoggerFactory.getLogger(FileDemoCamel.class);

    public static void main(String[] args) throws Exception {
        FileDemoCamel main = new FileDemoCamel();
        main.enableHangupSupport();
        main.addRouteBuilder(createRouteBuilder());
        main.bind("sampleGenerator", createDataSet());
        main.run(args);
    }

    static RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                from("dataset://sampleGenerator")
                    .to("file://target/output");
            }
        };
    }

    static DataSet createDataSet() {
        return new SimpleDataSet();
    }
}

Compile and run it.

mvn compile exec:java -Dexec.mainClass=camelcoredemo.FileDemoCamel

Upon completion, you will see 10 files generated in the target/output folder with the file name in ID--- format. There are more options available from the File component that you may explore. Try it out with a Route and see for yourself.
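The streaming idea from the large-file section above is worth seeing without Camel in the way. This plain-JDK sketch (class and method names are mine) shows the essence of split/tokenize("\n")/streaming: read one line at a time instead of loading the whole file content into memory, handing each line to the processing step.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Plain-Java sketch of line-by-line streaming: only one record is held in
// memory at a time, no matter how large the input file is.
public class LineStreamSketch {

    static int processLines(Path file) throws IOException {
        int count = 0;
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            String line;
            while ((line = reader.readLine()) != null) {   // one record at a time
                System.out.println("Processing line: " + line);
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("input", ".txt");
        Files.write(file, "line 1\nline 2\nline 3\n".getBytes());
        System.out.println("Total lines: " + processLines(file));
    }
}
```

Camel's .streaming() gives you the same memory profile, plus the Route plumbing around each record.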
September 9, 2013
· 58,958 Views · 2 Likes
article thumbnail
Exploring Apache Camel Core - Timer Component
Camel Timer is a simple yet useful component. It brings the JDK's timer functionality into your Camel Route with very simple config.

from("timer://mytimer?period=1000")
    .process(new Processor() {
        public void process(Exchange msg) {
            LOG.info("Processing {}", msg);
        }
    });

That will generate a timer event message every second. You may shorthand 1000 as 1s instead. It supports m for minutes, or h for hours as well. Pretty handy.

Another useful timer feature is that it can limit (stop) the number of timer messages after a certain count. You simply need to add the repeatCount option to the URL. A couple of properties from the event message are useful when handling the timer message. Here is an example of how to read them.

from("timer://mytimer?period=1s&repeatCount=5")
    .process(new Processor() {
        public void process(Exchange msg) {
            java.util.Date fireTime = msg.getProperty(Exchange.TIMER_FIRED_TIME, java.util.Date.class);
            int eventCount = msg.getProperty(Exchange.TIMER_COUNTER, Integer.class);
            LOG.info("We received the {}th timer event, fired at {}", eventCount, fireTime);
        }
    });

There are more options available from the Timer component that you may explore. Try it out with a Route and see for yourself.
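Since the component builds on the JDK's timer facility, a plain java.util.Timer sketch (class and method names are mine) shows roughly what period and repeatCount map to: fire at a fixed interval, count the events, and cancel after N of them.

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-JDK sketch of timer://...?period=...&repeatCount=...: a fixed-rate
// java.util.Timer that cancels itself after a given number of events.
public class TimerSketch {

    static int fireTimer(long periodMillis, int repeatCount) throws InterruptedException {
        Timer timer = new Timer();
        AtomicInteger count = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(1);
        timer.schedule(new TimerTask() {
            public void run() {
                int n = count.incrementAndGet();
                System.out.println("Timer event #" + n + " fired at " + new java.util.Date());
                if (n >= repeatCount) {    // mimic repeatCount: stop after N events
                    timer.cancel();
                    done.countDown();
                }
            }
        }, 0, periodMillis);
        done.await();
        return count.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Total events: " + fireTimer(100, 5));
    }
}
```

Camel wraps each firing in an Exchange and exposes the count and fire time as the TIMER_COUNTER and TIMER_FIRED_TIME properties shown above.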
September 4, 2013
· 8,386 Views
article thumbnail
How to Create a Web-app with Quartz Scheduler and Logging
I sometimes help out users in the Quartz Scheduler forums. Once in a while someone asks how to set up Quartz inside a web application. This is actually a fairly simple thing to do. The library already comes with a ServletContextListener that you can use to start a Scheduler. I will show you a simple webapp example here.

First create a Maven pom.xml file.

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>quartz-web-demo</groupId>
    <artifactId>quartz-web-demo</artifactId>
    <packaging>war</packaging>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>
</project>

Then you need to create a src/main/webapp/WEB-INF/web.xml file.

<web-app>
    <context-param>
        <param-name>quartz:config-file</param-name>
        <param-value>quartz.properties</param-value>
    </context-param>
    <context-param>
        <param-name>quartz:shutdown-on-unload</param-name>
        <param-value>true</param-value>
    </context-param>
    <context-param>
        <param-name>quartz:wait-on-shutdown</param-name>
        <param-value>true</param-value>
    </context-param>
    <context-param>
        <param-name>quartz:start-on-load</param-name>
        <param-value>true</param-value>
    </context-param>
    <listener>
        <listener-class>org.quartz.ee.servlet.QuartzInitializerListener</listener-class>
    </listener>
</web-app>

And lastly, you need a src/main/resources/quartz.properties config file for the Scheduler.

# Main Quartz configuration
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.instanceName = MyQuartzScheduler
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5

You may configure many other things with Quartz, but the above should get you started with an in-memory scheduler. Now you should be able to compile and run it.

bash> mvn compile
bash> mvn org.apache.tomcat.maven:tomcat7-maven-plugin:2.1:run -Dmaven.tomcat.port=8081

How to Configure Logging for Quartz Scheduler

Another frequently asked question is how to set up logging and see the DEBUG level messages. Quartz uses SLF4J, so you have many logger implementations to choose from. For example, here is how to set up Log4j. First, add this to your pom.xml:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

Then add a src/main/resources/log4j.properties file to show messages on STDOUT.
log4j.rootLogger=INFO, stdout
log4j.logger.org.quartz=DEBUG
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

Restart your web application on the command line, and now you should see all the DEBUG level logging messages coming from the Quartz library.

With everything running, your next question might be how to access the scheduler from your web application. Well, when the scheduler is created by the servlet context listener, it is stored inside the webapp's ServletContext space under the org.quartz.impl.StdSchedulerFactory.KEY key. So you may retrieve it and use it in your own Servlet like this:

public class YourServlet extends HttpServlet {
    public void init(ServletConfig cfg) throws ServletException {
        String key = "org.quartz.impl.StdSchedulerFactory.KEY";
        ServletContext servletContext = cfg.getServletContext();
        StdSchedulerFactory factory = (StdSchedulerFactory) servletContext.getAttribute(key);
        try {
            Scheduler quartzScheduler = factory.getScheduler("MyQuartzScheduler");
            // TODO: use quartzScheduler here.
        } catch (SchedulerException e) {
            throw new ServletException(e);
        }
    }
}

Now you are on your way to building your next scheduling application. Have fun!
August 30, 2013
· 36,806 Views
article thumbnail
How to Configure SLF4J with Different Logger Implementations
There are many good benefits to using the slf4j library as your Java application's logging API layer. Here I will show a few examples of how to use and configure it.
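The core idea is that slf4j is a facade: your code logs against a stable API, and the actual output implementation is chosen separately. This toy sketch (not slf4j's actual code; all names here are mine) illustrates that separation in plain Java.

```java
// Toy illustration of a logging facade: callers use a stable interface
// (MiniLogger), while the implementation behind it can be swapped without
// touching the calling code -- the same idea slf4j applies at scale.
public class FacadeSketch {

    interface MiniLogger {
        void info(String msg);
    }

    // The "binding": pick an implementation behind the facade. Here it writes
    // to a StringBuilder; a real binding might write to the console, a file,
    // or delegate to another logging framework.
    static MiniLogger getLogger(String name, StringBuilder sink) {
        return msg -> sink.append(name).append(" INFO ").append(msg).append('\n');
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        MiniLogger log = getLogger("demo", out);
        log.info("Hello logging facade");
        System.out.print(out);
    }
}
```

With slf4j, swapping Log4j for another backend is just a dependency change, because application code only ever sees the facade.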
August 21, 2013
· 251,608 Views · 11 Likes
article thumbnail
How to Convert Asciidoc Text to HTML using Groovy
Here is how you can convert asciidoc text using a Groovy script:

// filename: RunAsciidoc.groovy
@Grab('org.asciidoctor:asciidoctor-java-integration:0.1.3')
import org.asciidoctor.*

def asciidoctor = Asciidoctor.Factory.create()
def output = asciidoctor.renderFile(new File(args[0]), [:])
println(output)

Now you may run it:

groovy RunAsciidoc.groovy myarticle.txt

Many thanks to the asciidoctor.org project!
August 20, 2013
· 8,793 Views
article thumbnail
Getting Started with Quartz Scheduler on MySQL Database
Here are some simple steps to get you fully started with Quartz Scheduler on a MySQL database using Groovy. The script below will let you quickly experiment with different Quartz configuration settings using an external file.

The first step is to set up the database and tables. I assume you have already installed MySQL and have access to create databases and tables.

bash> mysql -u root -p
sql> create database quartz2;
sql> create user 'quartz2'@'localhost' identified by 'quartz2123';
sql> grant all privileges on quartz2.* to 'quartz2'@'localhost';
sql> exit;
bash> mysql -u root -p quartz2 < /path/to/quartz-dist/docs/dbTables/tables_mysql.sql

The tables_mysql.sql can be found in the Quartz distribution download, or directly from their source here. Once the database is up, you need to write some code to start up the Quartz Scheduler. Here is a simple Groovy script, quartzServer.groovy, that will run as a tiny scheduler server.

// Run Quartz Scheduler as a server
// Author: Zemian Deng, Date: 2012-12-15_16:46:09
@GrabConfig(systemClassLoader=true)
@Grab('mysql:mysql-connector-java:5.1.22')
@Grab('org.slf4j:slf4j-simple:1.7.1')
@Grab('org.quartz-scheduler:quartz:2.1.6')
import org.quartz.*
import org.quartz.impl.*
import org.quartz.jobs.*

config = args.length > 0 ? args[0] : "quartz.properties"
scheduler = new StdSchedulerFactory(config).getScheduler()
scheduler.start()

// Register shutdown
addShutdownHook { scheduler.shutdown() }

// Quartz has its own threads, so now put this script thread to sleep until
// the user hits CTRL+C
while (!scheduler.isShutdown()) { Thread.sleep(Long.MAX_VALUE) }

And now you just need a config file quartz-mysql.properties that looks like this:

# Main Quartz configuration
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.instanceName = DatabaseScheduler
org.quartz.scheduler.instanceId = NON_CLUSTERED
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5

# JobStore: JDBC jobStoreTX
org.quartz.dataSource.quartzDataSource.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDataSource.URL = jdbc:mysql://localhost:3306/quartz2
org.quartz.dataSource.quartzDataSource.user = quartz2
org.quartz.dataSource.quartzDataSource.password = quartz2123
org.quartz.dataSource.quartzDataSource.maxConnections = 8

You can run the Groovy script as usual:

bash> groovy quartzServer.groovy quartz-mysql.properties
Dec 15, 2012 6:20:26 PM com.mchange.v2.log.MLog
INFO: MLog clients using java 1.4+ standard logging.
Dec 15, 2012 6:20:27 PM com.mchange.v2.c3p0.C3P0Registry banner
INFO: Initializing c3p0-0.9.1.1 [built 15-March-2007 01:32:31; debug?
true; trace:10]
[main] INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
[main] INFO org.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
[main] INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.1.6 created.
[main] INFO org.quartz.core.QuartzScheduler - JobFactory set to: org.quartz.simpl.SimpleJobFactory@1a40247
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Using thread monitor-based data access locking (synchronization).
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - JobStoreTX initialized.
[main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.1.6) 'DatabaseScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 5 threads.
  Using job-store 'org.quartz.impl.jdbcjobstore.JobStoreTX' - which supports persistence. and is not clustered.
[main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'DatabaseScheduler' initialized from the specified file : 'quartz-mysql.properties' from the class resource path.
[main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.1.6
Dec 15, 2012 6:20:27 PM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
INFO: Initializing c3p0 pool...
com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 1hge16k8r18mveoq1iqtotg|1486306, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.jdbc.Driver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 1hge16k8r18mveoq1iqtotg|1486306, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:mysql://localhost:3306/quartz2, lastAcquisitionFailureDefaultUser -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 8, maxStatements -> 0, maxStatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Freed 0 triggers from 'acquired' / 'blocked' state.
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Recovering 0 jobs that were in-progress at the time of the last shut-down.
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Recovery complete.
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Removed 0 'complete' triggers.
[main] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - Removed 0 stale fired job entries.
[main] INFO org.quartz.core.QuartzScheduler - Scheduler DatabaseScheduler_$_NON_CLUSTERED started.
...
CTRL+C
[Thread-6] INFO org.quartz.core.QuartzScheduler - Scheduler DatabaseScheduler_$_NON_CLUSTERED shutting down.
[Thread-6] INFO org.quartz.core.QuartzScheduler - Scheduler DatabaseScheduler_$_NON_CLUSTERED paused.
[Thread-6] INFO org.quartz.core.QuartzScheduler - Scheduler DatabaseScheduler_$_NON_CLUSTERED shutdown complete.

That's a full run of the above setup. Go ahead and play with different configs. Read http://quartz-scheduler.org/documentation/quartz-2.1.x/configuration for more details. Below are a couple more easy, commonly used config sets to get you started.

First, a MySQL cluster-enabled configuration. With this, you can open one or more shell terminals and run a separate instance of quartzServer.groovy in each with the same config. All the Quartz scheduler instances should cluster themselves and distribute your jobs evenly.

# Main Quartz configuration
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.instanceName = DatabaseClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5

# JobStore: JDBC jobStoreTX
org.quartz.dataSource.quartzDataSource.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDataSource.URL = jdbc:mysql://localhost:3306/quartz2
org.quartz.dataSource.quartzDataSource.user = quartz2
org.quartz.dataSource.quartzDataSource.password = quartz2123
org.quartz.dataSource.quartzDataSource.maxConnections = 8

Here is another config set for a simple in-memory scheduler.
# Main Quartz configuration
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.instanceName = InMemoryScheduler
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5

Now, if you need a fancier UI for managing Quartz, give MySchedule a try.
December 21, 2012
· 48,904 Views · 1 Like
article thumbnail
Checking DB Connection Using Java
For the sake of completeness, here is a Java version of the Groovy post to test your Oracle Database connection.

package atest;

import java.sql.*;

/**
 * Run arguments sample:
 * jdbc:oracle:thin:@localhost:1521:XE system mypassword123 oracle.jdbc.driver.OracleDriver
 */
public class DbConn {
    public static void main(String[] args) throws Exception {
        String url = args[0];
        String username = args[1];
        String password = args[2];
        String driver = args[3];

        Class.forName(driver);
        Connection conn = DriverManager.getConnection(url, username, password);
        try {
            Statement statement = conn.createStatement();
            ResultSet rs = statement.executeQuery("SELECT SYSDATE FROM DUAL");
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
        } finally {
            conn.close();
        }
    }
}
December 14, 2012
· 60,232 Views
article thumbnail
Checking DB Connection Using Groovy
Here is a simple Groovy script to verify an Oracle database connection using JDBC.

@GrabConfig(systemClassLoader=true)
@Grab('com.oracle:ojdbc6:11g')
import groovy.sql.*

url = "jdbc:oracle:thin:@localhost:1521:XE"
username = "system"
password = "mypassword123"
driver = "oracle.jdbc.driver.OracleDriver"

// Groovy Sql connection test
sql = Sql.newInstance(url, username, password, driver)
try {
    sql.eachRow('select sysdate from dual') { row ->
        println row
    }
} finally {
    sql.close()
}

This script lets you test a connection and perform quick ad hoc queries programmatically. However, when you first run it, it will likely fail to find the Maven dependency for the JDBC driver jar. In that case, you need to first install the Oracle JDBC jar into your local Maven repository, because Oracle has not published their JDBC jar to any public Maven repository; we are left with installing it manually. Here are the one-time setup steps:

1. Download the Oracle JDBC jar from their site: http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html
2. Unzip the file into the C:/ojdbc directory.
3. Install the jar file into the Maven local repository (using Cygwin here):

bash> cd /cygdrive/c/ojdbc
bash> mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11g -Dpackaging=jar -Dfile=ojdbc6-11g.jar

That should make your script run successfully. The Groovy Sql class has many sugarcoated methods that let you quickly query and see data on screen. You can see more Groovy features by studying the API doc. Note that you need systemClassLoader=true to make Groovy load the JDBC jar into the classpath and use it properly.

Oh, BTW, if you are using Oracle DB in production, you are likely using a RAC configuration.
The JDBC URL connection string for that should look something like this:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=MY_DB)))

Update: 12/07/2012

It appears that the groovy.sql.Sql class has a static withInstance method. This lets you run one-time DB work without writing a try/finally block. See this example:

@GrabConfig(systemClassLoader=true)
@Grab('com.oracle:ojdbc6:11g')
import groovy.sql.*

url = "jdbc:oracle:thin:@localhost:1521:XE"
username = "system"
password = "mypassword123"
driver = "oracle.jdbc.driver.OracleDriver"

Sql.withInstance(url, username, password, driver) { sql ->
    sql.eachRow('select sysdate from dual') { row ->
        println row
    }
}

It's much more compact. But be aware of performance if you run it multiple times, because it opens and closes a java.sql.Connection on each call!

I have also collected a couple of other popular database connection test examples. These drivers are already in Maven Central, so Groovy Grab should be able to grab them just fine.

// MySQL database test
@GrabConfig(systemClassLoader=true)
@Grab('mysql:mysql-connector-java:5.1.22')
import groovy.sql.*

Sql.withInstance("jdbc:mysql://localhost:3306/mysql", "root", "mypassword123", "com.mysql.jdbc.Driver") { sql ->
    sql.eachRow('SELECT * FROM USER') { row ->
        println row
    }
}

// H2 database test
@GrabConfig(systemClassLoader=true)
@Grab('com.h2database:h2:1.3.170')
import groovy.sql.*

Sql.withInstance("jdbc:h2:~/test", "sa", "", "org.h2.Driver") { sql ->
    sql.eachRow('SELECT * FROM INFORMATION_SCHEMA.TABLES') { row ->
        println row
    }
}
December 12, 2012
· 28,638 Views
article thumbnail
What Does UTF-8 With BOM Mean?
Believe it or not, there is no such thing as plain text! All files in modern operating systems (Windows, Linux, or Mac OS X) are saved with an encoding scheme! They are encoded (a table mapping of what each byte means) in such a way that other programs can read the bytes back and understand how to get the information out. It happens that US-ASCII is the earliest and most widely used encoding, so people think of it as just "plain text". But even ASCII is an encoding! It uses 7 bits to map all US characters when saving the bytes into a file. You are obviously free to use any kind of encoding (mapping) scheme to save any file, but if you want other programs to read it back easily, then sticking to a standard one helps a lot. Without an agreed-upon encoding, programs would not be able to read files and be of any use! The most useful and practical file encoding today is UTF-8, because it supports Unicode and is widely used on the internet.

I discovered something odd when using Eclipse and Notepad++. In Eclipse, if we set the default encoding to UTF-8, it uses normal UTF-8 without the Byte Order Mark (BOM). Notepad++ appears to support UTF-8 without BOM as well, but it won't recognize it when first opening a file. You can check this by going to Menu > Encoding and seeing which one is selected. Notepad++ seems to only recognize UTF-8 without BOM in files converted by its own conversion utility. Perhaps it's a bug in Notepad++.

So what is a BOM? The byte order mark is useless for UTF-8; it is only needed for UTF-16, so readers know which byte order comes first. But UTF-8 allows you to save a BOM for conversion purposes; it plays no part in encoding the document itself. So a "normal" UTF-8 file won't have a BOM, but Windows likes to use one anyway: the Windows Notepad automatically saves a BOM in UTF-8! So beware when viewing UTF-8 without BOM encoded files in Notepad++, as it can be deceiving at first glance.
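The UTF-8 BOM is concretely just the three bytes EF BB BF at the start of the file. This small self-contained demo (class and method names are mine) writes a file with a BOM and shows how it looks from both the byte and the string side:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Demo of the UTF-8 BOM: three bytes (EF BB BF) prepended to the content.
// Windows Notepad writes them; plain UTF-8 does not need them, and a naive
// reader sees them as a stray U+FEFF character in front of the text.
public class BomDemo {
    static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

    static boolean hasBom(byte[] fileBytes) {
        return fileBytes.length >= 3
                && fileBytes[0] == UTF8_BOM[0]
                && fileBytes[1] == UTF8_BOM[1]
                && fileBytes[2] == UTF8_BOM[2];
    }

    public static void main(String[] args) throws Exception {
        Path withBom = Files.createTempFile("bom", ".txt");
        byte[] text = "Hello".getBytes(StandardCharsets.UTF_8);
        byte[] content = new byte[UTF8_BOM.length + text.length];
        System.arraycopy(UTF8_BOM, 0, content, 0, UTF8_BOM.length);
        System.arraycopy(text, 0, content, UTF8_BOM.length, text.length);
        Files.write(withBom, content);

        byte[] readBack = Files.readAllBytes(withBom);
        System.out.println("Has BOM: " + hasBom(readBack));
        // Decoded naively, the BOM shows up as U+FEFF in front of the text:
        String decoded = new String(readBack, StandardCharsets.UTF_8);
        System.out.println("Starts with U+FEFF: " + decoded.startsWith("\uFEFF"));
    }
}
```

A check like hasBom is also a quick way to tell whether a tool (such as Windows Notepad) has silently added a BOM to a file you thought was BOM-free.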
Ref: http://en.wikipedia.org/wiki/UTF-8 http://www.joelonsoftware.com/articles/Unicode.html
November 12, 2012
· 74,865 Views

Comments

How to Setup Custom SSLSocketFactory's TrustManager per Each URL Connection

Nov 01, 2014 · James Sugrue

@Jonathan, thanks for the HttpClient note!

Extracting Icons from EXE/DLL and Icon Manipulation

Aug 26, 2013 · ggboy boygg

Hello Andrej, thanks for the feedback.

You are right that Camel bean can be any POJO without even implementing Processor interface. I used in my example because I didn't want to bring out the bean component features, but simply wanted users to see how to get started with Camel. I wanted focus on the Main and Context part with minimal Route info. They can explore each components on those doc mentioned.

Also, just to point out that one benefit of using bean with Processor interface vs plain POJO is that you gain performance without have to write explicit TypeConverter!

As for the code format, I don't have control over here on Dzone. Maybe you can read the same article on my original post, since it has different format.

--

Zemian


What's up with the JUnit and Hamcrest Dependencies?

Oct 21, 2012 · James Sugrue

That's great news Marc. Thanks!
Is Self Submission Really that Bad?

Oct 21, 2012 · admin

That's great news Marc. Thanks!
How to Write Better POJO Services

Sep 17, 2012 · James Sugrue

Hi Andrew,

For a simple case, you can handle these service dependencies by extending the ServiceContainer I provided. I stated that they will start in the order you add them, but you can do whatever you want.

For anything more complicated, you might want to explore the Spring container, which handles dependency injection and much more.

Zemian
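The article's actual ServiceContainer is not reproduced in this thread; the sketch below is a hypothetical minimal version (the Service interface and all names are made up) just to illustrate the start-in-order / stop-in-reverse-order idea the comment describes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal service container: services start in the order
// they were added, and stop in reverse order, so that a service never
// outlives something it was started after.
public class ServiceContainer {
    public interface Service {
        void start();
        void stop();
    }

    private final List<Service> services = new ArrayList<>();

    public void addService(Service s) {
        services.add(s);
    }

    public void start() {
        for (Service s : services) {
            s.start();
        }
    }

    public void stop() {
        for (int i = services.size() - 1; i >= 0; i--) {
            services.get(i).stop();
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        ServiceContainer container = new ServiceContainer();
        container.addService(new Service() {
            public void start() { log.add("db:start"); }
            public void stop()  { log.add("db:stop"); }
        });
        container.addService(new Service() {
            public void start() { log.add("web:start"); }
            public void stop()  { log.add("web:stop"); }
        });
        container.start();
        container.stop();
        System.out.println(log); // [db:start, web:start, web:stop, db:stop]
    }
}
```

Handling real dependency graphs (rather than a single linear order) is exactly where a full container like Spring earns its keep, as the comment notes.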
