The Latest Coding Topics

Java Classloader - Handling Multiple Versions of The Same Class
This article deals with different approaches to loading multiple versions of the same class.

The problem: Many times during development you will need to work with different versions of a library that are not always backward compatible, or for some reason you need to support multiple versions of the same library. Standard Java classloaders will not solve this issue, since they all use the "loadClass" method to load a specific class only once; after that, "loadClass" returns the reference to the existing Class instance.

The solution: Use another classloader (or multiple classloaders) to load the library. There are several approaches available to achieve that:

1. Use a URLClassLoader. This class loader allows you to load your jars via URLs, or to specify a directory as the location of your classes. Here is an example:

```java
URLClassLoader clsLoader = URLClassLoader.newInstance(
        new URL[] { new URL("file:/C:/Test/test.jar") });
Class<?> cls = clsLoader.loadClass("test.Main");
Method method = cls.getMethod("main", String[].class);
String[] params = new String[2];
method.invoke(null, (Object) params);
```

2. Write your own custom class loader. Since Java class loaders (including URLClassLoader) first ask their parent class loader to load a class, you can encounter a situation where you need your custom classloader to load classes from your specified location first. In that case, here is a sample of a custom class loader that does exactly that:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

public class CustomClassLoader extends ClassLoader {
    private ChildClassLoader childClassLoader;

    public CustomClassLoader(List<URL> classpath) {
        super(Thread.currentThread().getContextClassLoader());
        URL[] urls = classpath.toArray(new URL[classpath.size()]);
        childClassLoader = new ChildClassLoader(urls, new DetectClass(this.getParent()));
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        try {
            // Try our own URLs first, then fall back to regular parent delegation.
            return childClassLoader.findClass(name);
        } catch (ClassNotFoundException e) {
            return super.loadClass(name, resolve);
        }
    }

    private static class ChildClassLoader extends URLClassLoader {
        private DetectClass realParent;

        public ChildClassLoader(URL[] urls, DetectClass realParent) {
            super(urls, null); // null parent: do not delegate upward by default
            this.realParent = realParent;
        }

        @Override
        public Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                Class<?> loaded = super.findLoadedClass(name);
                if (loaded != null)
                    return loaded;
                return super.findClass(name);
            } catch (ClassNotFoundException e) {
                return realParent.loadClass(name);
            }
        }
    }

    private static class DetectClass extends ClassLoader {
        public DetectClass(ClassLoader parent) {
            super(parent);
        }

        @Override
        public Class<?> findClass(String name) throws ClassNotFoundException {
            return super.findClass(name);
        }
    }
}
```
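To see the effect, here is a minimal, hypothetical sketch (the jar paths and class name are placeholders) that loads the same class through two independent CustomClassLoader instances and ends up with two distinct Class objects, one per library version:

```java
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class MultiVersionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths to two incompatible versions of the same library.
        List<URL> v1 = Arrays.asList(new URL("file:/opt/libs/mylib-1.0.jar"));
        List<URL> v2 = Arrays.asList(new URL("file:/opt/libs/mylib-2.0.jar"));

        Class<?> clsV1 = new CustomClassLoader(v1).loadClass("com.example.Api");
        Class<?> clsV2 = new CustomClassLoader(v2).loadClass("com.example.Api");

        // Same fully qualified name, but different Class instances:
        // each loader defines its own version of the class.
        System.out.println(clsV1 == clsV2); // false
    }
}
```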
February 24, 2014
by Uri Lukach
· 109,200 Views · 8 Likes
Brief comparison of BDD frameworks
JDave, Concordion, Easyb, JBehave, and Cucumber are all compared here briefly for your convenience.
February 24, 2014
by Sebastian Laskawiec
· 129,267 Views · 16 Likes
Android Rotate and Scale Bitmap Example
I built an Android demo app so I could test my understanding of displaying bitmaps on a canvas. I had done scaling of bitmaps, rotation of bitmaps, and translation from one origin to another, but I had not done more than one of those transformations at a time.

The demo app is shown in the figures above. There are two images in the center of the screen, each scaled to fit within the light blue region. When you press the Rotate button, each of the images is rotated around its center while maintaining its position in the center of its region on the screen. The Scale button resizes the images; there are three different sizes, and each time you touch Scale, it switches to the next size. The Offset button cycles you through four different offsets.

In the app's MainActivity, two instances of StarshipView are in the layout. In the onCreate method, each view is assigned a bitmap:

```java
sv.setBitmapFromResource(R.drawable.starship1);
sv.setScale(1.0f);
sv.invalidate();
```

The onClick method in MainActivity gets called whenever a button is clicked. The code in onClick finds the two views in its layout and sets the properties that control the amount of rotation, the size of the bitmap, and the x and y offsets:

```java
sv.setScale(newScale1);
sv.setDegrees(degrees1);
sv.setOffsetX(newOffset1);
sv.setOffsetY(newOffset1);
sv.invalidate();
```

Inside class StarshipView, in the onDraw method, the bitmap assigned to the view is written to the canvas. The code is actually very simple once you get comfortable with using Matrix objects to do the work. Here's what goes on in the onDraw method of class StarshipView.

First, the Matrix object is set so it will fit the bitmap into the rectangle for the view. For this demo app, I chose some interesting sizes to test this part of the code: the starship image is 512 x 512 and is scaled to fit into the 96 dp area on the left, while the star field image on the right is 96 x 96 and is displayed in the 120 dp square on the right. The second step is to translate the view up and left by half the width and half the height. That is done because rotation is around the top left point (the origin) of the view. Rotation follows that step, and it is very simple: matrix.postRotate(rotation).

```java
/**
 * Draw the bitmap onto the canvas.
 *
 * The following transformations are done using a Matrix object:
 * (1) the bitmap is scaled to fit within the view;
 * (2) the bitmap is translated up and left half the width and height,
 *     to support rotation around the center;
 * (3) the bitmap is rotated n degrees;
 * (4) the bitmap is translated to the specified offset values.
 */
@Override
public void onDraw(Canvas canvas) {
    if (pBitmap == null)
        return;

    // Use the same Matrix over and over again to minimize
    // allocation in onDraw.
    Matrix matrix = mMatrix;
    matrix.reset();

    float vw = this.getWidth();
    float vh = this.getHeight();
    float hvw = vw / 2;
    float hvh = vh / 2;
    float bw = (float) pBitmap.getWidth();
    float bh = (float) pBitmap.getHeight();

    // First scale the bitmap to fit into the view.
    // Use either scale factor for width and height,
    // whichever is the smallest.
    float s1x = vw / bw;
    float s1y = vh / bh;
    float s1 = (s1x < s1y) ? s1x : s1y;
    matrix.postScale(s1, s1);

    // Translate the image up and left half the height
    // and width so rotation (below) is around the center.
    matrix.postTranslate(-hvw, -hvh);

    // Rotate the bitmap the specified number of degrees.
    int rotation = getDegrees();
    matrix.postRotate(rotation);

    // If the bitmap is to be scaled, do so.
    // Also figure out the x and y offset values, which start
    // with the values assigned to the view
    // and are adjusted based on the scale.
    float offsetX = getOffsetX(), offsetY = getOffsetY();
    if (pScale != 1.0f) {
        matrix.postScale(pScale, pScale);
        float sx = (0.0f + pScale) * vw / 2;
        float sy = (0.0f + pScale) * vh / 2;
        offsetX += sx;
        offsetY += sy;
    } else {
        offsetX += hvw;
        offsetY += hvh;
    }

    // The last translation moves the bitmap to where its top left point
    // should be following the rotation and scaling.
    matrix.postTranslate(offsetX, offsetY);

    // Finally, draw the bitmap using the matrix as a guide.
    canvas.drawBitmap(pBitmap, matrix, null);
}
```

Once the bitmap is rotated, it needs to have its location translated to the place where it should display in the view. That is specified in the offsetX and offsetY values, so you see one more matrix.postTranslate call in the method. The final action in the onDraw method is the drawing of the bitmap. Notice that the drawBitmap method uses the Matrix with the various transformations encoded in it.

Source code

You can download the source code for this demo from the wglxy.com website. Click here: download zip file from wglxy.com. The zip is attached at the bottom of that page. After you import the project into Eclipse, it's a good idea to use the Project - Clean menu item to rebuild the project. This demo app was compiled with Android 4.4 (API 19). It works in all API levels from API 10 on up.

References

As with many other problems, I found very good advice on Stack Overflow. A Stack Overflow post on rotating images around the center of the image helped me.
February 24, 2014
by Bill Lahti
· 52,308 Views
Common Gotchas in Java
Java is a minimalist language with deliberately fewer features than other languages. Nevertheless, it has edge cases with strange effects, and even some common cases with surprising effects to trip up the unwary. If you are used to reading another language, you can easily read Java the wrong way, leading to confusion.

Variables are only references or primitives

That's right: variables are not objects. This means that when you see the following, s is not an object, it is not a String, it is a reference to a String:

```java
String s = "Hello";
```

This answers many areas of confusion, such as:

Q: If String is immutable, how can I change it, e.g. s += "!";
A: You can't in normal Java; you can only change a reference to a String.

== compares references, not their contents

To add to the confusion, using == sometimes works. If you have two immutable values which are the same, the JVM can try to make the references the same too, e.g.

```java
String s1 = "Hi", s2 = "Hi";
Integer a = 12, b = 12;
```

In both these cases, an object pool is used, so the references end up being the same: s1 == s2 and a == b are both true, as the JVM has made them references to the same object. However, vary the code a little so the JVM doesn't pool the objects, and == returns false, perhaps unexpectedly. In that case you need to use equals:

```java
String s3 = new String(s1);
Integer c = -222, d = -222;
s1 == s2      // is true
s1 == s3      // is false
s1.equals(s3) // is true
a == b        // is true
c == d        // is false (different objects were created)
c.equals(d)   // is true
```

For Integer, the object pool starts at -128 and goes up to at least 127 (possibly higher).

Java passes references by value

All variables are passed by value, even references. This means that when you have a variable which is a reference to an object, the reference is copied, but not the object, e.g.

```java
public static void addAWord(StringBuilder sb) {
    sb.append(" word");
    sb = null;
}

StringBuilder sb = new StringBuilder("first ");
addAWord(sb);
addAWord(sb);
System.out.println(sb); // prints "first word word"
```

The object referenced can be changed, but changes to the copied reference have no effect on the caller.

In most JVMs, Object.hashCode() doesn't have anything to do with memory location

A hashCode() has to remain constant. Without this fact, hash collections like HashSet or ConcurrentHashMap wouldn't work. However, the object can be anywhere in memory and can change location without your program being aware this has happened, so using the location for a hashCode wouldn't work (unless you have a JVM where objects are not moved). For OpenJDK and the HotSpot JVM, the hashCode() is generated on demand and stored in the object's header. Using Unsafe, you can see whether the hashCode() has been set and even change it by overwriting it.

Object.toString() does something surprising rather than useful

The default behaviour of toString() is to print an internal name for the class and the hashCode(). As mentioned, the hashCode is not the memory location, even though it is printed in hexadecimal. Also the class name, especially for arrays, is confusing. For example, a String[] is printed as [Ljava.lang.String; where the [ signifies that it is an array, the L signifies it is a "language" created class (not a primitive like byte, which BTW has a code of B), and the ; signifies the end of the class name. For example, say you have an array like:

```java
String[] words = { "Hello", "World" };
System.out.println(words);
```

This prints something like:

[Ljava.lang.String;@45ee12a7

Unfortunately, you have to know that the class is an object array; e.g. if you have just an Object words, you have a problem, and you have to know to call Arrays.toString(words) instead. This breaks encapsulation in a rather bad way and is a common source of confusion on StackOverflow. I have asked different developers at Oracle about this, and the impression I got is that it's too hard to fix now. :(
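These gotchas condense into one small runnable program; the commented outputs reflect typical HotSpot behaviour (the Integer cache bounds can vary by JVM settings):

```java
import java.util.Arrays;

public class GotchasDemo {
    static void addAWord(StringBuilder sb) {
        sb.append(" word");
        sb = null; // only the local copy of the reference is cleared
    }

    public static void main(String[] args) {
        Integer a = 12, b = 12;       // inside the Integer cache
        Integer c = -222, d = -222;   // outside the cache
        System.out.println(a == b);      // true  (pooled)
        System.out.println(c == d);      // false (distinct objects)
        System.out.println(c.equals(d)); // true

        StringBuilder sb = new StringBuilder("first");
        addAWord(sb);
        System.out.println(sb);          // first word

        String[] words = { "Hello", "World" };
        System.out.println(words);                  // [Ljava.lang.String;@...
        System.out.println(Arrays.toString(words)); // [Hello, World]
    }
}
```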
February 24, 2014
by Peter Lawrey
· 13,149 Views
Running Hadoop MapReduce Application from Eclipse Kepler
It's very important to learn Hadoop by practice. One of the learning curves is how to write your first MapReduce app and debug it in a favorite IDE, Eclipse. Do we need any Eclipse plugins? No, we do not; we can do Hadoop development without MapReduce plugins. This tutorial will show you how to set up Eclipse and run your MapReduce project and MapReduce job right from your IDE. Before you read further, you should have set up a Hadoop single node cluster on your machine. You can download the Eclipse project from GitHub.

Use case: We will explore the weather data to find the maximum temperature, from Tom White's book Hadoop: The Definitive Guide (3rd edition), chapter 2, and run it using ToolRunner.

I am using Linux Mint 15 on a VirtualBox VM instance. In addition, you should have a Hadoop (MRv1; I am using 1.2.1) single node cluster installed and running; if you have not done so, I would strongly recommend you do it from here. Download the Eclipse IDE; as of this writing, the latest version of Eclipse is Kepler.

1. Create a new Java project.

2. Add dependency jars. Right-click on project Properties and select Java Build Path. Add all jars from $HADOOP_HOME/lib and $HADOOP_HOME (where the Hadoop core and tools jars live).

3. Create the Mapper:

```java
package com.letsdobigdata;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper extends
        Mapper<LongWritable, Text, Text, IntWritable> {

    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') {
            // parseInt doesn't like leading plus signs
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}
```

4. Create the Reducer:

```java
package com.letsdobigdata;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}
```

5. Create a driver for the MapReduce job. The MapReduce job is executed by the useful Hadoop utility class ToolRunner:

```java
package com.letsdobigdata;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/* This class is responsible for running the MapReduce job. */
public class MaxTemperatureDriver extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperatureDriver <input path> <output path>");
            System.exit(-1);
        }
        Job job = new Job();
        job.setJarByClass(MaxTemperatureDriver.class);
        job.setJobName("Max Temperature");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(MaxTemperatureMapper.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        MaxTemperatureDriver driver = new MaxTemperatureDriver();
        int exitCode = ToolRunner.run(driver, args);
        System.exit(exitCode);
    }
}
```

6. Supply input and output. We need to supply the input file that will be used during the map phase; the final output will be generated in the output directory by the reduce task. Edit the run configuration and supply the command line arguments. sample.txt resides in the project root. Your project explorer should contain the following.

7. Run the MapReduce job.

8. Final output. If you managed to come this far, once the job is complete it will create an output directory containing _SUCCESS and part-nnnnn files; double-click a part file to view it in the Eclipse editor. We supplied 5 rows of weather data (downloaded from NCDC weather) and wanted to find the maximum temperature in a given year from the input file; the output contains 2 rows with the max temperature (in tenths of a degree centigrade) for each supplied year:

1949 111 (11.1 C)
1950 22 (2.2 C)

Make sure you delete the output directory the next time you run your application, or you will get an error from Hadoop saying the directory already exists. Happy Hadooping!
February 21, 2014
by Hardik Pandya
· 144,263 Views · 2 Likes
Eclipse's BIRT: Scripted Data Set
This article presents the usage of a scripted data set in Eclipse's BIRT.
February 18, 2014
by Kosta Stojanovski
· 38,098 Views · 1 Like
Managing Configurations with Apache Commons Configuration
Using Apache Commons Configuration to configure long-running applications.
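As a flavor of what the library offers, here is a minimal sketch against the Commons Configuration 1.x API (the properties file name and key are placeholders) that picks up on-disk changes automatically, which is what makes it useful for long-running applications:

```java
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class AppConfig {
    public static void main(String[] args) throws ConfigurationException {
        // Load app.properties from the classpath or working directory.
        PropertiesConfiguration config = new PropertiesConfiguration("app.properties");

        // Re-read the file when it changes on disk, so a long-running
        // process picks up edits without a restart.
        config.setReloadingStrategy(new FileChangedReloadingStrategy());

        int poolSize = config.getInt("pool.size", 10); // default of 10
        System.out.println("pool.size = " + poolSize);
    }
}
```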
February 13, 2014
by Faheem Sohail
· 25,813 Views · 2 Likes
The best code coverage for Scala
The best code coverage metric for Scala is statement coverage. Simple as that. It suits the typical programming style in Scala best. Scala is a chameleon and can look like anything you wish, but very often several statements are written on a single line, and conditional "if" statements are used rarely. In other words, line coverage and branch coverage metrics are not helpful.

Java tools

Scala runs on the JVM, and therefore many existing tools for Java can be used for Scala as well. But for code coverage it's a mistake to do so. One wrong option is to use tools that measure coverage by looking at bytecode, like JaCoCo. Even though it gives you a coverage rate number, JaCoCo knows nothing about Scala and therefore doesn't tell you which piece of code you forgot to cover. Another misfortune is tools that natively support line and branch coverage metrics only. Cobertura is a standard in the Java world, and the XML coverage report it generates is supported by many tools, so some Scala code coverage tools decided to use the Cobertura XML report format because of its popularity. Sadly, it doesn't support statement coverage.

Statement coverage

Why? Because a typical Scala statement looks like this (a single line of code):

```scala
def f(l: List[Int]) = l.filter(_ > 0).filter(_ < 42).takeWhile(_ != 3).foreach(println(_))
```

Neither line nor branch coverage works here. When would you consider this single line as covered by a test? If at least one statement on that line has been called? Maybe. Or all of them? Also maybe. Where is a branch? Yes, there are statements that are executed conditionally, but the decision logic is hidden in the internal implementation of List. Branch coverage tools are helpless, because they don't see this kind of conditional execution. What we need to know instead is whether individual statements like _ > 0, _ < 42 or println(_) have been executed by an automated test. This is statement coverage.

Scoverage to the rescue!

Luckily there is a tool named Scoverage. It is a plugin for the Scala compiler, and there is also a plugin for SBT. It does exactly what we need: it generates an HTML report and also its own XML report containing detailed information about covered statements.

Scoverage plugin for SonarQube

Recently I implemented a plugin for Sonar 4 so that statement coverage measurement can become an integral part of your team's continuous integration process and a required quality standard. It allows you to review overall project statement coverage as well as dig deeper into sub-modules, directories and source code files to see uncovered statements.

Project dashboard with Scoverage plugin:
Multi-module project overview:
Columns with statement coverage, total number of statements and number of covered statements:
Source code markup with covered and uncovered lines:
February 12, 2014
by Rado Buranský
· 33,747 Views · 1 Like
How to Build an iOS and Android App in 24 hours with HTML5 and Cordova
What can one create during the New Year and Christmas holidays? As it turned out, quite a lot, even if you have two kids and a bunch of family members whom you want to visit. The only thing you cannot accomplish in time is to finish an article for DZone; that takes a lot of time, nearly the entire January.

By the 5th of January I had a laptop and a couple of days to spend on some development. Having estimated what I could do, I decided to create a mobile app that would work faster than an original. For this, I needed to find communicative creators of a popular app. Hence, I found the "Spender" app in the App Store. It is a simple app for tracking your budget: with it, you can estimate how effectively you spend your money at the end of each month. By the 5th of January, this app was in the top 10 of the Russian App Store. I also found its dev-story on iphones.ru. In the dev-story, the developers wrote that after completing their previous project, they had three-four free days, so they decided to create a new app during this free time. Their product manager and programmers helped them with positioning the app and its key features. This encouraged me, and I began to think about how to create nearly the same app in 2 days.

Note: the original app was updated in the middle of January, and now it looks a little different from my app. Anyway, you can find its screenshots in the dev-story.

I already had experience in mobile app development using C# and Cocoa. Since this was my personal free time, I wanted to use it with maximum effectiveness; even if I didn't succeed, I was eager to learn a new framework or programming language. I worked for DevExpress from 2006 till 2011 and have been reading their announcements since I left the company. So I knew that they had created a mobile JS framework based on Cordova/PhoneGap. They made it after I left, so I was curious to try it. The Gartner research company reports that as of August 2013 most enterprise mobile software was created using PhoneGap or PhoneGap-based products (like Kony). From my consumer experience, that's far from true. Maybe I was wrong?

I'm not so good at HTML and JavaScript. I can create markup with stackoverflow.com, I can write simple selectors with jQuery, and I can find the required information in the documentation. In other words, HTML+JS was a gap in my knowledge, and I was ready to fill it or gain some experience. Thus, I planned to create a cross-platform application that could become an advantage over the original iOS-only Spender app. Moreover, I wanted to spend my time in the most effective way: on the one hand, I had a potentially effective JS framework; on the other, a lack of JS experience. I hoped that the framework's advantages could balance my poor experience. Since I like to use a VCS during development, I'll try to recover my progress from it. You can download the complete apps here: iOS, Android. I'm not sure I can provide public access to my repo, because it contains images I bought from Fotolia and third-party libraries, each with a different license. I'm not a lawyer, so I'd prefer not to take the risk. The most curious of you can take a look into the app bundle itself; the JS wasn't minified.

Place: Tula, Russia. Date: January 5, 2014.

+20 minutes: installed Node.js and the Cordova CLI.

+10 minutes: downloaded a template app from Cordova. Added a template from PhoneJS. Created a git repo and registered it in WebStorm. Added a new record to httpd.conf in order to be able to debug my future app in the browser.

+38 minutes: changed the app namespace to "io.nikitin.thriftbox". Added navigation. PhoneJS is an MVC framework: each app screen is represented as a combination of HTML markup (the view) and a factory function (the view model). Here is how it looks at its simplest:

```javascript
thriftbox.home = function (params) { // request parameters taken from the URI
    return {}; // view model instance
};
```

The view and view model are then bound via Knockout bindings. To finish in time, I created only two screens: expense input and a monthly expense report.

+4 hours 20 minutes: here I got stuck for the first time. I couldn't create the markup for the digit buttons. The original app has a huge keyboard that looks like a calculator or dialer, and I found out that it was not that easy to create such a keyboard, even using a table tag. On the iPhone retina screen, the 1px borders between buttons changed their colors after clicking the buttons, and on my iPhone the difference in colors was very noticeable. I had to figure out how to tackle this. I tried to implement the buttons using divs, but I couldn't achieve a border width of 1px and make all buttons look equal on different screens. Three hours later I gave up on the idea of using divs and moved forward.

+28 minutes: removing the clicked-button indicator on iOS. iOS displays a gray indicator around tapped links and objects with an onclick event handler. Since I had my own indicator for a tapped object (the tapped button became darker), I didn't need the default one. I solved this problem using the dxaction event, an extended variation of the "click" event: its handler supports URI navigation between views and works correctly in a scrollable area.

+14 minutes: the buttonPress event handler now validates numbers from user input:

```javascript
var number = ko.observable(null);
var isValidNumber = ko.computed(function () {
    return number() && parseFloat(number()) > 0;
});
// ...
function buttonPress(button) {
    if (button) {
        if (number())
            number(number() + button);
        else
            number(button);
    } else if (number()) {
        number(number().substr(0, number().length - 1));
    }
}

var viewModel = {
    number: number,
    isValidNumber: isValidNumber,
    viewShowing: viewShowing,
    buttonPress: buttonPress
};
// ...
```

+8 minutes: added fastclick.js, which removes the delay between tapping the screen and the raising of the 'click' event on phones. By default the mobile browser delays the click event to make sure the end user is not performing a double tap; for the end user this looks as if the app is sluggish, since you click buttons much faster than the app responds. fastclick.js handles the touchstart event and then runs all the click event processing logic itself. BTW, adding this library was a mistake; I'll tell you why later.

+4 minutes: added a limit on the length of user input numbers. Corrected the font size for a better look and feel.

+58 minutes: added the choice of an expense category: a scrollable pane with the available categories below the input field (video). It took less time than it could have. In the PhoneJS component collection I found dxTileView, which provides kinetic scrolling with the required appearance out of the box. It's not easy to implement kinetic scrolling by yourself, so getting it for free is great; this scrolling is enabled for iOS only, since Android doesn't have it. It was 7:40 pm, so I decided to continue the next day.

Place: Tula, Russia. Date: January 6, 2014.

+3 hours 9 minutes: storing data in local storage. PhoneJS contains classes for working with data: selection, filtering, sorting, and grouping. There are several approaches to storing data: OData and localStorage. I didn't want to implement a server side for a free app, so I decided to use localStorage. Later I found out that this was not an ideal decision: for example, when updating to iOS 5.1, user data is erased, and other people complained that localStorage is cleared regularly or even when shutting the device down. I didn't want to take the risk, so I used the File API of PhoneGap. The documentation says that this API is based on the W3C File API. In fact, this means that the API differs between Safari for Mac OS, Chrome for Mac OS, Cordova for iOS and Cordova for Android. The File API implementation is different on iOS and Android: e.g. the Android implementation doesn't contain the 'Blob' class or the 'window.PERSISTENT' constant; it does, however, implement the 'LocalFileSystem' class with 'LocalFileSystem.PERSISTENT'. The desktop browser provides an additional API for requesting extra storage space, which mobile browsers don't provide. The available documentation for this API adds more problems: I found several articles searching for "html5 file api", but I couldn't find one that covered all my questions. Finally I created a new class for working with the File API. It supports Cordova 3.3 on iOS and Android, and Chrome 32 on Mac OS and Windows 8. You can find it here: https://github.com/chebum/filestorage-for-phone.js/blob/master/filestorage.js. You can use it as follows:

```javascript
// In this example I create a data/records file in the documents folder of the app
fs.initFileApi(1000000, true)
    .then(function () {
        var records = new fs.FileArrayStore({ key: "id", fileName: "records" });
        return records.insert({ customer: "peter" });
    })
    .then(function () {
        alert("Record saved!");
    });

// Or use the low-level API:
fs.initFileApi(100000, true)
    .then(function () {
        return fs.writeFile("file1", "file content");
    })
    .then(function () {
        alert("File saved!");
    });
```

+33 minutes: saving the added records to the storage. The category list is stored in an ArrayStore to simplify selection operations.

+26 minutes: creating a layout for the app's views. PhoneJS provides several layouts that serve as placeholders for the views. My app's start page didn't fit any of the available layouts, so I chose the EmptyLayout. But it doesn't provide animation effects when navigating between views, so I copied the EmptyLayout code and added an attribute for animation effects.

+1 h 51 min: the template's About screen was redesigned into a report screen, empty at that moment. Created a view model that selects data for the current month. Added localized date formatting for the screen caption.

+59 minutes: added the display of expenses grouped by category for the current month.

+28 minutes: added the selection of the month for which the report should be generated. End users can tap the screen header to select the required month.

+1 h 20 min: added the cordova-plugin-statusbar, which didn't work out of the box. I found that the reference to cordova.js was commented out in the PhoneJS app template; as a result, the native part of my app didn't work.

+39 minutes: on the report screen, the upper part was changed to a dxToolbar.

+22 minutes: I discovered why the dxButton click event handler didn't work. Removing fastclick.js solved that problem but brought back the delay between the tap and the event, so I changed the dxaction event subscription to 'touchstart'.

+25 minutes: formatting the output strings when generating a report. At night I dreamed of crappy buttons on the application's main screen.

Places: Tula, Vnukovo Airport. Date: January 7-8, 2014.

I had an early flight to Budapest from Vnukovo, and because I had no time in the afternoon, I gradually finished the work at the airport at night. As you know, it's not very comfortable to sleep or sit in a café chair for a long time, but it turned out that programming was OK.

+2 h 5 min: in the morning I decided to split the buttons in order to remove the borders between them, taking the iOS dialer keyboard as a sample. I created three keyboards; the button size changes depending on the screen resolution: for 3.5'', 4'' and 5'' phones. Each table cell contained a div with configured alignment. Because of HTML's incomplete support for vertical text alignment, the final CSS style for the buttons ended up quite complex:

```css
.home-view .buttons td div {
    color: #4a5360;
    border: 1px solid #4a5360;
    text-align: center;
    position: absolute;
    left: 50%;
    /* small buttons - default */
    font-size: 26px;
    padding: 13px 0 13px 0;
    width: 52px;
    line-height: 26px;
    border-radius: 26px;
    margin-left: -27px;
    margin-top: -27px;
}
```

+1 h 50 min: I bought several vector icon sets on Fotolia, cut out the required icons and converted them to PNG. It took quite a long time, maybe because it was 1:30 am :)

+1 hour 10 minutes: added a splash screen for the app.

+36 minutes: created three sizes of the app icon. Localized the app name for iOS.

+20 minutes: hiding the splash screen after the app is completely loaded.

+2 hours: fixing multiple bugs.

+2 hours: creating screenshots for the Play Store.

+30 minutes: creating screenshots for the App Store.

+30 minutes: writing an app description for the two app stores.

+1 h 30 min: submitting my app to the App Store. Here I ran into an issue with the app certification.

My accountancy

Let's summarize the time I spent and divide it into categories.

Development: 21 hours 37 minutes
Graphics and texts: 8 hours 26 minutes
Total: 30 hours 3 minutes

As a result, I got a working minimum-feature app, though it is not as cool as the latest version of Spender: I couldn't implement splitting expenses by day or income input, and my app's UI could be more elegant as well. Analyzing the original Spender developers' work: they say they involved four developers for three-four days, which is about 96-128 man-hours. I spent only 30 man-hours and got an app for three mobile platforms. The iOS and Android versions are already in the stores; the version for Windows Phone 8 requires a UI redesign. I can be proud of myself :). You can download the complete apps here: iOS, Android.
February 12, 2014
by Ivan Nikitin
· 209,933 Views
To ServiceMix or Not to ServiceMix
This morning an interesting topic was posted to the Apache ServiceMix user forum, asking the question: to ServiceMix or not to ServiceMix. In my mind the short answer is: NO.

Guillaume Nodet, one of the key architects and a long-time committer on Apache ServiceMix, already had his mind set 3 years ago when he wrote the blog post "Thoughts about ServiceMix". What happened on the ServiceMix project was that the ServiceMix kernel was pulled out of ServiceMix into its own project, Apache Karaf. That happened in spring 2009, which Guillaume also blogged about.

So is all that bad? No, it's IMHO all great. In fact, having the kernel as a separate project, and Camel and CXF as the integration and WS/RS frameworks, would allow the ServiceMix team to focus on building the ESB that truly had value-add. But that did not happen. ServiceMix did not create a cross-product security model, web console, audit and trace tooling, clustering, governance, service registry, and much more that people were looking for in an ESB (or related to a SOA suite). There were only small pieces of it, never really baked well into the project.

That said, it's not too late. I think the ServiceMix project is dying, but if a lot of people in the community step up, contribute and work on these things, then it can bring value to some users. But I seriously doubt this will happen.

PS: 6 years ago I was working as a consultant and looked at the next integration platform for a major Danish organization. We looked at ServiceMix back then and dismissed it due to its JBI nature, and the new OSGi-based architecture was only just started. Frankly, it has taken a long, long time to mature Apache Karaf / Felix / Aries and the other pieces in OSGi to what they are today: a stable and sound platform for users to build their integration applications on. That was not the case 4-6 years ago.

Okay, no to ServiceMix. What are my options then?

So what should you use instead of ServiceMix? Well, in my mind you have at least these two options.

1) Use Apache Karaf and add the pieces you need, such as Camel, CXF and ActiveMQ, and build your own ESB. These individual projects have regular releases, and you can upgrade as you need. The ServiceMix project only adds the JBI components on top of this, and you should NOT use those; only legacy users that got on the old ServiceMix 3.x wagon may need them in a graceful upgrade from JBI to Karaf-based containers.

2) Take a look at fabric8. IMHO fabric8 is all the value-add the ServiceMix project did not create, and a lot more. James Strachan just blogged today about some of his thoughts on fabric8, JBoss Fuse, and Karaf; I encourage you to take a read. For example, he talks about how fabric becomes a poly container, so you have a much wider choice of which containers/JVMs to run your integration applications on. OSGi is no longer a requirement. (IMHO that is very, very exciting and potentially a game changer.) I encourage you to check out the fabric8 web site, read the overview and motivation sections of the documentation, and then check out some of the videos. After the upcoming JBoss Fuse 6.1 release, the Fuse team at Red Hat will have more time and focus to bring the documentation at fabric8 up to date, covering all the functionality we have (there is a lot more), as well as to bring out a 1.0 community release using pure community bits. This gives end users a 100% free-to-use, out-of-the-box release, and users looking for a commercial release can then use JBoss Fuse. Best of both worlds.
Summary

Okay, back to the question: to ServiceMix or not? Then NO. Innovation happens outside ServiceMix, and also more and more outside Apache. If you have thoughts, you can share them in comments to this blog, or better yet, get involved in the discussion at the ServiceMix user forum.

PPS: The thoughts on this blog are mine alone, and are not any official words from my employer.
February 12, 2014
by Claus Ibsen
· 16,692 Views
SPNego Authentication with JBoss
Background

SPNego (RFC 4178) is used for negotiating either NTLM- or Kerberos-based SSO. A typical use case is for web applications to reuse the authentication performed by desktops such as Windows or Linux. In this article, we will explore approaches for SPNego authentication with JBoss Enterprise Application Platform. JBoss Negotiation is the library that provides SPNego authentication support in JBoss; it has been integrated into JBoss EAP and the WildFly Application Server.

Checklist

1. Obtain JBoss EAP from jboss.org.
2. Enable your JavaEE web application for SPNego authentication.
3. Configure JBoss EAP for SPNego.
4. Configure your browsers for SPNego.
5. Start JBoss EAP.
6. Test your web application.

Obtain JBoss EAP from jboss.org

Download JBoss EAP 6.2 or newer from http://www.jboss.org/products/eap. You can also use the WildFly Application Server from http://www.wildfly.org; your configuration may vary slightly.

Enable your JavaEE Web Application for SPNego Authentication

It is easier to use a demo web application as a starting point; you can then adapt your own web application for SPNego authentication. The demo web application used in this article is called spnego-demo, by my colleague Josef Cacek, and is available at https://github.com/kwart/spnego-demo. A fully configured spnego-demo.war can also be downloaded. Copy spnego-demo.war into your jboss-eap-6.2/standalone/deployments directory.

Configure JBoss EAP for SPNego Authentication

You will need to configure a couple of security domains and system properties in JBoss EAP 6. There are two ways to do this: manual editing or the CLI tool.

Manual editing: edit the configuration file standalone.xml in jboss-eap-6.2/standalone/configuration. Add the system properties to this file, remembering to put that block right after the extensions block (around line 25 of the configuration file). Add the security domains to this file, remembering to put those blocks inside the security-domains block.

Using the Command Line Interface to update JBoss EAP: go to the bin directory of JBoss EAP 6.2 and run the following.
```
$ cat << EOT > $SPNEGO_TEST_DIR/cli-commands.txt
/subsystem=security/security-domain=host:add(cache-type=default)
/subsystem=security/security-domain=host/authentication=classic:add(login-modules=[{"code"=>"Kerberos", "flag"=>"required", "module-options"=>[("debug"=>"true"),("storeKey"=>"true"),("refreshKrb5Config"=>"true"),("useKeyTab"=>"true"),("doNotPrompt"=>"true"),("keyTab"=>"$SPNEGO_TEST_DIR/http.keytab"),("principal"=>"HTTP/localhost@JBOSS.ORG")]}]) {allow-resource-service-restart=true}
/subsystem=security/security-domain=SPNEGO:add(cache-type=default)
/subsystem=security/security-domain=SPNEGO/authentication=classic:add(login-modules=[{"code"=>"SPNEGO", "flag"=>"required", "module-options"=>[("serverSecurityDomain"=>"host")]}]) {allow-resource-service-restart=true}
/subsystem=security/security-domain=SPNEGO/mapping=classic:add(mapping-modules=[{"code"=>"SimpleRoles", "type"=>"role", "module-options"=>[("jduke@JBOSS.ORG"=>"Admin"),("hnelson@JBOSS.ORG"=>"User")]}]) {allow-resource-service-restart=true}
/system-property=java.security.krb5.conf:add(value="$SPNEGO_TEST_DIR/krb5.conf")
/system-property=java.security.krb5.debug:add(value=true)
/system-property=jboss.security.disable.secdomain.option:add(value=true)
:reload()
EOT
$ ./jboss-cli.sh -c --file=$SPNEGO_TEST_DIR/cli-commands.txt
```

This is explained in https://github.com/kwart/spnego-demo/blob/master/README.md

We will need a keytab file. In this example, we will use the Kerberos server based on ApacheDS (as explained in Appendix A):

```
$ java -classpath kerberos-using-apacheds.jar \
    org.jboss.test.kerberos.CreateKeytab HTTP/localhost@JBOSS.ORG httppwd http.keytab
```

Note that http.keytab has been configured in the security domain called "host" in standalone.xml, so place the keytab file appropriately and correct the path defined in the security domain accordingly. More information is available at https://github.com/kwart/kerberos-using-apacheds/blob/master/README.md

Keytab files contain Kerberos principals and encrypted keys, and different tools (such as ktutil) exist to create them. It is important to safeguard keytab files, and it is very important that the keytab path configured in the JBoss EAP security domain "host" refers to the actual location of the keytab file.

Configure your Browsers for SPNego

Browsers such as Microsoft IE, Mozilla Firefox, Google Chrome, and Apple Safari have different settings for enabling SPNego or integrated authentication.

Start JBoss EAP

Go to the bin directory of JBoss EAP 6.2 and use either standalone.sh (Unix/Linux) or standalone.bat (Windows) to start your instance.

Test your Web Application

Assuming you have followed the Appendix A steps to start the Kerberos server and have done kinit, you are ready to test the web application. In this article we have used spnego-demo, which we can test by going to http://localhost:8080/spnego-demo/. Click on the "User Page" link and you should see the principal name "hnelson@jboss.org".

Appendix A: Local Kerberos Server

Download the zip file https://github.com/kwart/kerberos-using-apacheds/archive/master.zip and unzip it into a directory. Build the package using Maven:

```
$ mvn clean package
```

Start the Kerberos server:

```
$ java -jar target/kerberos-using-apacheds.jar test.ldif
```

A krb5.conf file has been created.
Log in now as hnelson:

```
$ kinit hnelson@JBOSS.ORG
Password for hnelson@JBOSS.ORG: secret
```

Launch Firefox via the command line from where the kinit was run. On Mac OS X:

```
$ open -a firefox http://localhost:8080/spnego-demo/
```

Appendix B: Kerberos Command Line Utilities

klist can be used to see the current Kerberos tickets:

```
$ klist
Credentials cache: API:501:10
        Principal: hnelson@JBOSS.ORG

  Issued                Expires               Principal
Feb  9 21:19:30 2014    Feb 10 07:19:27 2014  krbtgt/JBOSS.ORG@JBOSS.ORG
```

kdestroy can be used to clear the current Kerberos tickets.

References

SPNego demo web application: https://github.com/kwart/spnego-demo
Kerberos server using ApacheDS: https://github.com/kwart/kerberos-using-apacheds
JBoss EAP 6: http://www.jboss.org/products/eap
PicketLink open source project: http://www.picketlink.org
Troubleshooting: https://docs.jboss.org/author/display/PLINK/SPNego+Support+Questions

Remember that krb5.conf is important for client-side Kerberos interactions. On Unix/Linux/Mac systems you can use an environment variable called KRB5_CONFIG to point to your krb5.conf.

Acknowledgements

Darran Lofthouse for the wonderful JBoss Negotiation project, and Josef Cacek for the spnego-demo and kerberos-using-apacheds projects.
February 12, 2014
by Anil Saldanha
· 19,150 Views
Java 8 Type Annotations
Lambda expressions are by far the most discussed and promoted feature of Java 8. While I agree that Lambdas are a large improvement, I think some other Java 8 features fall a bit short because of the Lambda hype. In this post I want to show a number of examples of another nice Java 8 feature: type annotations.

Type annotations are annotations that can be placed anywhere you use a type. This includes the new operator, type casts, implements clauses and throws clauses. Type annotations allow improved analysis of Java code and can ensure even stronger type checking. In source code this means we get two new ElementTypes for annotations:

```java
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface Test {
}
```

The enum value TYPE_PARAMETER allows an annotation to be applied to type variables (e.g. MyClass<@Test T>). Annotations with target TYPE_USE can be applied to any type use.

Please note that the annotations from the following examples will not work out of the box when Java 8 is released. Java 8 only provides the ability to define these types of annotations; it is then up to framework and tool developers to actually make use of them. So this is a collection of annotations frameworks could give us in the future. Most of the examples are taken from the Type Annotations specification and various Java 8 presentations.

Simple type definitions with type annotations look like this:

```java
@NotNull String str1 = ...
@Email String str2 = ...
@NotNull @NotBlank String str3 = ...
```

Type annotations can also be applied to nested types:

```java
Map.@NonNull Entry entry = ...
```

Constructors with type annotations:

```java
new @Interned MyObject()
new @NonEmpty @Readonly List<String>(myNonEmptyStringSet)
```

They work with nested (non-static) class constructors too:

```java
myObject.new @Readonly NestedClass()
```

Type casts:

```java
myString = (@NonNull String) myObject;
query = (@Untainted String) str;
```

Inheritance:

```java
class UnmodifiableList<T> implements @Readonly List<@Readonly T> { ... }
```

We can use type annotations with generic type arguments:

```java
List<@Email String> emails = ...
List<@ReadOnly @Localized Message> messages = ...
Graph<@Directional Node> directedGraph = ...
```

Of course we can nest them:

```java
Map<@NonNull String, @NonEmpty List<@Readonly Document>> documents;
```

Or apply them to intersection types:

```java
public <E extends Comparable<E> & @Localized MessageSource> void foo(...) { ... }
```

Including parameter bounds and wildcard bounds:

```java
class Folder<F extends @Existing File> { ... }
Collection<? super @Existing File> c = ...
List<@Immutable ? extends Comparable<T>> unchangeable = ...
```

Generic method invocation with type annotations looks like this:

```java
myObject.<@NotBlank String>myMethod(...);
```

For generic constructors, the annotation follows the explicit type arguments:

```java
new <String> @Interned MyObject()
```

Throwing exceptions:

```java
void monitorTemperature() throws @Critical TemperatureException { ... }
void authenticate() throws @Fatal @Logged AccessDeniedException { ... }
```

Type annotations in instanceof statements:

```java
boolean isNonNull = myString instanceof @NonNull String;
boolean isNonBlankEmail = myString instanceof @NotBlank @Email String;
```

And finally, Java 8 method and constructor references:

```java
@Vernal Date::getDay
List<@English String>::size
Arrays::<@NonNegative Integer>sort
```

Conclusion

Type annotations are an interesting addition to the Java type system. They can be applied to any use of a type and enable more detailed code analysis. If you want to use type annotations right now, you should have a look at the Checker Framework.
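Since the examples above are fragments, here is a small self-contained program (the annotation and class names are invented for the demo) that actually compiles on Java 8 and applies a TYPE_USE annotation to a generic type argument; without a checker, the annotation enforces nothing by itself:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.List;

// A do-nothing TYPE_USE annotation; a tool such as the Checker Framework
// would be needed to give it real semantics.
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@interface NonEmpty { }

public class TypeAnnotationDemo {
    public static void main(String[] args) {
        // Legal on Java 8+: the annotation sits on the type argument.
        List<@NonEmpty String> names = new ArrayList<>();
        names.add("hello");
        System.out.println(names);
    }
}
```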
February 11, 2014
by Michael Scharhag
· 82,918 Views · 4 Likes
Getting Started with HTML5 WebSocket on Oracle WebLogic 12c
The current release of Oracle WebLogic (12.1.2) added support for HTML5 WebSocket (RFC 6455), providing bi-directional communication over a single TCP connection between clients and servers. Unlike the HTTP communication model, client and server can send and receive data independently of each other. WebSocket is becoming very popular for highly interactive web applications that depend on time-critical data delivery, true bi-directional data flow and higher throughput.

To initiate the WebSocket connection, the client sends a handshake request to the server. The connection is established if the handshake request passes validation and the server accepts the request. Once a WebSocket connection is established, a browser client can send data to a WebLogic Server instance while simultaneously receiving data from that server instance.

The WebLogic server implementation has the following components. First, the WebSocket protocol implementation, which handles connection upgrades and establishes and manages connections as well as exchanges with the client; WebLogic fully implements the WebSocket protocol using its existing threading and networking infrastructure. Second, the WebLogic WebSocket Java API, in the weblogic.websocket package, which allows you to create WebSocket-based server-side applications that handle client connections and WebSocket messages, provide context information for a particular WebSocket connection, and manage the request/response handshake. For additional information about the WebLogic WebSocket Java API interfaces and classes, visit: http://docs.oracle.com/middleware/1212/wls/WLPRG/websockets.htm#BABJEFFD

To declare a WebSocket endpoint, you use the @WebSocket annotation, which marks a class as a WebSocket listener that is ready to be exposed and to handle events. The annotated class must either implement the WebSocketListener interface or extend the WebSocketAdapter class. You deploy a WebSocket-based application on WebLogic following the same approach as for standard Java EE web application archives (WARs), either standalone or as a WAR module. Either way, WebLogic detects the @WebSocket annotation on the class and automatically establishes it as a WebSocket endpoint.

Here is a simple application that creates a WebSocket endpoint at the /echo/ location path, receives messages from the client and sends the same message back:

```java
import java.io.IOException;

import weblogic.websocket.WebSocketAdapter;
import weblogic.websocket.WebSocketConnection;
import weblogic.websocket.annotation.WebSocket;

@WebSocket(pathPatterns = {"/echo/*"})
public class Echo extends WebSocketAdapter {

    @Override
    public void onMessage(WebSocketConnection connection, String msg) {
        try {
            // Send the received message straight back to the client.
            connection.send(msg);
        } catch (IOException ioe) {
            // Handle error condition
        }
    }
}
```

On the client side, you typically use the WebSocket JavaScript API that most web browsers already support. There are many samples out there that you can use to test the implementation above; one quick way is to navigate to http://www.websocket.org/echo.html and point the location field to your WebLogic server. If you follow the sample available at https://github.com/mjabali/HelloWorld_WebSocket, you'll end up with ws://localhost:7001/HelloWorld_WebSocket/echo/ for the WebSocket server application. The sample client provided will be available at http://localhost:7001/HelloWorld_WebSocket/index.html after the WebLogic deployment.
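To exercise the endpoint from Java rather than a browser, here is a minimal client sketch using the standard JSR 356 API (note this is not the weblogic.websocket server API used above; it assumes a JSR 356 client implementation on the classpath and the sample deployment URL):

```java
import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class EchoClient {

    @OnMessage
    public void onMessage(String msg) {
        System.out.println("Echoed back: " + msg);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        Session session = container.connectToServer(EchoClient.class,
                URI.create("ws://localhost:7001/HelloWorld_WebSocket/echo/"));
        session.getBasicRemote().sendText("Hello WebLogic!");
        Thread.sleep(1000); // give the echo a moment to arrive before closing
        session.close();
    }
}
```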
See the README.md file on GitHub for additional instructions. Have fun!
February 10, 2014
by Marcelo Jabali
· 11,746 Views
Secret Key Import in Java Keystore by Key Replacement Method
If you are a programmer and have to deal with cryptography issues, you've surely heard about keywords such as encryption, decryption and key management. The last key word, key management, is defined as a group of operations such as generating, exchanging, storing and protecting security artifacts (i.e. keys and certificates). Security artifacts are essential parts of any cryptography operations. Without effective management of such valuable resources, the system can be easily compromised by attackers. Java supports key management by introducing two utilities; Java Keys Store or JKS as short and Java Keytool Utility. Java Key Store is a handy and safe storage to store keys and certificates. Java key store API describes methods and properties of Java keystore class which makes it possible to work with keystore file programmatically. To manage keys and certificates, Java provides a second utility named Java Keytool Utility. Keytool utility is included and delivered with JDK (Java Development Kit) distributions. The Keytool manual introduces and describes various commands and options that are available and provides by Java Keytool utility. Key management is feasible by services that are offered by both Java keystore and Java Keytool utility together. The Key management that Java provides is covering most of programming scenarios. Unfortunately there is only one limitation. Java Keytool utility as the main key management unit does not support any means to import custom created keys to Java keystore. It only supports key generation which results in auto generated keys. This is a major shortcoming in situations where there must be key exchange between application peers. In such situations key specifications are specific to the security models which are agreed between developers. Sometimes the keys are byte streams which are not accompanied with any certificates. These streams are defined as cryptography artifacts and must be protected and saved by Java keystore. One solution to this problem is to use third party utilities such as Openssl. Openssl utility offers a mechanism which is a hack to the unavailability of key import in Java Keytool utility. The trick is to save keys in PKCS12 format using the Openssl utility, and treating the created artifact as a new keystore. Fortunately Java Keytool utility supports key store merge option. The created keystore by Openssl utility then can be merged into any Java keystore by Java Keytool utility. Unfortunately I could not succeed in following this solution. One reason could be that my key had customized specifications such as size and value, plus there was no certificate available to accompany it as well. It seemed that there was no other way to overcome such limitation. I found a solution, pretty easy and quick that helped me achieving the result I wanted without being dependent on any third party tools. One advantage of this method is the use of current options that are offered by both Keytool and Java Keystore utilities. Let’s name this method “Key Replacement”. Firstly a new key must be created, for example, a secret key. The secret key will be auto generated by the Keytool utility and will be saved under a known ALIAS inside a new key store or in an existing one. Open your command prompt and issue the following command: keytool -genseckey -alias mykey -keyalg DES -keysize 56 -storetype jceks -keystore Make sure you have set your Java runtime environment correctly. 
A description and details of the above command and options can be found in the Keytool manual. After issuing the command, you will be asked to provide a password for the keystore. If the keystore already exists, provide its password; otherwise, enter a password to be set for the newly created keystore. If the operation was successful, you can list the keystore entries by issuing the following command:

keytool -list -v -storetype jceks -keystore <keystore-file>

The result of the list command will be a list of the keystore's entries. In our case, the record we seek looks something like:

Keystore type: JCEKS
Keystore provider: SunJCE

Your keystore contains 1 entry

Alias name: mykey
Creation date: Sep 30, 2013
Entry type: SecretKeyEntry

As you can see, the newly created key is represented by the alias we have set. This key was auto-generated by the Keytool utility. We are one step closer to what we need: we have created a key entry with the alias we want. The final step is to replace the key entry's value with our customized value. The remaining steps consist of locating the target key entry inside the keystore by its alias and changing its value programmatically. The following simple Java program will do the job:

import java.security.Key;
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

KeyStore ks = KeyStore.getInstance("JCEKS");
char[] password = "PASSWORD TO KEYSTORE".toCharArray();
java.io.FileInputStream fis = null;
try {
    // Load the keystore prior to any operation
    fis = new java.io.FileInputStream("PATH TO KEYSTORE");
    ks.load(fis, password);
} finally {
    if (fis != null) {
        fis.close();
    }
}
// Build a secret key with the desired specs (custom value and custom length)
SecretKey mySecretKey = new SecretKeySpec(Util.hex2byte("5A5A5A5A5A5A5A5A"), 0,
        Util.hex2byte("5A5A5A5A5A5A5A5A").length, "DES");
KeyStore.SecretKeyEntry skEntry = new KeyStore.SecretKeyEntry(mySecretKey);
// Replace the value of the target key entry, identified by its alias
ks.setEntry("mykey", skEntry, new KeyStore.PasswordProtection(password));
java.io.FileOutputStream fos = null;
try {
    fos = new java.io.FileOutputStream("PATH TO KEYSTORE");
    ks.store(fos, password);
} finally {
    if (fos != null) {
        fos.close();
    }
}

This Java program will:
· Open the keystore.
· Load the keystore prior to any operation.
· Build a secret key with the desired specs (custom value and custom length).
· Replace the value of the target key by using the setEntry() method of the keystore object, providing its alias and the new key value.
· Finally close and save the keystore object.

To double-check the modification, use the following code to locate and display the modified key value by loading the keystore object again:

java.io.FileInputStream fis = null;
try {
    fis = new java.io.FileInputStream("PATH TO KEYSTORE");
    ks.load(fis, password);
} finally {
    if (fis != null) {
        fis.close();
    }
}
Key key = ks.getKey("mykey", password);
System.out.println("-----BEGIN PRIVATE KEY-----");
// sun.misc.BASE64Encoder; on modern JDKs, java.util.Base64 does the same job
System.out.println(new BASE64Encoder().encode(key.getEncoded()));
System.out.println("-----END PRIVATE KEY-----");

The steps are straightforward:
· Load the keystore object.
· Load the target key by using the getKey() method and specifying its alias.
· Fetch the key value and output it in PEM format (Base64 encoding).

Voila! That's our key. This article has shown some simple steps that can be used to import a custom-created secret key into a Java keystore. I hope it will be helpful in cases where tools such as the OpenSSL utility are of no use.

Sam
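One note on the listings above: Util.hex2byte() is not a JDK method, and its source is not shown in the article. A minimal sketch of what such a helper presumably does (decode a hex string into raw bytes) could look like this:

public class Util {
    // Hypothetical helper: decode a hex string such as "5A5A5A5A5A5A5A5A" into its raw bytes.
    public static byte[] hex2byte(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return bytes;
    }
}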
February 10, 2014
by Sam Sepassi
· 36,078 Views · 1 Like
article thumbnail
Build Your Own Custom Lucene Query and Scorer
Every now and then we’ll come across a search problem that can’t simply be solved with plain Solr relevancy. This usually means a customer knows exactly how documents should be scored. They may have little tolerance for close approximations of this scoring through Solr boosts, function queries, etc. They want a Lucene-based technology for text analysis and performant data structures, but they need to be extremely specific in how documents should be scored relative to each other. Well, for those extremely specialized cases we can prescribe a little out-patient surgery to your Solr install: building your own Lucene Query.

This is the Nuclear Option

Before we dive in, a word of caution. Unless you just want the educational experience, building a custom Lucene Query should be the “nuclear option” for search relevancy. It’s very fiddly and there are many ins-and-outs. If you’re actually considering this to solve a real problem, you’ve already gone down the following paths:

1. You’ve utilized Solr’s extensive set of query parsers and features, including function queries, joins, etc. None of this solved your problem.
2. You’ve exhausted the ecosystem of plugins that extend the capabilities in (1). That didn’t work.
3. You’ve implemented your own query parser plugin that takes user input and generates existing Lucene queries to do this work. This still didn’t solve your problem.
4. You’ve thought carefully about your analyzers – massaging your data so that at index time and query time, text lines up exactly as it should to optimize the behavior of existing search scoring. This still didn’t get what you wanted.
5. You’ve implemented your own custom Similarity that modifies how Lucene calculates the traditional relevancy statistics – query norms, term frequency, etc.
6. You’ve tried to use Lucene’s CustomScoreQuery to wrap an existing Query and alter each document’s score via a callback. This still wasn’t low-level enough for you; you needed even more control.

If you’re still reading, you either think this is going to be fun/educational (good for you!) or you’re one of the minority that must control exactly what happens with search. If you don’t know, you can of course contact us for professional services. OK, back to the action…

Refresher – Lucene Searching 101

Recall that to search in Lucene, we need to get hold of an IndexSearcher. This IndexSearcher performs search over an IndexReader. Assuming we’ve created an index, with these classes we can perform searches like in this code:

Directory dir = new RAMDirectory();
IndexReader idxReader = DirectoryReader.open(dir); // IndexReader is abstract; open one via DirectoryReader
IndexSearcher idxSearcher = new IndexSearcher(idxReader);
Query q = new TermQuery(new Term("field", "value"));
idxSearcher.search(q, 10);

Let’s summarize the objects we’ve created:

Directory – Lucene’s interface to a file system. This is pretty straightforward. We won’t be diving in here.
IndexReader – Access to the data structures in Lucene’s inverted index. If we want to look up a term and visit every document it exists in, this is where we’d start. If we wanted to play with term vectors, offsets, or anything else stored in the index, we’d look here for that stuff as well.
IndexSearcher – Wraps an IndexReader for the purpose of taking search queries and executing them.
Query – How we expect the searcher to perform the search, encompassing both scoring and which documents are returned. In this case, we’re searching for “value” in field “field”.
This is the bit we want to toy with.

In addition to these classes, we’ll mention a support class that exists behind the scenes:

Similarity – Defines rules/formulas for calculating norms at index time and query normalization.

Now with this outline, let’s think about a custom Lucene Query we can implement to help us learn. How about a query that searches for terms backwards? If the document matches a term backwards (like ananab for banana), we’ll return a score of 5.0. If the document matches the forwards version, we’ll still return the document, with a score of 1.0 instead. We’ll call this Query “BackwardsTermQuery”. This example is hosted here on github.

A Tale of 3 Classes – A Query, a Weight, and a Scorer

Before we sling code, let’s talk about general architecture. A Lucene Query follows this general structure:

· A custom Query class, inheriting from Query
· A custom Weight class, inheriting from Weight
· A custom Scorer class, inheriting from Scorer

These three objects wrap each other: a Query creates a Weight, and a Weight in turn creates a Scorer. A Query is itself a very straightforward class. One of its main responsibilities when passed to the IndexSearcher is to create a Weight instance. Other than that, there are additional responsibilities to Lucene and to users of your Query to consider, which we’ll discuss in the “Query” section below.

A Query creates a Weight. Why? Lucene needs a way to track IndexSearcher-level statistics specific to each query while retaining the ability to reuse the query across multiple IndexSearchers. This is the role of the Weight class. When performing a search, the IndexSearcher asks the Query to create a Weight instance. This instance becomes the container for holding high-level statistics for the Query scoped to this IndexSearcher (we’ll go over these steps more in the “Weight” section below). The IndexSearcher safely owns the Weight, and can abuse and dispose of it as needed. If the Query later gets reused by another IndexSearcher, a new Weight simply gets created.

Once an IndexSearcher has a Weight, and has calculated any IndexSearcher-level statistics, its next task is to find matching documents and score them. To do this, the Weight in turn creates a Scorer. Just as the Weight is tied closely to an IndexSearcher, a Scorer is tied to an individual IndexReader. Now this may seem a little odd – in our code above the IndexSearcher always has exactly one IndexReader, right? Not quite. See, a little hidden implementation detail is that IndexReaders may actually wrap other, smaller IndexReaders – each tied to a different segment of the index. Therefore, an IndexSearcher needs the ability to score documents across multiple, independent IndexReaders. How your scorer should iterate over matches and score documents is outlined in the “Scorer” section below.

So to summarize, we can expand the last line from our example above…

idxSearcher.search(q, 10);

… into this pseudocode:

Weight w = q.createWeight(idxSearcher);
// IndexSearcher-level calculations for weight
foreach IndexReader idxReader:
    Scorer s = w.scorer(idxReader);
    // collect matches and score them

Now that we have the basic flow down, let’s pick apart the three classes in a little more detail for our custom implementation.

Our Custom Query

What should our custom Query implementation look like? Query implementations always have two audiences: (1) Lucene and (2) users of your Query implementation.
For your users, expose whatever methods you require to modify how a searcher matches and scores with your query. Want to return as matches only 1/3 of the documents that match the query? Want to punish the score because the document length is longer than the query length? Add the appropriate modifier on the query that impacts the scorer’s behavior. For our BackwardsTermQuery, we don’t expose accessors to modify the behavior of the search. The user simply uses the constructor to specify the term and field to search. In our constructor, we reuse Lucene’s existing TermQuery for searching individual terms in a document:

private TermQuery backwardsQuery;
private TermQuery forwardsQuery;

public BackwardsTermQuery(String field, String term) {
    // A wrapped TermQuery for the reversed string
    Term backwardsTerm = new Term(field, new StringBuilder(term).reverse().toString());
    backwardsQuery = new TermQuery(backwardsTerm);
    // A wrapped TermQuery for the forward term
    Term forwardsTerm = new Term(field, term);
    forwardsQuery = new TermQuery(forwardsTerm);
}

Just as importantly, be sure your Query meets the expectations of Lucene. Most importantly, you MUST override the following:

createWeight()
hashCode()
equals()

The method createWeight() we’ve discussed. This is where you’ll create a Weight instance for an IndexSearcher. Pass any parameters that will influence the scoring algorithm, as the Weight will in turn be creating a Scorer. Even though they are not abstract methods, overriding hashCode()/equals() is very important. These methods are used by Lucene/Solr to cache queries/results. If two queries are equal, there’s no reason to rerun the query. Running another instance of your query could result in seeing the results of your first query multiple times: you’ll see your search for “peas” work great, then you’ll search for “bananas” and see the “peas” search results. Override equals() and hashCode() so that “peas” != “bananas”.

Our BackwardsTermQuery implements createWeight() by creating a custom BackwardsWeight that we’ll cover below:

@Override
public Weight createWeight(IndexSearcher searcher) throws IOException {
    return new BackwardsWeight(searcher);
}

BackwardsTermQuery has a fairly boilerplate equals() and hashCode() that pass through to the wrapped TermQuerys. Be sure equals() includes all the boilerplate such as the check for self-comparison, the use of the super equals operator, the class comparison, etc. By using Lucene’s unit test suite, we can get a lot of good checks that our implementation of these is correct.

@Override
public boolean equals(Object other) {
    if (this == other) {
        return true;
    }
    if (!super.equals(other)) {
        return false;
    }
    if (getClass() != other.getClass()) {
        return false;
    }
    BackwardsTermQuery otherQ = (BackwardsTermQuery) other;
    if (otherQ.getBoost() != getBoost()) {
        return false;
    }
    return otherQ.backwardsQuery.equals(backwardsQuery)
        && otherQ.forwardsQuery.equals(forwardsQuery);
}

@Override
public int hashCode() {
    return super.hashCode() + backwardsQuery.hashCode() + forwardsQuery.hashCode();
}

Our Custom Weight

You may choose to use Weight simply as a mechanism to create Scorers (where the real meat of search scoring lives).
However, your custom Weight class must at least provide boilerplate implementations of the query normalization methods, even if you largely ignore what is passed in:

getValueForNormalization()
normalize()

These methods participate in a little ritual that the IndexSearcher puts your Weight through, together with the Similarity, for query normalization. To summarize the query normalization code in the IndexSearcher:

float v = weight.getValueForNormalization();
float norm = getSimilarity().queryNorm(v);
weight.normalize(norm, 1.0f);

Great, what does this code do? A value is extracted from the Weight. This value is then passed to a Similarity instance that “normalizes” it. The Weight then receives this normalized value back. In short, this allows the IndexSearcher to give the Weight some information about how its “value for normalization” compares to the rest of the stuff being searched by this searcher. At this very high level, “value for normalization” could mean anything, but here it generally means “what I think is my weight”, and what the Weight receives back is the searcher saying “no, really, here is your weight”. The details of what that means depend on the Similarity and Weight implementations. It’s expected that the Weight’s generated Scorer will use this normalized weight in scoring. You can choose to do whatever you want in your own Scorer, including completely ignoring what’s passed to normalize().

While our Weight isn’t factoring into the scoring calculation, for consistency’s sake we’ll participate in the little ritual by overriding these methods:

@Override
public float getValueForNormalization() throws IOException {
    return backwardsWeight.getValueForNormalization() + forwardsWeight.getValueForNormalization();
}

@Override
public void normalize(float norm, float topLevelBoost) {
    backwardsWeight.normalize(norm, topLevelBoost);
    forwardsWeight.normalize(norm, topLevelBoost);
}

Outside of these query normalization details, and implementing scorer(), little else happens in the Weight. However, you may perform whatever else requires an IndexSearcher in the Weight constructor. In our implementation, we don’t perform any additional steps with the IndexSearcher. The final and most important requirement of a Weight is to create a Scorer. For BackwardsWeight we construct our custom BackwardsScorer, passing in scorers created from each of the wrapped queries:

@Override
public Scorer scorer(AtomicReaderContext context, boolean scoreDocsInOrder,
                     boolean topScorer, Bits acceptDocs) throws IOException {
    Scorer backwardsScorer = backwardsWeight.scorer(context, scoreDocsInOrder, topScorer, acceptDocs);
    Scorer forwardsScorer = forwardsWeight.scorer(context, scoreDocsInOrder, topScorer, acceptDocs);
    return new BackwardsScorer(this, context, backwardsScorer, forwardsScorer);
}

Our Custom Scorer

The Scorer is the real meat of the search work. Responsible for identifying matches and providing scores for those matches, this is where the lion’s share of our customization will occur. It’s important to note that a Scorer is also a Lucene DocIdSetIterator. A DocIdSetIterator is a cursor into a set of documents in the index. It provides three important methods:

docID() – What is the ID of the current document? (This is an internal Lucene ID, not the Solr “id” field you might have in your index.)
nextDoc() – Advance to the next document.
advance(target) – Advance (seek) to the target.

One uses a DocIdSetIterator by first calling nextDoc() or advance() and then reading docID() to get the iterator’s current location.
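As a hedged aside (this snippet is not from the original post), the basic iteration idiom looks like this:

// Sketch: walk every match of a scorer; nextDoc() returns NO_MORE_DOCS when exhausted
int docId;
while ((docId = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
    float score = scorer.score(); // score the document we just landed on
    System.out.println(docId + " => " + score);
}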
The value of the docIDs only increases as they are iterated over. By implementing this interface, a Scorer acts as an iterator over matches in the index. A Scorer for the query “field1:cat” can be iterated over in this manner to return all the documents that match the cat query. In fact, if you recall from my article, this is exactly how the terms are stored in the search index. You can choose either to figure out how to correctly iterate through the documents in a search index yourself, or to use the other Lucene queries as building blocks. The latter is often the simplest. For example, if you wish to iterate over the set of documents containing two terms, simply use the scorer corresponding to a BooleanQuery for iteration purposes.

The first method of our scorer to look at is docID(). It works by reporting the lower docID() of our two underlying scorers. That scorer can be thought of as being “before” the other in the index, and as we want to report numerically increasing docIDs, we always want to choose this value:

@Override
public int docID() {
    int backwordsDocId = backwardsScorer.docID();
    int forwardsDocId = forwardsScorer.docID();
    if (backwordsDocId <= forwardsDocId && backwordsDocId != NO_MORE_DOCS) {
        currScore = BACKWARDS_SCORE;
        return backwordsDocId;
    } else if (forwardsDocId != NO_MORE_DOCS) {
        currScore = FORWARDS_SCORE;
        return forwardsDocId;
    }
    return NO_MORE_DOCS;
}

Similarly, we always want to advance the scorer with the lowest docID, moving it ahead. Then we report our current position by returning docID(), which as we’ve just seen will report the docID of the scorer that advanced the least in the nextDoc() operation:

@Override
public int nextDoc() throws IOException {
    int currDocId = docID();
    // increment one or both
    if (currDocId == backwardsScorer.docID()) {
        backwardsScorer.nextDoc();
    }
    if (currDocId == forwardsScorer.docID()) {
        forwardsScorer.nextDoc();
    }
    return docID();
}

In our advance() implementation, we allow each scorer to advance. An advance() implementation promises to land docID() either exactly on or past the target. Our call to docID() after we advance will return either that one or both scorers are on target, or the lowest docID past target:

@Override
public int advance(int target) throws IOException {
    backwardsScorer.advance(target);
    forwardsScorer.advance(target);
    return docID();
}

What a Scorer adds on top of DocIdSetIterator is the score() method. When score() is called, a score for the current document (the doc at docID) is expected to be returned. Using the full capabilities of the IndexReader, any amount of information stored in the index can be consulted to arrive at a score, either in score() or while iterating documents in nextDoc()/advance(). Given the docID, you’ll be able to access the term vector for that document (if available) to perform more sophisticated calculations. In our query, we simply keep track of whether the current docID comes from the wrapped backwards term scorer, indicating a match on the backwards term, or from the forwards scorer, indicating a match on the normal, unreversed term. Recall that docID() is always called on advance()/nextDoc(), and you’ll notice we update currScore in docID(), updating it every time the document advances.

@Override
public float score() throws IOException {
    return currScore;
}

A Note on Unit Testing

Now that we have an implementation of a search query, we’ll want to test it! I highly recommend using Lucene’s test framework.
Lucene will randomly inject different implementations of various support classes and index implementations to throw your code off balance. Additionally, Lucene creates test implementations of classes such as IndexReader that check whether your Query correctly fulfills its contract. In my work, I’ve had numerous cases where tests would fail intermittently, pointing to places where my use of Lucene’s data structures subtly violated the expected contract. An example unit test is included in the github project associated with this blog post.

Wrapping Up

That’s a lot of stuff! And I didn’t even cover everything there is to know! As an exercise for the reader, you can explore the Scorer methods cost() and freq(), as well as the rewrite() method of Query, used optionally for optimization. Additionally, I haven’t explored how most of the traditional search queries end up using a framework of scorers/weights that don’t actually inherit from Scorer or Weight, known as “SimScorer” and “SimWeight”. These support classes consult a Similarity instance to customize the calculation of certain search statistics such as tf, convert a payload to a boost, etc.

In short, there’s a lot here! So tread carefully: there are plenty of fiddly bits out there! But have fun! Creating a custom Lucene query is a great way to really understand how search works, and it is the last resort in solving relevancy problems short of creating your own search engine. And if you have relevancy issues, contact us! If you don’t know whether you do, our search relevancy product, Quepid, might be able to tell you!
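As a final, hedged illustration (the field name and directory are assumptions, not from the post), using the finished BackwardsTermQuery looks like any other Lucene query:

// Hypothetical usage: "ananab" matches score 5.0, plain "banana" matches score 1.0
IndexReader reader = DirectoryReader.open(dir);
IndexSearcher searcher = new IndexSearcher(reader);
TopDocs hits = searcher.search(new BackwardsTermQuery("body", "banana"), 10);
for (ScoreDoc sd : hits.scoreDocs) {
    System.out.println("doc=" + sd.doc + " score=" + sd.score);
}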
February 10, 2014
by Doug Turnbull
· 14,147 Views
article thumbnail
Blast from the Past - 'The XML Diff and Patch GUI Tool'
I needed to diff some OPML files today and came across this project. Even though it's 10 years old, it still mostly worked, and the best part is it's a source distrib... :)

The XML Diff and Patch GUI Tool
Amol Kher
Microsoft Corporation
July 2004

Applies to: the XML Diff and Patch GUI tool

Summary: This article shows how to use the XmlDiff class to compare two XML files and show these differences as an HTML document in a .NET Framework 1.1 application. The article also shows how to build a WinForms application for comparing XML files.

Contents
Introduction
An Overview of the XML Diff and Patch API
XML Diff and Patch Meets WinForms
Working with XML DiffGrams
Other Features of the XML Diff and Patch Tool

Introduction

There is no good command-line tool that can be used to compare two XML files and view the differences. There is an online tool called XML Diff and Patch that's available on the GotDotNet website under the XML Tools section. For those who have not seen it, you can find it at Microsoft XML Diff and Patch 1.0 [GD: yes, this link is busted... you can download it below]. It is a very convenient tool for those who want to compare the difference between two XML files. Comparing XML files is different from comparing regular text files because one wants to compare logical differences in the XML nodes, not just differences in text. For example, one may want to compare XML documents and ignore whitespace between elements, comments, or processing instructions. The XML Diff and Patch tool allows one to perform such comparisons, but it is primarily available as an online web application; we cannot take this tool and use it from the command line. This article focuses on developing a command-line tool by reusing code from the XML Diff and Patch installation and samples. The tool works much like the WinDiff utility; it presents the differences in a separate window and highlights them.

The XML Diff and Patch tool contains a library with an XmlDiff class, which can be used to compare two XML documents. The Compare method on this class takes two files and either returns true, if the files are equal, or generates an output file called an XML diffgram containing a list of differences between the files. The XmlDiff class can be supplied an options class, XmlDiffOptions, that can be used to set the various options for comparing files.

...

Microsoft Downloads - XML Diff & Patch GUI Tool

WinForms application that can be used to compare 2 XML files.
Version: 1.0
Date Published: 7/14/2004
xmldiffgui.msi, 278 KB

This code sample shows how to build a Windows Forms application that utilizes the XML Diff & Patch library to show the difference between 2 XML files.

There's a bug somewhere in it, in that it was giving me an error when trying to load the HTML into an IE window, but that's likely a path thing. In the end, it executed and diff'd the two XML files. And since we do have the source... :)

Another blast from the past is that this was available on the old GotDotNet site. I miss that site... :(
February 7, 2014
by Greg Duncan
· 11,269 Views
article thumbnail
JBoss Modules Suck, It’s Impossible To Use Custom Resteasy/JAX-RS Under JBoss 7
Since JBoss EAP 6.1 / AS 7.2.0 is modular and you can exclude which modules are visible to your webapp, you would expect it to be easy to ignore the built-in implementation of JAX-RS (RESTEasy 2.3.6) and use a custom one (3.0.6). However, sadly, this is not the case. You are stuck with what the official guide suggests, i.e. upgrading RESTEasy globally – provided that no other webapp running on the server becomes broken by the upgrade.

Excluding the resteasy subsystem in a jboss-deployment-structure.xml should be enough to exclude the built-in RESTEasy and make it possible to use a version included in the webapp (a sketch of such a descriptor appears at the end of this post). However, it is far from working. Adding exclusions for the individual RESTEasy and Jackson modules nearly does the job (though a few of the exclusions might be unnecessary) – but only nearly. The problem is that the exclusion of javax.ws.rs.api has no effect. It seems that the core Java EE APIs cannot be excluded. Dead end.

BTW, these are my final JAX-RS related dependencies:

// resteasyVersion = '3.0.6.Final'
compile group: 'org.jboss.resteasy', name: 'jaxrs-api', version: resteasyVersion
compile group: 'org.jboss.resteasy', name: 'resteasy-jaxrs', version: resteasyVersion
compile group: 'org.jboss.resteasy', name: 'resteasy-jackson2-provider', version: resteasyVersion // JSONP
compile group: 'org.jboss.resteasy', name: 'async-http-servlet-3.0', version: resteasyVersion // Required at runtime
compile group: 'org.jboss.resteasy', name: 'resteasy-servlet-initializer', version: resteasyVersion // Required at runtime

An approximate history of failed attempts

I do not remember exactly all the dead ends I went through anymore, but here is an approximate overview of the exceptions I got at deployment or runtime:

java.lang.ClassNotFoundException: org.jboss.resteasy.plugins.server.servlet.HttpServlet30Dispatcher – likely fixed by adding org.jboss.resteasy:async-http-servlet-3.0:3.0.6.Final to the dependencies

java.lang.ClassCastException: myapp.rs.RestApplication cannot be cast to javax.servlet.Servlet – likely fixed by adding org.jboss.resteasy:resteasy-servlet-initializer:3.0.6.Final to the dependencies

java.lang.NoSuchMethodError: org.jboss.resteasy.spi.ResteasyProviderFactory.(Lorg/jboss/resteasy/spi/ResteasyProviderFactory;)V – likely fixed by adding more of the RESTEasy/Jackson modules to the exclusion list

java.lang.NoSuchMethodError: org.jboss.resteasy.specimpl.BuiltResponse.getHeaders()Ljavax/ws/rs/core/MultivaluedMap; – this is the ultimate one that cannot be fixed; the problem is that BuiltResponse from resteasy-jaxrs inherits from javax.ws.rs.core.Response, but the version of this class from jaxrs-api-3.0.6.Final.jar is ignored in favour of the Response from JAX-RS 1.1 in the javax.ws.rs.api module (/jboss-eap-6.1.0/modules/system/layers/base/javax/ws/rs/api/main/jboss-jaxrs-api_1.1_spec-1.0.1.Final-redhat-2.jar), which lacks the getHeaders method and, as mentioned, cannot be excluded. (Thanks to allprog for hinting at this conflict!)

Conclusion

The only way to use a newer JAX-RS is to upgrade the JBoss modules. If that would break some other webapps, you are stuck. Lessons learned: application servers with plenty of out-of-the-box, well-integrated (?) functionality seem attractive, but when you run into conflicting libraries and classloading issues, their value diminishes rapidly. Starting with something simple that you fully control, such as Jetty, is perhaps in the long run a better solution. Also, running multiple webapps on the same server was perhaps smart in 2000 but is not worth the pain nowadays.
We have plenty of disk space and memory, so reuse of libraries is unimportant, and the ability to manage global settings for all apps in one place certainly has better alternatives. Microservices FTW!

Update: As Yannick has pointed out, the conclusion seems too general and unjustified. That is because I had already arrived at it before, and this problem with JBoss serves only as another confirmation.

Solution?

Bill Burke has proposed a solution:

I’ve lived through your pain and here’s a solution that works on AS7.1.1, EAP6.x, and Wildfly: https://github.com/keycloak/keycloak/blob/master/server/src/main/webapp/WEB-INF/jboss-deployment-structure.xml

JBoss Modules don’t suck. The implicit dependencies do. The culprit is the “javaee.api” module which you have missing from your exclude. This module includes every single Java EE API. I haven’t tried, but I think if you reduce your excludes to just that module and the “resteasy” subsystem, it will work.

...

FYI, I believe the “javaee.api” module problem is fixed in Wildfly so you won’t have to do the extra exclude.
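For reference, a hedged sketch of the kind of jboss-deployment-structure.xml described above (the module names here are assumptions based on the exclusions discussed, not the exact descriptor from the post):

<jboss-deployment-structure>
    <deployment>
        <!-- Assumed: turn off the server's JAX-RS integration for this webapp -->
        <exclude-subsystems>
            <subsystem name="resteasy"/>
        </exclude-subsystems>
        <exclusions>
            <!-- Assumed module names; as described above, excluding
                 javax.ws.rs.api had no effect on EAP 6.1 -->
            <module name="javax.ws.rs.api"/>
            <module name="org.jboss.resteasy.resteasy-jaxrs"/>
            <module name="org.jboss.resteasy.resteasy-jackson-provider"/>
        </exclusions>
    </deployment>
</jboss-deployment-structure>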
February 7, 2014
by Jakub Holý
· 29,723 Views
article thumbnail
Java: Handling a RuntimeException in a Runnable
At the end of last year I was playing around with running scheduled tasks to monitor a Neo4j cluster, and one of the problems I ran into was that the monitoring would sometimes exit. I eventually realised that this was because a RuntimeException was being thrown inside the Runnable's run() method and I wasn't handling it. The following code demonstrates the problem:

import java.util.concurrent.*;

public class RunnableBlog {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                System.out.println(Thread.currentThread().getName() + " -> " + System.currentTimeMillis());
                throw new RuntimeException("game over");
            }
        }, 0, 1000, TimeUnit.MILLISECONDS).get();
        System.out.println("exit");
        executor.shutdown();
    }
}

If we run that code we'll see the RuntimeException, but the executor won't exit because the thread died without informing it:

pool-1-thread-1 -> 1391212558074
Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.RuntimeException: game over
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
	at java.util.concurrent.FutureTask.get(FutureTask.java:111)
	at RunnableBlog.main(RunnableBlog.java:11)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.lang.RuntimeException: game over
	at RunnableBlog$1.run(RunnableBlog.java:16)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)

At the time I ended up adding a try/catch block and printing the exception like so:

public class RunnableBlog {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println(Thread.currentThread().getName() + " -> " + System.currentTimeMillis());
                    throw new RuntimeException("game over");
                } catch (RuntimeException e) {
                    e.printStackTrace();
                }
            }
        }, 0, 1000, TimeUnit.MILLISECONDS).get();
        System.out.println("exit");
        executor.shutdown();
    }
}

This allows the exception to be recognised and, as far as I can tell, means that the thread executing the Runnable doesn't die.
java.lang.RuntimeException: game over
pool-1-thread-1 -> 1391212651955
	at RunnableBlog$1.run(RunnableBlog.java:16)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)
pool-1-thread-1 -> 1391212652956
java.lang.RuntimeException: game over
	at RunnableBlog$1.run(RunnableBlog.java:16)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)
pool-1-thread-1 -> 1391212653955
java.lang.RuntimeException: game over
	at RunnableBlog$1.run(RunnableBlog.java:16)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:722)

This worked well and allowed me to keep monitoring the cluster. However, I recently started reading 'Java Concurrency in Practice' (only 6 years after I bought it!) and realised that this might not be the proper way of handling the RuntimeException.

public class RunnableBlog {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println(Thread.currentThread().getName() + " -> " + System.currentTimeMillis());
                    throw new RuntimeException("game over");
                } catch (RuntimeException e) {
                    Thread t = Thread.currentThread();
                    t.getUncaughtExceptionHandler().uncaughtException(t, e);
                }
            }
        }, 0, 1000, TimeUnit.MILLISECONDS).get();
        System.out.println("exit");
        executor.shutdown();
    }
}

I don't see much difference between the two approaches, so it'd be great if someone could explain to me why this approach is better than my previous one of catching the exception and printing the stack trace.
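Whichever handler you prefer, one way to keep the choice in a single place (a hedged sketch, not from the original post) is to wrap every scheduled task:

// Hypothetical reusable wrapper: route any RuntimeException to the thread's
// uncaught-exception handler instead of letting it silently kill the schedule.
class GuardedRunnable implements Runnable {
    private final Runnable delegate;

    GuardedRunnable(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        try {
            delegate.run();
        } catch (RuntimeException e) {
            Thread t = Thread.currentThread();
            t.getUncaughtExceptionHandler().uncaughtException(t, e);
        }
    }
}

// Usage: executor.scheduleAtFixedRate(new GuardedRunnable(task), 0, 1000, TimeUnit.MILLISECONDS);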
February 6, 2014
by Mark Needham
· 19,050 Views
article thumbnail
Getting Started with Intellij IDEA and WebLogic Server
Before starting, you will need the Ultimate edition of IDEA to run WebLogic Server (yes, the paid version or the 30-day trial). The Community edition of IDEA does not support application server deployment. I also assume you have already set up WebLogic Server and a user domain as per my previous blog instructions. So now let's set up the IDE to boost your development.

1. Create a simple HelloWorld web application in IDEA. For your HelloWorld, you can go into Project Settings > Artifacts and add a "web:war exploded" entry for your application. You will add this into your app server later.
2. Ensure you have added the Application Server Views plugin with WebLogic Server (it's under Settings > IDE Settings > Application Server). Click + and enter:
   Name: WebLogic 12.1.2
   WebLogic Home: C:\apps\wls12120
3. Back in your editor, select Menu: Run > Edit Configuration. Click + and add "WebLogic Server" > Local.
   Name: WLS
   On the Server tab, ensure DomainPath is set: C:\apps\wls12120\mydomain
   On the Deployment tab, select "web:war exploded" for your HelloWorld project. Click OK.
4. Now Menu: Run > "Run WLS".

Your WebLogic Server should now start with your web application running inside. You may visit it in the browser at http://localhost:7001/web_war_exploded

Some goodies with IntelliJ IDEA and WLS are:

· Redeploy the WAR only, without restarting the server
· Deploy the application in exploded mode and have the IDE auto-make and sync
· Debug the application with the server running within the IDE
· Full control over server settings

NOTE: As noted in the previous blog, if you do not set MW_HOME as a system variable, then you must add it in IDEA's Run Configuration, or edit your "mydomain/bin/startWebLogic.cmd" and "stopWebLogic.cmd" scripts directly.
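For reference, a minimal HelloWorld servlet for step 1 might look like the sketch below (the class name and URL mapping are assumptions, not from the original post):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Packaged in the "web:war exploded" artifact; responds at /hello under the app context.
@WebServlet("/hello")
public class HelloWorldServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from WebLogic 12.1.2!");
    }
}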
February 5, 2014
by Zemian Deng
· 46,069 Views
article thumbnail
Convert Java Objects to XML and XML to Java Objects with XStream
Learn how to convert Java objects to XML, and vice versa, with XStream.
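The post's own code isn't reproduced in this summary; as a hedged sketch, a minimal XStream round trip (class and field names assumed) looks like this:

import com.thoughtworks.xstream.XStream;

public class XStreamRoundTrip {
    public static class Person {
        public String name;
        public int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        xstream.alias("person", Person.class); // serialize as <person> instead of the full class name

        String xml = xstream.toXML(new Person("Ada", 36)); // Java object -> XML
        Person back = (Person) xstream.fromXML(xml);       // XML -> Java object
        System.out.println(xml);
        System.out.println(back.name + " / " + back.age);
    }
}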
February 4, 2014
by Hari Subramanian
· 124,257 Views