The Latest Coding Topics

OAuth 2.0 Bearer Token Profile vs. MAC Token Profile
Almost all the implementations I see today are based on the OAuth 2.0 Bearer Token Profile, which is now an RFC proposed standard. The Bearer Token Profile brings a simplified scheme for authentication: it describes how to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a "bearer") can use it to get access to the associated resources, without demonstrating possession of a cryptographic key. To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport.

Before digging into the OAuth 2.0 MAC profile, let's have a quick high-level overview of the OAuth 2.0 message flow. OAuth 2.0 has three main phases:

1. Requesting an Authorization Grant.
2. Exchanging the Authorization Grant for an Access Token.
3. Accessing the resources with the Access Token.

Where does the token type come into action? The OAuth 2.0 core specification does not mandate any token type, and the token requester (the client) cannot choose which token type it needs: it is purely up to the Authorization Server to decide which token type to return in the Access Token response. So the token type comes into action in phase 2, when the Authorization Server returns the OAuth 2.0 Access Token. The access token type provides the client with the information required to successfully utilize the access token to make a protected resource request (along with type-specific attributes). The client must not use an access token if it does not understand the token type. Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a protected resource request. For example, the following is the Access Token response for a Bearer token, irrespective of which grant type you use:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"mF_9.B5f-4.1JqM",
  "token_type":"Bearer",
  "expires_in":3600,
  "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
}

The above is for Bearer; the following is for MAC:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "access_token":"SlAV32hkKG",
  "token_type":"mac",
  "expires_in":3600,
  "refresh_token":"8xLOxBtZp8",
  "mac_key":"adijq39jdlaska9asud",
  "mac_algorithm":"hmac-sha-256"
}

Here you can see that the MAC Access Token response has two additional attributes: mac_key and mac_algorithm. To rephrase the earlier point: "Each access token type definition specifies the additional attributes (if any) sent to the client together with the 'access_token' response parameter." The MAC Token Profile defines the HTTP MAC access authentication scheme, providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host. In the above response, access_token is the MAC key identifier. Unlike Bearer, the MAC token profile never passes its secret over the wire. The access_token, or MAC key identifier, is a string identifying the MAC key used to calculate the request MAC; the string is usually opaque to the client. The server typically assigns a specific scope and lifetime to each set of MAC credentials. The identifier may denote a unique value used to retrieve the authorization information (e.g.
from a database), or self-contain the authorization information in a verifiable manner (i.e., a string consisting of some data and a signature). The mac_key is a shared symmetric secret used as the MAC algorithm key. The server will not reissue a previously issued MAC key and MAC key identifier combination.

Now let's see what happens in phase 3. The following shows how the Authorization HTTP header looks when a Bearer token is used:

Authorization: Bearer mF_9.B5f-4.1JqM

This adds very little overhead on the client side: it simply passes the exact access_token it got from the Authorization Server in phase 2. Under the MAC token profile, it looks like this:

Authorization: MAC id="h480djs93hd8",
               ts="1336363200",
               nonce="dj83hs9s",
               mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM="

This needs a bit more attention. id is the MAC key identifier, i.e. the access_token from phase 2. ts is the request timestamp: a positive integer set by the client when making each request, giving the number of seconds elapsed from a fixed point in time (e.g. January 1, 1970 00:00:00 GMT). nonce is a unique string generated by the client; the value is unique across all requests with the same timestamp and MAC key identifier combination. The client uses the MAC algorithm and the MAC key to calculate the request mac over a normalized request string. The normalized request string is a consistent, reproducible concatenation of several of the HTTP request elements into a single string; by normalizing the request into a reproducible string, the client and server can both calculate the request MAC over the exact same value. The string is constructed by concatenating, in order, the following HTTP request elements, each followed by a new line character (%x0A):

1. The timestamp value calculated for the request.
2. The nonce value generated for the request.
3. The HTTP request method in upper case, for example "HEAD", "GET" or "POST".
4. The HTTP request-URI as defined by [RFC 2616] section 5.1.2.
5. The hostname included in the HTTP request using the "Host" request header field, in lower case.
6. The port as included in the HTTP request using the "Host" request header field. If the header field does not include a port, the default value for the scheme MUST be used (e.g. 80 for HTTP and 443 for HTTPS).
7. The value of the "ext" "Authorization" request header field attribute if one was included in the request (this is optional); otherwise, an empty string.

Each element is followed by a new line character (%x0A), including the last element and even when an element value is an empty string. Whether you use Bearer or MAC, the end user or resource owner is identified using the access_token. Authorization, throttling, monitoring, or any other quality-of-service operations can be carried out against the access_token irrespective of which token profile you use.
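To make the MAC calculation concrete, here is a minimal sketch in Java of building the normalized request string and computing the request MAC with HMAC-SHA-256. It reuses the sample values from above; the class and method names are illustrative, not taken from any particular library.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MacTokenExample {

    // Computes the request MAC over the normalized request string,
    // using the algorithm negotiated via the "mac_algorithm" attribute.
    static String requestMac(String macKey, String ts, String nonce, String method,
                             String requestUri, String host, String port, String ext)
            throws Exception {
        // Each element is followed by a newline (%x0A), including the last one.
        String normalized = ts + "\n" + nonce + "\n" + method + "\n" + requestUri + "\n"
                + host + "\n" + port + "\n" + ext + "\n";
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getEncoder()
                .encodeToString(hmac.doFinal(normalized.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String mac = requestMac("adijq39jdlaska9asud", "1336363200", "dj83hs9s",
                "GET", "/resource/1?b=1&a=2", "example.com", "80", "");
        System.out.println("Authorization: MAC id=\"h480djs93hd8\", ts=\"1336363200\", "
                + "nonce=\"dj83hs9s\", mac=\"" + mac + "\"");
    }
}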
January 24, 2013
by Prabath Siriwardena
· 36,237 Views
Create Post Tags With CSS
In this tutorial we are going to create post tags using CSS, and we will also create a WordPress widget that allows you to display these post tags in the sidebar of your WordPress site. The above image shows the tags we are going to create. In WordPress you can also display the number of posts that have a given tag assigned to them; we will display this number on the right of the tag, revealed on hover. View the demo to see what we are going to create.

To start with, we define the HTML. The tags are list items, and inside each list item is a span tag holding the number of posts that have the tag assigned to them:

<ul class="posttags">
    <li><a href="#">Admin <span>11</span></a></li>
</ul>

There is a CSS class on the unordered list which we can use to style the child elements of the list items. The demo renders tags such as: 404 (1), Admin (11), Animation (13), API (11), Border (3), box-sizing (1), CSS (12), CSS3 (14), cURL (3), Database (6), Effects (2), Element (3), Error (8), Freebies (8), Google (16), htaccess (10), HTML (8), HTML5 (12), JavaScript (25), Search (16), SEO (11), Snippet (6), Tutorial (11), Web (8), WordPress (27).

Styling the Tags

The following creates the length and background colours of the tags; the padding on the left of each tag creates enough room for the hole in the tag and the arrow on the end.

.posttags {
    list-style:none;
}
.posttags li {
    background:#eee;
    border-radius: 0 20px 20px 0;
    display: block;
    height:24px;
    line-height: 24px;
    margin: 1em 0.7em;
    padding:5px 5px 5px 20px;
    position: relative;
    text-align: left;
    width:124px;
}
.posttags li a {
    color: #56a3d5;
    display: block;
}

Using the CSS pseudo-class :before we can create a new element styled as a triangle; this needs to be positioned before the rectangular box, which creates the tag look.

.posttags li:before {
    content:"";
    float:left;
    position:absolute;
    top:0;
    left:-17px;
    width:0;
    height:0;
    border-color:transparent #eee transparent transparent;
    border-style:solid;
    border-width:17px 17px 17px 0;
}

The code below creates the hole on the left of the tag. It uses the CSS pseudo-class :after to create a new element; with absolute positioning we can place it exactly where the hole belongs.

.posttags li:after {
    content:"";
    position:absolute;
    top:15px;
    left:2px;
    float:left;
    width:6px;
    height:6px;
    -moz-border-radius:2px;
    -webkit-border-radius:2px;
    border-radius:2px;
    background:#fff;
    -moz-box-shadow:-1px -1px 2px #004977;
    -webkit-box-shadow:-1px -1px 2px #004977;
    box-shadow:-1px -1px 2px #004977;
}

The span tag is used to display the count of posts assigned to a tag. The post count starts off as a border on the right side of the tag; when we hover over the tag, a CSS transition expands the width of the span to display the count.

.posttags li span {
    background: #56a3d5;
    border-color: #3591cd #318cc7 #2f86be;
    background-image: -webkit-linear-gradient(top, #6aaeda, #4298d0);
    background-image: -moz-linear-gradient(top, #6aaeda, #4298d0);
    background-image: -o-linear-gradient(top, #6aaeda, #4298d0);
    background-image: linear-gradient(to bottom, #6aaeda, #4298d0);
    border-radius: 0 20px 20px 0;
    color: transparent;
    display: inline-block;
    height:24px;
    line-height: 24px;
    padding:5px;
    width:0;
    position: absolute;
    text-align: center;
    top:-1px;
    right:0;
    -webkit-transition: width 0.3s ease-out;
    -moz-transition: width 0.3s ease-out;
    -o-transition: width 0.3s ease-out;
    transition: width 0.3s ease-out;
}

On the hover event of the list item we can now change the width of the span tag and the colour of the text to make it display the post count.
.posttags li:hover span {
    width:20px;
    color: #FFF;
}

View the demo to see what this will create.

Create a WordPress Widget to Display the Post Tags

The following code can be used to create a WordPress widget that displays all the post tags on your WordPress site. For this widget we use the widget boilerplate, which inherits from the WP_Widget class, to display the widget, create a form for the widget settings, and enqueue the stylesheet that styles the post tags. To get all the post tags for your site you use the WordPress function get_tags(), which returns an array of tag objects; each tag object gives you access to all the data for that tag, including the post count. The tag object also exposes the tag ID, so we can use the WordPress function get_tag_link( $tagId ) to link the entire tag to its tag page.

'pu_sliding_tag_count', 'description' => __('A widget that allows you to add a widget to display sliding post tags.', 'framework') ) ); } // end constructor

    /**
     * Adding styles
     */
    public function add_styles() {
        wp_register_style( 'pu_sliding_tags', plugins_url('sliding-tag-count.css', __FILE__) );
        wp_enqueue_style('pu_sliding_tags');
    }

    /**
     * Front-end display of widget.
     *
     * @see WP_Widget::widget()
     *
     * @param array $args     Widget arguments.
     * @param array $instance Saved values from database.
     */
    public function widget( $args, $instance ) {
        extract( $args );
        // Our variables from the widget settings
        $title = apply_filters('widget_title', $instance['title'] );
        // Before widget (defined by theme functions file)
        echo $before_widget;
        // Display the widget title if one was input
        if ( $title ) echo $before_title . $title . $after_title;
        ?>
        0 ) { echo ' '; foreach($posttags as $posttag) { echo ' term_id).'">'.$posttag->name.' '.$posttag->count.' '; } echo ' '; } ?>
        $v){ $instance[$k] = strip_tags($v); } return $instance; }

    /**
     * Create the form for the Widget admin
     *
     * @see WP_Widget::form()
     *
     * @param array $instance Previously saved values from database.
     */
    function form( $instance ) {
        // Set up some default widget settings
        $defaults = array( 'title' => 'Post Tags' );
        $instance = wp_parse_args( (array) $instance, $defaults );
        ?>

Copy the above code into a new PHP file and place it inside your plugin folder. You will then see this new widget on your plugin screen; once it is activated, you will see the new widget on the widget dashboard screen.
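The widget class must also be registered with WordPress before it appears on the widget screens. A minimal sketch, assuming the widget class in the file above is named PU_Sliding_Tag_Count (a hypothetical name; match it to your actual class):

// Register the widget on WordPress start-up.
// 'PU_Sliding_Tag_Count' is an assumed class name for illustration.
add_action( 'widgets_init', function () {
    register_widget( 'PU_Sliding_Tag_Count' );
} );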
January 23, 2013
by Paul Underwood
· 5,833 Views
How to Publish Maven Site Docs to BitBucket or GitHub Pages
In this post we will utilize GitHub's and/or BitBucket's static web page hosting capabilities to publish our project's Maven 3 site documentation. The two SCM providers offer slightly different solutions for hosting static pages. The approach spelled out in this post is also a viable way to "back up" your site documentation in a supported SCM like Git or SVN. This solution does not cover site documentation deployment via the maven-site-plugin and the Wagon library (scp, WebDAV or FTP).

There is one main project hosted on GitHub with the full solution. The project URL is https://github.com/mike-ensor/clickconcepts-master-pom/. The POM has been pushed to Maven Central and will continue to be updated and maintained:

<groupId>com.clickconcepts.project</groupId>
<artifactId>master-site-pom</artifactId>
<version>0.16</version>

GitHub Pages

GitHub hosts static pages via a special branch, "gh-pages", available to each GitHub project. This special branch can host any HTML and local resources like JavaScript, images and CSS; there is no server-side development. The URL structure of your static pages is as follows:

http://<username>.github.com/<project>

An example for the project used in this blog post is http://mike-ensor.github.com/clickconcepts-master-pom/, where the first segment is the username and the second is the project. GitHub does also allow you to create a base static site for your username by creating a repository named <username>.github.com, whose contents would be all of your HTML and associated static resources. Unlike the BitBucket solution, this is not required in order to post documentation for your project. There is a GitHub Site plugin that publishes site documentation via GitHub's object API, but it is outside the scope of this blog post because it does not provide a single solution for GitHub and BitBucket projects using Maven 3.

BitBucket

BitBucket provides a similar service to GitHub in that it hosts static HTML pages and their associated static resources. However, there is one large difference in how those pages are stored. Unlike GitHub, BitBucket requires you to create a new repository with a name fitting the convention:

<username>.bitbucket.org

The files live on the master branch, and each project is a directory off of the root:

mikeensor.bitbucket.org/
    /some-project
        index.html
        ...
        /css
        /img
    /some-other-project
        index.html
        ...
        /css
        /img
    index.html
    .git
    .gitignore

An example of a BitBucket static pages repository for me would be http://mikeensor.bitbucket.org/. The structure does not require an index.html page at the root of the repository, but it is advisable in order to avoid 404s.

Generating Site Documentation

Maven provides the ability to publish documentation for your project via the maven-site-plugin. This plugin is difficult to use because many of its configuration options are not well documented. There are many blog posts that can help you write your documentation, including my post on Maven site documentation. I did not mention there how to use "xdoc", "apt" or other templating technologies to create documentation pages, but not to fear, I have provided this in my GitHub project.

Putting It All Together

The Maven SCM Publish plugin (http://maven.apache.org/plugins/maven-scm-publish-plugin/) publishes site documentation to a supported SCM. In our case, we are going to use Git through BitBucket or GitHub.
The Maven SCM Publish plugin does allow you to publish multi-module site documentation through various properties, but the scope of this blog post is single/mono-module projects; the multi-module process is a bit painful. Take a moment to look at the POM file located in the clickconcepts-master-pom project. This master POM is rather comprehensive, and site documentation is only one portion of it, but we will focus on that portion. There are a few things to point out here: first, the scm-publish plugin, and the idiosyncrasies you hit when implementing it.

In order to create the site documentation, the site plugin must first be run. This is accomplished by running site:site. The plugin generates the documentation into the "target/site" folder by default. The SCM Publish plugin, by default, looks for the site documents in "target/staging"; this location is controlled by the content parameter. As you can see, there is a mismatch between the folders.

NOTE: My first approach was to run the site:stage command, which is supposed to put the site documents into the "target/staging" folder. This is not entirely correct: the site plugin combines with the distributionManagement.site.url property to stage the documents, but the behavior is very strange and not well documented.

In order to get the site plugin's output and the SCM Publish plugin's input location to match up, set the content parameter to the location of the site plugin output (the siteOutputDirectory). If you are using GitHub, no modification of the siteOutputDirectory is needed; however, if you are using BitBucket, you will need to modify the property to add a directory layer into the site documentation generation (see above for the differences between GitHub and BitBucket pages). The second property tells the SCM Publish plugin to look at the root "site" folder, so that when the files are copied into the repository, the project folder is the containing folder. The properties look like:

<siteOutputDirectory>${project.build.directory}/site/${project.artifactId}</siteOutputDirectory>
<scmpublish.content>${project.build.directory}/site</scmpublish.content>

Next we will take a look at the custom properties defined in the master POM and used by the SCM Publish plugin above. Each project will need to define several properties to use the master POM during site publishing. Fill in the variables with your own settings.

BitBucket:

    branch: master
    publish SCM URL: scm:git:git@bitbucket.org:mikeensor/mikeensor.bitbucket.org.git
    site output directory: ${project.build.directory}/site/${project.artifactId}
    content directory: ${project.build.directory}/site
    changelog file URIs: ${changelog.bitbucket.fileUri} and ${changelog.revision.bitbucket.fileUri}

GitHub:

    branch: gh-pages
    publish SCM URL: scm:git:git@github.com:mikeensor/clickconcepts-master-pom.git
    changelog file URIs: ${changelog.github.fileUri} and ${changelog.revision.github.fileUri}

NOTE: The changelog parameters are required to use the master POM and are not directly related to publishing site docs to GitHub or BitBucket.

How to Generate

If you are using the master POM (or have abstracted out the site plugin and the SCM Publish plugin), generating and publishing the documentation is simple:

mvn clean site:site scm-publish:publish-scm
mvn clean site:site scm-publish:publish-scm -Dscmpublish.dryRun=true

Gotchas

In the "tips" section of the SCM Publish plugin documentation, it is recommended to designate a location where the repository stays checked out, so that the repo is not cloned on each run. There is a risk here: if a git repository already exists in that folder, the plugin will overwrite the repository with the new site documentation.
This was discovered when publishing two different projects caused my root repository to be wiped out by documentation from the second project. There are ways to mitigate this by adding another folder layer, but make sure you test often! Another tip: use -Dscmpublish.dryRun=true to test out the site documentation process without making the SCM commit and push.

Project and Documentation URLs

Here is a list of the fully working projects used to create this blog post:

Master POM with Site and SCM Publish plugins – https://github.com/mike-ensor/clickconcepts-master-pom. Documentation URL: http://mike-ensor.github.com/clickconcepts-master-pom/
Child project using the master POM – http://mikeensor.bitbucket.org/fest-expected-exception. Documentation URL: http://mikeensor.bitbucket.org/fest-expected-exception/
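For reference, a minimal plugin configuration in the shape this post describes might look like the following. The content, scmBranch and pubScmUrl parameters are the SCM Publish plugin's own; the version and the property wiring here are illustrative rather than copied from the master POM.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-scm-publish-plugin</artifactId>
    <version>1.0-beta-2</version>
    <configuration>
        <!-- Where the site plugin wrote the documentation -->
        <content>${project.build.directory}/site</content>
        <!-- gh-pages for GitHub; master for <username>.bitbucket.org repositories -->
        <scmBranch>gh-pages</scmBranch>
        <pubScmUrl>scm:git:git@github.com:mike-ensor/clickconcepts-master-pom.git</pubScmUrl>
    </configuration>
</plugin>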
January 23, 2013
by Mike Ensor
· 13,140 Views
Spring Data JDBC Generic DAO Implementation: Most Lightweight ORM Ever
I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy-to-use DAO implementation for relational databases, based on JdbcTemplate from the Spring framework and compatible with the Spring Data umbrella of projects.

Design objectives:

- Lightweight, fast and low-overhead. Only a handful of classes; no XML, annotations or reflection.
- This is not a full-blown ORM: no relationship handling, lazy loading, dirty checking or caching.
- CRUD implemented in seconds.
- For small applications where JPA is overkill.
- Use when simplicity is needed, or when future migration (e.g. to JPA) is considered.
- Minimalistic support for database dialect differences (e.g. transparent paging of results).

Features. Each DAO provides built-in support for:

- Mapping to/from domain objects through the RowMapper abstraction
- Generated and user-defined primary keys
- Extracting the generated key
- Compound (multi-column) primary keys
- Immutable domain objects
- Paging (requesting a subset of results)
- Sorting over several columns (database agnostic)
- Optional support for many-to-one relationships

Supported databases (continuously tested): MySQL, PostgreSQL, H2, HSQLDB, Derby... and most likely most of the others. The library is easily extendable to other database dialects via the SqlGenerator class, and offers easy retrieval of records by ID.

API

The API is compatible with the Spring Data PagingAndSortingRepository abstraction; all of these methods are implemented for you:

public interface PagingAndSortingRepository<T, ID extends Serializable>
        extends CrudRepository<T, ID> {
    T save(T entity);
    Iterable<T> save(Iterable<? extends T> entities);
    T findOne(ID id);
    boolean exists(ID id);
    Iterable<T> findAll();
    long count();
    void delete(ID id);
    void delete(T entity);
    void delete(Iterable<? extends T> entities);
    void deleteAll();
    Iterable<T> findAll(Sort sort);
    Page<T> findAll(Pageable pageable);
}

Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have a userRepository extending the PagingAndSortingRepository interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 rows per page, after applying some sorting:

Page<User> page = userRepository.findAll(
    new PageRequest(
        5, 10,
        new Sort(
            new Order(DESC, "reputation"),
            new Order(ASC, "user_name")
        )
    )
);

The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax):

SELECT *
FROM USERS
ORDER BY reputation DESC, user_name ASC
LIMIT 50 OFFSET 10

...or even (Derby syntax):

SELECT * FROM (
    SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.*
    FROM (
        SELECT *
        FROM USERS
        ORDER BY reputation DESC, user_name ASC
    ) AS t
) AS a
WHERE ROW_NUM BETWEEN 51 AND 60

No matter which database you use, you'll get a Page<User> object in return (you still have to provide a RowMapper yourself to translate from ResultSet to domain object). If you don't know the Spring Data project yet: Page<T> is a wonderful abstraction, not only encapsulating a List<T>, but also providing metadata such as the total number of records, which page we are currently on, etc.

Reasons to use

- You consider migration to JPA, or even some NoSQL database, in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API.
Of course, don't expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies, but at least you minimize the impact by using the same DAO API.

- You need a fast, simple JDBC wrapper library, and JPA or even MyBatis is overkill.
- You want full control over the generated SQL if needed.
- You want to work with objects, but don't need lazy loading, relationship handling, multi-level caching or dirty checking.
- You need CRUD and not much more.
- You want to be DRY.
- You are already using Spring, or maybe even JdbcTemplate, but still feel like there is too much manual work.
- You have very few database tables.

Getting started

For more examples and working code, don't forget to examine the project tests.

Prerequisites. Maven coordinates:

<dependency>
    <groupId>com.blogspot.nurkiewicz</groupId>
    <artifactId>jdbcrepository</artifactId>
    <version>0.1</version>
</dependency>

Unfortunately the project is not yet in the Maven central repository. For the time being you can install the library in your local repository by cloning it:

$ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git
$ git checkout 0.1
$ mvn javadoc:jar source:jar install

In order to start, your project must have a DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration:

@EnableTransactionManagement
@Configuration
public class MinimalConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }

    @Bean
    public DataSource dataSource() {
        MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource();
        ds.setUser("user");
        ds.setPassword("secret");
        ds.setDatabaseName("db_name");
        return ds;
    }
}

Entity with auto-generated key

Say you have the following database table with an auto-generated key (MySQL syntax):

CREATE TABLE COMMENTS (
    id INT AUTO_INCREMENT,
    user_name varchar(256),
    contents varchar(1000),
    created_time TIMESTAMP NOT NULL,
    PRIMARY KEY (id)
);

First you need to create a domain object Comment mapping to that table (just like in any other ORM):

public class Comment implements Persistable<Integer> {

    private Integer id;
    private String userName;
    private String contents;
    private Date createdTime;

    @Override
    public Integer getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return id == null;
    }

    //getters/setters/constructors/...
}

Apart from the standard Java boilerplate, you should notice that the class implements Persistable<Integer>, where Integer is the type of the primary key. Persistable is an interface coming from the Spring Data project, and it's the only requirement we place on your domain object. Finally, we are ready to create our CommentRepository DAO:

@Repository
public class CommentRepository extends JdbcRepository<Comment, Integer> {

    public CommentRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS");
    }

    public static final RowMapper<Comment> ROW_MAPPER = //see below

    private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below

    @Override
    protected Comment postCreate(Comment entity, Number generatedId) {
        entity.setId(generatedId.intValue());
        return entity;
    }
}

First of all, we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation, and such annotated beans are also discovered by CLASSPATH scanning. As you can see, we extend JdbcRepository<Comment, Integer>, which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: a RowMapper, a RowUnmapper and the table name. You may also provide the ID column name, otherwise the default "id" is used. If you have ever used JdbcTemplate from Spring, you should be familiar with the RowMapper interface.
We need to somehow extract columns from a ResultSet into an object; after all, we don't want to work with raw JDBC results. It's quite straightforward:

public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() {
    @Override
    public Comment mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Comment(
            rs.getInt("id"),
            rs.getString("user_name"),
            rs.getString("contents"),
            rs.getTimestamp("created_time")
        );
    }
};

RowUnmapper comes from this library and is essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL INSERT/UPDATE queries:

private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() {
    @Override
    public Map<String, Object> mapColumns(Comment comment) {
        Map<String, Object> mapping = new LinkedHashMap<String, Object>();
        mapping.put("id", comment.getId());
        mapping.put("user_name", comment.getUserName());
        mapping.put("contents", comment.getContents());
        mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime()));
        return mapping;
    }
};

If you never update your database table (you are just reading some reference data inserted elsewhere), you may skip the RowUnmapper parameter or use MissingRowUnmapper. The last piece of the puzzle is the postCreate() callback method, which is called after an object has been inserted. You can use it to retrieve the generated primary key and update your domain object (or return a new one, if your domain objects are immutable). If you don't need it, just don't override postCreate(). Check out JdbcRepositoryGeneratedKeyTest for working code based on this example.

By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and manifesting some learning curve. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations... all the implicitness that is not always desired. This project is not intended to replace mature and stable ORM frameworks; instead, it tries to fill a niche between raw JDBC and ORM, where simplicity and low overhead are key features.

Entity with manually assigned key

In this example we'll see how entities with user-defined primary keys are handled. Let's start with the database model:

CREATE TABLE USERS (
    user_name varchar(255),
    date_of_birth TIMESTAMP NOT NULL,
    enabled BIT(1) NOT NULL,
    PRIMARY KEY (user_name)
);

...and the User domain model:

public class User implements Persistable<String> {

    private transient boolean persisted;

    private String userName;
    private Date dateOfBirth;
    private boolean enabled;

    @Override
    public String getId() {
        return userName;
    }

    @Override
    public boolean isNew() {
        return !persisted;
    }

    public User withPersisted(boolean persisted) {
        this.persisted = persisted;
        return this;
    }

    //getters/setters/constructors/...
}

Notice the special persisted transient flag that was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not (the isNew() method); there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above), but in this case we need an extra transient field. If you hate this workaround, and you only insert data and never update, you'll get away with returning true all the time from isNew().
And finally our DAO, the UserRepository bean:

@Repository
public class UserRepository extends JdbcRepository<User, String> {

    public UserRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name");
    }

    public static final RowMapper<User> ROW_MAPPER = //...

    public static final RowUnmapper<User> ROW_UNMAPPER = //...

    @Override
    protected User postUpdate(User entity) {
        return entity.withPersisted(true);
    }

    @Override
    protected User postCreate(User entity, Number generatedId) {
        return entity.withPersisted(true);
    }
}

The "USERS" and "user_name" parameters designate the table name and the primary key column name. I'll leave out the details of the mapper and unmapper (see the source code). But please notice the postUpdate() and postCreate() methods: they ensure that once an object has been persisted, the persisted flag is set, so that subsequent calls to save() will update the existing entity rather than trying to reinsert it. Check out JdbcRepositoryManualKeyTest for working code based on this example.

Compound primary key

We also support compound primary keys (primary keys consisting of several columns). Take this table as an example:

CREATE TABLE BOARDING_PASS (
    flight_no VARCHAR(8) NOT NULL,
    seq_no INT NOT NULL,
    passenger VARCHAR(1000),
    seat CHAR(3),
    PRIMARY KEY (flight_no, seq_no)
);

I would like you to notice the type of the primary key in Persistable:

public class BoardingPass implements Persistable<Object[]> {

    private transient boolean persisted;

    private String flightNo;
    private int seqNo;
    private String passenger;
    private String seat;

    @Override
    public Object[] getId() {
        return pk(flightNo, seqNo);
    }

    @Override
    public boolean isNew() {
        return !persisted;
    }

    //getters/setters/constructors/...
}

Unfortunately we don't support small value classes encapsulating all ID values in one object (like JPA does with @IdClass), so you have to live with an Object[] array. Defining the DAO class is similar to what we've already seen:

public class BoardingPassRepository extends JdbcRepository<BoardingPass, Object[]> {

    public BoardingPassRepository() {
        this("BOARDING_PASS");
    }

    public BoardingPassRepository(String tableName) {
        super(MAPPER, UNMAPPER,
            new TableDescription(tableName, null, "flight_no", "seq_no")
        );
    }

    public static final RowMapper<BoardingPass> MAPPER = //...

    public static final RowUnmapper<BoardingPass> UNMAPPER = //...
}

Two things to notice: we extend JdbcRepository<BoardingPass, Object[]>, and we provide the two ID column names, just as expected: "flight_no", "seq_no". We query such a DAO by providing both flight_no and seq_no (necessarily in that order) values wrapped in an Object[]:

BoardingPass pass = repository.findOne(new Object[] {"FOO-1022", 42});

No doubt this is cumbersome in practice, so we provide a tiny helper method which you can statically import:

import static com.blogspot.nurkiewicz.jdbcrepository.JdbcRepository.pk;
//...
BoardingPass foundFlight = repository.findOne(pk("FOO-1022", 42));

Check out JdbcRepositoryCompoundPkTest for working code based on this example.

Transactions

This library is completely orthogonal to transaction management. Every method of every repository requires a running transaction, and it's up to you to set it up. Typically you would place @Transactional on the service layer (calling the DAO beans); I don't recommend placing @Transactional over every DAO bean.
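To illustrate that recommended setup, here is a minimal sketch of a service layer owning the transaction boundary and calling the DAO; the service class and method names are illustrative, not part of the library.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CommentService {

    private final CommentRepository commentRepository;

    @Autowired
    public CommentService(CommentRepository commentRepository) {
        this.commentRepository = commentRepository;
    }

    // The transaction spans the whole use case, not each individual DAO call.
    @Transactional
    public Comment leaveComment(Comment comment) {
        return commentRepository.save(comment);
    }
}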
Caching

The Spring Data JDBC repository library does not provide any caching abstraction or support. However, adding a @Cacheable layer on top of your DAOs or services, using the caching abstraction in Spring, is quite straightforward. See also: @Cacheable overhead in Spring.

Contributions

...are always welcome. Don't hesitate to submit bug reports and pull requests. The biggest missing feature now is support for MSSQL and Oracle databases; it would be terrific if someone could have a look at it.

Testing

This library is continuously tested using Travis CI. The test suite consists of 265 tests (53 distinct tests, each run against 5 different databases: MySQL, PostgreSQL, H2, HSQLDB and Derby). When filing bug reports or submitting new features, please try to include supporting test cases. Each pull request is automatically tested on a separate branch.

Building

After forking the official repository, building is as simple as running:

$ mvn install

You'll notice plenty of exceptions during JUnit test execution. This is normal: some of the tests run against MySQL and PostgreSQL servers available only on the Travis CI server. When these database servers are unavailable, the whole test is simply skipped:

Results :

Tests run: 265, Failures: 0, Errors: 0, Skipped: 106

The exception stack traces come from the root AbstractIntegrationTest.

Design

The library consists of only a handful of classes, highlighted in the diagram below. JdbcRepository is the most important class: it implements all PagingAndSortingRepository methods, and each user repository has to extend it. Each such repository must also provide at least a RowMapper and a RowUnmapper (the latter only if you want to modify table data). SQL generation is delegated to SqlGenerator; PostgreSqlGenerator and DerbySqlGenerator are provided for databases that don't work with the standard generator.

License

This project is released under version 2.0 of the Apache License (the same as the Spring framework).
January 22, 2013
by Tomasz Nurkiewicz
· 76,022 Views · 2 Likes
LINQ Performance With Count() and Any()
Recently I was using ReSharper to refactor some of my code and found that it suggested using the Any() extension method instead of the Count() method on a List. I was keen on performance and found that ReSharper was right: there is a huge performance difference between Any() and Count() if you are using Count() just to check whether a list is empty or not.

Difference between Count() and Any(): Count() traverses the whole sequence to get the total number of objects in it, while Any() returns after examining the first element of the sequence. So for a sequence with many objects, Count() adds significant execution time. (Strictly speaking, the Count() extension method short-circuits for collections implementing ICollection<T>, such as List<T>, by reading their Count property; the full traversal happens for lazily evaluated sequences such as LINQ query results.)

Example: Let's take an example to illustrate this scenario. Following is the code for that:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace ConsoleApplication3
{
    class Program
    {
        static void Main()
        {
            // Creating a list of customers
            List<Customer> customers = new List<Customer>();
            for (int i = 0; i <= 100; i++)
            {
                Customer customer = new Customer
                {
                    CustomerId = i,
                    CustomerName = string.Format("Customer{0}", i)
                };
                customers.Add(customer);
            }

            // Measuring time with Count()
            Stopwatch stopWatch = new Stopwatch();
            stopWatch.Start();
            if (customers.Count() > 0)
            {
                Console.WriteLine("Customer list is not empty with count");
            }
            stopWatch.Stop();
            Console.WriteLine("Time consumed with count: {0}", stopWatch.Elapsed);

            // Measuring time with Any()
            stopWatch.Restart();
            if (customers.Any())
            {
                Console.WriteLine("Customer list is not empty with any");
            }
            stopWatch.Stop();
            Console.WriteLine("Time consumed with any: {0}", stopWatch.Elapsed);
        }
    }

    public class Customer
    {
        public int CustomerId { get; set; }
        public string CustomerName { get; set; }
    }
}

In the above code you can see that I created a Customer class which has two simple properties, CustomerId and CustomerName. Then in the Main method I created a list of customers and used a for loop and an object initializer to fill the list. After that I wrote code to measure the time taken by Count() and Any() with the Stopwatch class, printing the elapsed time. It's pretty simple. Now let's run this console application to see the output.

Conclusion: In the above example you can see that there is a real performance benefit to using Any() instead of Count() when checking whether a list is empty or not. Hope you like it. Stay tuned for more...
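The contrast is clearest on a lazily evaluated sequence, where Count() has no ICollection<T> shortcut to fall back on and must enumerate everything. A small illustrative sketch (the names and numbers are arbitrary, not from the original post):

using System;
using System.Collections.Generic;
using System.Linq;

class AnyVsCount
{
    static void Main()
    {
        // A lazy sequence: elements are produced only while enumerating.
        IEnumerable<int> numbers = Enumerable.Range(1, 10000000)
                                             .Where(n => n % 3 == 0);

        // Enumerates every element just to learn the sequence is non-empty.
        bool viaCount = numbers.Count() > 0;

        // Stops after finding the first matching element.
        bool viaAny = numbers.Any();

        Console.WriteLine("{0} {1}", viaCount, viaAny);
    }
}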
January 21, 2013
by Jalpesh Vadgama
· 24,576 Views
ActiveMQ: Securing the ActiveMQ Web Console in Tomcat
This post will demonstrate how to secure the ActiveMQ Web Console with a username and password when deployed in the Apache Tomcat web server. The Apache ActiveMQ documentation on the Web Console provides a good example of how this is done for Jetty, the default web server shipped with ActiveMQ; this post shows how it is done when deploying the Web Console in Tomcat.

To demonstrate, the first thing you will need to do is grab the latest distribution of ActiveMQ. For the purpose of this demonstration I will be using the 5.5.1-fuse-09-16 release, which can be obtained via the Red Hat Support Portal or via the FuseSource repository. You will need:

- ActiveMQ
- ActiveMQ Web Console
- Tomcat
- mysql-connector-java-5.1.18-bin.jar

Once you have the distributions, extract and start the broker. If you don't already have Tomcat installed, you can grab it from the link above as well; I am using Tomcat 6.0.36 in this demonstration. Next, create a directory called activemq-console in the Tomcat webapps directory and extract the ActiveMQ Web Console war into it using the jar -xf command.

With all the binaries installed and our broker running, we can begin configuring our web app and Tomcat to secure the Web Console. First, open the ActiveMQ Web Console's web descriptor, which can be found at activemq-console/WEB-INF/web.xml, and add the following configuration:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Authenticate entire app</web-resource-name>
        <url-pattern>/*</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>activemq</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
</login-config>

This configuration enables the security constraint on the entire application, as noted by the /* url-pattern. Another point to notice is the auth-constraint, which has been set to the activemq role; we will define this shortly. And lastly, note that this is configured for basic authentication. This means the username and password are base64 encoded but not truly encrypted; to improve the security further, you could enable a secure transport such as SSL.

Now let's configure the Tomcat server to validate the activemq role we just specified in the web app. Out of the box, Tomcat is configured to use the UserDatabaseRealm. This is configured in [TOMCAT_HOME]/conf/server.xml and instructs the web server to validate against the tomcat-users.xml file, which can be found in [TOMCAT_HOME]/conf as well. Open the tomcat-users.xml file and define the activemq role plus a user with that role (choose your own username and password):

<role rolename="activemq"/>
<user username="admin" password="admin" roles="activemq"/>

The last thing we need to do before starting our Tomcat server is add the configuration required to communicate with the broker. First, copy the activemq-all jar into the Tomcat lib directory. Next, open the catalina.sh/catalina.bat startup script and add the following configuration to initialize the JAVA_OPTS variable:

JAVA_OPTS="-Dwebconsole.jms.url=tcp://localhost:61616 -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi -Dwebconsole.jmx.user= -Dwebconsole.jmx.password="

Now we are ready to start the Tomcat server. Once started, you should be able to access the ActiveMQ Web Console at http://localhost:8080/activemq-console, where you should be prompted with a basic authentication dialog. Once you enter the username and password, you should get logged into the ActiveMQ Web Console. As mentioned before, the username and password are base64 encoded, and each request is authenticated against the UserDatabaseRealm. The browser will retain your username and password in memory, so you will need to exit the browser to end the session.
What you have seen so far is simple authentication using the UserDatabaseRealm, which keeps its list of users in a text file. Next we will look at configuring the ActiveMQ Web Console to use a JDBCRealm, which authenticates against users stored in a database. Let's first create a new database using MySQL:

mysql> CREATE DATABASE tomcat_users;
Query OK, 1 row affected (0.00 sec)

Provide the appropriate permissions on this database to a database user:

mysql> GRANT ALL ON tomcat_users.* TO 'activemq'@'localhost';
Query OK, 0 rows affected (0.02 sec)

Then you can log in to the database and create the following tables:

mysql> USE tomcat_users;
Database changed

mysql> CREATE TABLE tomcat_users (
    ->   user_name varchar(20) NOT NULL PRIMARY KEY,
    ->   password varchar(32) NOT NULL
    -> );
Query OK, 0 rows affected (0.10 sec)

mysql> CREATE TABLE tomcat_roles (
    ->   role_name varchar(20) NOT NULL PRIMARY KEY
    -> );
Query OK, 0 rows affected (0.05 sec)

mysql> CREATE TABLE tomcat_users_roles (
    ->   user_name varchar(20) NOT NULL,
    ->   role_name varchar(20) NOT NULL,
    ->   PRIMARY KEY (user_name, role_name),
    ->   CONSTRAINT tomcat_users_roles_foreign_key_1 FOREIGN KEY (user_name) REFERENCES tomcat_users (user_name),
    ->   CONSTRAINT tomcat_users_roles_foreign_key_2 FOREIGN KEY (role_name) REFERENCES tomcat_roles (role_name)
    -> );
Query OK, 0 rows affected (0.06 sec)

Next, seed the tables with the user and role information:

mysql> INSERT INTO tomcat_users (user_name, password) VALUES ('admin', 'dbpass');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO tomcat_roles (role_name) VALUES ('activemq');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO tomcat_users_roles (user_name, role_name) VALUES ('admin', 'activemq');
Query OK, 1 row affected (0.00 sec)

Now we can verify the information in our database:

mysql> select * from tomcat_users;
+-----------+----------+
| user_name | password |
+-----------+----------+
| admin     | dbpass   |
+-----------+----------+
1 row in set (0.00 sec)

mysql> select * from tomcat_users_roles;
+-----------+-----------+
| user_name | role_name |
+-----------+-----------+
| admin     | activemq  |
+-----------+-----------+
1 row in set (0.00 sec)

If you left the Tomcat server running from the first part of this demonstration, shut it down at this time so we can change the configuration to use the JDBCRealm. In the server.xml file, located in [TOMCAT_HOME]/conf, we need to comment out the existing UserDatabaseRealm and add the JDBCRealm:

<Realm className="org.apache.catalina.realm.JDBCRealm"
       driverName="com.mysql.jdbc.Driver"
       connectionURL="jdbc:mysql://localhost:3306/tomcat_users"
       connectionName="activemq" connectionPassword=""
       userTable="tomcat_users" userNameCol="user_name" userCredCol="password"
       userRoleTable="tomcat_users_roles" roleNameCol="role_name"/>

Looking at the JDBCRealm, you can see we are using the MySQL JDBC driver, the connection URL is configured to connect to the tomcat_users database using the specified credentials, and the table and column names used in our database have been specified. Now the Tomcat server can be started again. This time when you log in to the ActiveMQ Web Console, use the username and password specified when loading the database tables.

That's all there is to it: you now know how to configure the ActiveMQ Web Console to use Tomcat's UserDatabaseRealm and JDBCRealm. The following sites were helpful in gathering this information:

http://activemq.apache.org/web-console.html
http://www.avajava.com/tutorials/lessons/how-do-i-use-a-jdbc-realm-with-tomcat-and-mysql.html?page=1
http://oreilly.com/pub/a/java/archive/tomcat-tips.html?page=1
January 21, 2013
by Jason Sherman
· 11,902 Views
Assign a Fixed IP to an AWS EC2 Instance
As described in my previous post, the IP (and DNS) of your running EC2 AMI will change after a reboot of that instance. Of course, this makes it very hard to make the applications on that machine available to the outside world, like, in this case, our WordPress blog. That is where Elastic IP comes to the rescue: with this feature you can assign a static IP to your instance. Assign one to your application as follows:

- Click on the Elastic IPs link in the AWS Console.
- Allocate a new address.
- Associate the address with a running instance: right-click to associate the IP with an instance, pick the instance to assign the IP to, and note the IP being assigned to your instance.

If you browse to the IP address you were assigned, you see the home page of your server. And the nicest thing is that if you stop and start your instance, you will receive a new public DNS, but your instance is still assigned to the Elastic IP address.

One important note: as long as an Elastic IP address is associated with a running instance, there is no charge for it. However, an address that is not associated with a running instance costs $0.01/hour. This prevents users from 'reserving' addresses while they are not being used.
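The same two steps can also be scripted. A minimal sketch using the AWS command line interface (the instance ID and resulting address below are placeholders, not values from this post):

# Allocate a new Elastic IP address
aws ec2 allocate-address

# Associate it with a running instance
aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 203.0.113.10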
January 20, 2013
by Eric Genesky
· 22,667 Views
Using Redis with Spring
As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more and more often consider using some (or several) of them instead of (or side by side with) a traditional RDBMS. I have already covered my experience with MongoDB in previous posts. In this post I would like to switch gears a bit towards Redis, an advanced key-value store. Aside from very rich key-value semantics, Redis also supports pub/sub messaging and transactions. In this post I am only going to touch the surface and demonstrate how simple it is to integrate Redis into your Spring application.

As always, we start with the Maven POM file for our project:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.spring</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.0.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>1.0.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.0.0</version>
            <type>jar</type>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>

Spring Data Redis is another project under the Spring Data umbrella which provides seamless injection of Redis into your application. There are several Redis clients for Java, and I have chosen Jedis, as it is stable and recommended by the Redis team at the moment of writing this post.

We will start with a simple configuration and introduce the necessary components first; then, as we move forward, the configuration will be extended a bit to demonstrate pub/sub capabilities. Thanks to Java config support, we can create the configuration class and have all our dependencies strongly typed; no XML anymore:

package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class AppConfig {
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate< String, Object > redisTemplate() {
        final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
        template.setConnectionFactory( jedisConnectionFactory() );
        template.setKeySerializer( new StringRedisSerializer() );
        template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        return template;
    }
}

That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub/sub implementation. Storing and retrieving a key/value pair is very simple:

@Autowired private RedisTemplate< String, Object > template;

public Object getValue( final String key ) {
    return template.opsForValue().get( key );
}

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
}

Optionally, the key could be set to expire (yet another useful feature of Redis); e.g., let our keys expire in 1 second:

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
    template.expire( key, 1, TimeUnit.SECONDS );
}

Arbitrary objects could be saved into Redis as hashes (maps); e.g.,
let's save an instance of some class User

public class User {
    private final Long id;
    private String name;
    private String email;

    // Setters and getters are omitted for simplicity
}

into Redis using the key pattern "user:<id>":

public void setUser( final User user ) {
    final String key = String.format( "user:%s", user.getId() );
    final Map< String, Object > properties = new HashMap< String, Object >();
    properties.put( "id", user.getId() );
    properties.put( "name", user.getName() );
    properties.put( "email", user.getEmail() );
    template.opsForHash().putAll( key, properties );
}

Respectively, the object could easily be inspected and retrieved using the id:

public User getUser( final Long id ) {
    final String key = String.format( "user:%s", id );
    final String name = ( String )template.opsForHash().get( key, "name" );
    final String email = ( String )template.opsForHash().get( key, "email" );
    return new User( id, name, email );
}

There is much, much more that could be done using Redis; I highly encourage you to take a look at it. It surely is not a silver bullet, but it can solve many challenging problems very easily. Finally, let me show how to use pub/sub messaging with Redis. Let's add a bit more configuration (as part of the AppConfig class):

@Bean
MessageListenerAdapter messageListener() {
    return new MessageListenerAdapter( new RedisMessageListener() );
}

@Bean
RedisMessageListenerContainer redisContainer() {
    final RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory( jedisConnectionFactory() );
    container.addMessageListener( messageListener(), new ChannelTopic( "my-queue" ) );
    return container;
}

The style of message listener definition should look very familiar to Spring users: generally, it is the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
    @Override
    public void onMessage(Message message, byte[] pattern) {
        System.out.println( "Received by RedisMessageListener: " + message.toString() );
    }
}

Now that we have our message listener, let's see how we can push messages into the queue using Redis. As always, it's pretty simple:

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
    template.execute(
        new RedisCallback< Long >() {
            @SuppressWarnings( "unchecked" )
            @Override
            public Long doInRedis( RedisConnection connection ) throws DataAccessException {
                // The channel name must match the ChannelTopic the container subscribes to.
                return connection.publish(
                    ( ( RedisSerializer< String > )template.getKeySerializer() ).serialize( "my-queue" ),
                    ( ( RedisSerializer< Object > )template.getValueSerializer() ).serialize( message ) );
            }
        }
    );
}

That's basically it for this very quick introduction, but it is definitely enough to fall in love with Redis.
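For completeness, RedisTemplate also offers a one-line publishing helper that hides the RedisCallback boilerplate shown above; a minimal sketch:

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
    // Serializes the message with the configured value serializer
    // and publishes it to the "my-queue" channel.
    template.convertAndSend( "my-queue", message );
}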
January 17, 2013
by Andriy Redko
· 80,685 Views · 26 Likes
TaskletStep Oriented Processing in Spring Batch
Many enterprise applications require batch processing to handle billions of transactions every day. These big transaction sets have to be processed without performance problems. Spring Batch is a lightweight and robust batch framework for processing such big data sets. Spring Batch offers 'TaskletStep oriented' and 'chunk oriented' processing styles; in this article, the TaskletStep oriented processing model is explained.

Let us investigate the fundamental Spring Batch components:

Job: An entity that encapsulates an entire batch process. Steps and Tasklets are defined under a Job.
Step: A domain object that encapsulates an independent, sequential phase of a batch job.
JobInstance: A batch domain object representing a uniquely identifiable job run; its identity is given by the pair of Job and JobParameters.
JobParameters: A value object representing runtime parameters to a batch job.
JobExecution: A JobExecution refers to the technical concept of a single attempt to run a Job. An execution may end in failure or success, but the JobInstance corresponding to a given execution will not be considered complete unless the execution completes successfully.
JobRepository: An interface responsible for persistence of batch meta-data entities. In the following sample, an in-memory repository is used via MapJobRepositoryFactoryBean.
JobLauncher: An interface exposing a run method, which launches and controls the defined jobs.
Tasklet: An interface exposing an execute method, which will be called repeatedly until it either returns RepeatStatus.FINISHED or throws an exception to signal a failure. It is used when neither readers nor writers are required, as in the following sample.

Let us take a look at how to develop the TaskletStep oriented processing model.

Used technologies: JDK 1.7.0_09, Spring 3.1.3, Spring Batch 2.1.9, Maven 3.0.4

STEP 1 : CREATE MAVEN PROJECT

A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : LIBRARIES

Firstly, dependencies are added to Maven's pom.xml:

<properties>
    <spring.version>3.1.3.RELEASE</spring.version>
    <spring-batch.version>2.1.9.RELEASE</spring-batch.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.batch</groupId>
        <artifactId>spring-batch-core</artifactId>
        <version>${spring-batch.version}</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.16</version>
    </dependency>
</dependencies>

maven-compiler-plugin (a Maven plugin) is used to compile the project with JDK 1.7:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <source>1.7</source>
        <target>1.7</target>
    </configuration>
</plugin>

The following Maven plugin can be used to create a runnable jar, setting com.onlinetechvision.exe.Application as the main class and merging the META-INF/spring.handlers and META-INF/spring.schemas resources:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.onlinetechvision.exe.Application</mainClass>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.handlers</resource>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.schemas</resource>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

STEP 3 : CREATE SuccessfulStepTasklet TASKLET

SuccessfulStepTasklet is created by implementing the Tasklet interface. It illustrates business logic in a successful step.
package com.onlinetechvision.tasklet;

import org.apache.log4j.Logger;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/**
 * SuccessfulStepTasklet Class illustrates a successful job
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class SuccessfulStepTasklet implements Tasklet {

    private static final Logger logger = Logger.getLogger(SuccessfulStepTasklet.class);

    private String taskResult;

    /**
     * Executes SuccessfulStepTasklet
     *
     * @param stepContribution StepContribution
     * @param chunkContext ChunkContext
     * @return RepeatStatus
     * @throws Exception
     */
    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        logger.debug("Task Result : " + getTaskResult());
        return RepeatStatus.FINISHED;
    }

    public String getTaskResult() {
        return taskResult;
    }

    public void setTaskResult(String taskResult) {
        this.taskResult = taskResult;
    }
}

STEP 4 : CREATE FailedStepTasklet TASKLET
FailedStepTasklet is created by implementing the Tasklet interface. It illustrates the business logic of a failed step.

package com.onlinetechvision.tasklet;

import org.apache.log4j.Logger;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/**
 * FailedStepTasklet Class illustrates a failed job.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class FailedStepTasklet implements Tasklet {

    private static final Logger logger = Logger.getLogger(FailedStepTasklet.class);

    private String taskResult;

    /**
     * Executes FailedStepTasklet
     *
     * @param stepContribution StepContribution
     * @param chunkContext ChunkContext
     * @return RepeatStatus
     * @throws Exception
     */
    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        logger.debug("Task Result : " + getTaskResult());
        throw new Exception("Error occurred!");
    }

    public String getTaskResult() {
        return taskResult;
    }

    public void setTaskResult(String taskResult) {
        this.taskResult = taskResult;
    }
}

STEP 5 : CREATE BatchProcessStarter CLASS
BatchProcessStarter Class is created to launch the jobs. It also logs their execution results. A completed JobInstance cannot be restarted with the same parameter(s), because it already exists in the job repository; a JobInstanceAlreadyCompleteException is thrown with the description "A job instance already exists and is complete". It can, however, be restarted with different parameters. In the following sample, a different currentTime parameter is set in order to restart FirstJob.
package com.onlinetechvision.spring.batch;

import org.apache.log4j.Logger;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersInvalidException;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.JobRestartException;

/**
 * BatchProcessStarter Class launches the jobs and logs their execution results.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class BatchProcessStarter {

    private static final Logger logger = Logger.getLogger(BatchProcessStarter.class);

    private Job firstJob;
    private Job secondJob;
    private Job thirdJob;
    private JobLauncher jobLauncher;
    private JobRepository jobRepository;

    /**
     * Starts the jobs and logs their execution results.
     */
    public void start() {
        JobExecution jobExecution = null;
        JobParametersBuilder builder = new JobParametersBuilder();

        try {
            builder.addLong("currentTime", new Long(System.currentTimeMillis()));

            getJobLauncher().run(getFirstJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            getJobLauncher().run(getSecondJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getSecondJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            getJobLauncher().run(getThirdJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getThirdJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            builder.addLong("currentTime", new Long(System.currentTimeMillis()));
            getJobLauncher().run(getFirstJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

        } catch (JobExecutionAlreadyRunningException | JobRestartException
                | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
            logger.error(e);
        }
    }

    public Job getFirstJob() { return firstJob; }
    public void setFirstJob(Job firstJob) { this.firstJob = firstJob; }

    public Job getSecondJob() { return secondJob; }
    public void setSecondJob(Job secondJob) { this.secondJob = secondJob; }

    public Job getThirdJob() { return thirdJob; }
    public void setThirdJob(Job thirdJob) { this.thirdJob = thirdJob; }

    public JobLauncher getJobLauncher() { return jobLauncher; }
    public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; }

    public JobRepository getJobRepository() { return jobRepository; }
    public void setJobRepository(JobRepository jobRepository) { this.jobRepository = jobRepository; }
}

STEP 6 : CREATE applicationContext.xml
The Spring configuration file applicationContext.xml is created. It covers the Tasklet and BatchProcessStarter bean definitions (a sketch of this configuration is shown at the end of the article).

STEP 7 : CREATE jobContext.xml
The Spring configuration file jobContext.xml is created (likewise sketched at the end of the article). The jobs' flows are the following:

FirstJob's flow :
1) FirstStep is started.
2) After FirstStep is completed with COMPLETED status, SecondStep is started.
3) After SecondStep is completed with COMPLETED status, ThirdStep is started.
4) After ThirdStep is completed with COMPLETED status, FirstJob execution is completed with COMPLETED status.

SecondJob's flow :
1) FourthStep is started.
2) After FourthStep is completed with COMPLETED status, FifthStep is started.
3) After FifthStep is completed with COMPLETED status, SecondJob execution is completed with COMPLETED status.

ThirdJob's flow :
1) SixthStep is started.
2) After SixthStep is completed with COMPLETED status, SeventhStep is started.
3) After SeventhStep is completed with FAILED status, ThirdJob execution is completed with FAILED status.

FirstJob's second run follows the same flow as its first execution.

STEP 8 : CREATE Application CLASS
The Application Class is created to run the application.

package com.onlinetechvision.exe;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.onlinetechvision.spring.batch.BatchProcessStarter;

/**
 * Application Class starts the application.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Application {

    /**
     * Starts the application
     *
     * @param args String[]
     */
    public static void main(String[] args) {
        ApplicationContext appContext = new ClassPathXmlApplicationContext("jobContext.xml");
        BatchProcessStarter batchProcessStarter = (BatchProcessStarter) appContext.getBean("batchProcessStarter");
        batchProcessStarter.start();
    }
}

STEP 9 : BUILD PROJECT
After the OTV_SpringBatch_TaskletStep_Oriented_Processing project is built, OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar is created.

STEP 10 : RUN PROJECT
When the created OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar file is run, the following console output logs are shown.

First Job's console output :
25.11.2012 21:29:19 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:19 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=0, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:19 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN; exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=firstJob.firstStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.firstStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [firstStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=1 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : First Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=1 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=1, version=3, name=firstStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.firstStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.secondStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [secondStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=2 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Second Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=2 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=2, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=3 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=3, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=0, version=1, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:19 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:44) - JobExecution: id=0, version=2, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] Second Job’ s console output : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=secondJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=1, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=secondJob.fourthStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fourthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fourthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=4 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fourth Task is executed... 
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=4, version=3, name=fourthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fourthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fifthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fifthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=5 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fifth Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=5, version=3, name=fifthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fifthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.end5 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.end5 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=1, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=secondJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:48) - JobExecution: id=1, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] Third Job’ s console output : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=thirdJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=2, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=thirdJob.sixthStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.sixthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [sixthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=6 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Sixth Task is executed... 
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=6, version=3, name=sixthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.sixthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.seventhStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [seventhStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=7 25.11.2012 21:29:20 DEBUG (FailedStepTasklet.java:33) - Task Result : Error occurred! 25.11.2012 21:29:20 DEBUG (TaskletStep.java:456) - Rollback for Exception: java.lang.Exception: Error occurred! 25.11.2012 21:29:20 DEBUG (TransactionTemplate.java:152) - Initiating transaction rollback on application exception ... 25.11.2012 21:29:20 DEBUG (AbstractPlatformTransactionManager.java:821) - Initiating transaction rollback 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:54) - Rolling back resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@40874c04] 25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:291) - Handling exception: java.lang.Exception, caused by: java.lang.Exception: Error occurred! 25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:251) - Handling fatal exception explicitly (rethrowing first of 1): java.lang.Exception: Error occurred! 25.11.2012 21:29:20 ERROR (AbstractStep.java:222) - Encountered an error executing the step ... 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@66a7d863] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=7, version=2, name=seventhStep, status=FAILED, exitStatus=FAILED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=0, rollbackCount=1 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@156f803c] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.seventhStep with status=FAILED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.fail8 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.fail8 with status=FAILED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=2, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=thirdJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [FAILED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:52) - JobExecution: id=2, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED; exitDescription=, job=[JobInstance: id=2, version=0, 
JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] First Job’ s console output after restarting : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{currentTime=1353878960660}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=3, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=firstJob.firstStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.firstStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [firstStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=8 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : First Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=8 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=8, version=3, name=firstStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.firstStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.secondStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [secondStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=9 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Second Task is executed... 25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=9 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=9, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=10 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed... 
25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=10 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=10, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=3, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878960660}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:57) - JobExecution: id=3, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] STEP 11 : DOWNLOAD https://github.com/erenavsarogullari/OTV_SpringBatch_TaskletStep REFERENCES : Spring Batch – Reference Documentation Spring Batch – API Documentation
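Since the applicationContext.xml and jobContext.xml listings referenced in STEP 6 and STEP 7 were not reproduced above, here is a minimal sketch of what the jobContext.xml wiring for FirstJob and the starter bean might look like. The bean ids, the batch namespace setup, and the property values are assumptions inferred from the code and logs, not the original file:

<!-- Sketch only: reconstructed from the article's prose; secondTasklet and
     thirdTasklet (and the other jobs) are defined analogously -->
<bean id="firstTasklet" class="com.onlinetechvision.tasklet.SuccessfulStepTasklet">
    <property name="taskResult" value="First Task is executed..."/>
</bean>

<batch:job id="firstJob">
    <batch:step id="firstStep" next="secondStep">
        <batch:tasklet ref="firstTasklet"/>
    </batch:step>
    <batch:step id="secondStep" next="thirdStep">
        <batch:tasklet ref="secondTasklet"/>
    </batch:step>
    <batch:step id="thirdStep">
        <batch:tasklet ref="thirdTasklet"/>
    </batch:step>
</batch:job>

<bean id="batchProcessStarter" class="com.onlinetechvision.spring.batch.BatchProcessStarter">
    <property name="firstJob" ref="firstJob"/>
    <property name="secondJob" ref="secondJob"/>
    <property name="thirdJob" ref="thirdJob"/>
    <property name="jobLauncher" ref="jobLauncher"/>
    <property name="jobRepository" ref="jobRepository"/>
</bean>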
January 17, 2013
by Eren Avsarogullari
· 21,694 Views · 1 Like
Changes to String.substring in Java 7
This post was originally published as part of the Java Advent series. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the Java Advent blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!

It is common knowledge that Java optimizes the substring operation for the case where you generate a lot of substrings of the same source string. It does this by using the (value, offset, count) way of storing the information. See an example below: in the diagram you would see the strings "hello" and "world!" derived from "hello world!" and the way they are represented in the heap - there is one character array containing "hello world!" and two references into it. This method of storage is advantageous in some cases, for example for a compiler which tokenizes source files. In other instances it may lead you to an OutOfMemoryError (if you are routinely reading long strings and only keeping a small part of them, the above mechanism prevents the GC from collecting the original string buffer). Some even call it a bug. I wouldn't go so far, but it's certainly a leaky abstraction, because you were forced to do the following to ensure that a copy was made: new String(str.substring(5, 6)).

This all changed in May of 2012, with Java 7u6. The pendulum has swung back and now full copies are made by default. What does this mean for you?

For most, it is probably just a nice piece of Java trivia. If you are writing parsers and such, you can no longer rely on the implicit caching provided by String; you will need to implement a similar mechanism based on buffering and a custom implementation of CharSequence. If you were doing new String(str.substring(...)) to force a copy of the character buffer, you can stop as soon as you update to the latest Java 7 (and you need to do that quite soon, since Java 6 is being EOLed as we speak).

Thankfully the development of Java is an open process and such information is at the fingertips of everyone! A couple more references (since we don't say pointers in Java) related to strings: if you are storing the same string over and over again (maybe you're parsing messages from a socket, for example), you should read up on alternatives to String.intern() (and also consider reading item 50 from the second edition of Effective Java: avoid strings where other types are more appropriate). Look into (and do benchmarks before using them!) options like UseCompressedStrings (which seems to have been removed), UseStringCache and StringCache.

Hope I didn't string you along too much and you found this useful! Until next time - Attila Balazs

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license.
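To make the practical consequence concrete, here is a small, self-contained Java sketch of the defensive-copy idiom discussed above (the string content and indices are invented for illustration):

import java.util.ArrayList;
import java.util.List;

public class SubstringDemo {
    public static void main(String[] args) {
        // Imagine 'line' is a very long record read from a file.
        String line = "hello world! ... imagine megabytes of text here ...";

        List<String> keys = new ArrayList<>();

        // Before Java 7u6, this kept the whole 'line' char[] alive,
        // because the substring shared the parent's value array.
        keys.add(line.substring(0, 5));

        // The classic defensive idiom: force a private copy so the
        // large parent array can be garbage collected. From 7u6 on,
        // substring() already copies, making this redundant.
        keys.add(new String(line.substring(6, 12)));

        System.out.println(keys);
    }
}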
January 16, 2013
by Attila-Mihaly Balazs
· 10,756 Views
Building a Game With JavaScript: Start Screen
This is a continuation from the previous post.

Specification

Many games have a start screen or main menu of some sort. (Though I love games like Braid that bypass the whole notion.) Let's begin by designing our start screen. We'll have a solid color background, perhaps the ever-lovely cornflower blue. Then we'll draw the name of our game and provide an instruction to the player. In order to make sure we have the player's attention, we'll animate the color of the instruction: it will morph from black to red and back again. Finally, when the player clicks the screen, we'll transition to the main game. Or at least we'll stub out the transition. Here's a demo based on the code we'll cover later in this post (as well as that from the previous post).

Implementation

Here's the code to implement our start screen.

// `input` will be defined elsewhere; it's a means
// for us to capture the state of input from the player
var startScreen = (function(input) {
    // the red component of rgb
    var hue = 0;
    // are we moving toward red or black?
    var direction = 1;
    var transitioning = false;
    // record the input state from last frame
    // because we need to compare it in the
    // current frame
    var wasButtonDown = false;

    // a helper function used internally to draw the text
    // in the center of the canvas (with respect to the x coordinate)
    function centerText(ctx, text, y) {
        var measurement = ctx.measureText(text);
        var x = (ctx.canvas.width - measurement.width) / 2;
        ctx.fillText(text, x, y);
    }

    // draw the main menu to the canvas
    function draw(ctx, elapsed) {
        // let's draw the text in the middle of the canvas
        // note that it's inefficient to calculate this in every
        // frame since it never changes; however, I leave it
        // here for simplicity
        var y = ctx.canvas.height / 2;

        // create a css color from the `hue`
        var color = 'rgb(' + hue + ',0,0)';

        // clear the entire canvas
        // (this is not strictly necessary since we are always
        // updating the same pixels for this screen, however it
        // is generally what you need to do.)
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

        // draw the title of the game
        // this is static and doesn't change
        ctx.fillStyle = 'white';
        ctx.font = '48px monospace';
        centerText(ctx, 'My Awesome Game', y);

        // draw instructions to the player
        // this animates the color based on the value of `hue`
        ctx.fillStyle = color;
        ctx.font = '24px monospace';
        centerText(ctx, 'click to begin', y + 30);
    }

    // update the color we're drawing and
    // check for input from the user
    function update() {
        // we want `hue` to oscillate between 0 and 255
        hue += 1 * direction;
        if (hue > 255) direction = -1;
        if (hue < 0) direction = 1;
        // note that this logic is dependent on the frame rate;
        // that means if the frame rate is slow then the animation
        // is slow. we could make it independent of the frame rate,
        // but we'll come to that later.

        // here we magically capture the state of the mouse.
        // notice that we are not dealing with events inside the
        // game loop. we'll come back to this too.
        var isButtonDown = input.isButtonDown();

        // we want to know if the input (mouse click) _just_ happened;
        // that means we only want to transition away from the menu to
        // the game if there _was_ input on the last frame _but none_
        // on the current one.
        var mouseJustClicked = !isButtonDown && wasButtonDown;

        // we also check the value of `transitioning` so that we don't
        // initiate the transition logic more than once (like if the player
        // clicked the mouse repeatedly before we finished transitioning)
        if (mouseJustClicked && !transitioning) {
            transitioning = true;
            // do something here to transition to the actual game
        }

        // record the state of input for use in the next frame
        wasButtonDown = isButtonDown;
    }

    // this is the object that will be `startScreen`
    return {
        draw: draw,
        update: update
    };
}());

Explanation

Recall that our start screen is meant to be invoked by our game loop. The game loop doesn't know about the specifics of the start screen, but it does expect it to have a certain shape. This enables us to swap out screen objects without having to modify the game loop itself. The shape that the game loop expects is this:

{
    update: function(timeElapsedSinceLastFrame) { },
    draw: function(drawingContext) { }
}

Update

Let's begin with the start screen's update function. The first bit of logic is this:

hue += 1 * direction;
if (hue > 255) direction = -1;
if (hue < 0) direction = 1;

Perhaps hue is not the best choice of variable names. It represents the red component of an RGB color value. The range of values for this component is 0 (no red) to 255 (all the reds!). On each iteration of our loop we "move" the hue towards either red or black. The variable direction can be either 1 or -1. A value of 1 means we are moving towards 255 and a value of -1 means we are moving towards 0. When we cross a boundary, we flip the direction.

Keen observers will ask why we bother with 1 * direction. In our current logic, it's an unnecessary step, and unnecessary steps in game development are generally bad. In this case, I wanted to separate the rate of change from the direction. In other words, you could modify that expression to 2 * direction and the color would change twice as fast.

This leads us to another important point. Our rate of change is tied to how quickly our loop iterates; most likely 60fps. However, it's not guaranteed to be 60fps, and that makes this approach a dangerous practice. One way to detach ourselves from the loop's speed would be to use the elapsed time that is being passed into our update function. Let's say that we want it to take 2 full seconds to go from red to black, regardless of how often the update function is called. There's a span of 256 discrete values between red and black. To make our calculations clear, let's say there are 256 units and we'll label these units R. Also, the elapsed time will be in milliseconds (ms). For a given frame, if we are given a slice of elapsed time in ms, we'll want to calculate how many R units to increase (or decrease) hue by for that slice. Our rate of change can be defined as 256 R / 2000 ms, or 0.128 R/ms. (You can read that as "0.128 units of red per millisecond".) This rate of change is a constant for our start screen and as such we can define it once (as opposed to calculating it inside the update function). Now that we have the rate of change, we only need to multiply it by the elapsed time received in update to determine how many Rs we want.
A revised version of the function would look like this:

var rate = 0.128; // R/ms

function update(elapsed) {
    var amount = rate * elapsed;
    hue += amount * direction;
    if (hue > 255) direction = -1;
    if (hue < 0) direction = 1;
}

One consequence of this change is that hue will no longer hold integral values (as much as that can be said in JavaScript). This means that we'd really want two values for the hue: an actual value and a rounded value. This is because the RGB model requires an integral value for each color component.

function update(elapsed) {
    var amount = rate * elapsed;
    hue += amount * direction;
    if (hue > 255) direction = -1;
    if (hue < 0) direction = 1;
    rounded_hue = Math.round(hue);
}

Draw

Let's turn our attention to draw for a moment. One of the first things you generally do is clear the entire screen. This is simple to do with the canvas API's clearRect method.

ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

Notice that ctx is an instance of CanvasRenderingContext2D and not an HTMLCanvasElement. However, there is a handy back reference to the canvas element that we use to grab the actual width and height. There are options other than clearing the entire canvas, but I'm not going to address them in this post. Also, there are some performance considerations. See the article listed under references.

After clearing the screen, we want to draw something new: in this case, the game title and the instructions. In both cases I want to center the text horizontally. I created a helper function that I can provide with the text to render as well as the vertical position (y).

function centerText(ctx, text, y) {
    var measurement = ctx.measureText(text);
    var x = (ctx.canvas.width - measurement.width) / 2;
    ctx.fillText(text, x, y);
}

measureText returns the width in pixels that the rendered text will take up. We use this in combination with the canvas element's width to determine the x position for the text. fillText is responsible for actually drawing the text.

The rendering context ctx is stateful, meaning that what happens when you call methods like measureText or fillText depends on the state of the rendering context. The state can be modified by setting its properties.

var y = ctx.canvas.height / 2;
ctx.fillStyle = 'white';
ctx.font = '48px monospace';
centerText(ctx, 'My Awesome Game', y);

The properties fillStyle and font change the state of the rendering context and hence affect the method calls inside of centerText. This state applies to all future method calls: all calls to fillText will use the color white until you change the fillStyle. Notice too that we are calculating the x and y values for the text on every frame. This is potentially wasteful, since these values are unlikely to change. However, if we want to respond to changes in canvas size (or even changes to the text itself) then we'd want to continue calculating these on every frame. Otherwise, if we were confident that we didn't need to do this, we could calculate these values once and cache them.

Now let's use the red component calculated in update to render the instructional text.

var color = 'rgb(' + hue + ',0,0)';
ctx.fillStyle = color;
ctx.font = '24px monospace';
centerText(ctx, 'click to begin', y + 30);

fillStyle can be set in a number of ways. Earlier, we used the simple value white. Here we are using rgb() to set the individual components explicitly. Any CSS color should work with fillStyle. (I won't be too surprised if some don't, though.)
Now you might be wondering why we bothered calculating hue inside update, since hue is all about what to draw on the screen. The reason is that draw is concerned with the mechanics of rendering. Anything that models the game state should live in update. The tell in this example is that hue is dependent on elapsed time, and draw doesn't know anything about that.

Update (again)

Moving back to update, the next bit deals with input from the player. In the sample code I've extracted the input logic away. The key thing here is that we are not relying on events to tell us about input from the player. Instead we have some helper, input in this case, that gives us the current state of the input. If event-driven logic says "tell me when this happens", then our game logic says "tell me if this is happening now". The primary reason for this is to be deterministic. We can establish at the beginning of our update what the current input state is, and know that it won't change before the next invocation of the function. In simple games this might be inconsequential, but in others it can be a subtle source of bugs.

var isButtonDown = input.isButtonDown();
var mouseJustClicked = !isButtonDown && wasButtonDown;

if (mouseJustClicked && !transitioning) {
    transitioning = true;
    // do something here to transition to the actual game
}

wasButtonDown = isButtonDown;

We only want to transition when the mouse button has been released. In this case, "released" is defined as "down on the last frame but up on this one". Hence, we need to track what the mouse button's state was on the last frame. That's wasButtonDown, and it lives outside of update. Secondly, we don't want to trigger multiple transitions. That is, if our transition takes some time (perhaps due to animation) then we want to ignore subsequent clicks. We have our transitioning variable outside of update to track that for us.

More to come…

Update: I just realized that I didn't include a shim for requestAnimationFrame in the demo on jsfiddle. That means the demo will fail on many browsers. (Of course, it will also fail if there's no canvas support either.) If it doesn't work, check your console for errors. A minimal shim sketch follows below.
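For completeness, a minimal requestAnimationFrame shim in the spirit of the commonly circulated polyfill might look like this. It is a sketch, not the exact code missing from the jsfiddle demo; the setTimeout fallback simply approximates a 60fps callback rate:

window.requestAnimationFrame = window.requestAnimationFrame ||
    window.webkitRequestAnimationFrame ||
    window.mozRequestAnimationFrame ||
    window.msRequestAnimationFrame ||
    function (callback) {
        // fall back to ~60fps timers on browsers without native support
        return window.setTimeout(function () {
            callback(Date.now());
        }, 1000 / 60);
    };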
January 15, 2013
by Christopher Bennage
· 9,097 Views
@Cacheable overhead in Spring
Spring 3.1 introduced a great caching abstraction layer. Finally we can abandon all the home-grown aspects, decorators and caching code polluting our business logic. Since then we can simply annotate heavyweight methods and let Spring and the AOP machinery do the work:

@Cacheable("books")
public Book findBook(ISBN isbn) {...}

"books" is a cache name, the isbn parameter becomes the cache key and the returned Book object will be placed under that key. The meaning of cache names is dependent on the underlying cache manager (EhCache, concurrent map, etc.) - Spring makes it easy to plug in different caching providers. But this post won't be about the caching feature in Spring...

Some time ago my teammate was optimizing quite low-level code and discovered an opportunity for caching. He quickly applied @Cacheable, just to discover that the code performed worse than it used to. He got rid of the annotation and implemented caching himself manually, using good old java.util.ConcurrentHashMap. The performance was much better. He blamed @Cacheable and Spring AOP overhead and complexity. I couldn't believe that a caching layer could perform so poorly until I had to debug the Spring caching aspects a few times myself (a nasty bug in my code - you know, cache invalidation is one of the two hardest things in CS). Well, the caching abstraction code is much more complex than one would expect (after all, it's just get and put!), but does that necessarily mean it must be that slow?

In science we don't believe and trust, we measure and benchmark. So I wrote a benchmark to precisely measure the overhead of the @Cacheable layer. The caching abstraction layer in Spring is implemented on top of Spring AOP, which can in turn be implemented on top of Java proxies, CGLIB-generated subclasses or AspectJ instrumentation. Thus I'll test the following configurations:

- no caching at all - to measure how fast the code is with no intermediate layer
- manual cache handling using ConcurrentHashMap in the business code
- @Cacheable with CGLIB implementing AOP
- @Cacheable with java.lang.reflect.Proxy implementing AOP
- @Cacheable with AspectJ compile-time weaving (as a similar benchmark shows, CTW is slightly faster than LTW)
- a home-grown AspectJ caching aspect - something between manual caching in the business code and the Spring abstraction

Let me reiterate: we are not measuring the performance gain of caching and we are not comparing various cache providers. That's why our test method is as fast as it can be and I will be using the simplest ConcurrentMapCacheManager from Spring. So here is the method in question:

public interface Calculator {
    int identity(int x);
}

public class PlainCalculator implements Calculator {

    @Cacheable("identity")
    @Override
    public int identity(int x) {
        return x;
    }
}

I know, I know, there is no point in caching such a method. But I want to measure the overhead of the caching layer (during a cache hit, to be specific).
Each caching configuration will have its own ApplicationContext, as you can't mix different proxying modes in one context:

public abstract class BaseConfig {

    @Bean
    public Calculator calculator() {
        return new PlainCalculator();
    }
}

@Configuration
class NoCachingConfig extends BaseConfig {}

@Configuration
class ManualCachingConfig extends BaseConfig {

    @Bean
    @Override
    public Calculator calculator() {
        return new CachingCalculatorDecorator(super.calculator());
    }
}

@Configuration
abstract class CacheManagerConfig extends BaseConfig {

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager();
    }
}

@Configuration
@EnableCaching(proxyTargetClass = true)
class CacheableCglibConfig extends CacheManagerConfig {}

@Configuration
@EnableCaching(proxyTargetClass = false)
class CacheableJdkProxyConfig extends CacheManagerConfig {}

@Configuration
@EnableCaching(mode = AdviceMode.ASPECTJ)
class CacheableAspectJWeaving extends CacheManagerConfig {

    @Bean
    @Override
    public Calculator calculator() {
        return new SpringInstrumentedCalculator();
    }
}

@Configuration
@EnableCaching(mode = AdviceMode.ASPECTJ)
class AspectJCustomAspect extends CacheManagerConfig {

    @Bean
    @Override
    public Calculator calculator() {
        return new ManuallyInstrumentedCalculator();
    }
}

Each @Configuration class represents one application context. CachingCalculatorDecorator is a decorator around the real calculator that does the caching (welcome to the 1990s):

public class CachingCalculatorDecorator implements Calculator {

    private final Map<Integer, Integer> cache = new java.util.concurrent.ConcurrentHashMap<Integer, Integer>();
    private final Calculator target;

    public CachingCalculatorDecorator(Calculator target) {
        this.target = target;
    }

    @Override
    public int identity(int x) {
        final Integer existing = cache.get(x);
        if (existing != null) {
            return existing;
        }
        final int newValue = target.identity(x);
        cache.put(x, newValue);
        return newValue;
    }
}

SpringInstrumentedCalculator and ManuallyInstrumentedCalculator are exactly the same as PlainCalculator, but they are instrumented by the AspectJ compile-time weaver with the Spring aspect and my custom aspect respectively. My custom caching aspect looks like this:

public aspect ManualCachingAspect {

    private final Map<Integer, Integer> cache = new ConcurrentHashMap<Integer, Integer>();

    pointcut cacheMethodExecution(int x):
        execution(int com.blogspot.nurkiewicz.cacheable.calculator.ManuallyInstrumentedCalculator.identity(int)) && args(x);

    Object around(int x): cacheMethodExecution(x) {
        final Integer existing = cache.get(x);
        if (existing != null) {
            return existing;
        }
        final Object newValue = proceed(x);
        cache.put(x, (Integer) newValue);
        return newValue;
    }
}

After all this preparation we can finally write the benchmark itself. At the beginning I start all the application contexts and fetch the Calculator instances. Each instance is different. For example, noCaching is a PlainCalculator instance with no wrappers, cacheableCglib is a CGLIB-generated subclass, while aspectJCustom is an instance of ManuallyInstrumentedCalculator with my custom aspect woven in.
private final Calculator noCaching = fromSpringContext(NoCachingConfig.class);
private final Calculator manualCaching = fromSpringContext(ManualCachingConfig.class);
private final Calculator cacheableCglib = fromSpringContext(CacheableCglibConfig.class);
private final Calculator cacheableJdkProxy = fromSpringContext(CacheableJdkProxyConfig.class);
private final Calculator cacheableAspectJ = fromSpringContext(CacheableAspectJWeaving.class);
private final Calculator aspectJCustom = fromSpringContext(AspectJCustomAspect.class);

private static Calculator fromSpringContext(Class<?> config) {
    return new AnnotationConfigApplicationContext(config).getBean(Calculator.class);
}

I'm going to exercise each Calculator instance with the following test. The additional accumulator is necessary, otherwise the JVM might optimize away the whole loop (!):

private int benchmarkWith(Calculator calculator, int reps) {
    int accum = 0;
    for (int i = 0; i < reps; ++i) {
        accum += calculator.identity(i % 16);
    }
    return accum;
}

Here is the full Caliper test, without the parts already discussed:

public class CacheableBenchmark extends SimpleBenchmark {
    //...

    public int timeNoCaching(int reps) {
        return benchmarkWith(noCaching, reps);
    }

    public int timeManualCaching(int reps) {
        return benchmarkWith(manualCaching, reps);
    }

    public int timeCacheableWithCglib(int reps) {
        return benchmarkWith(cacheableCglib, reps);
    }

    public int timeCacheableWithJdkProxy(int reps) {
        return benchmarkWith(cacheableJdkProxy, reps);
    }

    public int timeCacheableWithAspectJWeaving(int reps) {
        return benchmarkWith(cacheableAspectJ, reps);
    }

    public int timeAspectJCustom(int reps) {
        return benchmarkWith(aspectJCustom, reps);
    }
}

I hope you are still following our experiment. We are now going to execute Calculator.identity() millions of times and see which caching configuration performs best. Since we only call identity() with 16 different arguments, we hardly ever touch the method itself, as we always get a cache hit. Curious to see the results?

benchmark                        ns   linear runtime
NoCaching                      1.77   =
ManualCaching                 23.84   =
CacheableWithCglib          1576.42   ==============================
CacheableWithJdkProxy       1551.03   =============================
CacheableWithAspectJWeaving 1514.83   ============================
AspectJCustom                 22.98   =

Interpretation

Let's go step by step. First of all, calling a method in Java is pretty darn fast! 1.77 nanoseconds - we are talking here about 3 CPU cycles on my Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00GHz! If this doesn't convince you that Java is fast, I don't know what will. But back to our test. The hand-made caching decorator is also pretty fast. Of course it's slower by an order of magnitude compared to a pure function call, but still blazingly fast compared to all the @Cacheable benchmarks, where we see a slowdown of three orders of magnitude, from 1.8 ns to 1.5 μs.

I'm especially disappointed by @Cacheable backed by AspectJ. After all, the caching aspect is precompiled directly into my Java .class file; I would expect it to be much faster compared to dynamic proxies and CGLIB. But that doesn't seem to be the case. All three Spring AOP techniques are similar. The greatest surprise is my custom AspectJ aspect. It's even faster than CachingCalculatorDecorator! Maybe it's due to the polymorphic call in the decorator? I strongly encourage you to clone this benchmark on GitHub and run it (mvn clean test, takes around 2 minutes) to compare your results.

Conclusions

You might be wondering why the Spring abstraction layer is so slow?
Well, first of all, check out the core implementation in CacheAspectSupport - it's actually quite complex. Secondly, is it really that slow? Do the math - you typically use Spring in business applications where the database, the network and external APIs are the bottleneck. What latencies do you typically see? Milliseconds? Tens or hundreds of milliseconds? Now add an overhead of 2 μs (worst-case scenario). For caching database queries or REST calls this is completely negligible; it doesn't matter which technique you choose. But if you are caching very low-level methods close to the metal, like CPU-intensive, in-memory computations, the Spring abstraction layer might be overkill. The bottom line: measure! PS: both the benchmark and the contents of this article in Markdown format are freely available.
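Caliper has since been largely superseded by JMH for JVM microbenchmarking. If you wanted to reproduce one of these measurements with JMH, a rough, hypothetical sketch might look like the following; the benchmark class and its wiring are mine, not part of the original experiment - only CacheableCglibConfig and Calculator come from the article. Returning the computed value lets JMH defeat dead-code elimination, playing the role of the accumulator above:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

@State(Scope.Benchmark)
public class CacheableJmhBenchmark {

    private Calculator cacheableCglib;
    private int i;

    @Setup
    public void setUp() {
        // same context bootstrap as in the Caliper version
        cacheableCglib = new AnnotationConfigApplicationContext(CacheableCglibConfig.class)
                .getBean(Calculator.class);
    }

    @Benchmark
    public int cacheHitThroughCglibProxy() {
        // rotate through 16 keys so that every call is a cache hit
        return cacheableCglib.identity(i++ % 16);
    }
}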
January 15, 2013
by Tomasz Nurkiewicz
· 29,277 Views · 2 Likes
An Introduction to STOMP
STOMP (Simple (or Streaming) Text-Oriented Messaging Protocol) is a simple text-oriented protocol, similar to HTTP. STOMP provides an interoperable wire format that allows clients to communicate with almost every available message broker. STOMP is easy to implement and gives you flexibility, since it is language-agnostic, meaning clients and brokers developed in different languages can send and receive messages to and from each other. There are lots of server implementations that support STOMP (mostly compliant with the STOMP 1.0 specification). The following is a list of STOMP 1.0 compliant message servers:

- Apache ActiveMQ - http://activemq.apache.org
- Apache Apollo - http://activemq.apache.org/apollo
- CoilMQ - http://code.google.com/p/coilmq
- Gozirra - http://www.germane-software.com/software/Java/Gozirra
- HornetQ - http://www.jboss.com/hornetq
- MorbidQ - http://www.morbidq.com
- RabbitMQ - http://www.rabbitmq.com/plugins.html#rabbitmq-stomp
- Sprinkle - http://www.thuswise.org/sprinkle/index.html
- StompServer - http://stompserver.rubyforge.org/

On the client side, there are many implementations for a vast number of technologies. Below you will find the libraries available for the most popular languages:

- C: libstomp - http://stomp.codehaus.org/C
- C++: Apache CMS - http://activemq.apache.org/cms/
- C# and .NET: Apache NMS - http://activemq.apache.org/nms/
- Flash: as3-stomp - http://code.google.com/p/as3-stomp/
- Java: Gozirra - http://www.germane-software.com/software/Java/Gozirra/
- Objective-C: objc-stomp - https://github.com/juretta/objc-stomp
- Perl: Net::Stomp::Client - http://search.cpan.org/dist/Net-STOMP-Client/, Net::Stomp - http://search.cpan.org/dist/Net-Stomp/
- PHP: stomp - http://www.php.net/manual/en/book.stomp.php, stomp-php - http://stomp.fusesource.org/documentation/php/book.html
- Python: stomper - https://github.com/oisinmulvihill/stomper, stomp.py - http://code.google.com/p/stomppy/
- Ruby on Rails: stomp gem - https://rubygems.org/gems/stomp, activemessaging - http://code.google.com/p/activemessaging/

STOMP is an alternative to other open messaging protocols such as AMQP (Advanced Message Queuing Protocol) and to the implementation-specific wire protocols used in JMS (Java Message Service) brokers, such as OpenWire. It distinguishes itself by covering a small subset of commonly used messaging operations (commands) rather than providing a comprehensive messaging API. Because STOMP is language-agnostic and easy to implement, it's popular with application developers and technical architects alike. STOMP is also text-based: since it does not use a binary wire protocol like other message brokers, a wide range of client technologies work with STOMP, such as Ruby, Python, and Perl. STOMP is simple to implement, but supports a wide range of core enterprise messaging features, such as authentication, messaging models (point-to-point and publish/subscribe), message acknowledgement, transactions, and message headers and properties.
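To give a feel for the wire format, here is what a minimal STOMP 1.0 session might look like. Frames are plain text, headers resemble HTTP, and each frame is terminated by a NUL byte (written as ^@ below); the credentials and destination name are made up for illustration:

CONNECT
login:guest
passcode:guest

^@

SUBSCRIBE
destination:/queue/orders
ack:auto

^@

SEND
destination:/queue/orders

hello from STOMP
^@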
January 15, 2013
by Marcelo Jabali
· 34,772 Views · 3 Likes
Optimization in R
Optimization is a very common problem in data analytics. Given a set of variables (over which one has control), how do we pick the right values such that the benefit is maximized? More formally, optimization is about determining a set of variables x1, x2, … that maximize or minimize an objective function f(x1, x2, …).

Unconstrained optimization

In the unconstrained case, each variable can be freely selected within its full range. A typical solution is to compute the gradient vector of the objective function [∂f/∂x1, ∂f/∂x2, …] and set it to [0, 0, …]. Solving this equation yields the values x1, x2, … at a local optimum. In R, this can be done by a numerical analysis method:

> f <- function(x){(x[1] - 5)^2 + (x[2] - 6)^2}
> initial_x <- c(10, 11)
> x_optimal <- optim(initial_x, f, method="CG")
> x_min <- x_optimal$par
> x_min
[1] 5 6

Equality constraint optimization

Moving on to the constrained case, let's say x1, x2, … are not independent and have to relate to each other in some particular way: g1(x1, x2, …) = 0, g2(x1, x2, …) = 0. The optimization problem can be expressed as:

Maximize objective function: f(x1, x2, …)
Subject to equality constraints:
g1(x1, x2, …) = 0
g2(x1, x2, …) = 0

A typical solution is to turn the constrained optimization problem into an unconstrained one using Lagrange multipliers. Define a new function F as follows:

F(x1, x2, …, λ1, λ2, …) = f(x1, x2, …) + λ1.g1(x1, x2, …) + λ2.g2(x1, x2, …) + …

Then solve for:

[∂F/∂x1, ∂F/∂x2, …, ∂F/∂λ1, ∂F/∂λ2, …] = [0, 0, …]

Inequality constraint optimization

In this case, the constraints are inequalities. We cannot use the Lagrange multiplier technique because it requires equality constraints. There is no general solution for arbitrary inequality constraints. However, we can put some restrictions on the form of the constraints. In the following, we study two models where each constraint is restricted to be a linear function of the variables:

w1.x1 + w2.x2 + … >= 0

Linear Programming

Linear programming is a model where both the objective function and the constraint functions are restricted to be linear combinations of the variables. The linear programming problem can be defined as follows:

Maximize objective function: f(x1, x2) = c1.x1 + c2.x2
Subject to inequality constraints:
a11.x1 + a12.x2 <= b1
a21.x1 + a22.x2 <= b2
a31.x1 + a32.x2 <= b3
x1 >= 0, x2 >= 0

As an illustrative example, let's walk through a portfolio investment problem. In the example, we want to find an optimal way to allocate the proportions of assets in our investment portfolio.

StockA gives 5% return on average
StockB gives 4% return on average
StockC gives 6% return on average

To set some constraints, let's say my investment in C must be less than the sum of A and B. Also, investment in A cannot be more than twice that of B. Finally, at least 10% of the investment must be in each stock.
To formulate this problem:

Variables: x1 = % investment in A, x2 = % in B, x3 = % in C
Maximize expected return: f(x1, x2, x3) = x1*5% + x2*4% + x3*6%
Subject to constraints:
10% < x1, x2, x3 < 100%
x1 + x2 + x3 = 1
x3 < x1 + x2
x1 < 2 * x2

> library(lpSolve)
> library(lpSolveAPI)
> # Set the number of vars
> model <- make.lp(0, 3)
> # Define the objective function: for Minimize, use -ve
> set.objfn(model, c(-0.05, -0.04, -0.06))
> # Add the constraints
> add.constraint(model, c(1, 1, 1), "=", 1)
> add.constraint(model, c(1, 1, -1), ">", 0)
> add.constraint(model, c(1, -2, 0), "<", 0)
> # Set the upper and lower bounds
> set.bounds(model, lower=c(0.1, 0.1, 0.1), upper=c(1, 1, 1))
> # Compute the optimized model
> solve(model)
[1] 0
> # Get the value of the optimized parameters
> get.variables(model)
[1] 0.3333333 0.1666667 0.5000000
> # Get the value of the objective function
> get.objective(model)
[1] -0.05333333
> # Get the value of the constraint
> get.constraints(model)
[1] 1 0 0

Quadratic Programming

Quadratic programming is a model where the objective function is a quadratic function (containing up to two-term products) while the constraint functions are restricted to be linear combinations of the variables. The quadratic programming problem can be defined as follows:

Minimize quadratic objective function:
f(x1, x2) = c1.x1² + c2.x1.x2 + c3.x2² - (d1.x1 + d2.x2)
Subject to constraints:
a11.x1 + a12.x2 == b1
a21.x1 + a22.x2 == b2
a31.x1 + a32.x2 >= b3
a41.x1 + a42.x2 >= b4
a51.x1 + a52.x2 >= b5

Expressed in matrix form:
Minimize objective: ½(xᵀDx) - dᵀx
Subject to constraint: Aᵀx >= b
where the first k columns of A hold the equality constraints.

As an illustrative example, let's continue with the portfolio investment problem. Again, we want to find an optimal way to allocate the proportions of assets in our investment portfolio.

StockA gives 5% return on average
StockB gives 4% return on average
StockC gives 6% return on average

We also look at the variance of each stock (known as risk) as well as the covariance between stocks. For investment, we not only want a high expected return but also a low variance. In other words, a stock with high return but also high variance is not very attractive. Therefore, maximizing the expected return while minimizing the variance is the typical investment strategy. One way to minimize variance is to pick multiple stocks (in a portfolio) to diversify the investment. Diversification happens when the stock components within the portfolio move away from their average in different directions (hence the variance is reduced).
The covariance matrix ∑ (between each pair of stocks) is given as follows:

1%,   0.2%, 0.5%
0.2%, 0.8%, 0.6%
0.5%, 0.6%, 1.2%

In this example, we want to achieve an overall return of at least 5.2% while minimizing the variance of the return. To formulate the problem:

Variable: x1 = % investment in A, x2 = % in B, x3 = % in C
Minimize variance: x^T.∑.x
Subject to constraints:
x1 + x2 + x3 == 1
x1*5% + x2*4% + x3*6% >= 5.2%
0% < x1, x2, x3 < 100%

> library(quadprog)
> mu_return_vector <- c(0.05, 0.04, 0.06)
> sigma <- matrix(c(0.01, 0.002, 0.005,
+                   0.002, 0.008, 0.006,
+                   0.005, 0.006, 0.012),
+                 nrow=3, ncol=3)
> D.Matrix <- 2*sigma
> d.Vector <- rep(0, 3)
> A.Equality <- matrix(c(1,1,1), ncol=1)
> A.Matrix <- cbind(A.Equality, mu_return_vector, diag(3))
> b.Vector <- c(1, 0.052, rep(0, 3))
> out <- solve.QP(Dmat=D.Matrix, dvec=d.Vector, Amat=A.Matrix, bvec=b.Vector, meq=1)
> out$solution
[1] 0.4 0.2 0.4
> out$value
[1] 0.00672

Integration with Predictive Analytics

Optimization is usually integrated with predictive analytics, which is another important part of data analytics. For an overview of predictive analytics, see my previous blog about it. The overall processing works as follows: machine learning is used to train a predictive model in batch mode. Once the predictive model is available, observation data is fed into it in real time and a set of output variables is predicted. These output variables are fed into the optimization model as environment parameters (e.g. the return of each stock, the covariance, etc.), from which a set of optimal decision variables is recommended.
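As a toy illustration of that pipeline (my own sketch, not from the original article; predict_returns is a hypothetical stand-in for a trained model), the predicted returns flow straight into the quadprog model defined above, reusing sigma from the session:

> # Hypothetical stand-in for a trained predictive model; in reality this
> # would be something like predict(model, observations)
> predict_returns <- function(observations) c(0.05, 0.04, 0.06)
> optimize_portfolio <- function(mu, sigma, target_return) {
+   A <- cbind(rep(1, 3), mu, diag(3))
+   b <- c(1, target_return, rep(0, 3))
+   solve.QP(Dmat=2*sigma, dvec=rep(0, 3), Amat=A, bvec=b, meq=1)$solution
+ }
> mu <- predict_returns(NULL)  # feed real-time observation data in here
> optimize_portfolio(mu, sigma, 0.052)
[1] 0.4 0.2 0.4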
January 15, 2013
by Ricky Ho
· 9,430 Views
The Definitive Gradle Guide for NetBeans IDE
Gradle is a build tool like Ant and Maven, only much, much better!
January 14, 2013
by Attila Kelemen
· 65,058 Views · 1 Like
Measure Elapsed Time with Camel
Apache Camel provides an event notifier support class which allows you to keep track of what happens to an Exchange, Route, or Endpoint. One of the benefits of this class is that you can easily audit messages created in Camel routes, collect information, and report it in the log, for example. When developing an application, it is very important to measure elapsed time on the platform to find which part of your code, which processor, or which integrated system is the weak link that must be improved. In three steps, I will show you how to enable this mechanism to report:

- Time elapsed to call an endpoint (which could be another Camel route, a web service, ...)
- Time elapsed on the route exchange

STEP 1 - Create a class extending EventNotifierSupport

public class AuditEventNotifier extends EventNotifierSupport {

    public void notify(EventObject event) throws Exception {
        if (event instanceof ExchangeSentEvent) {
            ExchangeSentEvent sent = (ExchangeSentEvent) event;
            log.info(">>> Took " + sent.getTimeTaken() + " millis to send to external system : " + sent.getEndpoint());
        }
        if (event instanceof ExchangeCompletedEvent) {
            ExchangeCompletedEvent exchangeCompletedEvent = (ExchangeCompletedEvent) event;
            Exchange exchange = exchangeCompletedEvent.getExchange();
            String routeId = exchange.getFromRouteId();
            Date created = exchange.getProperty(Exchange.CREATED_TIMESTAMP, Date.class);
            // calculate elapsed time
            Date now = new Date();
            long elapsed = now.getTime() - created.getTime();
            log.info(">>> Took " + elapsed + " millis for the exchange on the route : " + routeId);
        }
    }

    public boolean isEnabled(EventObject event) {
        return true;
    }

    protected void doStart() throws Exception {
        // filter out unwanted events
        setIgnoreCamelContextEvents(true);
        setIgnoreServiceEvents(true);
        setIgnoreRouteEvents(true);
        setIgnoreExchangeCreatedEvent(true);
        setIgnoreExchangeCompletedEvent(false);
        setIgnoreExchangeFailedEvents(true);
        setIgnoreExchangeRedeliveryEvents(true);
        setIgnoreExchangeSentEvents(false);
    }

    protected void doStop() throws Exception {
        // noop
    }
}

Not really complicated, and the code is self-explanatory. Check the doStart() method to enable/disable the events for which you would like to gather information. This example uses only the Exchange.CREATED_TIMESTAMP property, but the next version of Camel, 2.7.0, will provide the Exchange.RECEIVED_TIMESTAMP property, so you will be able to calculate more easily the time spent by the exchange in the different processors until it arrives at the end of the route. This example collects timing information, but you can imagine using this mechanism to check whether your route processes messages according to an SLA, etc.

STEP 2 - Instantiate the bean in Camel Spring XML

By adding a bean definition for the notifier (a minimal sketch appears after the log output below), Camel will automatically register it with the CamelContext that is created.

STEP 3 - Collect the info in the log

18:10:46,060 | INFO | tp1238469515-285 | AuditEventNotifier | ? ? | 68 - org.apache.camel.camel-core - 2.6.0.fuse-00-00 | >>> Took 3 millis for the exchange on the route : mock-HTTP-Server
18:10:46,062 | INFO | tp2056154542-293 | AuditEventNotifier | ? ? | 68 - org.apache.camel.camel-core - 2.6.0.fuse-00-00 | >>> Took 25 millis to send to external system : Endpoint[http://localhost:9191/sis]
18:10:46,077 | INFO | tp2056154542-293 | AuditEventNotifier | ? ? | 68 - org.apache.camel.camel-core - 2.6.0.fuse-00-00 | >>> Took 103 millis for the exchange on the route : ws-to-sis
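For reference, the STEP 2 bean definition could be as simple as the following sketch (the bean id and package are my assumptions; adjust them to your project):

<!-- Declared in the Camel Spring XML; Camel discovers EventNotifier beans
     in the registry and registers them with the CamelContext on startup -->
<bean id="auditEventNotifier" class="com.example.audit.AuditEventNotifier"/>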
January 11, 2013
by Charles Moulliard
· 11,929 Views
Display and Format Negative Currency Values in C#
This is a small tip about how to display negative currency values in your application. For example, suppose you have a currency value like -1234 and you want to display it as -$1,234 according to your culture.

Problem

In C#, one easy way to format a currency value is to call ToString("c") on the value. For example, check the code below:

//format -1234 as currency..
Console.WriteLine((-1234).ToString("c"));

Output

This displays the value as ($1,234.00). The actual problem with this formatting is that it does not display the - sign with the currency value.

Solution

To get around this, you need to create a custom NumberFormatInfo from your current locale. Then you can specify its CurrencyNegativePattern, for example:

CultureInfo currentCulture = Thread.CurrentThread.CurrentCulture;
CultureInfo newCulture = new CultureInfo(currentCulture.Name);
newCulture.NumberFormat.CurrencyNegativePattern = 1;
Thread.CurrentThread.CurrentCulture = newCulture;
Console.WriteLine((-1234).ToString("c"));

Output

Now the output of the above code is -$1,234.00, which is exactly what we needed: a negative sign for a negative currency value. The actual trick in the code above is done by the NumberFormatInfo.CurrencyNegativePattern property, which is set to 1, the value associated with the pattern -$n. A number of different patterns are supported, which you can check on MSDN, and you can assign the property value according to your need. This does not affect positive currency values, i.e. if you pass 1234 as the value, it still displays as $1,234.00. For this example, my current culture is "en-US".

Conclusion

I hope you all like this simple and easy example of formatting negative currency. You can explore more on MSDN via the references provided at the end of the post.

NumberFormatInfo Class - http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.aspx
NumberFormatInfo.CurrencyNegativePattern Property - http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.currencynegativepattern.aspx
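As a side note of my own (not from the original post), if you prefer not to swap the culture for the whole thread, the same NumberFormatInfo trick works locally: clone the current number format and pass it straight to ToString, which accepts an IFormatProvider:

using System;
using System.Globalization;

class NegativeCurrencyDemo
{
    static void Main()
    {
        // Clone so the thread-wide culture settings stay untouched
        NumberFormatInfo nfi = (NumberFormatInfo)CultureInfo.CurrentCulture.NumberFormat.Clone();
        nfi.CurrencyNegativePattern = 1; // pattern -$n
        Console.WriteLine((-1234).ToString("C", nfi)); // -$1,234.00 under en-US
    }
}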
January 11, 2013
by Pranay Rana
· 18,082 Views
Enabling CLIENT-CERT based authorization on Tomcat: Part 1
In continuation of my earlier blog on enabling SSL on Tomcat, in this blog I will take the next step and enable CLIENT-CERT based authorization on Tomcat. Again, if you want to try out the code, go to my GitHub and download it. For this sample, I assume that you have tried my earlier SSL example on Tomcat and have that setup in place. As per the SSL example, I assume:

You have set up Tomcat version 6.0
You have set the SSL connector configuration in Tomcat's server.xml
You have started the Tomcat server and run the SecureHttpClient0Test test

In this blog, I will show you how to:

Set up a MemoryRealm: in server.xml, comment out the existing Realm tag and replace it with a MemoryRealm declaration (a sketch appears at the end of this article).

Set up the user role: add the role and user entries in /conf/tomcat-users.xml.

Set up the security constraint: add access control in the individual application's web.xml, restricting GET on /secure/* for the Demo App to the secureconn role with the CLIENT-CERT auth method (again, see the sketch at the end of this article).

Run the JUnit test: open the class src/test/java/com/goSmarter/test/SecureHttpClient1Test.java and change the code below to point to your /conf folder:

public static final String path = "D:/apache-tomcat-6.0.36/conf/";

Start Tomcat and run the JUnit test using mvn test -Dtest="com.goSmarter.test.SecureHttpClient1Test".

If you want to debug the Realm, you need to increase the log level for the Realm in /conf/logging.properties as below:

org.apache.catalina.realm.level = ALL
org.apache.catalina.realm.useParentHandlers = true
org.apache.catalina.authenticator.level = ALL
org.apache.catalina.authenticator.useParentHandlers = true

If you notice, there are 2 positive tests and 1 negative test; the negative test gets a forbidden 403 return status when a wrong certificate is sent, based on the security constraint. I hope this blog helped you.
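The XML snippets referenced in the steps above could look like the following minimal sketches (my reconstruction; the user's DN is a placeholder and must match the subject DN of your client certificate):

<!-- server.xml: MemoryRealm reads users and roles from conf/tomcat-users.xml -->
<Realm className="org.apache.catalina.realm.MemoryRealm"/>

<!-- conf/tomcat-users.xml: map the certificate's subject DN to the role -->
<role rolename="secureconn"/>
<user username="CN=localhost, OU=Demo, O=goSmarter" password="" roles="secureconn"/>

<!-- web.xml: restrict GET on /secure/* to secureconn via CLIENT-CERT -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Demo App</web-resource-name>
    <url-pattern>/secure/*</url-pattern>
    <http-method>GET</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>secureconn</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>
<security-role>
  <role-name>secureconn</role-name>
</security-role>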
January 10, 2013
by Krishna Prasad
· 17,478 Views
Python Testing - PhantomJS with Selenium WebDriver
PhantomJS is a headless WebKit with a JavaScript API. It can be used for headless website testing. PhantomJS has a lot of different uses; the interesting bit for me is to use PhantomJS as a lighter-weight replacement for a browser when running web acceptance tests. This enables faster testing, without a display or the overhead of full-browser startup/shutdown. I write my web automation using Selenium WebDriver, in Python. In future versions of PhantomJS, the GhostDriver component will be included. GhostDriver is a pure JavaScript implementation of the WebDriver Wire Protocol for PhantomJS. It's a remote WebDriver that uses PhantomJS as its back-end. So, GhostDriver is the bridge we need to use Selenium WebDriver with PhantomJS. Since it is not available in the current PhantomJS release, you can try it yourself by compiling a special version of PhantomJS. It was pretty trivial to set up on Ubuntu (12.04):

$ sudo apt-get install build-essential chrpath git-core libssl-dev libfontconfig1-dev
$ git clone git://github.com/ariya/phantomjs.git
$ cd phantomjs
$ git checkout 1.8
$ ./build.sh
$ git remote add detro https://github.com/detro/phantomjs.git
$ git fetch detro && git checkout -b detro-ghostdriver-dev remotes/detro/ghostdriver-dev
$ ./build.sh

Then grab the `phantomjs` binary it produced (look inside `phantomjs/bin`). This is a self-contained executable; it can be moved to a different directory or to another machine. Make sure it is located somewhere on your PATH, or declare its location when creating your PhantomJS driver, as in the examples below. For these examples, the `phantomjs` binary is located in the same directory as the test script.

Example: Python using PhantomJS and Selenium WebDriver.

#!/usr/bin/env python

from selenium import webdriver

driver = webdriver.PhantomJS('./phantomjs')
# do webdriver stuff here
driver.quit()

Example: Python unit test using PhantomJS and Selenium WebDriver.

#!/usr/bin/env python

import unittest
from selenium import webdriver

class TestUbuntuHomepage(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.PhantomJS('./phantomjs')

    def testTitle(self):
        self.driver.get('http://www.ubuntu.com/')
        self.assertIn('Ubuntu', self.driver.title)

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main(verbosity=2)

Resources:
http://phantomjs.org/build.html
https://github.com/detro/ghostdriver
Selenium WebDriver Python API Documentation
January 8, 2013
by Corey Goldberg
· 42,423 Views
Generic Exceptions in Java
Wow, I can’t believe it’s been 6 weeks since I last blogged. I still need to write up a review of Plumbr, after seeing it in action at Devoxx, but to ease me back into the writing game, here’s a small(-ish) but useful spot of Java. It’s always nice to borrow and steal concepts and ideas from other languages. Scala’s Option is one idea I really like, so I wrote an implementation in Java. It wraps an object which may or may not be null, and provides some methods to work with it in a more kinda-sorta functional way. For example, the isDefined method adds an object-oriented way of checking if a value is null. It is then used in other places, such as the getOrElse method, which basically says “give me what you’re wrapping, or a fallback if it’s null”.

public T getOrElse(T fallback) {
    return isDefined() ? get() : fallback;
}

In practice, this would replace traditional Java, such as

public void foo() {
    String s = dao.getValue();
    if (s == null) {
        s = "bar";
    }
    System.out.println(s);
}

with the more concise and OO

public void foo() {
    Option<String> s = dao.getValue();
    System.out.println(s.getOrElse("bar"));
}

However, what if I want to do something other than get a fallback value – say, throw an exception? More to the point, what if I want to throw a specific type of exception that is both specific in use and not hard-coded into Option? This requires a spot of cunning, and a splash of type inference. Because this is Java, we can start with a new factory – ExceptionFactory. This is a basic implementation that only creates exceptions constructed with a message, but you can of course expand the code as required.

public interface ExceptionFactory<E extends Exception> {
    E create(String message);
}

Notice the <E extends Exception> – this is the key to how this works. Using the factory, we can now add a new method to Option:

public <E extends Exception> T getOrThrow(ExceptionFactory<E> exceptionFactory, String message) throws E {
    if (isDefined()) {
        return get();
    } else {
        throw exceptionFactory.create(message);
    }
}

Again, notice the throws E – this is inferred from the exception factory. And that, believe it or not, is 90% of what it takes. The one irritation is the need to have exception factories. If you can stomach this, you’re all set. Let’s define a couple of custom exceptions to see this in action.
public class ExceptionA extends Exception {

    public ExceptionA(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionA> factory() {
        return new ExceptionFactory<ExceptionA>() {
            @Override
            public ExceptionA create(String message) {
                return new ExceptionA(message);
            }
        };
    }
}

And the suspiciously similar ExceptionB:

public class ExceptionB extends Exception {

    public ExceptionB(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionB> factory() {
        return new ExceptionFactory<ExceptionB>() {
            @Override
            public ExceptionB create(String message) {
                return new ExceptionB(message);
            }
        };
    }
}

And finally, throw it all together:

public class GenericExceptionTest {

    @Test(expected = ExceptionA.class)
    public void exceptionA_throw() throws ExceptionA {
        Option.option(null).getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionA_noThrow() throws ExceptionA {
        String s = Option.option("foo").getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }

    @Test(expected = ExceptionB.class)
    public void exceptionB_throw() throws ExceptionB {
        Option.option(null).getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionB_noThrow() throws ExceptionB {
        String s = Option.option("foo").getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }
}

The important thing to notice, as highlighted above, is that the exception declared in each method signature is specific – it’s not a common ancestor (Exception or Throwable). This means you can now use Options in your DAO layer, your service layer, wherever, and throw specific exceptions where and how you need.

Download source: you can get the source code and tests from here – genex

Sidenote

One other interesting thing that came out of writing this was the observation that it’s possible to do this:

public void foo() {
    throw null;
}

public void bar() {
    try {
        foo();
    } catch (NullPointerException e) {
        ...
    }
}

It goes without saying that this is not a good idea.
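One addendum of my own, beyond the original post: ExceptionFactory has a single abstract method, so on Java 8 and later it works as a functional interface, and the hand-written factory() methods can be replaced with constructor references:

// Hypothetical Java 8+ usage sketch: ExceptionA::new satisfies
// ExceptionFactory<ExceptionA> because the constructor matches E create(String message)
String s = Option.option("foo").getOrThrow(ExceptionA::new, "Some message pertinent to the situation");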
January 7, 2013
by Steve Chaloner
· 30,990 Views