The Latest Coding Topics

DistinctBy in Linq (Find Distinct Object by Property)
In this post I will discuss how to get distinct objects from a collection based on one of their properties, and show three different ways to achieve it easily. The goal is an extension method that does more than the Distinct method currently available in the .NET Framework. The standard LINQ Distinct method works as follows.

public class Product
{
    public string Name { get; set; }
    public int Code { get; set; }
}

Consider a Product class with Code and Name properties. The requirement is to find all products with distinct Code values.

Product[] products = {
    new Product { Name = "apple", Code = 9 },
    new Product { Name = "orange", Code = 4 },
    new Product { Name = "apple", Code = 10 },
    new Product { Name = "lemon", Code = 9 }
};

var lstDistProduct = products.Distinct();
foreach (Product p in lstDistProduct)
{
    Console.WriteLine(p.Code + " : " + p.Name);
}

Output: it returns all the products, even though two products share the same Code value, so this does not meet the requirement of getting objects with distinct Code values.

Way 1: Use the MoreLinq library. The first way to meet the requirement is to use the MoreLinq library, which provides a DistinctBy method that lets you specify the property on which you want to find distinct objects. The code below shows how the method is used.

var list1 = products.DistinctBy(x => x.Code);
foreach (Product p in list1)
{
    Console.WriteLine(p.Code + " : " + p.Name);
}

Output: as you can see, only two objects are returned, which is exactly what I want, i.e. products distinct by Code. If you want to use more than one property, you can simply do this:

var list1 = products.DistinctBy(a => new { a.Name, a.Code });

You can read about MoreLinq and download the DLL from here: http://code.google.com/p/morelinq/. The library also contains a number of other functions worth checking out.

Way 2: Implement a comparer. The second way to achieve the same functionality is to use the overload of Distinct that accepts a comparer as an argument. Here is the MSDN documentation: Enumerable.Distinct Method (IEnumerable, IEqualityComparer). For that I implemented IEqualityComparer<Product> and created a new ProductComparare class, which you can see in the code below.

class ProductComparare : IEqualityComparer<Product>
{
    private Func<Product, object> _funcDistinct;

    public ProductComparare(Func<Product, object> funcDistinct)
    {
        this._funcDistinct = funcDistinct;
    }

    public bool Equals(Product x, Product y)
    {
        return _funcDistinct(x).Equals(_funcDistinct(y));
    }

    public int GetHashCode(Product obj)
    {
        return this._funcDistinct(obj).GetHashCode();
    }
}

The ProductComparare constructor takes a projection function as an argument, so when I create an instance of it I pass in my projection. In the Equals method I compare the values returned by that projection function. The following shows how I used this comparer implementation to satisfy the requirement.

var list2 = products.Distinct(new ProductComparare(a => a.Code));
foreach (Product p in list2)
{
    Console.WriteLine(p.Code + " : " + p.Name);
}

This approach also satisfies the requirement easily. I have not looked at the MoreLinq source, but I suspect it does something similar internally. If you want to compare on more than one property, pass a projection that returns an anonymous type: products.Distinct(new ProductComparare(a => new { a.Name, a.Code })).

Way 3: The easy GroupBy way. The third and easiest way to avoid the extra work of the MoreLinq library or a comparer implementation is to use GroupBy, as below.

List<Product> list = products
    .GroupBy(a => a.Code)
    .Select(g => g.First())
    .ToList();

foreach (Product p in list)
{
    Console.WriteLine(p.Code + " : " + p.Name);
}

In the code above I group the objects by the property, and then in the Select I simply take the first element of each group, which does the work for me. The output is the same as in the two approaches above. If you want to group on more than one property, use .GroupBy(a => new { a.Name, a.Code }). This is a very easy trick to achieve the desired functionality without adding anything extra to the code.

Conclusion: the above shows how you can easily get the distinct elements of a collection by a property of its objects. Leave a comment if you have any questions or if you found this useful.
January 30, 2013
by Pranay Rana
· 23,329 Views
JDBC Realm and Form Based Authentication with GlassFish 3.1.2.2 and Primefaces 3.4
One of the most popular posts on my blog is the short tutorial about the JDBC Security Realm and form based Authentication on GlassFish with Primefaces. After I received some comments that it isn't working any longer with the latest GlassFish 3.1.2.2, I thought it might be time to revisit it and present an updated version. Here we go:

Preparation: As in the original tutorial, I am going to rely on a few prerequisites. Make sure to have a recent NetBeans 7.3 beta2 (which includes GlassFish 3.1.2.2) and the MySQL Community Server (5.5.x) installed. You should have verified that everything is up and running, that you can start GlassFish, and that the MySQL Server is also started.

Some Basics: A GlassFish authentication realm, also called a security policy domain or security domain, is a scope over which the GlassFish Server defines and enforces a common security policy. GlassFish Server is preconfigured with the file, certificate, and administration realms. In addition, you can set up LDAP, JDBC, digest, Oracle Solaris, or custom realms. An application can specify which realm to use in its deployment descriptor. If you want to store the user credentials for your application in a database, your first choice is the JDBC realm.

Prepare the Database: Fire up NetBeans and switch to the Services tab. Right click the "Databases" node and select "Register MySQL Server". Fill in the details of your installation and click "OK". Right click the new MySQL node and select "Connect". Now you see all the already available databases. Right click again and select "Create Database". Enter "jdbcrealm" as the new database name. Remark: we're not going to do all of this with a separate database user. That is highly recommended, but I am using the root user in this example. If you have a dedicated user, you can also grant it full access here. Click "OK". You get automatically connected to the newly created database. Expand the bold node and right click on "Tables". Select "Execute Command" or enter the table details via the wizard.

CREATE TABLE USERS ( `USERID` VARCHAR(255) NOT NULL, `PASSWORD` VARCHAR(255) NOT NULL, PRIMARY KEY (`USERID`) );
CREATE TABLE USERS_GROUPS ( `GROUPID` VARCHAR(20) NOT NULL, `USERID` VARCHAR(255) NOT NULL, PRIMARY KEY (`GROUPID`) );

That is all for now with the database. Move on to the next paragraph.

Let GlassFish know about MySQL: The first thing to do is to get the latest and greatest MySQL Connector/J from the MySQL website, which is 5.1.22 at the time of writing. Extract the mysql-connector-java-5.1.22-bin.jar file and drop it into your domain folder (e.g. glassfish\domains\domain1\lib). Done. Now it is finally time to create a project.

Basic Project Setup: Start a new Maven based web application project. Choose "New Project" > "Maven" > Web Application and hit next. Now enter a name (e.g. secureapp) and all the needed Maven coordinates and hit next. Choose your configured GlassFish 3+ Server. Select Java EE 6 Web as your EE version and hit "Finish". Now we need to add some more configuration to our GlassFish domain. Right click on the newly created project and select "New > Other > GlassFish > JDBC Connection Pool". Enter a name for the new connection pool (e.g. SecurityConnectionPool) and underneath the checkbox "Extract from Existing Connection:" select your registered MySQL connection. Click next, review the connection pool properties and click finish. The newly created Server Resources folder now shows your sun-resources.xml file.
Follow the steps and create a "New > Other > GlassFish > JDBC Resource" pointing to the created SecurityConnectionPool (e.g. jdbc/securityDatasource). You will find the configured things under "Other Sources / setup" in a file called glassfish-resources.xml. It gets deployed to your server together with your application, so you don't have to care about configuring everything with the GlassFish admin console. Additionally we still need Primefaces. Right click on your project, select "Properties", change to the "Frameworks" category and add "JavaServer Faces". Switch to the Components tab and select "PrimeFaces". Finish by clicking "OK". You can validate that it worked by opening the pom.xml and checking for the Primefaces dependency; 3.4 should be there. Feel free to change the version to the latest 3.4.2.

Final GlassFish Configuration: Now it is time to fire up GlassFish and do the realm configuration. In NetBeans switch to the "Services" tab again and right click on the "GlassFish 3+" node. Select "Start" and watch the Output window for a successful start. Right click again and select "View Domain Admin Console", which should open your default browser pointing you to http://localhost:4848/. Select "Configurations > server-config > Security > Realms" and click "New..." on top of the table. Enter a name (e.g. JDBCRealm) and select com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm from the drop down. Fill in the following values into the text fields: JAAS: jdbcRealm, JNDI: jdbc/securityDatasource, User Table: users, User Name Column: username, Password Column: password, Group Table: groups, Group Name Column: groupname. Leave all the other defaults/blanks and select "OK" in the upper right corner. You are presented with a fancy JavaScript warning window which tells you to _not_ leave the Digest Algorithm field empty. I filed a bug about it. It defaults to SHA-256, which is different from GlassFish versions prior to 3.1, which used MD5 here. The older version of this tutorial didn't use a digest algorithm at all ("none"). This was meant to make things easier but isn't considered good practice at all. So, let's stick to SHA-256 even for development, please.

Secure your application: Done with configuring your environment. Now we have to actually secure the application. The first part is to think about the resources to protect. Jump to your Web Pages folder and create two more folders, one named "admin" and another called "users". The idea behind this is to have two separate folders which can only be accessed by users belonging to the appropriate groups. Now we have to create some pages. Open Web Pages/index.xhtml and replace everything between the h:body tags with the following: Select where you want to go: Now add a new index.xhtml to both the users and admin folders. Make them do something like this: Hello Admin|User On to the login.xhtml. Create it with the following content in the root of your Web Pages folder. Username: Password: As you can see, we have the basic Primefaces p:panel component which contains a simple HTML form that points to the predefined action j_security_check. This is where all the magic happens. You also have to include two input fields for username and password with the predefined names j_username and j_password. Now we are going to create the loginerror.xhtml which is displayed if the user did not enter the right credentials (use the same DOCTYPE and header as seen in the above example). Sorry, you made an Error.
Please try again: Login The only magic here is the href link of the Login anchor. We need to get the correct request context, and this can be done by accessing the faces context. If a user without the appropriate rights tries to access a folder, he is presented with a 403 access denied error page. If you would like to customize it, you need to add it and add the following lines to your web.xml: 403 /faces/403.xhtml That snippet defines that all requests that are not authorized should go to the 403 page. If you have the web.xml open already, let's start securing your application. We need to add a security constraint for any protected resource. Security constraints are least understood by web developers, even though they are critical for the security of Java EE web applications. Specifying a combination of URL patterns, HTTP methods, roles and transport constraints can be daunting to a programmer or administrator. It is important to realize that any combination that was intended to be secure but was not specified via security constraints will mean that the web container will allow those requests. Security constraints consist of Web Resource Collections (URL patterns, HTTP methods), an Authorization Constraint (role names) and User Data Constraints (whether the web request needs to be received over a protected transport such as TLS). Admin Pages Protected Admin Area /faces/admin/* GET POST HEAD PUT OPTIONS TRACE DELETE admin NONE All Access None Protected User Area /faces/users/* GET POST HEAD PUT OPTIONS TRACE DELETE NONE Once the constraints are in place you have to define how the container should challenge the user. A web container can authenticate a web client/user using either HTTP BASIC, HTTP DIGEST, HTTPS CLIENT or FORM based authentication schemes. In this case we are using FORM based authentication and define the JDBCRealm: FORM JDBCRealm /faces/login.xhtml /faces/loginerror.xhtml The realm name has to be the name that you assigned to the security realm before. Close the web.xml and open the sun-web.xml to do a mapping from the application role names to the actual groups that are in the database. This abstraction feels weird, but it has its reasons. It was introduced to have the option of mapping application roles to different group names in enterprises. I have never seen this used extensively, but the feature is there and you have to configure it. Other app servers make the assumption that if no mapping is present, role names and group names match. GlassFish doesn't think so. Therefore you have to put the following into the glassfish-web.xml. You can create it via a right click on your project's WEB-INF folder, selecting "New > Other > GlassFish > GlassFish Descriptor": admin admin That was it, basically ... everything you need is in place. The only thing that is missing is the users in the database. It is still empty ... We need to add a test user.

Adding a Test-User to the Database: Again we start by right clicking on the jdbcrealm database on the "Services" tab in NetBeans. Select "Execute Command" and insert the following: INSERT INTO USERS VALUES ("admin", "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"); INSERT INTO USERS_GROUPS VALUES ("admin", "admin"); You can now log in with user admin and password admin and access the secured area.
Sample code to generate the hash could look like this: try { MessageDigest md = MessageDigest.getInstance("SHA-256"); String text = "admin"; md.update(text.getBytes("UTF-8")); // Change this to "UTF-16" if needed byte[] digest = md.digest(); BigInteger bigInt = new BigInteger(1, digest); String output = bigInt.toString(16); System.out.println(output); } catch (NoSuchAlgorithmException | UnsupportedEncodingException ex) { Logger.getLogger(PasswordTest.class.getName()).log(Level.SEVERE, null, ex); } Have fun securing your apps and keep the questions coming! In case you need it, the complete source code is on https://github.com/myfear/JDBCRealmExample
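If you also want to read the authenticated caller back inside the application, a minimal JSF sketch could look like the following; CurrentUserBean is a hypothetical bean name added here purely for illustration, and it relies only on the standard ExternalContext API, with "admin" being the role mapped in glassfish-web.xml above:

import javax.faces.bean.ManagedBean;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;

@ManagedBean
public class CurrentUserBean {

    // USERID of the caller authenticated against the JDBC realm, or null if not logged in.
    public String getUserName() {
        ExternalContext ctx = FacesContext.getCurrentInstance().getExternalContext();
        return ctx.getRemoteUser();
    }

    // True if the container mapped the caller to the "admin" role.
    public boolean isAdmin() {
        ExternalContext ctx = FacesContext.getCurrentInstance().getExternalContext();
        return ctx.isUserInRole("admin");
    }
}

Exposed this way, the values can be referenced from the secured pages, e.g. via #{currentUserBean.userName}.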
January 29, 2013
by Markus Eisele
· 39,050 Views · 1 Like
Using JAXB to Generate Java Objects from XML Document
Quite some time back I had written about using JAXB to generate XML from Java objects and an XSD. Now I am writing about how to do the reverse of it, i.e. generating Java objects from an XML document. There was one comment mentioning the JAXB reference implementation, and hence in this article I am making use of the reference implementation that is shipped with the JDK. First, the XML which I am using to generate the Java objects contains the following data: Sanaulla Seagate External HDD August 24, 2010 6776.5 Benq HD Monitor August 24, 2012 15000 And the XSD to which it conforms: The XSD has already been explained, and one can find out more by reading this article. I create a class, XmlToJavaObjects, which will drive the unmarshalling operation. Before I generate the JAXB classes from the XSD, the directory structure is as shown. I go ahead and use xjc.exe to generate JAXB classes for the given XSD:

$> xjc expense.xsd

and I now refresh the directory structure to see the generated classes. With the JAXB classes generated and the XML data available, we can go ahead with the unmarshalling process.

Unmarshalling the XML: To unmarshal we need to: 1. Create a JAXBContext instance. 2. Use the JAXBContext instance to create the Unmarshaller. 3. Use the Unmarshaller to unmarshal the XML document to get an instance of JAXBElement. 4. Get the instance of the required JAXB root class from the JAXBElement. Once we get the instance of the required JAXB root class, we can use it to get the complete XML data in Java objects. The code to unmarshal the XML data is given below:

package problem;

import generated.ExpenseT;
import generated.ItemListT;
import generated.ItemT;
import generated.ObjectFactory;
import generated.UserT;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

public class XmlToJavaObjects {

    /**
     * @param args
     * @throws JAXBException
     */
    public static void main(String[] args) throws JAXBException {
        //1. We need to create a JAXBContext instance
        JAXBContext jaxbContext = JAXBContext.newInstance(ObjectFactory.class);
        //2. Use JAXBContext instance to create the Unmarshaller.
        Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
        //3. Use the Unmarshaller to unmarshal the XML document to get an instance of JAXBElement.
        JAXBElement<ExpenseT> unmarshalledObject =
                (JAXBElement<ExpenseT>) unmarshaller.unmarshal(
                        ClassLoader.getSystemResourceAsStream("problem/expense.xml"));
        //4. Get the instance of the required JAXB Root Class from the JAXBElement.
        ExpenseT expenseObj = unmarshalledObject.getValue();
        UserT user = expenseObj.getUser();
        ItemListT items = expenseObj.getItems();
        //Obtaining all the required data from the JAXB Root class instance.
        System.out.println("Printing the Expense for: " + user.getUserName());
        for (ItemT item : items.getItem()) {
            System.out.println("Name: " + item.getItemName());
            System.out.println("Value: " + item.getAmount());
            System.out.println("Date of Purchase: " + item.getPurchasedOn());
        }
    }
}

And the output lists the expense data for each item. Do drop in your queries/feedback as comments and I will try to address them at the earliest.
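For completeness, the opposite direction covered in the earlier post (marshalling the generated objects back to XML) can be sketched along the same lines. This is only an illustration: the element factory method name (createExpenseReport here) is a placeholder for whatever method xjc actually generated for your root element.

import generated.ExpenseT;
import generated.ObjectFactory;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

public class JavaObjectsToXml {

    public static void main(String[] args) throws JAXBException {
        JAXBContext jaxbContext = JAXBContext.newInstance(ObjectFactory.class);
        Marshaller marshaller = jaxbContext.createMarshaller();
        // Pretty-print the generated XML.
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

        ObjectFactory factory = new ObjectFactory();
        ExpenseT expense = factory.createExpenseT();
        // ... populate expense, its user and its item list via the generated setters ...

        // Wrap the root type in a JAXBElement; the factory method name depends on the XSD.
        JAXBElement<ExpenseT> root = factory.createExpenseReport(expense);
        marshaller.marshal(root, System.out);
    }
}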
January 29, 2013
by Mohamed Sanaulla
· 169,374 Views
Java concurrency: the hidden thread deadlocks
Most Java programmers are familiar with the Java thread deadlock concept. It essentially involves 2 threads waiting forever for each other. This condition is often the result of flat (synchronized) or ReentrantLock (read or write) lock-ordering problems.

Found one Java-level deadlock: ============================= "pool-1-thread-2": waiting to lock monitor 0x0237ada4 (object 0x272200e8, a java.lang.Object), which is held by "pool-1-thread-1" "pool-1-thread-1": waiting to lock monitor 0x0237aa64 (object 0x272200f0, a java.lang.Object), which is held by "pool-1-thread-2"

The good news is that the HotSpot JVM is always able to detect this condition for you… or is it? A recent thread deadlock problem affecting an Oracle Service Bus production environment has forced us to revisit this classic problem and identify the existence of "hidden" deadlock situations. This article will demonstrate and replicate, via a simple Java program, a very special lock-ordering deadlock condition which is not detected by the latest HotSpot JVM 1.7. You will also find a video at the end of the article explaining the Java sample program and the troubleshooting approach used.

The crime scene: I usually like to compare major Java concurrency problems to a crime scene where you play the lead investigator role. In this context, the "crime" is an actual production outage of your client IT environment. Your job is to collect all the evidence, hints & facts (thread dump, logs, business impact, load figures…) and interrogate the witnesses & domain experts (support team, delivery team, vendor, client…). The next step of your investigation is to analyze the collected information and establish a potential list of one or many "suspects" along with clear proofs. Eventually, you want to narrow it down to a primary suspect or root cause. Obviously the law "innocent until proven guilty" does not apply here; it is exactly the opposite. Lack of evidence can prevent you from achieving the above goal. What you will see next is that the lack of deadlock detection by the HotSpot JVM does not necessarily prove that you are not dealing with this problem.

The suspect: In this troubleshooting context, the "suspect" is defined as the application or middleware code with the following problematic execution pattern: usage of a FLAT lock followed by the usage of a ReentrantLock WRITE lock (execution path #1); usage of a ReentrantLock READ lock followed by the usage of a FLAT lock (execution path #2); concurrent execution performed by 2 Java threads but via a reversed execution order. The above lock-ordering deadlock criteria can be visualized as per below. Now let's replicate this problem via our sample Java program and look at the JVM thread dump output.

Sample Java program: The above deadlock condition was first identified from our Oracle OSB problem case. We then re-created it via a simple Java program. You can download the entire source code of our program here. The program simply creates and fires 2 worker threads. Each of them executes a different execution path and attempts to acquire locks on shared objects, but in different orders. We also created a deadlock detector thread for monitoring and logging purposes. For now, find below the Java class implementing the 2 different execution paths.
package org.ph.javaee.training8;

import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * A simple thread task representation
 * @author Pierre-Hugues Charbonneau
 */
public class Task {

    // Object used for FLAT lock
    private final Object sharedObject = new Object();
    // ReentrantReadWriteLock used for WRITE & READ locks
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /**
     * Execution pattern #1
     */
    public void executeTask1() {
        // 1. Attempt to acquire a ReentrantReadWriteLock READ lock
        lock.readLock().lock();
        // Wait 2 seconds to simulate some work...
        try { Thread.sleep(2000); } catch (Throwable any) {}
        try {
            // 2. Attempt to acquire a Flat lock...
            synchronized (sharedObject) {}
        }
        // Remove the READ lock
        finally {
            lock.readLock().unlock();
        }
        System.out.println("executeTask1() :: Work Done!");
    }

    /**
     * Execution pattern #2
     */
    public void executeTask2() {
        // 1. Attempt to acquire a Flat lock
        synchronized (sharedObject) {
            // Wait 2 seconds to simulate some work...
            try { Thread.sleep(2000); } catch (Throwable any) {}
            // 2. Attempt to acquire a WRITE lock
            lock.writeLock().lock();
            try {
                // Do nothing
            }
            // Remove the WRITE lock
            finally {
                lock.writeLock().unlock();
            }
        }
        System.out.println("executeTask2() :: Work Done!");
    }

    public ReentrantReadWriteLock getReentrantReadWriteLock() {
        return lock;
    }
}

As soon as the deadlock situation was triggered, a JVM thread dump was generated using JVisualVM. As you can see from the Java thread dump sample, the JVM did not detect this deadlock condition (e.g. no presence of "Found one Java-level deadlock"), but it is clear these 2 threads are in a deadlock state.

Root cause: ReentrantLock READ lock behavior. The main explanation we found at this point is associated with the usage of the ReentrantLock READ lock. The read locks are normally not designed to have a notion of ownership. Since there is no record of which thread holds a read lock, this appears to prevent the HotSpot JVM deadlock detector logic from detecting deadlocks involving read locks. Some improvements have been implemented since then, but we can see that the JVM still cannot detect this special deadlock scenario. Now, if we replace the read lock (execution pattern #2) in our program with a write lock, the JVM will finally detect the deadlock condition, but why?
Found one Java-level deadlock:
=============================
"pool-1-thread-2":
  waiting for ownable synchronizer 0x272239c0, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "pool-1-thread-1"
"pool-1-thread-1":
  waiting to lock monitor 0x025cad3c (object 0x272236d0, a java.lang.Object), which is held by "pool-1-thread-2"

Java stack information for the threads listed above:
===================================================
"pool-1-thread-2":
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for <0x272239c0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
  at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
  at org.ph.javaee.training8.Task.executeTask2(Task.java:54)
  - locked <0x272236d0> (a java.lang.Object)
  at org.ph.javaee.training8.WorkerThread2.run(WorkerThread2.java:29)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:722)
"pool-1-thread-1":
  at org.ph.javaee.training8.Task.executeTask1(Task.java:31)
  - waiting to lock <0x272236d0> (a java.lang.Object)
  at org.ph.javaee.training8.WorkerThread1.run(WorkerThread1.java:29)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:722)

This is because write locks are tracked by the JVM similarly to flat locks. This means the HotSpot JVM deadlock detector appears to be currently designed to detect deadlocks on object monitors involving FLAT locks, and deadlocks involving locked ownable synchronizers associated with WRITE locks. The lack of per-thread read lock tracking appears to prevent deadlock detection for this scenario and significantly increases the troubleshooting complexity. I suggest that you read Doug Lea's comments on this whole issue, since concerns were raised back in 2005 regarding the possibility of adding per-thread read-hold tracking due to some potential lock overhead. Find below my troubleshooting recommendations if you suspect a hidden deadlock condition involving read locks: analyze the thread call stack trace closely; it may reveal code potentially acquiring read locks and preventing other threads from acquiring write locks. If you are the owner of the code, keep track of the read lock count via the lock.getReadLockCount() method. I'm looking forward to your feedback, especially from individuals with experience with this type of deadlock involving read locks. Finally, find below a video explaining these findings via the execution and monitoring of our sample Java program.
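The sample program mentioned above also runs a deadlock detector thread for monitoring and logging. A minimal sketch of such a watchdog using the standard java.lang.management API could look like the following; this is my own illustration rather than the author's exact implementation, and as explained above it will not catch the hidden read-lock case:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetectorThread extends Thread {

    private final ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();

    public DeadlockDetectorThread() {
        setDaemon(true);
        setName("deadlock-detector");
    }

    @Override
    public void run() {
        while (true) {
            // Covers object monitors and ownable synchronizers (write locks),
            // but not read locks, since they carry no ownership information.
            long[] ids = threadBean.findDeadlockedThreads();
            if (ids != null) {
                ThreadInfo[] infos = threadBean.getThreadInfo(ids, true, true);
                for (ThreadInfo info : infos) {
                    System.err.println("Deadlocked thread: " + info.getThreadName());
                }
            }
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}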
January 28, 2013
by Pierre-Hugues Charbonneau
· 105,140 Views · 3 Likes
Profiling MySQL Memory Usage With Valgrind Massif
This post comes from Roel Van de Paar at the MySQL Performance Blog. There are times where you need to know exactly how much memory the mysqld server (or any other program) is using, where (i.e. for what function) it was allocated, how it got there (a backtrace, please!), and at what point in time the allocation happened. For example; you may have noticed a sharp memory increase after executing a particular query. Or, maybe mysqld is seemingly using too much memory overall. Or again, maybe you noticed mysqld’s memory profile slowly growing overtime, indicating a possible memory bug. Whatever the reason, there is a simple but powerful way to profile MySQL memory usage; the Massif tool from Valgrind. An excerpt from the Massif manual page (Heap memory being simply the allotted pool of memory for use by programs); Massif tells you not only how much heap memory your program is using, it also gives very detailed information that indicates which parts of your program are responsible for allocating the heap memory. Firstly, we need to get the Valgrind program. Though you could use the latest version which comes with your OS (think yum or apt-get install Valgrind), I prefer to obtain & compile the latest release (3.8.1 at the moment): sudo yum remove valgrind* # or apt-get etc. sudo yum install wget make gcc gcc-c++ libtool libaio-devel bzip2 glibc* wget http://valgrind.org/downloads/valgrind-3.8.1.tar.bz2 # Or newer tar -xf valgrind-3.8.1.tar.bz2 cd valgrind-3.8.1 ./configure make sudo make install valgrind --version # check version to be same as what was downloaded (3.8.1 here) There are several advantages to self-compiling: When using the latest version of Valgrind, even compiled ‘out of the box’ (i.e. with no changes), you will likely see less issues then with earlier versions. For example, earlier versions may have too-small Valgrind-internal memory tracking allocations hardcoded. In other words; you may not be able to run your huge-buffer-pool under Valgrind without it complaining quickly. If you self compile, and those Valgrind-internal limits are still too small, you can easily change them before compiling. An often bumped up setting is VG_N_SEGMENTS in coregrind/m_aspacemgr/aspacemgr-linux.c (when you see ‘Valgrind: FATAL: VG_N_SEGMENTS is too low’) Newer releases [better] support newer hardware and software. Once ‘valgrind –version’ returns the correct installed version, you’re ready to go. In this example, we’ll write the output to /tmp/massif.out. If you prefer to use another location (and are therefore bound to set proper file rights etc.) use: $ touch /your_location/massif.out $ chown user:group /your_location/massif.out # Use the user mysqld will now run under (check 'user' setting in my.cnf also) (see here if this is not clear) Now, before you run mysqld under Valgrind, make sure debug symbols are present. Debug symbols are present when the binary is not stripped of them (downloaded ‘GA’ [generally available] packages may contain optimized or stripped binaries, which are optimized for speed rather than debugging). If the binaries you have are stripped, you have a few options to get a debug build of mysqld to use for memory profiling purposes: Download the appropriate debuginfo packages (these may not be available for all releases). Download debug binaries of the same server version as you are currently using, and simply use the debug mysqld as a drop-in replacement for your current mysqld (i.e. shutdown, mv mysqld mysqld.old, cp /debug_bin_path/mysqld ./mysqld, startup). 
If you have (through download or from past storage) the source code available (of the same server version as you are currently using) then simply debug-compile the source and use the mysqld binary as a drop-in replacement as shown in the last point. (For example, Percona Server 5.5 source can be debug-compiled by using ‘./build/build-binary –debug ..’). Valgrind Massif needs the debug symbol information to be present, so that it can print stack traces that show where memory is consumed. Without debug symbols available, you would not be able to see the actual function call responsible for memory usage. If you’re not sure if you have stripped binaries, simply test the procedure below and see what output you get. Once you’re all set with debug symbols, shutdown your mysqld server using your standard shutdown procedure, and then re-start it manually under Valgrind using the Massif tool: $ valgrind --tool=massif --massif-out-file=/tmp/massif.out /path/to/mysqld {mysqld options...} Note that ‘{mysqld options}’ could for instance include –default-file=/etc/my.cnf (if this is where your my.cnf file is located) in order to point mysqld to your settings file etc. After mysqld is properly started (check if you can login with your mysql client), you would execute whatever steps you think are necessary to increase memory usage/trigger the memory problem. You could also just leave the server running for some time (for example, if you have experienced memory increase over time). Once you’ve done that, shutdown mysqld (again using your normal shutdown procedure), and then use the ms_print tool on the masif.out file to output a textual graph of memory usage: ms_print /tmp/massif.out An partial example output from a recent customer problem we worked on: 96.51% (68,180,839B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc. ->50.57% (35,728,995B) 0x7A3CB0: my_malloc (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | ->10.10% (7,135,744B) 0x7255BB: Log_event::read_log_event(char const*, unsigned int, char const**, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | ->10.10% (7,135,744B) 0x728DAA: Log_event::read_log_event(st_io_cache*, st_mysql_mutex*, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | ->10.10% (7,135,744B) 0x5300A8: ??? (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | | ->10.10% (7,135,744B) 0x5316EC: handle_slave_sql (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | | ->10.10% (7,135,744B) 0x3ECF60677B: start_thread (in /lib64/libpthread-2.5.so) | | | ->10.10% (7,135,744B) 0x3ECEAD325B: clone (in /lib64/libc-2.5.so) [...] And, a few snapshots later: 92.81% (381,901,760B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc. ->84.91% (349,404,796B) 0x7A3CB0: my_malloc (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | ->27.73% (114,084,096B) 0x7255BB: Log_event::read_log_event(char const*, unsigned int, char const**, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | ->27.73% (114,084,096B) 0x728DAA: Log_event::read_log_event(st_io_cache*, st_mysql_mutex*, Format_description_log_event const*) (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | ->27.73% (114,084,096B) 0x5300A8: ??? 
(in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | | ->27.73% (114,084,096B) 0x5316EC: handle_slave_sql (in /usr/local/percona/mysql-5.5.28/usr/sbin/mysqld) | | | ->27.73% (114,084,096B) 0x3ECF60677B: start_thread (in /lib64/libpthread-2.5.so) | | | ->27.73% (114,084,096B) 0x3ECEAD325B: clone (in /lib64/libc-2.5.so) As you can see, a fair amount of (and in this case ‘too much’) memory is being allocated to the Log_event::read_log_event function. You can also see the memory allocated to the function grow significantly accross the snapshots. This example helped to pin down a memory leak bug on a filtered slave (read more in the actual bug report). Besides running Valgrind Massif in the way above, you can also change Massif’s snapshot options and other cmd line options to match the snapshot frequency etc. to your specific requirements. However, you’ll likely find that the default options will perform well in most scenario’s. For the technically advanced, you can take things one step further: use Valgrind’s gdbserver to obtain Massif snapshots on demand (i.e. you can command-line initiate Massif snapshots just before, during and after executing any commands which may alter memory usage significantly). Conclusion: using Valgrind Massif, and potentially Valgrind’s gdbserver (which was not used in the resolution of the example bug discussed), will help you to analyze the ins and outs of mysqld’s (or any other programs) memory usage. Credits: Staff @ a Percona customer, Ovais, Laurynas, Sergei, George, Vladislav, Raghavendra, Ignacio, myself & others at Percona all combined efforts leading to the information you can read above.
January 26, 2013
by Peter Zaitsev
· 4,543 Views
Sorting Text Files with MapReduce
In my last post I wrote about sorting files in Linux. Decently large files (in the tens of GBs) can be sorted fairly quickly using that approach. But what if your files are already in HDFS, or are hundreds of GBs in size or larger? In this case it makes sense to use MapReduce and leverage your cluster resources to sort your data in parallel. MapReduce should be thought of as a ubiquitous sorting tool, since by design it sorts all the map output records (using the map output keys), so that all the records that reach a single reducer are sorted. The diagram below shows the internals of how the shuffle phase works in MapReduce. Given that MapReduce already performs sorting between the map and reduce phases, sorting files can be accomplished with an identity function (one where the inputs to the map and reduce phases are emitted directly). This is in fact what the sort example that is bundled with Hadoop does. You can look at how the example code works by examining the org.apache.hadoop.examples.Sort class. To use this example code to sort text files in Hadoop, you would use it as follows:

shell$ export HADOOP_HOME=/usr/lib/hadoop
shell$ $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-examples.jar sort \
  -inFormat org.apache.hadoop.mapred.KeyValueTextInputFormat \
  -outFormat org.apache.hadoop.mapred.TextOutputFormat \
  -outKey org.apache.hadoop.io.Text \
  -outValue org.apache.hadoop.io.Text \
  /hdfs/path/to/input \
  /hdfs/path/to/output

This works well, but it doesn't offer some of the features that I commonly rely upon in Linux's sort, such as sorting on a specific column and case-insensitive sorts.

Linux-esque sorting in MapReduce: I've started a new GitHub repo called hadoop-utils, where I plan to roll useful helper classes and utilities. The first one is a flexible Hadoop sort. The same Hadoop example sort can be accomplished with the hadoop-utils sort as follows:

shell$ $HADOOP_HOME/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \
  com.alexholmes.hadooputils.sort.Sort \
  /hdfs/path/to/input \
  /hdfs/path/to/output

To bring sorting in MapReduce closer to the Linux sort, the --key and --field-separator options can be used to specify one or more columns that should be used for sorting, as well as a custom separator (whitespace is the default). For example, imagine you had a file in HDFS called /input/300names.txt which contained first and last names:

shell$ hadoop fs -cat 300names.txt | head -n 5
roy franklin
mario gardner
willis romero
max wilkerson
latoya larson

To sort on the last name you would run:

shell$ $HADOOP_HOME/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \
  com.alexholmes.hadooputils.sort.Sort \
  --key 2 \
  /input/300names.txt \
  /hdfs/path/to/output

The syntax of --key is pos1[,pos2], where the first position (pos1) is required and the second position (pos2) is optional; if it's omitted then pos1 through the rest of the line is used for sorting. Just like the Linux sort, --key is 1-based, so --key 2 in the above example will sort on the second column in the file.

LZOP integration: Another trick that this sort utility has is its tight integration with LZOP, a useful compression codec that works well with large files in MapReduce (see chapter 5 of Hadoop in Practice for more details on LZOP). It can work with LZOP input files that span multiple splits, and can also LZOP-compress outputs, and even create LZOP index files.
You would do this with the --codec and --lzop-index options:

shell$ $HADOOP_HOME/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \
  com.alexholmes.hadooputils.sort.Sort \
  --key 2 \
  --codec com.hadoop.compression.lzo.LzopCodec \
  --map-codec com.hadoop.compression.lzo.LzoCodec \
  --lzop-index \
  /hdfs/path/to/input \
  /hdfs/path/to/output

Multiple reducers and total ordering: If your sort job runs with multiple reducers (either because mapreduce.job.reduces in mapred-site.xml has been set to a number larger than 1, or because you've used the -r option to specify the number of reducers on the command line), then by default Hadoop will use the HashPartitioner to distribute records across the reducers. Use of the HashPartitioner means that you can't concatenate your output files to create a single sorted output file. To do this you'll need total ordering, which is supported by both the Hadoop example sort and the hadoop-utils sort; the hadoop-utils sort enables this with the --total-order option.

shell$ $HADOOP_HOME/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \
  com.alexholmes.hadooputils.sort.Sort \
  --total-order 0.1 10000 10 \
  /hdfs/path/to/input \
  /hdfs/path/to/output

The syntax for this option is unintuitive, so let's look at what each field means. More details on total ordering can be seen in chapter 4 of Hadoop in Practice.

More details: For details on how to download and run the hadoop-utils sort, take a look at the CLI guide on the GitHub project page.
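Under the hood, an identity job is all that is required, because the framework itself sorts the map output keys. A minimal driver sketch using the newer org.apache.hadoop.mapreduce API could look like the one below; it assumes a Hadoop 2.x client, and since the bundled example and hadoop-utils use the older mapred API, this is an illustration rather than their actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IdentitySortJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "identity sort");
        job.setJarByClass(IdentitySortJob.class);

        // The base Mapper and Reducer classes pass records through unchanged,
        // so the only work done is the framework's own sort on the map output keys.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);

        // One reducer by default, so the single output file is fully sorted.
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}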
January 26, 2013
by Alex Holmes
· 15,114 Views
Fun with the MySQL pager command
This post comes from Stephane Combaudon at the MySQL Performance Blog. Last time I wrote about a few tips that can make you more efficient when using the command line on Unix. Today I want to focus more on pager. The most common usage of pager is to set it to a Unix pager such as less. It can be very useful to view the result of a command spanning over many lines (for instance SHOW ENGINE INNODB STATUS): mysql> pager less PAGER set to 'less' mysql> show engine innodb status\G [...] Now you are inside less and you can easily navigate through the result set (use q to quit, space to scroll down, etc). Reminder: if you want to leave your custom pager, this is easy, just run pager: mysql> pager Default pager wasn't set, using stdout. Or \n: mysql> \n PAGER set to stdout But the pager command is not restricted to such basic usage! You can pass the output of queries to most Unix programs that are able to work on text. We have discussed the topic, but here are a few more examples. Discarding the result set Sometimes you don’t care about the result set, you only want to see timing information. This can be true if you are trying different execution plans for a query by changing indexes. Discarding the result is possible with pager: mysql> pager cat > /dev/null PAGER set to 'cat > /dev/null' # Trying an execution plan mysql> SELECT ... 1000 rows in set (0.91 sec) # Another execution plan mysql> SELECT ... 1000 rows in set (1.63 sec) Now it’s much easier to see all the timing information on one screen. Comparing result sets Let’s say you are rewriting a query and you want to check if the result set is the same before and after rewrite. Unfortunately, it has a lot of rows: mysql> SELECT ... [..] 989 rows in set (0.42 sec) Instead of manually comparing each row, you can calculate a checksum and only compare the checksum: mysql> pager md5sum PAGER set to 'md5sum' # Original query mysql> SELECT ... 32a1894d773c9b85172969c659175d2d - 1 row in set (0.40 sec) # Rewritten query - wrong mysql> SELECT ... fdb94521558684afedc8148ca724f578 - 1 row in set (0.16 sec) Hmmm, checksums don’t match, something is wrong. Let’s retry: # Rewritten query - correct mysql> SELECT ... 32a1894d773c9b85172969c659175d2d - 1 row in set (0.17 sec) Checksums are identical, the rewritten query is much likely to produce the same result as the original one. Cleaning up SHOW PROCESSLIST If you have lots of connections on your MySQL, it’s very difficult to read the output of SHOW PROCESSLIST. For instance, if you have several hundreds of connections and you want to know how many connections are sleeping, manually counting the rows from the output of SHOW PROCESSLIST is probably not the best solution. With pager, it is straightforward: mysql> pager grep Sleep | wc -l PAGER set to 'grep Sleep | wc -l' mysql> show processlist; 337 346 rows in set (0.00 sec) This should be read as ’337 out 346 connections are sleeping’. Slightly more complicated now: you want to know the number of connections for each status: mysql> pager awk -F '|' '{print $6}' | sort | uniq -c | sort -r PAGER set to 'awk -F '|' '{print $6}' | sort | uniq -c | sort -r' mysql> show processlist; 309 Sleep 3 2 Query 2 Binlog Dump 1 Command Astute readers will have noticed that these questions could have been solved by querying INFORMATION_SCHEMA. 
For instance, counting the number of sleeping connections can be done with:

mysql> SELECT COUNT(*) FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND='Sleep';
+----------+
| COUNT(*) |
+----------+
|      320 |
+----------+

and counting the number of connections for each status can be done with:

mysql> SELECT COMMAND,COUNT(*) TOTAL FROM INFORMATION_SCHEMA.PROCESSLIST GROUP BY COMMAND ORDER BY TOTAL DESC;
+-------------+-------+
| COMMAND     | TOTAL |
+-------------+-------+
| Sleep       |   344 |
| Query       |     5 |
| Binlog Dump |     2 |
+-------------+-------+

True, but: it's nice to know several ways to get the same result, and some of you may feel more comfortable writing SQL queries, while others will prefer command line tools.

Conclusion: As you can see, pager is your friend! It's very easy to use and it can solve problems in an elegant and very efficient way. You can even write your own custom script (if it is too complicated to fit in a single line) and pass it to the pager.
January 24, 2013
by Peter Zaitsev
· 13,557 Views
OAuth 2.0 Bearer Token Profile Vs MAC Token Profile
Almost all the implementations I see today are based on the OAuth 2.0 Bearer Token Profile. Of course, it's an RFC proposed standard today. The OAuth 2.0 Bearer Token profile brings a simplified scheme for authentication. This specification describes how to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a "bearer") can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport. Before digging into the OAuth 2.0 MAC profile, let's have a quick high-level overview of the OAuth 2.0 message flow. OAuth 2.0 has mainly three phases: 1. Requesting an Authorization Grant. 2. Exchanging the Authorization Grant for an Access Token. 3. Accessing the resources with the Access Token. Where does the token type come into action? The OAuth 2.0 core specification does not mandate any token type. At the same time, at no point can the token requester - the client - decide which token type it needs. It's purely up to the Authorization Server to decide which token type is to be returned in the Access Token response. So, the token type comes into action in phase 2, when the Authorization Server returns the OAuth 2.0 Access Token. The access token type provides the client with the information required to successfully utilize the access token to make a protected resource request (along with type-specific attributes). The client must not use an access token if it does not understand the token type. Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a protected resource request. For example, the following is what you get in the Access Token response irrespective of which grant type you use:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{ "access_token":"mF_9.B5f-4.1JqM", "token_type":"Bearer", "expires_in":3600, "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA" }

The above is for Bearer; the following is for MAC:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{ "access_token":"SlAV32hkKG", "token_type":"mac", "expires_in":3600, "refresh_token":"8xLOxBtZp8", "mac_key":"adijq39jdlaska9asud", "mac_algorithm":"hmac-sha-256" }

Here you can see the MAC Access Token response has two additional attributes: mac_key and mac_algorithm. Let me rephrase this: "Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter". The MAC Token Profile defines the HTTP MAC access authentication scheme, providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host. In the above response, access_token is the MAC key identifier. Unlike Bearer, the MAC token profile never passes its top secret over the wire. The access_token, or the MAC key identifier, is a string identifying the MAC key used to calculate the request MAC. The string is usually opaque to the client. The server typically assigns a specific scope and lifetime to each set of MAC credentials. The identifier may denote a unique value used to retrieve the authorization information (e.g.
from a database), or self-contain the authorization information in a verifiable manner (i.e. a string consisting of some data and a signature). The mac_key is a shared symmetric secret used as the MAC algorithm key. The server will not reissue a previously issued MAC key and MAC key identifier combination. Now let's see what happens in phase 3. The following shows what the Authorization HTTP header looks like when a Bearer token is used:

Authorization: Bearer mF_9.B5f-4.1JqM

This adds very low overhead on the client side. It simply needs to pass the exact access_token it got from the Authorization Server in phase 2. Under the MAC token profile, this is how it looks:

Authorization: MAC id="h480djs93hd8", ts="1336363200", nonce="dj83hs9s", mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM="

This needs a bit more attention. id is the MAC key identifier, or the access_token from phase 2. ts is the request timestamp. The value is a positive integer set by the client when making each request to the number of seconds elapsed from a fixed point in time (e.g. January 1, 1970 00:00:00 GMT). This value is unique across all requests with the same timestamp and MAC key identifier combination. nonce is a unique string generated by the client. The value is unique across all requests with the same timestamp and MAC key identifier combination. The client uses the MAC algorithm and the MAC key to calculate the request mac. This is how you derive the normalized string to generate the HMAC. The normalized request string is a consistent, reproducible concatenation of several of the HTTP request elements into a single string. By normalizing the request into a reproducible string, the client and server can both calculate the request MAC over the exact same value. The string is constructed by concatenating together, in order, the following HTTP request elements, each followed by a new line character (%x0A): 1. The timestamp value calculated for the request. 2. The nonce value generated for the request. 3. The HTTP request method in upper case. For example: "HEAD", "GET", "POST", etc. 4. The HTTP request-URI as defined by [RFC2616] section 5.1.2. 5. The hostname included in the HTTP request using the "Host" request header field in lower case. 6. The port as included in the HTTP request using the "Host" request header field. If the header field does not include a port, the default value for the scheme MUST be used (e.g. 80 for HTTP and 443 for HTTPS). 7. The value of the "ext" "Authorization" request header field attribute if one was included in the request (this is optional), otherwise an empty string. Each element is followed by a new line character (%x0A), including the last element and even when an element value is an empty string. Whether you use Bearer or MAC, the end user or the resource owner is identified using the access_token. Authorization, throttling, monitoring or any other quality-of-service operations can be carried out against the access_token irrespective of which token profile you use.
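To make the request MAC calculation above concrete, here is a minimal Java sketch of building the normalized request string and the HMAC-SHA-256 value. The key and identifier are the sample values from the responses above, while the request URI and host are illustrative placeholders:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;

public class MacTokenExample {
    public static void main(String[] args) throws Exception {
        // Values issued by the Authorization Server in phase 2 (sample values from above).
        String macKeyId = "h480djs93hd8";        // the access_token / MAC key identifier
        String macKey = "adijq39jdlaska9asud";   // the shared mac_key, never sent over the wire

        // The normalized request string: each element followed by a newline,
        // in the order defined by the MAC token profile.
        String normalized = "1336363200" + "\n"  // ts
                + "dj83hs9s" + "\n"              // nonce
                + "GET" + "\n"                   // HTTP method, upper case
                + "/resource/1?b=1&a=2" + "\n"   // request URI (illustrative)
                + "example.com" + "\n"           // host, lower case
                + "80" + "\n"                    // port
                + "" + "\n";                     // ext (empty string here)

        Mac hmac = Mac.getInstance("HmacSHA256"); // mac_algorithm "hmac-sha-256"
        hmac.init(new SecretKeySpec(macKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String requestMac = DatatypeConverter.printBase64Binary(
                hmac.doFinal(normalized.getBytes(StandardCharsets.UTF_8)));

        System.out.println("Authorization: MAC id=\"" + macKeyId
                + "\", ts=\"1336363200\", nonce=\"dj83hs9s\", mac=\"" + requestMac + "\"");
    }
}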
January 24, 2013
by Prabath Siriwardena
· 36,211 Views
Create Post Tags With CSS
In this tutorial we are going to create post tags using CSS; we will also create a WordPress widget that will allow you to display these post tags in the sidebar of your WordPress site. The above image shows the tags we are going to create. In WordPress you can also display the number of posts that have these tags assigned to them; we will display this number on the right of the tag and show the count on hover. View the demo to see what we are going to create. Demo

To start with we are going to define the HTML we need to use. The tags are going to be list items, and inside each list item is a span tag with the number of posts that have the tag assigned to them. Admin 11 There is a CSS class on the unordered list which we can use to style the child elements of the list items. 404 1Admin 11Animation 13API 11Border 3box-sizing 1CSS 12CSS3 14cURL 3Database 6Effects 2Element 3Error 8Freebies 8Google 16htaccess 10HTML 8HTML5 12JavaScript 25Search 16SEO 11Snippet 6Tutorial 11Web 8WordPress 27

Styling The Tags: This will create the length and background colours of the tags; there is padding on the left of the tags which creates enough room for the hole in the tag and the arrow on the end. .posttags { list-style:none; } .posttags li { background:#eee; border-radius: 0 20px 20px 0; display: block; height:24px; line-height: 24px; margin: 1em 0.7em; padding:5px 5px 5px 20px; position: relative; text-align: left; width:124px; } .posttags li a { color: #56a3d5; display: block; }

Using the CSS pseudo class :before we can create a new element that is styled as a triangle; this will need to be positioned before the rectangular box, which creates the tag look. .posttags li:before { content:""; float:left; position:absolute; top:0; left:-17px; width:0; height:0; border-color:transparent #eee transparent transparent; border-style:solid; border-width:17px 17px 17px 0; }

The code below creates the hole on the left of the tag; this uses the CSS pseudo class :after to create a new element, and with absolute positioning we can place it in the exact spot for the hole. .posttags li:after { content:""; position:absolute; top:15px; left:2px; float:left; width:6px; height:6px; -moz-border-radius:2px; -webkit-border-radius:2px; border-radius:2px; background:#fff; -moz-box-shadow:-1px -1px 2px #004977; -webkit-box-shadow:-1px -1px 2px #004977; box-shadow:-1px -1px 2px #004977; }

The span tag is used to display the count of posts assigned to the tag. This post count will start off as a border on the right side of the tag, and when we hover over the tag we will use CSS transitions to expand the width of the span to display the tag count. .posttags li span { background: #56a3d5; border-color: #3591cd #318cc7 #2f86be; background-image: -webkit-linear-gradient(top, #6aaeda, #4298d0); background-image: -moz-linear-gradient(top, #6aaeda, #4298d0); background-image: -o-linear-gradient(top, #6aaeda, #4298d0); background-image: linear-gradient(to bottom, #6aaeda, #4298d0); border-radius: 0 20px 20px 0; color: transparent; display: inline-block; height:24px; line-height: 24px; padding:5px; width:0; position: absolute; text-align: center; top:-1px; right:0; -webkit-transition: width 0.3s ease-out; -moz-transition: width 0.3s ease-out; -o-transition: width 0.3s ease-out; transition: width 0.3s ease-out; }

On the hover event of the list item we can now change the width of the span tag and the colour of the text to make it display the post count.
.posttags li:hover span{ width:20px; color: #FFF; }

View the demo to see what this will create. Demo

Create A WordPress Widget To Display The Post Tags

The following code can be used to create a WordPress widget to display all the post tags on your WordPress site. For this widget we will use the widget boilerplate, which inherits from the WP_Widget class to easily display the widget, creates a form for the widget settings and adds the stylesheet to display the post tags. To get all the post tags for your WordPress site you need to use the WordPress function get_tags(), which returns an array of tag objects. From each tag object you have access to all the data for that tag, including the post count. The tag object also gives us the tag ID, so we can use the WordPress function get_tag_link( $tagId ) to link the entire tag to the tag page. 'pu_sliding_tag_count', 'description' => __('A widget that allows you to add a widget to display sliding post tags.', 'framework') ) ); } // end constructor /** * Adding styles */ public function add_styles() { wp_register_style( 'pu_sliding_tags', plugins_url('sliding-tag-count.css', __FILE__) ); wp_enqueue_style('pu_sliding_tags'); } /** * Front-end display of widget. * * @see WP_Widget::widget() * * @param array $args Widget arguments. * @param array $instance Saved values from database. */ public function widget( $args, $instance ) { extract( $args ); // Our variables from the widget settings $title = apply_filters('widget_title', $instance['title'] ); // Before widget (defined by theme functions file) echo $before_widget; // Display the widget title if one was input if ( $title ) echo $before_title . $title . $after_title; ?> 0 ) { echo ' '; foreach($posttags as $posttag) { echo ' term_id).'">'.$posttag->name.' '.$posttag->count.' '; } echo ' '; } ?> $v){ $instance[$k] = strip_tags($v); } return $instance; } /** * Create the form for the Widget admin * * @see WP_Widget::form() * * @param array $instance Previously saved values from database. */ function form( $instance ) { // Set up some default widget settings $defaults = array( 'title' => 'Post Tags' ); $instance = wp_parse_args( (array) $instance, $defaults ); ?> Copy the above code into a new PHP file and place it inside your plugin folder. You will then see this new widget on your plugin screen; once it is activated you will see the new widget on the widget dashboard screen.
January 23, 2013
by Paul Underwood
· 5,824 Views
How to Publish Maven Site Docs to BitBucket or GitHub Pages
In this post we will utilize GitHub's and/or BitBucket's static web page hosting capabilities to publish our project's Maven 3 site documentation. Each of the two SCM providers offers a slightly different solution to host static pages. The approach spelled out in this post would also be a viable solution to "backup" your site documentation in a supported SCM like Git or SVN. This solution does not directly cover site documentation deployment handled by the maven-site-plugin and the Wagon library (scp, WebDAV or FTP). There is one main project hosted on GitHub that I have posted with the full solution. The project URL is https://github.com/mike-ensor/clickconcepts-master-pom/. The POM has been pushed to Maven Central and will continue to be updated and maintained; it can be referenced with the coordinates com.clickconcepts.project:master-site-pom:0.16.

GitHub Pages

GitHub hosts static pages by using a special branch "gh-pages" available to each GitHub project. This special branch can host any HTML and local resources like JavaScript, images and CSS. There is no server-side development. To navigate to your static pages, the URL structure is as follows: http://[username].github.com/[project]. An example of the project I am using in this blog post is http://mike-ensor.github.com/clickconcepts-master-pom/, where the first segment is the username and the second segment is the project. GitHub does allow you to create a base static hosted site for your username by creating a repository named username.github.com; its contents would be all of your HTML and associated static resources. This is not required to post documentation for your project, unlike the BitBucket solution. There is a GitHub Site plugin that publishes site documentation via GitHub's object API, but it is outside the scope of this blog post because it does not provide a single solution for GitHub and BitBucket projects using Maven 3.

BitBucket

BitBucket provides a similar service to GitHub in that it hosts static HTML pages and their associated static resources. However, there is one large difference in how those pages are stored. Unlike GitHub, BitBucket requires you to create a new repository with a name fitting the convention. The files will be located on the master branch and each project will need to be a directory off of the root:

mikeensor.bitbucket.org/
  /some-project
    +index.html
    +...
    /css
    /img
  /some-other-project
    +index.html
    +...
    /css
    /img
  index.html
  .git
  .gitignore

The naming convention is as follows: [username].bitbucket.org. An example of a BitBucket static pages repository for me would be http://mikeensor.bitbucket.org/. The structure does not require that you create an index.html page at the root of the project, but it would be advisable in order to avoid 404s.

Generating Site Documentation

Maven provides the ability to post documentation for your project by using the maven-site-plugin. This plugin is difficult to use due to the many configuration options that oftentimes are not well documented. There are many blog posts that can help you write your documentation, including my post on Maven site documentation. I did not mention how to use "xdoc", "apt" or other templating technologies to create documentation pages, but not to fear, I have provided this in my GitHub project.

Putting it all Together

The Maven SCM Publish plugin (http://maven.apache.org/plugins/maven-scm-publish-plugin/) publishes site documentation to a supported SCM. In our case, we are going to use Git through BitBucket or GitHub.
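Before diving into the details, here is a rough sketch of what a maven-scm-publish-plugin configuration aimed at GitHub Pages can look like. This is not the actual master POM configuration; the branch, repository URL and content location below are assumptions based on the description that follows, and the plugin version is omitted:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-scm-publish-plugin</artifactId>
    <configuration>
        <!-- GitHub serves static pages from the gh-pages branch -->
        <scmBranch>gh-pages</scmBranch>
        <!-- repository to publish to; replace with your own project -->
        <pubScmUrl>scm:git:git@github.com:mike-ensor/clickconcepts-master-pom.git</pubScmUrl>
        <!-- folder where the site plugin wrote the generated documentation -->
        <content>${project.build.directory}/site</content>
    </configuration>
</plugin>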
The Maven SCM Publish Plugin does allow you to publish multi-module site documentation through various properties, but the scope of this blog post is to cover single/mono-module projects, and the multi-module process is a bit painful. Take a moment to look at the POM file located in the clickconcepts-master-pom project. This master POM is rather comprehensive and the site documentation is only one portion of the project, but we will focus on the site documentation. There are a few things to point out here: first, the scm-publish plugin and the idiosyncrasies encountered when implementing it. In order to create the site documentation, the "site" plugin must first be run. This is accomplished by running site:site. The plugin will generate the documentation into the "target/site" folder by default. The SCM Publish Plugin, by default, looks for the site documents in "target/staging"; this location is controlled by the content parameter. As you can see, there is a mismatch between folders. NOTE: My first approach was to run the site:stage command, which is supposed to put the site documents into the "target/staging" folder. This is not entirely correct: the site plugin combines with the distributionManagement.site.url property to stage the documents, but there is very strange behavior and it is not documented well. In order to get the site plugin's output and the SCM Publish Plugin's location to match up, use the content property and set it to the location of the site plugin output (the siteOutputDirectory). If you are using GitHub, no modification to the siteOutputDirectory is needed; however, if you are using BitBucket, you will need to modify the property to add a directory layer into the site documentation generation (see above for the differences between GitHub and BitBucket pages). The second property tells the SCM Publish Plugin to look at the root "site" folder so that when the files are copied into the repository, the project folder will be the containing folder. For BitBucket the siteOutputDirectory therefore becomes ${project.build.directory}/site/${project.artifactId}, while the content folder handed to the SCM Publish Plugin stays ${project.build.directory}/site.

Next we will take a look at the custom properties defined in the master POM and used by the SCM Publish Plugin above. Each project will need to define several properties in order to use the master POM; they are used within the plugins during site publishing. Fill in the variables with your own settings.

BitBucket: the publish branch is master, the publish SCM URL is scm:git:git@bitbucket.org:mikeensor/mikeensor.bitbucket.org.git, the site output directory is ${project.build.directory}/site/${project.artifactId}, the content directory is ${project.build.directory}/site, and the changelog properties are ${changelog.bitbucket.fileUri} and ${changelog.revision.bitbucket.fileUri}.

GitHub: the publish branch is gh-pages, the publish SCM URL is scm:git:git@github.com:mikeensor/clickconcepts-master-pom.git, and the changelog properties are ${changelog.github.fileUri} and ${changelog.revision.github.fileUri}.

NOTE: the changelog parameters are required to use the master POM and are not directly related to publishing site docs to GitHub or BitBucket.

How to Generate

If you are using the master POM (or have abstracted out the site plugin and the SCM Publish Plugin), generating and publishing the documentation is simple:

mvn clean site:site scm-publish:publish-scm

Add -Dscmpublish.dryRun=true to rehearse the publish step without committing anything:

mvn clean site:site scm-publish:publish-scm -Dscmpublish.dryRun=true

Gotchas

In the SCM Publish Plugin documentation's "tips" they recommend creating a location to place the repository so that the repo is not cloned each time. There is a risk here: if there is a git repository already in that folder, the plugin will overwrite the repository with the new site documentation.
This was discovered by publishing two different projects and having my root repository wiped out by documentation from the second project. There are ways to mitigate this by adding in another folder layer, but make sure you test often! Another tip: use -Dscmpublish.dryRun=true to test out the site documentation process without making the SCM commit and push.

Project and Documentation URLs

Here is a list of the fully working projects used to create this blog post: Master POM with Site and SCM Publish plugins – https://github.com/mike-ensor/clickconcepts-master-pom. Documentation URL: http://mike-ensor.github.com/clickconcepts-master-pom/ Child project using the master POM – http://mikeensor.bitbucket.org/fest-expected-exception. Documentation URL: http://mikeensor.bitbucket.org/fest-expected-exception/
January 23, 2013
by Mike Ensor
· 13,133 Views
Spring Data JDBC Generic DAO Implementation: Most Lightweight ORM Ever
I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy to use DAO implementation for relational databases, based on JdbcTemplate from the Spring framework and compatible with the Spring Data umbrella of projects.

Design objectives:
- Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection
- This is not a full-blown ORM. No relationship handling, lazy loading, dirty checking, caching
- CRUD implemented in seconds
- For small applications where JPA is an overkill
- Use when simplicity is needed or when future migration e.g. to JPA is considered
- Minimalistic support for database dialect differences (e.g. transparent paging of results)

Features. Each DAO provides built-in support for:
- Mapping to/from domain objects through the RowMapper abstraction
- Generated and user-defined primary keys
- Extracting generated keys
- Compound (multi-column) primary keys
- Immutable domain objects
- Paging (requesting a subset of results)
- Sorting over several columns (database agnostic)
- Optional support for many-to-one relationships
- Supported databases (continuously tested): MySQL, PostgreSQL, H2, HSQLDB, Derby ...and most likely most of the others
- Easily extendable to other database dialects via the SqlGenerator class
- Easy retrieval of records by ID

API. Compatible with the Spring Data PagingAndSortingRepository abstraction; all these methods are implemented for you:

public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> { T save(T entity); Iterable<T> save(Iterable<? extends T> entities); T findOne(ID id); boolean exists(ID id); Iterable<T> findAll(); long count(); void delete(ID id); void delete(T entity); void delete(Iterable<? extends T> entities); void deleteAll(); Iterable<T> findAll(Sort sort); Page<T> findAll(Pageable pageable); }

Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have a userRepository extending the PagingAndSortingRepository interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 rows per page, after applying some sorting:

Page<User> page = userRepository.findAll( new PageRequest( 5, 10, new Sort( new Order(DESC, "reputation"), new Order(ASC, "user_name") ) ) );

The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax):

SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC LIMIT 10 OFFSET 50

...or even (Derby syntax):

SELECT * FROM ( SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.* FROM ( SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC ) AS t ) AS a WHERE ROW_NUM BETWEEN 51 AND 60

No matter which database you use, you'll get a Page<User> object in return (you still have to provide a RowMapper yourself to translate from the ResultSet to the domain object). If you don't know the Spring Data project yet, Page is a wonderful abstraction, not only encapsulating a List<T>, but also providing metadata such as the total number of records, which page we are currently on, etc.

Reasons to use:
- You consider migration to JPA or even some NoSQL database in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API.
Of course, don't expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies - but at least you minimize the impact by using the same DAO API.
- You need a fast, simple JDBC wrapper library; JPA or even MyBatis is an overkill
- You want to have full control over the generated SQL if needed
- You want to work with objects, but don't need lazy loading, relationship handling, multi-level caching, dirty checking...
- You need CRUD and not much more
- You want to be DRY
- You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work
- You have very few database tables

Getting started. For more examples and working code don't forget to examine the project tests.

Prerequisites. Maven coordinates: groupId com.blogspot.nurkiewicz, artifactId jdbcrepository, version 0.1. Unfortunately the project is not yet in the Maven central repository. For the time being you can install the library in your local repository by cloning it:

$ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git
$ git checkout 0.1
$ mvn javadoc:jar source:jar install

In order to start, your project must have a DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration:

@EnableTransactionManagement @Configuration public class MinimalConfig { @Bean public PlatformTransactionManager transactionManager() { return new DataSourceTransactionManager(dataSource()); } @Bean public DataSource dataSource() { MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource(); ds.setUser("user"); ds.setPassword("secret"); ds.setDatabaseName("db_name"); return ds; } }

Entity with auto-generated key

Say you have the following database table with an auto-generated key (MySQL syntax):

CREATE TABLE COMMENTS ( id INT AUTO_INCREMENT, user_name varchar(256), contents varchar(1000), created_time TIMESTAMP NOT NULL, PRIMARY KEY (id) );

First you need to create a domain object Comment mapping to that table (just like in any other ORM):

public class Comment implements Persistable<Integer> { private Integer id; private String userName; private String contents; private Date createdTime; @Override public Integer getId() { return id; } @Override public boolean isNew() { return id == null; } //getters/setters/constructors/... }

Apart from standard Java boilerplate you should notice it implements Persistable<Integer>, where Integer is the type of the primary key. Persistable is an interface coming from the Spring Data project and it's the only requirement we place on your domain object. Finally we are ready to create our CommentRepository DAO:

@Repository public class CommentRepository extends JdbcRepository<Comment, Integer> { public CommentRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS"); } public static final RowMapper<Comment> ROW_MAPPER = //see below private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below @Override protected Comment postCreate(Comment entity, Number generatedId) { entity.setId(generatedId.intValue()); return entity; } }

First of all we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation. Also, such annotated beans are discovered by classpath scanning. As you can see we extend JdbcRepository<Comment, Integer>, which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: RowMapper, RowUnmapper and the table name. You may also provide the ID column name, otherwise the default "id" is used. If you have ever used JdbcTemplate from Spring, you should be familiar with the RowMapper interface.
We need to somehow extract columns from the ResultSet into an object; after all, we don't want to work with raw JDBC results. It's quite straightforward:

public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() { @Override public Comment mapRow(ResultSet rs, int rowNum) throws SQLException { return new Comment( rs.getInt("id"), rs.getString("user_name"), rs.getString("contents"), rs.getTimestamp("created_time") ); } };

RowUnmapper comes from this library and it's essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL INSERT / UPDATE queries:

private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() { @Override public Map<String, Object> mapColumns(Comment comment) { Map<String, Object> mapping = new LinkedHashMap<String, Object>(); mapping.put("id", comment.getId()); mapping.put("user_name", comment.getUserName()); mapping.put("contents", comment.getContents()); mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime())); return mapping; } };

If you never update your database table (just reading some reference data inserted elsewhere) you may skip the RowUnmapper parameter or use MissingRowUnmapper. The last piece of the puzzle is the postCreate() callback method, which is called after an object has been inserted. You can use it to retrieve the generated primary key and update your domain object (or return a new one if your domain objects are immutable). If you don't need it, just don't override postCreate(). Check out JdbcRepositoryGeneratedKeyTest for working code based on this example. By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and a learning curve. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations... all the implicitness that is not always desired. This project does not intend to replace mature and stable ORM frameworks. Instead it tries to fill a niche between raw JDBC and ORM where simplicity and low overhead are key features.

Entity with manually assigned key

In this example we'll see how entities with user-defined primary keys are handled. Let's start with the database model:

CREATE TABLE USERS ( user_name varchar(255), date_of_birth TIMESTAMP NOT NULL, enabled BIT(1) NOT NULL, PRIMARY KEY (user_name) );

...and the User domain model:

public class User implements Persistable<String> { private transient boolean persisted; private String userName; private Date dateOfBirth; private boolean enabled; @Override public String getId() { return userName; } @Override public boolean isNew() { return !persisted; } public User withPersisted(boolean persisted) { this.persisted = persisted; return this; } //getters/setters/constructors/... }

Notice that a special persisted transient flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not via the isNew() method - there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above), but in this case we need an extra transient field. If you hate this workaround and you only insert data and never update, you'll get away with returning true all the time from isNew().
And finally our DAO, the UserRepository bean:

@Repository public class UserRepository extends JdbcRepository<User, String> { public UserRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name"); } public static final RowMapper<User> ROW_MAPPER = //... public static final RowUnmapper<User> ROW_UNMAPPER = //... @Override protected User postUpdate(User entity) { return entity.withPersisted(true); } @Override protected User postCreate(User entity, Number generatedId) { return entity.withPersisted(true); } }

The "USERS" and "user_name" parameters designate the table name and the primary key column name. I'll leave out the details of the mapper and unmapper (see the source code). But please notice the postUpdate() and postCreate() methods. They ensure that once an object has been persisted, the persisted flag is set, so that subsequent calls to save() will update the existing entity rather than trying to reinsert it. Check out JdbcRepositoryManualKeyTest for working code based on this example.

Compound primary key

We also support compound primary keys (primary keys consisting of several columns). Take this table as an example:

CREATE TABLE BOARDING_PASS ( flight_no VARCHAR(8) NOT NULL, seq_no INT NOT NULL, passenger VARCHAR(1000), seat CHAR(3), PRIMARY KEY (flight_no, seq_no) );

I would like you to notice the type of the primary key in Persistable<Object[]>:

public class BoardingPass implements Persistable<Object[]> { private transient boolean persisted; private String flightNo; private int seqNo; private String passenger; private String seat; @Override public Object[] getId() { return pk(flightNo, seqNo); } @Override public boolean isNew() { return !persisted; } //getters/setters/constructors/... }

Unfortunately we don't support small value classes encapsulating all ID values in one object (like JPA does with @IdClass), so you have to live with an Object[] array. Defining the DAO class is similar to what we've already seen:

public class BoardingPassRepository extends JdbcRepository<BoardingPass, Object[]> { public BoardingPassRepository() { this("BOARDING_PASS"); } public BoardingPassRepository(String tableName) { super(ROW_MAPPER, UNMAPPER, new TableDescription(tableName, null, "flight_no", "seq_no") ); } public static final RowMapper<BoardingPass> ROW_MAPPER = //... public static final RowUnmapper<BoardingPass> UNMAPPER = //... }

Two things to notice: we extend JdbcRepository<BoardingPass, Object[]> and we provide two ID column names, just as expected: "flight_no", "seq_no". We query such a DAO by providing both flight_no and seq_no values (necessarily in that order) wrapped in an Object[]:

BoardingPass pass = repository.findOne(new Object[] {"FOO-1022", 42});

No doubt this is cumbersome in practice, so we provide a tiny helper method which you can statically import:

import static com.blogspot.nurkiewicz.jdbcrepository.JdbcRepository.pk; //... BoardingPass foundFlight = repository.findOne(pk("FOO-1022", 42));

Check out JdbcRepositoryCompoundPkTest for working code based on this example.

Transactions

This library is completely orthogonal to transaction management. Every method of each repository requires a running transaction and it's up to you to set it up. Typically you would place @Transactional on the service layer (calling the DAO beans). I don't recommend placing @Transactional over every DAO bean.

Caching

The Spring Data JDBC repository library does not provide any caching abstraction or support. However, adding a @Cacheable layer on top of your DAOs or services using the caching abstraction in Spring is quite straightforward. See also: @Cacheable overhead in Spring.

Contributions

...are always welcome. Don't hesitate to submit bug reports and pull requests.
The biggest missing feature right now is support for MSSQL and Oracle databases. It would be terrific if someone could have a look at it.

Testing

This library is continuously tested using Travis CI. The test suite consists of 265 tests (53 distinct tests, each run against 5 different databases: MySQL, PostgreSQL, H2, HSQLDB and Derby). When filing bug reports or submitting new features please try to include supporting test cases. Each pull request is automatically tested on a separate branch.

Building

After forking the official repository, building is as simple as running:

$ mvn install

You'll notice plenty of exceptions during JUnit test execution. This is normal. Some of the tests run against MySQL and PostgreSQL, which are available only on the Travis CI server. When these database servers are unavailable, the whole test is simply skipped:

Results : Tests run: 265, Failures: 0, Errors: 0, Skipped: 106

The exception stack traces come from the root AbstractIntegrationTest.

Design

The library consists of only a handful of classes. JdbcRepository is the most important one: it implements all PagingAndSortingRepository methods, and each user repository has to extend it. Also, each such repository must at least provide a RowMapper and a RowUnmapper (the latter only if you want to modify table data). SQL generation is delegated to SqlGenerator. PostgreSqlGenerator and DerbySqlGenerator are provided for databases that don't work with the standard generator.

License

This project is released under version 2.0 of the Apache License (the same as the Spring framework).
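To make the transaction advice above concrete, here is a minimal sketch (not part of the library) of a service bean that wraps the repository calls in transactions. The CommentService name and the Comment setters are assumptions based on the getters/setters omitted from the listings above:

import java.util.Date;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CommentService {

    @Autowired
    private CommentRepository commentRepository;

    // @Transactional lives on the service layer, not on the DAO bean,
    // so the repository call below runs inside one transaction.
    @Transactional
    public Comment addComment(String userName, String contents) {
        Comment comment = new Comment();
        comment.setUserName(userName);       // assumed setter
        comment.setContents(contents);       // assumed setter
        comment.setCreatedTime(new Date());  // assumed setter
        return commentRepository.save(comment);
    }

    // Read-only transaction; paging and sorting use database column names
    @Transactional(readOnly = true)
    public Page<Comment> newestComments(int page, int size) {
        return commentRepository.findAll(
            new PageRequest(page, size, new Sort(Sort.Direction.DESC, "created_time")));
    }
}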
January 22, 2013
by Tomasz Nurkiewicz DZone Core CORE
· 75,998 Views · 2 Likes
Linq performance with Count() and Any()
Recently I was using ReSharper to refactor some of my code and found that it was suggesting the Any() extension method instead of the Count() method on a List. I was really keen on performance and found that ReSharper was right. There is a huge performance difference between Any() and Count() if you are using Count() just to check whether a list is empty or not.

Difference between Count() and Any(): The Count() method traverses the whole sequence to get the total number of objects, while Any() returns after examining the first element in the sequence. So on a list with many objects there will be a significant difference in execution time if we use Count().

Example: Let’s take an example to illustrate this scenario. Following is the code for that.

using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace ConsoleApplication3 { class Program { static void Main() { //Creating List of customers List<Customer> customers = new List<Customer>(); for (int i = 0; i <= 100; i++) { Customer customer = new Customer { CustomerId = i, CustomerName = string.Format("Customer{0}", i) }; customers.Add(customer); } //Measuring time with Count() Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); if (customers.Count() > 0) { Console.WriteLine("Customer list is not empty with count"); } stopWatch.Stop(); Console.WriteLine("Time consumed with count: {0}", stopWatch.Elapsed); //Measuring time with Any() stopWatch.Restart(); if (customers.Any()) { Console.WriteLine("Customer list is not empty with any"); } stopWatch.Stop(); Console.WriteLine("Time consumed with any: {0}", stopWatch.Elapsed); } } public class Customer { public int CustomerId { get; set; } public string CustomerName { get; set; } } }

Here in the above code you can see that I created a ‘Customer’ class which has two simple properties, ‘CustomerId’ and ‘CustomerName’. Then in the Main method I created a list of customers, using a for loop and an object initializer to fill the customers list. After that I wrote code to measure the time taken by Count() and Any() with the ‘Stopwatch’ class and printed the elapsed time. It’s pretty simple. Now let’s run this console application to see the output.

Conclusion: In the above example you can see there is a huge performance benefit to using Any() instead of Count() when we are checking whether a list is empty or not. Hope you like it. Stay tuned for more...
January 21, 2013
by Jalpesh Vadgama
· 24,564 Views
ActiveMQ: Securing the ActiveMQ Web Console in Tomcat
This post will demonstrate how to secure the ActiveMQ Web Console with a username and password when deployed in the Apache Tomcat web server. The Apache ActiveMQ documentation on the Web Console provides a good example of how this is done for Jetty, which is the default web server shipped with ActiveMQ, and this post will show how this is done when deploying the web console in Tomcat. To demonstrate, the first thing you will need to do is grab the latest distribution of ActiveMQ. For the purpose of this demonstration I will be using the 5.5.1-fuse-09-16 release, which can be obtained via the Red Hat Support Portal or via the FuseSource repository. The downloads used in this post are: ActiveMQ, the ActiveMQ Web Console, Tomcat and mysql-connector-java-5.1.18-bin.jar. Once you have the distributions, extract and start the broker. If you don't already have Tomcat installed, grab it as well; I am using Tomcat 6.0.36 in this demonstration. Next, create a directory called activemq-console in the Tomcat webapps directory and extract the ActiveMQ Web Console war into it by using the jar -xf command.

With all the binaries installed and our broker running, we can begin configuring our web app and Tomcat to secure the Web Console. First, open the ActiveMQ Web Console's web descriptor, which can be found at activemq-console/WEB-INF/web.xml, and add the following configuration:

<security-constraint> <web-resource-collection> <web-resource-name>Authenticate entire app</web-resource-name> <url-pattern>/*</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> </web-resource-collection> <auth-constraint> <role-name>activemq</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>NONE</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> </login-config>

This configuration enables the security constraint on the entire application, as noted with the /* url-pattern. Another point to notice is the auth-constraint, which has been set to the activemq role; we will define this shortly. And lastly, note that this is configured for basic authentication. This means the username and password are base64 encoded but not truly encrypted. To improve the security further you could enable a secure transport such as SSL.

Now let's configure the Tomcat server to validate the activemq role we just specified in the web app. Out of the box, Tomcat is configured to use the UserDatabaseRealm. This is configured in [TOMCAT_HOME]/conf/server.xml. It instructs the web server to validate against the tomcat-users.xml file, which can be found in [TOMCAT_HOME]/conf as well. Open the tomcat-users.xml file and add the role and a user with that role, along these lines (substitute your own username and password):

<role rolename="activemq"/> <user username="admin" password="admin" roles="activemq"/>

This defines our activemq role and configures a user with that role. The last thing we need to do before starting our Tomcat server is add the required configuration to communicate with the broker. First, copy the activemq-all jar into the Tomcat lib directory. Next, open the catalina.sh/catalina.bat startup script and add the following configuration to initialize the JAVA_OPTS variable:

JAVA_OPTS="-Dwebconsole.jms.url=tcp://localhost:61616 -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi -Dwebconsole.jmx.user= -Dwebconsole.jmx.password="

Now we are ready to start the Tomcat server. Once started, you should be able to access the ActiveMQ Web Console at the following URL: http://localhost:8080/activemq-console. You should be prompted with a basic authentication dialog. Once you enter the user name and password you should get logged into the ActiveMQ Web Console. As I mentioned before, the user name and password are base64 encoded and each request is authenticated against the UserDatabaseRealm. The browser will retain your username and password in memory, so you will need to exit the browser to end the session.
What you have seen so far is simple authentication using the UserDatabaseRealm, which keeps the list of users in a text file. Next we will look at configuring the ActiveMQ Web Console to use a JDBCRealm, which authenticates against users stored in a database. Let's first create a new database using MySQL:

mysql> CREATE DATABASE tomcat_users; Query OK, 1 row affected (0.00 sec) mysql>

Provide the appropriate permissions for this database to a database user:

mysql> GRANT ALL ON tomcat_users.* TO 'activemq'@'localhost'; Query OK, 0 rows affected (0.02 sec) mysql>

Then you can log in to the database and create the following tables:

mysql> USE tomcat_users; Database changed mysql> CREATE TABLE tomcat_users ( -> user_name varchar(20) NOT NULL PRIMARY KEY, -> password varchar(32) NOT NULL -> ); Query OK, 0 rows affected (0.10 sec) mysql> CREATE TABLE tomcat_roles ( -> role_name varchar(20) NOT NULL PRIMARY KEY -> ); Query OK, 0 rows affected (0.05 sec) mysql> CREATE TABLE tomcat_users_roles ( -> user_name varchar(20) NOT NULL, -> role_name varchar(20) NOT NULL, -> PRIMARY KEY (user_name, role_name), -> CONSTRAINT tomcat_users_roles_foreign_key_1 FOREIGN KEY (user_name) REFERENCES tomcat_users (user_name), -> CONSTRAINT tomcat_users_roles_foreign_key_2 FOREIGN KEY (role_name) REFERENCES tomcat_roles (role_name) -> ); Query OK, 0 rows affected (0.06 sec) mysql>

Next seed the tables with the user and role information:

mysql> INSERT INTO tomcat_users (user_name, password) VALUES ('admin', 'dbpass'); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO tomcat_roles (role_name) VALUES ('activemq'); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO tomcat_users_roles (user_name, role_name) VALUES ('admin', 'activemq'); Query OK, 1 row affected (0.00 sec) mysql>

Now we can verify the information in our database:

mysql> select * from tomcat_users; +-----------+----------+ | user_name | password | +-----------+----------+ | admin | dbpass | +-----------+----------+ 1 row in set (0.00 sec) mysql> select * from tomcat_users_roles; +-----------+-----------+ | user_name | role_name | +-----------+-----------+ | admin | activemq | +-----------+-----------+ 1 row in set (0.00 sec) mysql>

If you left the Tomcat server running from the first part of this demonstration, shut it down at this time so we can change the configuration to use the JDBCRealm. In the server.xml file, located in [TOMCAT_HOME]/conf, we need to comment out the existing UserDatabaseRealm and add a JDBCRealm along the lines of the following (the connection password is a placeholder for whatever credentials you gave the activemq database user):

<Realm className="org.apache.catalina.realm.JDBCRealm" driverName="com.mysql.jdbc.Driver" connectionURL="jdbc:mysql://localhost:3306/tomcat_users" connectionName="activemq" connectionPassword="your-db-password" userTable="tomcat_users" userNameCol="user_name" userCredCol="password" userRoleTable="tomcat_users_roles" roleNameCol="role_name"/>

Looking at the JDBCRealm, you can see we are using the MySQL JDBC driver (this is why the mysql-connector-java jar listed earlier is needed; place it in the Tomcat lib directory), the connection URL is configured to connect to the tomcat_users database using the specified credentials, and the table and column names used in our database have been specified. Now the Tomcat server can be started again. This time when you log in to the ActiveMQ Web Console, use the username and password specified when loading the database tables. That's all there is to it; you now know how to configure the ActiveMQ Web Console to use Tomcat's UserDatabaseRealm and JDBCRealm. The following sites were helpful in gathering this information: http://activemq.apache.org/web-console.html http://www.avajava.com/tutorials/lessons/how-do-i-use-a-jdbc-realm-with-tomcat-and-mysql.html?page=1 http://oreilly.com/pub/a/java/archive/tomcat-tips.html?page=1
January 21, 2013
by Jason Sherman
· 11,894 Views
Assign a Fixed IP to an AWS EC2 Instance
As described in my previous post, the IP (and DNS name) of your running EC2 AMI will change after a reboot of that instance. Of course this makes it very hard to make the applications on that machine available to the outside world, like in this case our WordPress blog. That is where Elastic IP comes to the rescue. With this feature you can assign a static IP to your instance. Assign one to your application as follows:

- Click on the Elastic IPs link in the AWS console
- Allocate a new address
- Associate the address with a running instance: right-click to associate the IP with an instance, pick the instance to assign this IP to, and note the IP being assigned to your instance

If you go to the IP address you were assigned, you see the home page of your server. And the nicest thing is that if you stop and start your instance you will receive a new public DNS name, but your instance is still assigned to the Elastic IP address.

One important note: as long as an Elastic IP address is associated with a running instance, there is no charge for it. However, an address that is not associated with a running instance costs $0.01/hour. This prevents users from ‘reserving’ addresses while they are not being used.
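If you prefer to script this instead of clicking through the console, here is a rough sketch using the AWS SDK for Java (not covered in the original post; the credentials and instance id below are placeholders):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AllocateAddressRequest;
import com.amazonaws.services.ec2.model.AllocateAddressResult;
import com.amazonaws.services.ec2.model.AssociateAddressRequest;

public class ElasticIpExample {
    public static void main(String[] args) {
        AmazonEC2Client ec2 = new AmazonEC2Client(new BasicAWSCredentials("accessKey", "secretKey"));

        // Allocate a new Elastic IP address in the account
        AllocateAddressResult allocated = ec2.allocateAddress(new AllocateAddressRequest());
        String elasticIp = allocated.getPublicIp();

        // Associate the address with a running instance (placeholder instance id)
        ec2.associateAddress(new AssociateAddressRequest()
                .withInstanceId("i-12345678")
                .withPublicIp(elasticIp));

        System.out.println("Instance now reachable at " + elasticIp);
    }
}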
January 20, 2013
by Eric Genesky
· 22,656 Views
Using Redis with Spring
As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more often consider using some (or several) of them instead of (or side by side with) a traditional RDBMS. I have already covered my experience with MongoDB in a few earlier posts. In this post I would like to switch gears a bit towards Redis, an advanced key-value store. Aside from very rich key-value semantics, Redis also supports pub-sub messaging and transactions. In this post I am going just to touch the surface and demonstrate how simple it is to integrate Redis into your Spring application.

As always, we will start with the Maven POM file for our project. The project is com.example.spring:redis:0.0.1-SNAPSHOT with jar packaging and UTF-8 source encoding; the spring.version property is set to 3.1.0.RELEASE, and the dependencies are org.springframework.data:spring-data-redis:1.0.0.RELEASE, cglib:cglib-nodep:2.2, log4j:log4j:1.2.16, redis.clients:jedis:2.0.0, org.springframework:spring-core:${spring.version} and org.springframework:spring-context:${spring.version}.

Spring Data Redis is another project under the Spring Data umbrella which provides seamless injection of Redis into your application. There are several Redis clients for Java and I have chosen Jedis, as it is stable and recommended by the Redis team at the moment of writing this post. We will start with a simple configuration and introduce the necessary components first. Then, as we move forward, the configuration will be extended a bit to demonstrate pub-sub capabilities. Thanks to Java config support, we will create a configuration class and have all our dependencies strongly typed; no XML anymore:

package com.example.redis.config; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.data.redis.connection.jedis.JedisConnectionFactory; import org.springframework.data.redis.core.RedisTemplate; import org.springframework.data.redis.serializer.GenericToStringSerializer; import org.springframework.data.redis.serializer.StringRedisSerializer; @Configuration public class AppConfig { @Bean JedisConnectionFactory jedisConnectionFactory() { return new JedisConnectionFactory(); } @Bean RedisTemplate< String, Object > redisTemplate() { final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >(); template.setConnectionFactory( jedisConnectionFactory() ); template.setKeySerializer( new StringRedisSerializer() ); template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) ); template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) ); return template; } }

That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub-sub implementation. Storing and retrieving a key/value pair is very simple:

@Autowired private RedisTemplate< String, Object > template; public Object getValue( final String key ) { return template.opsForValue().get( key ); } public void setValue( final String key, final String value ) { template.opsForValue().set( key, value ); }

Optionally, the key could be set to expire (yet another useful feature of Redis), f.e. let our keys expire in 1 second:

public void setValue( final String key, final String value ) { template.opsForValue().set( key, value ); template.expire( key, 1, TimeUnit.SECONDS ); }

Arbitrary objects could be saved into Redis as hashes (maps), f.e.
let's save an instance of some class User:

public class User { private final Long id; private String name; private String email; // Setters and getters are omitted for simplicity }

into Redis using the key pattern "user:<id>":

public void setUser( final User user ) { final String key = String.format( "user:%s", user.getId() ); final Map< String, Object > properties = new HashMap< String, Object >(); properties.put( "id", user.getId() ); properties.put( "name", user.getName() ); properties.put( "email", user.getEmail() ); template.opsForHash().putAll( key, properties); }

Respectively, the object could easily be inspected and retrieved using the id:

public User getUser( final Long id ) { final String key = String.format( "user:%s", id ); final String name = ( String )template.opsForHash().get( key, "name" ); final String email = ( String )template.opsForHash().get( key, "email" ); return new User( id, name, email ); }

There is much, much more that could be done using Redis; I highly encourage you to take a look at it. It surely is not a silver bullet but it could solve many challenging problems very easily. Finally, let me show how to use pub-sub messaging with Redis. Let's add a bit more configuration here (as part of the AppConfig class):

@Bean MessageListenerAdapter messageListener() { return new MessageListenerAdapter( new RedisMessageListener() ); } @Bean RedisMessageListenerContainer redisContainer() { final RedisMessageListenerContainer container = new RedisMessageListenerContainer(); container.setConnectionFactory( jedisConnectionFactory() ); container.addMessageListener( messageListener(), new ChannelTopic( "my-queue" ) ); return container; }

The style of message listener definition should look very familiar to Spring users: generally, it is the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl; import org.springframework.data.redis.connection.Message; import org.springframework.data.redis.connection.MessageListener; public class RedisMessageListener implements MessageListener { @Override public void onMessage(Message message, byte[] paramArrayOfByte) { System.out.println( "Received by RedisMessageListener: " + message.toString() ); } }

Now that we have our message listener, let's see how we could push some messages onto the channel using Redis. As always, it's pretty simple (note that the channel name must match the ChannelTopic the container subscribes to, "my-queue"):

@Autowired private RedisTemplate< String, Object > template; public void publish( final String message ) { template.execute( new RedisCallback< Long >() { @SuppressWarnings( "unchecked" ) @Override public Long doInRedis( RedisConnection connection ) throws DataAccessException { return connection.publish( ( ( RedisSerializer< String > )template.getKeySerializer() ).serialize( "my-queue" ), ( ( RedisSerializer< Object > )template.getValueSerializer() ).serialize( message ) ); } } ); }

That's basically it for a very quick introduction, but definitely enough to fall in love with Redis.
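As a quick aside, RedisTemplate also exposes a one-line alternative to the RedisCallback above through its convertAndSend() method. A minimal sketch, assuming the same template bean and the "my-queue" channel used by the listener container:

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
    // convertAndSend serializes the message with the configured value serializer
    // and publishes it to the given channel in a single call
    template.convertAndSend( "my-queue", message );
}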
January 17, 2013
by Andriy Redko
· 80,668 Views · 26 Likes
TaskletStep Oriented Processing in Spring Batch
Many enterprise applications require batch processing to process billions of transactions every day. These big transaction sets have to be processed without performance problems. Spring Batch is a lightweight and robust batch framework for processing these big data sets. Spring Batch offers ‘TaskletStep Oriented’ and ‘Chunk Oriented’ processing styles. In this article, the TaskletStep Oriented Processing Model is explained.

Let us investigate the fundamental Spring Batch components :

Job : An entity that encapsulates an entire batch process. Steps and Tasklets are defined under a Job.
Step : A domain object that encapsulates an independent, sequential phase of a batch job.
JobInstance : Batch domain object representing a uniquely identifiable job run – its identity is given by the pair Job and JobParameters.
JobParameters : Value object representing runtime parameters to a batch job.
JobExecution : A JobExecution refers to the technical concept of a single attempt to run a Job. An execution may end in failure or success, but the JobInstance corresponding to a given execution will not be considered complete unless the execution completes successfully.
JobRepository : An interface which is responsible for persistence of batch meta-data entities. In the following sample, an in-memory repository is used via MapJobRepositoryFactoryBean.
JobLauncher : An interface exposing a run method, which launches and controls the defined jobs.
Tasklet : An interface exposing an execute method, which will be called repeatedly until it either returns RepeatStatus.FINISHED or throws an exception to signal a failure. It is used when readers and writers are not required, as in the following sample.

Let us take a look at how to develop the TaskletStep Oriented Processing Model. Used Technologies : JDK 1.7.0_09, Spring 3.1.3, Spring Batch 2.1.9, Maven 3.0.4

STEP 1 : CREATE MAVEN PROJECT A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : LIBRARIES Firstly, dependencies are added to Maven's pom.xml. The spring.version property is set to 3.1.3.RELEASE and the spring-batch.version property to 2.1.9.RELEASE, and the dependencies are org.springframework:spring-core:${spring.version}, org.springframework:spring-context:${spring.version}, org.springframework.batch:spring-batch-core:${spring-batch.version} and log4j:log4j:1.2.16. The maven-compiler-plugin (org.apache.maven.plugins:maven-compiler-plugin:3.0, with source and target set to 1.7) is used to compile the project with JDK 1.7. The maven-shade-plugin (org.apache.maven.plugins:maven-shade-plugin:2.0, shade goal bound to the package phase) can be used to create a runnable jar, with com.onlinetechvision.exe.Application as the main class and the META-INF/spring.handlers and META-INF/spring.schemas resources carried over for Spring namespace support.

STEP 3 : CREATE SuccessfulStepTasklet TASKLET SuccessfulStepTasklet is created by implementing the Tasklet interface. It illustrates business logic in a successful step.
package com.onlinetechvision.tasklet; import org.apache.log4j.Logger; import org.springframework.batch.core.StepContribution; import org.springframework.batch.core.scope.context.ChunkContext; import org.springframework.batch.core.step.tasklet.Tasklet; import org.springframework.batch.repeat.RepeatStatus; /** * SuccessfulStepTasklet Class illustrates a successful job * * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public class SuccessfulStepTasklet implements Tasklet { private static final Logger logger = Logger.getLogger(SuccessfulStepTasklet.class); private String taskResult; /** * Executes SuccessfulStepTasklet * * @param StepContribution stepContribution * @param ChunkContext chunkContext * @return RepeatStatus * @throws Exception * */ @Override public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception { logger.debug("Task Result : " + getTaskResult()); return RepeatStatus.FINISHED; } public String getTaskResult() { return taskResult; } public void setTaskResult(String taskResult) { this.taskResult = taskResult; } } STEP 4 : CREATE FailedStepTasklet TASKLET FailedStepTasklet is created by implementing Tasklet Interface. It illustrates business logic in failed step. package com.onlinetechvision.tasklet; import org.apache.log4j.Logger; import org.springframework.batch.core.StepContribution; import org.springframework.batch.core.scope.context.ChunkContext; import org.springframework.batch.core.step.tasklet.Tasklet; import org.springframework.batch.repeat.RepeatStatus; /** * FailedStepTasklet Class illustrates a failed job. * * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public class FailedStepTasklet implements Tasklet { private static final Logger logger = Logger.getLogger(FailedStepTasklet.class); private String taskResult; /** * Executes FailedStepTasklet * * @param StepContribution stepContribution * @param ChunkContext chunkContext * @return RepeatStatus * @throws Exception * */ @Override public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception { logger.debug("Task Result : " + getTaskResult()); throw new Exception("Error occurred!"); } public String getTaskResult() { return taskResult; } public void setTaskResult(String taskResult) { this.taskResult = taskResult; } } STEP 5 : CREATE BatchProcessStarter CLASS BatchProcessStarter Class is created to launch the jobs. Also, it logs their execution results. A Completed Job Instance can not be restarted with the same parameter(s) because it already exists in job repository and JobInstanceAlreadyCompleteException is thrown with “A job instance already exists and is complete” description. It can be restarted with different parameter. In the following sample, different currentTime parameter is set in order to restart FirstJob. 
package com.onlinetechvision.spring.batch; import org.apache.log4j.Logger; import org.springframework.batch.core.Job; import org.springframework.batch.core.JobExecution; import org.springframework.batch.core.JobParametersBuilder; import org.springframework.batch.core.JobParametersInvalidException; import org.springframework.batch.core.launch.JobLauncher; import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException; import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException; import org.springframework.batch.core.repository.JobRepository; import org.springframework.batch.core.repository.JobRestartException; /** * BatchProcessStarter Class launches the jobs and logs their execution results. * * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public class BatchProcessStarter { private static final Logger logger = Logger.getLogger(BatchProcessStarter.class); private Job firstJob; private Job secondJob; private Job thirdJob; private JobLauncher jobLauncher; private JobRepository jobRepository; /** * Starts the jobs and logs their execution results. * */ public void start() { JobExecution jobExecution = null; JobParametersBuilder builder = new JobParametersBuilder(); try { builder.addLong("currentTime", new Long(System.currentTimeMillis())); getJobLauncher().run(getFirstJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString()); getJobLauncher().run(getSecondJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getSecondJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString()); getJobLauncher().run(getThirdJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getThirdJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString()); builder.addLong("currentTime", new Long(System.currentTimeMillis())); getJobLauncher().run(getFirstJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString()); } catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) { logger.error(e); } } public Job getFirstJob() { return firstJob; } public void setFirstJob(Job firstJob) { this.firstJob = firstJob; } public Job getSecondJob() { return secondJob; } public void setSecondJob(Job secondJob) { this.secondJob = secondJob; } public Job getThirdJob() { return thirdJob; } public void setThirdJob(Job thirdJob) { this.thirdJob = thirdJob; } public JobLauncher getJobLauncher() { return jobLauncher; } public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; } public JobRepository getJobRepository() { return jobRepository; } public void setJobRepository(JobRepository jobRepository) { this.jobRepository = jobRepository; } } STEP 6 : CREATE applicationContext.xml Spring Configuration file, applicationContext.xml, is created. It covers Tasklets and BatchProcessStarter definitions. STEP 7 : CREATE jobContext.xml Spring Configuration file, jobContext.xml, is created. Jobs’ flows are the following : FirstJob’ s flow : 1) FirstStep is started. 2) After FirstStep is completed with COMPLETED status, SecondStep is started. 3) After SecondStep is completed with COMPLETED status, ThirdStep is started. 
4) After ThirdStep is completed with COMPLETED status, FirstJob execution is completed with COMPLETED status. SecondJob’ s flow : 1) FourthStep is started. 2) After FourthStep is completed with COMPLETED status, FifthStep is started. 3) After FifthStep is completed with COMPLETED status, SecondJob execution is completed with COMPLETED status. ThirdJob’ s flow : 1) SixthStep is started. 2) After SixthStep is completed with COMPLETED status, SeventhStep is started. 3) After SeventhStep is completed with FAILED status, ThirdJob execution is completed FAILED status. FirstJob’ s flow is same with the first execution. STEP 8 : CREATE Application CLASS Application Class is created to run the application. package com.onlinetechvision.exe; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import com.onlinetechvision.spring.batch.BatchProcessStarter; /** * Application Class starts the application. * * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public class Application { /** * Starts the application * * @param String[] args * */ public static void main(String[] args) { ApplicationContext appContext = new ClassPathXmlApplicationContext("jobContext.xml"); BatchProcessStarter batchProcessStarter = (BatchProcessStarter)appContext.getBean("batchProcessStarter"); batchProcessStarter.start(); } } STEP 9 : BUILD PROJECT After OTV_SpringBatch_TaskletStep_Oriented_Processing Project is built, OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar will be created. STEP 10 : RUN PROJECT After created OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar file is run, the following console output logs will be shown : First Job’ s console output : 25.11.2012 21:29:19 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:19 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=0, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:19 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN; exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=firstJob.firstStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.firstStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [firstStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=1 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : First Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=1 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=1, version=3, name=firstStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.firstStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.secondStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [secondStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=2 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Second Task is executed... 
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=2 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=2, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=3 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=3, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=0, version=1, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:19 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:44) - JobExecution: id=0, version=2, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]] Second Job’ s console output : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=secondJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=1, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=secondJob.fourthStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fourthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fourthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=4 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fourth Task is executed... 
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=4, version=3, name=fourthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fourthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fifthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fifthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=5 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fifth Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=5, version=3, name=fifthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fifthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.end5 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.end5 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=1, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=secondJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:48) - JobExecution: id=1, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]] Third Job’ s console output : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=thirdJob]] launched with the following parameters: [{currentTime=1353878959462}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=2, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=thirdJob.sixthStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.sixthStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [sixthStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=6 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Sixth Task is executed... 
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=6, version=3, name=sixthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.sixthStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.seventhStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [seventhStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=7 25.11.2012 21:29:20 DEBUG (FailedStepTasklet.java:33) - Task Result : Error occurred! 25.11.2012 21:29:20 DEBUG (TaskletStep.java:456) - Rollback for Exception: java.lang.Exception: Error occurred! 25.11.2012 21:29:20 DEBUG (TransactionTemplate.java:152) - Initiating transaction rollback on application exception ... 25.11.2012 21:29:20 DEBUG (AbstractPlatformTransactionManager.java:821) - Initiating transaction rollback 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:54) - Rolling back resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@40874c04] 25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:291) - Handling exception: java.lang.Exception, caused by: java.lang.Exception: Error occurred! 25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:251) - Handling fatal exception explicitly (rethrowing first of 1): java.lang.Exception: Error occurred! 25.11.2012 21:29:20 ERROR (AbstractStep.java:222) - Encountered an error executing the step ... 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@66a7d863] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=7, version=2, name=seventhStep, status=FAILED, exitStatus=FAILED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=0, rollbackCount=1 25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@156f803c] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.seventhStep with status=FAILED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.fail8 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.fail8 with status=FAILED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=2, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=thirdJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [FAILED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:52) - JobExecution: id=2, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED; exitDescription=, job=[JobInstance: id=2, version=0, 
JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]] First Job’ s console output after restarting : 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{currentTime=1353878960660}] 25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=3, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=firstJob.firstStep with status=UNKNOWN 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.firstStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [firstStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=8 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : First Task is executed... 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=8 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=8, version=3, name=firstStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.firstStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.secondStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [secondStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=9 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Second Task is executed... 25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=9 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=9, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep 25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=10 25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed... 
25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING] 25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=10 25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=10, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3 25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED 25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=3, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] 25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878960660}] and the following status: [COMPLETED] 25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:57) - JobExecution: id=3, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]] STEP 11 : DOWNLOAD https://github.com/erenavsarogullari/OTV_SpringBatch_TaskletStep REFERENCES : Spring Batch – Reference Documentation Spring Batch – API Documentation
January 17, 2013
by Eren Avsarogullari
· 21,677 Views · 1 Like
Changes to String.substring in Java 7
This post was originally published as part of the Java Advent series. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the Java Advent blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute! It is common knowledge that Java optimizes the substring operation for the case where you generate a lot of substrings of the same source string. It does this by using the (value, offset, count) way of storing the information. In the diagram from the original post you can see the strings "hello" and "world!" derived from "hello world!" and the way they are represented in the heap: there is one character array containing "hello world!" and two references to it. This method of storage is advantageous in some cases, for example for a compiler which tokenizes source files. In other instances it may lead you to an OutOfMemoryError (if you are routinely reading long strings and only keeping a small part of them, the above mechanism prevents the GC from collecting the original string buffer). Some even call it a bug. I wouldn't go so far, but it's certainly a leaky abstraction, because you were forced to do the following to ensure that a copy was made: new String(str.substring(5, 6)). This all changed in May of 2012, with Java 7u6. The pendulum has swung back and now full copies are made by default. What does this mean for you? For most, it is probably just a nice piece of Java trivia. If you are writing parsers and such, you can no longer rely on the implicit caching provided by String; you will need to implement a similar mechanism based on buffering and a custom implementation of CharSequence. If you were doing new String(str.substring(...)) to force a copy of the character buffer, you can stop as soon as you update to the latest Java 7 (and you need to do that quite soon, since Java 6 is being EOLed as we speak). Thankfully the development of Java is an open process and such information is at the fingertips of everyone! A couple more references (since we don't say pointers in Java) related to strings: if you are storing the same string over and over again (maybe you're parsing messages from a socket, for example), you should read up on alternatives to String.intern() (and also consider reading Item 50 from the second edition of Effective Java: avoid strings where other types are more appropriate); look into (and do benchmarks before using them!) options like UseCompressedStrings (which seems to have been removed), UseStringCache and StringCache. Hope I didn't string you along too much and you found this useful! Until next time - Attila Balazs. Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!
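To make the practical impact concrete, here is a minimal sketch in plain Java (the class name and sizes are illustrative, not from the original post). It shows the defensive-copy idiom that was needed before Java 7u6, and a simple CharSequence-based view of the kind a parser might use now that substring always copies:

import java.nio.CharBuffer;

public class SubstringCopyDemo {

    public static void main(String[] args) {
        // A large source string: one million 'x' characters.
        String big = new String(new char[1000000]).replace('\0', 'x');

        // Before Java 7u6, substring shared big's backing char[] array, so keeping
        // only "tiny" could still pin the whole 1,000,000-char buffer in memory;
        // wrapping it in new String(...) was the idiom to force a real copy.
        String tiny = new String(big.substring(0, 5));

        // From Java 7u6 onwards, substring copies by default, so the extra
        // new String(...) above is redundant.
        String alsoTiny = big.substring(0, 5);

        // Parsers that relied on the old sharing behaviour can use a cheap view
        // such as CharBuffer.wrap, which implements CharSequence without copying.
        CharSequence view = CharBuffer.wrap(big, 10, 15);

        System.out.println(tiny + " " + alsoTiny + " " + view.length());
    }
}

Which behaviour you actually get depends on the exact JDK build you run, which is precisely why code that needs one behaviour or the other should not rely on String internals at all.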
January 16, 2013
by Attila-Mihaly Balazs
· 10,746 Views
Building a Game With JavaScript: Start Screen
This is a continuation from the previous post. Specification Many games have a start screen or main menu of some sort. (Though I love games like Braid that bypass the whole notion.) Let’s begin by designing our start screen. We’ll have a solid color background. Perhaps the ever lovely cornflower blue. Then we’ll draw the name of our game and provide an instruction to the player. In order to make sure we have the player’s attention, we’ll animate the color of the instruction. It will morph from black to red and back again. Finally, when the player clicks the screen we’ll transition to the main game. Or at least we’ll stub out the transition. Here’s a demo based on the code we’ll cover later in this post (as well as that from the previous post.) Implementation Here’s the code to implement our start screen. // `input` will be defined elsewhere, it's a means // for us to capture the state of input from the player var startScreen = (function(input) { // the red component of rgb var hue = 0; // are we moving toward red or black? var direction = 1; var transitioning = false; // record the input state from last frame // because we need to compare it in the // current frame var wasButtonDown = false; // a helper function // used internally to draw the text in // in the center of the canvas (with respect // to the x coordinate) function centerText(ctx, text, y) { var measurement = ctx.measureText(text); var x = (ctx.canvas.width - measurement.width) / 2; ctx.fillText(text, x, y); } // draw the main menu to the canvas function draw(ctx, elapsed) { // let's draw the text in the middle of the canvas // note that it's ineffecient to calculate this // in every frame since it never changes // however, I leave it here for simplicity var y = ctx.canvas.height / 2; // create a css color from the `hue` var color = 'rgb(' + hue + ',0,0)'; // clear the entire canvas // (this is not strictly necessary since we are always // updating the same pixels for this screen, however it // is generally what you need to do.) ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // draw the title of the game // this is static and doesn't change ctx.fillStyle = 'white'; ctx.font = '48px monospace'; centerText(ctx, 'My Awesome Game', y); // draw instructions to the player // this animates the color based on the value of `hue` ctx.fillStyle = color; ctx.font = '24px monospace'; centerText(ctx, 'click to begin', y + 30); } // update the color we're drawing and // check for input from the user function update() { // we want `hue` to oscillate between 0 and 255 hue += 1 * direction; if (hue > 255) direction = -1; if (hue < 0) direction = 1; // note that this logic is dependent on the frame rate, // that means if the frame rate is slow then the animation // is slow. // we could make it indepedent on the frame rate, but we'll // come to that later. // here we magically capture the state of the mouse // notice that we are not dealing with events inside the game // loop. // we'll come back to this too. var isButtonDown = input.isButtonDown(); // we want to know if the input (mouse click) _just_ happened // that means we only want to transition away from the menu to the // game if there _was_ input on the last frame _but none_ on the // current one. 
var mouseJustClicked = !isButtonDown && wasButtonDown; // we also check the value of `transitioning` so that we don't // initiate the transition logic more than once (like if the player // clicked the mouse repeatedly before we finished transitioning) if (mouseJustClicked && !transitioning) { transitioning = true; // do something here to transition to the actual game } // record the state of input for use in the next frame wasButtonDown = isButtonDown; } // this is the object that will be `startScreen` return { draw: draw, update: update }; }());

Explanation
Recall that our start screen is meant to be invoked by our game loop. The game loop doesn't know about the specifics of the start screen, but it does expect it to have a certain shape. This enables us to swap out screen objects without having to modify the game loop itself. The shape that the game loop expects is this: { update: function(timeElapsedSinceLastFrame) { }, draw: function(drawingContext) { } }

Update
Let's begin with the start screen's update function. The first bit of logic is this: hue += 1 * direction; if (hue > 255) direction = -1; if (hue < 0) direction = 1; Perhaps hue is not the best choice of variable name. It represents the red component of an RGB color value. The range of values for this component is 0 (no red) to 255 (all the reds!). On each iteration of our loop we “move” the hue towards either red or black. The variable direction can be either 1 or -1. A value of 1 means we are moving towards 255 and a value of -1 means we are moving towards 0. When we cross a boundary, we flip the direction. Keen observers will ask why we bother with 1 * direction. In our current logic, it's an unnecessary step, and unnecessary steps in game development are generally bad. In this case, I wanted to separate the rate of change from the direction. In other words, you could modify that expression to 2 * direction and the color would change twice as fast. This leads us to another important point. Our rate of change is tied to how quickly our loop iterates; most likely 60fps. However, it's not guaranteed to be 60fps, and that makes this approach a dangerous practice. One way to detach ourselves from the loop's speed would be to use the elapsed time that is being passed into our update function. Let's say that we want it to take 2 full seconds to go from red to black regardless of how often the update function is called. There's a span of 256 discrete values between red and black. To make our calculations clear, let's say there are 256 units and we'll label these units R. Also, the elapsed time will be in milliseconds (ms). For a given frame, if we are given a slice of elapsed time in ms, we'll want to calculate how many R units to increase (or decrease) hue by for that slice. Our rate of change can be defined as 256 R / 2000 ms, or 0.128 R/ms. (You can read that as “0.128 units of red per millisecond”.) This rate of change is a constant for our start screen and as such we can define it once (as opposed to calculating it inside the update function). Now that we have the rate of change, we only need to multiply it by the elapsed time received in update to determine how many Rs we want. 
A revised version of the function would look like this: var rate = 0.128; // R/ms function update(elapsed) { var amount = rate * elapsed; hue += amount * direction; if (hue > 255) direction = -1; if (hue < 0) direction = 1; } One consequence of this change is that hue will no longer hold integral values (as much as that can be said in JavaScript.) This means that we'd really want to have two values for the hue: an actual value and a rounded value. This is because the RGB model requires an integral value for each color component. function update(elapsed) { var amount = rate * elapsed; hue += amount * direction; if (hue > 255) direction = -1; if (hue < 0) direction = 1; rounded_hue = Math.round(hue); }

Draw
Let's turn our attention to draw for a moment. One of the first things you generally do is to clear the entire screen. This is simple to do with the canvas API's clearRect method. ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); Notice that ctx is an instance of CanvasRenderingContext2D and not an HTMLCanvasElement. However, there is a handy back reference to the canvas element that we use to grab the actual width and height. There are options other than clearing the entire canvas, but I'm not going to address them in this post. Also, there are some performance considerations. See the article listed under references. After clearing the screen, we want to draw something new. In this case, the game title and the instructions. In both cases I want to center the text horizontally. I created a helper function that I can provide with the text to render as well as the vertical position (y). function centerText(ctx, text, y) { var measurement = ctx.measureText(text); var x = (ctx.canvas.width - measurement.width) / 2; ctx.fillText(text, x, y); } measureText returns the width in pixels that the rendered text will take up. We use this in combination with the canvas element's width to determine the x position for the text. fillText is responsible for actually drawing the text. The rendering context ctx is stateful, meaning that what happens when you call methods like measureText or fillText depends on the state of the rendering context. The state can be modified by setting its properties. var y = ctx.canvas.height / 2; ctx.fillStyle = 'white'; ctx.font = '48px monospace'; centerText(ctx, 'My Awesome Game', y); The properties fillStyle and font change the state of the rendering context and hence affect the method calls inside of centerText. This state applies to all future method calls. This means that all calls to fillText will use the color white until you change the fillStyle. Notice too that we are calculating the x and y values for the text on every frame. This is potentially wasteful since these values are unlikely to change. However, if we want to respond to changes in canvas size (or even changes to the text itself) then we'd want to continue calculating these on every frame. Otherwise, if we were confident that we didn't need to do this, we could calculate these values once and cache them. Now let's use the red component calculated in update to render the instructional text. var color = 'rgb(' + hue + ',0,0)'; ctx.fillStyle = color; ctx.font = '24px monospace'; centerText(ctx, 'click to begin', y + 30); fillStyle can be set in a number of ways. Earlier, we used the simple value white. Here we are using rgb() to set the individual components explicitly. Any CSS color should work with fillStyle. (I won't be too surprised if some don't though.) 
Now you might be wondering why we bothered calculating hue inside update since hue is all about what to draw on the screen. The reason is that draw is concerned with the mechanics of rendering. Anything that is modeling the game state should live in update. The tell in this example is that hue is dependent on elapsed time and the draw doesn’t know anything about that. Update (again) Moving back to update, the next bit deals with input from the player. In the sample code I’ve extracted the input logic away. The key thing here is that we are not relying on events to tell us about input from the player. Instead we have some helper, input in this case, that gives us the current state of the input. If event-driven logic says “tell me when this happens” then our game logic says “tell me if this is happening now”. The primary reason for this is to be deterministic. We can establish at the beginning of our update what the current input state is and that it won’t change before the next invocation of the function. In simple games this might be inconsequential, but in others it can be a subtle source of bugs. var isButtonDown = input.isButtonDown(); var mouseJustClicked = !isButtonDown && wasButtonDown; if (mouseJustClicked && !transitioning) { transitioning = true; // do something here to transition to the actual game } wasButtonDown = isButtonDown; We only want transition when the mouse button has been released. In this case, “released” is defined as “down on the last frame but up on this one”. Hence, we need to track what the mouse button’s state was on the last frame. That’s wasButtonDown and it lives outside of update. Secondly, we don’t want to trigger multiple transitions. That is, if our transition takes some time (perhaps due to animation) then we want to ignore subsequent clicks. We have our transitioning variable outside of update to track that for us. More to come… Update: I just realized that I didn't include a shim for requestAnimationFrame for the demo on jsfiddle. That means the demo will fail on many browsers. (Of course, it will also fail if there's no canvas support either.) If it doesn't work, check your console for errors.
January 15, 2013
by Christopher Bennage
· 9,077 Views
@Cacheable overhead in Spring
Spring 3.1 introduced a great caching abstraction layer. Finally we can abandon all the home-grown aspects, decorators and caching-related code polluting our business logic. Since then we can simply annotate heavyweight methods and let Spring and the AOP machinery do the work: @Cacheable("books") public Book findBook(ISBN isbn) {...} "books" is the cache name, the isbn parameter becomes the cache key and the returned Book object will be placed under that key. The meaning of the cache name is dependent on the underlying cache manager (EhCache, concurrent map, etc.) - Spring makes it easy to plug in different caching providers. But this post won't be about the caching feature in Spring... Some time ago my teammate was optimizing quite low-level code and discovered an opportunity for caching. He quickly applied @Cacheable just to discover that the code performed worse than it used to. He got rid of the annotation and implemented caching himself manually, using good old java.util.ConcurrentHashMap. The performance was much better. He blamed @Cacheable and Spring AOP overhead and complexity. I couldn't believe that a caching layer could perform so poorly until I had to debug the Spring caching aspects a few times myself (some nasty bug in my code; you know, cache invalidation is one of the two hardest things in CS). Well, the caching abstraction code is much more complex than one would expect (after all, it's just get and put!), but does that necessarily mean it must be that slow? In science we don't believe and trust, we measure and benchmark. So I wrote a benchmark to precisely measure the overhead of the @Cacheable layer. The caching abstraction layer in Spring is implemented on top of Spring AOP, which can in turn be implemented on top of Java proxies, CGLIB-generated subclasses or AspectJ instrumentation. Thus I'll test the following configurations:
- no caching at all - to measure how fast the code is with no intermediate layer
- manual cache handling using ConcurrentHashMap in the business code
- @Cacheable with CGLIB implementing AOP
- @Cacheable with java.lang.reflect.Proxy implementing AOP
- @Cacheable with AspectJ compile-time weaving (as a similar benchmark shows, CTW is slightly faster than LTW)
- a home-grown AspectJ caching aspect - something between manual caching in the business code and the Spring abstraction
Let me reiterate: we are not measuring the performance gain of caching and we are not comparing various cache providers. That's why our test method is as fast as it can be and I will be using the simplest ConcurrentMapCacheManager from Spring. So here is the method in question: public interface Calculator { int identity(int x); } public class PlainCalculator implements Calculator { @Cacheable("identity") @Override public int identity(int x) { return x; } } I know, I know, there is no point in caching such a method. But I want to measure the overhead of the caching layer (during a cache hit, to be specific). 
Each caching configuration will have its own ApplicationContext as you can't mix different proxying modes in one context: public abstract class BaseConfig { @Bean public Calculator calculator() { return new PlainCalculator(); } } @Configuration class NoCachingConfig extends BaseConfig {} @Configuration class ManualCachingConfig extends BaseConfig { @Bean @Override public Calculator calculator() { return new CachingCalculatorDecorator(super.calculator()); } } @Configuration abstract class CacheManagerConfig extends BaseConfig { @Bean public CacheManager cacheManager() { return new ConcurrentMapCacheManager(); } } @Configuration @EnableCaching(proxyTargetClass = true) class CacheableCglibConfig extends CacheManagerConfig {} @Configuration @EnableCaching(proxyTargetClass = false) class CacheableJdkProxyConfig extends CacheManagerConfig {} @Configuration @EnableCaching(mode = AdviceMode.ASPECTJ) class CacheableAspectJWeaving extends CacheManagerConfig { @Bean @Override public Calculator calculator() { return new SpringInstrumentedCalculator(); } } @Configuration @EnableCaching(mode = AdviceMode.ASPECTJ) class AspectJCustomAspect extends CacheManagerConfig { @Bean @Override public Calculator calculator() { return new ManuallyInstrumentedCalculator(); } } Each @Configuration class represents one application context. CachingCalculatorDecorator is a decorator around the real calculator that does the caching (welcome to the 1990s): public class CachingCalculatorDecorator implements Calculator { private final Map<Integer, Integer> cache = new java.util.concurrent.ConcurrentHashMap<Integer, Integer>(); private final Calculator target; public CachingCalculatorDecorator(Calculator target) { this.target = target; } @Override public int identity(int x) { final Integer existing = cache.get(x); if (existing != null) { return existing; } final int newValue = target.identity(x); cache.put(x, newValue); return newValue; } } SpringInstrumentedCalculator and ManuallyInstrumentedCalculator are exactly the same as PlainCalculator but they are instrumented by the AspectJ compile-time weaver with the Spring aspect and my custom aspect respectively. My custom caching aspect looks like this: public aspect ManualCachingAspect { private final Map<Integer, Integer> cache = new ConcurrentHashMap<Integer, Integer>(); pointcut cacheMethodExecution(int x): execution(int com.blogspot.nurkiewicz.cacheable.calculator.ManuallyInstrumentedCalculator.identity(int)) && args(x); Object around(int x): cacheMethodExecution(x) { final Integer existing = cache.get(x); if (existing != null) { return existing; } final Object newValue = proceed(x); cache.put(x, (Integer)newValue); return newValue; } } After all this preparation we can finally write the benchmark itself. At the beginning I start all the application contexts and fetch the Calculator instances. Each instance is different. For example, noCaching is a PlainCalculator instance with no wrappers, cacheableCglib is a CGLIB-generated subclass while aspectJCustom is an instance of ManuallyInstrumentedCalculator with my custom aspect woven in. 
private final Calculator noCaching = fromSpringContext(NoCachingConfig.class); private final Calculator manualCaching = fromSpringContext(ManualCachingConfig.class); private final Calculator cacheableCglib = fromSpringContext(CacheableCglibConfig.class); private final Calculator cacheableJdkProxy = fromSpringContext(CacheableJdkProxyConfig.class); private final Calculator cacheableAspectJ = fromSpringContext(CacheableAspectJWeaving.class); private final Calculator aspectJCustom = fromSpringContext(AspectJCustomAspect.class); private static Calculator fromSpringContext(Class<?> config) { return new AnnotationConfigApplicationContext(config).getBean(Calculator.class); } I'm going to exercise each Calculator instance with the following test. The additional accumulator is necessary, otherwise the JVM might optimize away the whole loop (!): private int benchmarkWith(Calculator calculator, int reps) { int accum = 0; for (int i = 0; i < reps; ++i) { accum += calculator.identity(i % 16); } return accum; } Here is the full Caliper test without the parts already discussed: public class CacheableBenchmark extends SimpleBenchmark { //... public int timeNoCaching(int reps) { return benchmarkWith(noCaching, reps); } public int timeManualCaching(int reps) { return benchmarkWith(manualCaching, reps); } public int timeCacheableWithCglib(int reps) { return benchmarkWith(cacheableCglib, reps); } public int timeCacheableWithJdkProxy(int reps) { return benchmarkWith(cacheableJdkProxy, reps); } public int timeCacheableWithAspectJWeaving(int reps) { return benchmarkWith(cacheableAspectJ, reps); } public int timeAspectJCustom(int reps) { return benchmarkWith(aspectJCustom, reps); } } I hope you are still following our experiment. We are now going to execute Calculator.identity() millions of times and see which caching configuration performs best. Since we only call identity() with 16 different arguments, we hardly ever touch the method itself as we always get a cache hit. Curious to see the results?

benchmark                        ns  linear runtime
NoCaching                      1.77  =
ManualCaching                 23.84  =
CacheableWithCglib          1576.42  ==============================
CacheableWithJdkProxy       1551.03  =============================
CacheableWithAspectJWeaving 1514.83  ============================
AspectJCustom                 22.98  =

Interpretation
Let's go step by step. First of all, calling a method in Java is pretty darn fast! 1.77 nanoseconds - we are talking about 3 CPU cycles here on my Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00GHz! If this doesn't convince you that Java is fast, I don't know what will. But back to our test. The hand-made caching decorator is also pretty fast. Of course it's slower by an order of magnitude compared to a pure function call, but still blazingly fast compared to all the @Cacheable benchmarks. We see a drop of 3 orders of magnitude, from 1.8 ns to 1.5 μs. I'm especially disappointed by @Cacheable backed by AspectJ. After all, the caching aspect is precompiled directly into my Java .class file; I would expect it to be much faster compared to dynamic proxies and CGLIB. But that doesn't seem to be the case. All three Spring AOP techniques are similar. The greatest surprise is my custom AspectJ aspect. It's even faster than CachingCalculatorDecorator! Maybe it's due to the polymorphic call in the decorator? I strongly encourage you to clone this benchmark on GitHub and run it (mvn clean test, takes around 2 minutes) to compare your results.

Conclusions
You might be wondering why the Spring abstraction layer is so slow. 
Well, first of all, check out the core implementation in CacheAspectSupport - it's actually quite complex. Secondly, is it really that slow? Do the math - you typically use Spring in business applications where the database, network and external APIs are the bottleneck. What latencies do you typically see? Milliseconds? Tens or hundreds of milliseconds? Now add an overhead of 2 μs (worst-case scenario). For caching database queries or REST calls this is completely negligible. It doesn't matter which technique you choose. But if you are caching very low-level methods close to the metal, like CPU-intensive, in-memory computations, the Spring abstraction layer might be overkill. The bottom line: measure! PS: both the benchmark and the contents of this article in Markdown format are freely available.
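To put that conclusion into code form, here is a minimal, hypothetical sketch (the class name, cache name and timings are my own illustration, not part of the benchmark project; it assumes an application context with @EnableCaching and a CacheManager configured, as in the configurations above). It caches a lookup that costs milliseconds, where the microsecond-level overhead of the abstraction disappears into the noise:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CountryNameRepository {

    // Stands in for a database or REST lookup costing ~20 ms (20,000 μs).
    // Against that, the ~1.5-2 μs worst-case overhead measured above is about 0.01%.
    @Cacheable("countryNames")
    public String findNameByIsoCode(String isoCode) {
        try {
            Thread.sleep(20); // simulated I/O latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "country-" + isoCode;
    }
}

For a method like this, any of the proxying modes is fine; it is only for nanosecond-scale, in-memory computations that the manual decorator or a hand-written aspect pays off.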
January 15, 2013
by Tomasz Nurkiewicz
· 29,262 Views · 2 Likes
An Introduction to STOMP
STOMP (Simple (or Streaming) Text-Oriented Messaging Protocol) is a simple text-oriented protocol, similar to HTTP. STOMP provides an interoperable wire format that allows clients to communicate with almost every available message broker. STOMP is easy to implement and gives you flexibility, since it is language-agnostic: clients and brokers developed in different languages can send and receive messages to and from each other. There are lots of server implementations that support STOMP (mostly compliant with the STOMP 1.0 specification). The following is a list of STOMP 1.0 compliant message servers:
- Apache ActiveMQ - http://activemq.apache.org
- Apache Apollo - http://activemq.apache.org/apollo
- CoilMQ - http://code.google.com/p/coilmq
- Gozirra - http://www.germane-software.com/software/Java/Gozirra
- HornetQ - http://www.jboss.com/hornetq
- MorbidQ - http://www.morbidq.com
- RabbitMQ - http://www.rabbitmq.com/plugins.html#rabbitmq-stomp
- Sprinkle - http://www.thuswise.org/sprinkle/index.html
- StompServer - http://stompserver.rubyforge.org/
On the client side, there are many implementations for a vast number of technologies. Below, you will find the libraries available for the most popular languages:
- C: libstomp - http://stomp.codehaus.org/C
- C++: Apache CMS - http://activemq.apache.org/cms/
- C# and .Net: Apache NMS - http://activemq.apache.org/nms/
- Flash: as3-stomp - http://code.google.com/p/as3-stomp/
- Java: Gozirra - http://www.germane-software.com/software/Java/Gozirra/
- Objective-C: objc-stomp - https://github.com/juretta/objc-stomp
- Perl: Net::Stomp::Client - http://search.cpan.org/dist/Net-STOMP-Client/ and Net::Stomp - http://search.cpan.org/dist/Net-Stomp/
- PHP: stomp - http://www.php.net/manual/en/book.stomp.php and stomp-php - http://stomp.fusesource.org/documentation/php/book.html
- Python: stomper - https://github.com/oisinmulvihill/stomper and stomp.py - http://code.google.com/p/stomppy/
- Ruby on Rails: stomp gem - https://rubygems.org/gems/stomp and activemessaging - http://code.google.com/p/activemessaging/
STOMP is an alternative to other open messaging protocols such as AMQP (Advanced Message Queuing Protocol) and to the implementation-specific wire protocols used in JMS (Java Message Service) brokers, such as OpenWire. It distinguishes itself by covering a small subset of commonly used messaging operations (commands) rather than providing a comprehensive messaging API. Because STOMP is language-agnostic and easy to implement, it is popular with application developers and technical architects alike. STOMP is also text-based: since it does not use a binary wire protocol like other message brokers, a wide range of client technologies work with STOMP, such as Ruby, Python, and Perl. STOMP is simple to implement, but supports a wide range of core enterprise messaging features, such as authentication, messaging models (point-to-point and publish-subscribe), message acknowledgement, transactions, message headers and properties, etc.
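Because the wire format is plain text, a client can be sketched with nothing more than a socket. The following minimal Java example is illustrative only: the port (61613), the guest credentials and the /queue/test destination are assumptions for a local STOMP 1.0 broker such as ActiveMQ, not details from the article.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawStompExample {

    public static void main(String[] args) throws Exception {
        // Assumes a STOMP 1.0 broker listening on localhost:61613.
        try (Socket socket = new Socket("localhost", 61613)) {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            // Every frame is a command, optional headers, a blank line,
            // an optional body and a trailing NUL byte - all plain text.
            String connect = "CONNECT\nlogin:guest\npasscode:guest\n\n\0";
            out.write(connect.getBytes(StandardCharsets.UTF_8));
            out.flush();

            // Read the CONNECTED frame the broker sends back (naive read, for illustration).
            byte[] buffer = new byte[1024];
            int read = in.read(buffer);
            if (read > 0) {
                System.out.println(new String(buffer, 0, read, StandardCharsets.UTF_8));
            }

            // Publish a message to a queue, then disconnect.
            String send = "SEND\ndestination:/queue/test\n\nhello from raw STOMP\0";
            out.write(send.getBytes(StandardCharsets.UTF_8));
            String disconnect = "DISCONNECT\n\n\0";
            out.write(disconnect.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}

In practice you would use one of the client libraries listed above rather than hand-rolling frames, but the sketch shows why implementing STOMP in a new language is such a small job.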
January 15, 2013
by Marcelo Jabali
· 34,764 Views · 3 Likes