The Latest Coding Topics

Immutability Through Interfaces
It is often desirable to have immutable objects: objects that cannot be modified once constructed. Typically, an immutable object has fields that are declared final and set in the constructor. There are getters for the fields, but no setters, since the values cannot be changed. Once created, the object's state doesn't change, so the object can be shared safely across different threads. There are plenty of caveats to that statement; for example, if the object holds a list, the list field reference may be final while the list itself can still have values added and removed, which would spoil the immutability. Achieving immutability can often be difficult because we rely on a number of tools and frameworks that may not support it, such as any framework that builds an object by creating it and then setting values on it. One way around this is to take a mutable object and make it immutable through interfaces: we create an interface that represents all the getters for an object, but none of the setters. Given a domain entity for a person with id, first name and last name fields, we can define an immutable interface:

public interface ImmutablePerson {
    public int getId();
    public String getFirstName();
    public String getLastName();
}

We can then implement this interface in our person class:

public class PersonEntity implements ImmutablePerson {

    private int id;
    private String firstName;
    private String lastName;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

Our API deals with the PersonEntity class internally, but exposes the object only as an ImmutablePerson:

public ImmutablePerson loadPerson(int id) {
    PersonEntity result = someLoadingMechanism.getPerson(id);
    result.setxxxxx(somevalue); // we can change it internally
    return result; // once it has been returned it is only accessed as an immutable person
}

To return a mutable instance for modification, you can return the PersonEntity. Alternatively, you can add a MutablePerson interface with just the setters and implement that. Unless you want to keep casting between the immutable and mutable types, the MutablePerson interface can extend the ImmutablePerson interface so the writable version is also readable:

public interface MutablePerson extends ImmutablePerson {
    public void setId(int id);
    public void setFirstName(String firstName);
    public void setLastName(String lastName);
}

Granted, someone can easily cast the object to a PersonEntity or MutablePerson and change the values, but then you can also change static final variables with reflection if you want to. The ability to cast the object to make it writable could be treated as an internal-use-only feature. This mechanism can be used to return immutable objects from persistence layers, although there are caveats with certain ORM tools whose lazy-loading mechanisms would break the immutability. Underneath the interface it is still a mutable object. With some additional code, you could create immutable collections by wrapping the underlying collections in read-only collections and returning those to the user. Note that the JVM benefits granted to variables marked final would not apply to this type of solution, since the underlying variables are neither final nor truly immutable.
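The read-only collection wrapping mentioned above can be done with the JDK's Collections utilities. Below is a minimal sketch of that idea; the TaggedEntity class and its methods are hypothetical names for illustration and are not part of the Person example:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TaggedEntity {

    // The reference is final, but the list contents could still be mutated internally.
    private final List<String> tags = new ArrayList<String>();

    // Internal code may add values while the object is being built.
    void addTag(String tag) {
        tags.add(tag);
    }

    // Callers only ever receive a read-only view; any attempt to modify it
    // throws UnsupportedOperationException, so the exposed state stays fixed.
    public List<String> getTags() {
        return Collections.unmodifiableList(tags);
    }
}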
November 15, 2014
by Andy Gibson
· 13,593 Views · 1 Like
Using Eclipse's Link Source Feature
NOTE: Apparently a bundle with a linked source will not be exported or built in an update site build. Also, Tycho will complain that it can't find linked sources, which severely limits the possibilities. A workaround is to export the bundles as plain old jars (this works fine for some reason), but the situation is far from ideal. See these bug reports:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=457192
https://bugs.eclipse.org/bugs/show_bug.cgi?id=66177

Introduction

I've been using Eclipse for more than ten years now, so I like to think I know my way around its offerings, but every now and then I am pleasantly surprised by discovering a feature (which usually has been there all along) that I have finally made the time to investigate. In this case, I am talking about the 'Link Source' feature in the Project Properties tab. Most experienced Eclipse users will at some point wander through the project properties, for instance when certain libraries are not found by the compiler, or when a plugin project starts to behave unexpectedly. The Project Properties tab comes into play when the Manifest.MF file no longer provides the answers for certain problems you face and you need to delve deeper into the classpath and project settings. It also becomes relevant when you need to make a custom project.

At Project Chaupal we are currently maintaining and updating the code from Project JXTA. JXTA has been around for quite a while in the open source community, and its development has had its ups and downs, so the code could do with a makeover here and there. I've been involved with keeping the code available in OSGI since 2006 or so (also with its ups and downs), and one of my ideals would be to automatically generate the OSGI bundles straight from the JXTA sources, without any handwork. The JXTA jar ships with a large number of third-party libraries (e.g. Jetty and Log4j), some of which are available as OSGI bundles, so I don't want to include them in the JXTA OSGI bundle I make; a list of dependencies in the Manifest should be enough! Some of the third-party libraries also aim to provide the same functionality (e.g. the database functionality provided by Derby and H2), so I would prefer to divide this over two bundles and then just select the bundle that is needed. Ever since JXTA 2.6, the code has been mavenised, so it now conforms to the typical structure that Maven requires, with specific locations for Java code (src/main/java), resources (src/main/resources) and tests (src/main/test). I prefer to use Tycho for my OSGI bundles, so the regular Eclipse tooling is leading. As a result, my goals are to:

- Create separate bundles for the core and the test source code
- Add the required resources, such as .properties files and the like
- Split the core source code over different bundles, so that every bundle depends on one third-party library at the most

It took me a day or two to get everything the way I wanted it, but in the end it was surprisingly easy, so I thought I'd share the experience. This tutorial assumes that you are well-versed in Eclipse and OSGI development. If not, a good tutorial on the subject can be found here.

Preparation

As described earlier, the plan is to use a Maven project (available on GitHub) as a source for a number of OSGI bundles. For starters, we need to do the following: prepare an Eclipse IDE with EGit and, optionally, with support for GitHub. As always, Lars Vogel's tutorial provides an excellent guide to achieving this.
With the GitHub support you can actually search for the required repository, and clone it in your workspace within minutes. Add Maven Support for Eclipse. The tutorial can be found here. We now have an Eclipse IDE with one project loaded in the workspace, which conforms to the Maven structure. Now we can start to do the magic! Extracting Source Files in an OSGI Project First create a new plugin project, using the wizard (File → new → Plugin Project). Fill in the required details as requested (target platform→OSGI framework) and press 'finish'. We now have a standard textbook OSGI bundle project. For the sake of argument, let's call this bundle org.mybundle.host. Now we are going to add java source files and resources from the Maven project: open the project properties tab (right mouse click → properties) select the 'Java build path' option and choose the 'source' tab press 'Link Source' and browse to the 'main/java' subfolder of the Maven project. Close the project properties Include the source folder in the build.properties file and clean the workspace Update the Manifest to include the required dependencies, and export the packages as needed As you can see, the java files have been included in the bundle project, and will compile in a normal fashion. TIP: Currently the source file will by default have the same name as the folder. You can change this in the 'link source' wizard. For instance, you can delete the 'src' folder that is created by default, and replace it with the linked source if you want. This should only be considered if you are not going to make specific java files in the bundle. Next we are going to include the resources, such as .properties files that are included in the Maven project. As an exercise, we will exclude all html files that may be included. Open the project properties and follow the steps described previously, but now select the main/resources folder. Then press the 'next' tab, instead of closing You can now select which files to include or exclude in your bundle. Select the 'exclude' tab, and enter the following pattern: **/*.htm*. Close the project properties, update the build path and clean the project We have now included the desired resources, and in principle the bundle should now work as desired. With the include and exclude tabs, you can determine which files and folders you want to add to your bundle. The inclusion and exclusion patterns follow the conventions used by Apache ANT. TIP: You can check if the bundle has the correct source and resource files by opening a file explorer (in Windows) and browsing to the 'bin' folder of your bundle project. If you first refresh your bundle project (F5 in the Eclipse IDE), the correct class and resource files should be present there. NOTE: Although I would not recommend it, it is possible to link the sources of multiple non-OSGI code sources this way. Even though the folders need different names, they will be built as if they are one source folder. Now we create a second plugin project for the test files. We will call this bundle org.mybundle.test Open the project properties and follow the steps described previously, but now select the main/test folder. If required, you can exclude certain tests in the 'exclude' tab Close the project properties, update the build path and clean the project In the manifest editor, include the dependencies to org.mybundle.host, and for instance JUnit and JMock. 
When there are no more compiler errors, your two bundles should behave as regular OSGI bundles, with the only difference that the sources are extracted from the Maven project. TIP: It is also possible to make fragment bundles this way, and you can include library resources in your bundle (such as third-party jars). This way you can restructure a non-OSGI project at will.

Using Variables

When you store your projects in the cloud, such as on GitHub, you may often find multiple versions of your workspace scattered over different computers, and your repositories stored on different drives. This means that linking your sources with absolute paths, as we have done previously, is not a very versatile approach. Linking in particular may become problematic, as the linked source project (e.g. the Maven project) can be stored in a different location than the project that uses the source files. Luckily, Eclipse allows you to define variables in your project, which can help either to standardise the relative locations or, if this is not possible, to easily modify the links. In order to achieve this, follow these steps:

- Select the project properties, and select the 'Java Build Path' option.
- Add or edit a link source, and select the 'Variables' button.
- Add one or more new variables by pressing the 'New' button and entering the locations. Then press 'OK'.

As an example, the following three variables point to two GitHub projects, one of which holds the Maven project, while the bundle project is located in the same subfolder:

- GITHUB_LOC: C:/Users/MyName/MyGithubLocation
- MYPROJECT_LOC: ${GITHUB_LOC}/MyProject
- MYSOURCEPROJECT_LOC: ${MYPROJECT_LOC}/MySource

Now all you have to do is change the 'Linked folder location' to MYSOURCEPROJECT_LOC/src/main/java in order to include the Maven project in your bundle. You can then also add a resource folder: MYSOURCEPROJECT_LOC/src/main/resources. TIP: In the above example with a Maven project, the linked folder name will default to 'java' (and the resources to 'resources'). It is recommended to leave it that way, because you can then use the 'src' folder for bundle-specific code that you may want to add, like a declarative service. Also remember to update the build path to include the folders your project needs.

Conclusion

The 'Link Source' option provides a powerful way to make non-OSGI code accessible as OSGI bundles. The inclusion and exclusion patterns allow you to customise the bundles to your needs.
November 13, 2014
by Kees Pieters
· 15,668 Views · 1 Like
Coldfusion Example: Using jQuery UI Accordion with a ColdFusion Query
A reader pinged me yesterday with a simple problem that I thought would be good to share on the blog. He had a query of events that he wanted to use with jQuery UI's Accordion control. The Accordion control simply takes content and splits it into various "panes" with one visible at a time. For his data, he wanted to split his content into panes designated by a unique month and year. Here is a quick demo of that in action.

I began by creating a query to store my data. I created a query with a date and title property and then randomly chose to add 0 to 3 "events" over the next twelve months. I specifically wanted to support 0 to ensure my demo handled months without any data.

q = queryNew("date,title");
for(i=1; i<12; i++) {
    //for each month, we add 0-3 events (some months may not have data)
    toAdd = randRange(0, 3);
    for(k=0; k
[the remainder of this code listing was lost when the article was converted to plain text]

To handle creating the accordion, I had to follow the rules jQuery UI sets up for the control. Basically: wrap the entire set of data in a div, and separate each "pane" with an h3 and an inner div. To handle this, I have to know when a new unique month/year "block" starts. I store this in a variable, lastDateStr, and just check it in every iteration over the query. I also need to ensure that on the last row I close the div.

[the accordion markup listing was lost in conversion; only the interpolated values #thisDateStr# and #title# survive]

And the end result: So, not rocket science, but hopefully helpful to someone. Here is the entire template if you want to try it yourself.

[the full template listing was likewise lost in conversion]
November 13, 2014
by Raymond Camden
· 4,379 Views
How to Deal with MySQL Deadlocks
Originally Written by Peiran Song A deadlock in MySQL happens when two or more transactions mutually hold and request for locks, creating a cycle of dependencies. In a transaction system, deadlocks are a fact of life and not completely avoidable. InnoDB automatically detects transaction deadlocks, rollbacks a transaction immediately and returns an error. It uses a metric to pick the easiest transaction to rollback. Though an occasional deadlock is not something to worry about, frequent occurrences call for attention. Before MySQL 5.6, only the latest deadlock can be reviewed using SHOW ENGINE INNODB STATUS command. But with Percona Toolkit’s pt-deadlock-logger you can have deadlock information retrieved from SHOW ENGINE INNODB STATUS at a given interval and saved to a file or table for late diagnosis. For more information on using pt-deadlock-logger, see this post. With MySQL 5.6, you can enable a new variable innodb_print_all_deadlocks to have all deadlocks in InnoDB recorded in mysqld error log. Before and above all diagnosis, it is always an important practice to have the applications catch deadlock error (MySQL error no. 1213) and handle it by retrying the transaction. How to diagnose a MySQL deadlock A MySQL deadlock could involve more than two transactions, but the LATEST DETECTED DEADLOCK section only shows the last two transactions. Also it only shows the last statement executed in the two transactions, and locks from the two transactions that created the cycle. What are missed are the earlier statements that might have really acquired the locks. I will show some tips on how to collect the missed statements. Let’s look at two examples to see what information is given. Example 1: 1 141013 6:06:22 2 *** (1) TRANSACTION: 3 TRANSACTION 876726B90, ACTIVE 7 sec setting auto-inc lock 4 mysql tables in use 1, locked 1 5 LOCK WAIT 9 lock struct(s), heap size 1248, 4 row lock(s), undo log entries 4 6 MySQL thread id 155118366, OS thread handle 0x7f59e638a700, query id 87987781416 localhost msandbox update 7 INSERT INTO t1 (col1, col2, col3, col4) values (10, 20, 30, 'hello') 8 *** (1) WAITING FOR THIS LOCK TO BE GRANTED: 9 TABLE LOCK table `mydb`.`t1` trx id 876726B90 lock mode AUTO-INC waiting 10 *** (2) TRANSACTION: 11 TRANSACTION 876725B2D, ACTIVE 9 sec inserting 12 mysql tables in use 1, locked 1 13 876 lock struct(s), heap size 80312, 1022 row lock(s), undo log entries 1002 14 MySQL thread id 155097580, OS thread handle 0x7f585be79700, query id 87987761732 localhost msandbox update 15 INSERT INTO t1 (col1, col2, col3, col4) values (7, 86, 62, "a lot of things"), (7, 76, 62, "many more") 16 *** (2) HOLDS THE LOCK(S): 17 TABLE LOCK table `mydb`.`t1` trx id 876725B2D lock mode AUTO-INC 18 *** (2) WAITING FOR THIS LOCK TO BE GRANTED: 19 RECORD LOCKS space id 44917 page no 529635 n bits 112 index `PRIMARY` of table `mydb`.`t2` trx id 876725B2D lock mode S locks rec but not gap waiting 20 *** WE ROLL BACK TRANSACTION (1) Line 1 gives the time when the deadlock happened. If your application code catches and logs deadlock errors,which it should, then you can match this timestamp with the timestamps of deadlock errors in application log. You would have the transaction that got rolled back. From there, retrieve all statements from that transaction. Line 3 & 11, take note of Transaction number and ACTIVE time. 
If you log SHOW ENGINE INNODB STATUS output periodically(which is a good practice), then you can search previous outputs with Transaction number to hopefully see more statements from the same transaction. The ACTIVE sec gives a hint on whether the transaction is a single statement or multi-statement one. Line 4 & 12, the tables in use and locked are only with respect to the current statement. So having 1 table in use does not necessarily mean that the transaction involves 1 table only. Line 5 & 13, this is worth of attention as it tells how many changes the transaction had made, which is the “undo log entries” and how many row locks it held which is “row lock(s)”. These info hints the complexity of the transaction. Line 6 & 14, take note of thread id, connecting host and connecting user. If you use different MySQL users for different application functions which is another good practice, then you can tell which application area the transaction comes from based on the connecting host and user. Line 9, for the first transaction, it only shows the lock it was waiting for, in this case the AUTO-INC lock on table t1. Other possible values are S for shared lock and X for exclusive with or without gap locks. Line 16 & 17, for the second transaction, it shows the lock(s) it held, in this case the AUTO-INC lock which was what TRANSACTION (1) was waiting for. Line 18 & 19 shows which lock TRANSACTION (2) was waiting for. In this case, it was a shared not gap record lock on another table’s primary key. There are only a few sources for a shared record lock in InnoDB: 1) use of SELECT … LOCK IN SHARE MODE 2) on foreign key referenced record(s) 3) with INSERT INTO… SELECT, shared locks on source table The current statement of trx(2) is a simple insert to table t1, so 1 and 3 are eliminated. By checking SHOW CREATE TABLE t1, you could confirm that the S lock was due to a foreign key constraint to the parent table t2. Example 2: With MySQL community version, each record lock has the record content printed: 1 2014-10-11 10:41:12 7f6f912d7700 2 *** (1) TRANSACTION: 3 TRANSACTION 2164000, ACTIVE 27 sec starting index read 4 mysql tables in use 1, locked 1 5 LOCK WAIT 3 lock struct(s), heap size 360, 2 row lock(s), undo log entries 1 6 MySQL thread id 9, OS thread handle 0x7f6f91296700, query id 87 localhost ro ot updating 7 update t1 set name = 'b' where id = 3 8 *** (1) WAITING FOR THIS LOCK TO BE GRANTED: 9 RECORD LOCKS space id 1704 page no 3 n bits 72 index `PRIMARY` of table `tes t`.`t1` trx id 2164000 lock_mode X locks rec but not gap waiting 10 Record lock, heap no 4 PHYSICAL RECORD: n_fields 5; compact format; info bit s 0 11 0: len 4; hex 80000003; asc ;; 12 1: len 6; hex 000000210521; asc ! !;; 13 2: len 7; hex 180000122117cb; asc ! ;; 14 3: len 4; hex 80000008; asc ;; 15 4: len 1; hex 63; asc c;; 16 17 *** (2) TRANSACTION: 18 TRANSACTION 2164001, ACTIVE 18 sec starting index read 19 mysql tables in use 1, locked 1 20 3 lock struct(s), heap size 360, 2 row lock(s), undo log entries 1 21 MySQL thread id 10, OS thread handle 0x7f6f912d7700, query id 88 localhost r oot updating 22 update t1 set name = 'c' where id = 2 23 *** (2) HOLDS THE LOCK(S): 24 RECORD LOCKS space id 1704 page no 3 n bits 72 index `PRIMARY` of table `tes t`.`t1` trx id 2164001 lock_mode X locks rec but not gap 25 Record lock, heap no 4 PHYSICAL RECORD: n_fields 5; compact format; info bit s 0 26 0: len 4; hex 80000003; asc ;; 27 1: len 6; hex 000000210521; asc ! !;; 28 2: len 7; hex 180000122117cb; asc ! 
;; 29 3: len 4; hex 80000008; asc ;; 30 4: len 1; hex 63; asc c;; 31 32 *** (2) WAITING FOR THIS LOCK TO BE GRANTED: 33 RECORD LOCKS space id 1704 page no 3 n bits 72 index `PRIMARY` of table `tes t`.`t1` trx id 2164001 lock_mode X locks rec but not gap waiting 34 Record lock, heap no 3 PHYSICAL RECORD: n_fields 5; compact format; info bit s 0 35 0: len 4; hex 80000002; asc ;; 36 1: len 6; hex 000000210520; asc ! ;; 37 2: len 7; hex 17000001c510f5; asc ;; 38 3: len 4; hex 80000009; asc ;; 39 4: len 1; hex 62; asc b;; Line 9 & 10: The ‘space id’ is tablespace id, ‘page no’ gives which page the record lock is on inside the tablespace. The ‘n bits’ is not the page offset, instead the number of bits in the lock bitmap. The page offset is the ‘heap no’ on line 10, Line 11~15: It shows the record data in hex numbers. Field 0 is the cluster index(primary key). Ignore the highest bit, the value is 3. Field 1 is the transaction id of the transaction which last modified this record, decimal value is 2164001 which is TRANSACTION (2). Field 2 is the rollback pointer. Starting from field 3 is the rest of the row data. Field 3 is integer column, value 8. Field 4 is string column with character ‘c’. By reading the data, we know exactly which row is locked and what is the current value. What else can we learn from analysis? Since most MySQL deadlocks happen between two transactions, we could start the analysis based on that assumption. In Example 1, trx (2) was waiting on a shared lock, so trx (1) either held a shared or exclusive lock on that primary key record of table t2. Let’s say col2 is the foreign key column, by checking the current statement of trx(1), we know it did not require the same record lock, so it must be some previous statement in trx(1) that required S or X lock(s) on t2’s PK record(s). Trx (1) only made 4 row changes in 7 seconds. Then you learned a few characteristics of trx(1): it does a lot of processing but a few changes; changes involve table t1 and t2, a single record insertion to t2. These information combined with other data could help developers to locate the transaction. Where else can we find previous statements of the transactions? Besides application log and previous SHOW ENGINE INNODB STATUS output, you may also leverage binlog, slow log and/or general query log. With binlog, if binlog_format=statement, each binlog event would have the thread_id. Only committed transactions are logged into binlog, so we could only look for Trx(2) in binlog. In the case of Example 1, we know when the deadlock happened, and we know Trx(2) started 9 seconds ago. We can run mysqlbinlog on the right binlog file and look for statements with thread_id = 155097580. It is always good to then cross refer the statements with the application code to confirm. $ mysqlbinlog -vvv --start-datetime=“2014-10-13 6:06:12” --stop-datatime=“2014-10-13 6:06:22” mysql-bin.000010 > binlog_1013_0606.out With Percona Server 5.5 and above, you can set log_slow_verbosity to include InnoDB transaction id in slow log. Then if you have long_query_time = 0, you would be able to catch all statements including those rolled back into slow log file. With general query log, the thread id is included and could be used to look for related statements. How to avoid a MySQL deadlock There are things we could do to eliminate a deadlock after we understand it. – Make changes to the application. 
In some cases, you can greatly reduce the frequency of deadlocks by splitting a long transaction into smaller ones, so locks are released sooner. In other cases, the deadlock arises because two transactions touch the same sets of data, in one or more tables, in different orders. Change them to access the data in the same order; in other words, serialize the access. That way you get a lock wait instead of a deadlock when the transactions run concurrently. – Make changes to the table schema, such as removing a foreign key constraint to detach two tables, or adding indexes to minimize the rows scanned and locked. – In the case of gap locking, you may change the transaction isolation level to READ COMMITTED for the session or transaction to avoid it. But then the binlog format for the session or transaction would have to be ROW or MIXED.
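The catch-and-retry advice above needs very little application code. The following is a minimal JDBC sketch, not taken from the original post, that replays a whole transaction when MySQL reports deadlock error 1213; the TransactionBody interface, the class name, and the retry limit are illustrative assumptions:

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetry {

    private static final int ER_LOCK_DEADLOCK = 1213; // MySQL deadlock error code
    private static final int MAX_ATTEMPTS = 3;

    // The work done inside one transaction.
    interface TransactionBody {
        void execute(Connection conn) throws SQLException;
    }

    // Runs the body inside a transaction and replays the whole transaction
    // when InnoDB rolls it back because of a deadlock.
    static void runWithRetry(Connection conn, TransactionBody body) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                body.execute(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                if (e.getErrorCode() == ER_LOCK_DEADLOCK && attempt < MAX_ATTEMPTS) {
                    continue; // retry from the start of the transaction
                }
                throw e;
            }
        }
    }
}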
November 12, 2014
by Peter Zaitsev
· 31,243 Views
Ext JS Grid Grouping Tutorial
this tutorial shows you how to create a grouped grid in ext js. ext js grid grouping is a common scenario in business applications where users need to analyze tabular data grouped by different attributes. the ext js application that you will create in this tutorial will render an ext js grid containing fictional data describing model cars. this article is part of a series of ext js tutorials that i have posted on jorgeramon.me. creating an ext js project from scratch we will get started by creating the directories for our sample ext js application. let’s go ahead and create the directories as depicted in the screenshot below. this is a typical ext js project structure with an app directory under which we’ve placed the model, store and view directories. we will not create a controller folder because for this simple example we will use the ext.application instance that we will create as the only controller in the app. next, we will create the file that hosts our ext js application. let’s name it grid-grouping.html and place it in the root directory of the project. here’s the code that goes in the file: remember to make sure that the css and script references to the ext js library in the head section of the html document are pointing to the correct directories in your development workstation. the references above work for my development environment. in the head section we also have an entry for the app.js file, which we need to add to the root directory of the project, as shown below. let’s open app.js and type the following ext js application definition: ext.application({ name: 'app', autocreateviewport: true, launch: function () { } }); this is a super simple ext js application. we are using the autocreateviewport config to instruct the app to load and instantiate the viewport, which we will create in a few minutes, before firing the launch function. we will leave the launch function empty because in this example we do not need to execute any logic within the function. the ext js model to represent the model cars data that we will ultimately render in the grid, we will use the modelcar ext js model class. this class goes in the modelcar.js file that we will add to the model directory of the project. this is how we are going to define the modelcar model: ext.define('app.model.modelcar', { extend: 'ext.data.model', fields: [ { name: 'id', type: 'int' }, { name: 'category', type: 'string' }, { name: 'name', type: 'string' }, { name: 'vendor', type: 'string' } ] }); back in the app.js file we need to add a models config to the ext js application just so the app knows that it has a dependency on the modelcar module and it needs to load it as part of the app. ext.application({ name: 'app', models: ['modelcar'], autocreateviewport: true, launch: function () { } }); creating a store to feed the ext js grid now we are going to focus on the ext js store that will feed the grid. let’s add a modelcars.js file to the store directory of the project. 
we will type the following definition in the file: ext.define('app.store.modelcars', { extend: 'ext.data.store', model: 'app.model.modelcar', groupfield: 'category', groupdir: 'asc', sorters: ['name'], data: [ { 'id': 1, 'category': 'trucks and buses', 'name': '1940 ford pickup truck', 'vendor': 'motor city art classics' }, { 'id': 2, 'category': 'trucks and buses', 'name': '1957 chevy pickup', 'vendor': 'gearbox collectibles' }, { 'id': 3, 'category': 'classic cars', 'name': '1972 alfa romeo gta', 'vendor': 'motor city art classics' }, { 'id': 4, 'category': 'motorcycles', 'name': '2003 harley-davidson eagle drag bike', 'vendor': 'studio m art models' }, { 'id': 5, 'category': 'motorcycles', 'name': '1996 moto guzzi 1100i', 'vendor': 'motor city art classics' }, { 'id': 6, 'category': 'classic cars', 'name': '1952 alpine renault 1300', 'vendor': 'studio m art models' }, { 'id': 7, 'category': 'classic cars', 'name': '1993 mazda rx-7', 'vendor': 'motor city art classics' }, { 'id': 9, 'category': 'classic cars', 'name': '1965 aston martin db5', 'vendor': 'motor city art classics' }, { 'id': 10, 'category': 'classic cars', 'name': '1998 chrysler plymouth prowler', 'vendor': 'unimax art galleries' }, { 'id': 11, 'category': 'trucks and buses', 'name': '1926 ford fire engine', 'vendor': 'studio m art models' }, { 'id': 12, 'category': 'trucks and buses', 'name': '1962 volkswagen microbus', 'vendor': 'unimax art galleries' }, { 'id': 13, 'category': 'trucks and buses', 'name': '1980’s gm manhattan express', 'vendor': 'motor city art classics' }, { 'id': 13, 'category': 'motorcycles', 'name': '1997 bmw f650 st', 'vendor': 'gearbox collectibles' }, { 'id': 13, 'category': 'motorcycles', 'name': '1974 ducati 350 mk3 desmo', 'vendor': 'motor city art classics' }, { 'id': 13, 'category': 'motorcycles', 'name': '2002 yamaha yzr m1', 'vendor': 'motor city art classics' } ] }) the ext js model config of the store is set to the modelcar model that we created a couple of minutes ago. no surprises there. a couple of things that i don’t want you to miss in the store’s definition are the groupfield and groupdir configs, which tell the store which field and in which direction to group by: groupfield: 'category', groupdir: 'asc', also note how we use the sorters config to sort the grouped records by the name field. finally, as the data config indicates, we are using a hard-coded array as the data for the store. we are using hard-coded data in order to keep the example short. let’s make the application aware of the modelcars store by adding the stores config in the app.js file: ext.application({ name: 'app', models: ['modelcar'], stores: ['modelcars'], autocreateviewport: true, launch: function () { } }); creating the ext js viewport the app’s viewport is where we will define the ext js grouped grid. let’s create the viewport.js file in the project’s view directory: here’s the code that we will add to the viewport’s file: ext.define('app.view.viewport', { extend: 'ext.container.viewport', requires: ['ext.grid.panel'], style: 'padding:25px', layout: 'vbox', items: [ { xtype: 'gridpanel', width: 650, height: 475, title: 'ext js grid grouping', store: 'modelcars', columns: [ { text: 'id', hidden: true, dataindex: 'id' }, { text: 'name', sortable: true, dataindex: 'name', flex: 3 }, { text: 'vendor', sortable: true, dataindex: 'vendor', flex:2 }, { text: 'category', sortable: true, dataindex: 'category', flex: 2 } ], features: [{ ftype: 'grouping', // you can customize the group's header. 
groupheadertpl: '{name} ({children.length})', enablenogroups:true }] } ] }); let’s review this viewport definition in detail. first, we are using the viewport’s requires config to tell the ext js loader that it needs to load the ext.grid.panel class as part of the application. then, we define our grid instance within the items config of the viewport. we are telling the grid to use the modelcars ext js store as its data source by setting the store config to ‘modelcars’. this works because ext js components pass the value of the store config to the ext.data.storemanager class, which can look up or create a store, depending on whether the store config’s value is a store id, a store’s instance or a store’s configuration. we are using the grid’s columns config to define four columns: id, name, vendor and category. setting the hidden config of the id column to true will make the column invisible to the user. the sortable config of the visible columns is true to allow users to sort by those columns. setting up ext js grid grouping the features config allows us to specify one or more features to be added to the grid. as explained in the ext js docs, an ext js feature is a class that provides hooks that you can use to inject functionality into the grid at different points during the grid’s creation cycle. this is a clever approach to maintaining the grid’s core as lightweight as possible, while allowing for advanced functionality to be “plugged in” as needed. currently, extjs provides the following grid features: grouping, through the ext.grid.feature.grouping class. grouping summary, though the ext.grid.feature.groupingsummary class. row body, though the ext.grid.feature.rowbody class. summary, through the ext.grid.feature.summary class. in our app we are only using the grouping feature. we set the ftype property to ‘grouping’ to signal the grid that we will use the grouping feature. the groupheadertpl config allows us to enter a template for the group headers of the grid: groupheadertpl: '{name} ({children.length})' this template will render the name of the group along with the number of records in the group, producing a result similar to this picture: setting the enablenogroups config to true will let the user turn off grouping in the grid. note that if we wanted to specify multiple features, we would need to add them to the features config in array notation. one last step before we are ready to test the app. let’s return to the app.js file and the add views config, where we will place a reference to the viewport class that we just created: ext.application({ name: 'app', models: ['modelcar'], stores: ['modelcars'], views: ['viewport'], autocreateviewport: true, launch: function () { } }); now we can test the app on a browser. upon application load, the grid’s records are grouped by the category field, and sorted by the name field: summary and next steps in this tutorial we reviewed the steps needed to create a grouped grid in ext js, and we learned that this process involves attaching a “grouping” feature to an ext js grid that is connected to a grouped ext js store. as next steps, i would suggest that you explore how to use the “grouping summary” and “summary” features of the ext js grid, which are commonly used together with the “grouping” feature. i will cover these features in future tutorials. don’t forget to sign up for my mailing list so you can be among the first to know when i publish the next update. 
download the source code download the ext js grouped grid example here: ext js grid grouping tutorial on github
November 11, 2014
by Jorge Ramon
· 17,127 Views
Plotting Data Online via Plotly and Python
I don’t do a lot of plotting in my job, but I recently heard about a website called Plotly that provides a plotting service for anyone’s data. They even have a plotly package for Python (among others)! So in this article we will be learning how to plot with their package. Let’s have some fun making graphs!

Getting Started

You will need the plotly package to follow along with this article. You can use pip to get the package and install it: pip install plotly. Now that you have it installed, you’ll need to go to the Plotly website and create a free account. Once that’s done, you will get an API key. To make things super simple, you can use your username and API key to create a credentials file. Here’s how to do that:

import plotly.tools as tls

tls.set_credentials_file(
    username="your_username",
    api_key="your_api_key")

# to get your credentials
credentials = tls.get_credentials_file()

If you don’t want to save your credentials, then you can also sign in to their service by doing the following:

import plotly.plotly as py
py.sign_in('your_username', 'your_api_key')

For the purposes of this article, I’m assuming you have created the credentials file. I found that it makes interacting with their service a bit easier.

Creating a Graph

Plotly seems to default to a Scatter Plot, so we’ll start with that. I decided to grab some data from a census website. You can download any US state’s population data, along with other pieces of data. In this case, I downloaded a CSV file that contained the population of each county in the state of Iowa. Let’s take a look:

import csv
import plotly.plotly as py

#----------------------------------------------------------------------
def plot_counties(csv_path):
    """
    http://census.ire.org/data/bulkdata.html
    """
    counties = {}
    county = []
    pop = []
    counter = 0

    with open(csv_path) as csv_handler:
        reader = csv.reader(csv_handler)
        for row in reader:
            if counter == 0:
                counter += 1
                continue
            county.append(row[8])
            pop.append(row[9])

    trace = dict(x=county, y=pop)
    data = [trace]
    py.plot(data, filename='ia_county_populations')

if __name__ == '__main__':
    csv_path = 'ia_county_pop.csv'
    plot_counties(csv_path)

If you run this code, you should see a graph that looks like this (you can also view the graph here). Anyway, as you can see in the code above, all I did was read the CSV file and extract the county name and the population. Then I put that data into two different Python lists. Finally I created a dictionary of those lists and then wrapped that dictionary in a list. So you end up with a list that contains a dictionary that contains two lists! To make the Scatter Plot, I passed the data to plotly’s plot method.

Converting to a Bar Chart

Now let’s see if we can change the Scatter Plot to a Bar Chart. First off, we’ll play around with the plot data. The following was done via the Python interpreter:

>>> scatter = py.get_figure('driscollis', '0')
>>> print scatter.to_string()
Figure(
    data=Data([
        Scatter(
            x=[u'Adair County', u'Adams County', u'Allamakee County', u'..', ],
            y=[u'7682', u'4029', u'14330', u'12887', u'6119', u'26076', '..' ]
        )
    ])
)

This shows how we can grab the figure using the username and the plot’s unique number. Then we printed out the data structure. You will note that it doesn’t print out the entire data structure.
Now let’s do the actual conversion to a Bar Chart:

from plotly.graph_objs import Bar, Data, Figure, Layout

scatter_data = scatter.get_data()
trace_bar = Bar(scatter_data[0])
data = Data([trace_bar])
layout = Layout(title="IA County Populations")
fig = Figure(data=data, layout=layout)
py.plot(fig, filename='bar_ia_county_pop')

This will create a bar chart at the following URL: https://plot.ly/~driscollis/1. This code is slightly different from the code we used originally. In this case, we explicitly created a Bar object and passed it the scatter plot’s data. Then we put that data into a Data object. Next we created a Layout object and gave our chart a title. Then we created a Figure object using the data and layout objects. Finally we plotted the bar chart.

Saving the Graph to Disk

Plotly also allows you to save your graph to your hard drive. You can save it in the following formats: png, svg, jpeg, and pdf. Assuming you still have the Figure object from the previous example handy, you can do the following:

py.image.save_as(fig, filename='graph.png')

If you want to save using one of the other formats, then just use that format’s extension in the filename.

Wrapping Up

At this point you should be able to use the plotly package pretty well. There are many other graph types available, so be sure to read Plotly’s documentation thoroughly. They also support streaming graphs. As I understand it, Plotly allows you to create 10 graphs for free. After that you would either have to delete some of your graphs or pay a monthly fee.

Additional Reading

Plotly Python documentation
Plotly User Guide
November 10, 2014
by Mike Driscoll
· 10,488 Views
Missing Stack Traces for Repeated Exceptions
A long while ago an optimisation was added to the JVM so that if the same exception is thrown again and again and again, a single instance of the Exception is created without the stack trace filled in, in order to increase performance. This is an excellent idea unless you are trying to diagnose a problem and you have missed the original error. If you forget about this optimisation, you spend the afternoon looking at the following log output and weeping slightly. (In my defence, I have a little one in the house, hence the fuzzy brain and lack of blogging action this year.)

java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException
...
java.lang.Exception: Uncaught exceptions during test
    at oracle.jdevstudio-testware-tests.level0.testADFcMATS(/.../work/mw9111/jdeveloper/jdev/extensions/oracle.jdevstudio-testware-tests/abbot/common-adf/CreateNewCustomAppAndProjectWithName.xml:25)
    at oracle.jdevstudio-testware-tests.level0.testADFcMATS(/.../work/mw9111/jdeveloper/jdev/extensions/oracle.jdevstudio-testware-tests/abbot/level0/testADFcMATS.xml:94)
    at oracle.abbot.JDevScriptFixture.runTest(JDevScriptFixture.java:555)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at junit.framework.TestResult.runProtected(TestResult.java:128)
    at junit.framework.TestResult.run(TestResult.java:113)
    at junit.framework.TestCase.run(TestCase.java:124)
    at junit.framework.TestSuite.runTest(TestSuite.java:243)
    at junit.framework.TestSuite.run(TestSuite.java:238)
    at junit.textui.TestRunner.doRun(TestRunner.java:116)
    at junit.textui.TestRunner.doRun(TestRunner.java:109)
    at oracle.abbot.AbbotRunner.run(AbbotRunner.java:614)
    at oracle.abbot.AbbotAddin$IdeAbbotRunner.run(AbbotAddin.java:634)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException

[Crickets]

It turns out that you can turn off this optimisation with a simple flag:

java -XX:-OmitStackTraceInFastThrow ....

In my particular case, the actual exception causing this trouble was an NPE from the GlyphView code in JDK 8 (one that seems to be caused by a glitch in HotSpot), but that in turn was causing the AIOOBE in some logging code, clouding the issue even more. This is a good flag to add by default when running your automated tests, particularly in combination with the stack trace length override I have talked about before.
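If you want to see the behaviour for yourself, the self-contained sketch below (not from the original post) provokes the same implicit exception in a tight loop. Whether and when the stack trace disappears depends on your JVM and its JIT compilation thresholds, and it should not disappear at all when you run with -XX:-OmitStackTraceInFastThrow:

public class FastThrowDemo {
    public static void main(String[] args) {
        int[] empty = new int[0];
        for (int i = 0; i < 100000; i++) {
            try {
                int ignored = empty[1]; // always throws ArrayIndexOutOfBoundsException
            } catch (ArrayIndexOutOfBoundsException e) {
                // Once the throwing code has been JIT-compiled, the JVM may hand back a
                // preallocated exception whose stack trace is empty.
                if (e.getStackTrace().length == 0) {
                    System.out.println("Stack trace omitted at iteration " + i);
                    return;
                }
            }
        }
        System.out.println("Stack traces were present for every iteration");
    }
}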
November 9, 2014
by Gerard Davison
· 11,272 Views · 1 Like
AngularJS Interview Questions: Set 3
The article represents the 3rd set of 10 interview questions. The previous two sets were published earlier on our website, and we recommend you go through them as well: Interview Questions Set 1, Interview Questions Set 2.

Q1. To which element types can directives be applied?
Ans: The following are the element types and directive declaration styles:
`E` – Element name: <my-directive></my-directive>
`A` – Attribute (default): <div my-directive></div>
`C` – Class: <div class="my-directive"></div>
`M` – Comment: <!-- directive: my-directive -->

Q2. What is the notion of an "isolate" scope object when creating a custom directive? How is it different from the normal scope object?
Ans: When creating a custom directive, there is a property called "scope" which can be assigned different values such as true/false or {}. When it is assigned the value "{}", a new "isolate" scope is created. The isolate scope differs from the normal scope in that it does not prototypically inherit from the parent scope. This is useful when creating reusable components, which should not accidentally read or modify data in the parent scope.

Q3. What are the different return types of a compile function?
Ans: A compile function can return either a function or an object.
A (post-link) function: equivalent to registering the linking function via the `link` property of the config object when the compile function is empty.
An object with function(s) registered via `pre` and `post` properties: this allows you to control when a linking function should be called during the linking phase.

Q4. Which API needs to be invoked on the rootScope service to get a new child scope?
Ans: $new

Q5. Explain the relationship between scope.$apply and scope.$digest.
Ans: When an event such as a text change in a text field happens, the event is caught by an event handler, which then invokes the $apply method on the scope object. The $apply method in turn evaluates the expression and finally invokes the $digest method on the scope object. The following code does it all:

$apply: function(expr) {
    try {
        beginPhase('$apply');
        return this.$eval(expr);
    } catch (e) {
        $exceptionHandler(e);
    } finally {
        clearPhase();
        try {
            $rootScope.$digest();
        } catch (e) {
            $exceptionHandler(e);
            throw e;
        }
    }
}

Q6. Which angular module is loaded by default?
Ans: ng

Q7. What angular function is used to manually start an application?
Ans: angular.bootstrap

Q8. Name some of the methods that can be called on a module instance. For example, say you instantiated a module such as 'var helloApp = angular.module( "helloApp", [] );'. What methods could be called on the helloApp instance?
Ans: Some of the methods are: controller, factory, directive, filter, constant, service, provider, config.

Q9. Which angular function is used to wrap a raw DOM element or HTML string as a jQuery element?
Ans: angular.element. If jQuery is available, `angular.element` is an alias for the jQuery function. If jQuery is not available, `angular.element` delegates to Angular's built-in subset of jQuery, called "jQuery lite" or "jqLite."

Q10. Write sample code representing an injector that could be used to kick off your application.

var $injector = angular.injector(['ng', 'appName']);
$injector.invoke(function($rootScope, $compile, $document){
    $compile($document)($rootScope);
    $rootScope.$digest();
});

Feel free to suggest changes to the above answers.
November 9, 2014
by Ajitesh Kumar
· 11,526 Views
Building Microservices with Spring Boot and Apache Thrift. Part 1
In the modern world of microservices it's important to provide strict and polyglot clients for your service. It's better if your API is self-documented. One of the best tools for it is Apache Thrift. I want to explain how to use it with my favorite platform for microservices - Spring Boot. All project source code is available on GitHub: https://github.com/bsideup/spring-boot-thrift Project skeleton I will use Gradle to build our application. First, we need our main build.gradle file: buildscript { repositories { jcenter() } dependencies { classpath("org.springframework.boot:spring-boot-gradle-plugin:1.1.8.RELEASE") } } allprojects { repositories { jcenter() } apply plugin:'base' apply plugin: 'idea' } subprojects { apply plugin: 'java' } Nothing special for a Spring Boot project. Then we need a gradle file for thrift protocol modules (we will reuse it in next part): import org.gradle.internal.os.OperatingSystem repositories { ivy { artifactPattern "http://dl.bintray.com/bsideup/thirdparty/[artifact]-[revision](-[classifier]).[ext]" } } buildscript { repositories { jcenter() } dependencies { classpath "ru.trylogic.gradle.plugins:gradle-thrift-plugin:0.1.1" } } apply plugin: ru.trylogic.gradle.thrift.plugins.ThriftPlugin task generateThrift(type : ru.trylogic.gradle.thrift.tasks.ThriftCompileTask) { generator = 'java:beans,hashcode' destinationDir = file("generated-src/main/java") } sourceSets { main { java { srcDir generateThrift.destinationDir } } } clean { delete generateThrift.destinationDir } idea { module { sourceDirs += [file('src/main/thrift'), generateThrift.destinationDir] } } compileJava.dependsOn generateThrift dependencies { def thriftVersion = '0.9.1'; Map platformMapping = [ (OperatingSystem.WINDOWS) : 'win', (OperatingSystem.MAC_OS) : 'osx' ].withDefault { 'nix' } thrift "org.apache.thrift:thrift:$thriftVersion:${platformMapping.get(OperatingSystem.current())}@bin" compile "org.apache.thrift:libthrift:$thriftVersion" compile 'org.slf4j:slf4j-api:1.7.7' } We're using my Thrift plugin for Gradle. Thrift will generate source to the "generated-src/main/java" directory. By default, Thrift uses slf4j v1.5.8, while Spring Boot uses v1.7.7. It will cause an error in runtime when you will run your application, that's why we have to force a slf4j api dependency. Calculator service Let's start with a simple calculator service. It will have 2 modules: protocol and app.We will start with protocol. 
Your project should look as follows: calculator/ protocol/ src/ main/ thrift/ calculator.thrift build.gradle build.gradle settings.gradle thrift.gradle Where calculator/protocol/build.gradle contains only one line: apply from: rootProject.file('thrift.gradle') Don't forget to put these lines to settings.gradle, otherwise your modules will not be visible to Gradle: include 'calculator:protocol' include 'calculator:app' Calculator protocol Even if you're not familiar with Thrift, its protocol description file (calculator/protocol/src/main/thrift/calculator.thrift) should be very clear to you: namespace cpp com.example.calculator namespace d com.example.calculator namespace java com.example.calculator namespace php com.example.calculator namespace perl com.example.calculator namespace as3 com.example.calculator enum TOperation { ADD = 1, SUBTRACT = 2, MULTIPLY = 3, DIVIDE = 4 } exception TDivisionByZeroException { } service TCalculatorService { i32 calculate(1:i32 num1, 2:i32 num2, 3:TOperation op) throws (1:TDivisionByZeroException divisionByZero); } Here we define TCalculatorService with only one method - calculate. It can throw an exception of type TDivisionByZeroException. Note how many languages we're supporting out of the box (in this example we will use only Java as a target, though) Now run ./gradlew generateThrift, you will get generated Java protocol source in the calculator/protocol/generated-src/main/java/ folder. Calculator application Next, we need to create the service application itself. Just create calculator/app/ folder with the following structure: src/ main/ java/ com/ example/ calculator/ handler/ CalculatorServiceHandler.java service/ CalculatorService.java CalculatorApplication.java build.gradle Our build.gradle file for app module should look like this: apply plugin: 'spring-boot' dependencies { compile project(':calculator:protocol') compile 'org.springframework.boot:spring-boot-starter-web' testCompile 'org.springframework.boot:spring-boot-starter-test' } Here we have a dependency on protocol and typical starters for Spring Boot web app. CalculatorApplication is our main class. In this example I will configure Spring in the same file, but in your apps you should use another config class instead. package com.example.calculator; import com.example.calculator.handler.CalculatorServiceHandler; import org.apache.thrift.protocol.*; import org.apache.thrift.server.TServlet; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.*; import javax.servlet.Servlet; @Configuration @EnableAutoConfiguration @ComponentScan public class CalculatorApplication { public static void main(String[] args) { SpringApplication.run(CalculatorApplication.class, args); } @Bean public TProtocolFactory tProtocolFactory() { //We will use binary protocol, but it's possible to use JSON and few others as well return new TBinaryProtocol.Factory(); } @Bean public Servlet calculator(TProtocolFactory protocolFactory, CalculatorServiceHandler handler) { return new TServlet(new TCalculatorService.Processor(handler), protocolFactory); } } You may ask why Thrift servlet bean is called "calculator". In Spring Boot, it will register your servlet bean in context of the bean name and our servlet will be available at /calculator/. 
After that we need a Thrift handler class: package com.example.calculator.handler; import com.example.calculator.*; import com.example.calculator.service.CalculatorService; import org.apache.thrift.TException; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; @Component public class CalculatorServiceHandler implements TCalculatorService.Iface { @Autowired CalculatorService calculatorService; @Override public int calculate(int num1, int num2, TOperation op) throws TException { switch(op) { case ADD: return calculatorService.add(num1, num2); case SUBTRACT: return calculatorService.subtract(num1, num2); case MULTIPLY: return calculatorService.multiply(num1, num2); case DIVIDE: try { return calculatorService.divide(num1, num2); } catch(IllegalArgumentException e) { throw new TDivisionByZeroException(); } default: throw new TException("Unknown operation " + op); } } } In this example I want to show you that Thrift handler can be a normal Spring bean and you can inject dependencies in it. Now we need to implement CalculatorService itself: package com.example.calculator.service; import org.springframework.stereotype.Component; @Component public class CalculatorService { public int add(int num1, int num2) { return num1 + num2; } public int subtract(int num1, int num2) { return num1 - num2; } public int multiply(int num1, int num2) { return num1 * num2; } public int divide(int num1, int num2) { if(num2 == 0) { throw new IllegalArgumentException("num2 must not be zero"); } return num1 / num2; } } That's it. Well... almost. We still need to test our service somehow. And it should be an integration test. Usually, even if your application is providing JSON REST API, you still have to implement a client for it. Thrift will do it for you. We don't have to care about it. Also, it will support different protocols. 
Let's use a generated client in our test: package com.example.calculator; import org.apache.thrift.protocol.*; import org.apache.thrift.transport.THttpClient; import org.apache.thrift.transport.TTransport; import org.junit.*; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.*; import org.springframework.boot.test.IntegrationTest; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.web.WebAppConfiguration; import static org.junit.Assert.*; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = CalculatorApplication.class) @WebAppConfiguration @IntegrationTest("server.port:0") public class CalculatorApplicationTest { @Autowired protected TProtocolFactory protocolFactory; @Value("${local.server.port}") protected int port; protected TCalculatorService.Client client; @Before public void setUp() throws Exception { TTransport transport = new THttpClient("http://localhost:" + port + "/calculator/"); TProtocol protocol = protocolFactory.getProtocol(transport); client = new TCalculatorService.Client(protocol); } @Test public void testAdd() throws Exception { assertEquals(5, client.calculate(2, 3, TOperation.ADD)); } @Test public void testSubtract() throws Exception { assertEquals(3, client.calculate(5, 2, TOperation.SUBTRACT)); } @Test public void testMultiply() throws Exception { assertEquals(10, client.calculate(5, 2, TOperation.MULTIPLY)); } @Test public void testDivide() throws Exception { assertEquals(2, client.calculate(10, 5, TOperation.DIVIDE)); } @Test(expected = TDivisionByZeroException.class) public void testDivisionByZero() throws Exception { client.calculate(10, 0, TOperation.DIVIDE); } } This test will run your Spring Boot application, bind it to a random port and test it. All client-server communications will be performed in the same way real world clients are. Note how easy to use our service is from the client side. We're just calling methods and catching exceptions.
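For completeness, here is a minimal standalone client sketch (not part of the original article) that talks to the same endpoint using the generated classes; the localhost URL and port 8080 are assumptions, so adjust them to your environment: import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.THttpClient;
import org.apache.thrift.transport.TTransport;

public class CalculatorClient {
    public static void main(String[] args) throws Exception {
        // Assumed URL; the servlet is registered under /calculator/ as explained above
        TTransport transport = new THttpClient("http://localhost:8080/calculator/");
        TProtocol protocol = new TBinaryProtocol(transport); // must match the server-side protocol factory
        TCalculatorService.Client client = new TCalculatorService.Client(protocol);
        try {
            System.out.println(client.calculate(2, 3, TOperation.ADD)); // prints 5
        } catch (TDivisionByZeroException e) {
            System.err.println("Division by zero");
        } finally {
            transport.close();
        }
    }
}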
November 9, 2014
by Sergei Egorov
· 44,783 Views · 3 Likes
article thumbnail
Java regex matching hashmap
A hashmap which maintains keys as regular expressions. Any pattern matching the expression will be able to retrieve the same value. Internally it maintains two maps, one containing the regex to value, and another containing matched pattern to regex. Whenever there is a new pattern to 'get', there will be a O(n) search through the compiled regex(s) (which have been 'put' as keys) to find a match. Existing patterns will have constant time lookup through two maps. import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; import java.util.WeakHashMap; import java.util.regex.Pattern; public class RegexHashMap implements Map { private class PatternMatcher { private final String regex; private final Pattern compiled; PatternMatcher(String name) { regex = name; compiled = Pattern.compile(regex); } boolean matched(String string) { if(compiled.matcher(string).matches()) { ref.put(string, regex); return true; } return false; } } /** * Map of input to pattern */ private final Map ref; /** * Map of pattern to value */ private final Map map; /** * Compiled patterns */ private final List matchers; @Override public String toString() { return "RegexHashMap [ref=" + ref + ", map=" + map + "]"; } /** * */ public RegexHashMap() { ref = new WeakHashMap(); map = new HashMap(); matchers = new ArrayList(); } /** * Returns the value to which the specified key pattern is mapped, or null if this map contains no mapping for the key pattern */ @Override public V get(Object weakKey) { if(!ref.containsKey(weakKey)) { for(PatternMatcher matcher : matchers) { if(matcher.matched((String) weakKey)) { break; } } } if(ref.containsKey(weakKey)) { return map.get(ref.get(weakKey)); } return null; } /** * Associates a specified regular expression to a particular value */ @Override public V put(String key, V value) { V v = map.put(key, value); if (v == null) { matchers.add(new PatternMatcher(key)); } return v; } /** * Removes the regular expression key */ @Override public V remove(Object key) { V v = map.remove(key); if(v != null) { for(Iterator iter = matchers.iterator(); iter.hasNext();) { PatternMatcher matcher = iter.next(); if(matcher.regex.equals(key)) { iter.remove(); break; } } for(Iterator> iter = ref.entrySet().iterator(); iter.hasNext();) { Entry entry = iter.next(); if(entry.getValue().equals(key)) { iter.remove(); } } } return v; } /** * Set of view on the regular expression keys */ @Override public Set> entrySet() { return map.entrySet(); } @Override public void putAll(Map m) { for(Entry entry : m.entrySet()) { put(entry.getKey(), entry.getValue()); } } @Override public int size() { return map.size(); } @Override public boolean isEmpty() { return map.isEmpty(); } /** * Returns true if this map contains a mapping for the specified regular expression key. */ @Override public boolean containsKey(Object key) { return map.containsKey(key); } /** * Returns true if this map contains a mapping for the specified regular expression matched pattern. * @param key * @return */ public boolean containsKeyPattern(Object key) { return ref.containsKey(key); } @Override public boolean containsValue(Object value) { return map.containsValue(value); } @Override public void clear() { map.clear(); matchers.clear(); ref.clear(); } /** * Returns a Set view of the regular expression keys contained in this map. 
*/ @Override public Set keySet() { return map.keySet(); } /** * Returns a Set view of the regex matched patterns contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. * @return */ public Set keySetPattern() { return ref.keySet(); } @Override public Collection values() { return map.values(); } /** * Produces a map of patterns to values, based on the regex put in this map * @param patterns * @return */ public Map transform(List patterns) { for(String pattern : patterns) { get(pattern); } Map transformed = new HashMap(); for(Entry entry : ref.entrySet()) { transformed.put(entry.getKey(), map.get(entry.getValue())); } return transformed; } public static void main(String...strings) { RegexHashMap rh = new RegexHashMap(); rh.put("[o|O][s|S][e|E].?[1|2]", "This is a regex match"); rh.put("account", "This is a direct match"); System.out.println(rh); System.out.println("get:ose-1 -> "+rh.get("ose-1")); System.out.println("get:OSE2 -> "+rh.get("OSE2")); System.out.println("get:OSE112 -> "+rh.get("OSE112")); System.out.println("get:ose-2 -> "+rh.get("ose-2")); System.out.println("get:account -> "+rh.get("account")); System.out.println(rh); } }
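The listing above already contains a small main() demo, but the transform() and containsKeyPattern() helpers deserve a quick illustration as well. The sketch below is not from the original snippet; it assumes the class is generic in its value type (the type parameters appear to have been stripped during extraction), so treat the declaration as an assumption: import java.util.Arrays;
import java.util.Map;

public class RegexHashMapDemo {
    public static void main(String[] args) {
        // Assumed generic declaration: RegexHashMap<V> implements Map<String, V>
        RegexHashMap<String> routes = new RegexHashMap<String>();
        routes.put("/api/v[0-9]+/users", "users-handler");
        routes.put("/health", "health-handler");

        // First lookup of a concrete string is O(n) over the compiled patterns;
        // repeated lookups of the same string hit the internal matched-pattern cache.
        System.out.println(routes.get("/api/v2/users"));                 // users-handler
        System.out.println(routes.containsKeyPattern("/api/v2/users"));  // true after the get

        // transform() resolves a batch of concrete strings against the regex keys
        Map<String, String> resolved = routes.transform(Arrays.asList("/api/v1/users", "/health"));
        System.out.println(resolved);
    }
}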
November 7, 2014
by Sutanu Dalui
· 23,413 Views
article thumbnail
Spring Boot Based Websocket Application and Capturing HTTP Session ID
I was involved in a project recently where we needed to capture the http session id for a websocket request - the reason was to determine the number of websocket sessions utilizing the same underlying http session The way to do this is based on a sample utilizing the new spring-session module and is described here. The trick to capturing the http session id is in understanding that before a websocket connection is established between the browser and the server, there is a handshake phase negotiated over http and the session id is passed to the server during this handshake phase. Spring Websocket support provides a nice way to register a HandShakeInterceptor, which can be used to capture the http session id and set this in the sub-protocol(typically STOMP) headers. First, this is the way to capture the session id and set it to a header: public class HttpSessionIdHandshakeInterceptor implements HandshakeInterceptor { @Override public boolean beforeHandshake(ServerHttpRequest request, ServerHttpResponse response, WebSocketHandler wsHandler, Map attributes) throws Exception { if (request instanceof ServletServerHttpRequest) { ServletServerHttpRequest servletRequest = (ServletServerHttpRequest) request; HttpSession session = servletRequest.getServletRequest().getSession(false); if (session != null) { attributes.put("HTTPSESSIONID", session.getId()); } } return true; } public void afterHandshake(ServerHttpRequest request, ServerHttpResponse response, WebSocketHandler wsHandler, Exception ex) { } } And to register this HandshakeInterceptor with Spring Websocket support: @Configuration @EnableWebSocketMessageBroker public class WebSocketDefaultConfig extends AbstractWebSocketMessageBrokerConfigurer { @Override public void configureMessageBroker(MessageBrokerRegistry config) { config.enableSimpleBroker("/topic/", "/queue/"); config.setApplicationDestinationPrefixes("/app"); } @Override public void registerStompEndpoints(StompEndpointRegistry registry) { registry.addEndpoint("/chat").withSockJS().setInterceptors(httpSessionIdHandshakeInterceptor()); } @Bean public HttpSessionIdHandshakeInterceptor httpSessionIdHandshakeInterceptor() { return new HttpSessionIdHandshakeInterceptor(); } } Now that the session id is a part of the STOMP headers, this can be grabbed as a STOMP header, the following is a sample where it is being grabbed when subscriptions are registered to the server: @Component public class StompSubscribeEventListener implements ApplicationListener { private static final Logger logger = LoggerFactory.getLogger(StompSubscribeEventListener.class); @Override public void onApplicationEvent(SessionSubscribeEvent sessionSubscribeEvent) { StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(sessionSubscribeEvent.getMessage()); logger.info(headerAccessor.getSessionAttributes().get("HTTPSESSIONID").toString()); } } or it can be grabbed from a controller method handling websocket messages as a MessageHeaders parameter: @MessageMapping("/chats/{chatRoomId}") public void handleChat(@Payload ChatMessage message, @DestinationVariable("chatRoomId") String chatRoomId, MessageHeaders messageHeaders, Principal user) { logger.info(messageHeaders.toString()); this.simpMessagingTemplate.convertAndSend("/topic/chats." + chatRoomId, "[" + getTimestamp() + "]:" + user.getName() + ":" + message.getMessage()); }
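Since the stated goal was to determine how many websocket sessions share one underlying http session, here is a minimal sketch (not from the original post) of how that bookkeeping might look once the HTTPSESSIONID attribute is available; the class name and map layout are hypothetical: import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.ApplicationListener;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionSubscribeEvent;

@Component
public class WebsocketSessionCounter implements ApplicationListener<SessionSubscribeEvent> {

    // http session id -> distinct websocket (simp) session ids observed for it
    private final Map<String, Set<String>> sessions = new ConcurrentHashMap<>();

    @Override
    public void onApplicationEvent(SessionSubscribeEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        Map<String, Object> attributes = accessor.getSessionAttributes();
        if (attributes == null) {
            return;
        }
        Object httpSessionId = attributes.get("HTTPSESSIONID");
        if (httpSessionId != null) {
            sessions.computeIfAbsent(httpSessionId.toString(),
                    k -> ConcurrentHashMap.<String>newKeySet()).add(accessor.getSessionId());
        }
    }

    public int websocketSessionsFor(String httpSessionId) {
        Set<String> ids = sessions.get(httpSessionId);
        return ids == null ? 0 : ids.size();
    }
}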
November 7, 2014
by Biju Kunjummen
· 41,403 Views · 1 Like
article thumbnail
Hibernate Collections: Optimistic Locking
Introduction Hibernate provides an optimistic locking mechanism to prevent lost updates even for long-conversations. In conjunction with an entity storage, spanning over multiple user requests (extended persistence context or detached entities) Hibernate can guarantee application-level repeatable-reads. The dirty checking mechanism detects entity state changes and increments the entity version. While basic property changes are always taken into consideration, Hibernate collections are more subtle in this regard. Owned vs. Inverse Collections In relational databases, two records are associated through a foreign key reference. In this relationship, the referenced record is the parent while the referencing row (the foreign key side) is the child. A non-null foreign key may only reference an existing parent record. In the Object-oriented space this association can be represented in both directions. We can have a many-to-one reference from a child to parent and the parent can also have a one-to-many children collection. Because both sides could potentially control the database foreign key state, we must ensure that only one side is the owner of this association. Only the owningside state changes are propagated to the database. The non-owning side has been traditionally referred as the inverse side. Next I’ll describe the most common ways of modelling this association. The Unidirectional Parent-Owning-Side-Child Association Mapping Only the parent side has a @OneToMany non-inverse children collection. The child entity doesn’t reference the parent entity at all. @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) private List comments = new ArrayList (); ... } The Unidirectional Parent-Owning-Side-Child Component Association Mapping Mapping The child side doesn’t always have to be an entity and we might model it as acomponent type instead. An Embeddable object (component type) may contain both basic types and association mappings but it can never contain an @Id. The Embeddable object is persisted/removed along with its owning entity. The parent has an @ElementCollection children association. The child entity may only reference the parent through the non-queryable Hibernate specific @Parentannotation. @Entity(name = "post") public class Post { ... @ElementCollection @JoinTable(name = "post_comments", joinColumns = @JoinColumn(name = "post_id")) @OrderColumn(name = "comment_index") private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Embeddable public class Comment { ... @Parent private Post post; ... } The Bidirectional Parent-Owning-Side-Child Association Mapping The parent is the owning side so it has a @OneToMany non-inverse (without a mappedBy directive) children collection. The child entity references the parent entity through a @ManyToOne association that’s neither insertable nor updatable: @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Entity(name = "comment") public class Comment ... @ManyToOne @JoinColumn(name = "post_id", insertable = false, updatable = false) private Post post; ... 
} The Bidirectional Child-Owning-Side-Parent Association Mapping The child entity references the parent entity through a @ManyToOne association, and the parent has a mappedBy @OneToMany children collection. The parent side is the inverse side so only the @ManyToOne state changes are propagated to the database. Even if there's only one owning side, it's always a good practice to keep both sides in sync by using the add/removeChild() methods. @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "post") private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } The Unidirectional Child-Owning-Side-Parent Association Mapping The child entity references the parent through a @ManyToOne association. The parent doesn't have a @OneToMany children collection so the child entity becomes the owning side. This association mapping resembles the relational data foreign key linkage. @Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } Collection Versioning Section 3.4.2 of the JPA 2.1 specification defines optimistic locking as: The version attribute is updated by the persistence provider runtime when the object is written to the database. All non-relationship fields and properties and all relationships owned by the entity are included in version checks[35]. [35] This includes owned relationships maintained in join tables N.B. Only owning-side children collections can update the parent version. Testing Time Let's test how the parent-child association type affects the parent versioning. Because we are interested in the children collection dirty checking, the unidirectional child-owning-side-parent association is going to be skipped, as in that case the parent doesn't contain a children collection.
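The tests below read and compare the entity version, so the parent entity is assumed to carry a JPA @Version attribute; a minimal sketch of such a mapping (only the optimistic-locking relevant fields are shown, field names assumed): import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity(name = "post")
public class Post {

    @Id
    private Long id;

    private String name;

    // Hibernate increments this column whenever the entity (or an owning-side
    // collection it holds) is found dirty at flush time
    @Version
    private int version;

    public int getVersion() {
        return version;
    }
}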
Test Case The following test case is going to be used for all collection type use cases: protected void simulateConcurrentTransactions(final boolean shouldIncrementParentVersion) { final ExecutorService executorService = Executors.newSingleThreadExecutor(); doInTransaction(new TransactionCallable () { @Override public Void execute(Session session) { try { P post = postClass.newInstance(); post.setId(1L); post.setName("Hibernate training"); session.persist(post); return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } }); doInTransaction(new TransactionCallable () { @Override public Void execute(final Session session) { final P post = (P) session.get(postClass, 1L); try { executorService.submit(new Callable () { @Override public Void call() throws Exception { return doInTransaction(new TransactionCallable () { @Override public Void execute(Session _session) { try { P otherThreadPost = (P) _session.get(postClass, 1L); int loadTimeVersion = otherThreadPost.getVersion(); assertNotSame(post, otherThreadPost); assertEquals(0L, otherThreadPost.getVersion()); C comment = commentClass.newInstance(); comment.setReview("Good post!"); otherThreadPost.addComment(comment); _session.flush(); if (shouldIncrementParentVersion) { assertEquals(otherThreadPost.getVersion(), loadTimeVersion + 1); } else { assertEquals(otherThreadPost.getVersion(), loadTimeVersion); } return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } }); } }).get(); } catch (Exception e) { throw new IllegalArgumentException(e); } post.setName("Hibernate Master Class"); session.flush(); return null; } }); } The Unidirectional Parent-Owning-Side-Child Association Testing #create tables Query:{[create table comment (idbigint generated by default as identity (start with 1), review varchar(255), primary key (id))][]} Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null, comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} #insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[update post setname=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comment (post_id, comment_index, comments_id) values (?, ?, ?)][1,0,1]} #optimistic locking exception in primary transaction Query:{[update post setname=?, version=? where id=? 
and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnUnidirectionalCollectionTest$Post#1] The Unidirectional Parent-Owning-Side-Child Component Association Testing #create tables Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comments (post_id bigint not null, review varchar(255), comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comments add constraint FK_gh9apqeduab8cs0ohcq1dgukp foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_0_0_, entityopti0_.name as name2_0_0_, entityopti0_.version as version3_0_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[selectcomments0_.post_id as post_id1_0_0_, comments0_.review as review2_1_0_, comments0_.comment_index as comment_3_0_ from post_comments comments0_ where comments0_.post_id=?][1]} #insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[update post setname=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comments (post_id, comment_index, review) values (?, ?, ?)][1,0,Good post!]} #optimistic locking exception in primary transaction Query:{[update post setname=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnComponentCollectionTest$Post#1] The Bidirectional Parent-Owning-Side-Child Association Testing #create tables Query:{[create table comment (idbigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]} Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[selectcomments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.idas id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.idas id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ 
inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]} #insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]} #optimistic locking exception in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnBidirectionalParentOwningCollectionTest$Post#1] The Bidirectional Child-Owning-Side-Parent Association Testing #create tables Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} #insert comment in secondary transaction #post version is not incremented in secondary transaction Query:{[insert into comment (id, post_id, review) values (default, ?, ?)][1,Good post!]} Query:{[select count(id) from comment where post_id =?][1]} #update works in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well. Overruling Default Collection Versioning If the default owning-side collection versioning is not suitable for your use case, you can always overrule it with the Hibernate @OptimisticLock annotation (http://docs.jboss.org/hibernate/annotations/3.5/reference/en/html_single/#d0e2903). Let's overrule the default parent version update mechanism for the bidirectional parent-owning-side-child association: @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) @OptimisticLock(excluded = true) private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Entity(name = "comment") public class Comment { ... @ManyToOne @JoinColumn(name = "post_id", insertable = false, updatable = false) private Post post; ... 
} This time, the children collection changes won’t trigger a parent version update: #create tables Query:{[create table comment (idbigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]} Query:{[]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[selectcomments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.idas id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.idas id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ inner joincomment entityopti1_ on comments0_.comments_id=entityopti1_.idleft outer joinpost entityopti2_ on entityopti1_.post_id=entityopti2_.idwhere comments0_.post_id=?][1]} #insert comment in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]} #update works in primary transaction Query:{[update post setname=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} If you enjoyed this article, I bet you are going to love my book as well. Conclusion It’s very important to understand how various modeling structures impact concurrency patterns. The owning side collections changes are taken into consideration when incrementing the parent version number, and you can always bypass it using the @OptimisticLock annotation. Code available on GitHub. If you have enjoyed reading my article and you’re looking forward to getting instant email notifications of my latest posts, you just need to follow my blog.
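Not part of the original article, but worth noting: when the primary transaction fails with the StaleObjectStateException shown above, a common reaction is to reload the entity and retry the whole unit of work. A minimal sketch under that assumption (class and method names hypothetical): import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.StaleObjectStateException;

public class RetryingPostUpdater {

    private final SessionFactory sessionFactory;

    public RetryingPostUpdater(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Re-runs the whole unit of work on optimistic-locking failure, at most maxRetries times
    public void renamePost(Long postId, String newName, int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            Session session = sessionFactory.openSession();
            try {
                session.beginTransaction();
                // Reload fresh state (and fresh version) before applying the change again
                Post post = (Post) session.get(Post.class, postId);
                post.setName(newName);
                session.getTransaction().commit();
                return;
            } catch (StaleObjectStateException e) {
                session.getTransaction().rollback();
                if (attempt >= maxRetries) {
                    throw e;
                }
            } finally {
                session.close();
            }
        }
    }
}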
November 4, 2014
by Vlad Mihalcea
· 61,285 Views · 1 Like
article thumbnail
Using REST with the CQRS Pattern to Blend NoSQL & SQL Data
REST Easy with SQL/NoSQL Integration and CQRS Pattern implementation New demands are being put on IT organizations everyday to deliver agile, high-performance, integrated mobile and web applications. In the meantime, the technology landscape is getting complex everyday with the advent of new technologies like REST, NoSQL, Cloud while existing technologies like SOAP and SQL still rule everyday work. Rather than taking religious side of the debate, NoSQL can successfully co-exist with SQL in this ‘polyglot’ of data storage and formats. However, this integration also adds another layer of complexity both in architecture and implementation. This document offers a guide on how some of the relatively newer technologies like REST can help bridge the gap between SQL and NoSQL with an example of a well known pattern called CQRS. This document is organized as follows: Introduction to SQL development process NoSQL Do I have to choose between SQL and NoSQL? CQRS Pattern How to implement CQRS pattern using REST services Introduction to SQL development process Developers have been using SQL Databases for decades to build and deliver enterprise business applications. The process of creating tables, attributes,and relationships is second nature for most developers. Data architects think in terms of tables and columns and navigate relationships for data. The basic concepts of delivery and transformation takes place at the web server level which means the server developer is reading and ‘binding’ to the tables and mapping attributes to a REST response. Application development lifecycle meant changes to the database schema first, followed by the bindings, then internal schema mapping, and finally the SOAP or JSON services, and eventually the client code. This all costs the project time and money. It also means that the ‘code’ (pick your language here) and the business logic would also need to be modified to handle the changes to the model. NoSQL NoSQL is gaining supporters among many SQL shops for various reasons including: Low cost Ability to handle unstructured dataa Scalability Performance The first thing database folks notice is that there is no schema. These document style storage engines can handle huge volumes of structured, semi-structured, and unstructured data. The very nature of schema-less documents allows change to a document structure without having to go through the formal change management process (or data architect). The other major difference is that NoSQL (no-schema) also means no joins or relationships. The document itself contains the embedded information by design. So an order entry would contain the customer with all the orders and line items for each order in a single document. There are many different NoSQL vendors (popular NoSQL databases include MongoDB, Casandra) that are being used for BI and Analytics (read-only) purposes. We are also seeing many customers starting to use NoSQL for auditing, logging, and archival transactions. Do I have to choose between SQL and NoSQL? The purpose of this article is to not get into the religious debate about whether to use SQL or NoSQL. Bottom line is both have their place and are suited for certain type of data – SQL for structured data and NoSQL for unstructured data. So why not have the capability to mix and match this data depending on the application. This can be done by creating a single REST API across both SQL and NoSQL databases. Why a single REST API? 
The answer is simple – the new agile and mobile world demands this ‘mashup’ of data into a document style JSON response. CQRS (Command Query Responsibility Segmentation) Pattern There are many design patterns for delivery of high performance RESTful services but the one that stands out was described in an article written by Martin Fowler, one of the software industry veterans. He described the pattern called CQRS that is more relevant today in a ‘polyglot’ of servers, data, services, and connections. “We may want to look at the information in a different way to the record store, perhaps collapsing multiple records into one, or forming virtual records by combining information for different places. On the update side we may find validation rules that only allow certain combinations of data to be stored, or may even infer data to be stored that’s different from that we provide.” – Martin Fowler 2011 In this design pattern, the REST API requests (GET) return documents from multiple sources (e.g. mashups). In the update process, the data is subject to business logic derivations, validations, event processing, and database transactions. This data may then be pushed back into the NoSQL using asynchronous events. With the wide-spread adoption of NoSQL databases like MongoDB and schema-less, high capacity data store; most developers are challenged with providing security, business logic, event handling, and integration to other systems. MongoDB; one the popular NoSQL databases and SQL databases share many similar concepts. However the MongoDB programming language itself is very different from the SQL we all know. How to implement CQRS pattern using a RESTFul Architecture A REST server should meet certain requirements to support the CQRS pattern. The server should run on-premise or in the cloud and appears to the mobile and web developer as an HTTP endpoint. The server architecture should implement the following: Connections and Mapping necessary for SQL and NoSQL connectivity and API services needed to create and return GET, PUT, POST, and DELETE REST responses Security Business Logic Connections and Mapping There are two main approaches to creating REST Servers and APIs for SQL and NoSQL databases: Open source frameworks like Apache Tomcat, Spring/Hibernate Commercial framework like Espresso Logic Open source Frameworks Using various open source frameworks like Tomcat, Spring/Hibernate, Node.js, JDBC and MongoDB drivers, a REST server can be created, but we would still be left with the following tasks: Creation and mapping of the necessary SQL objects Create a REST server container and configurations Create Jersey/Jackson classes and annotations Create and define REST API for tables, views, and procedures Hand write validation, event and business logic Handle persistence, optimistic locking, transaction paging Adding identity management and security by roles Now we can start down the same path to connect to MongoDB and write code to connect, select, and return data in JSON and then create the REST calls to merge these two different document styles into a single RESTful endpoint. This is a lot of work for a development team to manage and control and frankly pretty boring and repetitive and is better done by a well designed framework Commercial Frameworks Many commercial frameworks may take care of this complexity without the need to do extensive programming. 
Here is an example from Espresso Logic and how it handles this complexity with a point and click interface: Running REST server in the cloud or on-premise Connections to external SQL databases Object mapping to tables, views, and procedures Automatic creation of RESTful endpoints from model Reactive business rules and rich event model Integrated role-based security and authentication services. Point-and-click document API creation for SQL and MongoDB endpoints In the example below, the editor shows an SQL (customersTransactions) joined with archived details from MongoDB (archivedTransactions). The MongoDB document for each customer may include transaction details, check images, customer service notes and other relevant account information. This new mashup becomes a single REST call that can be published to mobile and web application developer. Security Security is an important part of building and delivery of RESTful services which can be broken down into two parts; authentication and access control. Authentication Before allowing anyone access to corporate data you want to use the existing corporate identity management (some call this authentication services) to capture and validate the user. This identity management service is based on using existing corporate standards such as LDAP, Windows AD, SQL Database. Role-based Access Control Each user may be assigned one or more corporate roles and these roles are then assigned specific access privileges to each resource (e.g. READ, INSERT, UPDATE, and DELETE). Role-based access should also be able to restrict permissions to specific rows and columns of the API (e.g. only sales reps can see their own orders or a manager can see and change his department salaries but cannot change his own). This restriction should be applied regardless of how or where the API is used or called. Remember, the SQL database already provides some level of security and access which must be considered when designing and delivering new front-end services to internal and external users. Business Logic for REST When data is updated to a REST Server several things need to happen. First, the authentication and access control should determine if this is a valid request and if the user has rights to the endpoint. In addition, the server may need to de-alias REST attributes back to the actual SQL column names. In a full featured business logic server, there should be a series of events and business rules to perform various calculations, validations, and fire other events on dependent tables. Finally, the entire multi-table transaction is written back to the SQL database in a single transaction. Updates are then sent asynchronously to MongoDB as part of the commit event (after the SQL transaction has completed). Conclusion In the real-world of API services, the demand for more complex document style RESTful services is a requirement. That is, the ability to create ‘mashups’ of data from multiple tables, NoSQL collections, and other external systems is a large part of this new design pattern. In addition, the ability to alias attribute names and formats from these source fields has become critical for partners and customers systems. Using REST with the CQRS pattern to blend MongoDB and SQL seamlessly to your existing data will become a major part of your future mobile strategy. To implement these REST services, one can use open source tools and spend a lot of time or select a right commercial framework. 
This framework should support cloud or on-premise connectivity, security, API integration, as well as business logic. This will make the design and delivery of new application services more rapid and agile in the heterogeneous world of information.
November 4, 2014
by Val Huber DZone Core CORE
· 15,810 Views
article thumbnail
Spring Caching Abstraction and Google Guava Cache
Spring provides a great out of the box support for caching expensive method calls. The caching abstraction is covered in a great detail here. My objective here is to cover one of the newer cache implementations that Spring now provides with 4.0+ version of the framework - using Google Guava Cache In brief, consider a service which has a few slow methods: public class DummyBookService implements BookService { @Override public Book loadBook(String isbn) { // Slow method 1. } @Override public List loadBookByAuthor(String author) { // Slow method 2 } } With Spring Caching abstraction, repeated calls with the same parameter can be sped up by an annotation on the method along these lines - here the result of loadBook is being cached in to a "book" cache and listing of books cached into another "books" cache: public class DummyBookService implements BookService { @Override @Cacheable("book") public Book loadBook(String isbn) { // slow response time.. } @Override @Cacheable("books") public List loadBookByAuthor(String author) { // Slow listing } } Now, Caching abstraction support requires a CacheManager to be available which is responsible for managing the underlying caches to store the cached results, with the new Guava Cache support the CacheManager is along these lines: @Bean public CacheManager cacheManager() { return new GuavaCacheManager("books", "book"); } Google Guava Cache provides a rich API to be able to pre-load the cache, set eviction duration based on last access or created time, set the size of the cache etc, if the cache is to be customized then a guava CacheBuilder can be passed to the CacheManager for this customization: @Bean public CacheManager cacheManager() { GuavaCacheManager guavaCacheManager = new GuavaCacheManager(); guavaCacheManager.setCacheBuilder(CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES)); return guavaCacheManager; } This works well if all the caches have a similar configuration, what if the caches need to be configured differently - for eg. in the sample above, I may want the "book" cache to never expire but the "books" cache to have an expiration of 30 mins, then the GuavaCacheManager abstraction does not work well, instead a better solution is actually to use a SimpleCacheManager which provides a more direct way to get to the cache and can be configured this way: @Bean public CacheManager cacheManager() { SimpleCacheManager simpleCacheManager = new SimpleCacheManager(); GuavaCache cache1 = new GuavaCache("book", CacheBuilder.newBuilder().build()); GuavaCache cache2 = new GuavaCache("books", CacheBuilder.newBuilder() .expireAfterAccess(30, TimeUnit.MINUTES) .build()); simpleCacheManager.setCaches(Arrays.asList(cache1, cache2)); return simpleCacheManager; } This approach works very nicely, if required certain caches can be configured to be backed by a different caching engines itself, say a simple hashmap, some by Guava or EhCache some by distributed caches like Gemfire.
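One detail the snippets above take for granted: the caching aspect has to be switched on (typically via @EnableCaching, unless something else in your setup enables it) before @Cacheable methods are actually intercepted. A minimal configuration sketch combining it with the Guava-backed manager from above: import java.util.concurrent.TimeUnit;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.google.common.cache.CacheBuilder;

@Configuration
@EnableCaching // activates the caching aspect that intercepts @Cacheable methods
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // Same Guava-backed manager as above, with a shared 30 minute access expiry
        GuavaCacheManager guavaCacheManager = new GuavaCacheManager("books", "book");
        guavaCacheManager.setCacheBuilder(
                CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES));
        return guavaCacheManager;
    }
}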
November 3, 2014
by Biju Kunjummen
· 59,625 Views · 8 Likes
article thumbnail
ZooKeeper on Kubernetes
The last couple of weeks I've been playing around with docker and kubernetes. If you are not familiar with kubernetes let's just say for now that its an open source container cluster management implementation, which I find really really awesome. One of the first things I wanted to try out was running an Apache ZooKeeper ensemble inside kubernetes and I thought that it would be nice to share the experience. For my experiments I used Docker v. 1.3.0 and Openshift V3, which I built from source and includes Kubernetes. ZooKeeper on Docker Managing a ZooKeeper ensemble is definitely not a trivial task. You usually need to configure an odd number of servers and all of the servers need to be aware of each other. This is a PITA on its own, but it gets even more painful when you are working with something as static as docker images. The main difficulty could be expressed as: "How can you create multiple containers out of the same image and have them point to each other?" One approach would be to use docker volumes and provide the configuration externally. This would mean that you have created the configuration for each container, stored it somewhere in the docker host and then pass the configuration to each container as a volume at creation time. I've never tried that myself, I can't tell if its a good or bad practice, I can see some benefits, but I can also see that this is something I am not really excited about. It could look like this: docker run -p 2181:2181 -v /path/to/my/conf:/opt/zookeeper/conf my/zookeeper An other approach would be to pass all the required information as environment variables to the container at creation time and then create a wrapper script which will read the environment variables, modify the configuration files accordingly, launch zookeeper. This is definitely easier to use, but its not that flexible to perform other types of tuning without rebuilding the image itself. Last but not least one could combine the two approaches into one and do something like: Make it possible to provide the base configuration externally using volumes. Use env and scripting to just configure the ensemble. There are plenty of images out there that take one or the other approach. I am more fond of the environment variables approach and since I needed something that would follow some of the kubernetes conventions in terms of naming, I decided to hack an image of my own using the env variables way. Creating a custom image for ZooKeeper I will just focus on the configuration that is required for the ensemble. In order to configure a ZooKeeper ensemble, for each server one has to assign a numeric id and then add in its configuration an entry per zookeeper server, that contains the ip of the server, the peer port of the server and the election port. The server id is added in a file called myid under the dataDir. The rest of the configuration looks like: server.1=server1.example.com:2888:3888 server.2=server2.example.com:2888:3888 server.3=server3.example.com:2888:3888 ... server.current=[bind address]:[peer binding port]:[election biding port]Note that if the server id is X the server.X entry needs to contain the bind ip and ports and not the connection ip and ports. So what we actually need to pass to the container as environment variables are the following: The server id. For each server in the ensemble: The hostname or ip The peer port The election port If these are set, then the script that updates the configuration could look like: if [ ! 
-z "$SERVER_ID" ]; then echo "$SERVER_ID" > /opt/zookeeper/data/myid #Find the servers exposed in env. for i in `echo {1..15}`;do HOST=`envValue ZK_PEER_${i}_SERVICE_HOST` PEER=`envValue ZK_PEER_${i}_SERVICE_PORT` ELECTION=`envValue ZK_ELECTION_${i}_SERVICE_PORT` if [ "$SERVER_ID" = "$i" ];then echo "server.$i=0.0.0.0:2888:3888" >> conf/zoo.cfg elif [ -z "$HOST" ] || [ -z "$PEER" ] || [ -z "$ELECTION" ] ; then #if a server is not fully defined stop the loop here. break else echo "server.$i=$HOST:$PEER:$ELECTION" >> conf/zoo.cfg fi done fi For simplicity the function that read the keys and values from env are excluded. The complete image and helping scripts to launch zookeeper ensembles of variables size can be found in the fabric8io repository. ZooKeeper on Kubernetes The docker image above, can be used directly with docker, provided that you take care of the environment variables. Now I am going to describe how this image can be used with kubernetes. But first a little rambling... What I really like about using kubernetes with ZooKeeper, is that kubernetes will recreate the container, if it dies or the health check fails. For ZooKeeper this also means that if a container that hosts an ensemble server dies, it will get replaced by a new one. This guarantees that there will be constantly a quorum of ZooKeeper servers. I also like that you don't need to worry about the connection string that the clients will use, if containers come and go. You can use kubernetes services to load balance across all the available servers and you can even expose that outside of kubernetes. Creating a Kubernetes confing for ZooKeeper I'll try to explain how you can create 3 ZooKeeper Server Ensemble in Kubernetes. What we need is 3 docker containers all running ZooKeeper with the right environment variables: { "image": "fabric8/zookeeper", "name": "zookeeper-server-1", "env": [ { "name": "ZK_SERVER_ID", "value": "1" } ], "ports": [ { "name": "zookeeper-client-port", "containerPort": 2181, "protocol": "TCP" }, { "name": "zookeeper-peer-port", "containerPort": 2888, "protocol": "TCP" }, { "name": "zookeeper-election-port", "containerPort": 3888, "protocol": "TCP" } ] } The env needs to specify all the parameters discussed previously. So we need to add along with the ZK_SERVER_ID, the following: ZK_PEER_1_SERVICE_HOST ZK_PEER_1_SERVICE_PORT ZK_ELECTION_1_SERVICE_PORT ZK_PEER_2_SERVICE_HOST ZK_PEER_2_SERVICE_PORT ZK_ELECTION_2_SERVICE_PORT ZK_PEER_3_SERVICE_HOST ZK_PEER_3_SERVICE_PORT ZK_ELECTION_3_SERVICE_PORT An alternative approach could be instead of adding all these manual configuration, to expose peer and election as kubernetes services. I tend to favor the later approach as it can make things simpler when working with multiple hosts. It's also a nice exercise for learning kubernetes. So how do we configure those services? To configure them we need to know: the name of the port the kubernetes pod the provide the service The name of the port is already defined in the previous snippet. So we just need to find out how to select the pod. For this use case, it make sense to have a different pod for each zookeeper server container. So we just need to have a label for each pod, the designates that its a zookeeper server pod and also a label that designates the zookeeper server id. "labels": { "name": "zookeeper-pod", "server": 1 } Something like the above could work. Now we are ready to define the service. I will just show how we can expose the peer port of server with id 1, as a service. 
The rest can be done in a similar fashion: { "apiVersion": "v1beta1", "creationTimestamp": null, "id": "zk-peer-1", "kind": "Service", "port": 2888, "containerPort": "zookeeper-peer-port", "selector": { "name": "zookeeper-pod", "server": 1 } } The basic idea is that in the service definition, you create a selector which can be used to query/filter pods. Then you define the name of the port to expose and this is pretty much it. Just to clarify, we need a service definition just like the one above per zookeeper server container. And of course we need to do the same for the election port. Finally, we can define another kind of service, for the client connection port. This time we are not going to specify the server id in the selector, which means that all 3 servers will be selected. In this case kubernetes will load balance across all ZooKeeper servers. Since ZooKeeper provides a single system image (it doesn't matter on which server you are connected) then this is pretty handy. { "apiVersion": "v1beta1", "creationTimestamp": null, "id": "zk-client", "kind": "Service", "port": 2181, "createExternalLoadBalancer": "true", "containerPort": "zookeeper-client-port", "selector": { "name": "zookeeper-pod" } } I hope you found it useful. There is definitely room for improvement so feel free to leave comments.
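As a small addition (not in the original post): once the zk-client service is in place, a Java client only needs that one stable address instead of a member list. A minimal connection sketch, assuming the service is resolvable as "zk-client" from where the client runs and using an arbitrary 30 second session timeout: import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkClientDemo {
    public static void main(String[] args) throws Exception {
        // "zk-client" is the kubernetes service defined above; it load balances
        // across the ensemble members, so no member list is needed here
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("zk-client:2181", 30000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();
        System.out.println("children of /: " + zk.getChildren("/", false));
        zk.close();
    }
}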
November 3, 2014
by Ioannis Canellos
· 21,432 Views · 3 Likes
article thumbnail
BigList: a Scalable High-Performance List for Java
As memory gets cheaper and cheaper, our applications can keep more data readily available in main memory, or even all as in case of in-memory databases. To make real use of the growing heap memory, appropriate data structures must be used. Interesting enough, there seem to be no specialized implementations for lists - by far the most used collection. This article introduces BigList, a list designed for handling large collections where large means that all data still fit completely in the heap memory. The article will show the special requirements for handling large collections, how BigList is implemented and how it compares to other list implementations. 1. Requirements What are the special requirements we need to handle large collections efficiently? Memory: Sparing use of memory: The list should need little memory for its own implementation so memory can be used for storing application data. Specialized versions for primitives: It must be possible to store common primitives like ints in a memory saving way. Avoid copying large data blocks: If the list grows or shrinks, only a small part of the data must be copied around, as this operation becomes expensive and needs the same amount of memory again. Data sharing: copying collections is a frequent operation which should be efficiently possible even if the collection is large. An efficient implementation requires some sort of data sharing as copying all elements is per se a costly operation. Performance: Good performance for normal operations like reading, storing, adding or removing single elements. Great performance for bulk operations like adding or removing multiple elements. Predictable overhead of operations, so similar operations should need a similar amount of time without excessive worst case scenarios. If an implementation does not offer these features, some operations will not only be slow for really large collections, but will becomse just not feasible because memory or CPU usage will be too exhaustive. Introduction to BigList BigList is a member of the Brownies Collections library which also includes GapList, the fastest list implementation known. GapList is a drop-in replacement for ArrayList, LinkedList, or ArrayDequeue and offers fast access by index and fast insertion/removal at the beginning and at the end at the same time. GapList however has not been designed to cope with large collections, so adding or removing elements can make it necessary to copy a lot of elements around which will lead to performance problems. Also copying a large collection becomes an expensive operation, both in term of time and memory consumption. It will simply not be possible to make a copy of a large collections if not the same amount of memory is available a second time. And this is a common operation as you often want to return a copy of an internal list through your API which has no reference the original list. BigList addresses both problems. The first problem is solved by storing the collection elements in fixed size blocks. Add or remove operations are then implemented to use only data from one block. The copying problem is solved by maintaining a reference count on the fixed size blocks which allows to implement a copy-on-write approach. For efficient access to the fixed size blocks, they are maintained in a specialized tree structure. 2. 
BigList Details Each BigList instance stores the following information: Elements are stored in in fixed-size blocks A single block is implemented as GapList with a reference count for sharing All blocks are maintained in a tree for fast access Access information for the current block is cached for better performance The following illustration shows these details for two instances of BigList which share one block. 2.1 Use of Blocks Elements are stored in in fixed-size blocks with a default block size of 1000. Where this default may look pretty small, it is most of the time a good choice because it guarantees that write operation only need to move few elements. Read operations will profit from locality of reference by using the currently cached block to be fast. It is however possible to specify the block size for each created BigList instance. All blocks except the first are allocated with this fixed size and will not grow or shrink. The first block will grow to the specified block size to save memory for small lists. If a block has reached its maximum size and more data must be stored within, the block needs to be split up in two blocks before more elements can be stored. If elements are added to the head or tail of the list, the block will only be filled up to a threshold of 95%. This allows inserts into the block without the immediate need for split operations. To save memory, blocks are also merged. This happens automatically if two adjacent blocks are both filled less than 35% after a remove operation. 2.2 Locality of Reference For each operation on BigList, the affected block must be determined first. As predicted by locality of reference, most of the time the affected block will be the same as for the last operation. The implementation of BigList has therefore been designed to profit from locality of reference which makes common operations like iterating over a list very efficient. Instead of always traversing the block tree to determine the block needed for an operation, lower and upper index of the last used block are cached. So if the next operation happens near to the previous one, the same block can be used again without need to traverse the tree. 2.3 Reference Counting To support a copy-on-write approach, BigList stores a reference count for each fixed size blocks indicating whether this block is private or shared. Initially all lists are private having a reference count of 0, so that modification are allowed. If a list is copied, the reference count is incremented which prohibits further modifications. Before a modification then can be made, the block must be copied decrementing the block's reference count and setting the reference count of the copy to 0. The reference count of a block is then decremented by the finalizer of BigList. 3. Benchmarks To prove the excellence of BigList in time and memory consumption, we compare it with some other List implementations. And here are the nominees: Type Library Description BigList brownie-collections List optimized for storing large number of elements. Elements stored in fixed size blocks which are maintained in a tree. GapList brownie-collections Fastest list implementation known. Fast access by index and fast insertion/removal at end and at beginning. ArrayList JDK Maintains elements in a single array. Fast access by index, fast insertion/removal at end, but slow at beginning. LinkedList JDK Elements stored using a linked list. Slow access by index. Memory overhead for each element stored. 
3. Benchmarks

To prove the excellence of BigList in time and memory consumption, we compare it with some other list implementations. Here are the nominees:

  • BigList (brownies-collections): list optimized for storing a large number of elements. Elements are stored in fixed-size blocks which are maintained in a tree.
  • GapList (brownies-collections): fastest list implementation known. Fast access by index and fast insertion/removal at the end and at the beginning.
  • ArrayList (JDK): maintains elements in a single array. Fast access by index, fast insertion/removal at the end, but slow at the beginning.
  • LinkedList (JDK): elements stored in a linked list. Slow access by index. Memory overhead for each element stored.
  • TreeList (commons-collections): elements stored in a tree. No operation is really fast, but there are no very slow operations either. Memory overhead for each element stored.
  • FastTable (javolution): elements stored in a "fractal"-like data structure. Good performance and use of memory, but no bulk operations, and the collection does not shrink.

3.1 Handling Objects

In the first part of the benchmark, we compare memory consumption and performance of the different list implementations. Let's first have a look at memory consumption. The following table shows the bytes used to hold a list with 1'000'000 null elements:

          BigList     GapList     ArrayList   LinkedList   TreeList     FastTable
  32 bit  4'298'466   4'021'296   4'861'992   8'000'028    18'000'028   4'142'892
  64 bit  8'544'254   8'042'552   9'723'964   16'000'044   26'000'044   8'222'988

We can see that BigList, GapList, ArrayList, and FastTable only add a small overhead to the stored elements, whereas LinkedList needs twice the memory and TreeList even more.

Now to the performance. Here are the results of 9 benchmarks which have been run for each of the 6 candidates with JDK 8 in a 32-bit Windows environment and a list of 1'000'000 elements. The result table can be read as follows:
  • the fastest candidate for each test has a relative performance indicator of 1
  • the values for the other candidates indicate how many times slower they were, so a factor of 3 means that this implementation was 3 times slower than the best one

The different factors are colored like this:
  • factor 1: green (best)
  • factor <5: blue (good)
  • factor <25: yellow (moderate)
  • factor >25: red (poor)

If we look at the benchmark results, we can see that the performance of BigList is best for all except two benchmarks. The only moderate result is produced when getting elements in a totally random order. This could be expected, as there is no locality of reference that can be exploited, so for each access the block tree must be traversed to find the correct block. Luckily, this is a rare use case in real applications. And the benchmark "Get local" shows that performance is back to good as soon as elements next to each other are retrieved, as is the case when we iterate over a range.

3.2 Handling Primitives

In the second part of the benchmark, we want to see how big the savings are if we use a data structure specialized for storing primitives instead of storing wrapped objects. For this reason, we compare IntBigList and BigList. The following table shows the memory needed to store 1'000'000 integer values:

          BigList      IntBigList
  32 bit  16'298'454   4'534'840
  64 bit  28'544'234   4'570'432

Obviously it is easy to save a lot of memory. In a 32-bit environment, IntBigList needs just 25% of the memory, in a 64-bit environment only 14%! These figures become plausible if you recall that a simple object needs 8 bytes in a 32-bit environment, but already 16 bytes in a 64-bit environment, whereas a primitive integer value always needs only 4 bytes.

The measurable performance gain is less impressive: somewhat below 10% for simple get operations and somewhat above 10% for add and remove operations. These numbers show that the JVM is impressively fast at creating wrapper objects and boxing and unboxing primitive values. We must, however, also consider that each created object will need to be garbage collected once and therefore adds to the total load of the JVM.

4. Summary

BigList is a scalable high-performance list for storing large collections. Its design guarantees that all operations are predictable and efficient both in terms of performance and memory consumption; even copying large collections is tremendously fast. Benchmarks have proven this and shown that BigList outperforms other known list implementations. The library also offers specialized implementations for primitive types like IntBigList which save much memory and provide superior performance. BigList for handling objects and the specializations for handling primitives are part of the Brownies Collections library and can be downloaded from http://www.magicwerk.org/collections.
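Picking up the primitive specialization from section 3.2, a minimal IntBigList sketch could look as follows. The package and factory names follow the Brownies Collections conventions and should be treated as assumptions; the point is that the API mirrors the object version while storing plain ints.

import org.magicwerk.brownies.collections.primitive.IntBigList;

public class IntBigListSketch {
    public static void main(String[] args) {
        // Stores int values directly, avoiding one Integer wrapper object per element
        IntBigList values = IntBigList.create();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);
        }

        int first = values.get(0);          // no unboxing involved
        values.set(0, first + 1);
        System.out.println(values.size());  // 1000000
    }
}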
November 3, 2014
by Thomas Mauch
· 32,333 Views · 8 Likes
article thumbnail
How to Setup Custom SSLSocketFactory's TrustManager per Each URL Connection
We can see from the javadoc that javax.net.ssl.HttpsURLConnection provides a static setDefaultSSLSocketFactory() method to override the default. This allows you to supply a custom javax.net.ssl.TrustManager that may verify your own CA certs, handshake and validation, etc. But this will override the default for all "https" URLs in your JVM! So how can we override just a single https URL? Looking at javax.net.ssl.HttpsURLConnection again, we see the instance method setSSLSocketFactory(), but we can't instantiate an HttpsURLConnection object directly! It took me some digging to realize that java.net.URL is actually a factory class for its implementation! One can get an instance like this: new URL("https://localhost").openConnection(). To complete this article, I will provide a simple working example that demonstrates this.

package zemian;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class WGetText {
    public static void main(String[] args) throws Exception {
        String urlString = System.getProperty("url", "https://google.com");
        URL url = new URL(urlString);
        URLConnection urlConnection = url.openConnection();
        HttpsURLConnection httpsUrlConnection = (HttpsURLConnection) urlConnection;
        SSLSocketFactory sslSocketFactory = createSslSocketFactory();
        httpsUrlConnection.setSSLSocketFactory(sslSocketFactory);
        try (InputStream inputStream = httpsUrlConnection.getInputStream()) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
            String line = null;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }

    private static SSLSocketFactory createSslSocketFactory() throws Exception {
        TrustManager[] byPassTrustManagers = new TrustManager[] { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() {
                return new X509Certificate[0];
            }
            public void checkClientTrusted(X509Certificate[] chain, String authType) {
            }
            public void checkServerTrusted(X509Certificate[] chain, String authType) {
            }
        } };
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, byPassTrustManagers, new SecureRandom());
        return sslContext.getSocketFactory();
    }
}
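A word of caution on the example above: the TrustManager it installs accepts every certificate, which is only acceptable for local testing. If the goal is to trust your own CA for just this one connection, the factory can be built from a custom trust store instead. The following sketch uses only standard JSSE classes; the keystore path and password are placeholders.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class CustomCaSocketFactory {
    // Builds an SSLSocketFactory that trusts only the CAs contained in the given keystore
    static SSLSocketFactory fromTrustStore(String path, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext.getSocketFactory();
    }
}

The returned factory is passed to httpsUrlConnection.setSSLSocketFactory() exactly as in the example above, so the JVM-wide defaults stay untouched.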
October 31, 2014
by Zemian Deng
· 38,262 Views
article thumbnail
Spring Integration Error Handling with Router, ErrorChannel, and Transformer
This article explains how errors are handled in a Spring Integration messaging system and how to route and redirect them to a specific channel.
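As a rough illustration of the idea (this is not the article's own code): messages that fail in a Spring Integration flow are wrapped in an ErrorMessage and sent to the global errorChannel, where a router can inspect the cause and forward it to a dedicated channel. A minimal Java-config sketch, with channel names chosen purely for illustration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.Router;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessagingException;

@Configuration
@EnableIntegration
public class ErrorRoutingConfig {

    // Channels for the two kinds of failures (names are illustrative)
    @Bean
    public MessageChannel validationErrorChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel systemErrorChannel() {
        return new DirectChannel();
    }

    // The payload arriving on errorChannel is usually a MessagingException wrapping the real cause;
    // route to a channel based on that cause
    @Router(inputChannel = "errorChannel")
    public String routeError(Throwable failure) {
        Throwable cause = (failure instanceof MessagingException && failure.getCause() != null)
                ? failure.getCause() : failure;
        return (cause instanceof IllegalArgumentException)
                ? "validationErrorChannel" : "systemErrorChannel";
    }
}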
October 31, 2014
by Upender Chinthala
· 47,484 Views · 6 Likes
article thumbnail
Building a REST API with JAXB, Spring Boot and Spring Data
if someone asked you to develop a rest api on the jvm, which frameworks would you use? i was recently tasked with such a project. my client asked me to implement a rest api to ingest requests from a 3rd party. the project entailed consuming xml requests, storing the data in a database, then exposing the data to internal application with a json endpoint. finally, it would allow taking in a json request and turning it into an xml request back to the 3rd party. with the recent release of apache camel 2.14 and my success using it , i started by copying my apache camel / cxf / spring boot project and trimming it down to the bare essentials. i whipped together a simple hello world service using camel and spring mvc. i also integrated swagger into both. both implementations were pretty easy to create ( sample code ), but i decided to use spring mvc. my reasons were simple: its rest support was more mature, i knew it well, and spring mvc test makes it easy to test apis. camel's swagger support without web.xml as part of the aforementioned spike, i learned out how to configure camel's rest and swagger support using spring's javaconfig and no web.xml. i made this into a sample project and put it on github as camel-rest-swagger . this article shows how i built a rest api with java 8, spring boot/mvc, jaxb and spring data (jpa and rest components). i stumbled a few times while developing this project, but figured out how to get over all the hurdles. i hope this helps the team that's now maintaining this project (my last day was friday) and those that are trying to do something similar. xml to java with jaxb the data we needed to ingest from a 3rd party was based on the ncpdp standards. as a member, we were able to download a number of xsd files, put them in our project and generate java classes to handle the incoming/outgoing requests. i used the maven-jaxb2-plugin to generate the java classes. org.jvnet.jaxb2.maven2 maven-jaxb2-plugin 0.8.3 generate -xtostring -xequals -xhashcode -xcopyable org.jvnet.jaxb2_commons jaxb2-basics 0.6.4 src/main/resources/schemas/ncpdp the first error i ran into was about a property already being defined. [info] --- maven-jaxb2-plugin:0.8.3:generate (default) @ spring-app --- [error] error while parsing schema(s).location [ file:/users/mraible/dev/spring-app/src/main/resources/schemas/ncpdp/structures.xsd{1811,48}]. com.sun.istack.saxparseexception2; systemid: file:/users/mraible/dev/spring-app/src/main/resources/schemas/ncpdp/structures.xsd; linenumber: 1811; columnnumber: 48; property "multipletimingmodifierandtimingandduration" is already defined. use to resolve this conflict. at com.sun.tools.xjc.errorreceiver.error(errorreceiver.java:86) i was able to workaround this by upgrading to maven-jaxb2-plugin version 0.9.1. i created a controller and stubbed out a response with hard-coded data. i confirmed the incoming xml-to-java marshalling worked by testing with a sample request provided by our 3rd party customer. i started with a curl command, because it was easy to use and could be run by anyone with the file and curl installed. curl -x post -h 'accept: application/xml' -h 'content-type: application/xml' \ --data-binary @sample-request.xml http://localhost:8080/api/message -v this is when i ran into another stumbling block: the response wasn't getting marshalled back to xml correctly. after some research, i found out this was caused by the lack of @xmlrootelement annotations on my generated classes. 
i posted a question to stack overflow titled returning jaxb-generated elements from spring boot controller . after banging my head against the wall for a couple days, i figured out the solution . i created a bindings.xjb file in the same directory as my schemas. this causes jaxb to generate @xmlrootelement on classes. to add namespaces prefixes to the returned xml, i had to modify the maven-jaxb2-plugin to add a couple arguments. -extension -xnamespace-prefix and add a dependency: org.jvnet.jaxb2_commons jaxb2-namespace-prefix 1.1 then i modified bindings.xjb to include the package and prefix settings. i also moved into a global setting. i eventually had to add prefixes for all schemas and their packages. i learned how to add prefixes from the namespace-prefix plugins page . finally, i customized the code-generation process to generate joda-time's datetime instead of the default xmlgregoriancalendar . this involved a couple custom xmladapters and a couple additional lines in bindings.xjb . you can see the adapters and bindings.xjb with all necessary prefixes in this gist . nicolas fränkel's customize your jaxb bindings was a great resource for making all this work. i wrote a test to prove that the ingest api worked as desired. @runwith(springjunit4classrunner.class) @springapplicationconfiguration(classes = application.class) @webappconfiguration @dirtiescontext(classmode = dirtiescontext.classmode.after_class) public class initiaterequestcontrollertest { @inject private initiaterequestcontroller controller; private mockmvc mockmvc; @before public void setup() { mockitoannotations.initmocks(this); this.mockmvc = mockmvcbuilders.standalonesetup(controller).build(); } @test public void testgetnotallowedonmessagesapi() throws exception { mockmvc.perform(get("/api/initiate") .accept(mediatype.application_xml)) .andexpect(status().ismethodnotallowed()); } @test public void testpostpainitiationrequest() throws exception { string request = new scanner(new classpathresource("sample-request.xml").getfile()).usedelimiter("\\z").next(); mockmvc.perform(post("/api/initiate") .accept(mediatype.application_xml) .contenttype(mediatype.application_xml) .content(request)) .andexpect(status().isok()) .andexpect(content().contenttype(mediatype.application_xml)) .andexpect(xpath("/message/header/to").string("3rdparty")) .andexpect(xpath("/message/header/sendersoftware/sendersoftwaredeveloper").string("hid")) .andexpect(xpath("/message/body/status/code").string("010")); } } spring data for jpa and rest with jaxb out of the way, i turned to creating an internal api that could be used by another application. spring data was fresh in my mind after reading about it last summer. i created classes for entities i wanted to persist, using lombok's @data to reduce boilerplate. i read the accessing data with jpa guide, created a couple repositories and wrote some tests to prove they worked. i ran into an issue trying to persist joda's datetime and found jadira provided a solution. i added its usertype.core as a dependency to my pom.xml: org.jadira.usertype usertype.core 3.2.0.ga ... and annotated datetime variables accordingly. @column(name = "last_modified", nullable = false) @type(type="org.jadira.usertype.dateandtime.joda.persistentdatetime") private datetime lastmodified; with jpa working, i turned to exposing rest endpoints. i used accessing jpa data with rest as a guide and was looking at json in my browser in a matter of minutes. 
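a side note on the joda-time adapters mentioned in the jaxb section above: the article only links to them in a gist, but such an adapter typically boils down to something like the following sketch (class name and formatting choices are mine, not the article's actual code):

import javax.xml.bind.annotation.adapters.XmlAdapter;
import org.joda.time.DateTime;
import org.joda.time.format.ISODateTimeFormat;

// Marshals the xsd:dateTime lexical form to and from Joda-Time's DateTime
public class DateTimeXmlAdapter extends XmlAdapter<String, DateTime> {

    @Override
    public DateTime unmarshal(String value) {
        return value == null ? null : ISODateTimeFormat.dateTimeParser().parseDateTime(value);
    }

    @Override
    public String marshal(DateTime value) {
        return value == null ? null : ISODateTimeFormat.dateTime().print(value);
    }
}

the adapter is then wired in through bindings.xjb so that the generated classes use DateTime instead of XMLGregorianCalendar.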
i was surprised to see a "profile" service listed next to mine, and posted a question to the spring boot team. oliver gierke provided an excellent answer . swagger spring mvc's integration for swagger has greatly improved since i last wrote about it . now you can enable it with a @enableswagger annotation. below is the swaggerconfig class i used to configure swagger and read properties from application.yml . @configuration @enableswagger public class swaggerconfig implements environmentaware { public static final string default_include_pattern = "/api/.*"; private relaxedpropertyresolver propertyresolver; @override public void setenvironment(environment environment) { this.propertyresolver = new relaxedpropertyresolver(environment, "swagger."); } /** * swagger spring mvc configuration */ @bean public swaggerspringmvcplugin swaggerspringmvcplugin(springswaggerconfig springswaggerconfig) { return new swaggerspringmvcplugin(springswaggerconfig) .apiinfo(apiinfo()) .genericmodelsubstitutes(responseentity.class) .includepatterns(default_include_pattern); } /** * api info as it appears on the swagger-ui page */ private apiinfo apiinfo() { return new apiinfo( propertyresolver.getproperty("title"), propertyresolver.getproperty("description"), propertyresolver.getproperty("termsofserviceurl"), propertyresolver.getproperty("contact"), propertyresolver.getproperty("license"), propertyresolver.getproperty("licenseurl")); } } after getting swagger to work, i discovered that endpoints published with @repositoryrestresource aren't picked up by swagger. there is an open issue for spring data support in the swagger-springmvc project. liquibase integration i configured this project to use h2 in development and postgresql in production. i used spring profiles to do this and copied xml/yaml (for maven and application*.yml files) from a previously created jhipster project. next, i needed to create a database. i decided to use liquibase to create tables, rather than hibernate's schema-export. i chose liquibase over flyway based of discussions in the jhipster project . to use liquibase with spring boot is dead simple: add the following dependency to pom.xml, then place changelog files in src/main/resources/db/changelog . org.liquibase liquibase-core i started by using hibernate's schema-export and changing hibernate.ddl-auto to "create-drop" in application-dev.yml . i also commented out the liquibase-core dependency. then i setup a postgresql database and started the app with "mvn spring-boot:run -pprod". i generated the liquibase changelog from an existing schema using the following command (after downloading and installing liquibase). liquibase --driver=org.postgresql.driver --classpath="/users/mraible/.m2/repository/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar:/users/mraible/snakeyaml-1.11.jar" --changelogfile=/users/mraible/dev/spring-app/src/main/resources/db/changelog/db.changelog-02.yaml --url="jdbc:postgresql://localhost:5432/mydb" --username=user --password=pass generatechangelog i did find one bug - the generatechangelog command generates too many constraints in version 3.2.2 . i was able to fix this by manually editing the generated yaml file. 
tip: if you want to drop all tables in your database to verify liquibase creation is working in postgesql, run the following commands: psql -d mydb drop schema public cascade; create schema public; after writing minimal code for spring data and configuring liquibase to create tables/relationships, i relaxed a bit, documented how everything worked and added a loggingfilter . the loggingfilter was handy for viewing api requests and responses. @bean public filterregistrationbean loggingfilter() { loggingfilter filter = new loggingfilter(); filterregistrationbean registrationbean = new filterregistrationbean(); registrationbean.setfilter(filter); registrationbean.seturlpatterns(arrays.aslist("/api/*")); return registrationbean; } accessing api with resttemplate the final step i needed to do was figure out how to access my new and fancy api with resttemplate . at first, i thought it would be easy. then i realized that spring data produces a hal -compliant api, so its content is embedded inside an "_embedded" json key. after much trial and error, i discovered i needed to create a resttemplate with hal and joda-time awareness. @bean public resttemplate resttemplate() { objectmapper mapper = new objectmapper(); mapper.configure(deserializationfeature.fail_on_unknown_properties, false); mapper.registermodule(new jackson2halmodule()); mapper.registermodule(new jodamodule()); mappingjackson2httpmessageconverter converter = new mappingjackson2httpmessageconverter(); converter.setsupportedmediatypes(mediatype.parsemediatypes("application/hal+json")); converter.setobjectmapper(mapper); stringhttpmessageconverter stringconverter = new stringhttpmessageconverter(); stringconverter.setsupportedmediatypes(mediatype.parsemediatypes("application/xml")); list> converters = new arraylist<>(); converters.add(converter); converters.add(stringconverter); return new resttemplate(converters); } the jodamodule was provided by the following dependency: com.fasterxml.jackson.datatype jackson-datatype-joda with the configuration complete, i was able to write a messagesapiitest integration test that posts a request and retrieves it using the api. the api was secured using basic authentication, so it took me a bit to figure out how to make that work with resttemplate. willie wheeler's basic authentication with spring resttemplate was a big help. 
@runwith(springjunit4classrunner.class) @contextconfiguration(classes = integrationtestconfig.class) public class messagesapiitest { private final static log log = logfactory.getlog(messagesapiitest.class); @value("http://${app.host}/api/initiate") private string initiateapi; @value("http://${app.host}/api/messages") private string messagesapi; @value("${app.host}") private string host; @inject private resttemplate resttemplate; @before public void setup() throws exception { string request = new scanner(new classpathresource("sample-request.xml").getfile()).usedelimiter("\\z").next(); responseentity response = resttemplate.exchange(gettesturl(initiateapi), httpmethod.post, getbasicauthheaders(request), org.ncpdp.schema.transport.message.class, collections.emptymap()); assertequals(httpstatus.ok, response.getstatuscode()); } @test public void testgetmessages() { httpentity request = getbasicauthheaders(null); responseentity> result = resttemplate.exchange(gettesturl(messagesapi), httpmethod.get, request, new parameterizedtypereference>() {}); httpstatus status = result.getstatuscode(); collection messages = result.getbody().getcontent(); log.debug("messages found: " + messages.size()); assertequals(httpstatus.ok, status); for (message message : messages) { log.debug("message.id: " + message.getid()); log.debug("message.datecreated: " + message.getdatecreated()); } } private httpentity getbasicauthheaders(string body) { string plaincreds = "user:pass"; byte[] plaincredsbytes = plaincreds.getbytes(); byte[] base64credsbytes = base64.encodebase64(plaincredsbytes); string base64creds = new string(base64credsbytes); httpheaders headers = new httpheaders(); headers.add("authorization", "basic " + base64creds); headers.add("content-type", "application/xml"); if (body == null) { return new httpentity<>(headers); } else { return new httpentity<>(body, headers); } } } to get spring data to populate the message id, i created a custom restconfig class to expose it. i learned how to do this from tommy ziegler . /** * used to expose ids for resources. */ @configuration public class restconfig extends repositoryrestmvcconfiguration { @override protected void configurerepositoryrestconfiguration(repositoryrestconfiguration config) { config.exposeidsfor(message.class); config.setbaseuri("/api"); } } summary this article explains how i built a rest api using jaxb, spring boot, spring data and liquibase. it was relatively easy to build, but required some tricks to access it with spring's resttemplate. figuring out how to customize jaxb's code generation was also essential to make things work. i started developing the project with spring boot 1.1.7, but upgraded to 1.2.0.m2 after i found it supported log4j2 and configuring spring data rest's base uri in application.yml. when i handed the project off to my client last week, it was using 1.2.0.build-snapshot because of a bug when running in tomcat . this was an enjoyable project to work on. i especially liked how easy spring data makes it to expose jpa entities in an api. spring boot made things easy to configure once again and liquibase seems like a nice tool for database migrations. if someone asked me to develop a rest api on the jvm, which frameworks would i use? spring boot, spring data, jackson, joda-time, lombok and liquibase. these frameworks worked really well for me on this particular project.
October 30, 2014
by Matt Raible
· 63,436 Views
article thumbnail
Python: Converting a Date String to Timestamp
I’ve been playing around with Python over the last few days while cleaning up a data set, and one thing I wanted to do was translate date strings into a timestamp. I started with a date in this format:

date_text = "13SEP2014"

So the first step is to translate that into a Python date – the strftime section of the documentation is useful for figuring out which format code is needed:

import datetime

date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
print(date)

$ python dates.py
2014-09-13 00:00:00

The next step was to translate that to a UNIX timestamp. I thought there might be a method or property on the Date object that I could access but I couldn’t find one and so ended up using calendar to do the transformation:

import datetime
import calendar

date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
print(date)
print(calendar.timegm(date.utctimetuple()))

$ python dates.py
2014-09-13 00:00:00
1410566400

It’s not too tricky so hopefully I shall remember next time.
October 29, 2014
by Mark Needham
· 49,635 Views
