The Latest Coding Topics

A Beginner's Guide to JPA and Hibernate Cascade Types
Introduction

JPA translates entity state transitions to database DML statements. Because it’s common to operate on entity graphs, JPA allows us to propagate entity state changes from Parents to Child entities. This behavior is configured through the CascadeType mappings.

JPA vs Hibernate Cascade Types

Hibernate supports all JPA Cascade Types and some additional legacy cascading styles. The following table draws an association between JPA Cascade Types and their Hibernate native API equivalent:

| JPA EntityManager action | JPA CascadeType | Hibernate native Session action | Hibernate native CascadeType | Event Listener |
|---|---|---|---|---|
| detach(entity) | DETACH | evict(entity) | DETACH or EVICT | Default Evict Event Listener |
| merge(entity) | MERGE | merge(entity) | MERGE | Default Merge Event Listener |
| persist(entity) | PERSIST | persist(entity) | PERSIST | Default Persist Event Listener |
| refresh(entity) | REFRESH | refresh(entity) | REFRESH | Default Refresh Event Listener |
| remove(entity) | REMOVE | delete(entity) | REMOVE or DELETE | Default Delete Event Listener |
| | | saveOrUpdate(entity) | SAVE_UPDATE | Default Save Or Update Event Listener |
| | | replicate(entity, replicationMode) | REPLICATE | Default Replicate Event Listener |
| lock(entity, lockModeType) | | buildLockRequest(entity, lockOptions) | LOCK | Default Lock Event Listener |
| All the above EntityManager methods | ALL | All the above Hibernate Session methods | ALL | |

From this table we can conclude that:

- There’s no difference between calling persist, merge or refresh on the JPA EntityManager or the Hibernate Session.
- The JPA remove and detach calls are delegated to the Hibernate delete and evict native operations.
- Only Hibernate supports replicate and saveOrUpdate. While replicate is useful for some very specific scenarios (when the exact entity state needs to be mirrored between two distinct DataSources), the persist and merge combo is always a better alternative than the native saveOrUpdate operation. As a rule of thumb, you should always use persist for TRANSIENT entities and merge for DETACHED ones. The saveOrUpdate shortcomings (when passing a detached entity snapshot to a Session already managing this entity) had led to the merge operation predecessor: the now extinct saveOrUpdateCopy operation.
- The JPA lock method shares the same behavior with the Hibernate lock request method.
- The JPA CascadeType.ALL doesn’t only apply to EntityManager state change operations, but to all Hibernate CascadeTypes as well. So if you mapped your associations with CascadeType.ALL, you can still cascade Hibernate-specific events. For example, you can cascade the JPA lock operation (although it behaves as reattaching, instead of an actual lock request propagation), even if JPA doesn’t define a LOCK CascadeType.

Cascading best practices

Cascading only makes sense for Parent – Child associations (the Parent entity state transition being cascaded to its Child entities). Cascading from Child to Parent is not very useful and is usually a mapping code smell. Next, I’m going to analyse the cascading behaviour of all JPA Parent – Child associations.
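To make the persist-for-TRANSIENT / merge-for-DETACHED rule of thumb above concrete, here is a minimal sketch; it reuses the Post entity mapped in the next section and assumes a Hibernate SessionFactory is at hand, so treat it as an illustration rather than the article’s own test code:

// TRANSIENT entity: no identifier assigned yet, never attached to a persistence context
Post post = new Post();
post.setName("Hibernate Master Class");

Session firstSession = sessionFactory.openSession();
firstSession.beginTransaction();
firstSession.persist(post);                 // schedules an INSERT; post is now MANAGED
firstSession.getTransaction().commit();
firstSession.close();                       // post becomes DETACHED from here on

// DETACHED entity: it has an identifier, but no Session is tracking it any more
post.setName("Hibernate Master Class Training Material");

Session secondSession = sessionFactory.openSession();
secondSession.beginTransaction();
Post managedCopy = (Post) secondSession.merge(post);   // SELECT + UPDATE; returns a managed copy
secondSession.getTransaction().commit();
secondSession.close();

Calling persist on the detached instance instead would fail with a "detached entity passed to persist" exception, while merging a brand-new instance simply inserts a copy, which is why the rule of thumb is worth memorizing.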
One-To-One

The most common One-To-One bidirectional association looks like this:

@Entity public class Post { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String name; @OneToOne(mappedBy = "post", cascade = CascadeType.ALL, orphanRemoval = true) private PostDetails details; public Long getId() { return id; } public PostDetails getDetails() { return details; } public String getName() { return name; } public void setName(String name) { this.name = name; } public void addDetails(PostDetails details) { this.details = details; details.setPost(this); } public void removeDetails() { if (details != null) { details.setPost(null); } this.details = null; } }

@Entity public class PostDetails { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column(name = "created_on") @Temporal(TemporalType.TIMESTAMP) private Date createdOn = new Date(); private boolean visible; @OneToOne @PrimaryKeyJoinColumn private Post post; public Long getId() { return id; } public void setVisible(boolean visible) { this.visible = visible; } public void setPost(Post post) { this.post = post; } }

The Post entity plays the Parent role and the PostDetails is the Child. The bidirectional associations should always be updated on both sides, therefore the Parent side should contain the addChild and removeChild combo. These methods ensure we always synchronize both sides of the association, to avoid Object or Relational data corruption issues. In this particular case, the CascadeType.ALL and orphan removal make sense because the PostDetails life-cycle is bound to that of its Post Parent entity.

Cascading the one-to-one persist operation

The CascadeType.PERSIST comes along with the CascadeType.ALL configuration, so we only have to persist the Post entity, and the associated PostDetails entity is persisted as well:

Post post = new Post(); post.setName("Hibernate Master Class"); PostDetails details = new PostDetails(); post.addDetails(details); session.persist(post);

Generating the following output:

INSERT INTO post(id, NAME) VALUES (DEFAULT, 'Hibernate Master Class')
insert into PostDetails (id, created_on, visible) values (default, '2015-03-03 10:17:19.14', false)

Cascading the one-to-one merge operation

The CascadeType.MERGE is inherited from the CascadeType.ALL setting, so we only have to merge the Post entity and the associated PostDetails is merged as well:

Post post = newPost(); post.setName("Hibernate Master Class Training Material"); post.getDetails().setVisible(true); doInTransaction(session -> { session.merge(post); });

The merge operation generates the following output:

SELECT onetooneca0_.id AS id1_3_1_, onetooneca0_.NAME AS name2_3_1_, onetooneca1_.id AS id1_4_0_, onetooneca1_.created_on AS created_2_4_0_, onetooneca1_.visible AS visible3_4_0_ FROM post onetooneca0_ LEFT OUTER JOIN postdetails onetooneca1_ ON onetooneca0_.id = onetooneca1_.id WHERE onetooneca0_.id = 1
UPDATE postdetails SET created_on = '2015-03-03 10:20:53.874', visible = true WHERE id = 1
UPDATE post SET NAME = 'Hibernate Master Class Training Material' WHERE id = 1

Cascading the one-to-one delete operation

The CascadeType.REMOVE is also inherited from the CascadeType.ALL configuration, so the Post entity deletion triggers a PostDetails entity removal too:

Post post = newPost(); doInTransaction(session -> { session.delete(post); });

Generating the following output:

delete from PostDetails where id = 1
delete from Post where id = 1

The one-to-one delete orphan cascading operation
If a Child entity is dissociated from its Parent, the Child Foreign Key is set to NULL. If we want to have the Child row deleted as well, we have to use the orphan removal support.

doInTransaction(session -> { Post post = (Post) session.get(Post.class, 1L); post.removeDetails(); });

The orphan removal generates this output:

SELECT onetooneca0_.id AS id1_3_0_, onetooneca0_.NAME AS name2_3_0_, onetooneca1_.id AS id1_4_1_, onetooneca1_.created_on AS created_2_4_1_, onetooneca1_.visible AS visible3_4_1_ FROM post onetooneca0_ LEFT OUTER JOIN postdetails onetooneca1_ ON onetooneca0_.id = onetooneca1_.id WHERE onetooneca0_.id = 1
delete from PostDetails where id = 1

Unidirectional one-to-one association

Most often, the Parent entity is the inverse side (e.g. mappedBy), the Child controlling the association through its Foreign Key. But the cascade is not limited to bidirectional associations; we can also use it for unidirectional relationships:

@Entity public class Commit { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String comment; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name = "Branch_Merge_Commit", joinColumns = @JoinColumn( name = "commit_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn( name = "branch_merge_id", referencedColumnName = "id") ) private BranchMerge branchMerge; public Commit() { } public Commit(String comment) { this.comment = comment; } public Long getId() { return id; } public void addBranchMerge( String fromBranch, String toBranch) { this.branchMerge = new BranchMerge( fromBranch, toBranch); } public void removeBranchMerge() { this.branchMerge = null; } }

@Entity public class BranchMerge { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String fromBranch; private String toBranch; public BranchMerge() { } public BranchMerge( String fromBranch, String toBranch) { this.fromBranch = fromBranch; this.toBranch = toBranch; } public Long getId() { return id; } }

Cascading consists in propagating the Parent entity state transition to one or more Child entities, and it can be used for both unidirectional and bidirectional associations.

One-To-Many

The most common Parent – Child association consists of a one-to-many and a many-to-one relationship, where the cascade is useful for the one-to-many side only:

@Entity public class Post { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String name; @OneToMany(cascade = CascadeType.ALL, mappedBy = "post", orphanRemoval = true) private List<Comment> comments = new ArrayList<>(); public void setName(String name) { this.name = name; } public List<Comment> getComments() { return comments; } public void addComment(Comment comment) { comments.add(comment); comment.setPost(this); } public void removeComment(Comment comment) { comment.setPost(null); this.comments.remove(comment); } }

@Entity public class Comment { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne private Post post; private String review; public void setPost(Post post) { this.post = post; } public String getReview() { return review; } public void setReview(String review) { this.review = review; } }

Like in the one-to-one example, the CascadeType.ALL and orphan removal are suitable because the Comment life-cycle is bound to that of its Post Parent entity.
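Because the mapping above combines CascadeType.ALL with orphanRemoval = true, it is worth spelling out what each setting contributes before walking through the individual operations. This is a minimal sketch reusing the Post/Comment entities and the article’s doInTransaction helper, with illustrative identifiers:

// CascadeType.REMOVE (included in CascadeType.ALL): deleting the Parent deletes its Children
doInTransaction(session -> {
    Post post = (Post) session.get(Post.class, 1L);
    session.delete(post);                              // delete from Comment ..., then delete from Post ...
});

// orphanRemoval = true: removing a Child from the collection is enough to delete its row
doInTransaction(session -> {
    Post post = (Post) session.get(Post.class, 2L);
    post.removeComment(post.getComments().get(0));     // delete from Comment where id = ...
});

// Without orphanRemoval, the second snippet would only disconnect the Comment
// (setting its post_id Foreign Key to NULL), leaving the orphaned row in place.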
Cascading the one-to-many persist operation

We only have to persist the Post entity and all the associated Comment entities are persisted as well:

Post post = new Post(); post.setName("Hibernate Master Class"); Comment comment1 = new Comment(); comment1.setReview("Good post!"); Comment comment2 = new Comment(); comment2.setReview("Nice post!"); post.addComment(comment1); post.addComment(comment2); session.persist(post);

The persist operation generates the following output:

insert into Post (id, name) values (default, 'Hibernate Master Class')
insert into Comment (id, post_id, review) values (default, 1, 'Good post!')
insert into Comment (id, post_id, review) values (default, 1, 'Nice post!')

Cascading the one-to-many merge operation

Merging the Post entity is going to merge all Comment entities as well:

Post post = newPost(); post.setName("Hibernate Master Class Training Material"); post.getComments() .stream() .filter(comment -> comment.getReview().toLowerCase() .contains("nice")) .findAny() .ifPresent(comment -> comment.setReview("Keep up the good work!") ); doInTransaction(session -> { session.merge(post); });

Generating the following output:

SELECT onetomanyc0_.id AS id1_1_1_, onetomanyc0_.NAME AS name2_1_1_, comments1_.post_id AS post_id3_1_3_, comments1_.id AS id1_0_3_, comments1_.id AS id1_0_0_, comments1_.post_id AS post_id3_0_0_, comments1_.review AS review2_0_0_ FROM post onetomanyc0_ LEFT OUTER JOIN comment comments1_ ON onetomanyc0_.id = comments1_.post_id WHERE onetomanyc0_.id = 1
update Post set name = 'Hibernate Master Class Training Material' where id = 1
update Comment set post_id = 1, review='Keep up the good work!' where id = 2

Cascading the one-to-many delete operation

When the Post entity is deleted, the associated Comment entities are deleted as well:

Post post = newPost(); doInTransaction(session -> { session.delete(post); });

Generating the following output:

delete from Comment where id = 1
delete from Comment where id = 2
delete from Post where id = 1

The one-to-many delete orphan cascading operation

The orphan-removal allows us to remove the Child entity whenever it’s no longer referenced by its Parent:

newPost(); doInTransaction(session -> { Post post = (Post) session.createQuery( "select p " + "from Post p " + "join fetch p.comments " + "where p.id = :id") .setParameter("id", 1L) .uniqueResult(); post.removeComment(post.getComments().get(0)); });

The Comment is deleted, as we can see in the following output:

SELECT onetomanyc0_.id AS id1_1_0_, comments1_.id AS id1_0_1_, onetomanyc0_.NAME AS name2_1_0_, comments1_.post_id AS post_id3_0_1_, comments1_.review AS review2_0_1_, comments1_.post_id AS post_id3_1_0__, comments1_.id AS id1_0_0__ FROM post onetomanyc0_ INNER JOIN comment comments1_ ON onetomanyc0_.id = comments1_.post_id WHERE onetomanyc0_.id = 1
delete from Comment where id = 1

If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well.

Many-To-Many

The many-to-many relationship is tricky because each side of this association plays both the Parent and the Child role. Still, we can identify one side from where we’d like to propagate the entity state changes.
We shouldn’t default to CascadeType.ALL, because the CascadeType.REMOVE might end up deleting more than we’re expecting (as you’ll soon find out):

@Entity public class Author { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Column(name = "full_name", nullable = false) private String fullName; @ManyToMany(mappedBy = "authors", cascade = {CascadeType.PERSIST, CascadeType.MERGE}) private List<Book> books = new ArrayList<>(); private Author() {} public Author(String fullName) { this.fullName = fullName; } public Long getId() { return id; } public void addBook(Book book) { books.add(book); book.authors.add(this); } public void removeBook(Book book) { books.remove(book); book.authors.remove(this); } public void remove() { for(Book book : new ArrayList<>(books)) { removeBook(book); } } }

@Entity public class Book { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Column(name = "title", nullable = false) private String title; @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE}) @JoinTable(name = "Book_Author", joinColumns = { @JoinColumn( name = "book_id", referencedColumnName = "id" ) }, inverseJoinColumns = { @JoinColumn( name = "author_id", referencedColumnName = "id" ) } ) private List<Author> authors = new ArrayList<>(); private Book() {} public Book(String title) { this.title = title; } }

Cascading the many-to-many persist operation

Persisting the Author entities will persist the Books as well:

Author _John_Smith = new Author("John Smith"); Author _Michelle_Diangello = new Author("Michelle Diangello"); Author _Mark_Armstrong = new Author("Mark Armstrong"); Book _Day_Dreaming = new Book("Day Dreaming"); Book _Day_Dreaming_2nd = new Book("Day Dreaming, Second Edition"); _John_Smith.addBook(_Day_Dreaming); _Michelle_Diangello.addBook(_Day_Dreaming); _John_Smith.addBook(_Day_Dreaming_2nd); _Michelle_Diangello.addBook(_Day_Dreaming_2nd); _Mark_Armstrong.addBook(_Day_Dreaming_2nd); session.persist(_John_Smith); session.persist(_Michelle_Diangello); session.persist(_Mark_Armstrong);

The Book and the Book_Author rows are inserted along with the Authors:

insert into Author (id, full_name) values (default, 'John Smith')
insert into Book (id, title) values (default, 'Day Dreaming')
insert into Author (id, full_name) values (default, 'Michelle Diangello')
insert into Book (id, title) values (default, 'Day Dreaming, Second Edition')
insert into Author (id, full_name) values (default, 'Mark Armstrong')
insert into Book_Author (book_id, author_id) values (1, 1)
insert into Book_Author (book_id, author_id) values (1, 2)
insert into Book_Author (book_id, author_id) values (2, 1)
insert into Book_Author (book_id, author_id) values (2, 2)
insert into Book_Author (book_id, author_id) values (3, 1)

Dissociating one side of the many-to-many association

To delete an Author, we need to dissociate all Book_Author relations belonging to the removable entity:

doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); _Mark_Armstrong.remove(); session.delete(_Mark_Armstrong); });

This use case generates the following output:

SELECT manytomany0_.id AS id1_0_0_, manytomany2_.id AS id1_1_1_, manytomany0_.full_name AS full_nam2_0_0_, manytomany2_.title AS title2_1_1_, books1_.author_id AS author_i2_0_0__, books1_.book_id AS book_id1_2_0__ FROM author manytomany0_ INNER JOIN book_author books1_ ON manytomany0_.id = books1_.author_id INNER JOIN book manytomany2_ ON books1_.book_id = manytomany2_.id WHERE manytomany0_.full_name = 'Mark Armstrong'
SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 2
delete from Book_Author where book_id = 2
insert into Book_Author (book_id, author_id) values (2, 1)
insert into Book_Author (book_id, author_id) values (2, 2)
delete from Author where id = 3

The many-to-many association generates way too many redundant SQL statements and often, they are very difficult to tune. Next, I’m going to demonstrate the many-to-many CascadeType.REMOVE hidden dangers.

The many-to-many CascadeType.REMOVE gotchas

The many-to-many CascadeType.ALL is another code smell that I often bump into while reviewing code. The CascadeType.REMOVE is automatically inherited when using CascadeType.ALL, but the entity removal is not only applied to the link table, but to the other side of the association as well.

Let’s change the Author entity books many-to-many association to use the CascadeType.ALL instead:

@ManyToMany(mappedBy = "authors", cascade = CascadeType.ALL) private List<Book> books = new ArrayList<>();

When deleting one Author:

doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); session.delete(_Mark_Armstrong); Author _John_Smith = getByName(session, "John Smith"); assertEquals(1, _John_Smith.books.size()); });

All books belonging to the deleted Author are getting deleted, even if other Authors are still associated with the deleted Books:

SELECT manytomany0_.id AS id1_0_, manytomany0_.full_name AS full_nam2_0_ FROM author manytomany0_ WHERE manytomany0_.full_name = 'Mark Armstrong'
SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 3
delete from Book_Author where book_id=2
delete from Book where id=2
delete from Author where id=3

Most often, this behavior doesn’t match the business logic expectations, only being discovered upon the first entity removal. We can push this issue even further, if we set the CascadeType.ALL to the Book entity side as well:

@ManyToMany(cascade = CascadeType.ALL) @JoinTable(name = "Book_Author", joinColumns = { @JoinColumn( name = "book_id", referencedColumnName = "id" ) }, inverseJoinColumns = { @JoinColumn( name = "author_id", referencedColumnName = "id" ) } )

This time, not only the Books are being deleted, but Authors are deleted as well:

doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); session.delete(_Mark_Armstrong); Author _John_Smith = getByName(session, "John Smith"); assertNull(_John_Smith); });

The Author removal triggers the deletion of all associated Books, which further triggers the removal of all associated Authors. This is a very dangerous operation, resulting in a massive entity deletion that’s rarely the expected behavior.

If you enjoyed this article, I bet you are going to love my book as well.
SELECT manytomany0_.id AS id1_0_, manytomany0_.full_name AS full_nam2_0_ FROM author manytomany0_ WHERE manytomany0_.full_name = 'Mark Armstrong'
SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 3
SELECT authors0_.book_id AS book_id1_1_0_, authors0_.author_id AS author_i2_2_0_, manytomany1_.id AS id1_0_1_, manytomany1_.full_name AS full_nam2_0_1_ FROM book_author authors0_ INNER JOIN author manytomany1_ ON authors0_.author_id = manytomany1_.id WHERE authors0_.book_id = 2
SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 1
SELECT authors0_.book_id AS book_id1_1_0_, authors0_.author_id AS author_i2_2_0_, manytomany1_.id AS id1_0_1_, manytomany1_.full_name AS full_nam2_0_1_ FROM book_author authors0_ INNER JOIN author manytomany1_ ON authors0_.author_id = manytomany1_.id WHERE authors0_.book_id = 1
SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 2
delete from Book_Author where book_id=2
delete from Book_Author where book_id=1
delete from Author where id=2
delete from Book where id=1
delete from Author where id=1
delete from Book where id=2
delete from Author where id=3

This use case is wrong in so many ways. There is a plethora of unnecessary SELECT statements and eventually we end up deleting all Authors and all their Books. That’s why CascadeType.ALL should raise your eyebrow whenever you spot it on a many-to-many association.

When it comes to Hibernate mappings, you should always strive for simplicity. The Hibernate documentation confirms this assumption as well:

Practical test cases for real many-to-many associations are rare. Most of the time you need additional information stored in the “link table”. In this case, it is much better to use two one-to-many associations to an intermediate link class. In fact, most associations are one-to-many and many-to-one. For this reason, you should proceed cautiously when using any other association style.

Conclusion

Cascading is a handy ORM feature, but it’s not free of issues. You should only cascade from Parent entities to Children and not the other way around. You should always use only the cascade operations that are demanded by your business logic requirements, and not turn CascadeType.ALL into a default Parent-Child association entity state propagation configuration.

Code available on GitHub.
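A minimal sketch of the intermediate link class recommended by the documentation quote above, replacing the @ManyToMany mapping with two many-to-one sides; the BookAuthor name and the extra column are illustrative rather than part of the original example:

@Entity
public class BookAuthor {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Book book;

    @ManyToOne(fetch = FetchType.LAZY)
    private Author author;

    // the link table can now carry extra information
    @Column(name = "author_position")
    private int authorPosition;
}

Book and Author would then each map a @OneToMany(mappedBy = "book") / @OneToMany(mappedBy = "author") collection of BookAuthor entries, and any cascading can be confined to those two one-to-many associations.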
March 13, 2015
by Vlad Mihalcea
· 96,794 Views · 8 Likes
How to Test a REST API With JUnit
RESTEasy (and Jersey as well) contain a minimal web server within their libraries, which enables their users to spin up a tiny embedded server inside a JUnit test.
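As a rough illustration of the embedded-server idea (not the RESTEasy/Jersey API itself), a JUnit test can start the JDK’s built-in com.sun.net.httpserver server, hit it over HTTP and assert on the response; the endpoint, payload and JUnit 4 setup below are purely illustrative:

import com.sun.net.httpserver.HttpServer;
import org.junit.Test;

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;

import static org.junit.Assert.assertEquals;

public class EmbeddedServerRestTest {

    @Test
    public void getReturnsGreeting() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
        server.createContext("/greeting", exchange -> {
            byte[] body = "hello".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            try (InputStream in = new URL("http://localhost:" + port + "/greeting").openStream()) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                int b;
                while ((b = in.read()) != -1) {
                    out.write(b);
                }
                assertEquals("hello", out.toString("UTF-8"));
            }
        } finally {
            server.stop(0);   // always shut the server down, even if the assertion fails
        }
    }
}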
March 13, 2015
by Mark Paluch
· 311,094 Views · 6 Likes
Java 8 Stream to Rx-Java Observable
I was recently looking at a way to convert a Java 8 Stream to an Rx-Java Observable.

There is one api in Observable that appears to do this:

public static final <T> Observable<T> from(java.lang.Iterable<? extends T> iterable)

So now the question is how do we transform a Stream to an Iterable. Stream does not implement the Iterable interface, and there are good reasons for this. So to return an Iterable from a Stream, you can do the following:

Iterable iterable = new Iterable() { @Override public Iterator iterator() { return aStream.iterator(); } }; Observable.from(iterable);

Since Iterable is a Java 8 functional interface, this can be simplified to the following using Java 8 Lambda expressions:

Observable.from(aStream::iterator);

At first look it does appear cryptic; however, if it is seen as a way to simplify the expanded form of Iterable, then it slowly starts to make sense.

Reference: This is entirely based on what I read on this Stackoverflow question.
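Putting the pieces together, a complete example might look like the sketch below; it assumes RxJava 1.x (the rx.Observable API referred to above) is on the classpath, and note that the resulting Iterable can only be consumed once because the underlying Stream is single-use:

import java.util.stream.Stream;

import rx.Observable;

public class StreamToObservable {

    public static void main(String[] args) {
        Stream<String> stream = Stream.of("one", "two", "three");

        // Stream::iterator matches Iterable's single abstract method
        Iterable<String> iterable = stream::iterator;

        Observable<String> observable = Observable.from(iterable);
        observable.subscribe(System.out::println);   // prints one, two, three
    }
}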
March 12, 2015
by Biju Kunjummen
· 12,472 Views · 2 Likes
Java Mapper and Model Testing Using eXpectamundo
As a long time Java application developer working in variety of corporate environments one of the common activities I have to perform is to write mappings to translate one Java model object into another. Regardless of the technology or library I use to write the mapper, the same question comes up. What is the best way to unit test it? I've been through various approaches, all with a variety of pros and cons related to the amount of time it takes to write what is essentially a pretty simple test. The tendency (I hate to admit) is to skimp on testing all fields and focus on what I deem to be the key fields in order to concentrate on, dare I say it, more interesting areas of the codebase. As any coder knows, this is the road to bugs and the time spent writing the test is repaid many times over in reduced debugging later. Enter eXpectamundo eXpectamundo is an open source Java library hosted on github that takes a new approach to testing model objects. It allows the Java developer to write a prototype object which has been set up with expectations. This prototype can then be used to test the actual output in a unit test. The snippet below illustrates the setup of the prototype. ... User expected = prototype(User.class); expect(expected.getCreateTs()).isWithin(1, TimeUnit.SECONDS, Moments.today()); expect(expected.getFirstName()).isEqualTo("John"); expect(expected.getUserId()).isNull(); expect(expected.getDateOfBirth()).isComparableTo(AUG(9, 1975)); expectThat(actual).matches(expected); .. For a complete example lets take a simple Data Transfer Object (DTO) which transfers the definition of a new user from a UI. package org.exparity.expectamundo.sample.mapper; import java.util.Date; public class UserDTO { private String username, firstName, surname; private Date dateOfBirth; public UserDTO(String username, String firstName, String surname, Date dateOfBirth) { this.username = username; this.firstName = firstName; this.surname = surname; this.dateOfBirth = dateOfBirth; } public String getUsername() { return username; } public String getFirstName() { return firstName; } public String getSurname() { return surname; } public Date getDateOfBirth() { return dateOfBirth; } } This DTO needs to mapped into the domain model User object which can then be manipulated, stored, etc by the service layer. The domain User object is defined as below: package org.exparity.expectamundo.sample.mapper; import java.util.Date; public class User { private Integer userId; private Date createTs = new Date(); private String username, firstName, surname; private Date dateOfBirth; public User(String username, String firstName, String surname, final Date dateOfBirth) { this.username = username; this.firstName = firstName; this.surname = surname; this.dateOfBirth = dateOfBirth; } public Integer getUserId() { return userId; } public Date getCreateTs() { return createTs; } public String getUsername() { return username; } public String getFirstName() { return firstName; } public String getSurname() { return surname; } public Date getDateOfBirth() { return dateOfBirth; } } The code for the mapper is simple so we'll use a simple hand coded mapping layer however I've introduced a bug into the mapper which we'll detect later with our unit test. 
package org.exparity.expectamundo.sample.mapper; public class UserDTOToUserMapper { public User map(final UserDTO userDTO) { return new User(userDTO.getUsername(), userDTO.getSurname(), userDTO.getFirstName(), userDTO.getDateOfBirth()); } } We then write a unit test for the mapper using eXpectamundo to test the expectation. package org.exparity.expectamundo.sample.mapper; import java.util.concurrent.TimeUnit; import org.junit.Test; import static org.exparity.dates.en.FluentDate.AUG; import static org.exparity.expectamundo.Expectamundo.*; import static org.exparity.hamcrest.date.Moments.now; public class UserDTOToUserMapperTest { @Test public void canMapUserDTOToUser() { UserDTO dto = new UserDTO("JohnSmith", "John", "Smith", AUG(9, 1975)); User actual = new UserDTOToUserMapper().map(dto); User expected = prototype(User.class); expect(expected.getCreateTs()).isWithin(1, TimeUnit.SECONDS, now()); expect(expected.getFirstName()).isEqualTo("John"); expect(expected.getSurname()).isEqualTo("Smith"); expect(expected.getUsername()).isEqualTo("JohnSmith"); expect(expected.getUserId()).isNull(); expect(expected.getDateOfBirth()).isSameDay(AUG(9, 1975)); expectThat(actual).matches(expected); } } The test shows how simple equality tests can be performed and also introduced some of the specialised tests which can be performed, such as testing for null, or testing the bounds of the create timestamp and performing a comparison check on the dateOfBirth property. Running the unit test reports the failure in the mapper where the firstname and surname properties have been transposed by the mapper. java.lang.AssertionError: Expected a User containing properties : getCreateTs() is expected within 1 seconds of Sun Jan 18 13:00:33 GMT 2015 getFirstName() is equal to John getSurname() is equal to Smith getUsername() is equal to JohnSmith getUserId() is null getDateOfBirth() is comparable to Sat Aug 09 00:00:00 BST 1975 But actual is a User containing properties : getFirstName() is Smith getSurname() is John A simple fix to the mapper resolves the issue: package org.exparity.expectamundo.sample.mapper; public class UserDTOToUserMapper { public User map(final UserDTO userDTO) { return new User(userDTO.getUsername(),userDTO.getFirstName(), userDTO.getSurname(), userDTO.getDateOfBirth()); } } But I can do this with hamcrest! 
The hamcrest equivalent to this test would follow one of two patterns; a custom implementation of org.hamcrest.Matcher for matching User objects, or a set of inline assertions as per the following example: package org.exparity.expectamundo.sample.mapper; import java.util.concurrent.TimeUnit; import org.junit.Test; import static org.exparity.dates.en.FluentDate.AUG; import static org.exparity.hamcrest.date.DateMatchers.within; import static org.exparity.hamcrest.date.Moments.now; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.*; public class UserDTOToUserMapperHamcrestTest { @Test public void canMapUserDTOToUser() { UserDTO dto = new UserDTO("JohnSmith", "John", "Smith", AUG(9, 1975)); User actual = new UserDTOToUserMapper().map(dto); assertThat(actual.getCreateTs(), within(1, TimeUnit.SECONDS, now())); assertThat(actual.getFirstName(), equalTo("John")); assertThat(actual.getSurname(), equalTo("Smith")); assertThat(actual.getUsername(), equalTo("JohnSmith")); assertThat(actual.getUserId(), nullValue()); assertThat(actual.getDateOfBirth(), comparesEqualTo(AUG(9, 1975))); } } In this example the only difference eXpectamundo offers over hamcrest is a different way of reporting mismatches. eXpectamundo will report all differences between the expected vs the actual whereas the hamcrest test will fail on the first difference. An improvement, but not really a reason to consider alternatives. Where the approach eXpectomundo offers starts to differentiate itself is when testing more complex object collections and graphs. Collection testing with eXpectamundo If we move our code forward and we create a repository to allow us to store and retrieve User instances. For the sake of simplicity I've used a basic HashMap backed repository. 
The code for the repository is as follows: package org.exparity.expectamundo.sample.mapper; import java.util.*; public class UserRepository { private Map userMap = new HashMap<>(); public List getAll() { return new ArrayList<>(userMap.values()); } public void addUser(final User user) { this.userMap.put(user.getUsername(), user); } public User getUserByUsername(final String username) { return userMap.get(username); } } We then write a unit test to confirm the behaviour of repository package org.exparity.expectamundo.sample.mapper; import java.util.Date; import java.util.concurrent.TimeUnit; import org.junit.Test; import static org.exparity.dates.en.FluentDate.AUG; import static org.exparity.expectamundo.Expectamundo.*; public class UserRepositoryTest { private static String FIRST_NAME = "John"; private static String SURNAME = "Smith"; private static String USERNAME = "JohnSmith"; private static Date DATE_OF_BIRTH = AUG(9, 1975); private static User EXPECTED_USER; static { EXPECTED_USER = prototype(User.class); expect(EXPECTED_USER.getCreateTs()).isWithin(1, TimeUnit.SECONDS, new Date()); expect(EXPECTED_USER.getFirstName()).isEqualTo(FIRST_NAME); expect(EXPECTED_USER.getSurname()).isEqualTo(SURNAME); expect(EXPECTED_USER.getUsername()).isEqualTo(USERNAME); expect(EXPECTED_USER.getUserId()).isNull(); expect(EXPECTED_USER.getDateOfBirth()).isComparableTo(DATE_OF_BIRTH); } @Test public void canGetAll() { User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH); UserRepository repos = new UserRepository(); repos.addUser(user); expectThat(repos.getAll()).contains(EXPECTED_USER); } @Test public void canGetByUsername() { User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH); UserRepository repos = new UserRepository(); repos.addUser(user); expectThat(repos.getUserByUsername(USERNAME)).matches(EXPECTED_USER); } } The test shows how the prototype, once constructed, can be used to perform a deep verification of an object and, if desired, can be re-used in multiple tests. The equivalent matcher in hamcrest is to write a custom matcher for the User object, or as below with flat objects using a multi matcher. (Note there are a number of ways to write the matcher, the one below I felt was the most terse example). 
package org.exparity.expectamundo.sample.mapper; import java.util.Date; import java.util.concurrent.TimeUnit; import org.hamcrest.*; import org.junit.Test; import static org.exparity.dates.en.FluentDate.AUG; import static org.exparity.hamcrest.BeanMatchers.hasProperty; import static org.exparity.hamcrest.date.DateMatchers.*; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.*; public class UserRepositoryHamcrestTest { private static String FIRST_NAME = "John"; private static String SURNAME = "Smith"; private static String USERNAME = "JohnSmith"; private static Date DATE_OF_BIRTH = AUG(9, 1975); private static final Matcher EXPECTED_USER = Matchers.allOf( hasProperty("CreateTs", within(1, TimeUnit.SECONDS, new Date())), hasProperty("FirstName", equalTo(FIRST_NAME)), hasProperty("Surname", equalTo(SURNAME)), hasProperty("Username", equalTo(USERNAME)), hasProperty("UserId", nullValue()), hasProperty("DateOfBirth", sameDay(DATE_OF_BIRTH))); @Test public void canGetAll() { User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH); UserRepository repos = new UserRepository(); repos.addUser(user); assertThat(repos.getAll(), hasItem(EXPECTED_USER)); } @Test public void canGetByUsername() { User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH); UserRepository repos = new UserRepository(); repos.addUser(user); assertThat(repos.getUserByUsername(USERNAME), is(EXPECTED_USER)); } } In comparison this hamcrest-based test matches the eXpectamundo test in compactness but not in type-safety. A type-safe matcher can be created which checks each property individual which would make considerably more code for no benefit over the eXpectamundo equivalent. The error reporting during failures is also clear and intuitive for the eXpectamundo test, less so for the hamcrest-equivalent. (Again an equivalent descriptive test can be written using hamcrest but will require much more code). An example of the error reporting is below where the surname is returned in place of the firstname: java.lang.AssertionError: Expected a list containing a User with properties: getCreateTs() is a expected within 1 seconds of Fri Mar 06 17:29:52 GMT 2015 getFirstName() is equal to John getSurname() is equal to Smith getUsername() is equal to JohnSmith getUserId() is is null getDateOfBirth() is is comparable to Sat Aug 09 00:00:00 BST 1975 but actual list contains: User containing properties getFirstName() is Smith Summary In summary eXpectamundo offers a new approach to perform verification of models during testing. It provides a type-safe interface to set expectations making creation of deep model tests, especially in an IDE with auto-complete, particularly simple. Failures are also reported with a clear to understand error trace. Full details of eXpectamundo and the other expectations and features it supports are available on the eXpectamundo page on github. The example code is also available on github. Try it out To try eXpectamundo out for yourself include the dependency in your maven pom or other dependency manager org.exparity expectamundo 0.9.15 test
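Expressed as standard Maven coordinates (assuming groupId org.exparity, artifactId expectamundo, version 0.9.15 and test scope, as listed above), the dependency declaration would be:

<dependency>
    <groupId>org.exparity</groupId>
    <artifactId>expectamundo</artifactId>
    <version>0.9.15</version>
    <scope>test</scope>
</dependency>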
March 12, 2015
by Stewart Bissett
· 7,842 Views
Using Java 8 Lambda Expressions in Java 7 or Older
I think nobody denies the usefulness of Lambda expressions, introduced by Java 8. However, many projects are stuck with Java 7 or even older versions. Upgrading can be time consuming and costly. If third party components are incompatible with Java 8, upgrading might not be possible at all. Besides that, the whole Android platform is stuck on Java 6 and 7.

Nevertheless, there is still hope for Lambda expressions! Retrolambda provides a backport of Lambda expressions for Java 5, 6 and 7. From the Retrolambda documentation:

Retrolambda lets you run Java 8 code with lambda expressions and method references on Java 7 or lower. It does this by transforming your Java 8 compiled bytecode so that it can run on a Java 7 runtime. After the transformation they are just a bunch of normal .class files, without any additional runtime dependencies.

To get Retrolambda running, you can use the Maven or Gradle plugin. If you want to use Lambda expressions on Android, you only have to add the following lines to your gradle build files:

/build.gradle:

buildscript {
    dependencies {
        classpath 'me.tatarka:gradle-retrolambda:2.4.0'
    }
}

/app/build.gradle:

apply plugin: 'com.android.application'

// Apply retro lambda plugin after the Android plugin
apply plugin: 'retrolambda'

android {
    compileOptions {
        // change compatibility to Java 8 to get Java 8 IDE support
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
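As a small illustration of what gets backported, the following class uses a lambda against a pre-Java 8 style single-method interface. Note that Retrolambda backports the language feature (lambdas and method references), not the Java 8 libraries, so on Java 7 you target your own or existing single-method interfaces instead of java.util.function; the names below are illustrative:

public class LambdaOnJava7 {

    // A plain single-method interface: lambdas can target it once Retrolambda
    // has rewritten the compiled bytecode for a Java 7 (or older) runtime.
    interface StringTransformer {
        String transform(String input);
    }

    public static void main(String[] args) {
        StringTransformer upper = s -> s.toUpperCase();
        Runnable greeting = () -> System.out.println(upper.transform("hello from a backported lambda"));
        greeting.run();
    }
}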
March 11, 2015
by Michael Scharhag
· 10,223 Views
Minor GC vs Major GC vs Full GC
The post expects the reader to be familiar with generic garbage collection principles built into the JVM.
March 10, 2015
by Nikita Salnikov-Tarnovski
· 51,373 Views · 6 Likes
Using Jenkins as a Reverse Proxy for IIS
Jenkins is one of the most popular build servers. It runs on a wide variety of platforms (Windows, Linux, Mac OS X) and can build software for most programming languages (Java, C#, C++, …). And best of all, it is fully open source and free to use.

By default Jenkins runs on port 8080, which can be troublesome as this is not the standard port 80 used by most web applications. But running on port 80 is in most cases not possible, as the web server is already using this port. Luckily IIS has a neat feature that allows it to act as a reverse proxy. The reverse proxy mode allows IIS to forward traffic to another web server (Jenkins in this example) and send the responses back through IIS. This allows us to assign a regular DNS address to Jenkins and use the standard HTTP port 80. In this guide, I will explain how you can set this up.

What is required? You need an installation of IIS 7 or higher and you need to install the additional modules “URL Rewrite” and “Application Request Routing”. The easiest way to install these modules is through the Microsoft Web Platform Installer.

Configuring IIS: Once the two necessary modules are installed, you have to create a new website in IIS. In my example I bind this website to the DNS alias “Jenkins.test.intranet”. You can bind this of course to the DNS of your choice (or to no specific DNS entry). Next you must copy the following web.config to the root of the newly created website (a representative example is sketched below). This rule forwards all the traffic to http://localhost:8080/, the address on which Jenkins is running. It is also possible to configure this through the GUI with the URL Rewrite dialog boxes. If you are not forwarding to a localhost address, you need to go into the dialogs of Application Request Routing and check the “Enable proxy” property.
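A typical IIS URL Rewrite reverse-proxy rule of the shape described above looks roughly like this; the rule name and pattern are illustrative, and the author’s exact web.config may differ:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward every request to the Jenkins instance listening on port 8080 -->
        <rule name="ReverseProxyToJenkins" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:8080/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>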
March 9, 2015
by Pieter De Rycke
· 10,810 Views
ExecutorService vs ExecutorCompletionService in Java
Suppose we have list of four tasks: Task A, Task B, Task C and Task D which perform some complex computation and result into an integer value. These tasks may take random time depending upon various parameters. We can submit these tasks to executor as: ExecutorService executorService = Executors.newFixedThreadPool(4); List futures = new ArrayList>(); futures.add(executorService.submit(A)); futures.add(executorService.submit(B)); futures.add(executorService.submit(C)); futures.add(executorService.submit(D)); Then we can iterate over the list to get the computed result of each future: for (Future future:futures) { Integer result = future.get(); // rest of the code here. } Now the similar functionality can also be achieved using ExecutorCompletionService as: ExecutorService executorService = Executors.newFixedThreadPool(4); CompletionService executorCompletionService= new ExecutorCompletionService<>(executorService ); Then again we can submit the tasks and get the result like: List futures = new ArrayList>(); futures.add(executorCompletionService.submit(A)); futures.add(executorCompletionService.submit(B)); futures.add(executorCompletionService.submit(C)); futures.add(executorCompletionService.submit(D)); for (int i=0; i> solvers) throws InterruptedException { CompletionService ecs = new ExecutorCompletionService(e); int n = solvers.size(); List> futures = new ArrayList>(n); Result result = null; try { for (Callable s : solvers) futures.add(ecs.submit(s)); for (int i = 0; i < n; ++i) { try { Result r = ecs.take().get(); if (r != null) { result = r; break; } } catch(ExecutionException ignore) {} } } finally { for (Future f : futures) f.cancel(true); } if (result != null) use(result); } In the above example the moment we get the result we break out of the loop and cancel out all the other futures. One important thing to note is that the implementation ofExecutorCompletionService contains a queue of results. We need to remember the number of tasks we have added to the service and then should use take or poll to drain the queue otherwise a memory leak will occur. Some people use the Future returned by submit to process results and this is NOT correct usage. There is one interesting solution provided by Dr. Heinz M, Kabutz here. Peeking into ExecutorCompletionService When we look inside the code for this we observe that it makes use of Executor,AbstractExecutorService (default implementations of ExecutorService execution methods) and BlockingQueue (actual instance is of class LinkedBlockingQueue>). public class ExecutorCompletionService implements CompletionService { private final Executor executor; private final AbstractExecutorService aes; private final BlockingQueue> completionQueue; // Remaining code.. } Another important thing to observe is class QueueingFuture which extends FutureTaskand in the method done() the result is pushed to queue. private class QueueingFuture extends FutureTask { QueueingFuture(RunnableFuture task) { super(task, null); this.task = task; } protected void done() { completionQueue.add(task); } private final Future task; } For the curios ones, class FutureTask is base implementation of Future interface with methods to start and cancel a computation, query to see if the computation is complete, and retrieve the result of the computation. 
And the constructor takes a RunnableFuture as a parameter, which is again an interface that extends the Future interface and adds only one method, run:

public interface RunnableFuture<V> extends Runnable, Future<V> {
    /**
     * Sets this Future to the result of its computation
     * unless it has been cancelled.
     */
    void run();
}

That is all for now. Enjoy!!
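As a compact, self-contained recap of the pattern described above (the thread-pool size and task bodies are illustrative):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionServiceDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        CompletionService<Integer> completionService = new ExecutorCompletionService<>(executor);

        // Four tasks taking different amounts of time
        List<Callable<Integer>> tasks = Arrays.asList(
                () -> { Thread.sleep(300); return 3; },
                () -> { Thread.sleep(100); return 1; },
                () -> { Thread.sleep(400); return 4; },
                () -> { Thread.sleep(200); return 2; });

        for (Callable<Integer> task : tasks) {
            completionService.submit(task);
        }

        // take() hands back futures in completion order, not submission order,
        // and draining exactly tasks.size() results keeps the internal queue empty.
        for (int i = 0; i < tasks.size(); i++) {
            Future<Integer> completed = completionService.take();
            System.out.println("Completed with result: " + completed.get());
        }

        executor.shutdown();
    }
}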
March 8, 2015
by Akhil Mittal
· 40,003 Views · 6 Likes
Introduction to Hypertext Application Language (HAL)
Principles of REST architectural style were put forth by Dr. Roy Fielding in his thesis “Architectural Styles and the Design of Network-based Software Architectures”. One of the main principles of the style is that REST applications should be hypermedia driven, that is the change of an application's state or, in other words, transition from one resource to another, should be done by following the links. The rationale behind this principle is that all possible operations with the resource can be discovered without the need of any out-of-band documentation and if some URI changes, there is no need to change the client as it is server's responsibility to generate URIs and insert them into representations. This principle is also called Hypermedia As The Engine Of An Application state (HATEOAS). While the Thesis gives the prescription to use hyperlinks in the representations of resources, Hypertext Application Language (HAL) is one possible recipe how to do design representations with links. In particular, it describes how to design JSON representations, although there is a XML counterpart too. Our discussion will be limited only to the JSON variety. As to to HAL media type, currently, according to the document, its media type is application/vnd.hal+json. It has something to do with the fact that there are several registration trees, that is buckets for media types, with different requirements. Examples include standards, vendor and personal trees. The standards tree has no prefix for a media type. The latter two do have vnd and prs respectively. When HAL moves to standards tree, its content type name will be application/hal+json. Currently, the example application provided by the author of HAL, Mike Kelly, produces application/json in its Content-Type header, so the exact media type is not that important for our discussion. There is a convenient way to tell whether an API is RESTful or not. A so-called Richardson Maturity Model (RMM) was introduced by Leonard Richardson in his 2008 QCon talk and later popularized by Martin Fowler. The model introduces four levels of API maturity starting from Level 0, so that at level two each resource is not only identified by its own URI, but all operations with resources are done using HTTP methods like GET, PUT etc. If an API is at Level 3 it can be considered as RESTful. From the point of view of the RMM HAL helps to upgrade a Level 2 API to Level 3 whereby hypermedia is used. To make our hands dirty with HAL let's discuss a simple book catalog API where a user can browse books; this API could be a part of a larger application like a bookstore, but its mission only to be a catalog of books. The data for our API could be scraped from amazon.com. Some URIs and methods are listed below. Method URI Description GET /books Show all books GET /books/{id} Show details of the book with identificator id GET /authors/{id} Show details for author with identificator id Let's start with the representation of a single book. A book has certain properties such as the title, the price, number of pages, language and so on. The simplest JSON representation of a book could be like the following.As we deal with representations, the methods to update or modify resources were omitted, although they could be the part of the administrative interface of the service.s { "Title":"RESTful Web APIs", "Price":"$31.92", "Paperback":"408 pages", "Language":"English" } It is high time to add some links. 
The first candidate is a link to the object itself, as it is recommended by the HAL specification. To add links there is a reserved keyword in HAL, _links. The first character of it is an underscore which was selected to make the reserved keywords different from the names of the properties of objects in representations, although not all names starting with an underscore are reserved. Actually, _links is the name of an object the properties of which describe the meaning of a particular link. The values of the properties should contain the URI as well as may contain some other goodies. Are the names of the properties fixed? Well, some are, and the list is here, but one can add her own as will be described later. Going back to our self link there is an eponymous relation type in the list and our book object can look like the snippet below. { "_links":{ "self":{ "href":"/book/123" } }, "Title":"RESTful Web APIs", "Price":"$31.92", "Paperback":"408 pages", "Language":"English" } The href property contains the URI by following which our resource can be accessed. That is nice, but most books should have authors. While it is possible to add an array of authors' names to our representation, it is more convenient to add links to authors due to the fact that somebody could wish to navigate to the representation of the author's resource and discover what other books a particular author could have written. While there is an author relation type, there is no keyword for the case when there are several authors, so we can add a relation type for this particular case. Although the relation type could be registered in IANA for public usage to be added to the list, it is important to know how to add one's own rel types. A common practice to do the job is to use a URI by navigating which a description of the relation type could be accessed. The description may contain possible actions, supported representation formats and the purpose of the resource. The first stab at adding authors to our book could look like this. { "_links":{ "self":{ "href":"/book/123" }, "http://booklistapi.com/rels/authors":[ { "href":"/author/4554", "title":"Leonard Richardson" }, { "href":"/author/5758", "title":"Mike Amundsen" }, { "href":"/author/6853", "title":"Sam Ruby" } ] }, "Title":"RESTful Web APIs", "Price":"$31.92", "Paperback":"408 pages", "Language":"English" } By navigating to http://booklistapi.com/rels/authors one can read additional information about this relation type. As it is seen from the snippet above, the properties of a link object are not limited to href, which is the only required one, there are several more properties including title, a human-readable identifier. Some other examples are type for media type and hreflang to hint about the language of the representation. One more notice concerning custom relations is that in its form used above it seems a little bit wordy. HAL has a way to cope with this concern and it is called Compact URI or CURIE. HAL adds to relation types its own type called curies, example usage of which is demonstrated by the following snippet. 
{ "_links":{ "self":{ "href":"/book/123" }, "curries":[ { "name":"ns", "href":"http://booklistapi.com/rels/{rel}", "templated":true } ], "ns:authors":[ { "href":"/author/4554", "title":"Leonard Richardson" }, { "href":"/author/5758", "title":"Mike Amundsen" }, { "href":"/author/6853", "title":"Sam Ruby" } ] }, "Title":"RESTful Web APIs", "Price":"$31.92", "Paperback":"408 pages", "Language":"English" } We used the name property of the link object to provide a key to select an object, which is later used in the name of our custom relation type - ns:authors. Then, we used a templated URI which can be expanded to produce the full URI, same as in the previous example, to access documentation. One additional property is templated, which is false by default but should be set to true when using URI templates. Another place where templates can be used are links that enable search whereby search terms are added to the template to produce the full URI. How this representation can be further improved? Links in a representation could not only be used to add related data, but also to show what possible actions could be taken by the client. For example, if it is an administrative interface, links to edit or delete the book using eponymous relation types may be added. Otherwise, one can add links by following which the client may be able to add a review or rate the book. We have spent enough time with our book and now let's turn to authors. The example above shows that HAL representation may contain the properties of an object as well as some links. Links can help navigate to related objects, such as authors in our book example. Another way to add information about related objects to our representation is to embed some objects in our representation. For example, if one navigates to the representation of an author, a list of all books with some detail is shown. { "_links":{ "self":{ "href":"/author/4554" }, "curries":[ { "name":"ns", "href":"http://booklistapi.com/rels/{rel}", "templated":true } ] }, "_embedded":{ "ns:books":[ { "_links":{ "self":{ "href":"/books/123" } }, "Title":"RESTful Web APIs", "Price":"$31.92" }, { "_links":{ "self":{ "href":"/books/366" } }, "Title":"RESTful Web Services", "Price":"$79.78" }, { "_links":{ "self":{ "href":"/books/865" } }, "Title":"Ruby Cookbook", "Price":"$34.35" } ] } } Another keyword starting with an underscore, _embedded, is used to add data from another object to our representation. Embedded resources are the values of properties which are relation types. The difference from links is that instead of link objects resource objects are used as values. Each embedded object can have properties, links and its own embedded objects, although the latter option is not shown by the example above. The idea is that each representation can contain the trio, including embedded objects, so one has a repeating pattern in nested objects at all levels. Up to this moment we have dealt only with small lists which may not overwhelm a client if transmitted. A representation of the resource which is a list of books could be extremely demanding from the point of view of network bandwidth and memory consumption on the client side, so it may require pagination. For example, if we deal with the resource which is the list of all possible books, one can include some predefined amount of books along with links which enable the client to transition to the next and previous pages using next and previous relation types respectively. 
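A minimal, hypothetical example of such a paginated collection page, reusing the book representations and the ns CURIE from above (the URIs, page numbers and total property are illustrative):

{
  "_links": {
    "self":     { "href": "/books?page=3" },
    "next":     { "href": "/books?page=4" },
    "previous": { "href": "/books?page=2" },
    "curies": [
      { "name": "ns", "href": "http://booklistapi.com/rels/{rel}", "templated": true }
    ]
  },
  "_embedded": {
    "ns:books": [
      { "_links": { "self": { "href": "/books/123" } }, "Title": "RESTful Web APIs" },
      { "_links": { "self": { "href": "/books/366" } }, "Title": "RESTful Web Services" }
    ]
  },
  "total": 42
}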
It should be noted that a representation can contain a link to some resource along with the same resource being embedded, and both can share the same relation type. This is done to reduce the number of trips to the server; the client can extract the necessary information from the representation by using the relation type. We have just scratched the surface in learning HAL, which is an invaluable tool in designing representations. By reading its specification and poking around in the example application one can gain a full grasp of the format. Two final notes: first, representations can be validated against a schema using some on-line tool; second, a list of libraries for working with HAL using various programming languages is here.

References

JSON Linking with HAL
HAL – Hypertext Application Language
JSON Hypertext Application Language
An Interview with HAL Creator Mike Kelly
March 7, 2015
by Dmitry Noranovich
· 37,435 Views · 2 Likes
article thumbnail
Do it in Java 8: Recursive lambdas
Searching on the Internet for information about recursive lambdas, I found nothing really relevant: only very old posts describing ways to create recursive lambdas that don't work with the final version of Java 8, or a more recent post (only two years old) explaining that this possibility had been removed from the JSR specification in October 2012, because "it was felt that there was too much work involved to be worth supporting a corner-case with a simple workaround." Although it might have been removed from the spec, it is perfectly possible to create recursive lambdas.

Lambdas are often used to create anonymous functions. This is not related to the fact that lambdas are implemented through anonymous classes. It is perfectly possible to create a named function, such as:

    UnaryOperator<Integer> square = x -> x * x;

We may then apply this function as:

    Integer x = square.apply(4);

If we want to create a recursive factorial function, we might want to write:

    UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * factorial.apply(x - 1);

But this won't compile, because we can't use factorial while it is being defined. The compiler complains that "Cannot reference a field before it is defined". One possible solution is to first declare the field and initialize it later, in the constructor or in an initializer:

    UnaryOperator<Integer> factorial;
    {
        factorial = i -> i == 0 ? 1 : i * factorial.apply(i - 1);
    }

This works, but it is not very nice. There is, however, a much simpler way. Just add this. before the name of the function, as in:

    UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * this.factorial.apply(x - 1);

Not only does this work, it even allows making the reference final:

    final UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * this.factorial.apply(x - 1);

If you prefer a static field, just replace this with the name of the class:

    static final UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * MyClass.factorial.apply(x - 1);

Interesting things to note: this function will silently produce an arithmetic overflow for factorial(26), producing a negative result. It will produce 0 for factorial(66) and over, until around 3000, where it will overflow the stack, since recursion is implemented on the stack. If you try to find the exact limit, you may be surprised to see that it sometimes happens and sometimes does not, for the same values. Try the following example:

    public class Test {

        static final UnaryOperator<Integer> count =
                x -> x == 0 ? 0 : x + Test.count.apply(x - 1);

        public static void main(String[] args) {
            for (int i = 0;; i++) {
                System.out.println(i + ": " + count.apply(i));
            }
        }
    }

Here's the kind of result you might get:

    ...
    18668: 174256446
    18669: 174275115
    18670: 174293785
    18671: 174312456
    Exception in thread "main" java.lang.StackOverflowError

You could think that Java is able to handle 18671 levels of recursion, but it is not. Try to call count(4000) and you will get a stack overflow. Obviously, Java is memoizing results on each loop execution, allowing it to go much further than with a single call. Of course, it is possible to push the limits much further by implementing recursion on the heap through the use of trampolining. Another interesting point is that the same function implemented as a method uses much less stack space:

    static int count(int x) {
        return x == 0 ? 0 : x + count(x - 1);
    }

This method overflows the stack only around count(6200).
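The article mentions trampolining only in passing. As a rough illustration that is not part of the original post, here is a minimal trampoline sketch, assuming a hand-rolled TailCall functional interface (the name and shape are made up for this example); each recursive step returns the next step as a lambda instead of calling itself, so the iteration happens on the heap rather than on the stack.

    interface TailCall<T> {

        TailCall<T> next();  // produce the next step of the computation

        default boolean isComplete() { return false; }

        default T result() { throw new IllegalStateException("not finished"); }

        static <T> TailCall<T> done(final T value) {
            return new TailCall<T>() {
                @Override public TailCall<T> next() { throw new IllegalStateException("already done"); }
                @Override public boolean isComplete() { return true; }
                @Override public T result() { return value; }
            };
        }

        static <T> T run(TailCall<T> call) {
            while (!call.isComplete()) {
                call = call.next();  // iterate instead of recursing, so the stack does not grow
            }
            return call.result();
        }

        // The same summing function as above, expressed as a trampolined recursion.
        static TailCall<Long> count(long acc, long x) {
            return x == 0 ? done(acc) : () -> count(acc + x, x - 1);
        }
    }

    // TailCall.run(TailCall.count(0, 1_000_000)) returns 500000500000L,
    // where the direct recursion above overflows the stack long before that.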
March 6, 2015
by Pierre-Yves Saumont
· 35,160 Views · 3 Likes
article thumbnail
Exposing and Consuming SOAP Web Service Using Apache Camel-CXF Component and Spring
Let's take the customer endpoint from my earlier article. Here I am going to use the Apache Camel CXF component to expose this customer endpoint as a web service.

    @WebService(serviceName="customerService")
    public interface CustomerService {
        public Customer getCustomerById(String customerId);
    }

    public class CustomerEndpoint implements CustomerService {

        private CustomerEndPointService service;

        @Override
        public Customer getCustomerById(String customerId) {
            Customer customer = service.getCustomerById(customerId);
            return customer;
        }
    }

Exposing the service using the Camel-CXF component

Remember to specify the schemaLocation and namespace in the Spring context file.

Consuming a SOAP web service using Camel-CXF

Say you have a SOAP web service at the address http://localhost:8181/OrderManagement/order. You can then invoke this web service from a Camel route; please see the code snippet below. In a camel-cxf endpoint you can also specify the data format. Hope this helps you to create a SOAP web service using the Camel-CXF component.
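The Spring XML snippets from the original post did not survive the page extraction, so as a rough substitute here is a minimal Camel Java DSL sketch of the consuming side. The endpoint address is the one mentioned above, while com.example.OrderService is a hypothetical generated JAX-WS service interface and the direct: route name is made up for illustration.

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Call the external SOAP service from a Camel route via the camel-cxf component.
            // dataFormat=POJO means the route works with the generated service objects
            // rather than raw SOAP payloads.
            from("direct:invokeOrder")
                .to("cxf://http://localhost:8181/OrderManagement/order"
                    + "?serviceClass=com.example.OrderService"
                    + "&dataFormat=POJO");
        }
    }

The exposing side is configured analogously, with the cxf endpoint used in a from(...) instead of a to(...).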
March 5, 2015
by Roshan Thomas
· 52,425 Views
article thumbnail
Why I Use OrientDB on Production Applications
Like many other Java developers, when I start a new Java development project that requires a database, I have hopes and dreams of what my database looks like:

  • Java API (of course)
  • Embeddable
  • Pure Java
  • Simple jar file for inclusion in my project
  • Database stored in a directory on disk
  • Faster than a rocket

First I'm going to review these points, and then I'm going to talk about the database I chose for my latest project, which is in production now with hundreds of users accessing the web application each month.

What I Want from My Database

Here's what I'm looking for in my database. These are the things that literally make me happy and joyous when writing code.

Java API

I code in Java. It's natural for me to want to use a modern Java API for my database work.

Embeddable

My development productivity and programming enjoyment skyrocket when my database is embedded. The database starts and stops with my application. It's easy to destroy my database and restart from scratch. I can upgrade my database by updating my database jar file. It's easy to deploy my application into testing and production, because there's no separate database server to start up and manage. (I know about the issue with clustering and an embedded database, but I'll get to that.)

Pure Java

Back when I developed software that would be deployed on all manner of hardware, I was a stickler that all my code be pure Java, so that I could be confident that my code would run wherever customers and users deployed it. In this day of SaaS, I'm less picky. I develop on the Mac. I test and run in production on Linux. Those are the systems I care about, so if my database has some platform-specific code in it to make it run fast and well, I'm fine with that. Just as long as that platform-specific configuration is not exposed to me as the developer.

Simple Jar File for Inclusion in My Project

I really just want one database jar file to add to my project. And I don't want that jar file messing with my code or the dependencies I include in my project. If the database uses Guava 1.2, and I'm using Guava 0.8, that can mess me up. I want my database not to interfere with the jars that I use by introducing newer or older versions of class files that I already reference in my project's jars.

Database Stored in a Directory on Disk

I like to destroy my database by deleting a directory. I like to run multiple, simultaneous databases by configuring each database to use a separate directory. That makes me super productive during development, and it makes it more fun for me to program to a database.

Faster Than a Rocket

I think that's just a given.

My Latest Project That Needs a Database

My latest project is Floify.com. Floify is a Mortgage Borrower Portal, automating the process of collecting mortgage loan documents from borrowers and emailing milestone loan status updates to real estate agents and borrowers. Mortgage loan originators use Floify to automate the labor-intensive parts of their loan processes. The web application receives about 500 unique visitors per month. Floify experienced 28% growth in January 2015. Floify's vital statistics are:

  • 38,301 loan documents under management
  • 3,619 registered users
  • 3,113 loan packages under management

The Database I Chose for My Latest Project

When I started Floify, I looked for a database that met all the criteria I've described above. I decided against databases that were server-based (Postgres, etc.). I decided against databases that weren't Java-based (MongoDB, etc.).
I decided against databases that didn't support ACID transactions. I narrowed my choices to OrientDB and Neo4j. It's been a couple of years since that decision process occurred, but I distinctly remember a few reasons why I ultimately chose OrientDB over Neo4j:

  • Performance benchmarks for OrientDB were very impressive.
  • The OrientDB development team was very active.
  • Cost. OrientDB is free. Neo4j cost more than what I was willing to pay or what I could afford. I forget which it was.

My Favourite OrientDB Features

Here are some of my favourite features in OrientDB. These are not competitive advantages of OrientDB; they are just some of the things that make me happy when coding against an embeddable database.

  • I can create the database in code.
  • I don't have to use SQL for querying, but most of the time, I do. I already know SQL, and it's just easy for me.
  • I use the document database, and it's very pleasant inserting new documents in Java.
  • I can store multi-megabyte binary objects directly in the database.
  • My database is stored in a directory on disk.
  • When scalability demands it, I can upgrade to a two-server distributed database. I haven't been there yet.
  • Speed. For me, OrientDB is very fast, and in the few years I've been using it, it's become faster.

OrientDB doesn't come in a single jar file, as would be my ideal. I have to include a few different jars, but that's an easy tradeoff for me.

Future

In the future, as Floify's performance and scalability needs demand it, I'll investigate a multi-server database configuration on OrientDB. In the meantime, I'm preparing to upgrade to OrientDB 2.0, which was recently released and promises even more speed. Go speed. :-)
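As a point of reference for readers who have not seen OrientDB's embedded document API, here is a minimal sketch that is not from the original article: creating a database in code and inserting a document, roughly as it looked in the OrientDB 1.x/2.0 document API. The database path and the LoanDocument class name are made up for illustration.

    import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
    import com.orientechnologies.orient.core.record.impl.ODocument;

    public class OrientDbSketch {
        public static void main(String[] args) {
            // An embedded database is just a directory on disk, addressed with the plocal: prefix.
            ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/orientdb-demo");
            if (db.exists()) {
                db.open("admin", "admin");
            } else {
                db.create();
            }
            try {
                ODocument doc = new ODocument("LoanDocument");  // hypothetical document class
                doc.field("borrower", "Jane Doe");
                doc.field("status", "received");
                doc.save();
            } finally {
                db.close();
            }
        }
    }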
March 5, 2015
by Dave Sims
· 17,978 Views · 6 Likes
article thumbnail
Determining File Types in Java
Programmatically determining the type of a file can be surprisingly tricky and there have been many content-based file identification approaches proposed and implemented.
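Only the teaser of this article survived in this listing. For context, the simplest JDK facility in this area, and a reasonable starting point, is java.nio.file.Files.probeContentType; whether it is the approach the article favours is not shown here, and the file name below is arbitrary.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class FileTypeProbe {
        public static void main(String[] args) throws Exception {
            Path path = Paths.get(args.length > 0 ? args[0] : "example.pdf");
            // The result depends on the platform's installed FileTypeDetector
            // implementations and may be null or based purely on the file extension.
            String contentType = Files.probeContentType(path);
            System.out.println(path + " -> " + contentType);
        }
    }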
March 4, 2015
by Dustin Marx
· 168,573 Views · 8 Likes
article thumbnail
Swifter Swift Image Processing With GPUImage
I'm a big fan of Apple's Core Image technology: my Nodality application is based entirely around Core Image filters. However, for new users, the code for adding a simple filter to an image is a little oblique and the implementation is very "stringly" typed. This post looks at an alternative, GPUImage from Brad Larson. GPUImage is a framework containing a rich set of image filters, many of which aren't in Core Image. It has a far simpler and more strongly typed API and, in some cases, is faster than Core Image.

To kick off, let's look at the code required to apply a Gaussian blur to an image (inputImage) using Core Image:

    let inputImage = UIImage()
    let ciContext = CIContext(options: nil)
    let blurFilter = CIFilter(name: "CIGaussianBlur")
    blurFilter.setValue(CIImage(image: inputImage), forKey: "inputImage")
    blurFilter.setValue(10, forKey: "inputRadius")
    let outputImageData = blurFilter.valueForKey("outputImage") as CIImage!
    let outputImageRef: CGImage = ciContext.createCGImage(outputImageData, fromRect: outputImageData.extent())
    let outputImage = UIImage(CGImage: outputImageRef)!

...not only do we need to explicitly define the context, both the filter name and parameter are strings, and we need a few steps to convert the filter's output into a UIImage. Here's the same functionality using GPUImage:

    let inputImage = UIImage()
    let blurFilter = GPUImageGaussianBlurFilter()
    blurFilter.blurRadiusInPixels = 10
    let outputImage = blurFilter.imageByFilteringImage(inputImage)

Here, both the filter and its blur radius parameter are properly typed and the filter returns a UIImage instance. On the flip side, there is some setting up to do. Once you've got a local copy of GPUImage, drag the framework project into your application's project. Then, under the application target's build phases, add a target dependency, a reference to GPUImage.framework under link binaries, and a copy files stage. Your build phases screen should look like this:

Then, by simply importing GPUImage, you're ready to roll.

To show off some of the funkier filters contained in GPUImage, I've created a little demonstration app, GPUImageDemo. The app demonstrates Polar Pixellate, Polka Dot, Sketch, Threshold Sketch, Toon, Smooth Toon, Emboss, Sphere Refraction and Glass Sphere - none of which are available in Core Image. The filtering work is all done in my GPUImageDelegate class, where a switch statement declares a GPUImageOutput variable (the class that includes the imageByFilteringImage() method) and sets it to the appropriate concrete class depending on the user interface. For example, if the picker is set to threshold sketch, the following case statement is executed:

    case ImageFilter.ThresholdSketch:
        gpuImageFilter = GPUImageThresholdSketchFilter()
        if let gpuImageFilter = gpuImageFilter as? GPUImageThresholdSketchFilter {
            if values.count > 1 {
                gpuImageFilter.edgeStrength = values[0]
                gpuImageFilter.threshold = values[1]
            }
        }

If you build this project, you may encounter a build error on the documentation target. I've simply deleted this target on affected machines. GPUImage is fast enough to filter video. I've taken my recent two million particles experiment and added a post-processing step that consists of a cartoon filter and an emboss filter.
These are packaged together in a GPUImageFilterGroup:

    let toonFilter = GPUImageSmoothToonFilter()
    let embossFilter = GPUImageEmbossFilter()
    let filterGroup = GPUImageFilterGroup()

    toonFilter.threshold = 1
    embossFilter.intensity = 2

    filterGroup.addFilter(toonFilter)
    filterGroup.addFilter(embossFilter)

    toonFilter.addTarget(embossFilter)

    filterGroup.initialFilters = [ toonFilter ]
    filterGroup.terminalFilter = embossFilter

Since GPUImageFilterGroup extends GPUImageOutput, I can take the output from the Metal texture, create a UIImage instance of it and pass it to the composite filter:

    self.imageView.image = self.filterGroup.imageByFilteringImage(UIImage(CGImage: imageRef)!)

On my iPad Air 2, the final result of 2,000,000 particles with a two-filter post process on a 1,024 x 1,024 image still runs at around 20 frames per second. Here's a real-time screen capture:

The source code for my GPUImageDemo is available at my GitHub repository, and GPUImage lives in its own repository.
March 4, 2015
by Simon Gladman
· 8,425 Views
article thumbnail
Using JUnit for Something Else
junit != unit test

JUnit is the Java unit testing framework. We usually use it for unit testing, but many times we use it to execute integration tests as well. The major difference is that unit tests test individual units, while integration tests test how the different classes work together. This way integration tests cover a longer execution chain. This means that they may discover more errors than unit tests, but at the same time they usually take longer to run, and it is harder to locate the bug if a test fails. If you, as a developer, are aware of these differences, there is nothing wrong with using JUnit to execute non-unit tests.

I have seen examples in production code where the JUnit framework was used to execute system tests, where the execution chain of the test included external service calls over the network. JUnit is just a tool, so if you are aware of the drawbacks there is nothing inherently wrong with that. However, in the actual case the JUnit tests were executed in the normal Maven test phase, and once the external service went down the code failed to build. That is bad, and it clearly showed that the developer creating the code was not aware of the big picture that includes the external services and the build process.

With all that said, let me tell you a different story; we will join the two threads later.

We speak languages… many

Our programs have user interfaces, most of the time. The interface contains texts, usually in different languages: usually in English and in the local language where the code is targeted. The text literals are usually externalized and stored in "properties" files. Having multiple languages, we have a separate properties file for each language, each defining the literal text for an id. For example, we have the files

    messages-de.properties
    messages-fr.properties
    messages-en.properties
    messages-pl.properties
    messages.properties

and in the Java code we were accessing these via the Spring MessageSource, calling

    String label = messageSource.getMessage("my.label.name", null, "label", locale);

We, programmers, are kind of lazy

The problems came when we did not have some of the translations of the texts. The job of specifying the actual text of the labels in different languages does not belong to the programmers. Programmers are good at speaking Java, C and other programming languages, but they do not really shine when it comes to natural languages. Most of us just do not speak all the languages needed. There are people whose job it is to translate the text, usually different people for different languages. Some of them work faster, others slower, and the coding just could not wait for the translations to be ready. Until the final translation is available we use temporary strings.

All temporary solutions become final. The temporary strings, which were just the English version, got into the release.

Process and discipline: failed

To avoid that, we implemented a process. We opened a Jira issue for each translation. When the translation was ready it got attached to the issue. When it got edited into the properties file and committed to git, the issue was closed. It was such a burden and overhead that programmers were slowed down by it, and less disciplined programmers just did not follow the process. Generally it was a bad idea. We concluded that not having a translation in the properties files is not the real big issue. The issue is not knowing that it is missing and creating a release. So we needed a process to check the correctness of the properties files before release.
Lightweight process and control

Checking manually would have been cumbersome. We created JUnit tests that compared the different language files and checked that no key present in one file is missing from another, and that the values are not the same as in the default English version. The JUnit test was to be executed each time the project was to be released. Then we realized that some of the values really are the same as the English version, so we started to use the letter 'X' in the first position in the language files to signal a label still waiting for its real translated value. At this point somebody suggested that the JUnit test could be replaced by a simple 'grep'. That was almost true, except that we still wanted to discover missing keys and to have the test run automatically during the release process.

Summary, and take-away

The JUnit framework was designed to execute unit tests, but frameworks can and will be used not only for the purpose they were designed for. (Side note: this is actually true for any tool, be it as simple as a hammer or as complex as default methods in Java interfaces.) You can use JUnit to execute tasks that can be executed during the testing phase of the build and/or release. The tasks should execute fast, since the execution time adds to the build/release cycle. They should not depend on external sources, especially those reachable over the network, because these going down may also make the build process fail. When something is not acceptable for the build, use the JUnit API to signal failure. Do not just write warnings. Nobody reads warnings.
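Not part of the original post: a minimal sketch of the kind of translation-completeness test described above, assuming JUnit 4 and hypothetical file locations under src/main/resources. It checks both the missing-key case and the 'X' placeholder convention.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertNotNull;

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;
    import org.junit.Test;

    public class TranslationCompletenessTest {

        // Hypothetical locations; adjust to the project's resource layout.
        private static final String DEFAULT_FILE = "src/main/resources/messages.properties";
        private static final String GERMAN_FILE  = "src/main/resources/messages-de.properties";

        private Properties load(String fileName) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(fileName)) {
                props.load(in);
            }
            return props;
        }

        @Test
        public void germanFileHasAllKeysAndNoPlaceholders() throws IOException {
            Properties defaults = load(DEFAULT_FILE);
            Properties german = load(GERMAN_FILE);
            for (String key : defaults.stringPropertyNames()) {
                String translated = german.getProperty(key);
                // A missing key or an 'X'-prefixed placeholder fails the build, not just a warning.
                assertNotNull("Missing key in messages-de.properties: " + key, translated);
                assertFalse("Untranslated placeholder for key: " + key, translated.startsWith("X"));
            }
        }
    }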
March 3, 2015
by Peter Verhas DZone Core CORE
· 5,006 Views · 1 Like
article thumbnail
HTML/CSS/JavaScript GUI in Java Swing Application
The following code demonstrates how simple it is to embed a web browser component into your Java Swing/AWT/JavaFX desktop application.
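The article's own snippet is not reproduced in this listing, and its browser component is not named in the teaser. As a stand-in only, here is a minimal sketch using the JDK-bundled JavaFX WebView embedded in Swing through JFXPanel; the URL is arbitrary.

    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;
    import javafx.application.Platform;
    import javafx.embed.swing.JFXPanel;
    import javafx.scene.Scene;
    import javafx.scene.web.WebView;

    public class SwingBrowserSketch {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Embedded browser");
                JFXPanel fxPanel = new JFXPanel();  // bridges JavaFX content into a Swing container
                frame.add(fxPanel);
                frame.setSize(800, 600);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);

                Platform.runLater(() -> {  // the JavaFX scene graph must be built on the FX thread
                    WebView webView = new WebView();
                    webView.getEngine().load("http://dzone.com");
                    fxPanel.setScene(new Scene(webView));
                });
            });
        }
    }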
March 3, 2015
by Vladimir Ikryanov
· 145,154 Views · 8 Likes
article thumbnail
Quick Way to Open Closed Project in Eclipse
Sometimes it is all about knowing the simple tricks, even if they might be obvious ;-). In my post "Eclipse Performance Improvement Tip: Close Unused Projects" I explained why it is important to close the unused projects in the workspace to improve Eclipse performance:

Closing Project in Eclipse Workspace

To open the projects (or the selected projects), the 'Open Project' context menu (or the menu Project > Open Project) can be used:

Open Project Context Menu

An even easier way (and this might not be obvious!) is simply to double-click on the closed project folder:

Double Click on the Closed Project to Open It

That's it! It will open the project, which is much easier, simpler and faster than using the menu or context menu. Unfortunately I'm not aware of a similar trick to close it. Anyone? Happy Opening :-)
March 3, 2015
by Erich Styger
· 16,544 Views · 1 Like
article thumbnail
Using a Full-Size None-Stretched Background Image in a Xamarin.Forms App
Intro

I have always liked to use a kind of translucent background image on my app's screens; it makes them look a bit more professional than just a plain single-colored screen - a trick I learned from my fellow MVP Mark Monster in the very early days of Windows Phone development. Now that I am trying to learn some Xamarin development, I want to do the same thing - but it turns out that works a bit differently from what I am used to.

Setting up the basic application

I created a Xamarin Forms portable app "BackGroundImageDemo", but when you create a new Xamarin Forms application using the newest templates in Xamarin 3.9, you get an application that uses forms, but no XAML. Having lived and dreamed XAML for the last 5 years, I don't quite like that, so I start out by making the following changes:

1. Update all NuGet packages - this will get you (at the time of this writing) the 1.3.2 forms packages
2. Add StartPage.Xaml to BackGroundImageDemo (Portable)
3. Make some changes to the App.cs in BackGroundImageDemo (Portable) to make it use the XAML page:

    namespace BackGroundImageDemo
    {
        public class App : Application
        {
            public App()
            {
                // The root page of your application
                MainPage = new StartPage();
            }

            // stuff omitted
        }
    }

And when you run that, for instance on Windows Phone, it looks like this:

Adding a background picture

Now suppose I want to make an app related to astronomy - then I might use this beautiful picture of Jupiter, which I nicked from Wikipedia, as a background image. It has a nice transparent background, so that will do. And guess what, the ContentPage class has a nice BackgroundImage attribute, so we are nearly done, right? As per the instructions found on the Xamarin developer pages, images will need to be:

  • For Windows Phone, in the root
  • For Android, in the Resources/drawable folder
  • For iOS, in the Resources folder

In addition, you must set the right build properties for this image:

  • For Windows Phone, set "Build Action" to "Content" (this is the default) and "Copy to Output Directory" to "Copy if newer"
  • For Android, this is "AndroidResource" and "Do not copy"
  • For iOS, this is "BundleResource" and "Do not copy"

So I copy Jupiter.png three times into all the folders (yeah, I know, there are smarter ways to do that, but that's not the point here), add BackgroundImage="Jupiter.png" to the ContentPage tag and… the result, as we can see on the right, is not quite what we hoped for. On Android, Jupiter is looking like a giant Easter egg. Windows Phone gives the same display. On the Cupertino side, we get a different but equally undesirable effect.

RelativeLayout to the rescue

Using RelativeLayout and constraint expressions, we can more or less achieve the same result as Windows XAML's "Uniform". All elements within a RelativeLayout will essentially be drawn on top of each other, unless you specify a BoundsConstraint. I don't do that here, so essentially every object will be drawn from 0,0. By setting the width and height of the RelativeLayout's children to essentially the width and height of the RelativeLayout itself, it will automatically stretch to fill the screen. And thus the image ends up in the middle, as does the Grid with the actual UI in it. Just make sure you put the Image first and the Grid second, or else the image will appear over your text. I also added Opacity="0.3" to make the image translucent and not so bright that it actually wipes out your UI. The exact value of the opacity is a matter of taste, and you will need to determine how it affects the readability of the actual UI on a real device.
Also, you might consider editing the image in Paint.NET or the like and hard-coding its opacity to 0.3 in the image itself; I guess that would save the device some work. Anyway, net result: Demo solution, as always, can be downloaded here.
March 2, 2015
by Joost van Schaik
· 83,642 Views
article thumbnail
Using MongoDB with Hadoop & Spark: Part 2 - Hive Example
Originally Written by Matt Kalan Welcome to part two of our three-part series on MongoDB and Hadoop. In part one, we introduced Hadoop and how to set it up. In this post, we'll look at a Hive example. Introduction & Setup of Hadoop and MongoDB Hive Example Spark Example & Key Takeaways For more detail on the use case, see the first paragraph of part 1. Summary Use case: aggregating 1 minute intervals of stock prices into 5 minute intervals Input:: 1 minute stock prices intervals in a MongoDB database Simple Analysis: performed in: - Hive - Spark Output: 5 minute stock prices intervals in Hadoop Hive Example I ran the following example from the Hive command line (simply typing the command “hive” with no parameters), not Cloudera’s Hue editor, as that would have needed additional installation steps. I immediately noticed the criticism people have with Hive, that everything is compiled into MapReduce which takes considerable time. I ran most things with just 20 records to make the queries run quickly. This creates the definition of the table in Hive that matches the structure of the data in MongoDB. MongoDB has a dynamic schema for variable data shapes but Hive and SQL need a schema definition. CREATE EXTERNAL TABLE minute_bars ( id STRUCT, Symbol STRING, Timestamp STRING, Day INT, Open DOUBLE, High DOUBLE, Low DOUBLE, Close DOUBLE, Volume INT ) STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler' WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id", "Symbol":"Symbol", "Timestamp":"Timestamp", "Day":"Day", "Open":"Open", "High":"High", "Low":"Low", "Close":"Close", "Volume":"Volume"}') TBLPROPERTIES('mongo.uri'='mongodb://localhost:27017/marketdata.minbars'); Recent changes in the Apache Hive repo make the mappings necessary even if you are keeping the field names the same. This should be changed in the MongoDB Hadoop Connector soon if not already by the time you read this. Then I ran the following command to create a Hive table for the 5 minute bars: CREATE TABLE five_minute_bars ( id STRUCT, Symbol STRING, Timestamp STRING, Open DOUBLE, High DOUBLE, Low DOUBLE, Close DOUBLE ); This insert statement uses the SQL windowing functions to group 5 1-minute periods and determine the OHLC for the 5 minutes. There are definitely other ways to do this but here is one I figured out. Grouping in SQL is a little different from grouping in the MongoDB aggregation framework (in which you can pull the first and last of a group easily), so it took me a little while to remember how to do it with a subquery. The subquery takes each group of 5 1-minute records/documents, sorts them by time, and takes the open, high, low, and close price up to that record in each 5-minute period. Then the outside WHERE clause selects the last 1-minute bar in that period (because that row in the subquery has the correct OHLC information for its 5-minute period). I definitely welcome easier queries to understand but you can run the subquery by itself to see what it’s doing too. 
INSERT INTO TABLE five_minute_bars SELECT m.id, m.Symbol, m.OpenTime as Timestamp, m.Open, m.High, m.Low, m.Close FROM (SELECT id, Symbol, FIRST_VALUE(Timestamp) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as OpenTime, LAST_VALUE(Timestamp) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as CloseTime, FIRST_VALUE(Open) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as Open, MAX(High) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as High, MIN(Low) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as Low, LAST_VALUE(Close) OVER ( PARTITION BY floor(unix_timestamp(Timestamp, 'yyyy-MM-dd HH:mm')/(5*60)) ORDER BY Timestamp) as Close FROM minute_bars) as m WHERE unix_timestamp(m.CloseTime, 'yyyy-MM-dd HH:mm') - unix_timestamp(m.OpenTime, 'yyyy-MM-dd HH:mm') = 60*4; I can definitely see the benefit of being able to use SQL to access data in MongoDB and optionally in other databases and file formats, all with the same commands, while the mapping differences are handled in the table declarations. The downside is that the latency is quite high, but that could be made up some with the ability to scale horizontally across many nodes. I think this is the appeal of Hive for most people - they can scale to very large data volumes using traditional SQL, and latency is not a primary concern. Post #3 in this blog series shows similar examples using Spark. Introduction & Setup of Hadoop and MongoDB Hive Example Spark Example & Key Takeaways To learn more, watch our video on MongoDB and Hadoop. We will take a deep dive into the MongoDB Connector for Hadoop and how it can be applied to enable new business insights. WATCH MONGODB & HADOOP << Read Part 1
March 2, 2015
by Francesca Krihely
· 10,617 Views
article thumbnail
ASCII Art Generator in Java
ASCII art is a technique that uses printable characters from the ASCII standard to produce visual art. It had its purpose in history, when printers lacked graphics ability, and it was also used in emails when embedding images was not yet possible. I present a very simple ASCII art generator written in Java with configurable font and contrast. Since it was built over a few hours during the weekend, it is not optimal, but it was a fun experiment. Down below you can see the code in action and an explanation of how it works.

The algorithm

The idea is rather simple. First, we create an image of each character we want to use in our ASCII art and cache it. Then we go through the original image and, for each block of the size of the characters, we search for the best fit. We do so by first doing some preprocessing of the original image: we convert the image to grayscale and apply a threshold filter. By doing so we get a black-and-white-only, contrasted image that we can compare with each character and calculate the difference. We then simply pick the most similar character and do so until the whole image is converted. It is possible to experiment with the threshold value to impact contrast and enhance the final result as needed. A very simple method to accomplish this is to set the red, green and blue values to the average of all three:

    red = green = blue = (red + green + blue) / 3

If that value is lower than a threshold value, we make it black, otherwise we make it white. Finally, we compare that image with each character pixel by pixel and calculate the average error. This is demonstrated in the images and snippet below:

    int r1 = (charPixel >> 16) & 0xff;
    int g1 = (charPixel >> 8) & 0xff;
    int b1 = charPixel & 0xff;

    int r2 = (sourcePixel >> 16) & 0xff;
    int g2 = (sourcePixel >> 8) & 0xff;
    int b2 = sourcePixel & 0xff;

    int thresholded = (r2 + g2 + b2) / 3 < threshold ? 0 : 255;

    error = Math.sqrt((r1 - thresholded) * (r1 - thresholded)
            + (g1 - thresholded) * (g1 - thresholded)
            + (b1 - thresholded) * (b1 - thresholded));

Since colors are stored in a single integer, we first extract the individual color components and perform the calculations I explained. Another challenge was to measure character dimensions accurately and to draw the characters centered. After a lot of experimentation with different methods I finally found this good enough:

    Rectangle rect = new TextLayout(Character.toString((char) i), fm.getFont(),
            fm.getFontRenderContext()).getOutline(null).getBounds();

    g.drawString(character, 0, (int) (rect.getHeight() - rect.getMaxY()));

You can download the complete source code from the GitHub repo. Here are a few examples with different font sizes and thresholds:
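Not part of the original article (whose full source is on GitHub): a minimal sketch of the whole-image version of the grayscale-plus-threshold step described above, using only java.awt.image.BufferedImage; the class and method names are made up for illustration.

    import java.awt.image.BufferedImage;

    public class ThresholdFilterSketch {

        // Converts the source image to pure black and white using the average-of-RGB
        // grayscale value described above; 'threshold' is the tunable contrast value (0-255).
        static BufferedImage threshold(BufferedImage source, int threshold) {
            BufferedImage result = new BufferedImage(
                    source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < source.getHeight(); y++) {
                for (int x = 0; x < source.getWidth(); x++) {
                    int rgb = source.getRGB(x, y);
                    int r = (rgb >> 16) & 0xff;
                    int g = (rgb >> 8) & 0xff;
                    int b = rgb & 0xff;
                    int gray = (r + g + b) / 3;
                    // Below the threshold the pixel becomes black, otherwise white,
                    // matching the per-pixel snippet above.
                    result.setRGB(x, y, gray < threshold ? 0x000000 : 0xffffff);
                }
            }
            return result;
        }
    }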
February 28, 2015
by Ivan Korhner
· 14,336 Views · 1 Like
