The Latest Java Topics

Java Mapper and Model Testing Using eXpectamundo
As a long-time Java application developer working in a variety of corporate environments, one of the common activities I have to perform is to write mappings to translate one Java model object into another. Regardless of the technology or library I use to write the mapper, the same question comes up: what is the best way to unit test it? I've been through various approaches, all with a variety of pros and cons related to the amount of time it takes to write what is essentially a pretty simple test. The tendency (I hate to admit) is to skimp on testing all fields and focus on what I deem to be the key fields in order to concentrate on, dare I say it, more interesting areas of the codebase. As any coder knows, this is the road to bugs, and the time spent writing the test is repaid many times over in reduced debugging later.

Enter eXpectamundo

eXpectamundo is an open source Java library hosted on GitHub that takes a new approach to testing model objects. It allows the Java developer to write a prototype object which has been set up with expectations. This prototype can then be used to test the actual output in a unit test. The snippet below illustrates the setup of the prototype:

User expected = prototype(User.class);
expect(expected.getCreateTs()).isWithin(1, TimeUnit.SECONDS, Moments.today());
expect(expected.getFirstName()).isEqualTo("John");
expect(expected.getUserId()).isNull();
expect(expected.getDateOfBirth()).isComparableTo(AUG(9, 1975));
expectThat(actual).matches(expected);

For a complete example, let's take a simple Data Transfer Object (DTO) which transfers the definition of a new user from a UI:

package org.exparity.expectamundo.sample.mapper;

import java.util.Date;

public class UserDTO {

    private String username, firstName, surname;
    private Date dateOfBirth;

    public UserDTO(String username, String firstName, String surname, Date dateOfBirth) {
        this.username = username;
        this.firstName = firstName;
        this.surname = surname;
        this.dateOfBirth = dateOfBirth;
    }

    public String getUsername() { return username; }
    public String getFirstName() { return firstName; }
    public String getSurname() { return surname; }
    public Date getDateOfBirth() { return dateOfBirth; }
}

This DTO needs to be mapped into the domain model User object, which can then be manipulated, stored, etc. by the service layer. The domain User object is defined as below:

package org.exparity.expectamundo.sample.mapper;

import java.util.Date;

public class User {

    private Integer userId;
    private Date createTs = new Date();
    private String username, firstName, surname;
    private Date dateOfBirth;

    public User(String username, String firstName, String surname, final Date dateOfBirth) {
        this.username = username;
        this.firstName = firstName;
        this.surname = surname;
        this.dateOfBirth = dateOfBirth;
    }

    public Integer getUserId() { return userId; }
    public Date getCreateTs() { return createTs; }
    public String getUsername() { return username; }
    public String getFirstName() { return firstName; }
    public String getSurname() { return surname; }
    public Date getDateOfBirth() { return dateOfBirth; }
}

The code for the mapper is simple, so we'll use a simple hand-coded mapping layer; however, I've introduced a bug into the mapper which we'll detect later with our unit test.
package org.exparity.expectamundo.sample.mapper;

public class UserDTOToUserMapper {

    public User map(final UserDTO userDTO) {
        // Bug: the firstName and surname arguments are transposed
        return new User(userDTO.getUsername(), userDTO.getSurname(), userDTO.getFirstName(), userDTO.getDateOfBirth());
    }
}

We then write a unit test for the mapper using eXpectamundo to test the expectation:

package org.exparity.expectamundo.sample.mapper;

import java.util.concurrent.TimeUnit;
import org.junit.Test;
import static org.exparity.dates.en.FluentDate.AUG;
import static org.exparity.expectamundo.Expectamundo.*;
import static org.exparity.hamcrest.date.Moments.now;

public class UserDTOToUserMapperTest {

    @Test
    public void canMapUserDTOToUser() {
        UserDTO dto = new UserDTO("JohnSmith", "John", "Smith", AUG(9, 1975));
        User actual = new UserDTOToUserMapper().map(dto);
        User expected = prototype(User.class);
        expect(expected.getCreateTs()).isWithin(1, TimeUnit.SECONDS, now());
        expect(expected.getFirstName()).isEqualTo("John");
        expect(expected.getSurname()).isEqualTo("Smith");
        expect(expected.getUsername()).isEqualTo("JohnSmith");
        expect(expected.getUserId()).isNull();
        expect(expected.getDateOfBirth()).isSameDay(AUG(9, 1975));
        expectThat(actual).matches(expected);
    }
}

The test shows how simple equality tests can be performed and also introduces some of the specialised tests which can be performed, such as testing for null, testing the bounds of the create timestamp, and performing a comparison check on the dateOfBirth property. Running the unit test reports the failure in the mapper, where the firstName and surname properties have been transposed:

java.lang.AssertionError: Expected a User containing properties :
    getCreateTs() is expected within 1 seconds of Sun Jan 18 13:00:33 GMT 2015
    getFirstName() is equal to John
    getSurname() is equal to Smith
    getUsername() is equal to JohnSmith
    getUserId() is null
    getDateOfBirth() is comparable to Sat Aug 09 00:00:00 BST 1975
But actual is a User containing properties :
    getFirstName() is Smith
    getSurname() is John

A simple fix to the mapper resolves the issue:

package org.exparity.expectamundo.sample.mapper;

public class UserDTOToUserMapper {

    public User map(final UserDTO userDTO) {
        return new User(userDTO.getUsername(), userDTO.getFirstName(), userDTO.getSurname(), userDTO.getDateOfBirth());
    }
}

But I can do this with hamcrest!
The hamcrest equivalent of this test would follow one of two patterns: a custom implementation of org.hamcrest.Matcher for matching User objects, or a set of inline assertions as per the following example:

package org.exparity.expectamundo.sample.mapper;

import java.util.concurrent.TimeUnit;
import org.junit.Test;
import static org.exparity.dates.en.FluentDate.AUG;
import static org.exparity.hamcrest.date.DateMatchers.within;
import static org.exparity.hamcrest.date.Moments.now;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;

public class UserDTOToUserMapperHamcrestTest {

    @Test
    public void canMapUserDTOToUser() {
        UserDTO dto = new UserDTO("JohnSmith", "John", "Smith", AUG(9, 1975));
        User actual = new UserDTOToUserMapper().map(dto);
        assertThat(actual.getCreateTs(), within(1, TimeUnit.SECONDS, now()));
        assertThat(actual.getFirstName(), equalTo("John"));
        assertThat(actual.getSurname(), equalTo("Smith"));
        assertThat(actual.getUsername(), equalTo("JohnSmith"));
        assertThat(actual.getUserId(), nullValue());
        assertThat(actual.getDateOfBirth(), comparesEqualTo(AUG(9, 1975)));
    }
}

In this example the only difference eXpectamundo offers over hamcrest is a different way of reporting mismatches: eXpectamundo will report all differences between the expected and the actual, whereas the hamcrest test will fail on the first difference. An improvement, but not really a reason to consider alternatives. Where the approach eXpectamundo offers starts to differentiate itself is when testing more complex object collections and graphs.

Collection testing with eXpectamundo

Moving our code forward, we create a repository to allow us to store and retrieve User instances. For the sake of simplicity I've used a basic HashMap-backed repository.
The code for the repository is as follows:

package org.exparity.expectamundo.sample.mapper;

import java.util.*;

public class UserRepository {

    private Map<String, User> userMap = new HashMap<>();

    public List<User> getAll() {
        return new ArrayList<>(userMap.values());
    }

    public void addUser(final User user) {
        this.userMap.put(user.getUsername(), user);
    }

    public User getUserByUsername(final String username) {
        return userMap.get(username);
    }
}

We then write a unit test to confirm the behaviour of the repository:

package org.exparity.expectamundo.sample.mapper;

import java.util.Date;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import static org.exparity.dates.en.FluentDate.AUG;
import static org.exparity.expectamundo.Expectamundo.*;

public class UserRepositoryTest {

    private static String FIRST_NAME = "John";
    private static String SURNAME = "Smith";
    private static String USERNAME = "JohnSmith";
    private static Date DATE_OF_BIRTH = AUG(9, 1975);
    private static User EXPECTED_USER;

    static {
        EXPECTED_USER = prototype(User.class);
        expect(EXPECTED_USER.getCreateTs()).isWithin(1, TimeUnit.SECONDS, new Date());
        expect(EXPECTED_USER.getFirstName()).isEqualTo(FIRST_NAME);
        expect(EXPECTED_USER.getSurname()).isEqualTo(SURNAME);
        expect(EXPECTED_USER.getUsername()).isEqualTo(USERNAME);
        expect(EXPECTED_USER.getUserId()).isNull();
        expect(EXPECTED_USER.getDateOfBirth()).isComparableTo(DATE_OF_BIRTH);
    }

    @Test
    public void canGetAll() {
        User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH);
        UserRepository repos = new UserRepository();
        repos.addUser(user);
        expectThat(repos.getAll()).contains(EXPECTED_USER);
    }

    @Test
    public void canGetByUsername() {
        User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH);
        UserRepository repos = new UserRepository();
        repos.addUser(user);
        expectThat(repos.getUserByUsername(USERNAME)).matches(EXPECTED_USER);
    }
}

The test shows how the prototype, once constructed, can be used to perform a deep verification of an object and, if desired, can be re-used in multiple tests. The hamcrest equivalent is to write a custom matcher for the User object or, as below for flat objects, to use a multi-matcher. (Note there are a number of ways to write the matcher; the one below I felt was the most terse example.)
package org.exparity.expectamundo.sample.mapper;

import java.util.Date;
import java.util.concurrent.TimeUnit;
import org.hamcrest.*;
import org.junit.Test;
import static org.exparity.dates.en.FluentDate.AUG;
import static org.exparity.hamcrest.BeanMatchers.hasProperty;
import static org.exparity.hamcrest.date.DateMatchers.*;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;

public class UserRepositoryHamcrestTest {

    private static String FIRST_NAME = "John";
    private static String SURNAME = "Smith";
    private static String USERNAME = "JohnSmith";
    private static Date DATE_OF_BIRTH = AUG(9, 1975);

    private static final Matcher<User> EXPECTED_USER = Matchers.allOf(
            hasProperty("CreateTs", within(1, TimeUnit.SECONDS, new Date())),
            hasProperty("FirstName", equalTo(FIRST_NAME)),
            hasProperty("Surname", equalTo(SURNAME)),
            hasProperty("Username", equalTo(USERNAME)),
            hasProperty("UserId", nullValue()),
            hasProperty("DateOfBirth", sameDay(DATE_OF_BIRTH)));

    @Test
    public void canGetAll() {
        User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH);
        UserRepository repos = new UserRepository();
        repos.addUser(user);
        assertThat(repos.getAll(), hasItem(EXPECTED_USER));
    }

    @Test
    public void canGetByUsername() {
        User user = new User(USERNAME, FIRST_NAME, SURNAME, DATE_OF_BIRTH);
        UserRepository repos = new UserRepository();
        repos.addUser(user);
        assertThat(repos.getUserByUsername(USERNAME), is(EXPECTED_USER));
    }
}

In comparison, this hamcrest-based test matches the eXpectamundo test in compactness but not in type safety. A type-safe matcher can be created which checks each property individually, but that would mean considerably more code for no benefit over the eXpectamundo equivalent. The error reporting on failure is also clear and intuitive for the eXpectamundo test, less so for the hamcrest equivalent. (Again, an equivalent descriptive test can be written using hamcrest but will require much more code.) An example of the error reporting is below, where the surname is returned in place of the first name:

java.lang.AssertionError: Expected a list containing a User with properties:
    getCreateTs() is expected within 1 seconds of Fri Mar 06 17:29:52 GMT 2015
    getFirstName() is equal to John
    getSurname() is equal to Smith
    getUsername() is equal to JohnSmith
    getUserId() is null
    getDateOfBirth() is comparable to Sat Aug 09 00:00:00 BST 1975
but actual list contains:
    User containing properties
    getFirstName() is Smith

Summary

In summary, eXpectamundo offers a new approach to performing verification of models during testing. It provides a type-safe interface for setting expectations, making the creation of deep model tests, especially in an IDE with auto-complete, particularly simple. Failures are also reported with a clear, easy-to-understand error trace. Full details of eXpectamundo and the other expectations and features it supports are available on the eXpectamundo page on GitHub. The example code is also available on GitHub.

Try it out

To try eXpectamundo out for yourself, include the dependency in your maven pom or other dependency manager:

<dependency>
    <groupId>org.exparity</groupId>
    <artifactId>expectamundo</artifactId>
    <version>0.9.15</version>
    <scope>test</scope>
</dependency>
March 12, 2015
by Stewart Bissett
· 7,842 Views
Using Java 8 Lambda Expressions in Java 7 or Older
I think nobody denies the usefulness of lambda expressions, introduced by Java 8. However, many projects are stuck with Java 7 or even older versions. Upgrading can be time consuming and costly, and if third-party components are incompatible with Java 8, upgrading might not be possible at all. Besides that, the whole Android platform is stuck on Java 6 and 7. Nevertheless, there is still hope for lambda expressions! Retrolambda provides a backport of lambda expressions for Java 5, 6 and 7. From the Retrolambda documentation:

Retrolambda lets you run Java 8 code with lambda expressions and method references on Java 7 or lower. It does this by transforming your Java 8 compiled bytecode so that it can run on a Java 7 runtime. After the transformation they are just a bunch of normal .class files, without any additional runtime dependencies.

To get Retrolambda running, you can use the Maven or Gradle plugin. If you want to use lambda expressions on Android, you only have to add the following lines to your Gradle build files:

/build.gradle:

buildscript {
    dependencies {
        classpath 'me.tatarka:gradle-retrolambda:2.4.0'
    }
}

/app/build.gradle:

apply plugin: 'com.android.application'

// Apply the retrolambda plugin after the Android plugin
apply plugin: 'retrolambda'

android {
    compileOptions {
        // change compatibility to Java 8 to get Java 8 IDE support
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
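With the plugin applied, ordinary Java 8 lambda syntax compiles as usual and is then rewritten to Java 7-compatible bytecode. A minimal sketch of what this enables (an illustration, not from the Retrolambda docs; note that Retrolambda backports the language feature only, so new Java 8 APIs such as streams or Iterable.forEach remain unavailable on the old runtime):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LambdaDemo {

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("Linus", "Ada", "Grace"));

        // A lambda implementing the pre-Java-8 Comparator interface; Retrolambda
        // rewrites the lambda call site into bytecode a Java 7 VM accepts.
        Collections.sort(names, (a, b) -> a.compareTo(b));

        // Lambdas targeting any existing single-method interface work the same way.
        Runnable task = () -> System.out.println(names);
        task.run();
    }
}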
March 11, 2015
by Michael Scharhag
· 10,223 Views
ExecutorService vs ExecutorCompletionService in Java
Suppose we have a list of four tasks: Task A, Task B, Task C and Task D, which perform some complex computation and result in an integer value. These tasks may take a random time depending upon various parameters. We can submit these tasks to the executor as:

ExecutorService executorService = Executors.newFixedThreadPool(4);

List<Future<Integer>> futures = new ArrayList<>();
futures.add(executorService.submit(A));
futures.add(executorService.submit(B));
futures.add(executorService.submit(C));
futures.add(executorService.submit(D));

Then we can iterate over the list to get the computed result of each future:

for (Future<Integer> future : futures) {
    Integer result = future.get();
    // rest of the code here.
}

Note that future.get() blocks until that particular task completes, so if Task A is the slowest we wait for it even when the results of B, C and D are already available. Similar functionality can also be achieved using ExecutorCompletionService:

ExecutorService executorService = Executors.newFixedThreadPool(4);
CompletionService<Integer> executorCompletionService = new ExecutorCompletionService<>(executorService);

Then again we can submit the tasks, and this time get the results in the order in which the tasks complete:

List<Future<Integer>> futures = new ArrayList<>();
futures.add(executorCompletionService.submit(A));
futures.add(executorCompletionService.submit(B));
futures.add(executorCompletionService.submit(C));
futures.add(executorCompletionService.submit(D));

for (int i = 0; i < futures.size(); i++) {
    Integer result = executorCompletionService.take().get();
    // rest of the code here.
}

This is useful, for example, when we only care about the first result that arrives. The javadoc of ExecutorCompletionService shows exactly that usage:

void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException {
    CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e);
    int n = solvers.size();
    List<Future<Result>> futures = new ArrayList<Future<Result>>(n);
    Result result = null;
    try {
        for (Callable<Result> s : solvers)
            futures.add(ecs.submit(s));
        for (int i = 0; i < n; ++i) {
            try {
                Result r = ecs.take().get();
                if (r != null) {
                    result = r;
                    break;
                }
            } catch (ExecutionException ignore) {}
        }
    } finally {
        for (Future<Result> f : futures)
            f.cancel(true);
    }
    if (result != null)
        use(result);
}

In the above example, the moment we get a result we break out of the loop and cancel all the other futures. One important thing to note is that the implementation of ExecutorCompletionService contains a queue of results. We need to remember the number of tasks we have added to the service and then use take or poll to drain the queue, otherwise a memory leak will occur. Some people use the Future returned by submit to process results, and this is NOT correct usage. There is one interesting solution provided by Dr. Heinz M. Kabutz here.

Peeking into ExecutorCompletionService

When we look inside the code for this, we observe that it makes use of Executor, AbstractExecutorService (default implementations of the ExecutorService execution methods) and BlockingQueue (the actual instance is of class LinkedBlockingQueue<Future<V>>):

public class ExecutorCompletionService<V> implements CompletionService<V> {
    private final Executor executor;
    private final AbstractExecutorService aes;
    private final BlockingQueue<Future<V>> completionQueue;
    // Remaining code..
}

Another important thing to observe is the class QueueingFuture, which extends FutureTask; in its done() method the result is pushed to the queue:

private class QueueingFuture extends FutureTask<Void> {
    QueueingFuture(RunnableFuture<V> task) {
        super(task, null);
        this.task = task;
    }
    protected void done() {
        completionQueue.add(task);
    }
    private final Future<V> task;
}

For the curious ones, class FutureTask is the base implementation of the Future interface, with methods to start and cancel a computation, query to see if the computation is complete, and retrieve the result of the computation.
And the constructor takes a RunnableFuture as a parameter, which is again an interface that extends the Future interface and adds only one method, run:

public interface RunnableFuture<V> extends Runnable, Future<V> {
    /**
     * Sets this Future to the result of its computation
     * unless it has been cancelled.
     */
    void run();
}

That is all for now. Enjoy!!
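To see the completion-order behaviour concretely, here is a small self-contained sketch (task durations invented for illustration) that submits four tasks finishing in reverse submission order and prints the results as they complete:

import java.util.concurrent.*;

public class CompletionOrderDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<Integer> ecs = new ExecutorCompletionService<>(pool);

        // Submit tasks sleeping 4s, 3s, 2s, 1s; all four start immediately.
        for (int i = 4; i >= 1; i--) {
            final int seconds = i;
            ecs.submit(() -> {
                TimeUnit.SECONDS.sleep(seconds);
                return seconds;
            });
        }

        // take() hands back futures as tasks finish: prints 1, 2, 3, 4,
        // regardless of the order in which the tasks were submitted.
        for (int i = 0; i < 4; i++) {
            System.out.println("Completed: " + ecs.take().get());
        }
        pool.shutdown();
    }
}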
March 8, 2015
by Akhil Mittal
· 40,002 Views · 6 Likes
Do it in Java 8: Recursive lambdas
Searching the Internet for recursive lambdas, I found no relevant information: only very old posts talking about how to create recursive lambdas in ways that don't work with the final version of Java 8, or more recent (only two years old) posts explaining that this possibility had been removed from the JSR specification in October 2012, because "it was felt that there was too much work involved to be worth supporting a corner-case with a simple workaround." Although it might have been removed from the spec, it is perfectly possible to create recursive lambdas. Lambdas are often used to create anonymous functions. This is not related to the fact that lambdas are implemented through anonymous classes. It is perfectly possible to create a named function, such as:

UnaryOperator<Integer> square = x -> x * x;

We may then apply this function as:

Integer x = square.apply(4);

If we want to create a recursive factorial function, we might want to write:

UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * factorial.apply(x - 1);

But this won't compile, because we can't use factorial while it is being defined. The compiler complains that "Cannot reference a field before it is defined". One possible solution is to first declare the field and then initialize it later, in the constructor or in an initializer:

UnaryOperator<Integer> factorial;
{
    factorial = i -> i == 0 ? 1 : i * factorial.apply(i - 1);
}

This works, but it is not very nice. There is, however, a much simpler way. Just add this. before the name of the function, as in:

UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * this.factorial.apply(x - 1);

Not only does this work, it even allows making the reference final:

final UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * this.factorial.apply(x - 1);

If you prefer a static field, just replace this with the name of the class:

static final UnaryOperator<Integer> factorial = x -> x == 0 ? 1 : x * MyClass.factorial.apply(x - 1);

Interesting things to note: this function will silently produce an arithmetic overflow for factorial(26), producing a negative result. It will produce 0 for factorial(66) and over, until around 3000, where it will overflow the stack, since recursion is implemented on the stack. If you try to find the exact limit, you may be surprised to see that it sometimes happens and sometimes not, for the same values. Try the following example:

public class Test {

    static final UnaryOperator<Integer> count = x -> x == 0 ? 0 : x + Test.count.apply(x - 1);

    public static void main(String[] args) {
        for (int i = 0;; i++) {
            System.out.println(i + ": " + count.apply(i));
        }
    }
}

Here's the kind of result you might get:

...
18668: 174256446
18669: 174275115
18670: 174293785
18671: 174312456
Exception in thread "main" java.lang.StackOverflowError

You could think that Java is able to handle 18671 levels of recursion, but it is not. Try to call count(4000) and you will get a stack overflow. Obviously, Java is memoizing results on each loop execution, allowing it to go much further than with a single call. Of course, it is possible to push the limits much further by implementing recursion on the heap through the use of trampolining. Another interesting point is that the same function implemented as a method uses much less stack space:

static int count(int x) {
    return x == 0 ? 0 : x + count(x - 1);
}

This method overflows the stack only around count(6200).
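The trampolining mentioned above trades stack frames for heap objects: each recursive step returns a description of the next step, and a plain loop drives the steps, so stack depth stays constant. A minimal sketch follows; the TailCall interface is hand-rolled for illustration and is not a JDK type:

import java.math.BigInteger;
import java.util.function.Supplier;

// A computation step: either a final result or a deferred next step.
interface TailCall<T> {

    static <T> TailCall<T> done(T value) {
        return new TailCall<T>() {
            public boolean isDone() { return true; }
            public T result() { return value; }
            public TailCall<T> next() { throw new IllegalStateException(); }
        };
    }

    static <T> TailCall<T> call(Supplier<TailCall<T>> next) {
        return new TailCall<T>() {
            public boolean isDone() { return false; }
            public T result() { throw new IllegalStateException(); }
            public TailCall<T> next() { return next.get(); }
        };
    }

    boolean isDone();
    T result();
    TailCall<T> next();

    // Drive the steps iteratively: constant stack depth, any recursion depth.
    default T invoke() {
        TailCall<T> step = this;
        while (!step.isDone()) {
            step = step.next();
        }
        return step.result();
    }
}

public class Trampoline {

    static TailCall<BigInteger> count(BigInteger acc, int x) {
        return x == 0
                ? TailCall.done(acc)
                : TailCall.call(() -> count(acc.add(BigInteger.valueOf(x)), x - 1));
    }

    public static void main(String[] args) {
        // Succeeds far beyond the few-thousand-frame limit of stack recursion.
        System.out.println(count(BigInteger.ZERO, 1_000_000).invoke());
    }
}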
March 6, 2015
by Pierre-Yves Saumont
· 35,157 Views · 3 Likes
Exposing and Consuming SOAP Web Service Using Apache Camel-CXF Component and Spring
Let's take the customer endpoint from my earlier article. Here I am going to use the Apache Camel-CXF component to expose this customer endpoint as a web service.

@WebService(serviceName = "customerService")
public interface CustomerService {
    public Customer getCustomerById(String customerId);
}

public class CustomerEndpoint implements CustomerService {

    private CustomerEndPointService service;

    @Override
    public Customer getCustomerById(String customerId) {
        Customer customer = service.getCustomerById(customerId);
        return customer;
    }
}

Exposing the service using the Camel-CXF component: remember to specify the schema location and namespace in the Spring context file.

Consuming a SOAP web service using Camel-CXF: say you have a SOAP web service at the address http://localhost:8181/OrderManagement/order. Then you can invoke this web service from a Camel route; please see the code snippet below. In a camel-cxf component you can also specify the data format for an endpoint. Hope this helps you to create a SOAP web service using the Camel-CXF component.
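As an illustration of the wiring described above, a camel-cxf route in the Java DSL might look like the following sketch (the addresses, service class names and options are assumptions, not the article's exact configuration):

import org.apache.camel.builder.RouteBuilder;

// A sketch of exposing and consuming SOAP endpoints with the camel-cxf component.
public class CustomerRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Expose CustomerService as a SOAP endpoint; the cxf: URI carries the
        // address and the service endpoint interface (SEI) to publish.
        from("cxf://http://localhost:8080/CustomerManagement/customer"
                + "?serviceClass=com.example.CustomerService")     // assumed package
            .to("bean:customerEndpoint");

        // Consume an external SOAP service from a route; dataFormat=POJO (the
        // default) hands the operation parameters to the route as objects.
        from("direct:orderRequest")
            .to("cxf://http://localhost:8181/OrderManagement/order"
                + "?serviceClass=com.example.OrderService&dataFormat=POJO");
    }
}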
March 5, 2015
by Roshan Thomas
· 52,425 Views
Why I Use OrientDB on Production Applications
Like many other Java developers, when I start a new Java development project that requires a database, I have hopes and dreams of what my database looks like:

Java API (of course)
Embeddable
Pure Java
Simple jar file for inclusion in my project
Database stored in a directory on disk
Faster than a rocket

First I'm going to review these points, and then I'm going to talk about the database I chose for my latest project, which is in production now with hundreds of users accessing the web application each month.

What I Want from My Database

Here's what I'm looking for in my database. These are the things that literally make me happy and joyous when writing code.

Java API

I code in Java. It's natural for me to want to use a modern Java API for my database work.

Embeddable

My development productivity and programming enjoyment skyrocket when my database is embedded. The database starts and stops with my application. It's easy to destroy my database and restart from scratch. I can upgrade my database by updating my database jar file. It's easy to deploy my application into testing and production, because there's no separate database server to start up and manage. (I know about the issue with clustering and an embedded database, but I'll get to that.)

Pure Java

Back when I developed software that would be deployed on all manner of hardware, I was a stickler that all my code be pure Java, so that I could be confident that my code would run wherever customers and users deployed it. In this day of SaaS, I'm less picky. I develop on the Mac. I test and run in production on Linux. Those are the systems I care about, so if my database has some platform-specific code in it to make it run fast and well, I'm fine with that. Just as long as that platform-specific configuration is not exposed to me as the developer.

Simple Jar File for Inclusion in My Project

I really just want one database jar file to add to my project. And I don't want that jar file messing with my code or the dependencies I include in my project. If the database uses Guava 1.2, and I'm using Guava 0.8, that can mess me up. I want my database to not interfere with jars that I use by introducing newer or older versions of class files that I already reference in my project's jars.

Database Stored in a Directory on Disk

I like to destroy my database by deleting a directory. I like to run multiple, simultaneous databases by configuring each database to use a separate directory. That makes me super productive during development, and it makes it more fun for me to program against a database.

Faster Than a Rocket

I think that's just a given.

My Latest Project That Needs a Database

My latest project is Floify.com. Floify is a Mortgage Borrower Portal, automating the process of collecting mortgage loan documents from borrowers and emailing milestone loan status updates to real estate agents and borrowers. Mortgage loan originators use Floify to automate the labor-intensive parts of their loan processes. The web application receives about 500 unique visitors per month. Floify experienced 28% growth in January 2015. Floify's vital statistics are:

38,301 loan documents under management
3,619 registered users
3,113 loan packages under management

The Database I Chose for My Latest Project

When I started Floify, I looked for a database that met all the criteria I've described above. I decided against databases that were server-based (Postgres, etc). I decided against databases that weren't Java-based (MongoDB, etc).
I decided against databases that didn't support ACID transactions. I narrowed my choices to OrientDB and Neo4j. It's been a couple of years since that decision process occurred, but I distinctly remember a few reasons why I ultimately chose OrientDB over Neo4j:

Performance benchmarks for OrientDB were very impressive.
The OrientDB development team was very active.
Cost. OrientDB is free. Neo4j cost more than what I was willing to pay or what I could afford; I forget which it was.

My Favourite OrientDB Features

Here are some of my favourite features in OrientDB. These are not competitive advantages of OrientDB; it's just some of the things that make me happy when coding against an embeddable database.

I can create the database in code.
I don't have to use SQL for querying, but most of the time, I do. I already know SQL, and it's just easy for me.
I use the document database, and it's very pleasant inserting new documents in Java.
I can store multi-megabyte binary objects directly in the database.
My database is stored in a directory on disk.
When scalability demands it, I can upgrade to a two-server distributed database. I haven't been there yet.
Speed. For me, OrientDB is very fast, and in the few years I've been using it, it's become faster.

OrientDB doesn't come in a single jar file, as would be my ideal. I have to include a few different jars, but that's an easy tradeoff for me.

Future

In the future, as Floify's performance and scalability needs demand it, I'll investigate a multi-server database configuration on OrientDB. In the meantime, I'm preparing to upgrade to OrientDB 2.0, which was recently released and promises even more speed. Go speed. :-)
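For a flavour of the embedded, code-first workflow described above, here is a minimal sketch using OrientDB's document API (the database path, class and field names are illustrative; this is not Floify's code):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class OrientDemo {

    public static void main(String[] args) {
        // "plocal" stores the database in a plain directory on disk.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/demo-db");
        if (!db.exists()) {
            db.create();           // create the database entirely from code
        } else {
            db.open("admin", "admin");
        }
        try {
            // Insert a document; no schema is required up front.
            ODocument user = new ODocument("User");
            user.field("name", "John Smith");
            user.field("email", "john@example.com");
            user.save();
        } finally {
            db.close();            // the database stops with the application
        }
    }
}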
March 5, 2015
by Dave Sims
· 17,978 Views · 6 Likes
Determining File Types in Java
Programmatically determining the type of a file can be surprisingly tricky and there have been many content-based file identification approaches proposed and implemented.
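One approach in this space ships with the JDK itself since Java SE 7: java.nio.file.Files.probeContentType. Its result is platform- and implementation-dependent and may be null when no installed detector recognises the file (the file name below is illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileTypeProbe {

    public static void main(String[] args) throws IOException {
        // Consults installed FileTypeDetector implementations; accuracy
        // varies across platforms and JDKs.
        Path path = Paths.get(args.length > 0 ? args[0] : "example.pdf");
        String contentType = Files.probeContentType(path);
        System.out.println(path + " -> " + contentType); // e.g. "application/pdf", or null
    }
}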
March 4, 2015
by Dustin Marx
· 168,573 Views · 8 Likes
Using JUnit for Something Else
junit != unit test

JUnit is the Java unit testing framework. We use it for unit testing usually, but many times we use it to execute integration tests as well. The major difference is that unit tests test individual units, while integration tests test how the different classes work together. This way integration tests cover a longer execution chain. This means that they may discover more errors than unit tests, but at the same time they usually run for longer, and it is harder to locate the bug if a test fails. If you, as a developer, are aware of these differences, there is nothing wrong with using JUnit to execute non-unit tests. I have seen examples in production code where the JUnit framework was used to execute system tests, where the execution chain of the test included external service calls over the network. JUnit is just a tool, so still, if you are aware of the drawbacks, there is nothing inherently wrong with it. However, in the actual case the JUnit tests were executed in the normal Maven test phase, and once the external service went down, the code failed to build. That is bad, clearly showing that the developer creating the code was not aware of the big picture that includes the external services and the build process. After having said all that, let me tell you a different story and join the two threads later.

We speak languages… many

Our programs have user interfaces, most of the time. The interface contains texts, usually in different languages. Usually in English and the local language where the code is targeted. The text literals are usually externalized, stored in "properties" files. Having multiple languages, we have a separate properties file for each language, each defining a literal text for an id. For example we have the files

messages-de.properties
messages-fr.properties
messages-en.properties
messages-pl.properties
messages.properties

and in the Java code we were accessing these via the Spring MessageSource, calling

String label = messageSource.getMessage("my.label.name", null, "label", locale);

We, programmers, are kind of lazy

The problems came when we did not have some of the translations of the texts. The job of specifying the actual text of the labels in different languages does not belong to the programmers. Programmers are good at speaking Java, C and other programming languages, but are not really shining when it comes to natural languages. Most of us just do not speak all the languages needed. There are people who have the job to translate the text, usually different people for different languages. Some of them work faster, others slower, and the coding just could not wait for the translations to be ready. For the time till the final translation is available we used temporary strings.

All temporary solutions become final. The temporary strings, which were just the English version, got into the release.

Process and discipline: failed

To avoid that, we implemented a process. We opened a Jira issue for each translation. When the translation was ready, it got attached to the issue. When it got edited into the properties file and committed to git, the issue was closed. It was such a burden and overhead that programmers were slowed down by it, and less disciplined programmers just did not follow the process. Generally it was a bad idea. We concluded that not having a translation in the properties files is not the real big issue. The issue is not knowing that it was missing and creating a release. So we needed a process to check the correctness of the properties files before release.
Lightweight process and control

Checking manually would have been cumbersome. We created JUnit tests that compared the different language files and checked that there is no key missing from one that is present in another, and that the values are not the same as the default English version. The JUnit test was to be executed each time the project was to be released. Then we realized that some of the values really are the same as the English version, so we started to use the letter 'X' at the first position in the language files to signal a label waiting for a real translated value. At this point somebody suggested that the JUnit test could be replaced by a simple 'grep'. It was almost true, except we still wanted to discover missing keys and to have the test run automatically during the release process.

Summary, and take-away

The JUnit framework was designed to execute unit tests, but frameworks can and will be used not only for the purpose they were designed for. (Side note: this is actually true for any tool, be it as simple as a hammer or as complex as default methods in Java interfaces.) You can use JUnit to execute tasks that can be executed during the testing phase of build and/or release. The tasks should execute fast, since the execution time adds to the build/release cycle. They should not depend on external sources, especially those that are reachable over the network, because these going down may also make the build process fail. When something is not acceptable for the build, use the JUnit API to signal failure. Do not just write warnings. Nobody reads warnings.
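A minimal sketch of such a key-completeness check (the resource paths and the loading helper are assumed for illustration, not taken from the article):

import static org.junit.Assert.assertTrue;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import org.junit.Test;

public class TranslationCompletenessTest {

    private Properties load(String name) throws IOException {
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream("src/main/resources/" + name)) {
            p.load(in);
        }
        return p;
    }

    @Test
    public void germanFileContainsEveryKeyOfTheDefaultFile() throws IOException {
        Properties defaults = load("messages.properties");
        Properties german = load("messages-de.properties");
        // Fail the build naming the missing key, instead of merely warning.
        for (Object key : defaults.keySet()) {
            assertTrue("missing translation for key: " + key, german.containsKey(key));
        }
        // A second pass could assert that values differ from the English
        // defaults, honouring the 'X' prefix convention described above.
    }
}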
March 3, 2015
by Peter Verhas
· 5,006 Views · 1 Like
HTML/CSS/JavaScript GUI in Java Swing Application
The following code demonstrates how simple it is to embed a web browser component into your Java Swing/AWT/JavaFX desktop application.
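The article's own listing is not reproduced here; one common way to host an HTML/CSS/JavaScript UI inside Swing is to embed JavaFX's WebView via a JFXPanel, as in the following sketch (an illustration of the general approach, not necessarily the author's component):

import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javafx.application.Platform;
import javafx.embed.swing.JFXPanel;
import javafx.scene.Scene;
import javafx.scene.web.WebView;

public class SwingBrowser {

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("HTML GUI in Swing");
            JFXPanel fxPanel = new JFXPanel();   // bridges JavaFX into Swing
            frame.add(fxPanel);
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            // The JavaFX scene graph must be built on the JavaFX thread.
            Platform.runLater(() -> {
                WebView webView = new WebView();
                webView.getEngine().load("http://www.example.com");
                fxPanel.setScene(new Scene(webView));
            });
        });
    }
}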
March 3, 2015
by Vladimir Ikryanov
· 145,153 Views · 8 Likes
ASCII Art Generator in Java
ASCII art is a technique that uses printable characters from the ASCII standard to produce visual art. It had its purpose in history when printers lacked graphics ability, and it was also used in emails when embedding images was not yet possible. I present you a very simple ASCII art generator written in Java with configurable font and contrast. Since it was built over a few hours during the weekend, it is not optimal, but it was a fun experiment. Down below you can see the code in action and an explanation of how it works.

The algorithm

The idea is rather simple. First, we create an image of each character we want to use in our ASCII art and cache it. Then we go through the original image, and for each block of the size of the characters we search for the best fit. We do so by first doing some preprocessing of the original image: we convert the image to grayscale and apply a threshold filter. By doing so we get a black-and-white-only contrasted image that we can compare with each character and calculate the difference. We then simply pick the most similar character, and do so until the whole image is converted. It is possible to experiment with the threshold value to impact contrast and enhance the final result as needed. A very simple method to accomplish this is to set the red, green and blue values to the average of all three:

red = green = blue = (red + green + blue) / 3

If that value is lower than a threshold value, we make it white; otherwise we make it black. Finally, we compare that image with each character, pixel by pixel, and calculate the average error. This is demonstrated in the images and snippet below:

int r1 = (charPixel >> 16) & 0xff;
int g1 = (charPixel >> 8) & 0xff;
int b1 = charPixel & 0xff;

int r2 = (sourcePixel >> 16) & 0xff;
int g2 = (sourcePixel >> 8) & 0xff;
int b2 = sourcePixel & 0xff;

int thresholded = (r2 + g2 + b2) / 3 < threshold ? 0 : 255;

error = Math.sqrt((r1 - thresholded) * (r1 - thresholded)
        + (g1 - thresholded) * (g1 - thresholded)
        + (b1 - thresholded) * (b1 - thresholded));

Since colors are stored in a single integer, we first extract the individual color components and perform the calculations I explained. Another challenge was to measure character dimensions accurately and to draw the characters centered. After a lot of experimentation with different methods, I finally found this good enough:

Rectangle rect = new TextLayout(Character.toString((char) i), fm.getFont(),
        fm.getFontRenderContext()).getOutline(null).getBounds();

g.drawString(character, 0, (int) (rect.getHeight() - rect.getMaxY()));

You can download the complete source code from the GitHub repo. Here are a few examples with different font sizes and thresholds:
February 28, 2015
by Ivan Korhner
· 14,336 Views · 1 Like
Standing Up a Local Netflix Eureka
Here I will consider two different ways of standing up a local instance of Netflix Eureka. If you are not familiar with Eureka, it provides a central registry where (micro)services can register themselves, and client applications can use this registry to look up specific instances hosting a service and make the service calls.

Approach 1: Native Eureka Library

The first way is to simply use the archive file generated by the Netflix Eureka build process:

1. Clone the Eureka source repository here: https://github.com/Netflix/eureka
2. Run "./gradlew build" at the root of the repository; this should build cleanly, generating a war file in the eureka-server/build/libs folder.
3. Grab this file, rename it to "eureka.war" and place it in the webapps folder of either Tomcat or Jetty. For this exercise I have used Jetty.
4. Start Jetty. By default Jetty will boot up at port 8080; however, I wanted to instead bring it up at port 8761, so you can start it up this way: "java -jar start.jar -Djetty.port=8761".

The server should start up cleanly and can be verified at this endpoint: "http://localhost:8761/eureka/v2/apps".

Approach 2: Spring-Cloud-Netflix

Spring-Cloud-Netflix provides a very neat way to bootstrap Eureka. To bring up a Eureka server using Spring-Cloud-Netflix, the approach that I followed was to clone the sample Eureka server application available here: https://github.com/spring-cloud-samples/eureka

1. Clone this repository.
2. From the root of the repository run "mvn spring-boot:run", and that is it!

The server should boot up cleanly, and the REST endpoint should come up here: "http://localhost:8761/eureka/apps". As a bonus, Spring-Cloud-Netflix provides a neat UI showing the various applications which have registered with Eureka, at the root of the webapp at "http://localhost:8761/".

Just a small issue to be aware of: note that the context URLs are a little different in the two cases, "eureka/v2/apps" vs "eureka/apps". This can be adjusted in the configuration of the services which register with Eureka.

Conclusion

Your mileage with these approaches may vary. I have found Spring-Cloud-Netflix a little unstable at times, but it has mostly worked out well for me. The documentation at the Spring-Cloud site is also far more exhaustive than the one provided at the Netflix Eureka site.
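For a sense of how little code approach 2 involves, the sample application boils down to a Spring Boot main class along these lines (a sketch; the sample's actual class name and configuration may differ):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// @EnableEurekaServer pulls in the registry, its REST endpoints and the dashboard UI.
@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }
}

The sample also sets properties such as server.port=8761 and tells the server not to register with itself in its application.yml.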
February 26, 2015
by Biju Kunjummen
· 13,045 Views
A JAXB Nuance: String Versus Enum from Enumerated Restricted XSD String
Although Java Architecture for XML Binding (JAXB) is fairly easy to use in nominal cases (especially since Java SE 6), it also presents numerous nuances. Some of the common nuances are due to the inability to exactly match (bind) XML Schema Definition (XSD) types to Java types. This post looks at one specific example of this that also demonstrates how different XSD constructs that enforce the same XML structure can lead to different Java types when the JAXB compiler generates the Java classes.

The next code listing, for Food.xsd, defines a schema for food types. The XSD mandates that valid XML will have a root element called "Food" with three nested elements "Vegetable", "Fruit", and "Dessert". Although the approach used to specify the "Vegetable" and "Dessert" elements is different than the approach used to specify the "Fruit" element, both approaches result in similar "valid XML." The "Vegetable" and "Dessert" elements are declared directly as elements of the prescribed simpleTypes defined later in the XSD. The "Fruit" element is defined via reference (ref=) to another defined element that consists of a simpleType.

Food.xsd

Although the Vegetable and Dessert elements are defined in the schema differently than Fruit, the resulting valid XML is the same. A valid XML file is shown next in the code listing for food1.xml.

food1.xml

<Food>
   <Vegetable>Spinach</Vegetable>
   <Fruit>Watermelon</Fruit>
   <Dessert>Pie</Dessert>
</Food>

At this point, I'll use a simple Groovy script to validate the above XML against the above XSD. The code for this Groovy XML validation script (validateXmlAgainstXsd.groovy) is shown next.

validateXmlAgainstXsd.groovy

#!/usr/bin/env groovy

// validateXmlAgainstXsd.groovy
//
// Accepts paths/names of two files. The first is the XML file to be validated
// and the second is the XSD against which to validate that XML.

if (args.length < 2)
{
   println "USAGE: groovy validateXmlAgainstXsd.groovy <xmlFile> <xsdFile>"
   System.exit(-1)
}

String xml = args[0]
String xsd = args[1]

import javax.xml.validation.Schema
import javax.xml.validation.SchemaFactory
import javax.xml.validation.Validator

try
{
   SchemaFactory schemaFactory = SchemaFactory.newInstance(javax.xml.XMLConstants.W3C_XML_SCHEMA_NS_URI)
   Schema schema = schemaFactory.newSchema(new File(xsd))
   Validator validator = schema.newValidator()
   validator.validate(new javax.xml.transform.stream.StreamSource(xml))
}
catch (Exception exception)
{
   println "\nERROR: Unable to validate ${xml} against ${xsd} due to '${exception}'\n"
   System.exit(-1)
}

println "\nXML file ${xml} validated successfully against ${xsd}.\n"

The next screen snapshot demonstrates running the above Groovy XML validation script against food1.xml and Food.xsd.

The objective of this post so far has been to show how different approaches in an XSD can lead to the same XML being valid. Although these different XSD approaches prescribe the same valid XML, they lead to different Java class behavior when JAXB is used to generate classes based on the XSD. The next screen snapshot demonstrates running the JDK-provided JAXB xjc compiler against Food.xsd to generate the Java classes.

The output from the JAXB generation shown above indicates that Java classes were created for the "Vegetable" and "Dessert" elements but not for the "Fruit" element. This is because "Vegetable" and "Dessert" were defined differently than "Fruit" in the XSD. The next code listing is for the Food.java class generated by the xjc compiler.
From this we can see that the generated Food.java class references specific generated Java types for Vegetable and Dessert, but simply references a generic Java String for Fruit.

Food.java (generated by the JAXB xjc compiler)

//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSchemaType;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for anonymous complex type.
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = { "vegetable", "fruit", "dessert" })
@XmlRootElement(name = "Food")
public class Food {

    @XmlElement(name = "Vegetable", required = true)
    @XmlSchemaType(name = "string")
    protected Vegetable vegetable;

    @XmlElement(name = "Fruit", required = true)
    protected String fruit;

    @XmlElement(name = "Dessert", required = true)
    @XmlSchemaType(name = "string")
    protected Dessert dessert;

    /** Gets the value of the vegetable property. */
    public Vegetable getVegetable() {
        return vegetable;
    }

    /** Sets the value of the vegetable property. */
    public void setVegetable(Vegetable value) {
        this.vegetable = value;
    }

    /** Gets the value of the fruit property. */
    public String getFruit() {
        return fruit;
    }

    /** Sets the value of the fruit property. */
    public void setFruit(String value) {
        this.fruit = value;
    }

    /** Gets the value of the dessert property. */
    public Dessert getDessert() {
        return dessert;
    }

    /** Sets the value of the dessert property. */
    public void setDessert(Dessert value) {
        this.dessert = value;
    }
}

The advantage of having specific Vegetable and Dessert classes is the additional type safety they bring as compared to a general Java String. Both Vegetable.java and Dessert.java are actually enums, because they come from enumerated values in the XSD. The two generated enums are shown in the next two code listings.

Vegetable.java (generated by the JAXB xjc compiler)

//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlEnumValue;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for Vegetable.
 */
@XmlType(name = "Vegetable")
@XmlEnum
public enum Vegetable {

    @XmlEnumValue("Carrot")
    CARROT("Carrot"),
    @XmlEnumValue("Squash")
    SQUASH("Squash"),
    @XmlEnumValue("Spinach")
    SPINACH("Spinach"),
    @XmlEnumValue("Celery")
    CELERY("Celery");

    private final String value;

    Vegetable(String v) {
        value = v;
    }

    public String value() {
        return value;
    }

    public static Vegetable fromValue(String v) {
        for (Vegetable c : Vegetable.values()) {
            if (c.value.equals(v)) {
                return c;
            }
        }
        throw new IllegalArgumentException(v);
    }
}

Dessert.java (generated by the JAXB xjc compiler)

//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.8-b130911.1802
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2015.02.11 at 10:17:32 PM MST
//

package com.blogspot.marxsoftware.foodxml;

import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlEnumValue;
import javax.xml.bind.annotation.XmlType;

/**
 * Java class for Dessert.
 */
@XmlType(name = "Dessert")
@XmlEnum
public enum Dessert {

    @XmlEnumValue("Pie")
    PIE("Pie"),
    @XmlEnumValue("Cake")
    CAKE("Cake"),
    @XmlEnumValue("Ice Cream")
    ICE_CREAM("Ice Cream");

    private final String value;

    Dessert(String v) {
        value = v;
    }

    public String value() {
        return value;
    }

    public static Dessert fromValue(String v) {
        for (Dessert c : Dessert.values()) {
            if (c.value.equals(v)) {
                return c;
            }
        }
        throw new IllegalArgumentException(v);
    }
}

Having enums generated for the XML elements ensures that only valid values for those elements can be represented in Java.

Conclusion

JAXB makes it relatively easy to map Java to XML, but because there is not a one-to-one mapping between Java and XML types, there can be some cases where the generated Java type for a particular XSD-prescribed element is not obvious. This post has shown how two different approaches to building an XSD to enforce the same basic XML structure can lead to very different results in the Java classes generated with the JAXB xjc compiler. In the example shown in this post, declaring elements in the XSD directly against simpleTypes that restrict XSD's string to a specific set of enumerated values is preferable to declaring elements as references to other elements wrapping a simpleType of restricted string enumerated values, because of the type safety that is achieved when enums are generated rather than general Java Strings.
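A schema exhibiting the two declaration styles described above would look roughly like this (a sketch based on the description and on the values in the generated enums; the Fruit enumeration values beyond "Watermelon" and the absence of a target namespace are assumptions, not taken from the post):

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

   <xs:element name="Food">
      <xs:complexType>
         <xs:sequence>
            <!-- Declared directly against named simpleTypes: JAXB generates enums -->
            <xs:element name="Vegetable" type="Vegetable"/>
            <!-- Declared by reference to another element: JAXB binds to String -->
            <xs:element ref="Fruit"/>
            <xs:element name="Dessert" type="Dessert"/>
         </xs:sequence>
      </xs:complexType>
   </xs:element>

   <!-- An element wrapping an anonymous simpleType -->
   <xs:element name="Fruit">
      <xs:simpleType>
         <xs:restriction base="xs:string">
            <xs:enumeration value="Watermelon"/>
            <xs:enumeration value="Apple"/>
         </xs:restriction>
      </xs:simpleType>
   </xs:element>

   <!-- Named simpleTypes restricting xs:string to enumerated values -->
   <xs:simpleType name="Vegetable">
      <xs:restriction base="xs:string">
         <xs:enumeration value="Carrot"/>
         <xs:enumeration value="Squash"/>
         <xs:enumeration value="Spinach"/>
         <xs:enumeration value="Celery"/>
      </xs:restriction>
   </xs:simpleType>

   <xs:simpleType name="Dessert">
      <xs:restriction base="xs:string">
         <xs:enumeration value="Pie"/>
         <xs:enumeration value="Cake"/>
         <xs:enumeration value="Ice Cream"/>
      </xs:restriction>
   </xs:simpleType>

</xs:schema>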
February 25, 2015
by Dustin Marx
· 21,012 Views · 1 Like
How to Detect Java Deadlocks Programmatically
Deadlocks are situations in which two or more actions are waiting for the others to finish, leaving all actions in a blocked state forever. They can be very hard to detect during development, and they usually require a restart of the application in order to recover. To make things worse, deadlocks usually manifest in production under the heaviest load and are very hard to spot during testing. The reason for this is that it's not practical to test all possible interleavings of a program's threads. Although some static analysis libraries exist that can help us detect possible deadlocks, it is still necessary to be able to detect them during runtime and get some information which can help us fix the issue, or alert us so we can restart our application.

Detect deadlocks programmatically using the ThreadMXBean class

Java 5 introduced ThreadMXBean, an interface that provides various monitoring methods for threads. I recommend you check all of its methods, as there are many useful operations for monitoring the performance of your application in case you are not using an external tool. The method of interest to us is findMonitorDeadlockedThreads or, if you are using Java 6, findDeadlockedThreads. The difference is that findDeadlockedThreads can also detect deadlocks caused by ownable locks (java.util.concurrent), while findMonitorDeadlockedThreads can only detect monitor locks (i.e. synchronized blocks). Since the old version is kept for compatibility purposes only, I am going to use the second version. The idea is to encapsulate periodical checking for deadlocks into a reusable component, so we can just fire it up and forget about it.

One way to implement scheduling is through the executors framework, a set of well-abstracted and very easy to use multithreading classes:

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
this.scheduler.scheduleAtFixedRate(deadlockCheck, period, period, unit);

Simple as that: we have a runnable called periodically after a certain amount of time determined by the period and time unit. Next, we want to make our utility extensible and allow clients to supply the behaviour that gets triggered after a deadlock is detected. We need a method that receives a list of objects describing the threads that are in a deadlock:

void handleDeadlock(final ThreadInfo[] deadlockedThreads);

Now we have everything we need to implement our deadlock detector class:

public interface DeadlockHandler {
    void handleDeadlock(final ThreadInfo[] deadlockedThreads);
}

public class DeadlockDetector {

    private final DeadlockHandler deadlockHandler;
    private final long period;
    private final TimeUnit unit;
    private final ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    final Runnable deadlockCheck = new Runnable() {
        @Override
        public void run() {
            long[] deadlockedThreadIds = DeadlockDetector.this.mbean.findDeadlockedThreads();
            if (deadlockedThreadIds != null) {
                ThreadInfo[] threadInfos = DeadlockDetector.this.mbean.getThreadInfo(deadlockedThreadIds);
                DeadlockDetector.this.deadlockHandler.handleDeadlock(threadInfos);
            }
        }
    };

    public DeadlockDetector(final DeadlockHandler deadlockHandler, final long period, final TimeUnit unit) {
        this.deadlockHandler = deadlockHandler;
        this.period = period;
        this.unit = unit;
    }

    public void start() {
        this.scheduler.scheduleAtFixedRate(
            this.deadlockCheck, this.period, this.period, this.unit);
    }
}

Let's test this in practice.
First, we will create a handler to output deadlocked thread information to System.err. In a real-world scenario we could use this to send an email, for example:

public class DeadlockConsoleHandler implements DeadlockHandler {

    @Override
    public void handleDeadlock(final ThreadInfo[] deadlockedThreads) {
        if (deadlockedThreads != null) {
            System.err.println("Deadlock detected!");
            Map<Thread, StackTraceElement[]> stackTraceMap = Thread.getAllStackTraces();
            for (ThreadInfo threadInfo : deadlockedThreads) {
                if (threadInfo != null) {
                    for (Thread thread : Thread.getAllStackTraces().keySet()) {
                        if (thread.getId() == threadInfo.getThreadId()) {
                            System.err.println(threadInfo.toString().trim());
                            for (StackTraceElement ste : thread.getStackTrace()) {
                                System.err.println("\t" + ste.toString().trim());
                            }
                        }
                    }
                }
            }
        }
    }
}

This iterates through all stack traces and prints the stack trace for each thread info. This way we can know exactly on which line each thread is waiting, and for which lock. This approach has one downside: it can give false alarms if one of the threads is waiting with a timeout, which can actually be seen as a temporary deadlock. Because of that, the original thread could no longer exist when we handle our deadlock, and findDeadlockedThreads will return null for such threads. To avoid possible NullPointerExceptions, we need to guard against such situations.

Finally, let's force a simple deadlock and see our system in action:

DeadlockDetector deadlockDetector = new DeadlockDetector(new DeadlockConsoleHandler(), 5, TimeUnit.SECONDS);
deadlockDetector.start();

final Object lock1 = new Object();
final Object lock2 = new Object();

Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (lock1) {
            System.out.println("Thread1 acquired lock1");
            try {
                TimeUnit.MILLISECONDS.sleep(500);
            } catch (InterruptedException ignore) {
            }
            synchronized (lock2) {
                System.out.println("Thread1 acquired lock2");
            }
        }
    }
});
thread1.start();

Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (lock2) {
            System.out.println("Thread2 acquired lock2");
            synchronized (lock1) {
                System.out.println("Thread2 acquired lock1");
            }
        }
    }
});
thread2.start();

Output:

Thread1 acquired lock1
Thread2 acquired lock2
Deadlock detected!
“Thread-1” Id=11 BLOCKED on java.lang.Object@68ab95e6 owned by “Thread-0” Id=10
	deadlock.DeadlockTester$2.run(DeadlockTester.java:42)
	java.lang.Thread.run(Thread.java:662)
“Thread-0” Id=10 BLOCKED on java.lang.Object@58fe64b9 owned by “Thread-1” Id=11
	deadlock.DeadlockTester$1.run(DeadlockTester.java:28)
	java.lang.Thread.run(Thread.java:662)

Keep in mind that deadlock detection can be an expensive operation; you should test it with your application to determine if you even need to use it, and how frequently you will check. I suggest an interval of at least several minutes, as it is not crucial to detect deadlocks more frequently than this; we don't have a recovery plan anyway, so we can only debug and fix the error, or restart the application and hope it won't happen again. If you have any suggestions about dealing with deadlocks, or a question about this solution, drop a comment below.
February 25, 2015
by Ivan Korhner
· 52,151 Views · 4 Likes
article thumbnail
Per Client Cookie Handling With Jersey
A lot of REST services use cookies as part of the authentication / authorisation scheme. This is a problem because, by default, the old Jersey client will use the singleton CookieHandler.getDefault, which in most cases will be null, and if not null will not likely work in a multithreaded server environment. (This is because, in the background, the default Jersey client uses URL.openConnection.) Now you can work around this by using the Apache HTTP Client adapter for Jersey, but this is not always available. So if you want to use the Jersey client with cookies in a server environment, you need to do a little bit of reflection to ensure you use your own private cookie jar.

final CookieHandler ch = new CookieManager();
Client client = new Client(new URLConnectionClientHandler(
    new HttpURLConnectionFactory() {
        @Override
        public HttpURLConnection getHttpURLConnection(URL uRL) throws IOException {
            HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
            try {
                Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                cookieField.setAccessible(true);
                MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                mh.bindTo(connect).invoke(ch);
            } catch (Throwable e) {
                e.printStackTrace();
            }
            return connect;
        }
    }));

This will only work if your environment is using the internal implementation of sun.net.www.protocol.http.HttpURLConnection that comes with the JDK. This appears to be the case for modern versions of WLS.

For JAX-RS 2.0 you can make a similar change using the Jersey 2.x specific ClientConfig class and HttpUrlConnectorProvider.

final CookieHandler ch = new CookieManager();
Client client = ClientBuilder.newClient(new ClientConfig()
    .connectorProvider(new HttpUrlConnectorProvider()
        .connectionFactory(new HttpUrlConnectorProvider.ConnectionFactory() {
            @Override
            public HttpURLConnection getConnection(URL uRL) throws IOException {
                HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
                try {
                    Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                    cookieField.setAccessible(true);
                    MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                    mh.bindTo(connect).invoke(ch);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
                return connect;
            }
        })));
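With the private cookie jar wired in, cookies set by the server on one call are replayed on later calls through the same client instance. A quick usage sketch with the Jersey 1.x client built in the first snippet above; the URLs, and the assumption that the first endpoint sets a session cookie, are made up for illustration:

// First call: any Set-Cookie headers land in our private CookieManager
String loginPage = client.resource("http://example.com/api/login").get(String.class);

// Second call through the same client: the stored cookies are sent back
// automatically, so the server sees the authenticated session
String profile = client.resource("http://example.com/api/profile").get(String.class);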
February 24, 2015
by Gerard Davison
· 7,977 Views
article thumbnail
Sneak Peek into the JCache API (JSR 107)
This post covers the JCache API at a high level and provides a teaser – just enough for you to (hopefully) start itching about it ;-)

In this post:
JCache overview
JCache API and implementations
Supported (Java) platforms for the JCache API
Quick look at Oracle Coherence
Fun stuff – Project Headlands (RESTified JCache by Adam Bien), JCache-related talks at JavaOne 2014, links to resources for learning more about JCache

What is JCache?

JCache (JSR 107) is a standard caching API for Java. It provides an API for applications to create and work with an in-memory cache of objects. The benefits are obvious: one does not need to concentrate on the finer details of implementing caching, and time is better spent on the core business logic of the application.

JCache components

The specification itself is very compact and surprisingly intuitive. The API defines high-level components (interfaces), some of which are listed below:

Caching Provider – used to control Cache Managers and can deal with several of them
Cache Manager – deals with create, read, destroy operations on a Cache
Cache – stores entries (the actual data) and exposes CRUD interfaces to deal with the entries
Entry – abstraction on top of a key-value pair akin to a java.util.Map

JCache implementations

JCache defines the interfaces, which are of course implemented by different vendors, a.k.a. providers:

Oracle Coherence
Hazelcast
Infinispan
ehcache
Reference Implementation – this is more for reference purposes than a production-quality implementation. It conforms to the specification, though, and you can rest assured that it passes the TCK.

From the application's point of view, all that's required is for the implementation to be present on the classpath. The API also provides a way to further fine-tune the properties specific to your provider via standard mechanisms. You should be able to track the list of JCache implementations on the JCP website.

public class JCacheUsage {
    public static void main(String[] args) {
        //bootstrap the JCache provider
        CachingProvider jcacheProvider = Caching.getCachingProvider();
        CacheManager jcacheManager = jcacheProvider.getCacheManager();

        //configure the cache
        MutableConfiguration<String, MyPreciousObject> jcacheConfig = new MutableConfiguration<>();
        jcacheConfig.setTypes(String.class, MyPreciousObject.class);

        //create the cache
        Cache<String, MyPreciousObject> cache = jcacheManager.createCache("PreciousObjectCache", jcacheConfig);

        //play around
        String key = UUID.randomUUID().toString();
        cache.put(key, new MyPreciousObject());
        MyPreciousObject inserted = cache.get(key);
        cache.remove(key);
        cache.get(key); //returns null since the key no longer exists
    }
}

JCache provider detection

JCache provider detection happens automatically when you have only a single JCache provider on the classpath. Otherwise, you can choose from the options below:

//set a JVM-level system property
-Djavax.cache.spi.cachingprovider=org.ehcache.jcache.JCacheCachingProvider

//code-level config
System.setProperty("javax.cache.spi.cachingprovider", "org.ehcache.jcache.JCacheCachingProvider");

//choose from multiple JCache providers at runtime
CachingProvider ehcacheJCacheProvider = Caching.getCachingProvider("org.ehcache.jcache.JCacheCachingProvider");

//which JCache providers do I have on the classpath?
Iterable<CachingProvider> jcacheProviders = Caching.getCachingProviders();
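One thing the usage snippet above doesn't show is entry expiry, which in practice you will almost always configure. A minimal sketch using the standard javax.cache.expiry classes; the cache name is made up, and this is an illustration rather than part of the original example:

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

//entries expire one minute after they are created
MutableConfiguration<String, MyPreciousObject> expiringConfig =
        new MutableConfiguration<String, MyPreciousObject>()
                .setTypes(String.class, MyPreciousObject.class)
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));

Cache<String, MyPreciousObject> expiringCache =
        jcacheManager.createCache("ExpiringPreciousObjectCache", expiringConfig);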
Java Platform support

JCache is compliant with Java SE 6 and above. It does not define any details in terms of Java EE integration; this does not mean that it cannot be used in a Java EE environment – it's just not standardized yet. It could not be plugged into Java EE 7 as a tried and tested standard, but it is a candidate for Java EE 8.

Project Headlands: Java EE and JCache in tandem

By none other than Adam Bien himself!
Java EE 7, Java SE 8 and JCache in action
Exposes the JCache API via JAX-RS (REST)
Uses Hazelcast as the JCache provider
Highly recommended!

Oracle Coherence

This post deals with high-level stuff with respect to JCache in general, but a few lines about Oracle Coherence will help put things in perspective:

Oracle Coherence is a part of Oracle's Cloud Application Foundation stack
It is primarily an in-memory data grid solution
It is geared towards making applications more scalable in general
What's important to know is that from version 12.1.3 onwards, Oracle Coherence includes a reference implementation for JCache (more in the next section)

JCache support in Oracle Coherence

Support for JCache implies that applications can now use a standard API to access the capabilities of Oracle Coherence. Coherence makes this possible by simply providing an abstraction over its existing interfaces (NamedCache etc.): the application deals with a standard interface (the JCache API), and the calls to that API are delegated to the existing Coherence core library implementation. Support for the JCache API also means that one does not need to use Coherence-specific APIs in the application, resulting in vendor-neutral code, which equals portability. How ironic – supporting a standard API and always keeping your competitors in the hunt ;-) But hey! That's what healthy competition and quality software are all about!

Talking of healthy competition – Oracle Coherence supports a host of other features in addition to the standard JCache-related capabilities. The Oracle Coherence distribution contains all the libraries for working with the JCache implementation, and the service definition file in coherence-jcache.jar qualifies it as a valid JCache provider implementation.

Curious about Oracle Coherence? See the Quick Starter page, the documentation and the installation guide. For further reading about the Coherence and JCache combo, see the Oracle Coherence documentation.

JCache at JavaOne 2014

A couple of great talks revolving around JCache at JavaOne 2014:
Come, Code, Cache, Compute! by Steve Millidge
Using the New JCache by Brian Oliver and Greg Luck

Hope this was fun :-) Cheers!
February 23, 2015
by Abhishek Gupta DZone Core
· 5,879 Views · 1 Like
article thumbnail
Java Message Service (JMS)—Explained
Java Message Service enables loosely coupled communication between two or more systems. It provides a reliable and asynchronous form of communication. There are two types of messaging models in JMS.

Point-to-Point Messaging Domain

Applications are built on the concept of message queues, senders, and receivers. Each message is sent to a specific queue, and receiving systems consume messages from the queues established to hold their messages. Queues retain all messages sent to them until the messages are consumed by the receiver or expire. There is only one consumer for each message. If the receiver is not available at any point, the message will remain in the message broker (the queue) and will be delivered when the consumer is available or free to process it. Also, the receiver acknowledges the consumption of each message.

Publish/Subscribe Messaging Domain

Applications send messages to a message broker called a topic. The topic publishes the message to all subscribers. A topic retains messages until they are delivered to the systems at the receiving end. Applications are loosely coupled and do not need to be on the same server; message communication is handled by the message broker, in this case the topic. A message can have multiple consumers; a consumer receives only messages published after it subscribes, and it needs to remain active in order to get new messages.

Message Sender

A message sender object is created by a session and used for sending messages to a destination queue. It implements the MessageProducer interface. First we create a connection object using the ActiveMQConnectionFactory factory object, and then we create a session object. Using the session object we set the message broker (queue) and create the message sender object. Here we are sending a map message object. Please see the code snippet for the message sender.

public class MessageSender {
    public static void main(String[] args) {
        Connection connection = null;
        try {
            Context ctx = new InitialContext();
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination destination = session.createQueue("test.message.queue");
            MessageProducer messageProducer = session.createProducer(destination);
            MapMessage message = session.createMapMessage();
            message.setString("Name", "Tim");
            message.setString("Role", "Developer");
            message.setDouble("Salary", 850000);
            messageProducer.send(message);
        } catch (Exception e) {
            System.out.println(e);
        } finally {
            if (connection != null) {
                try {
                    connection.close();
                } catch (JMSException e) {
                    System.out.println(e);
                }
            }
            System.exit(0);
        }
    }
}

Message Receiver

A message receiver object is created by a session and used for receiving messages from a queue. It implements the MessageConsumer interface. The setup remains the same as for the sender; in the case of the receiver, we use a message listener. The listener remains active and gets invoked when the receiver consumes a message from the broker. Please see the code snippets below.
public class MessageReceiver {
    public static void main(String[] args) {
        try {
            InitialContext ctx = new InitialContext();
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination destination = session.createQueue("test.message.queue");
            MessageConsumer consumer = session.createConsumer(destination);
            consumer.setMessageListener(new MapMessageListener());
            connection.start();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

Please see the code snippet for a message listener receiving a map message object.

public class MapMessageListener implements MessageListener {
    public void onMessage(Message message) {
        if (message instanceof MapMessage) {
            MapMessage mapMessage = (MapMessage) message;
            try {
                String name = mapMessage.getString("Name");
                System.out.println("Name : " + name);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        } else {
            System.out.println("Invalid Message Received");
        }
    }
}

Hope this helps you understand the basics of JMS and write production-ready message sender and receiver programs. A publish/subscribe variant of the same example is sketched below.
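The publish/subscribe model uses the same JMS API; only the destination type changes from a queue to a topic. A minimal sketch under that assumption, reusing the connection/session setup from the examples above; the topic name is made up for illustration:

// Publisher: send to a topic instead of a queue
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination topic = session.createTopic("test.message.topic");
MessageProducer publisher = session.createProducer(topic);
publisher.send(session.createTextMessage("Broadcast to all subscribers"));

// Subscriber: every active subscriber gets its own copy of each message
MessageConsumer subscriber = session.createConsumer(topic);
subscriber.setMessageListener(new MessageListener() {
    public void onMessage(Message message) {
        System.out.println("Received : " + message);
    }
});
connection.start();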
February 23, 2015
by Roshan Thomas
· 81,259 Views · 12 Likes
article thumbnail
Retry-After HTTP Header in Practice
Retry-After is a lesser-known HTTP response header.
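In short, a server responding with 429 or 503 can include Retry-After to tell the client how many seconds to back off (or an HTTP date). A minimal, illustrative client-side sketch in plain Java; the URL is hypothetical, and the date form of the header is not handled:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class RetryAfterExample {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/api").openConnection();
        if (conn.getResponseCode() == 503) {
            // Retry-After carries either seconds or an HTTP date; only the
            // numeric form is handled in this sketch
            String retryAfter = conn.getHeaderField("Retry-After");
            if (retryAfter != null) {
                TimeUnit.SECONDS.sleep(Long.parseLong(retryAfter.trim()));
                // ...retry the request here
            }
        }
    }
}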
February 20, 2015
by Tomasz Nurkiewicz DZone Core
· 15,859 Views
article thumbnail
Converting an Application to JHipster
I've been intrigued by JHipster ever since I first tried it last September. I'd worked with AngularJS and Spring Boot quite a bit, and I liked the idea that someone had combined them, adding some nifty features along the way. When I spoke about AngularJS earlier this month, I included a few slides on JHipster near the end of the presentation. This week, I received an email from someone who attended that presentation.

Hey Matt,

We met a few weeks back when you presented at DOSUG. You were talking about JHipster, which I had been eyeing for a few months, and wanted your quick .02 cents. I have built a pretty heavy application over the last 6 months that is using mostly the same tech as JHipster:

Java, Spring, JPA, AngularJS, Compass, Grunt

It's ridiculously close for most of the tech stack. So, I was debating rolling it over into a JHipster app to make it a more familiar stack for folks. My concern is that I will spend months trying to shoehorn it in for not much ROI. Any thoughts on going down this path? What are the biggest issues you've seen in using JHipster? It seems pretty straightforward except for the entity generators. I'm concerned they are totally different than what I am using. The main difference in what I'm doing compared to JHipster is my almost complete use of Groovy instead of old school Java in the app. I would have to be forced into going back to regular Java beans... Thoughts?

I replied with the following advice:

JHipster is great for starting a project, but I don't know that it buys you much value after the first few months. I would stick with your current setup and consider JHipster for your next project. I've only prototyped with it; I haven't created any client apps or put anything in production. I have with Spring Boot and AngularJS though, so I like that JHipster combines them for me. JHipster doesn't generate Scala or Groovy code, but you could still use them in a project as long as you had Maven/Gradle configured properly. You might try generating a new app with JHipster and examine how they're doing this. At the very least, it can be a good learning tool, even if you're not using it directly.

Java Hipsters: Do you agree with this advice? Have you tried migrating an existing app to JHipster? Are any of you using Scala or Groovy in your JHipster projects?
February 13, 2015
by Matt Raible
· 8,287 Views · 2 Likes
article thumbnail
The API Gateway Pattern: Angular JS and Spring Security Part IV
Written by Dave Syer on the Spring blog.

In this article we continue our discussion of how to use Spring Security with Angular JS in a "single page application". Here we show how to build an API Gateway to control the authentication and access to the backend resources using Spring Cloud. This is the fourth in a series of articles; you can catch up on the basic building blocks of the application or build it from scratch by reading the first article, or you can just go straight to the source code on GitHub. In the last article we built a simple distributed application that used Spring Session to authenticate the backend resources. In this one we make the UI server into a reverse proxy to the backend resource server, fixing the issues with the last implementation (technical complexity introduced by custom token authentication), and giving us a lot of new options for controlling access from the browser client.

Reminder: if you are working through this article with the sample application, be sure to clear your browser cache of cookies and HTTP Basic credentials. In Chrome the best way to do that for a single server is to open a new incognito window.

Creating an API Gateway

An API Gateway is a single point of entry (and control) for front-end clients, which could be browser based (like the examples in this article) or mobile. The client only has to know the URL of one server, and the backend can be refactored at will with no change, which is a significant advantage. There are other advantages in terms of centralization and control: rate limiting, authentication, auditing and logging. And implementing a simple reverse proxy is really simple with Spring Cloud.

If you were following along in the code, you will know that the application implementation at the end of the last article was a bit complicated, so it's not a great place to iterate away from. There was, however, a halfway point which we could start from more easily, where the backend resource wasn't yet secured with Spring Security. The source code for this is a separate project on GitHub, so we are going to start from there. It has a UI server and a resource server and they are talking to each other. The resource server doesn't have Spring Security yet, so we can get the system working first and then add that layer.

Declarative Reverse Proxy in One Line

To turn it into an API Gateway, the UI server needs one small tweak. Somewhere in the Spring configuration we need to add an @EnableZuulProxy annotation, e.g. in the main (only) application class:

@SpringBootApplication
@RestController
@EnableZuulProxy
public class UiApplication {
  ...
}

and in an external configuration file ("application.yml") we need to map a local resource in the UI server to a remote one:

security:
  ...
zuul:
  routes:
    resource:
      path: /resource/**
      url: http://localhost:9000

This says "map paths with the pattern /resource/** in this server to the same paths in the remote server at localhost:9000". Simple and yet effective (OK so it's 6 lines including the YAML, but you don't always need that)!

All we need to make this work is the right stuff on the classpath. For that purpose we have a few new lines in our Maven POM:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-parent</artifactId>
      <version>1.0.0.BUILD-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zuul</artifactId>
  </dependency>
  ...
</dependencies>

Note the use of "spring-cloud-starter-zuul" - it's a starter POM just like the Spring Boot ones, but it governs the dependencies we need for this Zuul proxy.
We are also using the dependency management import because we want to be able to depend on all the versions of transitive dependencies being correct.

Consuming the Proxy in the Client

With those changes in place our application still works, but we haven't actually used the new proxy until we modify the client. Fortunately that's trivial. We just need to go from this implementation of the "home" controller:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('http://localhost:9000/').success(function(data) {
    $scope.greeting = data;
  })
});

to a local resource:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('resource/').success(function(data) {
    $scope.greeting = data;
  })
});

Now when we fire up the servers everything is working and the requests are being proxied through the UI (API Gateway) to the resource server.

Further Simplifications

Even better: we don't need the CORS filter any more in the resource server. We threw that one together pretty quickly anyway, and it should have been a red light that we had to do anything as technically focused by hand (especially where it concerns security). Fortunately it is now redundant, so we can just throw it away, and go back to sleeping at night!

Securing the Resource Server

You might remember that in the intermediate state we started from there is no security in place for the resource server.

Aside: Lack of software security might not even be a problem if your network architecture mirrors the application architecture (you can just make the resource server physically inaccessible to anyone but the UI server). As a simple demonstration of that we can make the resource server only accessible on localhost. Just add this to application.properties in the resource server:

server.address: 127.0.0.1

Wow, that was easy! Do that with a network address that's only visible in your data center and you have a security solution that works for all resource servers and all user desktops.

Suppose that we decide we do need security at the software level (quite likely for a number of reasons). That's not going to be a problem, because all we need to do is add Spring Security as a dependency (in the resource server POM):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

That's enough to get us a secure resource server, but it won't get us a working application yet, for the same reason that it didn't in Part III: there is no shared authentication state between the two servers.

Sharing Authentication State

We can use the same mechanism to share authentication (and CSRF) state as we did in the last article, i.e. Spring Session. We add the dependency to both servers as before:

<dependency>
  <groupId>org.springframework.session</groupId>
  <artifactId>spring-session</artifactId>
  <version>1.0.0.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-redis</artifactId>
</dependency>

but this time the configuration is much simpler because we can just add the same Filter declaration to both. First the UI server (adding @EnableRedisHttpSession):

@SpringBootApplication
@RestController
@EnableZuulProxy
@EnableRedisHttpSession
public class UiApplication {
  ...
}

and then the resource server. There are two changes to make: one is adding @EnableRedisHttpSession and a HeaderHttpSessionStrategy bean to the ResourceApplication:

@SpringBootApplication
@RestController
@EnableRedisHttpSession
class ResourceApplication {
  ...
  @Bean
  HeaderHttpSessionStrategy sessionStrategy() {
    return new HeaderHttpSessionStrategy();
  }
}

and the other is to explicitly ask for a non-stateless session creation policy in application.properties:

security.sessions: NEVER

As long as Redis is still running in the background (use the fig.yml if you like to start it), the system will work. Load the homepage for the UI at http://localhost:8080, log in, and you will see the message from the backend rendered on the homepage.

How Does it Work?

What is going on behind the scenes now? First we can look at the HTTP requests in the UI server (and API Gateway):

VERB  PATH                        STATUS  RESPONSE
GET   /                           200     index.html
GET   /css/angular-bootstrap.css  200     Twitter bootstrap CSS
GET   /js/angular-bootstrap.js    200     Bootstrap and Angular JS
GET   /js/hello.js                200     Application logic
GET   /user                       302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /resource                   302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /login.html                 200     Angular login form partial
POST  /login                      302     Redirect to home page (ignored)
GET   /user                       200     JSON authenticated user
GET   /resource                   200     (Proxied) JSON greeting

That's identical to the sequence at the end of Part II except for the fact that the cookie names are slightly different ("SESSION" instead of "JSESSIONID") because we are using Spring Session. But the architecture is different, and that last request to "/resource" is special because it was proxied to the resource server.

We can see the reverse proxy in action by looking at the "/trace" endpoint in the UI server (from Spring Boot Actuator, which we added with the Spring Cloud dependencies). Go to http://localhost:8080/trace in a browser and scroll to the end (if you don't have one already, get a JSON plugin for your browser to make it nice and readable). You will need to authenticate with HTTP Basic (browser popup), but the same credentials are valid as for your login form. At or near the end you should see a pair of requests something like this:

{
  "timestamp": 1420558194546,
  "info": {
    "method": "GET",
    "path": "/",
    "query": "",
    "remote": true,
    "proxy": "resource",
    "headers": {
      "request": {
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260",
        "x-forwarded-prefix": "/resource",
        "x-forwarded-host": "localhost:8080"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
},
{
  "timestamp": 1420558200232,
  "info": {
    "method": "GET",
    "path": "/resource/",
    "headers": {
      "request": {
        "host": "localhost:8080",
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
},

The second entry there is the request from the client to the gateway on "/resource", and you can see the cookies (added by the browser) and the CSRF header (added by Angular, as discussed in Part II). The first entry has remote: true, which means it's tracing the call to the resource server. You can see it went out to a URI path "/" and you can see that (crucially) the cookies and CSRF headers have been sent too.
Without Spring Session these headers would be meaningless to the resource server, but the way we have set it up, it can now use those headers to re-constitute a session with authentication and CSRF token data. So the request is permitted and we are in business!

Conclusion

We covered quite a lot in this article, but we got to a really nice place where there is a minimal amount of boilerplate code in our two servers, they are both nicely secure, and the user experience isn't compromised. That alone would be a reason to use the API Gateway pattern, but really we have only scratched the surface of what it might be used for (Netflix uses it for a lot of things). Read up on Spring Cloud to find out more about how to make it easy to add more features to the gateway. The next article in this series will extend the application architecture a bit by extracting the authentication responsibilities to a separate server (the Single Sign On pattern).
February 9, 2015
by Pieter Humphrey DZone Core
· 16,018 Views
article thumbnail
The State of Java
I think living in a beautiful city in a fantastic climate has its advantages. Not just the obvious ones: we find people unusually keen to come and visit us on the pretence of presenting at the Sevilla Java User Group (and please, DO come and present at our JUG, we love visitors). This week we were really lucky: we had Georges Saab and Aurelio Garcia-Ribeyro giving us an update on where Java is now and where it looks like it's going in the future. I'm just starting to use Java 8 in real life, so this could not have been better timed - I got to ask the guys a bunch of questions about the intentions behind some of the Java 8 features, and the current vision for the future.

2015 Java update and roadmap, JUG Sevilla, from Georges Saab and Aurelio Garcia-Ribeyro

My notes from the session:

• Lambdas could be just a syntax change, but they could be more than that - they could impact the language, the libraries and the JVM. They could have a positive impact on performance, and this work can continue through small updates to Java that don't impact the syntax.
• Streams are a pipeline of operations, made possible/easier/more readable by lambdas. The aim is to make operations on collections easier and more readable. In the Old World Order, you had to care about how to perform certain operations. With streams, you don't need to tell the computer exactly how it's done; you can simply say what operations you want performed. This makes life easier for developers. (A tiny example appears at the end of this post.)
• Streams will take all the operations you pass in and perform them in a single pass of the data, so you don't have to write multiple loops to perform multiple operations on the same data structure, or tie your brain in knots figuring out how to do it in one loop. There are also no intermediate data structures when you use streams.
• The implementation can be optimised under the covers (e.g. not performing the sort operation if the data is already ordered correctly), and the developer doesn't have to worry about it. Java can introduce further optimisations in later releases without changing the API or impacting the code a developer has already written.
• These new features in Java have a focus on readability, since code is much more often read than written.
• The operations are easier to parallelise, because the developer is no longer dictating the how - multiple for loops might not be easy to parallelise, but a series of operations can be.
• Default methods and the new support for static methods on interfaces are interesting. I'd forgotten you could put static methods on interfaces and I'm going to sneak them into my latest project.
• Nashorn is here to replace Rhino. Personally I haven't worked in the sort of environment that would need server-side JavaScript, so this whole area has passed me by somewhat, but it seems it might be interesting for Node.js or for creating a REPL in JavaScript that you want to run on the JVM.
• The additional annotation support in Java 8 will be useful for Java EE. As this is something I'm currently playing with (specifically web sockets) I'm interested in this, but it seems like it will be a while before this filters into the Java EE that's used on a daily basis.
• Mission Control and Flight Recorder look interesting. Feel like I should play with them.
• Many people are skipping straight from Java 6 to 8 - the new language features and improved performance are major driving factors.
• End of public updates of Java 7 in April.
Having libraries that, um…encourage… adoption of the latest version of Java makes life a lot easier for those who develop the Java language, as they can concentrate on moving the language forward and not be tied down supporting old versions.

• Either this is the first time I've heard of JavaDB, or my memory has completely discarded it. I had no idea what it was.
• Java 9 is well under way; check out the JEPs. (This is the JEP process.)
• Jigsaw was explained, and actually I could see myself using it for the project I'm working on right now. I had a look to see if I could use it via the OpenJDK, but it looks like the groundwork is there, but not the actual modules themselves.
• The G1 Garbage Collector is "…the go forward GC", it's the one that's being actively worked on.
• This is the first I've heard of Enhanced Volatiles, I'm so behind the times!
• Access to internal packages is going away in Java 9. So don't use any sun.* packages. Use jdeps to identify any dependencies in your code that need to change.
• …and, looking further ahead than Java 9, we have value types and Project Valhalla
• …a REPL for Java
• …possibly a lightweight JSON API
• …and VarHandles were also mentioned.

Finally, the guys mentioned a talk by Brian Goetz called "Stewardship: the Sobering Parts", which has gone onto my to-watch list.

Ideas

It became clear throughout the talk that there are plenty of ideas we could explore in later presentations. If you want to see any of the following, add a comment or ping me or IsraKaos on Twitter or Meetup and we'll try and schedule it. Similarly, if you can present on any of these topics and want to come to a beautiful, sunny city with amazing food to do so, drop me a line:

• The OpenJDK
• The JCP: the purpose, the processes, the people
• Adopt a JSR, Adopt OpenJDK
• New Date/Time (JSR 310)
• JavaFX
• Code optimisation vs Data optimisation (I honestly don't know what this means, but I wrote it down in my notes)
• Java EE

Further Reading

• I really liked Richard Warburton's Lambdas and Streams book
• Java 8 in Action has details on other Java 8 features like Date and Time
• Java 8 for the Really Impatient covers JavaFX too, and highlights some Java 7 features you might like
• Brian Goetz did a great talk last year, "Lambdas in Java: A peek under the hood". I had to watch it twice before even half of the info sank in, but it's really interesting.
• Stephen Colebourne, the guy behind Joda Time and the new Date and Time API, has this talk about the new API.
• Java 9 OpenJDK page
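As promised in the streams notes above, here is a tiny illustrative sketch of the single-pass pipeline idea; this is my own example, not from the talk, and the names are made up:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ivan", "Trisha", "Georges", "Aurelio");
        // One pass over the data, no intermediate collections materialised;
        // we say what we want done, not how to loop over it
        List<String> longNames = names.stream()
                .filter(name -> name.length() > 6)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(longNames); // [GEORGES, AURELIO]
    }
}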
February 9, 2015
by Trisha Gee
· 8,189 Views · 2 Likes