Even from highly experienced technologists I often hear talk about how certain operations cause a CPU cache to "flush". This seems to illustrate a very common fallacy about how CPU caches work, and how the cache sub-system interacts with the execution cores. In this article I will attempt to explain the function CPU caches fulfil, and how the cores, which execute our programs of instructions, interact with them. For a concrete example I will dive into one of the latest Intel x86 server CPUs. Other CPUs use similar techniques to achieve the same ends.

Most modern systems that execute our programs are, by design, shared-memory multi-processor systems. A shared-memory system has a single memory resource that is accessed by 2 or more independent CPU cores. Latency to main memory is highly variable, from 10s to 100s of nanoseconds. Within 100ns it is possible for a 3.0GHz CPU to process up to 1200 instructions, as each Sandy Bridge core is capable of retiring up to 4 instructions-per-cycle (IPC) in parallel. CPUs employ cache sub-systems to hide this latency and allow them to exercise their huge capacity to process instructions. Some of these caches are small, very fast, and local to each core; others are slower, larger, and shared across cores. Together with registers and main-memory, these caches make up our non-persistent memory hierarchy. Next time you are developing an important algorithm, try pondering that a cache-miss is a lost opportunity to have executed ~500 CPU instructions! This is for a single-socket system; on a multi-socket system you can effectively double the lost opportunity as memory requests cross socket interconnects.

Memory Hierarchy

(Figure 1 depicts this memory hierarchy.) For the circa 2012 Sandy Bridge E class servers our memory hierarchy can be decomposed as follows:

Registers: Within each core are separate register files containing 160 entries for integers and 144 for floating point numbers. These registers are accessible within a single cycle and constitute the fastest memory available to our execution cores. Compilers will allocate our local variables and function arguments to these registers. When hyperthreading is enabled these registers are shared between the co-located hyperthreads.

Memory Ordering Buffers (MOB): The MOB is comprised of a 64-entry load buffer and a 36-entry store buffer. These buffers are used to track in-flight operations while waiting on the cache sub-system. The store buffer is a fully associative queue that can be searched for existing store operations which have been queued while waiting on the L1 cache. These buffers enable our fast processors to run asynchronously while data is transferred to and from the cache sub-system. When the processor issues asynchronous reads and writes, the results can come back out-of-order. The MOB is used to disambiguate the ordering for compliance with the published memory model.

Level 1 Cache: The L1 is a core-local cache split into separate 32K data and 32K instruction caches. Access time is 3 cycles and can be hidden as instructions are pipelined by the core for data already in the L1 cache.

Level 2 Cache: The L2 cache is a core-local cache designed to buffer access between the L1 and the shared L3 cache. The L2 cache is 256K in size and acts as an effective queue of memory accesses between the L1 and L3. L2 contains both data and instructions. L2 access latency is 12 cycles.

Level 3 Cache: The L3 cache is shared across all cores within a socket. The L3 is split into 2MB segments, each connected to a ring-bus network on the socket.
Each core is also connected to this ring-bus. Addresses are hashed to segments for greater throughput. Latency can be up to 38 cycles depending on cache size and how far the request must travel around the ring, with each additional hop taking an additional cycle; cache size can be up to 20MB depending on the number of segments. The L3 cache is inclusive of all data in the L1 and L2 for each core on the same socket. This inclusiveness, at the cost of space, allows the L3 cache to intercept requests, thus removing the burden from the private core-local L1 & L2 caches.

Main Memory: DRAM channels are connected to each socket with an average latency of ~65ns for socket-local access on a full cache-miss. This is, however, extremely variable: much less for subsequent accesses to columns in the same row buffer, through to significantly more when queuing effects and memory refresh cycles conflict. 4 memory channels are aggregated together on each socket for throughput, and to hide latency via pipelining on the independent memory channels.

NUMA: In a multi-socket server we have non-uniform memory access. It is non-uniform because the required memory may be on a remote socket, adding an extra ~40ns hop across the QPI bus. Sandy Bridge is a major step forward for 2-socket systems over Westmere and Nehalem. With Sandy Bridge the QPI limit has been raised from 6.4GT/s to 8.0GT/s, and two lanes can be aggregated, thus eliminating the bottleneck of the previous systems. For Nehalem and Westmere the QPI link is only capable of ~40% of the bandwidth that could be delivered by the memory controller for an individual socket. This limitation made accessing remote memory a choke point. In addition, the QPI link can now forward pre-fetch requests, which previous generations could not.

Associativity Levels

Caches are effectively hardware-based hash tables. The hash function is usually a simple masking of some low-order bits for cache indexing. Hash tables need some means to handle a collision for the same slot. The associativity level is the number of slots, also known as ways, which can be used to hold a hashed version of an address. Having more levels of associativity is a trade-off between storing more data vs. power requirements and the time to search each of the ways. For Sandy Bridge the L1 and L2 are 8-way, and the L3 is 12-way associative.

Cache Coherence

With some caches being local to cores, we need a means of keeping them coherent so all cores can have a consistent view of memory. The cache sub-system is considered the "source of truth" for mainstream systems. If memory is fetched from the cache it is never stale; the cache is the master copy when data exists in both the cache and main-memory. This style of memory management is known as write-back, whereby data in the cache is only written back to main-memory when the cache-line is evicted because a new line is taking its place. An x86 cache works on blocks of data that are 64-bytes in size, known as a cache-line. Other processors can use a different size for the cache-line. A larger cache-line size reduces effective latency at the expense of increased bandwidth requirements.
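To make the associativity and cache-line numbers above concrete, here is a small Java sketch (my own illustration, not taken from the article) of how a simplified set index can be derived for the 32K, 8-way L1 data cache with 64-byte lines described above, which works out to 64 sets. Real hardware indexing has further subtleties (e.g. virtual vs. physical addressing), so treat it as a model only.

public final class CacheSetIndex {
    private static final int LINE_SIZE = 64;                            // bytes per cache-line
    private static final int L1_DATA_SIZE = 32 * 1024;                  // 32K L1 data cache
    private static final int WAYS = 8;                                  // 8-way associative
    private static final int SETS = L1_DATA_SIZE / (LINE_SIZE * WAYS);  // = 64 sets

    // The "hash" is just the low-order bits above the line offset: bits 6..11 pick 1 of the 64 sets.
    static int setIndexFor(final long address) {
        return (int) ((address / LINE_SIZE) % SETS);
    }

    public static void main(final String[] args) {
        // Addresses exactly 4KB apart land in the same set and so compete for the same 8 ways.
        System.out.println(setIndexFor(0x0000L)); // 0
        System.out.println(setIndexFor(0x1000L)); // 0 again: 4096 bytes = 64 lines further on
        System.out.println(setIndexFor(0x1040L)); // 1
    }
}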
To keep the caches coherent the cache controller tracks the state of each cache-line as being in one of a finite number of states. The protocol Intel employs for this is MESIF; AMD employs a variant known as MOESI. Under the MESIF protocol each cache-line can be in 1 of the following 5 states:

Modified: Indicates the cache-line is dirty and must be written back to memory at a later stage. When written back to main-memory the state transitions to Exclusive.

Exclusive: Indicates the cache-line is held exclusively and that it matches main-memory. When written to, the state then transitions to Modified. To achieve this state a Request-For-Ownership (RFO) message is sent, which involves a read plus an invalidate broadcast to all other copies.

Shared: Indicates a clean copy of a cache-line that matches main-memory.

Invalid: Indicates an unused cache-line.

Forward: Indicates a specialised version of the Shared state, i.e. this is the designated cache which should respond to other caches in a NUMA system.

To transition from one state to another, a series of messages is sent between the caches to effect the state change. Prior to Nehalem for Intel, and Opteron for AMD, this cache coherence traffic between sockets had to share the memory bus, which greatly limited scalability. These days the memory controller traffic is on a separate bus; the Intel QPI and AMD HyperTransport buses are used for cache coherence between sockets. The cache controller exists as a module within each L3 cache segment that is connected to the on-socket ring-bus network. Each core, L3 cache segment, QPI controller, memory controller, and the integrated graphics sub-system is connected to this ring-bus. The ring is made up of 4 independent lanes for: request, snoop, acknowledge, and data (32-bytes per cycle). The L3 cache is inclusive in that any cache-line held in the L1 or L2 caches is also held in the L3. This provides for rapid identification of the core containing a modified line when snooping for changes. The cache controller for the L3 segment keeps track of which core could have a modified version of a cache-line it owns. If a core wants to read some memory, and it does not have it in a Shared, Exclusive, or Modified state, then it must make a read on the ring bus. It will then either be read from main-memory if not in the cache sub-systems, or read from the L3 if clean, or snooped from another core if Modified. In any case the read will never return a stale copy from the cache sub-system; it is guaranteed to be coherent.

Concurrent Programming

If our caches are always coherent then why do we worry about visibility when writing concurrent programs? Because within our cores, in their quest for ever greater performance, data modifications can appear out-of-order to other threads. There are 2 major reasons for this. Firstly, our compilers can generate programs that store variables in registers for relatively long periods of time for performance reasons, e.g. variables used repeatedly within a loop. If we need these variables to be visible across cores then the updates must not be register allocated. This is achieved in C by qualifying a variable as "volatile". Beware that C/C++ volatile is inadequate for telling the compiler not to reorder other instructions; for this you need fences/barriers. The second major issue with ordering we have to be aware of is that a thread could write a variable and then, if it reads it shortly after, could see the value in its store buffer, which may be older than the latest value in the cache sub-system. This is never an issue for algorithms following the Single Writer Principle, but it is an issue for the likes of the Dekker and Peterson lock algorithms. To overcome this issue, and ensure the latest value is observed, the thread must wait for the store buffer to drain on that core. This can be achieved by issuing a fence instruction.
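To make the store-buffer hazard concrete, here is a minimal Java sketch (my own example, not from the original article) of the Dekker-style pattern. With plain fields, the outcome seenByA == 0 && seenByB == 0 is permitted, because each thread's load can be satisfied before the other core's store has drained from its store buffer; declaring both flags volatile rules that outcome out.

public class DekkerSketch {
    // Deliberately NOT volatile: both stores can sit in their core's store buffer
    // while the following loads complete, so both threads may read 0.
    static int flagA = 0, flagB = 0;
    static int seenByA = -1, seenByB = -1;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { flagA = 1; seenByA = flagB; });
        Thread t2 = new Thread(() -> { flagB = 1; seenByB = flagA; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // (0, 0) is a legal result here; making the flags volatile closes the window.
        System.out.println("seenByA=" + seenByA + ", seenByB=" + seenByB);
    }
}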
The write of a volatile variable in Java, in addition to never being register allocated, is accompanied by a full fence instruction. This fence instruction on x86 has a significant performance impact by preventing progress on the issuing thread until the store buffer is drained. Fences on other processors can have more efficient implementations that simply put a marker in the store buffer for the search boundary, e.g. the Azul Vega does this. If you want to ensure memory ordering across Java threads when following the Single Writer Principle, and avoid the store fence, it is possible by using the j.u.c.Atomic(Int|Long|Reference).lazySet() method, as opposed to setting a volatile variable (a small sketch of this pattern appears at the end of this article).

The Fallacy

Returning to the fallacy of "flushing the cache" as part of a concurrent algorithm: I think we can safely say that we never "flush" the CPU cache within our user space programs. I believe the source of this fallacy is the need to flush, mark, or drain the store buffer to a point for some classes of concurrent algorithms so that the latest value can be observed on a subsequent load operation. For this we require a memory ordering fence and not a cache flush. Another possible source of this fallacy is that L1 caches, or the TLB, may need to be flushed based on address indexing policy on a context switch. ARM, prior to ARMv6, did not use address space tags on TLB entries, thus requiring the whole L1 cache to be flushed on a context switch. Many processors require the L1 instruction cache to be flushed for similar reasons; in many cases this is simply because instruction caches are not required to be kept coherent. The bottom line is that context switching is expensive and a bit off topic, so in addition to the cache pollution of the L2, a context switch can also cause the TLB and/or L1 caches to require a flush. Intel x86 processors require only a TLB flush on context switch.
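As mentioned earlier, here is a minimal sketch (my own, assuming a single dedicated writer thread as the Single Writer Principle requires) of using lazySet() instead of a volatile write to publish a value without paying for the full store fence:

import java.util.concurrent.atomic.AtomicLong;

public class SingleWriterSequence {
    private final AtomicLong sequence = new AtomicLong(0);

    // Called from the single writer thread only: lazySet performs an ordered store
    // without the full store fence that a volatile write would issue on x86.
    public void publish(final long value) {
        sequence.lazySet(value);
    }

    // Any number of reader threads can poll the latest published value.
    public long lastPublished() {
        return sequence.get();
    }
}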
Testing threads is hard, very hard, and this makes writing good integration tests for multithreaded systems under test... hard. This is because in JUnit there's no built-in synchronisation between the test code, the object under test and any threads. This means that problems usually arise when you have to write a test for a method that creates and runs a thread. One of the most common scenarios in this domain is making a call to a method under test which starts a new thread running before returning. At some point in the future, when the thread's job is done, you need to assert that everything went well. Examples of this scenario could include asynchronously reading data from a socket or carrying out a long and complex set of operations on a database. For example, the ThreadWrapper class below contains a single public method: doWork(). Calling doWork() sets the ball rolling and at some point in the future, at the discretion of the JVM, a thread runs adding data to a database.

public class ThreadWrapper {

    /**
     * Start the thread running so that it does some work.
     */
    public void doWork() {

        Thread thread = new Thread() {

            /**
             * Run method adding data to a fictitious database
             */
            @Override
            public void run() {
                System.out.println("Start of the thread");
                addDataToDB();
                System.out.println("End of the thread method");
            }

            private void addDataToDB() {
                // Dummy Code...
                try {
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        thread.start();
        System.out.println("Off and running...");
    }
}

A straightforward test for this code would be to call the doWork() method and then check the database for the result. The problem is that, owing to the use of a thread, there's no co-ordination between the object under test, the test and the thread. A common way of achieving some co-ordination when writing this kind of test is to put some kind of delay in between the call to the method under test and checking the results in the database, as demonstrated below:

public class ThreadWrapperTest {

    @Test
    public void testDoWork() throws InterruptedException {
        ThreadWrapper instance = new ThreadWrapper();
        instance.doWork();

        Thread.sleep(10000);

        boolean result = getResultFromDatabase();
        assertTrue(result);
    }

    /**
     * Dummy database method - just return true
     */
    private boolean getResultFromDatabase() {
        return true;
    }
}

In the code above there is a simple Thread.sleep(10000) between the two method calls. This technique has the benefit of being incredibly simple; however, it's also very risky. This is because it introduces a race condition between the test and the worker thread, as the JVM makes no guarantees about when threads will run. Often it'll work on a developer's machine only to fail consistently on the build machine. Even if it does work on the build machine it artificially lengthens the duration of the test; remember that quick builds are important. The only sure way of getting this right is to synchronise the two different threads, and one technique for doing this is to inject a simple CountDownLatch into the instance under test. In the example below I've modified the ThreadWrapper class's doWork() method, adding the CountDownLatch as an argument.

public class ThreadWrapper {

    /**
     * Start the thread running so that it does some work.
     */
    public void doWork(final CountDownLatch latch) {

        Thread thread = new Thread() {

            /**
             * Run method adding data to a fictitious database
             */
            @Override
            public void run() {
                System.out.println("Start of the thread");
                addDataToDB();
                System.out.println("End of the thread method");
                countDown();
            }

            private void addDataToDB() {
                try {
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            private void countDown() {
                if (isNotNull(latch)) {
                    latch.countDown();
                }
            }

            private boolean isNotNull(Object obj) {
                return latch != null;
            }
        };

        thread.start();
        System.out.println("Off and running...");
    }
}

The Javadoc API describes a count down latch as:

A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier. A CountDownLatch is a versatile synchronization tool and can be used for a number of purposes. A CountDownLatch initialized with a count of one serves as a simple on/off latch, or gate: all threads invoking await wait at the gate until it is opened by a thread invoking countDown(). A CountDownLatch initialized to N can be used to make one thread wait until N threads have completed some action, or some action has been completed N times. A useful property of a CountDownLatch is that it doesn't require that threads calling countDown wait for the count to reach zero before proceeding, it simply prevents any thread from proceeding past an await until all threads could pass.

The idea here is that the test code will never check the database for the results until the run() method of the worker thread has called latch.countDown(). This is because the test code thread is blocking at the call to latch.await(). latch.countDown() decrements the latch's count, and once this is zero the blocking call to latch.await() returns and the test code continues executing, safe in the knowledge that any results which should be in the database are in the database. The test can then retrieve these results and make a valid assertion. Obviously, the code above merely fakes the database connection and operations. The thing is, you may not want to, or need to, inject a CountDownLatch directly into your code; after all it's not used in production and it doesn't look particularly clean or elegant. One quick way around this is to simply make the doWork(CountDownLatch latch) method package private and expose it through a public doWork() method.

public class ThreadWrapper {

    /**
     * Start the thread running so that it does some work.
     */
    public void doWork() {
        doWork(null);
    }

    @VisibleForTesting
    void doWork(final CountDownLatch latch) {

        Thread thread = new Thread() {

            /**
             * Run method adding data to a fictitious database
             */
            @Override
            public void run() {
                System.out.println("Start of the thread");
                addDataToDB();
                System.out.println("End of the thread method");
                countDown();
            }

            private void addDataToDB() {
                try {
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            private void countDown() {
                if (isNotNull(latch)) {
                    latch.countDown();
                }
            }

            private boolean isNotNull(Object obj) {
                return latch != null;
            }
        };

        thread.start();
        System.out.println("Off and running...");
    }
}

The code above uses Google's Guava @VisibleForTesting annotation to tell us that the doWork(CountDownLatch latch) method's visibility has been relaxed slightly for testing purposes. Now, I realise that making a method package private for testing purposes is highly controversial; some people hate the idea, whilst others include it everywhere. I could write a whole blog on this subject (and may do one day), but for me it should be used judiciously, when there's no other choice, for example when you're writing characterisation tests for legacy code. If possible it should be avoided, but never ruled out. After all, tested code is better than untested code. With this in mind, the next iteration of ThreadWrapper designs out the need for a method marked as @VisibleForTesting, together with the need to inject a CountDownLatch into your production code. The idea here is to use the Strategy Pattern and separate the Runnable implementation from the Thread. Hence, we have a very simple ThreadWrapper:

public class ThreadWrapper {

    /**
     * Start the thread running so that it does some work.
     */
    public void doWork(Runnable job) {
        Thread thread = new Thread(job);
        thread.start();
        System.out.println("Off and running...");
    }
}

and a separate job:

public class DatabaseJob implements Runnable {

    /**
     * Run method adding data to a fictitious database
     */
    @Override
    public void run() {
        System.out.println("Start of the thread");
        addDataToDB();
        System.out.println("End of the thread method");
    }

    private void addDataToDB() {
        try {
            Thread.sleep(4000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

You'll notice that the DatabaseJob class doesn't use a CountDownLatch. How is it synchronised? The answer lies in the test code below...

public class ThreadWrapperTest {

    @Test
    public void testDoWork() throws InterruptedException {
        ThreadWrapper instance = new ThreadWrapper();

        CountDownLatch latch = new CountDownLatch(1);
        DatabaseJobTester tester = new DatabaseJobTester(latch);
        instance.doWork(tester);
        latch.await();

        boolean result = getResultFromDatabase();
        assertTrue(result);
    }

    /**
     * Dummy database method - just return true
     */
    private boolean getResultFromDatabase() {
        return true;
    }

    private class DatabaseJobTester extends DatabaseJob {

        private final CountDownLatch latch;

        public DatabaseJobTester(CountDownLatch latch) {
            super();
            this.latch = latch;
        }

        @Override
        public void run() {
            super.run();
            latch.countDown();
        }
    }
}

The test code above contains an inner class, DatabaseJobTester, which extends DatabaseJob. In this class the run() method has been overridden to include a call to latch.countDown() after our fake database has been updated via the call to super.run(). This works because the test passes a DatabaseJobTester instance to the doWork(Runnable job) method, adding in the required thread testing capability.
The idea of sub-classing objects under test is something I've mentioned before in one of my blogs on testing techniques, and it's a really powerful technique. So, to conclude:

Testing threads is hard.
Testing anonymous inner classes is almost impossible.
Using Thread.sleep(...) is a risky idea and should be avoided.
You can refactor out these problems using the Strategy Pattern.
Programming is the Art of Making the Right Decision...

...and that relaxing a method's visibility for testing may or may not be a good idea, but more on that later...

The code above is available on Github in the Captain Debug repository (git://github.com/roghughe/captaindebug.git) under the unit-testing-threads project.
Usually, one of the first things I see when I launch Eclipse is the 'Select a workspace' dialog. Actually, that 'workspace' thing is one of the most important things in Eclipse to understand. Messing around with it can cause a lot of pain, so I have collected some 'lessons learned' around workspaces.

The workspace .metadata folder

The workspace is where Eclipse stores the .metadata folder. In this folder, Eclipse stores all the workspace settings or preferences I configure, e.g. using the menu Window > Preferences. Sometimes that .metadata folder is referred to as the 'framework' too, e.g. if I'm asked by Eclipse to store some settings in the 'framework', then this means they will be stored in the .metadata folder. Eclipse uses this folder as well to store internal files and data structures, and many plugins store their settings in here too. Consider the content of this folder as a 'black box': do not change it, do not touch it unless you *really* know what you are doing! Do not copy or move that folder. If you want to copy/share your workspace settings, then do *not* copy the .metadata folder, as typically you cannot use this folder on another machine or for another user. If you want to transfer/copy your settings, then see this post.

Workspace and Eclipse versions

As Eclipse stores information in the .metadata workspace structure, the data/format might be different from one version of Eclipse to another (e.g. from one version of CodeWarrior to another). While using the same workspace with different versions of Eclipse might work, it is *not* recommended. The Eclipse community tries hard to keep things compatible, but using a different workspace for each Eclipse version is what I recommend. I have started to name my workspace(s) like 'wsp_lecture_10.2' or 'wsp_lecture_10.3' to show that I'm using them for a specific version of CodeWarrior.

Workspace and projects

This leads to the question: "Do I then have to duplicate my projects if I use them with different versions of Eclipse in parallel?" The answer is 'no', because the workspace folder does *not* have to contain the projects (as folders). It can, but it is not needed. For example, I have different workspaces ("wsp_10.3", "wsp_10.2"), but my projects are in a "projects" folder somewhere else on my disk. What I do is import the projects into each workspace and keep the projects in their original folder location. The menu File > Import > General > Existing Projects into Workspace can be used, with 'Copy projects into workspace' *unchecked*. An easy trick is to drag & drop the project folders into Eclipse: that's much faster and simpler in my view than using the dialog above.

Tip: showing the current workspace in the title bar

Using multiple workspaces can be confusing at times. See this post for how you can show the workspace in the application title bar.

Processor Expert

Processor Expert has a special setting in the workspace pointing to its 'data base'. That data base is inside the installation folder, in the mcu\processorexpert folder. If I launch Eclipse and use a workspace from a different installation, I get a warning dialog: "Current workspace is configured to use data from another installation of Processor Expert." That path setting of Processor Expert (pointing to the installation folder) is in my view the biggest argument to *not* share a workspace between different versions of Eclipse.
Pressing 'Open Preferences' opens the settings, and with 'Restore Defaults' it will (after a restart of Eclipse) use the new installation path. That warning might not come up if using multiple installations of CodeWarrior. It seems that as long as there is a valid path to a Processor Expert data base, the warning might not show up, and it will use that data base. That can lead to weird behaviour, so better check that the path in the workspace settings is pointing to the right folder.

Workspace dialog on startup

Remember that dialog at the beginning of this post? If I want to change whether (and what) is shown at Eclipse startup, then this is configured with the menu Window > Preferences > General > Startup and Shutdown. In this dialog I can remove items from the list (e.g. if a workspace folder does not exist any more).

Tabula rasa

As Eclipse stores a lot of information in the workspace .metadata, that folder can grow to a substantial size (several hundreds of MBytes, depending on usage and configuration). One reason is that Eclipse stores local history and undo information in that .metadata folder; just have a look at .metadata\.plugins\org.eclipse.core.resources\.history. So from time to time (especially if I think that Eclipse is slowing down), I export my workspace settings (File > Export > General > Preferences, see 'copy my workspace settings') and re-import them into a new workspace.

Summary

Eclipse stores all its workspace settings and files in the .metadata folder. Never touch/copy/change/move the .metadata folder. Use different workspaces for each Eclipse version. Store the projects outside of the workspace if you want to share them across different Eclipse versions.

Happy workspacing!
The great thing with Eclipse is that you can configure a lot. In general, I'm happy with most of the defaults in Eclipse and CodeWarrior. Here are my top 10 things I change in Eclipse to make it even better:

Add -showlocation to the Eclipse startup command line: show the workspace location in the title bar.
Disable the heuristic settings for the indexer: fixing the Eclipse index.
Disable build (if required) for the debugger: speeding up the debug launch in CodeWarrior.
Use spaces and not tabs: spaces vs. tabs in Eclipse.
Highlight the selected line: color makes the difference!
Configure more hovers: hovering and debugging.
Enable static software analysis: free static code analysis with Eclipse.
Processor Expert expert settings: enabling the expert level in Processor Expert.
Custom dictionary settings: Eclipse spell checker.
Show line numbers: Eclipse and line numbers.

I don't have to waste time changing the settings for each of my workspaces: after I have changed the settings, I can simply export and import them again into another workspace:

Change my preferences using the menu Window > Preferences.
Use the menu File > Export > General > Preferences and save the settings in a file.
Switch to the new workspace.
Use the menu File > Import > General > Preferences to import the settings from the file.

Happy customizing!
Being an itinerant programmer, one of the things I've noticed over the years is that every project you come across seems to have a slightly different way of organising its Maven modules. There seems to be no conventional way of characterising the contents of a project's sub-modules, and not that much discussion on it either. This is strange, as defining the responsibilities of your Maven modules seems to me to be as critical as good class design and coding technique to a project's success. So, in light of this dearth of wisdom, here's my two penneth worth...

When you first come across a new project, you'll generally find a layout convention that vaguely matches that defined by the Better Builds With Maven manual. The 'clean' project directory usually contains a POM file, a src folder and several sub-modules, each in their own subdirectory. If we all agree that this is the standard way of approaching the top level of project layout (and I have seen it done slightly differently), then there seem to be three different approaches taken when organising the responsibilities of each of a project's sub-modules. These are:

Totally haphazardly.
By class type.
By functional area.

I'm not going to linger on those projects that are organised seemingly without any structure or order, except to say that they probably started off well organised but were not designed well enough to endure the changes forced upon them. In saying that a project's sub-modules are organised 'by class type', I mean that modules are used to group together all classes that comprise, but are not limited to, a layer in the program's architecture. For example, a module could contain all the classes that make up the program's service or persistence layers, or a module could contain all the model classes (i.e. beans). Conversely, in saying that a project's sub-modules are organised by functional area, I'm talking about a situation where each module contains, as close as possible, a vertical slice of the application, including model beans, service layer, controllers etc.

If the truth be told, then there are any number of ways to organise your project's sub-modules. Most project set-ups are fairly flat in structure, which is what I've demonstrated above; however, if you take a look at Erik Putrycz's 2009 talk Maven – or how to automate java builds, tests and version management with open source tools, he demonstrates that you can have modules within modules within modules. In order to explore this a little further, I'm going to invent my usual preposterously contrived scenario, and in this scenario you've got to write a program for a Welsh dental practice owned by a man called Jones, also known locally as 'Jones The Driller'. The requirements would be pretty standard, I suspect, for a dental practice and would include handling:

Patients' details: name, address, DOB, phone number etc.
Medical records, including treatments and outcomes.
Appointments.
Accounting, e.g. sales, purchases, wages etc.
Auditing: as in who did what to whom...

As a solution to Jones The Driller's problem, you propose that you write a multi-module web application based upon Spring MVC and Tomcat that, when assembled, has a standard 'n' tier design of a MySQL database, a database layer, a service layer, a set of controllers and some JSPs that comprise the view.
In creating your project your idea is to organise your sub-modules 'by class type', and you come up with the following module organisation, shown below roughly in build dependency order (a sketch of a matching parent POM appears a little further down):

dentists-model
dentists-utils
dentists-repository
dentists-services
dentists-controllers
dentists-web

Your dentists-model module contains the project's beans that model objects used from the persistence layer right up to the JSPs. dentists-repository, dentists-services and dentists-controllers reflect the various layers of your application, with the dentists-web module containing all the JSPs, CSS and other view paraphernalia. As for dentists-utils, well, every project has a utils module where all the really useful, but disparate, classes end up.

Meanwhile, in a different universe, a different version of you decides to organise your project's sub-modules by functional area, and you come up with the following breakdown:

dentists-utils
dentists-audit
dentists-user-details
dentists-medical-records
dentists-appointments
dentists-accounts
dentists-repository
dentists-integration
dentists-web

In this scenario the build order is somewhat different; virtually all modules will depend upon dentists-utils and, depending upon your exact audit requirements, most modules will rely upon dentists-audit. The sub-module package structure has been arranged on layer and type boundaries, in that each module has its own model, repository (which contains interface definitions only), services and controller packages, and the layout of each module is identical at the top level. Another discussion to have here is the organisation of your project's package structure, where you can ask the same kind of questions: do you organise 'by class type' or 'by functional area', as shown above?

You may have noticed that the dentists-repository module can be fairly near the end of the build cycle, as it only contains the implementation of the repository classes and not their interface definitions. You may have also noticed that dentists-web is again a separate module. This is because you're a pretty savvy business guy and, in keeping the JSPs etc. in their own module, you hope to re-skin your app and sell it to that other Welsh dentist down the road: Williams The Puller. From a test perspective, each module contains its own unit tests, but there's a separate integration test module that, as it'll take longer to execute, can be run when required. There are generally two ways of defining integration tests: firstly by putting them in their own module, which I prefer, and secondly by using an integration test naming convention such as somefileIT.java, and running all *IT.java files separately to all *Test.java files.

Your two identical selves have proposed two different solutions to the same problem, so I guess that it's now time to take a look at the pros and cons of each. Taking the 'by class type' solution first, what can be said about it? On the plus side, it's pretty maintainable in that you always know where to find stuff. Need a service class? Then that's in the dentists-services module. Also, the build order is very straightforward. On the downside, organisation 'by class type' is prone to problems with circular dependencies creeping in, and classes with totally different responsibilities are all mixed up together, making it difficult to re-use functionality in other projects without unnecessarily dragging in the whole shebang.
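Before weighing up the second approach, here is roughly what the parent POM for a breakdown like this looks like, shown for the 'by class type' module list above. This is an illustrative sketch only: the groupId, artifactId and version are my own assumptions, and only the module names come from the list above.

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- groupId, artifactId and version below are assumed, not taken from the article -->
    <groupId>com.example.dentists</groupId>
    <artifactId>dentists-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <!-- a multi-module parent must use pom packaging -->
    <packaging>pom</packaging>
    <modules>
        <!-- roughly in build dependency order, as described above -->
        <module>dentists-model</module>
        <module>dentists-utils</module>
        <module>dentists-repository</module>
        <module>dentists-services</module>
        <module>dentists-controllers</module>
        <module>dentists-web</module>
    </modules>
</project>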
So, what about the pros and cons of the 'by functional area' approach? To my way of thinking, given the package structure of each module, it's just about as easy to locate a class using this technique as it is when using 'by class type'. The real benefit of using this approach is that it's far simpler to re-use a functional area of code in other projects. For example, I've worked on many projects in different companies and have implemented auditing several times. Each time I implement it I usually do it in roughly the same way, so wouldn't it be good just to reuse the first implementation? Notwithstanding code ownership issues... The same idea also applies to dentists-user-details; the requirement to manage names and addresses applies equally as well to a shoe sales web site as it does to a dental practice. And the downside? One of the benefits of this approach is that the modules are highly decoupled, but from experience, no matter how hard you try, you always end up with more coupling than you'd like.

You may have already spotted that both of these proposals are not 100% pure; 'by class type' contains a bit of 'by functionality' and, conversely, 'by functional area' contains a couple of 'by class type' modules. This may be avoidable, but I'm purposely being pragmatic. As I said earlier, you always see a utils module in a project. Furthermore, creating a separate database module allows you to change your project's database implementation fairly easily, which may make testing easier in some circumstances; likewise, having a separate web module allows you to re-skin your code should you be lucky enough to sell the same product to multiple customers with their own branding.

Finally, one of the unwritten truths in software development is that once you've organised your project into its sub-modules you'll rarely get the opportunity to reorganise and improve them: there usually isn't the time or the political will, as doing so costs money. However, it should be remembered that, in Agile terms, project module composition is, like code, a form of technical debt, which if done badly also costs you a lot of cash. It therefore seems a really good idea, as a team, to plan out your project thoroughly before starting to code. So be radical, do some design or have a meeting, you know it'll be worth it in the end.
One of the nice things about modern IDEs is that they offer many extras for free. Many times these are related to programming and coding, but I also love the ones which make things easier and better that are not directly related to the executed code. One thing Eclipse offers is on-the-fly spell checking, similar to Microsoft Word. Hovering over the flagged text offers me a correction for the flagged error, for example 'initialization' vs. 'initialisation'.

But wait: is that example not spelled correctly? Indeed, and Eclipse offers to customize the spell checking. The option page is the Window > Preferences > General > Editors > Text Editors > Spelling page. 'Initialization' vs. 'initialisation': that's an 'English US' vs. 'English UK' thing, and is easily changed; I prefer the US English. With this, everything is OK now and the word is not flagged any more. After changing the platform dictionary, it usually takes a few minutes until the sources are checked again.

But what if Eclipse does not know a word? Then it offers to add it to a dictionary. If I do not have a user dictionary yet, it will prompt a dialog; pressing 'Yes' brings up the settings page from above, where I can specify my user dictionary file. The user dictionary is a normal text file with one word on each line. That makes it easy to edit and to keep in a version control system. I have one common dictionary file for all my workspaces, but of course it is possible to have different dictionaries per workspace, as the settings are per workspace too.

Summary

I feel that having reasonably spelled comments in the sources is just something an engineer should care about. The Eclipse spelling engine does not have to be as good as the one in MS Word (which is pretty good in my view), but for making sources better, something like correctly spelled comments is a plus. But only if the code works like a charm :mrgreen:

Happy spelling!
When using an external API for WebHooks or Callbacks, as discussed in Chapters 3 and 5 of Getting Started with Mule Cloud Connect, the API provider running somewhere out there on the web needs to call back your application that is happily running in isolation on your local machine. For an API provider to call back your application, the application must be accessible over the web. Sure, you could upload and test your application on a public-facing server, but you may find it quicker and easier to work on your local development machine, and these are typically behind firewalls, NAT, or otherwise not able to provide a public URL. You need a way to make your local application available over the web.

There are a few good services and tools out there to help with this. Examples include ProxyLocal and Forward.io. Alternatively, you can set up your own reverse SSH tunnel if you already have a remote system to forward your requests, but this is cumbersome to say the least. I find Localtunnel to be an excellent fit for this need, and Localtunnel has just recently released v2 of its service with a host of new features and enhancements. More information can be found here: http://progrium.com/blog/2012/12/25/localtunnel-v2-available-in-beta/

Installing Localtunnel

Those familiar with version 1 of the service will know that the v1 Localtunnel client was written in Ruby and required Rubygems to install it. The v2 client is now written in Python and can instead be installed via easy_install or pip. If instead you're interested in using Localtunnel v1, then I have written a previous blog post on the subject here: http://blogs.mulesoft.org/connector-callback-testing-local/

To get started, you will first need to check that you have Python installed. Localtunnel requires Python 2.6 or later. Most systems come with Python installed as standard, but if not you can check via the following command:

$ python --version

More info on installing Python can be found here: http://wiki.python.org/moin/BeginnersGuide/Download

Once complete, you will need easy_install to install the Localtunnel client. If you don't have easy_install after you install Python, you can install it with this bootstrap script:

$ curl http://peak.telecommunity.com/dist/ez_setup.py | python

Once complete, you can install the Localtunnel client using the following command:

$ easy_install localtunnel

First run with Localtunnel

Once installed, creating a tunnel is as simple as running the following command:

$ localtunnel-beta 8082

The parameter after the command, "8082", is the local port we want Localtunnel to forward to. So whatever port your app is running on should replace this value. Each time you run the command you should get output similar to the following:

Port 8082 is now accessible from http://fb0322605126.v2.localtunnel.com ...

Note: As v2 is still in beta, the command localtunnel-beta will eventually be installed as just localtunnel. This lets you keep v1 around just in case anything goes wrong with v2 during the beta.

Configuring the Connector

Now onto Mule! To demonstrate, I will use the Twilio Cloud Connector example from Chapter 5. Twilio has an awesome WebHook implementation with great debugging tools. Twilio uses callbacks to tell you about the status of your requests; when you use Twilio to place a phone call or send an SMS, the Twilio API allows you to send a URL where you'll receive information about the phone call once it ends, or the status of the outbound SMS message after it's processed.
This example uses the Twilio Cloud Connector to send a simple SMS message. The most important thing to note is the "status-callback-flow-ref" attribute (a rough sketch of the configuration appears at the end of this article). All connector operations that support callbacks will have an optional attribute ending in "-flow-ref"; in this case: "status-callback-flow-ref". As the name suggests, this attribute should reference a flow: the value must be a valid flow id from within your configuration. It is this flow that will be used to listen for the callback. Notice that the flow has no inbound endpoint? This is where the magic happens; when Twilio processes the SMS message it will send a callback automatically to that flow without you having to define an inbound endpoint. The connector automatically generates an inbound endpoint and sends the auto-generated URL to Twilio for you.

Customizing the Callback

The URL generated for the callback is built using 'localhost' as the host, the 'http.port' environment variable or 'localPort' value as the port, and the path of the URL is typically just a randomly generated string or static value. So if I run this locally, it would send Twilio my non-public address, something like: http://localhost:80/...vv3v3er342fvvn. Each connector that accepts HTTP callbacks will provide you with an optional http-callback-config child element to override these settings. These settings can be set at the connector's config level. Here we have amended the previous example to add the additional http-callback-config configuration. The configuration takes three additional arguments: domain, localPort and remotePort. These settings will be used to construct the URL that is passed to the external system. The URL will be the same as the default generated URL of the HTTP inbound-endpoint, except that the host is replaced by the 'domain' setting (or its default value) and the port is replaced by the 'remotePort' setting (or its default value). In this case we have used the domain from the URL that Localtunnel generated for us earlier, fb0322605126.v2.localtunnel.com, set the localPort to 8082 as we ran the Localtunnel command using port 8082, and set the remotePort to 80 as the Localtunnel server just runs on port 80. And that's it! If you run this configuration you should start seeing your callback being printed to the console. The same goes for any OAuth connectors too: if you're using any OAuth connectors built using the DevKit OAuth modules, you can configure the OAuth callback in a similar fashion. A full Mule/Twilio WebHook project can be found here: https://github.com/ryandcarter/GettingStarted-MuleCloudConnect-OReilly/tree/master/chapter05/twilio-webhooks
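For reference, the kind of configuration described above might look roughly like the sketch below. Only the http-callback-config element and its domain, localPort and remotePort attributes, plus the status-callback-flow-ref attribute, come from the description above; the twilio namespace prefix, the config attributes, the operation element name and the flow contents are illustrative assumptions and will vary with the connector version (see the linked example project for the real thing).

<!-- Illustrative sketch only: element and attribute names other than
     http-callback-config, domain, localPort, remotePort and
     status-callback-flow-ref are assumptions. -->
<twilio:config name="twilioConfig" accountSid="${twilio.sid}" authToken="${twilio.token}">
    <twilio:http-callback-config domain="fb0322605126.v2.localtunnel.com"
                                 localPort="8082"
                                 remotePort="80"/>
</twilio:config>

<flow name="sendSmsFlow">
    <!-- hypothetical operation element; points the status callback at the flow below -->
    <twilio:send-sms-message config-ref="twilioConfig"
                             status-callback-flow-ref="statusCallbackFlow"/>
</flow>

<flow name="statusCallbackFlow">
    <!-- no inbound endpoint: the connector generates one and registers its URL with Twilio -->
    <logger level="INFO" message="Callback received: #[payload]"/>
</flow>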
Testing and debugging multi-threaded programs is hard. Now take the same programs and massively distribute them across multiple JVMs deployed on a cluster of machines, and the complexity goes through the roof. One way to overcome this complexity is to do testing in isolation and catch as many bugs as possible locally. MRUnit is a testing framework that lets you test and debug MapReduce jobs in isolation without spinning up a Hadoop cluster. In this blog post we will cover various features of MRUnit by walking through a simple MapReduce job. Let's say we want to take the input below and create an inverted index using MapReduce.

Input

www.kohls.com,clothes,shoes,beauty,toys
www.amazon.com,books,music,toys,ebooks,movies,computers
www.ebay.com,auctions,cars,computers,books,antiques
www.macys.com,shoes,clothes,toys,jeans,sweaters
www.kroger.com,groceries

Expected output

antiques www.ebay.com
auctions www.ebay.com
beauty www.kohls.com
books www.ebay.com,www.amazon.com
cars www.ebay.com
clothes www.kohls.com,www.macys.com
computers www.amazon.com,www.ebay.com
ebooks www.amazon.com
jeans www.macys.com
movies www.amazon.com
music www.amazon.com
shoes www.kohls.com,www.macys.com
sweaters www.macys.com
toys www.macys.com,www.amazon.com,www.kohls.com
groceries www.kroger.com

Below are the Mapper and Reducer that do the transformation:

public class InvertedIndexMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {

    public static final int RETAILER_INDEX = 0;

    @Override
    public void map(LongWritable longWritable, Text text, OutputCollector<Text, Text> outputCollector, Reporter reporter) throws IOException {
        final String[] record = StringUtils.split(text.toString(), ",");
        final String retailer = record[RETAILER_INDEX];
        for (int i = 1; i < record.length; i++) {
            final String keyword = record[i];
            outputCollector.collect(new Text(keyword), new Text(retailer));
        }
    }
}

public class InvertedIndexReducer extends MapReduceBase implements Reducer<Text, Text, Text, Text> {

    @Override
    public void reduce(Text text, Iterator<Text> textIterator, OutputCollector<Text, Text> outputCollector, Reporter reporter) throws IOException {
        final String retailers = StringUtils.join(textIterator, ',');
        outputCollector.collect(text, new Text(retailers));
    }
}

Implementation details are not really important, but basically the Mapper gets a line at a time, splits the line and emits key-value pairs where the key is a category of product and the value is the website which is selling the product. For example, the line retailer,category1,category2 will be emitted as (category1,retailer) and (category2,retailer). The Reducer gets a key and a list of values, transforms the list of values into a comma-delimited String and emits the key and value out. Now let's use MRUnit to write various tests for this job. Three key classes in MRUnit are MapDriver for Mapper testing, ReduceDriver for Reducer testing and MapReduceDriver for end-to-end MapReduce job testing. This is how we will set up the test class:

public class InvertedIndexJobTest {

    private MapDriver<LongWritable, Text, Text, Text> mapDriver;
    private ReduceDriver<Text, Text, Text, Text> reduceDriver;
    private MapReduceDriver<LongWritable, Text, Text, Text, Text, Text> mapReduceDriver;

    @Before
    public void setUp() throws Exception {
        final InvertedIndexMapper mapper = new InvertedIndexMapper();
        final InvertedIndexReducer reducer = new InvertedIndexReducer();
        mapDriver = MapDriver.newMapDriver(mapper);
        reduceDriver = ReduceDriver.newReduceDriver(reducer);
        mapReduceDriver = MapReduceDriver.newMapReduceDriver(mapper, reducer);
    }
}

MRUnit supports two styles of testing.
The first style is to tell the framework both the input and output values and let the framework do the assertions; the second is the more traditional approach where you do the assertions yourself. Let's write a test using the first approach.

@Test
public void testMapperWithSingleKeyAndValue() throws Exception {
    final LongWritable inputKey = new LongWritable(0);
    final Text inputValue = new Text("www.kroger.com,groceries");

    final Text outputKey = new Text("groceries");
    final Text outputValue = new Text("www.kroger.com");

    mapDriver.withInput(inputKey, inputValue);
    mapDriver.withOutput(outputKey, outputValue);
    mapDriver.runTest();
}

In the test above we tell the framework both the input and output key and value pairs, and the framework does the assertion for us. This test can be written in a more traditional way as follows:

@Test
public void testMapperWithSingleKeyAndValueWithAssertion() throws Exception {
    final LongWritable inputKey = new LongWritable(0);
    final Text inputValue = new Text("www.kroger.com,groceries");

    final Text outputKey = new Text("groceries");
    final Text outputValue = new Text("www.kroger.com");

    mapDriver.withInput(inputKey, inputValue);
    final List<Pair<Text, Text>> result = mapDriver.run();

    assertThat(result)
        .isNotNull()
        .hasSize(1)
        .containsExactly(new Pair<Text, Text>(outputKey, outputValue));
}

Sometimes the Mapper emits multiple key-value pairs for a single input. MRUnit provides a fluent API to support this use case. Here is an example:

@Test
public void testMapperWithSingleInputAndMultipleOutput() throws Exception {
    final LongWritable key = new LongWritable(0);
    mapDriver.withInput(key, new Text("www.amazon.com,books,music,toys,ebooks,movies,computers"));

    final List<Pair<Text, Text>> result = mapDriver.run();

    final Pair<Text, Text> books = new Pair<Text, Text>(new Text("books"), new Text("www.amazon.com"));
    final Pair<Text, Text> toys = new Pair<Text, Text>(new Text("toys"), new Text("www.amazon.com"));

    assertThat(result)
        .isNotNull()
        .hasSize(6)
        .contains(books, toys);
}

You write the test for the Reducer in exactly the same way.

@Test
public void testReducer() throws Exception {
    final Text inputKey = new Text("books");
    final ImmutableList<Text> inputValue = ImmutableList.of(new Text("www.amazon.com"), new Text("www.ebay.com"));

    reduceDriver.withInput(inputKey, inputValue);
    final List<Pair<Text, Text>> result = reduceDriver.run();
    final Pair<Text, Text> pair2 = new Pair<Text, Text>(inputKey, new Text("www.amazon.com,www.ebay.com"));

    assertThat(result)
        .isNotNull()
        .hasSize(1)
        .containsExactly(pair2);
}

Finally, you can use the MapReduceDriver to test your Mapper, Combiner and Reducer together as a single job. You can also pass multiple key-value pairs as input to your job. The test below demonstrates the MapReduceDriver in action:

@Test
public void testMapReduce() throws Exception {
    mapReduceDriver.withInput(new LongWritable(0), new Text("www.kohls.com,clothes,shoes,beauty,toys"));
    mapReduceDriver.withInput(new LongWritable(1), new Text("www.macys.com,shoes,clothes,toys,jeans,sweaters"));

    final List<Pair<Text, Text>> result = mapReduceDriver.run();

    final Pair<Text, Text> clothes = new Pair<Text, Text>(new Text("clothes"), new Text("www.kohls.com,www.macys.com"));
    final Pair<Text, Text> jeans = new Pair<Text, Text>(new Text("jeans"), new Text("www.macys.com"));

    assertThat(result)
        .isNotNull()
        .hasSize(6)
        .contains(clothes, jeans);
}
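One detail the listings above leave out is the set of imports the test class relies on. The following is a sketch of what they might be, assuming the classic mapred-API MRUnit drivers, Guava for ImmutableList, and a fluent assertion library such as AssertJ or FEST-Assert for assertThat (the assertion library is an assumption on my part; use whichever your build already provides).

// Hadoop types used by the Mapper/Reducer and the tests above
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// MRUnit drivers for the old "mapred" API, plus the Pair type returned by run()
import org.apache.hadoop.mrunit.MapDriver;
import org.apache.hadoop.mrunit.MapReduceDriver;
import org.apache.hadoop.mrunit.ReduceDriver;
import org.apache.hadoop.mrunit.types.Pair;

// JUnit 4 annotations and Guava's ImmutableList
import org.junit.Before;
import org.junit.Test;
import com.google.common.collect.ImmutableList;

// Fluent assertions: AssertJ is assumed here; FEST-Assert
// (org.fest.assertions.Assertions.assertThat) offers a near-identical API.
import static org.assertj.core.api.Assertions.assertThat;

// (The Mapper/Reducer themselves also rely on a StringUtils, e.g. Apache commons-lang.)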
Curator's Note: This article was co-authored by Andrzej Jarzyna. At 3scale we find Amazon to be a fantastic platform for running APIs due to the complete control you have over the application stack. For people new to AWS the learning curve is quite steep, so we have put together our best practices in this short tutorial. Besides Amazon EC2, we will use the Ruby Grape gem to create the API interface and an Nginx proxy to handle access control. Best of all, everything in this tutorial is completely FREE!

For the purpose of this tutorial you will need a running API based on Ruby and the Thin server. If you don't have one you can simply clone an example repo as described below (in the "Deploying the Application" section). If you are interested in the background of this example (Sentiment API), you can see a couple of previous guides which 3scale has published. Here we use version_1 of the API ('API up and running in 10 minutes') with some extra sentiment analysis functionality (this part is covered in the second tutorial of the Sentiment API tutorial). Now we will start the creation and configuration of the Amazon EC2 instance. If you already have an EC2 instance (micro or not), you can jump to the next step, Preparing the Instance for Deployment.

Creating and configuring the EC2 instance

Let's start by signing up for the Amazon Elastic Compute Cloud (Amazon EC2). For our needs the free tier http://aws.amazon.com/free/ is enough, covering all the basic needs. Once the account is created, go to the EC2 dashboard under your AWS Management Console and click on the Launch Instance button. That will transfer you to a popup window where you will continue the process:

Choose the classic wizard.
Choose an AMI (Ubuntu Server 12.04.1 LTS 32-bit, T1 micro instance), leaving all the other settings for Instance Details as default.
Create a keypair and download it – this will be the key which you will use to make an ssh connection to the server; it's VERY IMPORTANT!
Add inbound rules for the firewall with source always 0.0.0.0/0 (HTTP, HTTPS, ALL ICMP, TCP port 3000 used by the Ruby Thin server).

Preparing the instance for deployment

Now, as we have the instance created and running, we can connect to it directly from our console (Windows users from PuTTY). Right click on your instance, choose Connect and then Connect with a standalone SSH Client. Follow the steps and change the username to ubuntu (instead of root) in the given example. After executing this step you are connected to your instance. We will have to install new packages. Some of them require root credentials, so you will have to set a new root password: sudo passwd root. Then log in as root: su root. Now, with root credentials, execute sudo apt-get update, switch back to your normal user with the exit command, and install all the required packages:

Install the libraries which will be required by rvm, ruby and git:

sudo apt-get install build-essential git zlib1g-dev libssl-dev libreadline-gplv2-dev imagemagick libxml2-dev libxslt1-dev openssl libreadline6 libreadline6-dev zlib1g libyaml-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison libpq-dev libpq5 libeditline-dev

Install git (on Linux rather than from source): http://www.git-scm.com/book/en/Getting-Started-Installing-Git
Install rvm: https://rvm.io/rvm/install/
Install ruby:

rvm install 1.9.3
rvm use 1.9.3 --default

Deploying the application

Our sample Sentiment API is located on Github.
Try cloning the repository: git clone git@github.com:jerzyn/api-demo.git you can once again review the code and tutorial on creating and deploying this app here: http://www.3scale.net/2012/06/the-10-minute-api-up-running-3scale-grape-heroku-api-10-minutes/ and here http://www.3scale.net/2012/07/how-to-out-of-the-box-api-analytics/ note the changes (we are using only v1, as authentication will go through the proxy). Now you can deploy the app by issuing: bundle install. Now you can start the thin server: thin start. To access the API directly (i.e. without any security or access control) access: your-public-dns:3000/v1/words/awesome.json (you can find your-public-dns in the AWS EC2 Dashboard->Instances in the details window of your instance) For the Nginx integration you will have to create an elastic IP address. Inside the AWS EC2 dashboard create an elastic IP in the same region as your instance and associate that IP to it (you won’t have to pay anything for the elastic IP as long as it is associated with your instance in the same region). OPTIONAL: If you want to assign a custom domain to your amazon instance you will have to do one thing: add an A record to the DNS record of your domain mapping the domain to the elastic IP address you have previously created. Your domain provider should either give you some way to set the A record (the IPv4 address), or it will give you a way to edit the nameservers of your domain. If they do not allow you to set the A record directly, find a DNS management service, register your domain as a zone there and the service will give you the nameservers to enter in the admin panel of your domain provider. You can then add the A record for the domain. Some possible DNS management services include ZoneEdit (basic, free), Amazon route 53, etc. At this point you API is open to the world. This is good and bad – great that you are sharing, but bad in the sense that without rate limits a few apps could kill the resources of your server, and you have no insight into who is using your API and how it is being used. The solution is to add some management for your API… Enabling API Management with 3scale Rather than reinvent the wheel and implement rate limits, access controls and analytics from scratch we will leverage the handy 3scale API Management service. Get your free 3scale account, activate and log-in to the new instance through the provided links. The first time you log-in you can choose the option for some sample data to be created, so you will have some API keys to use later. Next you would probably like to go through the tour to get a glimpse on the system functionality (optional) and then start with the implementation. To get some instant results we will start with the sandbox proxy which can be used while in development. Then we will also configure an Nginx proxy which can scale up for full production deployments. There is some documentation on the configuration of the API proxy at 3scale: https://support.3scale.net/howtos/api-configuration/nginx-proxy and for more advanced configuration options here: https://support.3scale.net/howtos/api-configuration/nginx-proxy-advanced Once you sign into your 3scale account, Launch your API on the main Dashboard screen or Go to API->Select the service (API)->Integration in the sidebar->Proxy Set the address of of your API backend – this has to be the Elastic IP address unless the custom domain has been set, including http protocol and port 3000. 
Now you can save and turn on the sandbox proxy to test your API by hitting the sandbox endpoint (after creating some app credentials in 3scale): http://sandbox-endpoint/v1/words/awesome.json?app_id=APP_ID&app_key=APP_KEY where, APP_ID and APP_KEY are id and key of one of the sample applications which you created when you first logged into your 3scale account (if you missed that step just create a developer account and an application within that account). Try it without app credentials, next with incorrect credentials, and then once authenticated within and over any rate limits that you have defined. Only once it is working to your satisfaction do you need to download the config files for Nginx. Note: any time you have errors check whether you can access the API directly: your-public-dns:3000/v1/words/awesome.json. If that is not available, then you need to check if the AWS instance is running and if the Thin Server is running on the instance. Implement an Nginx Proxy for Access Control In order to streamline this step we recommend that you install the fantastic OpenResty web application that is basically a bundle of the standard Nginx core with almost all the necessary 3rd party Nginx modules built-in. Install dependencies: sudo apt-get install libreadline-dev libncurses5-dev libpcre3-dev perl Compile and install Nginx: cd ~ sudo wget http://agentzh.org/misc/nginx/ngx_openresty-1.2.3.8.tar.gz sudo tar -zxvf ngx_openresty-1.2.3.8.tar.gz cd ngx_openresty-1.2.3.8/ ./configure --prefix=/opt/openresty --with-luajit --with-http_iconv_module -j2 make sudo make install In the config file make the following changes: edit the .conf file from nginx download in line 28, which is preceded by info to change your server name put the correct domain (of your Elastic IP or custom domain name) in line 78 change the path to the .lua file, downloaded together with the .conf file. We are almost finished! Our last step is to start the NGINX proxy and put some traffic through it. If it is not running yet (remember, that thin server has to be started first), please go to your EC2 instance terminal (the one you were connecting through ssh before) and start it now: sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf The last step will be verifying that the traffic goes through with a proper authorization. To do that, access: http://your-public-dns/v1/words/awesome.json?app_id=APP_ID&app_key=APP_KEY where, APP_ID and APP_KEY are key and id of the application you want to access through the API call. Once everything is confirmed as working correctly, you will want to block public access to the API backend on port 3000, which bypasses any access controls. If encounter some problems with the Nginx configuration or need a more detailed guide, I encourage you to check the 3scale guide on configuring Nginx proxy: https://support.3scale.net/howtos/api-configuration/nginx-proxy. You can go completely wild with customization of your API gateway. If you want to dive more into the 3scale system configuration (like usage and monitoring of your API traffic) feel encouraged to browse our Quickstart guides and HowTo’s.
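If you would rather script the credential checks described above than poke at the endpoint from a browser, a small throwaway client along these lines can help. This is only a sketch: the endpoint, app_id and app_key values are placeholders you need to replace with your own.

import java.net.HttpURLConnection;
import java.net.URL;

public class SentimentApiSmokeTest {
    // Placeholder values - replace with your sandbox/Nginx endpoint and 3scale credentials.
    private static final String ENDPOINT = "http://your-public-dns";
    private static final String APP_ID = "YOUR_APP_ID";
    private static final String APP_KEY = "YOUR_APP_KEY";

    public static void main(String[] args) throws Exception {
        // Expect a 403 (or similar) without credentials and a 200 once valid credentials are supplied.
        call(ENDPOINT + "/v1/words/awesome.json");
        call(ENDPOINT + "/v1/words/awesome.json?app_id=" + APP_ID + "&app_key=" + APP_KEY);
    }

    private static void call(String url) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        System.out.println(url + " -> HTTP " + connection.getResponseCode());
        connection.disconnect();
    }
}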
It has been few months since SOA Patterns was published and so far the book sold somewhere between 2K-3K copies which I guess is not bad for an unknown author – so first off, thanks to all of you who bought a copy (by the way, if you found the book useful I’d be grateful if you could also rate it on Amazon so that others would know about it too) I know at least a few of you actually read the book as from time to time I get questions about it :). Not all the questions are interesting to “the general public” but some are. One interesting question I got is about the so called “Canonical schema pattern“. I have a post in the making (for too long now,sorry about that Bill) that explains why I don’t consider it a pattern and why I think it verges on being an anti-pattern. Another question I got more recently, which is also the subject of this post, was about the Saga pattern. Here is (most of) the email I got from Ashic : “Garcia-Molina’s paper focuses on failure management and compensation so as to prevent partial success. It discusses a variety of approaches – with an SEC, with application code outside of the database, backward-forward and even forward-only (the latter having no “compensate” step per activity, rather a forward flow that takes care of the partial success). Nowadays, I see two viewpoints regarding sagas: 1. People calling process managers sagas, which is obviously incorrect. [e.g. NServiceBus "sagas".] 2. People focusing very strongly on a “context” of work, whereby the context gets passed around from activity to activity. For linear up front workflows, routing slips are an easy solution. An example of this can be found at Clemens’s post here: http://vasters.com/clemensv/2012/09/01/Sagas.aspx . For more complicated workflows, graph-like slips may be used. After discussing with some enthusiasts, they seem very keen to suggest that the context has to move along. They seem to reject the notion of a saga where a central coordinator controls the process. In other words, even if a process manager takes care of only routing messages, and that routing includes compensations to alleviate partial successes, they are unwilling (sometimes vehemently) to call that a saga. They acknowledge it can be useful, but say that is not a saga. I find this to be confusing. In this case the process manager acts as the SEC would in a Garcia-Molina saga capable database. This approach still allows interleaved transactions (or steps) without a global lock. Why would this not be a saga? In your book, I did see you mentioned orchestration as a way of implementing sagas. However, when this was brought up, the proponents of point 2 suggest that that is not what you really mean. To me it seems quite clear, and it aligns with Hector’s paper. I just want to make sure I have this right. I’d love your thoughts on this.” Let’s start with the answer to the question: When I think about the Saga pattern I see it as the application of the notions in the Garcia-Molina paper (which talked about databases) to SOA. In other words, I see sagas as the notion of getting distributed agreement of a process with reduced guarantees (vs. distributed transactions that propose ACID guarantees across systems). – So,basically, a Saga is loose transaction-like flow where, in case of failures, involved services perform compensation steps (which may be nothing, a complete undo or something else entirely). The Saga pattern can augment this process with temporary promises (which I call reservations). 
Under this definition both centrally managed processes and "choreographed" processes are Sagas – as long as the semantics and intent mentioned above are kept. The centrally managed orchestration provides visibility of processes, ease of management, etc.; the cooperative, event-based, context-shared sagas provide flexibility and allow serendipity of new processes. Both have merit and both have a place, at least in my opinion :) The main reason both of these very different approaches are valid designs and implementations of the Saga pattern is that the Saga pattern (like others in the book) is an architectural pattern and not a design pattern. Which brings us to the second reason for this post, the difference between "Architecture" and "Design". In a nutshell, architecture is a type of design where the focus is quality attributes and wide(r) scope, whereas design focuses on functional requirements and more localized concerns. The Saga pattern is an architectural pattern that focuses on the integrity and reliability quality attributes, and it pertains to the communication patterns between services. When it comes to designing the implementation of the pattern, you need to decide how to implement the concerns and roles defined in the pattern – e.g. controlling the flow and the status of the saga. One decision can be to implement it centrally and use orchestration; another can be to decentralize it and use a shared context… Design decisions can be very meaningful; sometimes it can be hard to find what's left of the architecture – consider for example the whole idea behind blogging and RSS feeds. The architectural notion is a publish/subscribe system where the blog writer publishes an "event" (a new post) and subscribers get a copy. When it came to design and implementation, considering it was implemented on top of HTTP and REST where there is no publish/subscribe capability, it was actually designed as a pull system where the publisher provides a list of recent changes (the feed) and subscribers sample it and check if anything changed since the last time. So architecturally pub/sub, but in design a pull model against a centralized server that exposes the latest changes – a really big difference. Does it matter at all? I think yes. Architecture lets us think about the system at a higher level of abstraction and thus tackle more complex systems. When we design and focus on more local issues we can tackle the nitty-gritty details and make sure things actually work. We need to check the effects of design on architecture and vice versa to make sure the whole thing sticks together and actually does what we want/need. Note that architecture and design are not the complete story – another variable is the technology (e.g. HTTP in the example above) which affects the design decisions and thus also the architecture (you can read a little more about it in my posts on SAF).
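To make the architecture vs. design distinction a bit more tangible in code, here is a toy sketch of the centrally managed variant: a coordinator that runs activities in order and, on failure, runs the compensations of whatever already completed. None of these types come from the book or any framework; the names are made up, and a real implementation would also deal with persistence, retries and messaging. The choreographed variant would spread the same responsibilities across the participating services and a shared context instead of a central class like this.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical contract for a single saga step.
interface SagaActivity {
    void execute() throws Exception;   // do the work, possibly after taking a reservation
    void compensate();                 // may be a no-op, a full undo, or something else entirely
}

public class SagaCoordinator {
    public void run(List<SagaActivity> activities) {
        Deque<SagaActivity> completed = new ArrayDeque<>();
        for (SagaActivity activity : activities) {
            try {
                activity.execute();
                completed.push(activity);
            } catch (Exception failure) {
                // Partial success: compensate the completed activities in reverse order.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw new IllegalStateException("Saga failed and was compensated", failure);
            }
        }
    }
}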
One of the most popular posts on my blog is the short tutorial about the JDBC Security Realm and form based Authentication on GlassFish with Primefaces. After I received some comments about it that it isn't any longer working with latest GlassFish 3.1.2.2 I thought it might be time to revisit it and present an updated version. Here we go: Preparation As in the original tutorial I am going to rely on some stuff. Make sure to have a recent NetBeans 7.3 beta2 (which includes GlassFish 3.1.2.2) and the MySQL Community Server (5.5.x) installed. You should have verified that everything is up an running and that you can start GlassFish and the MySQL Server also is started. Some Basics A GlassFish authentication realm, also called a security policy domain or security domain, is a scope over which the GlassFish Server defines and enforces a common security policy. GlassFish Server is preconfigured with the file, certificate, and administration realms. In addition, you can set up LDAP, JDBC, digest, Oracle Solaris, or custom realms. An application can specify which realm to use in its deployment descriptor. If you want to store the user credentials for your application in a database your first choice is the JDBC realm. Prepare the Database Fire up NetBeans and switch to the Services tab. Right click the "Databases" node and select "Register MySQL Server". Fill in the details of your installation and click "ok". Right click the new MySQL node and select "connect". Now you see all the already available databases. Right click again and select "Create Database". Enter "jdbcrealm" as the new database name. Remark: We're not going to do all that with a separate database user. This is something that is highly recommended but I am using the root user in this examle. If you have a user you can also grant full access to it here. Click "ok". You get automatically connected to the newly created database. Expand the bold node and right click on "Tables". Select "Execute Command" or enter the table details via the wizard. CREATE TABLE USERS ( `USERID` VARCHAR(255) NOT NULL, `PASSWORD` VARCHAR(255) NOT NULL, PRIMARY KEY (`USERID`) ); CREATE TABLE USERS_GROUPS ( `GROUPID` VARCHAR(20) NOT NULL, `USERID` VARCHAR(255) NOT NULL, PRIMARY KEY (`GROUPID`) ); That is all for now with the database. Move on to the next paragraph. Let GlassFish know about MySQL First thing to do is to get the latest and greatest MySQL Connector/J from the MySQL website which is 5.1.22 at the time of writing this. Extract the mysql-connector-java-5.1.22-bin.jar file and drop it into your domain folder (e.g. glassfish\domains\domain1\lib). Done. Now it is finally time to create a project. Basic Project Setup Start a new maven based web application project. Choose "New Project" > "Maven" > Web Application and hit next. Now enter a name (e.g. secureapp) and all the needed maven cordinates and hit next. Choose your configured GlassFish 3+ Server. Select Java EE 6 Web as your EE version and hit "Finish". Now we need to add some more configuration to our GlassFish domain.Right click on the newly created project and select "New > Other > GlassFish > JDBC Connection Pool". Enter a name for the new connection pool (e.g. SecurityConnectionPool) and underneath the checkbox "Extract from Existing Connection:" select your registered MySQL connection. Click next. review the connection pool properties and click finish. The newly created Server Resources folder now shows your sun-resources.xml file. 
Follow the steps and create a "New > Other > GlassFish > JDBC Resource" pointing the the created SecurityConnectionPool (e.g. jdbc/securityDatasource).You will find the configured things under "Other Sources / setup" in a file called glassfish-resources.xml. It gets deployed to your server together with your application. So you don't have to care about configuring everything with the GlassFish admin console.Additionally we still need Primefaces. Right click on your project, select "Properties" change to "Frameworks" category and add "JavaServer Faces". Switch to the Components tab and select "PrimeFaces". Finish by clicking "OK". You can validate if that worked by opening the pom.xml and checking for the Primefaces dependency. 3.4 should be there. Feel free to change the version to latest 3.4.2. Final GlassFish Configuration Now it is time to fire up GlassFish and do the realm configuration. In NetBeans switch to the "Services" tab again and right click on the "GlassFish 3+" node. Select "Start" and watch the Output window for a successful start. Right click again and select "View Domain Admin Console", which should open your default browser pointing you to http://localhost:4848/. Select "Configurations > server-config > Security > Realms" and click "New..." on top of the table. Enter a name (e.g. JDBCRealm) and select the com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm from the drop down. Fill in the following values into the textfields: JAAS jdbcRealm JNDI jdbc/securityDatasource User Table users User Name Column username Password Column password Group Table groups Group Name Column groupname Leave all the other defaults/blanks and select "OK" in the upper right corner. You are presented with a fancy JavaScript warning window which tells you to _not_ leave the Digest Algorithm Field empty. I field a bug about it. It defaults to SHA-256. Which is different to GlassFish versions prior to 3.1 which used MD5 here. The older version of this tutorial didn't use a digest algorithm at all ("none"). This was meant to make things easier but isn't considered good practice at all. So, let's stick to SHA-256 even for development, please. Secure your application Done with configuring your environment. Now we have to actually secure the application. First part is to think about the resources to protect. Jump to your Web Pages folder and create two more folders. One named "admin" and another called "users". The idea behind this is, to have two separate folders which could be accessed by users belonging to the appropriate groups. Now we have to create some pages. Open the Web Pages/index.xhtml and replace everything between the h:body tags with the following: Select where you want to go: Now add a new index.xhtml to both users and admin folders. Make them do something like this: Hello Admin|User On to the login.xhtml. Create it with the following content in the root of your Web Pages folder. Username: Password: As you can see, whe have the basic Primefaces p:panel component which has a simple html form which points to the predefined action j_security_check. This is, where all the magic is happening. You also have to include two input fields for username and password with the predefined names j_username and j_password. Now we are going to create the loginerror.xhtml which is displayed, if the user did not enter the right credentials. (use the same DOCTYPE and header as seen in the above example). Sorry, you made an Error. 
Please try again: Login The only magic here is the href link of the Login anchor. We need to get the correct request context and this could be done by accessing the faces context. If a user without the appropriate rights tries to access a folder he is presented a 403 access denied error page. If you like to customize it, you need to add it and add the following lines to your web.xml: 403 /faces/403.xhtml That snippet defines, that all requests that are not authorized should go to the 403 page. If you have the web.xml open already, let's start securing your application. We need to add a security constraint for any protected resource. Security Constraints are least understood by web developers, even though they are critical for the security of Java EE Web applications. Specifying a combination of URL patterns, HTTP methods, roles and transport constraints can be daunting to a programmer or administrator. It is important to realize that any combination that was intended to be secure but was not specified via security constraints, will mean that the web container will allow those requests. Security Constraints consist of Web Resource Collections (URL patterns, HTTP methods), Authorization Constraint (role names) and User Data Constraints (whether the web request needs to be received over a protected transport such as TLS). Admin Pages Protected Admin Area /faces/admin/* GET POST HEAD PUT OPTIONS TRACE DELETE admin NONE All Access None Protected User Area /faces/users/* GET POST HEAD PUT OPTIONS TRACE DELETE NONE If the constraints are in place you have to define, how the container should challenge the user. A web container can authenticate a web client/user using either HTTP BASIC, HTTP DIGEST, HTTPS CLIENT or FORM based authentication schemes. In this case we are using FORM based authentication and define the JDBCRealm FORM JDBCRealm /faces/login.xhtml /faces/loginerror.xhtml The realm name has to be the name that you assigned the security realm before. Close the web.xml and open the sun-web.xml to do a mapping from the application role-names to the actual groups that are in the database. This abstraction feels weird, but it has some reasons. It was introduced to have the option of mapping application roles to different group names in enterprises. I have never seen this used extensively but the feature is there and you have to configure it. Other appservers do make the assumption that if no mapping is present, role names and group names do match. GlassFish doesn't think so. Therefore you have to put the following into the glassfish-web.xml. You can create it via a right click on your project's WEB-INF folder, selecting "New > Other > GlassFish > GlassFish Descriptor" admin admin hat was it _basically_ ... everything you need is in place. The only thing that is missing are the users in the database. It is still empty ...We need to add a test user: Adding a Test-User to the Database And again we start by right clicking on the jdbcrealm database on the "Services" tab in NetBeans. Select "Execute Command" and insert the following: INSERT INTO USERS VALUES ("admin", "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"); INSERT INTO USERS_GROUPS VALUES ("admin", "admin"); You can login with user: admin and password: admin and access the secured area. 
Sample code to generate the hash could look like this: try { MessageDigest md = MessageDigest.getInstance("SHA-256"); String text = "admin"; md.update(text.getBytes("UTF-8")); // Change this to "UTF-16" if needed byte[] digest = md.digest(); BigInteger bigInt = new BigInteger(1, digest); String output = bigInt.toString(16); System.out.println(output); } catch (NoSuchAlgorithmException | UnsupportedEncodingException ex) { Logger.getLogger(PasswordTest.class.getName()).log(Level.SEVERE, null, ex); } Have fun securing your apps and keep the questions coming! In case you need it, the complete source code is on https://github.com/myfear/JDBCRealmExample
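If you later need to verify a candidate password against the stored value outside the container, for example in a unit test or a small migration script, the same digest logic can be wrapped in a tiny helper. The class name below is made up for illustration; the hash constant is the SHA-256 of "admin" used above.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PasswordHashVerifier {
    public static String sha256Hex(String clearText) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(clearText.getBytes(StandardCharsets.UTF_8));
        // Mirrors the BigInteger-based formatting used in the snippet above (no leading-zero padding).
        return new BigInteger(1, digest).toString(16);
    }

    public static boolean matches(String clearText, String storedHash) throws NoSuchAlgorithmException {
        return sha256Hex(clearText).equalsIgnoreCase(storedHash);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(matches("admin",
            "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918")); // prints true
    }
}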
Most Java programmers are familiar with the Java thread deadlock concept. It essentially involves 2 threads waiting forever for each other. This condition is often the result of flat (synchronized) or ReentrantLock (read or write) lock-ordering problems. Found one Java-level deadlock: ============================= "pool-1-thread-2": waiting to lock monitor 0x0237ada4 (object 0x272200e8, a java.lang.Object), which is held by "pool-1-thread-1" "pool-1-thread-1": waiting to lock monitor 0x0237aa64 (object 0x272200f0, a java.lang.Object), which is held by "pool-1-thread-2" The good news is that the HotSpot JVM is always able to detect this condition for you…or is it? A recent thread deadlock problem affecting an Oracle Service Bus production environment has forced us to revisit this classic problem and identify the existence of "hidden" deadlock situations. This article will demonstrate and replicate via a simple Java program a very special lock-ordering deadlock condition which is not detected by the latest HotSpot JVM 1.7. You will also find a video at the end of the article explaining the Java sample program and the troubleshooting approach used. The crime scene I usually like to compare major Java concurrency problems to a crime scene where you play the lead investigator role. In this context, the "crime" is an actual production outage of your client IT environment. Your job is to: Collect all the evidence, hints & facts (thread dump, logs, business impact, load figures…) Interrogate the witnesses & domain experts (support team, delivery team, vendor, client…) The next step of your investigation is to analyze the collected information and establish a potential list of one or many "suspects" along with clear proofs. Eventually, you want to narrow it down to a primary suspect or root cause. Obviously the law "innocent until proven guilty" does not apply here, exactly the opposite. Lack of evidence can prevent you from achieving the above goal. What you will see next is that the lack of deadlock detection by the HotSpot JVM does not necessarily prove that you are not dealing with this problem. The suspect In this troubleshooting context, the "suspect" is defined as the application or middleware code with the following problematic execution pattern. Usage of FLAT lock followed by the usage of ReentrantLock WRITE lock (execution path #1) Usage of ReentrantLock READ lock followed by the usage of FLAT lock (execution path #2) Concurrent execution performed by 2 Java threads but via a reversed execution order The above lock-ordering deadlock criteria can be visualized as per the diagram below: Now let's replicate this problem via our sample Java program and look at the JVM thread dump output. Sample Java program The above deadlock condition was first identified from our Oracle OSB problem case. We then re-created it via a simple Java program. You can download the entire source code of our program here. The program is simply creating and firing 2 worker threads. Each of them executes a different execution path and attempts to acquire locks on shared objects but in different orders. We also created a deadlock detector thread for monitoring and logging purposes. For now, find below the Java class implementing the 2 different execution paths.
package org.ph.javaee.training8; import java.util.concurrent.locks.ReentrantReadWriteLock; /** * A simple thread task representation * @author Pierre-Hugues Charbonneau * */ public class Task { // Object used for FLAT lock private final Object sharedObject = new Object(); // ReentrantReadWriteLock used for WRITE & READ locks private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); /** * Execution pattern #1 */ public void executeTask1() { // 1. Attempt to acquire a ReentrantReadWriteLock READ lock lock.readLock().lock(); // Wait 2 seconds to simulate some work... try { Thread.sleep(2000);}catch (Throwable any) {} try { // 2. Attempt to acquire a Flat lock... synchronized (sharedObject) {} } // Remove the READ lock finally { lock.readLock().unlock(); } System.out.println("executeTask1() :: Work Done!"); } /** * Execution pattern #2 */ public void executeTask2() { // 1. Attempt to acquire a Flat lock synchronized (sharedObject) { // Wait 2 seconds to simulate some work... try { Thread.sleep(2000);}catch (Throwable any) {} // 2. Attempt to acquire a WRITE lock lock.writeLock().lock(); try { // Do nothing } // Remove the WRITE lock finally { lock.writeLock().unlock(); } } System.out.println("executeTask2() :: Work Done!"); } public ReentrantReadWriteLock getReentrantReadWriteLock() { return lock; } } As soon as the deadlock situation was triggered, a JVM thread dump was generated using JVisualVM. As you can see from the Java thread dump sample, the JVM did not detect this deadlock condition (e.g. no presence of Found one Java-level deadlock) but it is clear these 2 threads are in a deadlock state. Root cause: ReentrantLock READ lock behavior The main explanation we found at this point is associated with the usage of the ReentrantLock READ lock. The read locks are normally not designed to have a notion of ownership. Since there is not a record of which thread holds a read lock, this appears to prevent the HotSpot JVM deadlock detector logic from detecting deadlocks involving read locks. Some improvements have been implemented since then but we can see that the JVM still cannot detect this special deadlock scenario. Now if we replace the read lock (execution pattern #2) in our program with a write lock, the JVM will finally detect the deadlock condition, but why?
Found one Java-level deadlock: ============================= "pool-1-thread-2": waiting for ownable synchronizer 0x272239c0, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "pool-1-thread-1" "pool-1-thread-1": waiting to lock monitor 0x025cad3c (object 0x272236d0, a java.lang.Object), which is held by "pool-1-thread-2" Java stack information for the threads listed above: =================================================== "pool-1-thread-2": at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x272239c0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197) at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945) at org.ph.javaee.training8.Task.executeTask2(Task.java:54) - locked <0x272236d0> (a java.lang.Object) at org.ph.javaee.training8.WorkerThread2.run(WorkerThread2.java:29) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) "pool-1-thread-1": at org.ph.javaee.training8.Task.executeTask1(Task.java:31) - waiting to lock <0x272236d0> (a java.lang.Object) at org.ph.javaee.training8.WorkerThread1.run(WorkerThread1.java:29) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) This is because write locks are tracked by the JVM similarly to flat locks. This means the HotSpot JVM deadlock detector appears to be currently designed to detect: Deadlock on Object monitors involving FLAT locks Deadlock involving Locked ownable synchronizers associated with WRITE locks The lack of read lock per-thread tracking appears to prevent deadlock detection for this scenario and significantly increases the troubleshooting complexity. I suggest that you read Doug Lea's comments on this whole issue, since concerns were raised back in 2005 regarding the possibility of adding per-thread read-hold tracking due to some potential lock overhead. Find below my troubleshooting recommendations if you suspect a hidden deadlock condition involving read locks: Analyze closely the thread call stack trace; it may reveal some code potentially acquiring read locks and preventing other threads from acquiring write locks. If you are the owner of the code, keep track of the read lock count via the usage of the lock.getReadLockCount() method. I'm looking forward to your feedback, especially from individuals with experience with this type of deadlock involving read locks. Finally, find below a video explaining such findings via the execution and monitoring of our sample Java program.
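As a side note, the deadlock detector thread mentioned earlier can be sketched with the standard ThreadMXBean API, along the lines below (a minimal illustration, not the exact detector used in our sample program). Keep in mind that the same limitation applies: findDeadlockedThreads() covers object monitors and ownable synchronizers such as write locks, so a cycle involving only read locks will remain invisible to it as well.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetectorThread extends Thread {
    @Override
    public void run() {
        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            // Detects cycles involving monitors and ownable synchronizers (e.g. write locks);
            // read-lock-only cycles are not reported, as discussed above.
            long[] ids = mxBean.findDeadlockedThreads();
            if (ids != null) {
                for (ThreadInfo info : mxBean.getThreadInfo(ids)) {
                    System.err.println("Deadlocked thread detected: " + info.getThreadName());
                }
            }
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}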
in my last post i wrote about sorting files in linux. decently large files (in the tens of gb’s) can be sorted fairly quickly using that approach. but what if your files are already in hdfs, or ar hundreds of gb’s in size or larger? in this case it makes sense to use mapreduce and leverage your cluster resources to sort your data in parallel. mapreduce should be thought of as a ubiquitous sorting tool, since by design it sorts all the map output records (using the map output keys), so that all the records that reach a single reducer are sorted. the diagram below shows the internals of how the shuffle phase works in mapreduce. given that mapreduce already performs sorting between the map and reduce phases, then sorting files can be accomplished with an identity function (one where the inputs to the map and reduce phases are emitted directly). this is in fact what the sort example that is bundled with hadoop does. you can look at the how the example code works by examining the org.apache.hadoop.examples.sort class. to use this example code to sort text files in hadoop, you would use it as follows: shell$ export hadoop_home=/usr/lib/hadoop shell$ $hadoop_home/bin/hadoop jar $hadoop_home/hadoop-examples.jar sort \ -informat org.apache.hadoop.mapred.keyvaluetextinputformat \ -outformat org.apache.hadoop.mapred.textoutputformat \ -outkey org.apache.hadoop.io.text \ -outvalue org.apache.hadoop.io.text \ /hdfs/path/to/input \ /hdfs/path/to/output this works well, but it doesn’t offer some of the features that i commonly rely upon in linux’s sort, such as sorting on a specific column, and case-insensitive sorts. linux-esque sorting in mapreduce i’ve started a new github repo called hadoop-utils , where i plan to roll useful helper classes and utilities. the first one is a flexible hadoop sort. the same hadoop example sort can be accomplished with the hadoop-utils sort as follows: shell$ $hadoop_home/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \ com.alexholmes.hadooputils.sort.sort \ /hdfs/path/to/input \ /hdfs/path/to/output to bring sorting in mapreduce closer to the linux sort, the --key and --field-separator options can be used to specify one or more columns that should be used for sorting, as well as a custom separator (whitespace is the default). for example, imagine you had a file in hdfs called /input/300names.txt which contained first and last names: shell$ hadoop fs -cat 300names.txt | head -n 5 roy franklin mario gardner willis romero max wilkerson latoya larson to sort on the last name you would run: shell$ $hadoop_home/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \ com.alexholmes.hadooputils.sort.sort \ --key 2 \ /input/300names.txt \ /hdfs/path/to/output the syntax of --key is pos1[,pos2] , where the first position (pos1) is required, and the second position (pos2) is optional - if it’s omitted then pos1 through the rest of the line is used for sorting. just like the linux sort, --key is 1-based, so --key 2 in the above example will sort on the second column in the file. lzop integration another trick that this sort utility has is its tight integration with lzop, a useful compression codec that works well with large files in mapreduce (see chapter 5 of hadoop in practice for more details on lzop). it can work with lzop input files that span multiple splits, and can also lzop-compress outputs, and even create lzop index files. 
you would do this with the codec and lzop-index options: shell$ $hadoop_home/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \ com.alexholmes.hadooputils.sort.sort \ --key 2 \ --codec com.hadoop.compression.lzo.lzopcodec \ --map-codec com.hadoop.compression.lzo.lzocodec \ --lzop-index \ /hdfs/path/to/input \ /hdfs/path/to/output multiple reducers and total ordering if your sort job runs with multiple reducers (either because mapreduce.job.reduces in mapred-site.xml has been set to a number larger than 1, or because you’ve used the -r option to specify the number of reducers on the command-line), then by default hadoop will use the hashpartitioner to distribute records across the reducers. use of the hashpartitioner means that you can’t concatenate your output files to create a single sorted output file. to do this you’ll need total ordering , which is supported by both the hadoop example sort and the hadoop-utils sort - the hadoop-utils sort enables this with the --total-order option. shell$ $hadoop_home/bin/hadoop jar hadoop-utils--jar-with-dependencies.jar \ com.alexholmes.hadooputils.sort.sort \ --total-order 0.1 10000 10 \ /hdfs/path/to/input \ /hdfs/path/to/output the syntax is for this option is unintuitive so let’s look at what each field means. more details on total ordering can be seen in chapter 4 of hadoop in practice . more details for details on how to download and run the hadoop-utils sort take a look at the cli guide in the github project page .
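To make the identity-function idea from the beginning of the post concrete, a minimal sort driver using the org.apache.hadoop.mapreduce API could look like the sketch below. The class name is made up, the driver assumes a Hadoop 2.x-style API, and it has none of the key, separator, lzop or total-order features of the hadoop-utils sort; with the default single reducer it simply produces one sorted output file.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IdentitySortJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "identity sort");
        job.setJarByClass(IdentitySortJob.class);
        // No mapper or reducer is set, so the identity implementations are used;
        // the shuffle's sort on the map output key does all the actual work.
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}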
Almost all the implementation I see today are based on OAuth 2.0 Bearer Token Profile. Of course its an RFC proposed standard today. OAuth 2.0 Bearer Token profile brings a simplified scheme for authentication. This specification describes how to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a "bearer") can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport. Before dig in to the OAuth 2.0 MAC profile lets have quick high-level overview of OAuth 2.0 message flow. OAuth 2.0 has mainly three phases. 1. Requesting an Authorization Grant. 2. Exchanging the Authorization Grant for an Access Token. 3. Access the resources with the Access Token. Where does the token type come in to action ? OAuth 2.0 core specification does not mandate any token type. At the same time at any point token requester - client - cannot decide which token type it needs. It's purely up to the Authorization Server to decide which token type to be returned in the Access Token response. So, the token type comes in to action in phase-2 when Authorization Server returning back the OAuth 2.0 Access Token. The access token type provides the client with the information required to successfully utilize the access token to make a protected resource request (along with type-specific attributes). The client must not use an access token if it does not understand the token type. Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a protected resource request. For example following is what you get for Access Token response irrespective of which grant type you use. HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { "access_token":"mF_9.B5f-4.1JqM", "token_type":"Bearer", "expires_in":3600, "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA" } The above is for Bearer - following is for MAC. HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store { "access_token":"SlAV32hkKG", "token_type":"mac", "expires_in":3600, "refresh_token":"8xLOxBtZp8", "mac_key":"adijq39jdlaska9asud", "mac_algorithm":"hmac-sha-256" } Here you can see MAC Access Token response has two additional attributes. mac_key and the mac_algorithm. Let me rephrase this - "Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter". This MAC Token Profile defines the HTTP MAC access authentication scheme, providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host. In the above response access_token is the MAC key identifier. Unlike in Bearer, MAC token profile never passes it's top secret over the wire. The access_token or the MAC key identifier is a string identifying the MAC key used to calculate the request MAC. The string is usually opaque to the client. The server typically assigns a specific scope and lifetime to each set of MAC credentials. The identifier may denote a unique value used to retrieve the authorization information (e.g. 
from a database), or self-contain the authorization information in a verifiable manner (i.e. a string consisting of some data and a signature). The mac_key is a shared symmetric secret used as the MAC algorithm key. The server will not reissue a previously issued MAC key and MAC key identifier combination. Now let's see what happens in phase-3. The following shows how the Authorization HTTP header looks when a Bearer Token is used. Authorization: Bearer mF_9.B5f-4.1JqM This adds very low overhead on the client side. It simply needs to pass the exact access_token it got from the Authorization Server in phase-2. Under the MAC token profile, this is how it looks. Authorization: MAC id="h480djs93hd8", ts="1336363200", nonce="dj83hs9s", mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM=" This needs a bit more attention. id is the MAC key identifier or the access_token from phase-2. ts is the request timestamp. The value is a positive integer set by the client when making each request to the number of seconds elapsed from a fixed point in time (e.g. January 1, 1970 00:00:00 GMT). This value is unique across all requests with the same timestamp and MAC key identifier combination. nonce is a unique string generated by the client. The value is unique across all requests with the same timestamp and MAC key identifier combination. The client uses the MAC algorithm and the MAC key to calculate the request mac. This is how you derive the normalized string to generate the HMAC. The normalized request string is a consistent, reproducible concatenation of several of the HTTP request elements into a single string. By normalizing the request into a reproducible string, the client and server can both calculate the request MAC over the exact same value. The string is constructed by concatenating together, in order, the following HTTP request elements, each followed by a new line character (%x0A): 1. The timestamp value calculated for the request. 2. The nonce value generated for the request. 3. The HTTP request method in upper case. For example: "HEAD", "GET", "POST", etc. 4. The HTTP request-URI as defined by [RFC2616] section 5.1.2. 5. The hostname included in the HTTP request using the "Host" request header field in lower case. 6. The port as included in the HTTP request using the "Host" request header field. If the header field does not include a port, the default value for the scheme MUST be used (e.g. 80 for HTTP and 443 for HTTPS). 7. The value of the "ext" "Authorization" request header field attribute if one was included in the request (this is optional), otherwise, an empty string. Each element is followed by a new line character (%x0A) including the last element and even when an element value is an empty string. Whether you use Bearer or MAC - the end user or the resource owner is identified using the access_token. Authorization, throttling, monitoring or any other quality of service operations can be carried out against the access_token irrespective of which token profile you use.
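To see what this means for a client in practice, here is a rough sketch of building the normalized request string and computing the request MAC with HMAC-SHA-256. The class name and sample values are illustrative only, the encoding details are simplified (java.util.Base64 from Java 8 is used here), and the authoritative element order and formatting are defined by the MAC token profile draft itself.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class MacTokenSigner {
    // Builds the normalized request string described above and signs it with the mac_key.
    public static String requestMac(String macKey, String ts, String nonce,
                                    String method, String requestUri,
                                    String host, String port, String ext) throws Exception {
        // Each element is followed by a newline, including the last one, even when empty.
        String normalized = ts + "\n" + nonce + "\n" + method.toUpperCase() + "\n"
                + requestUri + "\n" + host.toLowerCase() + "\n" + port + "\n"
                + (ext == null ? "" : ext) + "\n";
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getEncoder().encodeToString(hmac.doFinal(normalized.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Sample values taken from the responses above; host and URI are made up.
        String mac = requestMac("adijq39jdlaska9asud", "1336363200", "dj83hs9s",
                "GET", "/v1/words/awesome.json", "api.example.com", "80", null);
        System.out.println("Authorization: MAC id=\"SlAV32hkKG\", ts=\"1336363200\", "
                + "nonce=\"dj83hs9s\", mac=\"" + mac + "\"");
    }
}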
In this post we will utilize GitHub and/or BitBucket's static web page hosting capabilities to publish our project's Maven 3 Site Documentation. Each of the two SCM providers offers a slightly different solution to host static pages. The approach spelled out in this post would also be a viable solution to "backup" your site documentation in a supported SCM like Git or SVN. This solution does not directly cover site documentation deployment handled by the maven-site-plugin and the Wagon library (scp, WebDAV or FTP). There is one main project hosted on GitHub that I have posted with the full solution. The project URL is https://github.com/mike-ensor/clickconcepts-master-pom/. The POM has been pushed to Maven Central and will continue to be updated and maintained. The coordinates are groupId com.clickconcepts.project, artifactId master-site-pom, version 0.16. GitHub Pages GitHub hosts static pages by using a special branch "gh-pages" available to each GitHub project. This special branch can host any HTML and local resources like JavaScript, images and CSS. There is no server side development. To navigate to your static pages, the URL structure is as follows: http://<username>.github.com/<project>. An example of the project I am using in this blog post: http://mike-ensor.github.com/clickconcepts-master-pom/ where the first URL segment is a username and the second URL segment is the project. GitHub does allow you to create a base static hosted site for your username by creating a repository named username.github.com. The contents would be all of your HTML and associated static resources. This is not required to post documentation for your project, unlike the BitBucket solution. There is a GitHub Site plugin that publishes site documentation via GitHub's object API but this is outside the scope of this blog post because it does not provide a single solution for GitHub and BitBucket projects using Maven 3. BitBucket BitBucket provides a similar service to GitHub in that it hosts static HTML pages and their associated static resources. However, there is one large difference in how those pages are stored. Unlike GitHub, BitBucket requires you to create a new repository with a name fitting the convention. The files will be located on the master branch and each project will need to be a directory off of the root. mikeensor.bitbucket.org/ /some-project +index.html +... /css /img /some-other-project +index.html +... /css /img index.html .git .gitignore The naming convention is as follows: <username>.bitbucket.org An example of a BitBucket static pages repository for me would be: http://mikeensor.bitbucket.org/. The structure does not require that you create an index.html page at the root of the project, but it would be advisable to avoid 404s. Generating Site Documentation Maven provides the ability to post documentation for your project by using the maven-site-plugin. This plugin is difficult to use due to the many configuration options that oftentimes are not well documented. There are many blog posts that can help you write your documentation, including my post on maven site documentation. I did not mention how to use "xdoc", "apt" or other templating technologies to create documentation pages, but not to fear, I have provided this in my GitHub project. Putting it all Together The Maven SCM Publish plugin (http://maven.apache.org/plugins/maven-scm-publish-plugin/) publishes site documentation to a supported SCM. In our case, we are going to use Git through BitBucket or GitHub.
Maven SCM Plugin does allow you to publish multi-module site documentation through the various properties, but the scope of this blog post is to cover single/mono module projects and the process is a bit painful. Take a moment to look at the POM file located in the clickconcepts-master-pom project. This master POM is rather comprehensive and the site documentation is only one portion of the project, but we will focus on the site documentation. There are a few things to point out here, first, the scm-publish plugin and the idiosyncronies when implementing the plugin. In order to create the site documentation, the "site" plugin must first be run. This is accomplished by running site:site. The plugin will generate the documentation into the "target/site" folder by default. The SCM Publish Plugin, by default, looks for the site documents to be in "target/staging" and is controlled by the content parameter. As you can see, there is a mismatch between folders. NOTE: My first approach was to run the site:stage command which is supposed to put the site documents into the "target/staging" folder. This is not entirely correct, the site plugin combines with the distributionManagement.site.url property to stage the documents, but there is very strange behavior and it is not documented well. In order to get the site plugin's site documents and the SCM Publish's location to match up, use the content property and set that to the location of the Site Plugin output (). If you are using GitHub, there is no modification to the siteOutputDirectory needed, however, if you are using BitBucket, you will need to modify the property to add in a directory layer into the site documentation generation (see above for differences between GitHub and BitBucket pages). The second property will tell the SCM Publish Plugin to look at the root "site" folder so that when the files are copied into the repository, the project folder will be the containing folder. The property will look like: ${project.build.directory}/site/ ${project.artifactId} ${project.build.directory} /site Next we will take a look at the custom properties defined in the master POM and used by the SCM Publish Plugin above. Each project will need to define several properties to use the Master POM that are used within the plugins during the site publishing. Fill in the variables with your own settings. BitBucket ... ... master scm:git:git@bitbucket.org:mikeensor/mikeensor.bitbucket.org.git ${project.build.directory}/site/${project.artifactId} ${project.build.directory}/site ${changelog.bitbucket.fileUri} ${changelog.revision.bitbucket.fileUri} ... ... GitHub ... ... gh-pages scm:git:git@github.com:mikeensor/clickconcepts-master-pom.git ${changelog.github.fileUri} ${changelog.revision.github.fileUri} ... ... NOTE: changelog parameters are required to use the Master POM and are not directly related to publishing site docs to GitHub or BitBucket How to Generate If you are using the Master POM (or have abstracted out the Site Plugin and the SCM Plugin) then to generate and publish the documentation is simple. mvn clean site:site scm-publish:publish-scm mvn clean site:site scm-publish:publish-scm -Dscmpublish.dryRun=true Gotchas In the SCM Publish Plugin documentation's "tips" they recommend creating a location to place the repository so that the repo is not cloned each time. There is a risk here in that if there is a git repository already in the folder, the plugin will overwrite the repository with the new site documentation. 
This was discovered by publishing two different projects and having my root repository wiped out by documentation from the second project. There are ways to mitigate this by adding in another folder layer, but make sure you test often! Another gotcha is to use -Dscmpublish.dryRun=true to test out the site documentation process without making the SCM commit and push. Project and Documentation URLs Here is a list of the fully working projects used to create this blog post: Master POM with Site and SCM Publish plugins – https://github.com/mike-ensor/clickconcepts-master-pom. Documentation URL: http://mike-ensor.github.com/clickconcepts-master-pom/ Child Project using Master POM – http://mikeensor.bitbucket.org/fest-expected-exception. Documentation URL: http://mikeensor.bitbucket.org/fest-expected-exception/
I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy-to-use DAO implementation for relational databases based on JdbcTemplate from the Spring framework, compatible with the Spring Data umbrella of projects. Design objectives Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection This is not full-blown ORM. No relationship handling, lazy loading, dirty checking, caching CRUD implemented in seconds For small applications where JPA is an overkill Use when simplicity is needed or when future migration e.g. to JPA is considered Minimalistic support for database dialect differences (e.g. transparent paging of results) Features Each DAO provides built-in support for: Mapping to/from domain objects through RowMapper abstraction Generated and user-defined primary keys Extracting generated key Compound (multi-column) primary keys Immutable domain objects Paging (requesting subset of results) Sorting over several columns (database agnostic) Optional support for many-to-one relationships Supported databases (continuously tested): MySQL PostgreSQL H2 HSQLDB Derby ...and most likely most of the others Easily extendable to other database dialects via SqlGenerator class. Easy retrieval of records by ID API Compatible with the Spring Data PagingAndSortingRepository abstraction, all these methods are implemented for you: public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> { T save(T entity); Iterable<T> save(Iterable<? extends T> entities); T findOne(ID id); boolean exists(ID id); Iterable<T> findAll(); long count(); void delete(ID id); void delete(T entity); void delete(Iterable<? extends T> entities); void deleteAll(); Iterable<T> findAll(Sort sort); Page<T> findAll(Pageable pageable); } Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have a userRepository extending the PagingAndSortingRepository interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 per page, after applying some sorting: Page<User> page = userRepository.findAll( new PageRequest( 5, 10, new Sort( new Order(DESC, "reputation"), new Order(ASC, "user_name") ) ) ); The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax): SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC LIMIT 10 OFFSET 50 ...or even (Derby syntax): SELECT * FROM ( SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.* FROM ( SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC ) AS t ) AS a WHERE ROW_NUM BETWEEN 51 AND 60 No matter which database you use, you'll get a Page<User> object in return (you still have to provide a RowMapper yourself to translate from the ResultSet to the domain object). If you don't know the Spring Data project yet, Page is a wonderful abstraction, not only encapsulating a List<T>, but also providing metadata such as the total number of records, which page we are currently on, etc. Reasons to use You consider migration to JPA or even some NoSQL database in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to: JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API.
Of course don't expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies - but at least you minimize the impact by using same DAO API. You need a fast, simple JDBC wrapper library. JPA or even MyBatis is an overkill You want to have full control over generated SQL if needed You want to work with objects, but don't need lazy loading, relationship handling, multi-level caching, dirty checking... You need CRUD and not much more You want to by DRY You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work You have very few database tables Getting started For more examples and working code don't forget to examine project tests. Prerequisites Maven coordinates: com.blogspot.nurkiewicz jdbcrepository 0.1 Unfortunately the project is not yet in maven central repository. For the time being you can install the library in your local repository by cloning it: $ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git $ git checkout 0.1 $ mvn javadoc:jar source:jar install In order to start your project must have DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration: @EnableTransactionManagement @Configuration public class MinimalConfig { @Bean public PlatformTransactionManager transactionManager() { return new DataSourceTransactionManager(dataSource()); } @Bean public DataSource dataSource() { MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource(); ds.setUser("user"); ds.setPassword("secret"); ds.setDatabaseName("db_name"); return ds; } } Entity with auto-generated key Say you have a following database table with auto-generated key (MySQL syntax): CREATE TABLE COMMENTS ( id INT AUTO_INCREMENT, user_name varchar(256), contents varchar(1000), created_time TIMESTAMP NOT NULL, PRIMARY KEY (id) ); First you need to create domain object User mapping to that table (just like in any other ORM): public class Comment implements Persistable { private Integer id; private String userName; private String contents; private Date createdTime; @Override public Integer getId() { return id; } @Override public boolean isNew() { return id == null; } //getters/setters/constructors/... } Apart from standard Java boilerplate you should notice implementing Persistable where Integer is the type of primary key. Persistable is an interface coming from Spring Data project and it's the only requirement we place on your domain object. Finally we are ready to create our CommentRepository DAO: @Repository public class CommentRepository extends JdbcRepository { public CommentRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS"); } public static final RowMapper ROW_MAPPER = //see below private static final RowUnmapper
@Repository
public class CommentRepository extends JdbcRepository<Comment, Integer> {

    public CommentRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS");
    }

    public static final RowMapper<Comment> ROW_MAPPER = //see below

    private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below

    @Override
    protected Comment postCreate(Comment entity, Number generatedId) {
        entity.setId(generatedId.intValue());
        return entity;
    }
}

First of all we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation. Beans annotated this way are also discovered by classpath scanning.
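As a side note (not part of the original example), such @Repository beans could be picked up automatically by adding component scanning to the Java configuration; the package name below is made up for illustration:

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

@Configuration
@ComponentScan("com.example.dao")   // package containing CommentRepository and friends
@Import(MinimalConfig.class)        // reuse the DataSource and transaction manager shown earlier
public class DaoConfig {
}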
As you can see, we extend JdbcRepository<Comment, Integer>, which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: a RowMapper, a RowUnmapper and the table name. You may also provide the ID column name, otherwise the default "id" is used.

If you have ever used JdbcTemplate from Spring, you should be familiar with the RowMapper interface. We need to somehow extract columns from a ResultSet into an object; after all, we don't want to work with raw JDBC results. It's quite straightforward:

public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() {

    @Override
    public Comment mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Comment(
            rs.getInt("id"),
            rs.getString("user_name"),
            rs.getString("contents"),
            rs.getTimestamp("created_time")
        );
    }
};
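As a side note that is not part of the original example: since the column names here (user_name, created_time) map onto camel-case bean properties, Spring's stock BeanPropertyRowMapper could arguably serve as a shortcut, provided Comment has a default constructor and setters:

import org.springframework.jdbc.core.BeanPropertyRowMapper;

// maps user_name -> userName, created_time -> createdTime, etc.
public static final RowMapper<Comment> ROW_MAPPER = new BeanPropertyRowMapper<Comment>(Comment.class);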
RowUnmapper comes from this library and is essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL INSERT / UPDATE statements:

private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() {

    @Override
    public Map<String, Object> mapColumns(Comment comment) {
        Map<String, Object> mapping = new LinkedHashMap<String, Object>();
        mapping.put("id", comment.getId());
        mapping.put("user_name", comment.getUserName());
        mapping.put("contents", comment.getContents());
        mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime()));
        return mapping;
    }
};

If you never update your database table (you only read some reference data inserted elsewhere), you may skip the RowUnmapper parameter or use MissingRowUnmapper.

The last piece of the puzzle is the postCreate() callback method, which is called after an object has been inserted. You can use it to retrieve the generated primary key and update your domain object (or return a new one if your domain objects are immutable). If you don't need it, just don't override postCreate(). Check out JdbcRepositoryGeneratedKeyTest for working code based on this example.
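To get a feel for the end result, here is a small, hypothetical usage snippet (wiring as in MinimalConfig above; the Comment constructor used here is assumed to take all four fields):

@Autowired
private CommentRepository commentRepository;

public void demo() {
    Comment comment = new Comment(null, "john", "Great library!", new Date());

    comment = commentRepository.save(comment);        // INSERT; postCreate() fills in the generated id
    System.out.println(comment.getId());              // e.g. 1

    comment.setContents("Great library! (edited)");
    commentRepository.save(comment);                  // UPDATE, because isNew() is now false

    long total = commentRepository.count();           // number of rows in COMMENTS
    Comment found = commentRepository.findOne(comment.getId());
}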
By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and a learning curve of their own. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations... all the implicitness that is not always desired. This project does not intend to replace mature and stable ORM frameworks. Instead it tries to fill a niche between raw JDBC and ORM where simplicity and low overhead are the key features.

Entity with manually assigned key

In this example we'll see how entities with user-defined primary keys are handled. Let's start with the database model:

CREATE TABLE USERS (
    user_name varchar(255),
    date_of_birth TIMESTAMP NOT NULL,
    enabled BIT(1) NOT NULL,
    PRIMARY KEY (user_name)
);

...and the User domain model:

public class User implements Persistable<String> {

    private transient boolean persisted;

    private String userName;
    private Date dateOfBirth;
    private boolean enabled;

    @Override
    public String getId() {
        return userName;
    }

    @Override
    public boolean isNew() {
        return !persisted;
    }

    public User withPersisted(boolean persisted) {
        this.persisted = persisted;
        return this;
    }

    //getters/setters/constructors/...
}

Notice that a special transient persisted flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it has already been saved or not (the isNew() method) - there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above), but in this case we need an extra transient field. If you hate this workaround and you only ever insert data and never update it, you'll get away with returning true all the time from isNew().

And finally our DAO, the UserRepository bean:

@Repository
public class UserRepository extends JdbcRepository<User, String> {

    public UserRepository() {
        super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name");
    }

    public static final RowMapper<User> ROW_MAPPER = //...

    public static final RowUnmapper<User> ROW_UNMAPPER = //...

    @Override
    protected User postUpdate(User entity) {
        return entity.withPersisted(true);
    }

    @Override
    protected User postCreate(User entity, Number generatedId) {
        return entity.withPersisted(true);
    }
}

The "USERS" and "user_name" parameters designate the table name and the primary key column name. I'll leave out the details of the mapper and unmapper (see the source code), but please notice the postUpdate() and postCreate() methods. They ensure that once an object has been persisted, the persisted flag is set, so that subsequent calls to save() will update the existing entity rather than trying to insert it again. Check out JdbcRepositoryManualKeyTest for working code based on this example.
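And a similarly hypothetical usage snippet for the manually assigned key, showing how the persisted flag drives INSERT vs. UPDATE (the User constructor signature is assumed here):

@Autowired
private UserRepository userRepository;

public void demo() {
    User user = new User("john", new Date(), true);   // isNew() == true, so...
    user = userRepository.save(user);                 // ...this issues an INSERT and postCreate() flips the flag

    user.setEnabled(false);
    userRepository.save(user);                        // isNew() == false now, so this issues an UPDATE
}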
Compound primary key

We also support compound primary keys (primary keys consisting of several columns). Take this table as an example:

CREATE TABLE BOARDING_PASS (
    flight_no VARCHAR(8) NOT NULL,
    seq_no INT NOT NULL,
    passenger VARCHAR(1000),
    seat CHAR(3),
    PRIMARY KEY (flight_no, seq_no)
);

I would like you to notice the type of the primary key in Persistable:

public class BoardingPass implements Persistable<Object[]>
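The class body is not shown above, so here is a minimal sketch of how such an entity could look, assuming the compound id is represented as an Object[] holding the primary key columns in order:

public class BoardingPass implements Persistable<Object[]> {

    private transient boolean persisted;

    private String flightNo;
    private int seqNo;
    private String passenger;
    private String seat;

    @Override
    public Object[] getId() {
        // the id is the (flight_no, seq_no) pair, in the order of the PRIMARY KEY definition
        return new Object[] { flightNo, seqNo };
    }

    @Override
    public boolean isNew() {
        return !persisted;
    }

    //getters/setters/constructors/...
}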
As described in my previous post, the IP (and DNS name) of your running EC2 AMI will change after a reboot of that instance. Of course this makes it very hard to keep the applications on that machine available to the outside world, like in this case our WordPress blog. That is where Elastic IP comes to the rescue: with this feature you can assign a static IP to your instance. Assign one to your application as follows:

- Click on the Elastic IPs link in the AWS console.
- Allocate a new address.
- Associate the address with a running instance: right-click to associate the IP with an instance, pick the instance to assign this IP to, and note the IP being assigned to your instance.

If you go to the IP address you were assigned, you see the home page of your server. And the nicest thing is that if you stop and start your instance you will receive a new public DNS name, but your instance is still reachable through the Elastic IP address.

One important note: as long as an Elastic IP address is associated with a running instance, there is no charge for it. However, an address that is not associated with a running instance costs $0.01/hour. This prevents users from 'reserving' addresses while they are not being used.
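If you prefer scripting these steps instead of clicking through the console, roughly the same flow can be expressed with the AWS SDK for Java. This is only a sketch; the credentials and instance id below are placeholders:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AllocateAddressRequest;
import com.amazonaws.services.ec2.model.AllocateAddressResult;
import com.amazonaws.services.ec2.model.AssociateAddressRequest;

public class ElasticIpDemo {

    public static void main(String[] args) {
        // placeholder credentials - use your own access key and secret
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // 1. allocate a new Elastic IP address
        AllocateAddressResult allocated = ec2.allocateAddress(new AllocateAddressRequest());
        String publicIp = allocated.getPublicIp();

        // 2. associate it with a running instance (placeholder instance id)
        ec2.associateAddress(new AssociateAddressRequest()
                .withInstanceId("i-12345678")
                .withPublicIp(publicIp));

        System.out.println("Elastic IP " + publicIp + " associated");
    }
}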
To achieve high-availability load balancing with HAProxy on FreeBSD, you can use CARP to set up a backup node and rely on that configuration to avoid a single point of failure (SPOF).
As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more often consider using some (or several) NoSQL stores instead of (or side-by-side with) a traditional RDBMS. I have already covered my experience with MongoDB in this, this and this post. In this post I would like to switch gears a bit towards Redis, an advanced key-value store. Aside from very rich key-value semantics, Redis also supports pub-sub messaging and transactions. In this post I am going to just touch the surface and demonstrate how simple it is to integrate Redis into your Spring application.

As always, we will start with the Maven POM file for our project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example.spring</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.0.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>1.0.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>

        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.0.0</version>
            <type>jar</type>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>

Spring Data Redis is another project under the Spring Data umbrella which provides seamless integration of Redis into your application. There are several Redis clients for Java, and I have chosen Jedis as it is stable and recommended by the Redis team at the moment of writing this post.

We will start with a simple configuration and introduce the necessary components first. Then, as we move forward, the configuration will be extended a bit to demonstrate pub-sub capabilities. Thanks to Java config support, we will create a configuration class and have all our dependencies strongly typed - no XML anymore:

package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class AppConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate<String, Object> redisTemplate() {
        final RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
        template.setConnectionFactory(jedisConnectionFactory());
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericToStringSerializer<Object>(Object.class));
        template.setValueSerializer(new GenericToStringSerializer<Object>(Object.class));
        return template;
    }
}

That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub-sub implementation.

Storing and retrieving a key/value pair is very simple:

@Autowired
private RedisTemplate<String, Object> template;

public Object getValue(final String key) {
    return template.opsForValue().get(key);
}

public void setValue(final String key, final String value) {
    template.opsForValue().set(key, value);
}

Optionally, the key can be set to expire (yet another useful feature of Redis); for example, let our keys expire after 1 second:

public void setValue(final String key, final String value) {
    template.opsForValue().set(key, value);
    template.expire(key, 1, TimeUnit.SECONDS);
}
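A tiny, hypothetical usage example of the two methods above, assuming they live in some Spring-managed bean (here called RedisService, a name not used in the original) and that a Redis server is running locally:

@Autowired
private RedisService redisService;

public void demo() throws InterruptedException {
    redisService.setValue("greeting", "Hello, Redis!");    // stored with a 1 second TTL
    System.out.println(redisService.getValue("greeting")); // prints: Hello, Redis!

    Thread.sleep(2000);                                    // wait for the key to expire
    System.out.println(redisService.getValue("greeting")); // prints: null
}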
Arbitrary objects can be stored in Redis as hashes (maps). For example, let's save an instance of the class User:

public class User {

    private final Long id;
    private String name;
    private String email;

    // Setters and getters are omitted for simplicity
}

into Redis using the key pattern "user:<id>":

public void setUser(final User user) {
    final String key = String.format("user:%s", user.getId());

    final Map<String, Object> properties = new HashMap<String, Object>();
    properties.put("id", user.getId());
    properties.put("name", user.getName());
    properties.put("email", user.getEmail());

    template.opsForHash().putAll(key, properties);
}

Respectively, the object can easily be inspected and retrieved using its id:

public User getUser(final Long id) {
    final String key = String.format("user:%s", id);

    final String name = (String) template.opsForHash().get(key, "name");
    final String email = (String) template.opsForHash().get(key, "email");

    return new User(id, name, email);
}

There is much, much more that can be done with Redis, and I highly encourage you to take a look at it. It surely is not a silver bullet, but it can solve many challenging problems very easily. Finally, let me show how to use pub-sub messaging with Redis. Let's add a bit more configuration (as part of the AppConfig class):

@Bean
MessageListenerAdapter messageListener() {
    return new MessageListenerAdapter(new RedisMessageListener());
}

@Bean
RedisMessageListenerContainer redisContainer() {
    final RedisMessageListenerContainer container = new RedisMessageListenerContainer();

    container.setConnectionFactory(jedisConnectionFactory());
    container.addMessageListener(messageListener(), new ChannelTopic("my-queue"));

    return container;
}

The style of message listener definition should look very familiar to Spring users: it is generally the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message, byte[] pattern) {
        System.out.println("Received by RedisMessageListener: " + message.toString());
    }
}

Now that we have our message listener in place, let's see how we can push some messages onto the channel using Redis. As always, it's pretty simple:

@Autowired
private RedisTemplate<String, Object> template;

public void publish(final String message) {
    template.execute(new RedisCallback<Long>() {
        @SuppressWarnings("unchecked")
        @Override
        public Long doInRedis(RedisConnection connection) throws DataAccessException {
            return connection.publish(
                ((RedisSerializer<String>) template.getKeySerializer()).serialize("my-queue"),
                ((RedisSerializer<Object>) template.getValueSerializer()).serialize(message));
        }
    });
}

Note that the channel name passed to publish() matches the "my-queue" topic the listener container subscribes to. That's basically it for a very quick introduction, but it is definitely enough to fall in love with Redis.
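To round things off, here is a hypothetical bootstrap class (not from the original post) that starts the Java config shown above and publishes a message to the my-queue channel; RedisTemplate.convertAndSend() is used as a shorthand for the RedisCallback-based publish() method:

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.data.redis.core.RedisTemplate;

import com.example.redis.config.AppConfig;

public class RedisPubSubDemo {

    public static void main(String[] args) throws InterruptedException {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(AppConfig.class);
        try {
            @SuppressWarnings("unchecked")
            RedisTemplate<String, Object> template =
                    context.getBean("redisTemplate", RedisTemplate.class);

            // the listener container subscribed to "my-queue" starts with the context,
            // so RedisMessageListener should print the message below
            template.convertAndSend("my-queue", "Hello, Redis!");

            Thread.sleep(1000); // give the asynchronous listener a moment to receive it
        } finally {
            context.close();
        }
    }
}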