DZone
Alex Staveley

Software Architect at DublinTech

Dublin, IE

Joined Dec 2011

https://dublintech.blogspot.com

About

Alex Staveley is a software professional passionate about software engineering and technical architecture. He blogs about architectural approaches, Java topics, web solutions and various technical bits and pieces. @dublintech

Stats

Reputation: 507
Pageviews: 740.3K
Articles: 10
Comments: 8

Articles

Clean Unit Testing
In this article, I outline several tips that aim to improve the readability, maintainability, and quality of your unit tests.
Updated March 30, 2020
· 14,675 Views · 15 Likes
Java Lambda Streams and Groovy Closures Comparisons
Want to learn about the differences between Java lambda streams and Groovy closures? Check out this post to see how they compare.
August 1, 2018
· 51,111 Views · 6 Likes
Testing Your Code With Spock
Let's take a look at some of Spock's key features, how they stack up against vanilla JUnit, and see how it helps with your testing.
March 8, 2018
· 36,670 Views · 15 Likes
Groovy Closures: this, owner, delegate. Let's Make a DSL
Groovy closures are super cool. To fully understand them, I think it's really important to understand the meaning of this, owner and delegate. In general:

- this: refers to the instance of the class that the closure was defined in.
- owner: is the same as this, unless the closure was defined inside another closure, in which case the owner refers to the outer closure.
- delegate: is the same as owner by default. But it is the only one that can be programmatically changed, and it is the one that makes Groovy closures really powerful.

Confused? Let's look at some code.

```groovy
class MyClass {
    def outerClosure = {
        println this.class.name      // outputs MyClass
        println owner.class.name     // outputs MyClass
        println delegate.class.name  // outputs MyClass
        def nestedClosure = {
            println this.class.name      // outputs MyClass
            println owner.class.name     // outputs MyClass$_closure1
            println delegate.class.name  // outputs MyClass$_closure1
        }
        nestedClosure()
    }
}

def closure = new MyClass().outerClosure
closure()
```

With respect to the above code:

- this always refers to the instance of the enclosing class.
- owner is always the same as this, except for nested closures.
- delegate is the same as owner by default. It can be changed, and we will see that in a sec.

So what is the point of this, owner and delegate? Well, remember that closures are not just anonymous functions. If they were, we could just call them lambdas and we wouldn't have to come up with another word, would we? Where closures go beyond lambdas is that they bind, or "close over", variables that are not explicitly defined in the closure's scope. Again, let's take a look at some code.
```groovy
class MyClass {
    String myString = "myString1"
    def outerClosure = {
        println myString   // outputs myString1
        def nestedClosure = {
            println myString   // outputs myString1
        }
        nestedClosure()
    }
}

MyClass myClass = new MyClass()
def closure = myClass.outerClosure
closure()
println myClass.myString
```

Ok, so both the outerClosure and the nestedClosure have access to variables on the instance of the class they were defined in. That's obvious. But how exactly do they resolve the myString reference? Well, it's like this: if the variable was not defined explicitly in the closure, the this scope is checked, then the owner scope and then the delegate scope. In this example, myString is not defined in either of the closures, so Groovy checks their this references, sees that myString is defined there and uses that.

Ok, let's take a look at an example where Groovy can't find a variable in the closure and can't find it on the closure's this scope, but can find it in the closure's owner scope.

```groovy
class MyClass {
    def outerClosure = {
        def myString = "outerClosure"
        def nestedClosure = {
            println myString   // outputs outerClosure
        }
        nestedClosure()
    }
}

def closure = new MyClass().outerClosure
closure()
```

In this case, Groovy can't find myString in the nestedClosure or in the this scope. It then checks the owner scope, which for the nestedClosure is the outerClosure. It finds myString there and uses that. Now, let's take a look at an example where Groovy can't find a variable in the closure, or on the this or owner scopes, but can find it in the closure's delegate scope. As discussed earlier, the delegate scope is the same as the owner scope unless it is explicitly changed. So, to make this a bit more interesting, let's change the delegate.
```groovy
class MyOtherClass {
    String myString = "I am over in here in myOtherClass"
}

class MyClass {
    def closure = {
        println myString
    }
}

def closure = new MyClass().closure
closure.delegate = new MyOtherClass()
closure()   // outputs: I am over in here in myOtherClass
```

The ability to have so much control over the lexical scope of closures gives Groovy enormous power. Even when the delegate is set, it can be changed to something else; this means we can make the behaviour of the closure super dynamic.

```groovy
class MyOtherClass {
    String myString = "I am over in here in myOtherClass"
}

class MyOtherClass2 {
    String myString = "I am over in here in myOtherClass2"
}

class MyClass {
    def closure = {
        println myString
    }
}

def closure = new MyClass().closure
closure.delegate = new MyOtherClass()
closure()   // outputs: I am over in here in myOtherClass

closure = new MyClass().closure
closure.delegate = new MyOtherClass2()
closure()   // outputs: I am over in here in myOtherClass2
```

Ok, so it should be a bit clearer now what this, owner and delegate actually correspond to. As stated, the closure itself is checked first, followed by the closure's this scope, then the closure's owner, then its delegate. However, Groovy is so flexible that this strategy can be changed. Every closure has a property called resolveStrategy, which can be set to:

- Closure.OWNER_FIRST (the default)
- Closure.DELEGATE_FIRST
- Closure.OWNER_ONLY
- Closure.DELEGATE_ONLY

So where is a good example of practical usage of dynamically setting the delegate property? Well, you see it in GORM for Grails. Suppose we have the following domain class:

```groovy
class Author {
    String name
    static constraints = {
        name size: 10..15
    }
}
```

In the Author class we can see a constraint defined using what looks like a DSL, whereas in the Java / Hibernate world we would not be able to write such an expressive DSL and would instead use an annotation (which is better than XML, but still not as neat as a DSL).
So, how come we can use a DSL in Groovy then? Well, it is because of the capabilities that delegate setting on closures adds to Groovy's metaprogramming toolbox. In the Author GORM object, constraints is a closure that invokes a name method with one parameter, named size, which has the value of the range between 10 and 15. It could also be written less DSL'y as:

```groovy
class Author {
    String name
    static constraints = {
        name(size: 10..15)
    }
}
```

Either way, behind the scenes, Grails looks for a constraints closure and assigns its delegate to a special object that synthesizes the constraints logic. In pseudo code, it would be something like this:

```groovy
// Set the constraints delegate.
// The delegate is assigned before the closure is executed.
constraints.delegate = new ConstraintsBuilder()

class ConstraintsBuilder {
    // In every Groovy object, methodMissing() is invoked when a method
    // that does not exist on the object is invoked.
    // In this case, there is no name() method, so methodMissing will be invoked.
    def methodMissing(String methodName, args) {
        // We can get the name variable here from the method name.
        // We can get that size is 10..15 from the args.
        // ...
        // Go and do stuff with Hibernate to enforce constraints.
    }
}
```

So there you have it. Closures are very powerful: they can delegate out to objects that can be set dynamically at runtime. That plays an important part in Groovy's metaprogramming capabilities, which mean that Groovy can have some very expressive DSLs.
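For readers more at home in JavaScript, a loosely analogous trick is rebinding this at call time with Function.prototype.call. This sketch is a comparison added for illustration, not part of the original article, and the object names are made up:

```javascript
// Groovy lets you reassign a closure's delegate; in JavaScript the
// closest built-in analogue is choosing the receiver (`this`) per call.
function printMyString() {
    console.log(this.myString);
}

var myOtherClass  = { myString: "I am over in here in myOtherClass" };
var myOtherClass2 = { myString: "I am over in here in myOtherClass2" };

printMyString.call(myOtherClass);  // outputs: I am over in here in myOtherClass
printMyString.call(myOtherClass2); // outputs: I am over in here in myOtherClass2
```

Unlike Groovy's delegate, call rebinds this for a single invocation only; bind would fix it permanently.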
May 7, 2014
· 65,917 Views · 11 Likes
How Could Scala do a Merge Sort?
Merge sort is a classical "divide and conquer" sorting algorithm. You should never have to write one, because you'd be silly to do that when a standard library class will already do it for you. But it is useful for demonstrating a few characteristics of programming techniques in Scala.

Firstly, a quick recap on the merge sort. It is a divide and conquer algorithm. A list of elements is split up into smaller and smaller lists. When a list has one element it is considered sorted. It is then merged with the list beside it. When there are no more lists to merge, the original data set is considered sorted.

Now let's take a look at how to do that using an imperative approach in Java.

```java
public void sort(int[] values) {
    int[] numbers = values;
    int[] auxillaryNumbers = new int[values.length];
    mergesort(numbers, auxillaryNumbers, 0, values.length - 1);
}

private void mergesort(int[] numbers, int[] auxillaryNumbers, int low, int high) {
    // Check if low is smaller than high; if not, the array is sorted.
    if (low < high) {
        // Get the index of the element which is in the middle.
        int middle = low + (high - low) / 2;
        // Sort the left side of the array.
        mergesort(numbers, auxillaryNumbers, low, middle);
        // Sort the right side of the array.
        mergesort(numbers, auxillaryNumbers, middle + 1, high);
        // Combine them both.
        // Alex: the first time we hit this is when there is minimum difference between high and low.
        merge(numbers, auxillaryNumbers, low, middle, high);
    }
}

/**
 * Merges a[low .. middle] with a[middle .. high].
 * This method assumes a[low .. middle] and a[middle .. high] are sorted. It returns
 * a[low .. high] as a sorted array.
 */
private void merge(int[] a, int[] aux, int low, int middle, int high) {
    // Copy both parts into the aux array.
    for (int k = low; k <= high; k++) {
        aux[k] = a[k];
    }
    int i = low, j = middle + 1;
    for (int k = low; k <= high; k++) {
        if (i > middle) a[k] = aux[j++];
        else if (j > high) a[k] = aux[i++];
        else if (aux[j] < aux[i]) a[k] = aux[j++];
        else a[k] = aux[i++];
    }
}

public static void main(String args[]) {
    // ...
    ms.sort(new int[] {5, 3, 1, 17, 2, 8, 19, 11});
    // ...
}
```

Discussion:

- An auxiliary array is used to achieve the sort. Elements to be sorted are copied into it and then, once sorted, copied back. It is important this array is only created once, otherwise there can be a performance hit from excessive array creation.
- The merge method does not have to create an auxiliary array; however, since it changes an object, the merge method has side effects.
- Merge sort's big O performance is N log N.

Now let's have a go at a Scala solution.

```scala
def mergeSort(xs: List[Int]): List[Int] = {
  val n = xs.length / 2
  if (n == 0) xs
  else {
    def merge(xs: List[Int], ys: List[Int]): List[Int] = (xs, ys) match {
      case (Nil, ys) => ys
      case (xs, Nil) => xs
      case (x :: xs1, y :: ys1) =>
        if (x < y) x :: merge(xs1, ys)
        else y :: merge(xs, ys1)
    }
    val (left, right) = xs splitAt n
    merge(mergeSort(left), mergeSort(right))
  }
}
```

Key discussion points:

- It is the same divide and conquer idea. The splitAt function is used to divide up the data each time into a tuple. Every recursion will create a new tuple.
- The local function merge is then used to perform the merging. Local functions are a useful feature as they help promote encapsulation and prevent code bloat.
- Neither the mergeSort() nor merge() functions have any side effects. They don't change any object; they create (and throw away) objects.
- Because the data is not being passed across iterations of the merging, there is no need to pass beginning and ending pointers, which can get very buggy.
This merge recursion uses pattern matching to great effect. Not only is there matching on the two lists, but when a match happens the lists are destructured into variables:

- x: the top element in the left list
- xs1: the rest of the left list
- y: the top element in the right list
- ys1: the rest of the right list

This makes it very easy to compare the top elements and to pass around the rest of the data to compare. Would the recursive approach be possible in Java? Of course. But it would be much more complex. You don't have any pattern matching, and you don't get a nudge to declare objects as immutable, as Scala gives you by making you choose between val and var. In Java, it would always be easier to read the code for this problem if it was done in an imperative style where objects are being changed across iterations of a loop. But in Scala a functional, recursive approach can be quite neat. So here we see an example of how Scala makes it easier to achieve good, clean, concise recursion and makes a functional approach much more possible.
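For comparison, the same side-effect-free recursion can be sketched in JavaScript; this translation is an illustration added here, not from the original article:

```javascript
// A recursive, side-effect-free merge sort mirroring the Scala version:
// each call returns a new array and never mutates its input.
function mergeSort(xs) {
    var n = Math.floor(xs.length / 2);
    if (n === 0) return xs; // arrays of length 0 or 1 are already sorted

    // Merge two sorted arrays by repeatedly taking the smaller head element.
    function merge(a, b) {
        if (a.length === 0) return b;
        if (b.length === 0) return a;
        return a[0] < b[0]
            ? [a[0]].concat(merge(a.slice(1), b))
            : [b[0]].concat(merge(a, b.slice(1)));
    }

    return merge(mergeSort(xs.slice(0, n)), mergeSort(xs.slice(n)));
}

console.log(mergeSort([5, 3, 1, 17, 2, 8, 19, 11])); // [1, 2, 3, 5, 8, 11, 17, 19]
```

Without pattern matching, the head/tail destructuring becomes slice(1) calls, which is exactly the extra ceremony the article describes.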
May 23, 2013
· 11,348 Views
A-Z of JavaScript
Here is an A - Z list of some JavaScript idioms and patterns. The idea is to convey, in simple terms, some features of the actual JavaScript language (rather than how it can interact with the DOM). Enjoy...

Array Literals

An array literal can be defined using a comma-separated list in square brackets.

```javascript
var months = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
              'jul', 'aug', 'sep', 'oct', 'nov', 'dec'];
console.log(months[0]);     // outputs jan
console.log(months.length); // outputs 12
```

Arrays in JavaScript have a wide selection of methods, including push() and pop(). Suppose the world got taken over by a dictator who wanted to get rid of the last month of the year? The dictator would just do:

```javascript
months.pop();
```

And of course, the dictator will eventually want to add a month after himself, when everyone will have to worship him:

```javascript
months.push("me");
```

Callbacks

Since functions are objects, they can be passed as arguments to other functions.

```javascript
function peakOil(callback) {
    // ... code
    callback(); // the parentheses mean the function is executed!
}

function changeCivilisationCallback() {
    // ...
}

// Now pass the changeCivilisationCallback to peakOil.
// Note: no changeCivilisationCallback parentheses, because it is not
// executed at this point. It will be executed later, inside peakOil.
peakOil(changeCivilisationCallback);
```

In the example above, the changeCivilisationCallback callback function is invoked by peakOil. Logic could be added to check if the energy returns from solar panels and wind farms were sufficient, in which case another callback, other than changeCivilisationCallback, could be invoked.

Configuration Object

Instead of passing around a bunch of related properties...
```javascript
function addCar(colour, wheelsize, regplate) { /* ... */ }
```

Use a configuration object:

```javascript
function addCar(carConf) { /* ... */ }

var myCarConf = {
    colour: "blue",
    wheelsize: "32",
    regplate: "00D98788"
};
addCar(myCarConf);
```

The use of a configuration object makes it easier to write clean APIs that don't need to take a huge long list of parameters. It also means you are less likely to get silly errors if parameters are in the wrong order.

Closures

There are three ways to create objects in JavaScript: using literals, using a constructor function, and using a closure. What closures offer that the other two approaches do not is encapsulation. Closures make it possible to hide away functions and variables.

```javascript
var counter = function(count) {
    console.log(">> setting count to " + count);
    return {
        getCount: function() {
            return ++count;
        }
    };
};

mycounter = counter(0);
console.log(mycounter.getCount()); // outputs 1
console.log(mycounter.getCount()); // outputs 2
console.log(mycounter.getCount()); // outputs 3
console.log(mycounter.getCount()); // outputs 4

// Same again, with an offset this time.
mycounterWithOffset = counter(10);
console.log(mycounterWithOffset.getCount()); // outputs 11
console.log(mycounterWithOffset.getCount()); // outputs 12
console.log(mycounterWithOffset.getCount()); // outputs 13
console.log(mycounterWithOffset.getCount()); // outputs 14
```

Note: the closure is the object literal returned from the anonymous function. It "closes" over the count variable. No-one can access it except for the closure; it is encapsulated. The closure also has a sense of state: note how it maintains the value of the counter.

Constructor Functions (Built-in)

There are no classes in JavaScript, but there are constructor functions, which use the new keyword with syntax similar to class-based object creation in Java and other languages. JavaScript has some built-in constructor functions. These include Object(), Date(), String() etc.
```javascript
var person = new Object(); // person variable is an Object
person.name = "alex";      // properties can then be dynamically added
```

Constructor Functions (Custom)

When a function is invoked with the keyword new, it is referred to as a constructor function. The new means that the new object will have a hidden link to the value of the function's prototype member, and the this keyword will be bound to the new object.

```javascript
function MyConstructorFunction() {
    this.goodblog = "dublintech.blogspot.com";
}

var newObject = new MyConstructorFunction();
console.log(typeof newObject);   // "object"
console.log(newObject.goodblog); // "dublintech.blogspot.com"

var noNewObject = MyConstructorFunction();
console.log(typeof noNewObject); // "undefined"
console.log(window.goodblog);    // "dublintech.blogspot.com" - this was the global object!
```

The convention is that constructor functions should begin with a capital letter. Note: if the new keyword is not used, then the this variable inside the function will refer to the global object. Can you smell a potential mess? Hence the capital letter convention for constructor functions. The capital letter means: "I am a constructor function, please use the new keyword".

Currying

Currying is the process of reducing the number of arguments passed to a function by setting some argument(s) to predefined values. Consider this function:

```javascript
function outputNumbers(begin, end) {
    var i;
    for (i = begin; i <= end; i++) {
        console.log(i);
    }
}

outputNumbers(0, 5); // outputs 0, 1, 2, 3, 4, 5
outputNumbers(1, 5); // outputs 1, 2, 3, 4, 5
```

Suppose we want a similar function with a fixed "begin" value. Let's say the "begin" value was always 1. We could do:

```javascript
function outputNumbersFixedStart(start) {
    return function(end) {
        return outputNumbers(start, end);
    };
}
```

And then define a variable to be this new function:

```javascript
var outputFromOne = outputNumbersFixedStart(1);
outputFromOne(3); // outputs 1, 2, 3
outputFromOne(5); // outputs 1, 2, 3, 4, 5
```

Delete Operator

The delete operator can be used to remove properties from objects and arrays.
```javascript
var person = {name: 'Alex', age: 56};
// damn, I don't want them to know my age, remove it
delete person.age;
console.log("name" in person); // outputs true because it is still there
console.log("age" in person);  // outputs false

var colours = ['red', 'green', 'blue'];
// is red really in the array?
console.log(colours.indexOf('red') > -1); // outputs true
// remove red, it's going out of fashion!
delete colours[colours.indexOf('red')];
console.log(colours.indexOf('red') > -1); // outputs false
console.log(colours.length); // length is still three. Remember, it's JavaScript!
```

You cannot delete global variables or prototype attributes.

```javascript
console.log(delete Object.prototype); // can't be deleted, outputs false

function MyFunction() {
    // ...
}
console.log(delete MyFunction.prototype); // can't be deleted, outputs false

var myglobalVar = 1;
console.log(delete this.myglobalVar); // can't be deleted, outputs false
```

Dynamic Arguments

Arguments for a function do not have to be specified in the function definition.

```javascript
function myFunction() {
    // Note: myFunction has no arguments in its signature.
    for (var i = 0; i < arguments.length; i++) {
        console.log(arguments[i]);
    }
}
myFunction("tony", "Magoo"); // any arguments can be specified
```

The arguments parameter is an array-like object available inside functions and gives access to all arguments that were specified in the invocation.

for-in iterations

for-in loops (also called enumeration) should be used to iterate over non-array objects.

```javascript
var counties = {
    dublin: "good",
    kildare: "not bad",
    cork: "avoid"
};

for (var i in counties) {
    if (counties.hasOwnProperty(i)) { // filter out prototype properties
        console.log(i, ":", counties[i]);
    }
}
```

Function declaration

In a function declaration, the function stands on its own and does not need to be assigned to anything.

```javascript
function multiply(a, b) {
    return a * b;
} // Note: no semicolon is needed
```

Function expressions

When a function is defined as part of something else's definition, it is considered a function expression.
```javascript
multiply = function multiplyFunction(a, b) {
    return a * b;
}; // Note: the semicolon must be placed after the function definition
console.log(multiply(5, 10)); // outputs 50
```

In the above example, the function is named. It can also be anonymous, in which case the name property will be a blank string.

```javascript
multiply = function (a, b) {
    return a * b;
}; // Note: the semicolon must be placed after the function definition
console.log(multiply(5, 10)); // outputs 50
```

Functional Inheritance

Functional inheritance is a mechanism of inheritance that provides encapsulation by using closures. Before trying to understand the syntax, take an example first. Suppose we want to represent planets in the solar system. We decide to have a planet base object and then several planet child objects which inherit from the base object. Here is the base planet object:

```javascript
var planet = function(spec) {
    var that = {};
    that.getName = function() {
        return spec.name;
    };
    that.getNumberOfMoons = function() {
        return spec.numberOfMoons;
    };
    return that;
};
```

Now for some planets. Let's start with Earth and Jupiter, and to amuse ourselves let's add a function to Earth for people to leave and a function to Jupiter for people arriving. Sarah Palin has taken over and things have got pretty bad!!!

```javascript
var earth = function(spec) {
    var that = planet(spec); // No need for the new keyword!
    that.peopleLeave = function() {
        // ... people leave
    };
    return that;
};

var jupiter = function(spec) {
    var that = planet(spec);
    that.peopleArrive = function() {
        // ... people arrive
    };
    return that;
};
```

Now put the Earth and Jupiter in motion:

```javascript
var myEarth = earth({name: "earth", numberOfMoons: 1});
var myJupiter = jupiter({name: "jupiter", numberOfMoons: 66});
```

The three key points here:

- There is code reuse.
- There is encapsulation. The name and numberOfMoons properties are encapsulated.
- The child objects can add in their own specific functionality.

Now an explanation of the syntax:

- The base object planet accepts some data in the spec object.
- The base object planet creates a closure called that, which is returned. The that object has access to everything in the spec object, but nothing else does. This provides a layer of encapsulation.
- The child objects, earth and jupiter, set up their own data and pass it to the base planet object. The planet object returns a closure which contains the base functionality. The child objects receive this closure and add further methods and variables to it.

Hoisting

No matter where vars are declared in a function, JavaScript will "hoist" them, meaning that they behave as if they were declared at the top of the function.

```javascript
mylocation = "dublin"; // global variable

function outputPosition() {
    console.log(mylocation); // outputs "undefined", not "dublin"
    var mylocation = "fingal";
    console.log(mylocation); // outputs "fingal"
}
outputPosition();
```

In the function above, the var declaration in the function means that the first log will "see" the mylocation in the function scope and not the one declared in the global scope. After declaration, the local mylocation var will have the value undefined, hence why this is outputted first. Function declarations are also hoisted; the difference is that when a function declaration is hoisted, its definition is hoisted too, not just its declaration.

Immediate Function Expressions

Immediate function expressions are executed as soon as they are defined.

```javascript
(function() {
    console.log("I ain't waiting around");
}());
```

There are two aspects of the syntax to note here. Firstly, there is a () immediately after the function definition; this makes it execute. Secondly, the function can only execute if it is a function expression as opposed to a function declaration. The outer () make the function an expression. Another way to define an immediate function expression is:

```javascript
var anotherWay = function() {
    console.log("I ain't waiting around");
}();
```

JSON

JavaScript Object Notation (JSON) is a notation used to represent objects.
It is very similar to the format used for JavaScript object literals, except the property names must be wrapped in quotes. The JSON format is not exclusive to JavaScript; it can be used by any language (Python, Ruby etc.). JSON makes it very easy to see what's an array and what's an object. In XML this would be much harder; an external document, such as an XSD, would have to be consulted. In this example, Mitt Romney has an array describing who might vote for him and an object which is his son.

```javascript
{
    "name": "Mitt Romney",
    "party": "republicans",
    "scary": "of course",
    "romneysMostLikelyVoters": ["oilguzzlers", "conservatives"],
    "son": {"name": "George Romney"}
}
```

Loose typing

JavaScript is loosely typed. This means that variables do not need to be typed. It also means there are no complex class hierarchies and there is no casting.

```javascript
var number1 = 50;
var number2 = "51";

function output(varToOutput) {
    // The function does not care about the type of the parameter passed.
    console.log(varToOutput);
}

output(number1); // outputs 50
output(number2); // outputs 51
```

Memoization

Memoization is a mechanism whereby functions can cache data from previous executions.

```javascript
function myFunc(param) {
    if (!myFunc.cache) {
        myFunc.cache = {}; // If the cache doesn't exist, create it.
    }
    if (!myFunc.cache[param]) {
        // ... Imagine the code to work out the result below
        // is computationally intensive.
        var result = { /* ... */ };
        myFunc.cache[param] = result; // now the result is cached
    }
    return myFunc.cache[param];
}
```

Method

When a function is stored as a property of an object, it is referred to as a method.

```javascript
var myObject = {
    myProperty: function () {
        // The this keyword in here will refer to the myObject instance.
        // This means the "method" can read and change variables in the object.
    }
};
```

Modules

The goal of modules is to enable JavaScript code bases to be more modular.
Functions and variables are collated into a module, and then the module can decide what functions and what variables the outside world can see, in the same way as encapsulation works in the object-orientated paradigms. In JavaScript we create modules by combining characteristics of closures and immediate function expressions.

```javascript
var bankAccountModule = (function moduleScope() {
    var balance = 0; // private

    function doSomethingPrivate() { // private method
        // ...
    }

    return { // exposed to public
        addMoney: function(money) {
            // ...
        },
        withdrawMoney: function(money) {
            // ...
        },
        getBalance: function() {
            return balance;
        }
    };
}());
```

In the example above, we have a bank account module:

- The function expression moduleScope has its own scope. The private variable balance and the private function doSomethingPrivate exist only within this scope and are only visible to functions within this scope.
- The moduleScope function returns an object literal. This is a closure which has access to the private variables and functions of moduleScope. The returned object's properties are "public" and accessible to the outside world.
- The returned object is automatically assigned to bankAccountModule.
- The immediate function expression syntax, }()), is used. This means that the module is initialised immediately.

Because the returned object (the closure) is assigned to bankAccountModule, we can access the module as:

```javascript
bankAccountModule.addMoney(20);
bankAccountModule.withdrawMoney(15);
```

By convention, the filename of a module should match its namespace. So in this example, the filename should be bankAccountModule.js.

Namespace Pattern

JavaScript doesn't have namespaces built into the language, meaning it is easy for variables to clash. Unless variables are defined in a function, they are considered global. However, it is possible to use "." in variable names, meaning you can pretend you have namespaces.
```javascript
var DUBLINTECH = DUBLINTECH || {}; // create the "namespace" object if it doesn't exist
DUBLINTECH.myName = "Alex";
DUBLINTECH.myAddress = "Dublin";
```

Object Literal Notation

In JavaScript you can define an object as a collection of name-value pairs. The values can be property values or functions.

```javascript
var ireland = {
    capital: "Dublin",
    getCapital: function () {
        return this.capital;
    }
};
```

Prototype properties (inheritance)

Every object has a prototype object. It is useful when you want to add a property to all instances of a particular object. Suppose you have a constructor function which represents Irish people who bought in the boom.

```javascript
function IrishPersonBoughtInTheBoom() {
}

var mary = new IrishPersonBoughtInTheBoom();
var tony = new IrishPersonBoughtInTheBoom();
var peter = new IrishPersonBoughtInTheBoom();
```

Now, the Irish economy goes belly up, the property bubble explodes and you want to add a debt property to all instances of this function. To do this you would do:

```javascript
IrishPersonBoughtInTheBoom.prototype.debt = "ouch";
```

Then...

```javascript
console.log(mary.debt);  // outputs "ouch"
console.log(tony.debt);  // outputs "ouch"
console.log(peter.debt); // outputs "ouch"
```

When this approach is used, all instances of IrishPersonBoughtInTheBoom share the same copy of the debt property. This means that they all have the same value, as illustrated in this example.

Returning functions

A function always returns a value. If return is not specified for a function, the undefined value will be returned. JavaScript functions can also return some data or another function.

```javascript
var counter = function() {
    var count = 0;
    return function () {
        return count = count + 1;
    };
};

var nextValue = counter();
console.log(nextValue()); // outputs 1
console.log(nextValue()); // outputs 2
```

Note: the inner function which is returned "closes" over the count variable, making it a closure, since it encapsulates its own count variable. Each call to counter() produces a function with its own copy of count, independent of any other counter.
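The function-returning pattern above also gives you simple function factories; a minimal sketch (makeAdder is a made-up name for illustration):

```javascript
// Each call to makeAdder returns a new function that closes over its own n.
function makeAdder(n) {
    return function (x) {
        return x + n;
    };
}

var addFive = makeAdder(5);
var addTen = makeAdder(10);
console.log(addFive(1)); // outputs 6
console.log(addTen(1));  // outputs 11
```

The point is the same as the counter: each returned function keeps its own copy of n, so addFive and addTen do not interfere with each other.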
this keyword

The this keyword in JavaScript has different meanings depending on the context in which it is used. In summary:

  • In a method context, this refers to the object that contains the method.
  • In a function context, this refers to the global object, unless the function is a property of another object, in which case this refers to that object.
  • If this is used in a constructor, it refers to the object being constructed by the constructor function.
  • When the apply or call methods are used, the value of this is whatever was explicitly specified in the apply or call invocation.

typeof

typeof is a unary operator with one operand. It is used to determine the types of things (a bit like getClass() in Java). The values returned by typeof are "number", "string", "boolean", "undefined", "function" and "object".

```javascript
console.log(typeof "tony");                   // outputs string
console.log(typeof 6);                        // outputs number
console.log(typeof false);                    // outputs boolean
console.log(typeof this.doesNotExist);        // outputs undefined if the global scope has no such var
console.log(typeof function () {});           // outputs function
console.log(typeof {name: "I am an object"}); // outputs object
console.log(typeof ["I am an array"]);        // typeof outputs object for arrays
console.log(typeof null);                     // typeof outputs object for null
```

Some implementations return "object" for typeof on regular expressions; others return "function". But the biggest problem with typeof is that it returns "object" for null. To test for null, use strict equality...

```javascript
if (myobject === null) {
    // ...
}
```

Self-redefining functions

This is a good performance technique. Suppose you have a function, and the first time it is called you want it to perform some set-up code that you never want to perform again. You can execute the set-up code and then make the function redefine itself, so that the set-up code is never re-executed.

```javascript
var myFunction = function () {
    // set-up code, only done once
    alert("set up, only called once");
    // set-up code now complete.

    // redefine the function so that the set-up code is not re-executed
    myFunction = function () {
        alert("no set up code");
    };
};

myFunction(); // outputs - set up, only called once
myFunction(); // outputs - no set up code this time
myFunction(); // outputs - no set up code this time
```

Note, any properties added to the function during the set-up part will be lost when the function redefines itself. In addition, if this function is used under a different name (i.e. it is assigned to another variable), the redefinition will not take effect through that name and the set-up code will re-execute.

Scope

In JavaScript there is a global scope and a function scope available for variables. The var keyword does not need to be used to define a variable in the global scope, but it must be used to define a variable in the local function scope. When a variable scoped to a local function shares its name with a global variable, the local variable takes precedence, unless var was not used to declare the local variable, in which case any local references point to the global variable. There is no block scope in JavaScript. By block we mean the code between {}, aka curly braces.

```javascript
var myFunction = function () {
    var noBlockScope = function () {
        if (true) {
            // you'd think that d would only be visible to this if statement
            var d = 24;
        }
        if (true) {
            // this if statement can see the variable defined in the other if statement
            console.log(d);
        }
    };
    noBlockScope();
};
```

Single var pattern

You can define all the variables used by a function in one place. It ensures tidy code and is considered best practice.

```javascript
function scrum() {
    var numberOfProps = 2,
        numberOfHookers = 1,
        numberOfSecondRows = 2,
        numberOfBackRow = 3;
    // function body...
}
```

If a variable is declared but not initialised with a value, it will have the value undefined.

Strict Equality

In JavaScript it is possible to compare two values using ==. However, in some cases this will perform type conversion, which can yield unexpected equality matches. To ensure a strict comparison (i.e. no type conversions), use the === syntax.

```javascript
console.log(1 == true);   // outputs true
console.log(1 === true);  // outputs false
console.log(45 == "45");  // outputs true
console.log(45 === "45"); // outputs false
```

Truthy and Falsey

When JavaScript expects a boolean, you may supply a value of any type. Values that convert to true are said to be truthy and values that convert to false are said to be falsey. Examples of truthy values are objects, arrays, functions, non-empty strings and non-zero numbers:

```javascript
// This will output: 'wow, they were all true'
if ({} && {sillyproperty: "sillyvalue"} && [] && ['element'] && function () {} && "string" && 89) {
    console.log("wow, they were all true");
}
```

Examples of falsey values are the empty string, undefined, null and the value 0.

```javascript
// This will output: 'none of them were true'
if (!("" || undefined || null || 0)) {
    console.log("none of them were true");
}
```

Undefined and null

In JavaScript, the undefined value means not initialised or unknown, whereas null means an absence of a value.

References

  • JavaScript Patterns, Stoyan Stefanov
  • JavaScript: The Definitive Guide, David Flanagan
  • JavaScript: The Good Parts, Douglas Crockford
April 4, 2012
· 30,231 Views
JAXB, SAX, DOM Performance
This post investigates the performance of unmarshalling an XML document to Java objects using a number of different approaches. The XML document is very simple. It contains a collection of Person entities:

```xml
<persons>
    <person>
        <id>person0</id>
        <name>name0</name>
    </person>
    <person>
        <id>person1</id>
        <name>name1</name>
    </person>
    ...
</persons>
```

There is a corresponding Person Java object for the Person entity in the XML...

```java
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {"id", "name"})
public class Person {

    private String id;
    private String name;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String value) {
        this.name = value;
    }
}
```

and a PersonList object to represent a collection of Persons.

```java
@XmlAccessorType(XmlAccessType.FIELD)
@XmlRootElement(name = "persons")
public class PersonList {

    @XmlElement(name = "person")
    private List<Person> personList = new ArrayList<Person>();

    public List<Person> getPersons() {
        return personList;
    }

    public void setPersons(List<Person> persons) {
        this.personList = persons;
    }
}
```

The approaches investigated were:

  • various flavours of JAXB
  • SAX
  • DOM

In all cases, the objective was to get the entities in the XML document into the corresponding Java objects. The JAXB annotations on the Person and PersonList POJOs are used in the JAXB tests. The same classes can be used in the SAX and DOM tests (the annotations are simply ignored). Initially the reference implementations for JAXB, SAX and DOM were used. The Woodstox StAX parser was then used; it would have been called in some of the JAXB unmarshalling tests. The tests were carried out on my Dell laptop, a Pentium Dual-Core CPU, 2.1 GHz, running Windows 7.

Test 1 - Using JAXB to unmarshal a File

```java
@Test
public void testUnMarshallUsingJAXB() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    PersonList obj = (PersonList) unmarshaller.unmarshal(new File(filename));
}
```

Test 1 illustrates how simple the programming model for JAXB is.
It is very easy to go from an XML file to Java objects. There is no need to get involved with the nitty-gritty details of marshalling and parsing.

Test 2 - Using JAXB to unmarshal a StreamSource

Test 2 is similar to Test 1, except this time a StreamSource object wraps the File object. The StreamSource object gives a hint to the JAXB implementation to stream the file.

```java
@Test
public void testUnMarshallUsingJAXBStreamSource() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    StreamSource source = new StreamSource(new File(filename));
    PersonList obj = (PersonList) unmarshaller.unmarshal(source);
}
```

Test 3 - Using JAXB to unmarshal a StAX XMLStreamReader

Again similar to Test 1, except this time an XMLStreamReader instance wraps a FileReader instance, which is unmarshalled by JAXB.

```java
@Test
public void testUnMarshallingWithStAX() throws Exception {
    FileReader fr = new FileReader(filename);
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    XMLInputFactory xmlif = XMLInputFactory.newInstance();
    XMLStreamReader xmler = xmlif.createXMLStreamReader(fr);
    PersonList obj = (PersonList) unmarshaller.unmarshal(xmler);
}
```

Test 4 - Just use DOM

This test uses no JAXB and instead just uses the JAXP DOM approach. This means straight away more code is required than for any JAXB approach.
```java
@Test
public void testParsingWithDom() throws Exception {
    DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = domFactory.newDocumentBuilder();
    Document doc = builder.parse(filename);
    List<Person> personsAsList = new ArrayList<Person>();
    NodeList persons = doc.getElementsByTagName("person");
    for (int i = 0; i < persons.getLength(); i++) {
        Element personElement = (Element) persons.item(i);
        Person person = new Person();
        person.setId(personElement.getElementsByTagName("id").item(0).getTextContent());
        person.setName(personElement.getElementsByTagName("name").item(0).getTextContent());
        personsAsList.add(person);
    }
}
```

Test 5 - Just use SAX

This test uses no JAXB and just uses SAX. SAX parsing requires a handler to deal with the parsing events.

```java
@Test
public void testParsingWithSax() throws Exception {
    SAXParserFactory factory = SAXParserFactory.newInstance();
    SAXParser saxParser = factory.newSAXParser();
    final List<Person> persons = new ArrayList<Person>();
    DefaultHandler handler = new DefaultHandler() {
        boolean bpersonId = false;
        boolean bpersonName = false;

        public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
            if (qName.equalsIgnoreCase("id")) {
                bpersonId = true;
                Person person = new Person();
                persons.add(person);
            } else if (qName.equalsIgnoreCase("name")) {
                bpersonName = true;
            }
        }

        public void endElement(String uri, String localName, String qName) throws SAXException {
        }

        public void characters(char ch[], int start, int length) throws SAXException {
            if (bpersonId) {
                String personID = new String(ch, start, length);
                bpersonId = false;
                Person person = persons.get(persons.size() - 1);
                person.setId(personID);
            } else if (bpersonName) {
                String name = new String(ch, start, length);
                bpersonName = false;
                Person person = persons.get(persons.size() - 1);
                person.setName(name);
            }
        }
    };
    saxParser.parse(filename, handler);
}
```

The tests were run 5 times each for 3 files containing a collection of Person entities. The first file contained 100 Person entities and was 5K in size. The second contained 10,000 entities and was 500K in size, and the third contained 250,000 Person entities and was 15 Meg in size. In no case was an XSD used or any validation performed. The results are given in tables where the times for the different runs are comma-separated.

TEST RESULTS

The tests were first run using JDK 1.6.26, 32-bit, with the reference implementations for SAX, DOM and JAXB shipped with the JDK.
| Unmarshall Type | 100 Persons time (ms) | 10K Persons time (ms) | 250K Persons time (ms) |
|---|---|---|---|
| JAXB (Default) | 48, 13, 5, 4, 4 | 78, 52, 47, 50, 50 | 1522, 1457, 1353, 1308, 1317 |
| JAXB (StreamSource) | 11, 6, 3, 3, 2 | 44, 44, 48, 45, 43 | 1191, 1364, 1144, 1142, 1136 |
| JAXB (StAX) | 18, 2, 1, 1, 1 | 111, 136, 89, 91, 92 | 2693, 3058, 2495, 2472, 2481 |
| DOM | 16, 2, 2, 2, 2 | 89, 50, 55, 53, 50 | 1992, 2198, 1845, 1776, 1773 |
| SAX | 4, 2, 1, 1, 1 | 29, 34, 23, 26, 26 | 704, 669, 605, 589, 591 |

JDK 1.6.26 test comments:

  • The first unmarshalling run is usually the longest.
  • Memory usage for JAXB and SAX is similar: about 2 Meg for the file with 10,000 persons and 36 - 38 Meg for the file with 250,000. DOM memory usage is far higher: 6 Meg for the 10,000 persons file, and greater than 130 Meg for the 250,000 persons file.
  • The performance times for pure SAX are better, particularly for very large files.

The exact same tests were run again, using the same JDK (1.6.26), but this time the Woodstox implementation of StAX parsing was used.

| Unmarshall Type | 100 Persons time (ms) | 10K Persons time (ms) | 250K Persons time (ms) |
|---|---|---|---|
| JAXB (Default) | 48, 13, 5, 4, 4 | 78, 52, 47, 50, 50 | 1522, 1457, 1353, 1308, 1317 |
| JAXB (StreamSource) | 11, 6, 3, 3, 2 | 44, 44, 48, 45, 43 | 1191, 1364, 1144, 1142, 1136 |
| JAXB (StAX) | 18, 2, 1, 1, 1 | 111, 136, 89, 91, 92 | 2693, 3058, 2495, 2472, 2481 |
| DOM | 16, 2, 2, 2, 2 | 89, 50, 55, 53, 50 | 1992, 2198, 1845, 1776, 1773 |
| SAX | 4, 2, 1, 1, 1 | 29, 34, 23, 26, 26 | 704, 669, 605, 589, 591 |

JDK 1.6.26 + Woodstox test comments:

  • Again, the first unmarshalling run is usually proportionally longer.
  • Again, memory usage for SAX and JAXB is very similar. Both are far better than DOM.
  • The results are very similar to the first run. The JAXB (StAX) approach time has improved considerably, due to the Woodstox implementation of StAX parsing being used.
  • The performance times for pure SAX are still the best, particularly for large files.

The exact same tests were run again, but this time using JDK 1.7.02 and the Woodstox implementation of StAX parsing.
| Unmarshall Type | 100 Persons time (ms) | 10,000 Persons time (ms) | 250,000 Persons time (ms) |
|---|---|---|---|
| JAXB (Default) | 165, 5, 3, 3, 5 | 611, 23, 24, 46, 28 | 578, 539, 511, 511, 519 |
| JAXB (StreamSource) | 13, 4, 3, 4, 3 | 43, 24, 21, 26, 22 | 678, 520, 509, 504, 627 |
| JAXB (StAX) | 21, 1, 0, 0, 0 | 300, 69, 20, 16, 16 | 637, 487, 422, 435, 458 |
| DOM | 22, 2, 2, 2, 2 | 420, 25, 24, 23, 24 | 1304, 807, 867, 747, 1189 |
| SAX | 7, 2, 2, 1, 1 | 169, 15, 15, 19, 14 | 366, 364, 363, 360, 358 |

JDK 7 + Woodstox test comments:

  • The performance times for JDK 7 are much better overall. There are some anomalies: the first time the 100 persons file and the 10,000 persons file are parsed.
  • The memory usage is slightly higher. For SAX and JAXB it is 2 - 4 Meg for the 10,000 persons file and 45 - 49 Meg for the 250,000 persons file. For DOM it is higher again: 5 - 7.5 Meg for the 10,000 persons file and 136 - 143 Meg for the 250,000 persons file.

Note, with regard to all tests:

  • No memory analysis was done for the 100 persons file. The memory usage was just too small to provide useful information.
  • Initialising a JAXB context for the first time can take up to 0.5 seconds. This was not included in the test results as it only happened the very first time. After that, the JVM initialises the context very quickly (consistently < 5 ms). If you notice this behaviour with whatever JAXB implementation you are using, consider initialising at start-up.
  • These tests use a very simple XML file. In reality there would be more object types and more complex XML. However, these tests should still provide guidance.

Conclusions:

  • The performance times for pure SAX are slightly better than JAXB, but only for very large files. Unless you are using very large files, the performance differences are not worth worrying about. The programming-model advantages of JAXB win out over the complexity of the SAX programming model. Don't forget that JAXB also provides random access to the document, like DOM does; SAX does not.
  • Performance times look a lot better with Woodstox, if JAXB / StAX is being used.
Performance times with the 64-bit JDK 7 look a lot better. Memory usage looks slightly higher.

From http://dublintech.blogspot.com/2011/12/jaxb-sax-dom-performance.html
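The methodology above (five runs per file, with the first run usually the longest because of class loading and JIT warm-up) can be reproduced with a tiny harness along the following lines. This is a sketch, not the author's actual test code; the class and method names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

public class RunTimer {

    // Runs the task the given number of times and returns each run's elapsed
    // wall-clock time in milliseconds. Expect the first entry to be the
    // largest: it pays for class loading and JIT warm-up, as in the tables above.
    public static List<Long> timeRuns(Callable<?> task, int runs) throws Exception {
        List<Long> times = new ArrayList<Long>();
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.call();
            times.add((System.nanoTime() - start) / 1_000_000L);
        }
        return times;
    }
}
```

A run of one of the JAXB tests would then look something like `timeRuns(() -> unmarshaller.unmarshal(new File(filename)), 5)`, with the unmarshaller set up once beforehand so that context initialisation is not mixed into the parse times.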
December 31, 2011
· 45,851 Views · 4 Likes
What are the differences between JAXB 1.0 and JAXB 2.0?
What are the differences between JAXB 1.0 and JAXB 2.0?

  • JAXB 1.0 requires only JDK 1.3 or later. JAXB 2.0 requires JDK 1.5 or later.
  • JAXB 2.0 makes use of generics and thus provides compile-time type-safety checking, reducing runtime errors.
  • In JAXB 1.0, validation is only available during marshalling. In JAXB 2.0, validation is also available during unmarshalling.
  • In JAXB 1.0, processing terminates when a validation error occurs. In JAXB 2.0, custom ValidationEventHandlers can be used to deal with validation errors.
  • JAXB 2.0 uses annotations and supports bi-directional mapping.
  • JAXB 2.0 generates less code.
  • JAXB 1.0 does not support key XML Schema components such as anyAttribute, key, keyref, and unique. It also does not support attributes such as complexType.abstract, element.abstract, element.substitutionGroup, xsi:type, complexType.block, complexType.final, element.block, element.final, schema.blockDefault, and schema.finalDefault. In version 2.0, support has been added for all of these schema constructs.

References: http://javaboutique.internet.com/tutorials/jaxb/index3.html

From http://dublintech.blogspot.com/2011/04/what-are-differences-between-jaxb-10.html
December 30, 2011
· 14,650 Views
The “4+1” View Model of Software Architecture
In November 1995, while working as lead software architect at Hughes Aircraft of Canada, Philippe Kruchten published a paper entitled "Architectural Blueprints—The “4+1” View Model of Software Architecture". The intent was to come up with a mechanism to separate the different aspects of a software system into different views of the system. Why? Because different stakeholders always have different interests in a software system. Some aspects of a system are relevant to developers; others are relevant to system administrators. Developers want to know about things like classes; system administrators want to know about deployment, hardware and network configurations, and don't care about classes. Similar points can be made for testers, project managers and customers. Kruchten thought it made sense to decompose architecture into distinct views so stakeholders could get what they wanted. In total there were five views in his approach, but he decided to call it 4 + 1. We'll discuss why it's called 4 + 1 later! But first, let's have a look at each of the different views.

The logical view

This contains information about the various parts of the system. In UML, the logical view is modelled using Class, Object, State machine and Interaction diagrams (e.g. Sequence diagrams). Its relevance is really to developers.

The process view

This describes the concurrent processes within the system. It encompasses some non-functional requirements such as performance and availability. In UML, Activity diagrams, which can be used to model concurrent behaviour, are used to model the process view.

The development view

The development view focuses on software modules and subsystems. In UML, Package and Component diagrams are used to model the development view.

The physical view

The physical view describes the physical deployment of the system, for example how many nodes are used and what is deployed on which node.
Thus, the physical view concerns some non-functional requirements such as scalability and availability. In UML, Deployment diagrams are used to model the physical view.

The use case view

This view describes the functionality of the system from the perspective of the outside world. It contains diagrams describing what the system is supposed to do from a black-box perspective. This view typically contains Use Case diagrams. All other views use this view to guide them.

Why is it called 4 + 1 instead of just 5? Because of the special significance the use case view has. When all the other views are finished, it's effectively redundant. However, all the other views would not be possible without it. It details the high-level requirements of the system; the other views detail how those requirements are realised.

4 + 1 came before UML

It's important to remember that the 4 + 1 approach was put forward two years before the introduction of UML, which did not manifest in its first guise until 1997. UML is how most enterprise architectures are modelled, and the 4 + 1 approach is still relevant to UML today. UML 2.0 has 13 different types of diagrams, and each diagram type can be categorised into one of the 4 + 1 views. UML is 4 + 1 friendly!

So is it important?

The 4 + 1 approach isn't just about satisfying different stakeholders. It makes modelling easier to do because it makes it easier to organise. A typical project will contain numerous diagrams of the various types. For example, a project may contain a few hundred sequence diagrams and several class diagrams. Grouping diagrams of similar type and purpose means there is an emphasis on separating concerns. Sure isn't it just the same with Java? Grouping Java classes of similar purpose and related responsibilities into packages means organisation is better. Similarly, grouping different components into different jar files means organisation is better.
Modelling tools will usually support the 4 + 1 approach, which means projects will have templates for how to split the various types of diagrams. When projects in a company follow industry-standard templates, things are again better organised. The 4 + 1 approach also gives architects a way to prioritise modelling concerns. It is rare that a project will have enough time to model every single diagram possible for an architecture, so architects can prioritise the different views. For example, for a business-domain-intensive project it would make sense to prioritise the logical view. In a project with high concurrency and complex timing, it would make sense to ensure the process view gets ample time. Similarly, the 4 + 1 approach makes it possible for stakeholders to get the parts of the model that are relevant to them.

References:

Architectural Blueprints—The “4+1” View Model of Software Architecture: http://www.cs.ubc.ca/~gregor/teaching/papers/4+1view-architecture.pdf
Learning UML 2.0 by Russ Miles & Kim Hamilton, O'Reilly

From http://dublintech.blogspot.com/2011/05/41-view-model-of-software-architecture.html
December 28, 2011
· 52,620 Views
Consistent Hashing
Consistent Hashing is a clever algorithm used in high-volume caching architectures where scaling and availability are important. It is used in many high-end web architectures, for example Amazon's Dynamo. Let me try and explain it!

Firstly, let's consider the problem. Let's say your website sells books (sorry Amazon, but you're a brilliant example). Every book has an author, a price, details such as the number of pages, and an ISBN which acts as a primary key uniquely identifying each book. To improve the performance of your system you decide to cache the books. You split the cache over four servers, so you have to decide which book to put on which server. You want to do this using a deterministic function so you can be sure where things are. You also want to do this at low computational cost (otherwise what's the point of caching). So you hash the book's ISBN and then mod the result by the number of servers, which in our case is 4. Let's call this number the book's hash key. So let's say your books are:

  • Toward the Light, A.C. Grayling (ISBN=0747592993)
  • Aftershock, Philippe Legrain (ISBN=1408702231)
  • The Outsider, Albert Camus (ISBN=9780141182506)
  • The History of Western Philosophy, Bertrand Russell (ISBN=0415325056)
  • The Life You Can Save, Peter Singer (ISBN=0330454587)
  • ... etc.

After hashing the ISBN and modding the result by 4, let's say the resulting hash keys are:

  • Hash(Toward the Light) % 4 = 2. Hash key 2 means this book will be cached by Server 2.
  • Hash(Aftershock) % 4 = 1. Hash key 1 means this book will be cached by Server 1.
  • Hash(The Outsider) % 4 = 4. Hash key 4 means this book will be cached by Server 4.
  • Hash(The History of Western Philosophy) % 4 = 1. Hash key 1 means this book will be cached by Server 1.
  • Hash(The Life You Can Save) % 4 = 3. Hash key 3 means this book will be cached by Server 3.

Oh wow, doesn't everything look so great. Any time we have a book's ISBN we can work out its hash key and know what server it's on! Isn't that just so amazing! Well, no.
Your website has become so cool that more and more people are using it. Reading has become so cool that there are more books you need to cache. The only thing that hasn't become so cool is your system. Things are slowing down and you need to scale. Vertical scaling will only get you so far; you need to scale horizontally. OK, so you go out and buy another two servers, thinking this will solve your problem. You now have six servers. This is where you think the pain will end, but alas it won't. Because you now have 6 servers, your algorithm changes: instead of modding by 4 you mod by 6. What does this mean? When you look for a book, because you're modding by 6 you'll end up with a different hash key for it and hence a different server to look for it on. It won't be there, and you'll have incurred a database read to bring it back into the cache. And it's not just one book; it will be the case for the majority of your books. Why? Because the only time a book will be on the correct server, and not need to be re-read from the database, is when hash(isbn) % 4 = hash(isbn) % 6. Mathematically, that will be the minority of your books. So your attempt at scaling has forced the majority of your cache to restructure itself, resulting in massive database re-reads. This can bring your system down. Customers won't be happy with you, sunshine! We need a solution! The solution is to come up with a system where, when you add more servers, only a small minority of books move to new servers, meaning a minimum number of database reads. Let's go for it!

Consistent Hashing explained

Consistent hashing is an approach where the books get the same hash key irrespective of the number of books and irrespective of the number of servers, unlike our previous algorithm, which modded by the number of servers. It doesn't matter if there is one server, 5 servers or 5 million servers: the books always, always, always get the same hash key.
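The "minority" claim above is easy to check numerically. The sketch below (an illustrative helper, not from the original post) counts how many hash keys land on the same server before and after the change from 4 to 6 servers. Only keys where h % 12 is 0, 1, 2 or 3 keep their server, i.e. one third of them, so roughly two thirds of the cache would have to be re-read from the database.

```java
public class ModRemapCheck {

    // Returns the fraction of hash keys 0..keys-1 that map to the same server
    // when the server count changes from oldN to newN under the naive
    // "hash % serverCount" scheme.
    static double unchangedFraction(int keys, int oldN, int newN) {
        int same = 0;
        for (int h = 0; h < keys; h++) {
            if (h % oldN == h % newN) {
                same++;
            }
        }
        return (double) same / keys;
    }
}
```

For oldN = 4 and newN = 6 this comes out at about one third, confirming that the naive scheme remaps the majority of books.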
So how exactly do we generate consistent hash values for the books? Simple. We use a similar approach to our initial one, except we stop modding by the number of servers. Instead we mod by something else that is constant and independent of the number of servers. OK, so let's hash the ISBN as before and then mod by 100. If you have 1,000 books, you end up with a distribution of hash keys for the books between 0 and 100, irrespective of the number of servers. All good. All we need now is a way to figure out, deterministically and at low computational cost, which books reside on which servers. Otherwise, again, what would be the point of caching?

So here's the ultra funky part... You take something unique and constant for each server (for example its IP address) and you pass that through the exact same algorithm. This means you also end up with a hash key (in this case a number between 0 and 100) for each server. Let's say:

  • Server 1 gets: 12
  • Server 2 gets: 37
  • Server 3 gets: 54
  • Server 4 gets: 87

Now we assign each server to be responsible for caching the books with hash keys between its own hash key and that of its next neighbour (next in the upward direction). This means:

  • Server 1 stores all the books with hash keys between 12 and 37
  • Server 2 stores all the books with hash keys between 37 and 54
  • Server 3 stores all the books with hash keys between 54 and 87
  • Server 4 stores all the books with hash keys between 87 and 100, and between 0 and 12

If you are still with me... great. Because now we are going to scale. We are going to add two more servers. Let's say server 5 is added and gets the hash key 20, and server 6 is added and gets the hash key 70. This means:

  • Server 1 now only stores books with hash keys between 12 and 20
  • Server 5 stores the books with hash keys between 20 and 37
  • Server 3 now only stores books with hash keys between 54 and 70
  • Server 6 stores books with hash keys between 70 and 87
  • Server 2 and Server 4 are completely unaffected
OK, so this means:

  • All books still get the same hash key. Their hash keys are consistent.
  • Books with hash keys between 20 and 37, and between 70 and 87, are now sought from new servers. The first time they are sought they won't be there; they will be re-read from the database and then cached in the respective servers. This is OK as long as it's only a small number of books. There is a small initial impact on the system, but it's manageable.

Now, you're probably saying: "I get all this, but I'd like to see some better distribution. When you added two servers, only two servers got their load lessened. Could you share the benefits please?" Of course. To do that, we allocate each server a number of small ranges rather than just one large range. So, instead of Server 2 getting one large range between 37 and 54, it gets a number of small ranges. For example, it could get: 5 - 8, 12 - 17, 24 - 30, 43 - 49, 58 - 61, 71 - 74, 88 - 91. The same goes for all servers. The small ranges are all randomly spread, meaning that one server won't just have one adjacent neighbour but a collection of different neighbours for each of its small ranges. When a new server is added it will also get a number of ranges, and a number of different neighbours, which means its benefits will be distributed more evenly. Isn't that so cool!

Consistent hashing's benefits aren't just limited to scaling; it is also brilliant for availability. Let's say Server 2 goes offline. What happens is the complete opposite of what happens when a new server is added: each one of Server 2's segments becomes the responsibility of the server responsible for the preceding segment. Again, if servers are given a fair distribution of the ranges they are responsible for, then when a server fails the burden will be evenly distributed amongst the remaining servers. And again the point has to be emphasised: the books never have to be rehashed. Their hashes are consistent.
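The whole scheme, including the multiple-small-ranges idea (often called virtual nodes), can be sketched in a few lines of Java with a sorted map. This is a minimal illustration, not production code: the hash function and server names are made up for the example, and each virtual node is placed by hashing "serverName#i".

```java
import java.util.Map;
import java.util.TreeMap;

public class ConsistentHashRing {

    // Maps each point on the ring to the server that owns the range
    // starting at that point (matching the ranges described above).
    private final TreeMap<Integer, String> ring = new TreeMap<Integer, String>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    // Illustrative stand-in for a real hash function: spread hashCode
    // a little and keep it non-negative.
    private int hash(String key) {
        int h = key.hashCode();
        h ^= (h >>> 16);
        return h & 0x7fffffff;
    }

    // Place the server on the ring at several points, so that adding or
    // removing it spreads the change across many neighbours.
    public void addServer(String server) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    // A book belongs to the server whose point is at or below its hash,
    // wrapping around to the highest point on the ring if necessary.
    public String serverFor(String isbn) {
        if (ring.isEmpty()) {
            throw new IllegalStateException("no servers");
        }
        Map.Entry<Integer, String> owner = ring.floorEntry(hash(isbn));
        return (owner != null ? owner : ring.lastEntry()).getValue();
    }
}
```

Adding a server only ever moves a book onto the new server; every other book stays exactly where it was, which is the whole point of the technique.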
References http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html http://www.tomkleinpeter.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/ http://michaelnielsen.org/blog/consistent-hashing/ From http://dublintech.blogspot.com/2011/06/consistent-hashing.html
December 27, 2011
· 25,065 Views · 16 Likes

Comments

Java Lambda Streams and Groovy Closures Comparisons

Aug 08, 2018 · Lindsay Burk

Feedback taken and updates made


Writing while reads pile up

Mar 05, 2014 · Alex Miller

Nice article. I have heard Web developers make the point that since Mongo is more JSON-friendly, that is another argument to use it. So when AJAX requests are sending and receiving, if you use MongoDB you get the option to persist and retrieve JSON very easily, almost negating the need for a traditional server. What do you think of that?


Free CSS templates includes some by Andreas Viklund

Jan 10, 2013 · DM 1407

@Russel Winder Thanks. You are right. I updated the code. And good point about Java 8 as well. I don't think Java 8 will offer as much as Scala offers in this regard, and I will perhaps cover that interesting subject in another post.




Creating a spotlight effect in Flex with ActionScript 3

Jan 01, 2012 · Mark Haliday

Robert, typos fixed. Thanks. There is *no* difference in performance times.

