DZone
The Latest Coding Topics

Edge Side Includes with Varnish in 10 minutes
Varnish is a tool built to be an intermediate server in the HTTP chain, not an origin server like Apache or IIS. You can outsource caching, logging, zipping and other filters to Varnish, since they are not the main feature of an HTTP server like Apache. What we'll see today is how to work with Edge Side Includes in Varnish, as a way to compose dynamic pages from independently generated and cached fragments; we won't touch logging or the other features. If you are familiar with PHP, ESI is an (almost) standard for executing include()-like statements on a front-end server like Varnish; the proxy is able not only to assemble pages but also to cache them according to different policies: for a certain time, for a single user, and so on. Thijs Feryn and Alessandro Nadalin first introduced me to Varnish and ESI, respectively; I recommend their blogs and talks as additional sources on these topics.

Installation

The default version of Varnish in Ubuntu 11.04 is 2.1, which apparently has only limited ESI support. Installation via packages means adding a public key and a repository to your list of software sources, and installing the varnish package via apt-get or an equivalent command. You can install version 3.0.0 via packages, but only in Ubuntu LTS (10.04). A way that always works in these cases is installation from source. The linked page lists the package dependencies and gives you a sequence of 3-4 commands to compile Varnish seamlessly.
I used checkinstall instead of make install to get a binary package that I can reuse later:

$ sudo checkinstall -D --install=no --fstrans=no --maintainer=youraddress@gmail.com --reset-uids=yes --nodoc --pkgname=varnish --pkgversion=3.0.0 --pkgrelease=201108231000 --arch=i386

After installation with dpkg, check that varnishd is available and of the right version:

[10:18:17][giorgio@Desmond:~]$ varnishd -V
varnishd (varnish-3.0.0 revision 3bd5997)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2011 Varnish Software AS

Varnish needs minimal configuration: a server to point at. For our tests you can edit /etc/varnish/default.vcl and check (or add) the following:

backend default {
  .host = "127.0.0.1";
  .port = "80";
}

You can execute ps -A | grep varnishd at any time to see if Varnish is already running.

Execution

[09:55:18][giorgio@Desmond:~]$ sudo varnishd -f /etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080
storage_malloc: max size 1024 MB.

One gigabyte of memory is allocated for keeping fragments in RAM. An administrative interface will respond on port 2000, and will only be accessible from localhost. http://localhost:8080/ is the exposed HTTP server, and will point to http://localhost:80 as defined in the configuration. Look at man varnishd for more switches and at man vcl for additional explanations of the configuration language.

A bit of ESI

ESI is a technique for leveraging the HTTP cache while still building dynamic pages. The problem with today's pages is that they are highly dynamic, but not uniformly so: some sections change very often or according to the current user (Welcome, John Doe, or the current posts timeline); some sections do not change at all for days (the navigation bar and the layout structure); some sections change in response to external events (the list of incoming messages changes only when a new message arrives). It would be ideal to set different caching configurations for each of the page's fragments.
But implementing this strategy in the application code is error-prone and means reinventing the wheel: to use the HTTP cache without ESI, you would be forced to load every single fragment of the page with Ajax, even a single paragraph. With ESI, your application produces only the pieces, and lets an implementation of the Edge Side Includes specification, like Varnish, assemble the whole thing. For example, take a very static HTML page: Varnish will work on this page as-is. Take a really dynamic PHP page, one that can change at any time (in the example it renders the current date, 2011-08-23): Varnish will work on this page too. There is no sign of Varnish's intervention, and the process is totally transparent to the client. Sometimes you can even throw away Zend_Layout and similar components for assembling HTML on the PHP side.
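The composition step described above can be simulated in a few lines of Java. This is only an illustrative sketch (the class and fragment names are invented, and real ESI processing is done by Varnish itself, not by application code): the cacheable skeleton embeds an <esi:include> tag, and "assembly" replaces it with the independently generated fragment.

```java
import java.time.LocalDate;

public class EsiSketch {
    // The page skeleton the origin server produces; it embeds an ESI tag
    // instead of the dynamic content, so the skeleton can be cached for days.
    static String skeleton() {
        return "<html><body><p>Today is <esi:include src=\"/date\"/></p></body></html>";
    }

    // The small, highly dynamic fragment, generated (and cached) independently.
    static String dateFragment() {
        return LocalDate.now().toString();
    }

    // A naive stand-in for what Varnish does when ESI is enabled:
    // resolve the tag by fetching the fragment and splicing it into the page.
    static String assemble(String page) {
        return page.replace("<esi:include src=\"/date\"/>", dateFragment());
    }

    public static void main(String[] args) {
        System.out.println(assemble(skeleton()));
    }
}
```

The client only ever sees the assembled page; the skeleton and the fragment can carry different cache policies.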
August 23, 2011
by Giorgio Sironi
· 24,924 Views · 1 Like
Setting Default Value for @Html.EditorFor in ASP.NET MVC
In this blog post I am going to explain how to set default values for model entities.
August 21, 2011
by Jalpesh Vadgama
· 72,298 Views
Avoiding Java Serialization to increase performance
Many frameworks that store objects in an off-line or cached manner use standard Java Serialization to encode the object as bytes which can be turned back into the original object. Java Serialization is generic and can serialise just about any type of object.

Why avoid it

The main problem with Java Serialization is performance and efficiency. Java serialization is much slower than using in-memory stores and tends to significantly expand the size of the object. Java Serialization also creates a lot of garbage.

Access performance

Say you have a collection and you want to update a field of many elements. Something like:

for (MutableTypes mt : mts) { mt.setInt(mt.getInt()); }

If you update one million elements for about five seconds, how long does each update take?

Huge Collection, update one field: took an average 5.1 ns.
List, update one field: took an average 6.5 ns.
List with Externalizable, update one field: took an average 5,841 ns.
List, update one field: took an average 23,217 ns.

If you update ten million elements for five seconds or more:

Huge Collection, update one field: took an average 5.4 ns.
List, update one field: took an average 6.6 ns.
List with readObject/writeObject, update one field: took an average 6,073 ns.
List, update one field: took an average 22,943 ns.

Huge Collection stores information in a column-based manner, so accessing just one field is much more CPU-cache efficient than using JavaBeans. If you were to update every field, it would be about 2x or more times slower. Using an optimised Externalizable is much faster than the default Serializable; however, it is still about 400x slower than using a JavaBean.

Memory efficiency

The per-object memory used is also important, as it impacts how many objects you can store and the performance of accessing those objects.
Collection type             Heap used per million   Direct memory per million   Garbage produced per million
Huge Collection             0.09 MB                 34 MB                       80 bytes
List                        68 MB                   none                        30 bytes
List using Externalizable   140 MB                  none                        5,941 MB
List                        506 MB                  none                        16,746 MB

This test was performed on a collection of one million elements. To test the amount of garbage produced, I set the Eden size larger than 15 GB so no GC would be performed:

-mx22g -XX:NewSize=20g -XX:-UseTLAB -verbosegc

Conclusion

Having an optimised readExternal/writeExternal can improve performance and the size of a serialised object by 2-4 times; however, if you need to maximise performance and efficiency, you can gain much more by not using serialization at all.

From http://vanillajava.blogspot.com/2011/08/avoiding-java-serialization-to-increase.html
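The kind of optimisation measured above can be sketched as follows. This is a hypothetical bean of my own (the article does not show its benchmark classes): the Externalizable variant writes only the raw field values, skipping the per-field metadata that default serialisation emits.

```java
import java.io.*;

public class ExternalizablePoint implements Externalizable {
    private int x, y;

    public ExternalizablePoint() { }  // Externalizable requires a public no-arg constructor
    public ExternalizablePoint(int x, int y) { this.x = x; this.y = y; }

    public int getX() { return x; }
    public int getY() { return y; }

    // Write only the raw field values: no per-field names or type info in the stream.
    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
    }

    // Serialize and deserialize through in-memory streams.
    static ExternalizablePoint roundTrip(ExternalizablePoint p) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(p);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (ExternalizablePoint) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ExternalizablePoint p = roundTrip(new ExternalizablePoint(3, 4));
        System.out.println(p.getX() + "," + p.getY());  // 3,4
    }
}
```

Even with this optimisation the stream still carries a class descriptor per stream and an object header per element, which is why the article's off-heap approach wins by a much larger margin.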
August 20, 2011
by Peter Lawrey
· 26,047 Views
Attaching Java source with Eclipse IDE
In Eclipse, when you press the Ctrl button and click on any class name, the IDE will take you to the source file for that class. This is the normal behavior for the classes you have in your project. But in case you want the same behavior for Java's core classes too, you can have it by attaching the Java source to the Eclipse IDE. Once you attach the source, whenever you Ctrl+click any Java class name (String, for example), Eclipse will open the source code of that class. To attach the Java source code to Eclipse: when you installed the JDK, you should have selected the option to install the Java source files too; this copies the src.zip file into the installation directory. In Eclipse, go to Window -> Preferences -> Java -> Installed JREs -> Add and choose the JDK you have on your system. Eclipse will now list the JARs found in the dialog box. There, select rt.jar and choose Source Attachment. By default, this will point to the correct src.zip; if not, choose the src.zip file in your Java installation directory. Similarly, if you have the Javadoc downloaded on your machine, you can configure that too in this dialog box. Done! Hereafter, for all the projects that use the above JDK, you'll be able to browse Java's source code just as you browse your own code. From http://veerasundar.com/blog/2011/08/attaching-java-source-with-eclipse-ide
August 18, 2011
by Veera Sundar
· 143,203 Views · 4 Likes
An introduction to JSDoc
JSDoc is the de facto standard for documenting JavaScript code. You need to know at least its syntax (which is also used by many other tools) if you publish code. Alas, documentation is still scarce, but this post can help – it shows you how to run JSDoc and how its syntax works. (The JSDoc wiki [2] is the main source of this post; some examples are borrowed from it.) As a tool, JSDoc takes JavaScript code with special /** */ comments and produces HTML documentation for it. For example, given the following code:

/** @namespace */
var util = {
    /**
     * Repeat str several times.
     * @param {string} str The string to repeat.
     * @param {number} [times=1] How many times to repeat the string.
     * @returns {string}
     */
    repeat: function(str, times) {
        if (times === undefined || times < 1) {
            times = 1;
        }
        return new Array(times+1).join(str);
    }
};

The generated HTML renders these comments as browsable API documentation. This post begins with a quick start, so you can try out JSDoc immediately if you are impatient. Afterwards, more background information is given.

1. Quick start

For the steps described below, you need to have Java installed. JSDoc includes the shell script jsrun.sh, which requires Unix (including OS X and Linux) to run, but it should be easy to translate that script to a Windows batch file.

Download the latest jsdoc_toolkit. Unpack the archive into, say, $HOME/jsdoc-toolkit. Make the script $HOME/jsdoc-toolkit/jsrun.sh executable and tell it where to look for the JSDoc binary and the template (which controls what the result looks like):

JSDOCDIR="$HOME/local/jsdoc-toolkit"
JSDOCTEMPLATEDIR="$JSDOCDIR/templates/jsdoc"

Now you can move the script anywhere you want, e.g. a bin/ directory. For the purpose of this demonstration, we don't move the script. Use jsrun.sh on a directory of JavaScript files:

$HOME/jsdoc-toolkit/jsrun.sh -d=$HOME/doc $HOME/js

Input: $HOME/js – a directory of JavaScript files (see below for an example).
Output: $HOME/doc – where to write the generated files.
If you put the JavaScript code at the beginning of this post into a file $HOME/js/util.js, then JSDoc produces the following files:

$HOME/doc
+-- files.html
+-- index.html
+-- symbols
¦   +-- _global_.html
¦   +-- src
¦   ¦   +-- util.js.html
¦   +-- util.html

2. Introduction: What is JSDoc?

It's a common programming problem: you have written JavaScript code that is to be used by others and you need nice-looking HTML documentation of its API. Java pioneered this domain via its JavaDoc tool. The quasi-standard in the JavaScript world is JSDoc. As seen above, you document an entity by putting before it a special comment that starts with two asterisks.

Templates. In order to output anything, JSDoc always needs a template: a mix of JavaScript and specially marked-up HTML that tells it how to translate the parsed documentation to HTML. JSDoc comes with a built-in template, but there are others that you can download [3].

2.1. Terminology and conventions of JSDoc

Doclet: JSDoc calls its comments doclets, which clashes with JavaDoc terminology, where such comments are called doc comments and a doclet is similar to a JSDoc template, but written in Java.
Variable: The term variable in JSDoc often refers to all documentable entities, which include global variables, object properties, and inner members.
Instance properties: In JavaScript one typically puts methods into a prototype to share them with all instances of a class, while fields (non-function-valued properties) are put into each instance. JSDoc conflates shared properties and per-instance properties and calls them both instance properties.
Class properties, static properties: are properties of classes, usually of constructor functions. For example, Object.create is a class property of Object.
Inner members: An inner member is data nested inside a function. Most relevant for documentation is instance-private data nested inside a constructor function.
function MyClass() {
    var privateCounter = 0; // an inner member
    this.inc = function() { // an instance property
        privateCounter++;
    };
}

2.2. Syntax

Let's review the comment shown at the beginning:

/**
 * Repeat str several times.
 * @param {string} str The string to repeat.
 * @param {number} [times=1] How many times to repeat the string.
 * @returns {string}
 */

This demonstrates some of the JSDoc syntax, which consists of the following pieces.

JSDoc comment: a JavaScript block comment whose first character is an asterisk. This creates the illusion that the token /** starts such a comment.
Tags: Comments are structured by starting lines with tags, keywords that are prefixed with an @ symbol. @param is an example above.
HTML: You can freely use HTML in JSDoc comments; for example, to display a word in a monospaced font.
Type annotations: You can document the type of a value by putting the type name in braces after the appropriate tags. Variations:
  Single type: @param {string} name
  Multiple types: @param {string|number} idCode
  Arrays of a type: @param {string[]} names
Name paths: are used to refer to variables inside JSDoc comments. The syntax of such paths is as follows:
  myFunction
  MyConstructor
  MyConstructor.classProperty
  MyConstructor#instanceProperty
  MyConstructor-innerMember

2.3. A word on types

There are two kinds of values in JavaScript: primitives and objects [7]. Primitive types: boolean, number, string. The values undefined and null are also considered primitive. Object types: all other types are object types, including arrays and functions. Watch out: the names of primitive types start with a lowercase letter. Each primitive type has a corresponding wrapper type, an object type with a capitalized name whose instances are objects: the wrapper type of boolean is Boolean, the wrapper type of number is Number, and the wrapper type of string is String. Getting the name of the type of a value: for a primitive value p, use typeof p.
> typeof ""
'string'

Compare: an instance of the wrapper type is an object.

> typeof new String()
'object'

For an object value o, use o.constructor.name. Example:

> new String().constructor.name
'String'

3. Basic tags

Meta-data:
@fileOverview: marks a JSDoc comment that describes the whole file.
@author: Who has written the variable being documented?
@deprecated: indicates that the variable is not supported any more. It is good practice to document what to use instead.
@example: contains a code example illustrating how the given entity should be used.

/**
 * @example
 * var str = "abc";
 * console.log(repeat(str, 3)); // abcabcabc
 */

Linking:
@see: points to a related resource.

/**
 * @see MyClass#myInstanceMethod
 * @see The Example Project.
 */

{@link ...}: works like @see, but can be used inside other tags.
@requires resourceDescription: a resource that the documented entity needs. The resource description is either a name path or a natural-language description.

Versioning:
@version versionNumber: indicates the version of the documented entity. Example: @version 10.3.1
@since versionNumber: indicates since which version the documented entity has been available. Example: @since 10.2.0

4. Documenting functions and methods

For functions and methods, one can document parameters, return values, and the exceptions they might throw.

@param {paramType} paramName description: describes the parameter whose name is paramName. Type and description are optional. Examples:
  @param str
  @param str The string to repeat.
  @param {string} str
  @param {string} str The string to repeat.

Advanced features:
  Optional parameter: @param {number} [times] The number of times is optional.
  Optional parameter with default value: @param {number} [times=1] The number of times is optional.

@returns {returnType} description: describes the return value of the function or method. Either type or description can be omitted.
@throws {exceptionType} description: describes an exception that might be thrown during the execution of the function or method. Either type or description can be omitted.

4.1. Inline type information (“inline doc comments”)

There are two ways of providing type information for parameters and return values. First, you can add a type annotation to @param and @returns.

/**
 * @param {String} name
 * @returns {Object}
 */
function getPerson(name) {
}

Second, you can inline the type information:

function getPerson(/**String*/ name) /**Object*/ {
}

5. Documenting variables and fields

Fields are properties with non-function values. Because instance fields are often created inside a constructor, you have to document them there.

@type {typeName}: What type does the documented variable have? Example:

/** @constructor */
function Car(make, owner) {
    /** @type {string} */
    this.make = make;
    /** @type {Person} */
    this.owner = owner;
}

This tag can also be used to document the return type of functions, but @returns is preferable in this case.

@constant: a flag that indicates that the documented variable has a constant value.

@default defaultValue: What is the default value of a variable? Example:

/** @constructor */
function Page(title) {
    /**
     * @default "Untitled"
     */
    this.title = title || "Untitled";
}

@property {propType} propName description: document an instance property in the class comment. Example:

/**
 * @class
 * @property {string} name The name of the person.
 */
function Person(name) {
    this.name = name;
}

Without this tag, instance properties are documented as follows:

/**
 * @class
 */
function Person(name) {
    /**
     * The name of the person.
     * @type {string}
     */
    this.name = name;
}

Which one of these styles to use is a matter of taste; @property does introduce redundancies, though.

6. Documenting classes

JavaScript’s built-in means for defining classes are weak, which is why there are many APIs that help with this task [5].
These APIs differ, often radically, so you have to help JSDoc figure out what is going on. There are three basic ways of defining a class:

Constructor function: You must mark a constructor function, otherwise it will not be documented as a class. That is, capitalization alone does not mark a function as a constructor.

/**
 * @constructor
 */
function Person(name) {
}

@class is a synonym for @constructor, but it also allows you to describe the class – as opposed to the function setting up an instance (see the tag documentation below for an example).

API call and object literal: You need two markers. First, you need to tell JSDoc that a given variable holds a class. Second, you need to mark an object literal as defining a class. The latter is done via the @lends tag.

/** @class */
var Person = makeClass(
    /** @lends Person# */
    {
        say: function(message) {
            return "This person says: " + message;
        }
    }
);

API call and object literal with a constructor method: If one of the methods in an object literal performs the task of a constructor (setting up instance data, [8]), you need to mark it as such so that fields are found by JSDoc. The documentation of the class then moves to that method.

var Person = makeClass(
    /** @lends Person# */
    {
        /**
         * A class for managing persons.
         * @constructs
         */
        initialize: function(name) {
            this.name = name;
        },
        say: function(message) {
            return this.name + " says: " + message;
        }
    }
);

Tags:

@constructor: marks a function as a constructor.

@class: marks a variable as a class or a function as a constructor. Can be used in a constructor comment to separate the description of the constructor (first line below) from the description of the class (second line below).

/**
 * Creates a new instance of class Person.
 * @class Represents a person.
 */
Person = function() {
}

@constructs: marks a method in an object literal as taking up the duties of a constructor, that is, setting up instance data. In such a case, the class must be documented there.
Works in tandem with @lends.

@lends namePath: specifies to which class the following object literal contributes. There are two ways of contributing:
  @lends Person# – the object literal contributes instance properties to Person.
  @lends Person – the object literal contributes class properties to Person.

6.1. Inheritance, namespacing

JavaScript has no simple support for subclassing and no real namespaces [6]. You thus have to help JSDoc see what is going on when you are using work-arounds.

@extends namePath: indicates that the documented class is the subclass of another one. Example:

/**
 * @constructor
 * @extends Person
 */
function Programmer(name) {
    Person.call(this, name);
    ...
}
// Remaining code for subclassing omitted

@augments: a synonym for @extends.

@namespace: One can use objects to simulate namespaces in JavaScript. This tag marks such objects. Example:

/** @namespace */
var util = { ... };

7. Meta-tags

Meta-tags are tags that are added to several variables at once. You put them in a comment that starts with “/**#@+”. They are then added to all variables until JSDoc encounters the closing comment “/**#@-*/”. Example:

/**#@+
 * @private
 * @memberOf Foo
 */
function baz() {}
function zop() {}
function pez() {}
/**#@-*/

8. Rarely used tags

@ignore: ignore a variable. Note that variables without /** comments are ignored anyway.

@borrows otherNamePath as this.propName: A variable is just a reference to somewhere else; it is documented there. Example:

/**
 * @constructor
 * @borrows Remote#transfer as this.send
 */
function SpecialWriter() {
    this.send = Remote.prototype.transfer;
}

@description text: provide a description, the same as all of the text before the first tag.

Manual categorization. Sometimes JSDoc misinterprets what a variable is.
Then you can help it via one of the following tags:

Tag        Mark variable as
@function  function
@field     non-function value
@public    public (especially inner variables)
@private   private
@inner     inner and thus also private
@static    accessible without instantiation

Name and membership:
@name namePath: override the parsed name and use the given name instead.
@memberOf parentNamePath: the documented variable is a member of the specified object.

Not explained here: @event, see [2].

9. Related reading

jsdoc-toolkit – A documentation generator for JavaScript: JSDoc homepage on Google Code. Includes a link to the downloads.
JSDoc wiki: the official documentation of JSDoc and source of this post.
JSDoc wiki – TemplateGallery: lists available JSDoc templates.
JSDoc wiki – TagReference: a handy cheat-sheet for JSDoc tags.
Lightweight JavaScript inheritance APIs
Modules and namespaces in JavaScript
JavaScript values: not everything is an object
Prototypes as classes – an introduction to JavaScript inheritance

From http://www.2ality.com/2011/08/jsdoc-intro.html
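The article points to JavaDoc as the model that JSDoc follows. For comparison, here is the same repeat example written as a documented Java method (the class name is my own): the @param/@return tags are the familiar ones, minus JSDoc's brace-delimited type annotations, which JavaDoc does not need because Java is statically typed.

```java
public class StringUtil {
    /**
     * Repeats {@code str} several times.
     *
     * @param str   the string to repeat
     * @param times how many times to repeat the string; values below 1 are treated as 1
     * @return the concatenated result
     */
    public static String repeat(String str, int times) {
        if (times < 1) {
            times = 1;
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) {
            sb.append(str);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(repeat("ab", 3));  // ababab
    }
}
```

Running the javadoc tool over this file plays the role that jsrun.sh plays for JSDoc: both turn structured comments into browsable HTML.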
August 18, 2011
by Axel Rauschmayer
· 70,334 Views · 2 Likes
Practical PHP Refactoring: Replace Data Value with Object
One of the rules of simple design is minimizing the number of moving parts, like classes and methods, as long as the tests are satisfied and we are not accepting duplication or feeling the lack of an explicit concept. Thus, a rule that aids simple design is to use primitive types unless a field already has some behavior attached: we don't create a class for the user's name or the user's password; we just use strings. As we make progress, however, we must be able to revise our decisions via refactoring: if a field gains some logic, this behavior shouldn't be modelled by methods in the containing class, but by a new object. The code in this new class can be reused, while the containing class changes from case to case; if the behavior stayed there, you would end up duplicating the same methods. Transforming a scalar value into an object is the essence of the Replace Data Value with Object refactoring. In most cases, a Value Object or a Parameter Object comes out as a result: while DDD pursues Value Objects as concepts in the domain layer, this refactoring is more general and can be applied anywhere. For instance, in one project we started introducing Data Transfer Objects to model the data sent by the controller to a Service Layer.

Data values in PHP

In PHP, all scalar values are by nature data values, as they cannot host methods: strings, integers, and booleans are proper scalars; arrays are not scalar in the Perl or mathematical sense, but they are still a primitive type. On the borderline, we find some simple objects used as data containers in PHP: ArrayObject, SplHeap, and the other SPL data structures. The classes on the borderline may host methods, but the original class is out of reach for modification, so an indirection has to be introduced ("Local Extension").

Steps

Create the new class: it should contain, as a private field, just the value you want to substitute.
The methods you immediately need have to be chosen between a constructor, getters, and setters (where needed). Change the field in the containing class. Update the constructor to also create the new object and populate the field, or to accept it via injection (a rarer case). Update the original getter to delegate to the new one. Update the original setter to delegate to the new one (where present) or to create a new object. Run tests at the functional level; the changes should be propagated to the construction phases, while the external usage should not change very much.

Example

In the initial state, magic arrays are passed around. It's very easy to build an array where a key is missing or is named incorrectly.

$userService->newPassword(array(
    'userId' => 42,
    'oldPassword' => 'gismo',
    'newPassword' => 'supersecret',
    'repeatNewPassword' => 'supersecret'
));
$this->markTestIncomplete('This refactoring is about the introduction of an object; it suffices that the test does not explode.');

class UserService {
    public function newPassword($changePasswordData) {
        /* it's not interesting to do something here */
    }
}

After the introduction of an ArrayObject extension, a little type safety is ensured and we gain a place to put methods, at little cost.

$userService->newPassword(new ChangePasswordCommand(array(
    'userId' => 42,
    'oldPassword' => 'gismo',
    'newPassword' => 'supersecret',
    'repeatNewPassword' => 'supersecret'
)));
$this->markTestIncomplete('This refactoring is about the introduction of an object; it suffices that the test does not explode.');

class UserService {
    public function newPassword(ChangePasswordCommand $changePasswordData) {
        /* it's not interesting to do something here */
    }
}

class ChangePasswordCommand extends ArrayObject
{
}

We then add methods to implement logic on this object; in this case, validation logic; in general, any kind of code that should not be duplicated by the different clients.
For a stricter implementation, wrap an array or another data structure (scalars, SPL objects) instead of extending ArrayObject: you gain immutability and encapsulation (but this kind of object needs little encapsulation).

class ChangePasswordCommand extends ArrayObject
{
    public function __construct($data)
    {
        if (!isset($data['userId'])) {
            throw new Exception('User id is missing.');
        }
        parent::__construct($data);
    }

    public function getPassword()
    {
        if ($this['newPassword'] != $this['repeatNewPassword']) {
            throw new Exception('Passwords do not match.');
        }
        return $this['newPassword'];
    }
}

Since this is a refactoring, however, it is the least invasive way of introducing objects: the client code can still use the ArrayAccess interface and treat the object as a plain array.
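The same refactoring translates directly to statically typed languages. Here is a hedged Java sketch of the stricter, wrapping variant (the names mirror the PHP example; the Map-based constructor is my own choice): the command wraps the raw data, validates it eagerly, and exposes behaviour instead of array keys.

```java
import java.util.Map;

public class ChangePasswordCommand {
    private final Map<String, String> data;  // wrapped, not extended: no ArrayAccess leaks out

    public ChangePasswordCommand(Map<String, String> data) {
        if (!data.containsKey("userId")) {
            throw new IllegalArgumentException("User id is missing.");
        }
        this.data = Map.copyOf(data);        // defensive copy gives immutability
    }

    // Behaviour that every client would otherwise duplicate lives here.
    public String getPassword() {
        if (!data.get("newPassword").equals(data.get("repeatNewPassword"))) {
            throw new IllegalArgumentException("Passwords do not match.");
        }
        return data.get("newPassword");
    }

    public static void main(String[] args) {
        ChangePasswordCommand cmd = new ChangePasswordCommand(Map.of(
                "userId", "42",
                "oldPassword", "gismo",
                "newPassword", "supersecret",
                "repeatNewPassword", "supersecret"));
        System.out.println(cmd.getPassword());  // supersecret
    }
}
```

Unlike the ArrayObject version, clients can no longer treat the object as a plain array, which is exactly the trade-off the article describes.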
August 15, 2011
by Giorgio Sironi
· 9,553 Views
Serialize only specific class properties to JSON string using JavaScriptSerializer
About one year ago I wrote a blog post about JavaScriptSerializer and the Serialize and Deserialize methods it supports. Note: this blog post has been in draft for some time now, so I decided to complete it and publish it. There might be situations where you want to serialize to a JSON string only specific properties of a given class. You can do that using JavaScriptSerializer in combination with LINQ. Let's say we have the following class definition:

public class Customer
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
    public int Age { get; set; }
    public bool Drinker { get; set; }
    public bool Smoker { get; set; }
    public bool Single { get; set; }
}

Next, let's create a method that builds sample data for our demo:

private List<Customer> GetListOfCustomers()
{
    List<Customer> customers = new List<Customer>();
    customers.Add(new Customer() { Name = "Hajan", Surname = "Selmani", Age = 25, Drinker = false, Smoker = false, Single = false, Email = "hajan@hajan.com" });
    customers.Add(new Customer() { Name = "John", Surname = "Doe", Age = 29, Drinker = false, Smoker = true, Single = false, Email = "john@doe.com" });
    customers.Add(new Customer() { Name = "Mark", Surname = "Moris", Age = 34, Drinker = true, Smoker = true, Single = true, Email = "mark@moris.com" });
    return customers;
}

So, we have three customers with some property values for each of them. Now, let's serialize some of their properties using JavaScriptSerializer.
First, you must add the following directive: using System.Web.Script.Serialization; Next, we create a list of customers from the GetListOfCustomers method, and an instance of the JavaScriptSerializer class:

List<Customer> customers = GetListOfCustomers();
JavaScriptSerializer serializer = new JavaScriptSerializer();

Now, let's say we want to serialize only the Age property data as a JSON string. We do that with only one simple line of code:

//this will serialize only the 'Age' property
string jsonString = serializer.Serialize(customers.Select(x => x.Age));

The result will be a plain array of the ages: [25,29,34]. Nice! Now, what if we want to serialize multiple properties at once, but not all class properties?

string jsonStringMultiple = serializer.Serialize(customers.Select(x => new { x.Name, x.Surname, x.Age }));

The result will be an array of objects with the three properties and their corresponding values that we selected using the LINQ query above. You can see that integer and boolean values are serialized without quotes, which is the correct way. Now, you probably noticed a difference: in the first example, where we selected only one property, there are only the values of the property (no property names), while in the second example we have each property name and its corresponding value. Why is that? It's because in the second query we use new { … } to specify multiple properties in the select statement; the anonymous new { … } creates an object for each found item. So, if you are interested in making some more tests, run the following two lines of code:

var customers1 = customers.Select(x => x.Name).ToList();
var customers2 = customers.Select(x => new { x.Name }).ToList();

and you will clearly see the difference.
If we use the new { } way for single property selection, as in the following example:

string jsonString2 = serializer.Serialize(customers.Select(x => new { x.Age }));

the result will be an array of objects, each carrying only the Age property. The complete demo code used for this blog post:

List<Customer> customers = GetListOfCustomers();
JavaScriptSerializer serializer = new JavaScriptSerializer();

//this will serialize only the 'Age' property
string jsonString = serializer.Serialize(customers.Select(x => x.Age));

string jsonStringMultiple = serializer.Serialize(customers.Select(x => new { x.Name, x.Surname, x.Age, x.Drinker }));

var customers1 = customers.Select(x => x.Name).ToList();
var customers2 = customers.Select(x => new { x.Name }).ToList();

string jsonString2 = serializer.Serialize(customers.Select(x => new { x.Age }));

You can download the demo project here.
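The idea of projecting out only the fields you need before serializing is not specific to .NET. Below is a minimal Java sketch of the same projection-then-serialize approach; since the JDK has no built-in JSON serializer, the JSON is built by hand, and the ProjectionDemo class and its methods are illustrative names, not part of any library:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ProjectionDemo {

    // Stand-in for the article's Customer class, reduced to three fields.
    record Customer(String name, String surname, int age) {}

    // Projecting a single property yields a plain array of values,
    // just like Select(x => x.Age) in the article.
    static String serializeAges(List<Customer> customers) {
        return customers.stream()
                .map(c -> String.valueOf(c.age()))
                .collect(Collectors.joining(",", "[", "]"));
    }

    // Projecting several properties yields an array of objects,
    // like the anonymous new { x.Name, x.Age } projection.
    static String serializeNameAndAge(List<Customer> customers) {
        return customers.stream()
                .map(c -> String.format("{\"Name\":\"%s\",\"Age\":%d}", c.name(), c.age()))
                .collect(Collectors.joining(",", "[", "]"));
    }

    public static void main(String[] args) {
        List<Customer> customers = List.of(
                new Customer("Hajan", "Selmani", 25),
                new Customer("John", "Doe", 29));
        System.out.println(serializeAges(customers));       // [25,29]
        System.out.println(serializeNameAndAge(customers));
    }
}
```

The same distinction the article points out holds here: the single-property projection produces bare values, while the multi-property projection produces name/value objects.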
August 10, 2011
by Hajan Selmani
· 31,987 Views
A collection with billions of entries
There are a number of problems with having a large number of records in memory. One way around this is to use direct memory, but this is too low-level for most developers. Is there a way to make this more friendly?

Limitations of large numbers of objects

The overhead per object is between 12 and 16 bytes for 64-bit JVMs. If the object is relatively small, this is significant and could be more than the data itself. GC pause time increases with the number of objects; pauses can be around one second per GB of objects. Collections and arrays only support two billion elements.

Huge collections

One way to store more data and still follow object-oriented principles is to have wrappers for direct ByteBuffers. This can be tedious to write, but very efficient. What would be ideal is to have these wrappers generated automatically.

Small JavaBean Example

This is an example of a JavaBean which would have far more overhead than the actual data contained:

interface MutableByte {
    public void setByte(byte b);
    public byte getByte();
}

It is also small enough that I can create billions of these on my machine. This example creates a List with 16 billion elements.

final long length = 16_000_000_000L;
HugeArrayList<MutableByte> hugeList = new HugeArrayBuilder<MutableByte>() {{
    allocationSize = 4 * 1024 * 1024;
    capacity = length;
}}.create();
List<MutableByte> list = hugeList;
assertEquals(0, list.size());

hugeList.setSize(length);

// add a GC to see what the GC times are like.
System.gc();

assertEquals(Integer.MAX_VALUE, list.size());
assertEquals(length, hugeList.longSize());

byte b = 0;
for (MutableByte mb : list)
    mb.setByte(b++);

b = 0;
for (MutableByte mb : list) {
    byte b2 = mb.getByte();
    byte expected = b++;
    if (b2 != expected)
        assertEquals(expected, b2);
}

From start to finish, the heap memory used is as follows.

with -verbosegc
0 sec - 3100 KB used
[GC 9671K->1520K(370496K), 0.0020330 secs]
[Full GC 1520K->1407K(370496K), 0.0063500 secs]
10 sec - 3885 KB used
20 sec - 4428 KB used
30 sec - 4428 KB used
... deleted ...
1380 sec - 4475 KB used
1390 sec - 4476 KB used
1400 sec - 4476 KB used
1410 sec - 4476 KB used

The only GC is the one triggered explicitly; without the System.gc(); call, no GC logs appear. After 20 sec, the small increase in memory used comes from logging how much memory was used.

Conclusion

The library is relatively slow. Each get or set takes about 40 ns, which really adds up when there are so many calls to make. I plan to work on it so it is much faster. ;) On the upside, it wouldn't be possible to create 16 billion objects with the memory I have, nor could they be put in an ArrayList, so having it a little slow is still better than not working at all.

From http://vanillajava.blogspot.com/2011/08/collection-with-billions-of-entries.html
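To make the underlying idea concrete, here is a minimal, hand-written sketch of what a generated ByteBuffer wrapper boils down to. This is not the library's actual implementation; the DirectByteStore class and its CHUNK size are illustrative choices. The point is that the entries live in direct memory rather than as individual heap objects, so the GC never has to trace them:

```java
import java.nio.ByteBuffer;

// A byte store backed by chunks of direct memory. Entries are addressed by a
// long index, so capacity is not limited to two billion elements, and the GC
// sees only a handful of ByteBuffer objects regardless of how many entries exist.
public class DirectByteStore {
    private static final int CHUNK = 1 << 20; // 1 MB per buffer (arbitrary)
    private final ByteBuffer[] chunks;

    public DirectByteStore(long capacity) {
        int n = (int) ((capacity + CHUNK - 1) / CHUNK);
        chunks = new ByteBuffer[n];
        for (int i = 0; i < n; i++)
            chunks[i] = ByteBuffer.allocateDirect(CHUNK);
    }

    // Absolute put: no per-entry object is ever created.
    public void setByte(long index, byte b) {
        chunks[(int) (index / CHUNK)].put((int) (index % CHUNK), b);
    }

    public byte getByte(long index) {
        return chunks[(int) (index / CHUNK)].get((int) (index % CHUNK));
    }

    public static void main(String[] args) {
        DirectByteStore store = new DirectByteStore(3L * CHUNK);
        store.setByte(2_500_000L, (byte) 42);
        System.out.println(store.getByte(2_500_000L)); // 42
    }
}
```

The library generalizes this by generating such wrappers for arbitrary JavaBean interfaces and presenting the result behind the List interface, which is also where the per-call overhead the conclusion mentions comes from.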
August 10, 2011
by Peter Lawrey
· 17,104 Views
Mocking JMS infrastructure with MockRunner to favour testing
This article shows *one* way to mock the JMS infrastructure in a Spring JMS application. This allows us to test our JMS infrastructure without actually having to depend on a physical connection being available. If you are reading this article, chances are that you are also frustrated with failing tests in your continuous integration environment due to a JMS server being (temporarily) unavailable. By mocking the JMS provider, developers are left free to test not only the functionality of their API (unit tests) but also the plumbing of the different components, e.g. in a Spring container. In this article I show how a Spring JMS Hello World application can be fully tested without the need of a physical JMS connection. I would like to stress the fact that the code in this article is by no means meant for production and that the approach shown is just one of many.

The infrastructure

For this article I use the following infrastructure:

- Apache ActiveMQ, an open source JMS provider, running on an Ubuntu installation
- Spring 3
- Java 6
- MockRunner
- Eclipse as development environment, running on Windows 7

The Spring configuration

It's my belief that using what I define as the Spring Configuration Strategy Pattern (SCSP) is the right solution in almost all cases where there is the need for a sound testing infrastructure. I will dedicate an entire article to SCSP.

The Spring application context

Here follows the content of jemosJms-appContext.xml. The only important thing to note here is that some services rely on an existing bean named jmsConnectionFactory, but that bean is not defined in this file. This is key to the SCSP and I will illustrate it in one of my future articles.
The Spring application context implementation

Here follows the content of jemosJms-appContextImpl.xml, which can be seen as an implementation of the Spring application context defined above. This Spring context file imports the Spring application context defined above, and it is this file which declares the connection factory. This decoupling of the bean requirement (in the super context) from its actual declaration (the Spring application context implementation) represents the cornerstone of SCSP.

Mocking the JMS provider - The Spring Test application context and MockRunner

Following the same approach I used above, I can now declare a fake connection factory which does not require a physical connection to a JMS provider. Here follows the content of jemosJmsTest-appContext.xml. Please note that this file should reside in the test resources of your project, i.e. it should never make it to production. Here the Spring test application context file imports the Spring application context (not its implementation) and declares a fake connection factory, thanks to the MockRunner MockQueueConnectionFactory class.

A POJO listener

The job of handling the message is delegated to a simple POJO, which happens to be declared also as a bean:

package uk.co.jemos.experiments;

public class HelloWorldHandler {

    /** The application logger */
    private static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger
            .getLogger(HelloWorldHandler.class);

    public void handleHelloWorld(String msg) {
        LOG.info("Received message: " + msg);
    }
}

There is nothing glamorous about this class. In real life it should probably have been the implementation of an interface, but here I wanted to keep things simple.
A simple JMS message producer Here follows an example of a JMS message producer, which would use the real JMS infrastructure to send messages: package uk.co.jemos.experiments; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.jms.core.JmsTemplate; public class JmsTest { /** The application logger */ private static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger .getLogger(JmsTest.class); /** * @param args */ public static void main(String[] args) { ApplicationContext ctx = new ClassPathXmlApplicationContext( "classpath:jemosJms-appContextImpl.xml"); JmsTemplate jmsTemplate = ctx.getBean(JmsTemplate.class); jmsTemplate.send("jemos.tests", new HelloWorldMessageCreator()); LOG.info("Message sent successfully"); } } The only thing of interest here is that this class retrieves the real JmsTemplate to send a message to the queue. Now if I was to run this class as is, I would obtain the following: 2011-07-31 17:09:46 ClassPathXmlApplicationContext [INFO] Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@19e0ff2f: startup date [Sun Jul 31 17:09:46 BST 2011]; root of context hierarchy 2011-07-31 17:09:46 XmlBeanDefinitionReader [INFO] Loading XML bean definitions from class path resource [jemosJms-appContextImpl.xml] 2011-07-31 17:09:46 XmlBeanDefinitionReader [INFO] Loading XML bean definitions from class path resource [jemosJms-appContext.xml] 2011-07-31 17:09:46 DefaultListableBeanFactory [INFO] Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3479e304: defining beans [helloWorldConsumer,jmsTemplate,org.springframework.jms.listener.DefaultMessageListenerContainer#0,jmsConnectionFactory]; root of factory hierarchy 2011-07-31 17:09:46 DefaultLifecycleProcessor [INFO] Starting beans in phase 2147483647 2011-07-31 17:09:47 HelloWorldHandler [INFO] Received message: Hello World 
2011-07-31 17:09:47 JmsTest [INFO] Message sent successfully

Writing the integration test

There are various interpretations as to what the different types of tests mean, and I don't pretend to have the only answer; my interpretation is that an integration test is a functional test which also wires different components together but which does not interact with real external infrastructure (e.g. a DAO integration test fakes data, a JMS integration test fakes the physical JMS connection, an HTTP integration test fakes the remote Web host, etc.). Whereas in my opinion the main purpose of a unit (aka functional) test is to let the API emerge from the tests, the main goal of an integration test is to verify that the plumbing amongst components works as expected, so as to avoid surprises in a production environment. Both unit (functional) and integration tests should run very fast (e.g. under 10 minutes) as they constitute what can be considered the "development token". If unit and integration tests are green, one should feel pretty confident that 90% of the functionality works as expected; in my projects, when both unit and integration tests are green I leave developers free to release the token. This does not mean that the other 10% (e.g. the interaction with the real infrastructure) should not be tested, but this can be delegated to system tests which run nightly and don't require the development token. Because unit and integration tests need to run fast, interaction with external infrastructure should be mocked whenever possible.
Here follows an integration test for the Hello World handler: package uk.co.jemos.experiments.test.integration; import javax.annotation.Resource; import javax.jms.TextMessage; import junit.framework.Assert; import org.junit.Before; import org.junit.Test; import org.springframework.jms.core.JmsTemplate; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.AbstractJUnit4SpringContextTests; import uk.co.jemos.experiments.HelloWorldHandler; import uk.co.jemos.experiments.HelloWorldMessageCreator; import com.mockrunner.jms.DestinationManager; import com.mockrunner.mock.jms.MockQueue; /** * @author mtedone * */ @ContextConfiguration(locations = { "classpath:jemosJmsTest-appContextImpl.xml" }) public class HelloWorldHandlerIntegrationTest extends AbstractJUnit4SpringContextTests { @Resource private JmsTemplate jmsTemplate; @Resource private DestinationManager mockDestinationManager; @Resource private HelloWorldHandler helloWorldHandler; @Before public void init() { Assert.assertNotNull(jmsTemplate); Assert.assertNotNull(mockDestinationManager); Assert.assertNotNull(helloWorldHandler); } @Test public void helloWorld() throws Exception { MockQueue mockQueue = mockDestinationManager.createQueue("jemos.tests"); jmsTemplate.send(mockQueue, new HelloWorldMessageCreator()); TextMessage message = (TextMessage) jmsTemplate.receive(mockQueue); Assert.assertNotNull("The text message cannot be null!", message.getText()); helloWorldHandler.handleHelloWorld(message.getText()); } } And here follows the output: 2011-07-31 17:17:26 XmlBeanDefinitionReader [INFO] Loading XML bean definitions from class path resource [jemosJmsTest-appContextImpl.xml] 2011-07-31 17:17:26 XmlBeanDefinitionReader [INFO] Loading XML bean definitions from class path resource [jemosJms-appContext.xml] 2011-07-31 17:17:26 GenericApplicationContext [INFO] Refreshing org.springframework.context.support.GenericApplicationContext@f01a1e: startup date [Sun Jul 31 
17:17:26 BST 2011]; root of context hierarchy 2011-07-31 17:17:27 DefaultListableBeanFactory [INFO] Pre-instantiating singletons in org.springframework.beans.factory.support. DefaultListableBeanFactory@39478a43: defining beans [helloWorldConsumer,jmsTemplate,org.springframework.jms.listener.DefaultMessageListener Container#0,destinationManager,configurationManager,jmsConnectionFactory,org.springframework.context.annotation.internalConfigurationAnnotation Processor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequired AnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor]; root of factory hierarchy 2011-07-31 17:17:27 DefaultLifecycleProcessor [INFO] Starting beans in phase 2147483647 2011-07-31 17:17:27 HelloWorldHandler [INFO] Received message: Hello World 2011-07-31 17:17:27 GenericApplicationContext [INFO] Closing org.springframework.context.support.GenericApplicationContext@f01a1e: startup date [Sun Jul 31 17:17:26 BST 2011]; root of context hierarchy 2011-07-31 17:17:27 DefaultLifecycleProcessor [INFO] Stopping beans in phase 2147483647 2011-07-31 17:17:32 DefaultMessageListenerContainer [WARN] Setup of JMS message listener invoker failed for destination 'jemos.tests' - trying to recover. Cause: Queue with name jemos.tests not found 2011-07-31 17:17:32 DefaultListableBeanFactory [INFO] Destroying singletons in org.springframework.beans.factory.support. DefaultListableBeanFactory@39478a43: defining beans [helloWorldConsumer,jmsTemplate,org.springframework.jms.listener.DefaultMessageListener Container#0,destinationManager,configurationManager,jmsConnectionFactory,org.springframework.context.annotation.internalConfigurationAnnotationProcessor ,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context. 
annotation.internalCommonAnnotationProcessor]; root of factory hierarchy

In this test, although we simulated a message roundtrip to a JMS queue, the message never left the current JVM and the whole execution did not depend on a JMS infrastructure being up. This gives us the power to simulate the JMS infrastructure and to test the integration of our business components without having to fear a red build from time to time due to the JMS infrastructure being down or inaccessible. Please note that in the output there are some warnings because the JMS listener container declared in jemosJms-appContext.xml does not find a queue named "jemos.tests" in the fake connection factory, but this is fine; it's a warning and does not impede the test from running successfully.

The Maven configuration

The Maven pom.xml to compile the example (project uk.co.jemos.experiments:jmx-experiments:0.0.1-SNAPSHOT, "Jemos JMS experiments") declares the following dependencies (groupId:artifactId:version, with scope):

- junit:junit:4.8.2 (test)
- com.mockrunner:mockrunner:0.3.1 (test)
- log4j:log4j:1.2.16 (compile)
- org.slf4j:slf4j-api:1.6.1 (compile)
- org.slf4j:slf4j-simple:1.6.1 (compile)
- org.apache.activemq:activemq-all:5.5.0 (compile)
- org.springframework:spring-beans:3.0.5.RELEASE
- org.springframework:spring-context:3.0.5.RELEASE
- org.springframework:spring-core:3.0.5.RELEASE
- org.springframework:spring-jms:3.0.5.RELEASE
- org.springframework:spring-test:3.0.5.RELEASE (test)

From http://tedone.typepad.com/blog/2011/07/mocking-spring-jms-with-mockrunner.html
August 5, 2011
by Marco Tedone
· 53,815 Views · 13 Likes
REST JSON to SOAP conversion tutorial
i often get asked about ‘rest to soap’ transformation use cases these days. using an soa gateway like securespan to perform this type of transformation at runtime is trivial to set up. with securespan in front of any existing web service (in the dmz for example), you can virtualize a rest version of this same service. using an example, here is a description of the steps to perform this conversion. imagine the geoloc web service for recording geographical locations. it has two methods, one for setting a location and one for getting a location. see below what this would look like in soap.

request: 34802398402
response: 52.37706 4.889721

request: 34802398402 52.37706 4.889721
response: ok

here is the equivalent rest target that i want to support at the edge. payloads could be xml, but let’s use json to make it more interesting.

get /position/34802398402

http 200 ok
content-type: text/json
{ 'latitude' : 52.37706, 'longitude' : 4.889721 }

post /position/34802398402
content-type: text/json
{ 'latitude' : 52.37706, 'longitude' : 4.889721 }

http 200 ok
ok

now let’s implement this rest version of the service using securespan. i’m assuming that you already have a securespan gateway deployed between the potential rest requesters and the existing soap web service. first, i will create a new service endpoint on the gateway for this service and assign anything that comes in at the uri pattern /position/* to this service. i will also allow the http verbs get and post for this service.

rest geoloc service properties

next, let’s isolate the resource id from the uri and save this as a context variable named ‘trackerid’. we can use a simple regex assertion to accomplish this. also, i will branch on the incoming http verb using an or statement. i am just focusing on get and post for this example but you could add additional logic for other http verbs that you want to support for this rest service.
regex for rest service resource identification

policy branching for get vs post

for get requests, the transformation is very simple: we just declare a message variable using a soap skeleton into which we refer to the trackerid variable.

soap request template

this soap message is routed to the existing web service and the essential elements are isolated using xpath assertions.

processing soap response

the rest response is then constructed back using a template response.

template json response

a similar logic is performed for the post message. see below for the full policy logic.

complete policy

you’re done virtualizing the rest service. setting this up with securespan took less than an hour, did not require any change to the existing soap web service and did not require the deployment of an additional component. from there, you would probably enrich the policy to perform some json schema validation, some url and query parameter validation, perhaps some authentication, authorization, etc.
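the policy assertions above are configured in the securespan gateway rather than written as code, but the two transformations they perform are easy to sketch in plain java (the class and method names below are illustrative, not gateway api):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RestToSoapSketch {
    // step 1: isolate the resource id ('trackerid') from a URI
    // such as /position/34802398402, as the regex assertion does.
    private static final Pattern URI = Pattern.compile("^/position/(\\d+)$");

    static String trackerId(String uri) {
        Matcher m = URI.matcher(uri);
        if (!m.matches())
            throw new IllegalArgumentException("unsupported uri: " + uri);
        return m.group(1);
    }

    // step 2: template the JSON response from the latitude/longitude values
    // that the real policy isolates out of the SOAP response with xpath.
    static String jsonResponse(double latitude, double longitude) {
        return String.format("{ \"latitude\" : %s, \"longitude\" : %s }",
                latitude, longitude);
    }

    public static void main(String[] args) {
        System.out.println(trackerId("/position/34802398402")); // 34802398402
        System.out.println(jsonResponse(52.37706, 4.889721));
    }
}
```

the gateway expresses exactly these steps declaratively, so no such code is deployed anywhere; the sketch only shows what the regex and template assertions compute.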
August 4, 2011
by Francois Lascelles
· 37,064 Views
JSR-299 CDI Decorators for Spring beans
This blog is about my new Spring-CDI modules effort. Its purpose is to make useful CDI patterns like decorators or interceptors available to a Spring application. I do believe that the explicit pattern implementation in CDI is very useful: it makes it obvious and simple for less experienced developers to use patterns. Therefore I decided to investigate how to make those patterns and the corresponding CDI annotations available for Spring managed beans. Here is the current status of my work. If you're interested and you have some time left, take a look or try out my early version of the Spring-CDI decorator module. The set-up is straightforward; you'll find all you need below. Please notice: the intention of my blog is to share and discuss ideas. If you use any of this in your applications, you're acting at your own risk.

JSR-299 decorator pattern implementation

Features

The decorator module provides the following features:

- Use JSR-299 @Decorator and @Delegate in Spring managed beans
- Support chains of multiple decorators for the same target delegate bean
- Allow decorators to be qualified, so that multiple implementations of the same interface can be decorated with different decorators
- Support scoped beans, allow scoped decorators
- Integrate with Spring AOP, both dynamic JDK proxies and CGLIB proxies
- Allow definition of custom decorator and delegate annotations

Download Link

The Spring-CDI decorator module is a usual Spring IoC-container extension delivered as a JAR archive. You can download the module JAR and put it on the classpath of your Spring application.

Compiled Spring-CDI decorator module JAR: download here
Sources: download here
API-Doc: view here

Everything is hosted in a git repository on Github.com.
Dependencies

- org.springframework:spring-context:${spring.version}
- org.springframework:org.springframework.aop:${spring.version}
- javax.enterprise:cdi-api:1.0
- cglib:cglib:2.2.2

Configuration

If the Spring-CDI decorator module JAR and its dependencies are on your classpath, all you need to do is:

(1) register DecoratorAwareBeanFactoryPostProcessor in your application context
(2) define an include-filter to include javax.decorator.Decorator as a component annotation in your context:component-scan tag

Use Case

The following code snippets show how you can use the decorator pattern once you have configured your Spring application as described above. For more complex scenarios see my unit test cases. Let's assume you have a business interface called MyService:

package com.mycompany.springapp.springcdi;

public interface MyService {
    String sayHello();
}

This is your implementation of the service:

package com.mycompany.springapp.springcdi;

import org.springframework.stereotype.Component;

@Component
public class MyServiceImpl implements MyService {
    public String sayHello() {
        return "Hello";
    }
}

You want to do some transaction and security work, but you do not want to mess up the business code with it. For security you'd write a decorator that points to the MyService business service:

package com.mycompany.springapp.springcdi;

import javax.decorator.Decorator;
import javax.decorator.Delegate;

@Decorator
public class MyServiceSecurityDecorator implements MyService {

    @Delegate
    private MyService delegate;

    public String sayHello() {
        // do some security stuff
        return delegate.sayHello();
    }
}

To separate the cross-cutting concerns you write another decorator, for transaction handling, that points to the MyService business service.
package com.mycompany.springapp.springcdi;

import javax.decorator.Decorator;
import javax.decorator.Delegate;

@Decorator
public class MyServiceTransactionDecorator implements MyService {

    @Delegate
    private MyService delegate;

    public String sayHello() {
        // do some transaction stuff
        return delegate.sayHello();
    }
}

Then you can just use the standard Spring @Autowired annotation to make that work. The injected bean will be decorated with your new security and transaction decorators.

package com.mycompany.springapp.springcdi;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@ContextConfiguration("/test-decorator-context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class DecoratorTestCase {

    // This injected bean will be a decorated MyServiceImpl
    @Autowired
    private MyService service;

    @Test
    public void testHelloWorld() {
        Assert.assertTrue(service.sayHello().equals("Hello"));
    }
}

How it works

The core is the DecoratorAwareBeanFactoryPostProcessor, which scans the registered bean definitions for existing decorators. It gathers meta data and stores that data in the DecoratorMetaDataBean. The DecoratorAwareBeanPostProcessor uses the meta data to wire the decorators into a chain and creates a CGLIB proxy that intercepts method calls to the target delegate bean. It redirects those calls to the decorator chain. The DecoratorAutowireCandidateResolver applies autowiring rules specific to the CDI decorator pattern. It also uses the meta data to do that.

The two modes

The DecoratorAwareBeanFactoryPostProcessor accepts two runtime modes. The 'processor' (default) mode uses the DecoratorAwareBeanPostProcessor and the DecoratorChainingStrategy to wire the decorator chain.
The 'resolver' mode uses the DecoratorAwareAutowireCandidateResolver to implement custom wiring logic based on the complex wiring rules implemented in ResolverCDIAutowiringRules. The 'resolver' mode was just another option for implementing such complex logic: I tried both options and both work, but the 'processor' alternative implements simpler logic, so it is my preferred mode at the moment.

Decorator Meta Data Model

The DecoratorAwareBeanFactoryPostProcessor scans bean definitions and stores meta data about the decorators and delegates in the application context. These are the model beans in their hierarchical access order:

DecoratorMetaDataBean.java: Top-level entry point to the meta data. Registered and available in the application context.
QualifiedDecoratorChain.java: A chain of decorators for the same target delegate bean.
DecoratorInfo.java: A decorator bean definition wrapper class.
DelegateField.java: Contains the delegate field of the decorator implementation.

Strategies

The Spring-CDI decorator module is easy for users to adapt through the use of the strategy pattern in many places. These are the strategies that allow users to change processing logic if required:

DecoratorChainingStrategy.java: Wires the decorators for a specific target delegate bean.
DecoratorOrderingStrategy.java: Orders the decorators for a specific target delegate bean.
DecoratorResolutionStrategy.java: Scans the bean factory for available decorator beans.
DelegateResolutionStrategy.java: Finds the delegate bean for a specific decorator bean.

Decorator Autowiring Rules

The 'processor' mode and the 'resolver' mode both use a custom AutowireCandidateResolver applied to the current bean factory. The class is called DecoratorAwareAutowireCandidateResolver and it is applied to the bean factory in the DecoratorAwareBeanFactoryPostProcessor. The custom resolver works with different rule sets.
In the 'processor' mode it works with a very simple rule set called BeanPostProcessorCDIAutowiringRules. In the 'resolver' mode it uses ResolverCDIAutowiringRules which is far more complex. If these rule sets are not sufficient for your autowiring logic, it's easy to apply additional rule sets by implementing a custom SpringCDIPlugin and adding it to the DecoratorAwareAutowireCandidateResolver. Spring-CDI Plugin System The Spring-CDI decorator module contains two infrastructure interfaces that allow the modularized approach of Spring-CDI project: SpringCDIPlugin and SpringCDIInfrastructure. When I implement additional modules - like the interceptor module - users can decide which modules to use and import into their projects. It's not required to add all Spring-CDI functionality if one only needs decorators. From http://niklasschlimm.blogspot.com/2011/08/jsr-299-cdi-decorators-for-spring-beans.html
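What the chaining strategy wires up automatically can be shown as a plain-Java sketch, with no Spring or CGLIB involved. Constructor injection stands in here for the module's @Delegate field injection, the nested class names are illustrative, and the chain order is chosen arbitrarily:

```java
public class DecoratorChainSketch {

    interface MyService { String sayHello(); }

    static class MyServiceImpl implements MyService {
        public String sayHello() { return "Hello"; }
    }

    // What @Decorator/@Delegate express declaratively: each decorator
    // wraps the next element of the chain and forwards the call.
    static class SecurityDecorator implements MyService {
        private final MyService delegate;
        SecurityDecorator(MyService delegate) { this.delegate = delegate; }
        public String sayHello() {
            // security checks would go here
            return delegate.sayHello();
        }
    }

    static class TransactionDecorator implements MyService {
        private final MyService delegate;
        TransactionDecorator(MyService delegate) { this.delegate = delegate; }
        public String sayHello() {
            // begin/commit would go here
            return delegate.sayHello();
        }
    }

    // The module builds this chain from the bean definitions and hides it
    // behind a CGLIB proxy; here it is wired by hand.
    static MyService wire() {
        return new SecurityDecorator(new TransactionDecorator(new MyServiceImpl()));
    }

    public static void main(String[] args) {
        System.out.println(wire().sayHello()); // Hello
    }
}
```

The value of the module is precisely that this wiring, its ordering, and the proxying are derived from the @Decorator/@Delegate annotations instead of being written out like this.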
August 4, 2011
by Niklas Schlimm
· 10,654 Views
Practical PHP Refactoring: Introduce Foreign Method
A new method is needed, as we are factoring out some lines of code. It would be nice to have it available on a class which already has all the fields needed to execute it, but unfortunately we cannot modify the original code (because it's part of a library, or it's bundled with PHP.) A classic example of Introduce Foreign Method is caused by a missing method on ArrayObject, or SplQueue, or other external classes. The first choice for placing a new method is the object the code works with, since it's cohesive with the rest of the data. The second best place is the client object, although the method could get duplicated in different clients. A Foreign Method is a method created in the client code to complete the functionality of a source object.

How it works

The source object becomes the first additional argument of the Foreign Method: Python and some other languages reflect this with the self first argument, which makes refactoring Foreign Methods into normal ones easier. For a PHP example, consider PHPUnit: I use Foreign Methods in my test cases to wrap multiple usages of $this->getMock(). I factor away the adaptation to the API (use of false arguments or MockBuilder) and leave as arguments the variability of the mocks, like the number of calls or the expected parameters. A Foreign Method is a work-around: if you have the possibility to change the source class, definitely go for that. Another alternative is to introduce a wrapper, when you can modify the lifecycle in order to inject a wrapper in the client code instead of the original object. It's not the case for PHPUnit (you extend the class), but it is for SplQueue and other collection objects; wrapping also has other benefits which will be shown in the next articles.

Steps

Create the method in the client class. Make the server object the first parameter of the method, if it cannot already be passed through $this (in case it's referenced by a field). Substitute the duplicated code with method calls.
Fowler also suggests commenting the method, calling it a Foreign Method and noting that it should be moved onto the server object when that becomes possible.

Example

Initially, we're using an ArrayObject and re-sorting it after each addition of an element:

$links->addUrl('twitter.com');
$links->add('plus.google.com', 'Google+');
$links->add('facebook.com', 'Facebook');
$expected = "Facebook\n"
          . "Google+\n"
          . "twitter.com";
$this->assertEquals($expected, $links->__toString());

class LinkGroup
{
    private $links;

    public function __construct()
    {
        $this->links = new ArrayObject();
    }

    public function add($url, $text)
    {
        $this->links[$url] = $text;
        $this->links->asort();
    }

    public function addUrl($url)
    {
        $this->links[$url] = $url;
        $this->links->asort();
    }

    public function __toString()
    {
        $links = array();
        foreach ($this->links as $url => $text) {
            $links[] = "$text";
        }
        return implode("\n", $links);
    }
}

We introduce a Foreign Method, newLink():

class LinkGroup
{
    private $links;

    public function __construct()
    {
        $this->links = new ArrayObject();
    }

    public function add($url, $text)
    {
        $this->newLink($url, $text);
    }

    public function addUrl($url)
    {
        $this->newLink($url, $url);
    }

    public function __toString()
    {
        $links = array();
        foreach ($this->links as $url => $text) {
            $links[] = "$text";
        }
        return implode("\n", $links);
    }

    private function newLink($url, $text)
    {
        $this->links[$url] = $text;
        $this->links->asort();
    }
}

We also add a comment noting that if more and more Foreign Methods pop up, a more radical solution would be needed:

/**
 * Foreign Method of the ArrayObject. Should be moved onto a newly extracted
 * collaborator which wraps the ArrayObject, or a heap-like data structure
 * should be used.
 */
private function newLink($url, $text)
{
    $this->links[$url] = $text;
    $this->links->asort();
}
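The pattern is not PHP-specific. A Java sketch of the same refactoring, using Fowler's classic example of a nextDay() method that java.util.Date lacks (the class name is illustrative): since the JDK class cannot be modified, the client hosts the method and takes the "server" object as its first parameter, exactly as the Steps section prescribes:

```java
import java.util.Calendar;
import java.util.Date;

public class ForeignMethodSketch {

    /**
     * Foreign Method of java.util.Date. Date has no nextDay() and we cannot
     * modify the JDK, so the client hosts the method; should be moved onto
     * Date if that were possible.
     */
    static Date nextDay(Date date) {
        Calendar c = Calendar.getInstance();
        c.setTime(date);
        c.add(Calendar.DAY_OF_MONTH, 1);
        return c.getTime();
    }

    public static void main(String[] args) {
        Date today = new Date();
        System.out.println(nextDay(today).after(today)); // true
    }
}
```

As with newLink() above, the comment documents that the method is foreign, so that the workaround is revisited if the server class ever becomes modifiable or a wrapper is extracted.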
August 3, 2011
by Giorgio Sironi
· 8,141 Views
Java Tools for Source Code Optimization and Analysis
Below is a list of some tools that can help you examine your Java source code for potential problems:

1. PMD from http://pmd.sourceforge.net/
License: PMD is licensed under a “BSD-style” license

PMD scans Java source code and looks for potential problems like:
* Possible bugs – empty try/catch/finally/switch statements
* Dead code – unused local variables, parameters and private methods
* Suboptimal code – wasteful String/StringBuffer usage
* Overcomplicated expressions – unnecessary if statements, for loops that could be while loops
* Duplicate code – copied/pasted code means copied/pasted bugs

You can download everything from here, and you can get an overview of all the rules at the rulesets index page. PMD is integrated with JDeveloper, Eclipse, JEdit, JBuilder, BlueJ, CodeGuide, NetBeans/Sun Java Studio Enterprise/Creator, IntelliJ IDEA, TextPad, Maven, Ant, Gel, JCreator, and Emacs.

2. FindBugs from http://findbugs.sourceforge.net
License: L-GPL

FindBugs is a program which uses static analysis to look for bugs in Java code. And since this is a project from my alma mater (IEEE – University of Maryland, College Park – Bill Pugh), I definitely have to add this contribution to the list.

3. Clover from http://www.cenqua.com/clover/
License: Free for Open Source (more like a GPL)

Clover measures statement, method, and branch coverage and has XML, HTML, and GUI reporting, as well as comprehensive plug-ins for major IDEs, to help you:
* Improve Test Quality
* Increase Testing Productivity
* Keep Team on Track

Fully integrated plugins for NetBeans, Eclipse, IntelliJ IDEA, JBuilder and JDeveloper allow you to measure and inspect coverage results without leaving the IDE. Seamless integration with projects using Apache Ant and Maven, and easy integration into legacy build systems with a command line interface and API. Fast, accurate, configurable, detailed coverage reporting of method, statement, and branch coverage.
Rich reporting in HTML, PDF, XML or a Swing GUI. Precise control over the coverage gathering with source-level filtering. Historical charting of code coverage and other metrics. Fully compatible with JUnit 3.x & 4.x, TestNG, JTiger and other testing frameworks. Can also be used with manual, functional or integration testing.

4. Macker from http://innig.net/macker/
License: GPL
Macker is a build-time architectural rule checking utility for Java developers. It’s meant to model the architectural ideals programmers always dream up for their projects, and then break — it helps keep code clean and consistent. You can tailor a rules file to suit a specific project’s structure, or write some general “good practice” rules for your code. Macker doesn’t try to shove anybody else’s rules down your throat; it’s flexible, and writing a rules file is part of the development process for each unique project.

5. EMMA from http://emma.sourceforge.net/
License: EMMA is distributed under the terms of the Common Public License v1.0 and is thus free for both open-source and commercial development.
Reports on class, method, basic block, and line coverage (text, HTML, and XML). EMMA can instrument classes for coverage either offline (before they are loaded) or on the fly (using an instrumenting application classloader). Supported coverage types: class, method, line, basic block. EMMA can detect when a single source code line is covered only partially. Coverage stats are aggregated at method, class, package, and “all classes” levels. Output report types: plain text, HTML, XML. All report types support drill-down, to a user-controlled detail depth. The HTML report supports source code linking. Output reports can highlight items with coverage levels below user-provided thresholds. Coverage data obtained in different instrumentation or test runs can be merged together.
EMMA does not require access to the source code and degrades gracefully with a decreasing amount of debug information available in the input classes. EMMA can instrument individual .class files or entire .jars (in place, if desired). Efficient coverage subset filtering is possible, too. Makefile and Ant build integration are supported on equal footing. EMMA is quite fast: the runtime overhead of added instrumentation is small (5-20%) and the bytecode instrumentor itself is very fast (mostly limited by file I/O speed). Memory overhead is a few hundred bytes per Java class. EMMA is 100% pure Java, has no external library dependencies, and works in any Java 2 JVM (even 1.2.x).

6. XRadar from http://xradar.sourceforge.net/
License: BSD (methinks)
XRadar is an open, extensible code report tool currently supporting all Java-based systems. The batch-processing framework produces HTML/SVG reports of the system's current state and its development over time – all presented in sexy tables and graphs. XRadar gives measurements on standard software metrics such as package metrics and dependencies, code size and complexity, code duplications, and coding and code-style violations.

7. Hammurapi from Hammurapi Group
License: (if anyone knows the license for this, email me: Venkatt.Guhesan at Y! dot com)
Hammurapi is a tool for automated inspection of Java program code. Following the example of the 282 rules of Hammurabi’s code, we are offered over 120 Java classes, the so-called inspectors, which can, at three levels (source code, packages, repository of Java files), state whether the analysed source code contains violations of commonly accepted coding standards.
Relevant links:
http://en.sdjournal.org/products/articleInfo/93
http://wiki.hammurapi.biz/index.php?title=Hammurapi_4_Quick_Start

8. Relief from http://www.workingfrog.org/
License: GPL
Relief is a design tool providing a new look on Java projects.
Relying on our ability to deal with real objects by examining their shape, size or relative place in space, it gives a “physical” view on Java packages, types and fields and their relationships, making them easier to handle.

9. Hudson from http://hudson-ci.org/
License: MIT
Hudson is a continuous integration (CI) tool written in Java, which runs in a servlet container such as Apache Tomcat or the GlassFish application server. It supports SCM tools including CVS, Subversion, Git and ClearCase, and can execute Apache Ant and Apache Maven based projects, as well as arbitrary shell scripts and Windows batch commands.

10. Cobertura from http://cobertura.sourceforge.net/
License: GNU GPL
Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.

11. Sonar from http://www.sonarsource.org/ (recommended by Vishwanath Krishnamurthi – thanks)
License: LGPL
Sonar is an open platform to manage code quality. As such, it covers the 7 axes of code quality: architecture & design, duplications, unit tests, complexity, potential bugs, coding rules, and comments.

From http://mythinkpond.wordpress.com/2011/07/14/java-tools-for-source-code-optimization-and-analysis/
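To make the PMD bullet points above concrete, here is a small hypothetical snippet (not taken from any of the projects listed; class and method names are mine) containing the kinds of problems PMD's default rulesets flag:

```java
public class PmdExamples {

    // PMD's "empty catch block" rule would flag this method:
    static int parseOrZero(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // empty catch block: the failure is silently swallowed
        }
        return 0;
    }

    // PMD's wasteful String/StringBuffer rules target code like this:
    static String greet(String name) {
        String s = new String("Hello, "); // unnecessary String constructor
        return s + name;
    }

    public static void main(String[] args) {
        System.out.println(parseOrZero("42") + " " + parseOrZero("oops"));
        System.out.println(greet("PMD"));
    }
}
```

The code runs fine, which is exactly the point: these are style and robustness problems a compiler will not catch, but a static analyzer will.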
July 29, 2011
by Venkatt Guhesan
· 64,213 Views
Java: What is the Limit to the Number of Threads You Can Create?
I have seen a number of tests where a JVM has 10K threads. However, what happens if you go beyond this? My recommendation is to consider having more servers once your total reaches 10K. You can get a decent server for $2K and a powerful one for $10K.

Creating threads gets slower

The time it takes to create a thread increases as you create more threads. For the 32-bit JVM, the stack size appears to limit the number of threads you can create. This may be due to the limited address space. In any case, the memory used by each thread's stack adds up. If you have a stack of 128KB and you have 20K threads, it will use 2.5 GB of virtual memory.

Bitness  Stack Size  Max threads
32-bit   64K         32,073
32-bit   128K        20,549
32-bit   256K        11,216
64-bit   64K         stack too small
64-bit   128K        32,072
64-bit   512K        32,072

Note: in the last case, the thread stacks total 16 GB of virtual memory.

Java 6 update 26 32-bit, -XX:ThreadStackSize=64
4,000 threads: Time to create 4,000 threads was 0.522 seconds
8,000 threads: Time to create 4,000 threads was 1.281 seconds
12,000 threads: Time to create 4,000 threads was 1.874 seconds
16,000 threads: Time to create 4,000 threads was 2.725 seconds
20,000 threads: Time to create 4,000 threads was 3.333 seconds
24,000 threads: Time to create 4,000 threads was 4.151 seconds
28,000 threads: Time to create 4,000 threads was 5.293 seconds
32,000 threads: Time to create 4,000 threads was 6.636 seconds
After creating 32,073 threads, java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:640)
 at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
 at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 32-bit, -XX:ThreadStackSize=128
4,000 threads: Time to create 4,000 threads was 0.525 seconds
8,000 threads: Time to create 4,000 threads was 1.239 seconds
12,000 threads: Time to create 4,000 threads was 1.902 seconds
16,000 threads: Time to create 4,000 threads was 2.529 seconds
20,000 threads: Time to create 4,000 threads was 3.165 seconds
After creating 20,549 threads, java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:640)
 at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
 at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 32-bit, -XX:ThreadStackSize=256
4,000 threads: Time to create 4,000 threads was 0.526 seconds
8,000 threads: Time to create 4,000 threads was 1.212 seconds
After creating 11,216 threads, java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:640)
 at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
 at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 64-bit, -XX:ThreadStackSize=128
4,000 threads: Time to create 4,000 threads was 0.577 seconds
8,000 threads: Time to create 4,000 threads was 1.292 seconds
12,000 threads: Time to create 4,000 threads was 1.995 seconds
16,000 threads: Time to create 4,000 threads was 2.653 seconds
20,000 threads: Time to create 4,000 threads was 3.456 seconds
24,000 threads: Time to create 4,000 threads was 4.663 seconds
28,000 threads: Time to create 4,000 threads was 5.818 seconds
32,000 threads: Time to create 4,000 threads was 6.792 seconds
After creating 32,072 threads, java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:640)
 at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
 at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

Java 6 update 26 64-bit, -XX:ThreadStackSize=512
4,000 threads: Time to create 4,000 threads was 0.577 seconds
8,000 threads: Time to create 4,000 threads was 1.292 seconds
12,000 threads: Time to create 4,000 threads was 1.995 seconds
16,000 threads: Time to create 4,000 threads was 2.653 seconds
20,000 threads: Time to create 4,000 threads was 3.456 seconds
24,000 threads: Time to create 4,000 threads was 4.663 seconds
28,000 threads: Time to create 4,000 threads was 5.818 seconds
32,000 threads: Time to create 4,000 threads was 6.792 seconds
After creating 32,072 threads, java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:640)
 at com.google.code.java.core.threads.MaxThreadsMain.addThread(MaxThreadsMain.java:46)
 at com.google.code.java.core.threads.MaxThreadsMain.main(MaxThreadsMain.java:16)

The Code

MaxThreadsMain.java

From http://vanillajava.blogspot.com/2011/07/java-what-is-limit-to-number-of-threads.html
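The original MaxThreadsMain source is linked rather than reproduced above. As a minimal sketch of the same measurement idea (class name and batch size here are illustrative, not the original code), the pattern is: start a batch of threads that block on a latch, time the creation, then release and join them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class ThreadCreationTimer {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        List<Thread> threads = new ArrayList<Thread>();
        int batch = 100; // the article's runs used batches of 4,000

        long start = System.nanoTime();
        for (int i = 0; i < batch; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    // keep the thread alive until the measurement is over
                    try { done.await(); } catch (InterruptedException ignored) { }
                }
            });
            t.start();
            threads.add(t);
        }
        long elapsedMillis = (System.nanoTime() - start) / 1000000;

        System.out.println("Created " + threads.size() + " threads");
        System.out.println("Elapsed: " + elapsedMillis + " ms");

        // release and clean up all threads
        done.countDown();
        for (Thread t : threads) t.join();
    }
}
```

Run with different -Xss or -XX:ThreadStackSize values and increasing batch counts to reproduce the slowdown and the eventual OutOfMemoryError described above.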
July 26, 2011
by Peter Lawrey
· 147,894 Views · 1 Like
Nothing is Private: Python Closures (and ctypes)
As I'm sure you know, Python doesn't have a concept of private members. One trick that is sometimes used is to hide an object inside a Python closure, and provide a proxy object that only permits limited access to the original object. Here's a simple example of a hide function that takes an object and returns a proxy. The proxy allows you to access any attribute of the original, but not to set or change any attributes.

def hide(obj):
    class Proxy(object):
        __slots__ = ()
        def __getattr__(self, name):
            return getattr(obj, name)
    return Proxy()

Here it is in action:

>>> class Foo(object):
...     def __init__(self, a, b):
...         self.a = a
...         self.b = b
...
>>> f = Foo(1, 2)
>>> p = hide(f)
>>> p.a, p.b
(1, 2)
>>> p.a = 3
Traceback (most recent call last):
 ...
AttributeError: 'Proxy' object has no attribute 'a'

After the hide function has returned the proxy object, the __getattr__ method is able to access the original object through the closure. This is stored on the __getattr__ method as the func_closure attribute (Python 2) or the __closure__ attribute (Python 3). This is a "cell object" and you can access the contents of the cell using the cell_contents attribute:

>>> cell_obj = p.__getattr__.func_closure[0]
>>> cell_obj.cell_contents
<__main__.Foo object at 0x...>

This makes hide useless for actually preventing access to the original object. Anyone who wants access to it can just fish it out of cell_contents. What we can't do from pure Python is *set* the contents of the cell, but nothing is really private in Python - or at least not in CPython. There are two Python C API functions, PyCell_Get and PyCell_Set, that provide access to the contents of closures.
From ctypes we can call these functions and both introspect and modify values inside the cell object:

>>> import ctypes
>>> ctypes.pythonapi.PyCell_Get.restype = ctypes.py_object
>>> py_obj = ctypes.py_object(cell_obj)
>>> f2 = ctypes.pythonapi.PyCell_Get(py_obj)
>>> f2 is f
True
>>> new_py_obj = ctypes.py_object(Foo(5, 6))
>>> ctypes.pythonapi.PyCell_Set(py_obj, new_py_obj)
0
>>> p.a, p.b
(5, 6)

As you can see, after the call to PyCell_Set the proxy object is using the new object we put in the closure instead of the original. Using ctypes may seem like cheating, but it would only take a trivial amount of C code to do the same. Two notes about this code:

It isn't (of course) portable across different Python implementations. Don't ever do this, it's for illustration purposes only!

Still, it's an interesting poke around the CPython internals with ctypes. Interestingly, I have heard of one potential use case for code like this. It is alleged that at some point Armin Ronacher was using a similar technique in Jinja2 for improving tracebacks. (Tracebacks from templating languages can be very tricky because the compiled Python code usually bears a quite distant relationship to the original text-based template.) Just because Armin does it doesn't mean you can, though...
July 25, 2011
by Michael Foord
· 8,023 Views
Five reasons why you should rejoice about Kotlin
As you probably saw by now, JetBrains just announced that they are working on a brand new statically typed JVM language called Kotlin. I am planning to write a post to evaluate how Kotlin compares to the other existing languages, but first, I’d like to take a slightly different angle and try to answer a question I have already seen asked several times: what’s the point? We already have quite a few JVM languages, do we need any more? Here are a few reasons that come to mind. 1) Coolness New languages are exciting! They really are. I’m always looking forward to learning new languages. The more foreign they are, the more curious I am, but the languages I really look forward to discovering are the ones that are close to what I already know but not identical, just to find out what they did differently that I didn’t think of. Allow me to make a small digression in order to clarify my point. Some time ago, I started learning Japanese and it turned out to be the hardest and, more importantly, the most foreign natural language I ever studied. Everything is different from what I’m used to in Japanese. It’s not just that the grammar, the syntax and the alphabets are odd, it’s that they came up with things that I didn’t even think would make any sense. For example, in English (and many other languages), numbers are pretty straightforward and unique: one bag, two cars, three tickets, etc… Now, did it ever occur to you that a language could allow several words to mean “one”, and “two”, and “three”, etc…? And that these words are actually not arbitrary, their usage follows some very specific rules. What could these rules be? Well, in Japanese, what governs the word that you pick is… the shape of the object that you are counting. That’s right, you will use a different way to count if the object is long, flat, a liquid or a building. Mind-boggling, isn’t it? Here is another quick example: in Russian, each verb exists in two different forms, which I’ll call A and B to simplify. 
Russian doesn’t have a future tense, so when you want to speak in the present tense, you’ll conjugate verb A in the present, and when you want the future, you will use the B form… in the present tense. It’s not just that you need to learn two verbs per verb, you also need to know which one is which if you want to get your tenses right. Oh, and these forms also have different meanings when you conjugate them in past tenses. End of digression. The reason why I am mentioning this is because this kind of construct bends your mind, and this goes for natural languages as much as programming languages. It’s tremendously exciting to read new syntaxes this way. For that reason alone, the arrival of new languages should be applauded and welcomed. Kotlin comes with a few interesting syntactic innovations of its own, which I’ll try to cover in a separate post, but for now, I’d like to come back to my original point, which was to give you reasons why you should be excited about Kotlin, so let’s keep going down the list. 2) IDE support None of the existing JVM languages (Groovy, Scala, Fantom, Gosu, Ceylon) have really focused much on the tooling aspect. IDE plug-ins exist for each of them, all with varying quality, but they are all an afterthought, and they suffer from this oversight. The plug-ins are very slow to mature, they have to keep up with the internals of a compiler that’s always evolving and which, very often, doesn’t have much regard for the tools built on top of it. It’s a painful and frustrating process for tool creators and tool users alike. With Kotlin, we have good reasons to think that the IDE support will be top notch. JetBrains is basically announcing that they are building the compiler and the IDEA support in lockstep, which is a great way to guarantee that the language will work tremendously well inside IDEA, but also that other tools should integrate nicely with it as well (I’m rooting for a speedy Eclipse plug-in, obviously).
3) Reified generics This is a pretty big deal. Not so much for the functionality (I’ll get back to this in the next paragraph) but because this is probably the very first time that we see a JVM language with true support for reified generics. This innovation needs to be saluted. Correction from the comments: Gosu has reified generics Having said that, I don’t feel extremely excited by this feature because overall, I think that reified generics come at too high a price. I’ll try to dedicate a full blog post to this topic alone, because it deserves a more thorough treatment. 4) Commercial support JetBrains has a very clear financial interest in seeing Kotlin succeed. It’s not just that they are a commercial entity that can put money behind the development of the language, it’s also that the success of the language would most likely mean that they will sell more IDEA licenses, and this can also turn into an additional revenue stream derived from whatever other tools they might come up with that would be part of the Kotlin ecosystem. This kind of commercial support for a language was completely unheard of in the JVM world for fifteen years, and suddenly, we have two instances of it (Typesafe and now JetBrains). This is a good sign for the JVM community. 5) Still no Java successors Finally, the simple truth is that we still haven’t found any credible Java successor. Java still reigns supreme and is showing no sign of giving away any mindshare. Pulling numbers out of thin air, I would say that out of all the code currently running on the JVM today, maybe 94% of it is in Java, 3% is in Groovy and 1% is in Scala. This 94% figure needs to go down, but so far, no language has stepped up to whittle it down significantly. What will it take? Obviously, nothing that the current candidates are offering (closures, modularity, functional features, more concise syntax, etc…) has been enough to move that needle. Something is still missing. 
Could the missing piece be “stellar IDE support” or “reified generics”? We don’t know yet because no contender offers any of these features, but Kotlin will, so we will soon know. Either way, I am predicting that we will keep seeing new JVM languages pop up at a regular pace until one finally claims the prize. And this should be cause for rejoicing for everyone interested in the JVM ecosystem. So let’s cheer for Kotlin and wish JetBrains the best of luck with their endeavor. I can’t wait to see what will come out of this. Oh, and secretly, I am rooting for Eclipse to start working on their own JVM language too, obviously. From http://beust.com/weblog/2011/07/20/five-reasons-why-should-rejoice-about-kotlin/
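For readers wondering what the "reified generics" point above actually changes: on today's JVM, generic type arguments are erased at runtime, so two differently-parameterized lists share a single runtime class. A few lines of Java demonstrate the limitation that reification would remove:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        // With erased generics, both lists have the same runtime class,
        // so the type arguments cannot be recovered at runtime.
        System.out.println(strings.getClass() == ints.getClass());
    }
}
```

This prints true on a standard JVM; a language with reified generics can distinguish the two types at runtime (for example in instanceof checks), which erasure makes impossible.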
July 22, 2011
by Cedric Beust
· 8,262 Views
ExtJS 4 File Upload + Spring MVC 3 Example
This tutorial will walk you through how to use the Ext JS 4 file upload field in the front end and Spring MVC 3 in the back end. This tutorial is also an update for the tutorial Ajax File Upload with Ext JS and Spring Framework, implemented with Ext JS 3 and Spring MVC 2.5.

Ext JS file upload form

First, we will need the Ext JS 4 file upload form. This one is the same as shown in the Ext JS 4 docs.

Ext.onReady(function(){
    Ext.create('Ext.form.Panel', {
        title: 'File Uploader',
        width: 400,
        bodyPadding: 10,
        frame: true,
        renderTo: 'fi-form',
        items: [{
            xtype: 'filefield',
            name: 'file',
            fieldLabel: 'File',
            labelWidth: 50,
            msgTarget: 'side',
            allowBlank: false,
            anchor: '100%',
            buttonText: 'Select a File...'
        }],
        buttons: [{
            text: 'Upload',
            handler: function() {
                var form = this.up('form').getForm();
                if(form.isValid()){
                    form.submit({
                        url: 'upload.action',
                        waitMsg: 'Uploading your file...',
                        success: function(fp, o) {
                            Ext.Msg.alert('Success', 'Your file has been uploaded.');
                        }
                    });
                }
            }
        }]
    });
});

HTML page

Then in the HTML page, we will have a div where we are going to render the Ext JS form. This page also contains the required JavaScript and CSS imports (such as extjs/resources/css/ext-all.css). Click on the Browse button (image) to select a file and click on the Upload button.

FileUpload bean

We will also need a FileUploadBean to represent the file as a multipart file:

package com.loiane.model;

import org.springframework.web.multipart.commons.CommonsMultipartFile;

/**
 * Represents a file uploaded from the Ext JS form
 *
 * @author Loiane Groner
 * http://loiane.com
 * http://loianegroner.com
 */
public class FileUploadBean {

    private CommonsMultipartFile file;

    public CommonsMultipartFile getFile() {
        return file;
    }

    public void setFile(CommonsMultipartFile file) {
        this.file = file;
    }
}

File upload controller

Then we will need a controller. This one is implemented with Spring MVC 3.
package com.loiane.controller;

import org.springframework.stereotype.Controller;
import org.springframework.validation.BindingResult;
import org.springframework.validation.ObjectError;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import com.loiane.model.ExtJSFormResult;
import com.loiane.model.FileUploadBean;

/**
 * Controller - Spring
 *
 * @author Loiane Groner
 * http://loiane.com
 * http://loianegroner.com
 */
@Controller
@RequestMapping(value = "/upload.action")
public class FileUploadController {

    @RequestMapping(method = RequestMethod.POST)
    public @ResponseBody String create(FileUploadBean uploadItem, BindingResult result){

        ExtJSFormResult extjsFormResult = new ExtJSFormResult();

        if (result.hasErrors()){
            for(ObjectError error : result.getAllErrors()){
                System.err.println("Error: " + error.getCode() + " - " + error.getDefaultMessage());
            }
            // set Ext JS return - error
            extjsFormResult.setSuccess(false);
            return extjsFormResult.toString();
        }

        // some type of file processing...
        System.err.println("-------------------------------------------");
        System.err.println("Test upload: " + uploadItem.getFile().getOriginalFilename());
        System.err.println("-------------------------------------------");

        // set Ext JS return - success
        extjsFormResult.setSuccess(true);
        return extjsFormResult.toString();
    }
}

Ext JS form return

Some people asked me how to return something to the form to display a message to the user. We can implement a POJO with a success property.
The success property is the only thing Ext JS needs as a return:

package com.loiane.model;

/**
 * A simple return message for Ext JS
 *
 * @author Loiane Groner
 * http://loiane.com
 * http://loianegroner.com
 */
public class ExtJSFormResult {

    private boolean success;

    public boolean isSuccess() {
        return success;
    }

    public void setSuccess(boolean success) {
        this.success = success;
    }

    public String toString(){
        return "{success:" + this.success + "}";
    }
}

Spring config

Don't forget to add the multipart file config in the Spring config file:

NullPointerException

I also got some questions about NullPointerException. Make sure the file upload field name has the same name as the CommonsMultipartFile property in the FileUploadBean class.

Ext JS:

{
    xtype: 'filefield',
    name: 'file',
    fieldLabel: 'File',
    labelWidth: 50,
    msgTarget: 'side',
    allowBlank: false,
    anchor: '100%',
    buttonText: 'Select a File...'
}

Java:

public class FileUploadBean {
    private CommonsMultipartFile file;
}

These properties always have to match! You can still use the Spring MVC 2.5 code with the Ext JS 4 code presented in this tutorial.

Download

You can download the source code from my GitHub repository (you can clone the project or you can click on the download button on the upper right corner of the project page): https://github.com/loiane/extjs4-file-upload-spring

You can also download the source code from the Google Code repository: http://code.google.com/p/extjs4-file-upload-spring/

Both repositories have the same source. Google Code is just an alternative. Happy coding!

From http://loianegroner.com/2011/07/extjs-4-file-upload-spring-mvc-3-example/
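The multipart configuration snippet mentioned in the "Spring config" section above did not survive extraction from the original post. The usual Spring 3 XML declaration for it looks something like the following (a sketch, not necessarily the author's exact file; the maxUploadSize value is illustrative):

```xml
<!-- Enables Commons FileUpload support for multipart requests;
     the bean id "multipartResolver" is the name Spring MVC looks up. -->
<bean id="multipartResolver"
      class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- optional: cap uploads, in bytes (here ~10 MB) -->
    <property name="maxUploadSize" value="10485760"/>
</bean>
```

This requires commons-fileupload (and commons-io) on the classpath, which is also what makes CommonsMultipartFile available to the bean shown above.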
July 21, 2011
by Loiane Groner
· 71,025 Views
Practical PHP Refactoring: Extract Class
Sometimes there is too much logic to deal with in a single class. You tried extracting methods, but there are so many of them that the design is still complex to understand. The next step in the refactoring quest is Extract Class: the creation of a new class whose objects will be referenced from the original class. Fields and methods may be moved into the new class, so that the original gets smaller and more manageable. Why does class inflation happen? This refactoring is always needed because classes grow in responsibilities. My personal hypothesis is that we as developers have a bias for the add field|method operations, preferring them to add class. Usually, creating a new class also means adding an entire new file (hopefully) and more design considerations like its namespace and name. The mental cost for the developer is heavier, but the results are often better than for smaller extractions, as classes can be reused independently; extracted methods are instead clustered together. Moreover, our libraries (e.g. ORMs such as Doctrine 2) reinforce this bias by making it significantly more difficult to extract a Value Object (should you serialize it? write a custom DBAL type?) or even another Entity (should I link to it with a @OneToOne? A @OneToMany? Which cascade options will work? Which constraints does the relationship have?) By the way, this solution is a manifestation of composition over inheritance in the refactoring realm (while there are also other options where the class is made smaller by introducing superclasses instead of an unrelated type.) What are the signs it's time to extract a class? You may encounter a subset of methods and fields that cluster together: for example, they are identified by a prefix; or they have a temporal coupling which makes them change together faster or slower than the other fields. They may be of similar types (scalar or superclass).
Another option to target higher cohesion is to simply see which fields are used together by each method in the class. Fowler's suggestion is to try removing each field (conceptually), and think about which other fields become useless. Repeat this and you'll find the subsets of fields to extract, if they exist.

Steps

1. Divide responsibilities between the source class and the extracted class: fields and methods should be assigned to one of the two targets. This is true for public and private members, but the latter may change scope to public in order to be visible from the source class.
2. Create a new class, and check the names of the source class and the new one. In the extracted class, you decide the name on the fly; in the source class, you have to change the old name if it's no longer applicable (and later, also the name of references in the system to its objects). It may be the case that the extracted class steals the name of the source one.
3. Place a field in the source class referencing an object (or more) of the extracted class. The field may be initialized in the constructor, and also injected with a constructor parameter if the change is not too invasive.
4. Move Field iteratively from the source to the extracted class.
5. Move Method iteratively. If you bundle steps 4 and 5, you'll be faster, but the point is you should be able to go in smaller steps when necessary. Likewise, TDD is taught with baby steps because it gives you the ability to make them when required: everyone is capable of cutting a giant piece of code and fiddling with it for hours until it works again. But that involves a rewriting phase, not only refactoring.
6. Reduce the interfaces that each class exposes. Often the extracted class only needs public methods for what the original class uses, while the original class maintains its old protocol to avoid ripple effects towards the rest of the object graph. In fact, it's not even a given that you should expose the extracted class.
In many cases, you don't have to; but you'll be able to use it independently by creating other objects, while there is often no need to expose this particular object, composed by the source class (and the Law of Demeter says you shouldn't.) Tests should be executable after each movement of fields or methods.

Example

In the example, we pass from an initial state where formatting and HTML logic is crammed into the same class (the test asserts on the generated markup):

$this->assertEquals('<span>10,000.00</span>', $moneyAmount->toHtml());

class MoneyAmount
{
    /**
     * @param int $amount
     */
    public function __construct($amount)
    {
        $this->amount = $amount;
    }

    public function toHtml()
    {
        $amount = $this->amount;
        $formatted = '';
        while (strlen($amount) > 3) {
            $cut = strlen($amount) % 3;
            $cut = $cut == 0 ? 3 : $cut;
            $formatted .= substr($amount, 0, $cut) . ',';
            $amount = substr($amount, $cut);
        }
        $formatted .= $amount . '.00';
        $html = "<span>$formatted</span>";
        return $html;
    }
}

To two separated classes, one modelling the logical amount and its formatting, one taking care of printing HTML tags:

$this->assertEquals('<span>10,000.00</span>', $moneyAmount->toHtml());

class MoneySpan
{
    /**
     * @param int $amount
     */
    public function __construct(MoneyAmount $amount)
    {
        $this->amount = $amount;
    }

    public function toHtml()
    {
        $html = '<span>' . $this->amount->format() . '</span>';
        return $html;
    }
}

class MoneyAmount
{
    private $amount;

    public function __construct($amount)
    {
        $this->amount = $amount;
    }

    public function format()
    {
        $amount = $this->amount;
        $formatted = '';
        while (strlen($amount) > 3) {
            $cut = strlen($amount) % 3;
            $cut = $cut == 0 ? 3 : $cut;
            $formatted .= substr($amount, 0, $cut) . ',';
            $amount = substr($amount, $cut);
        }
        return $formatted . $amount . '.00';
    }
}

You can see the four intermediate steps in the GitHub history of the file.
July 20, 2011
by Giorgio Sironi
· 8,697 Views
Testing Entity Validations with a Mock Entity - Roo in Action Corner
In Spring Roo in Action, Chapter 3, I discuss how Roo automatically executes the Bean Validators when persisting a live entity. However, when running unit tests, we don't have a live entity at all, nor do we have a Spring container - so how can we exercise the validation without actually hitting our Roo application and the database? The following post is ancillary material from the upcoming book Spring Roo in Action, by Ken Rimple and Srini Penchikala, with Gordon Dickens. You can purchase the MEAP edition of the book, and participate in the author forum, at www.manning.com/rimple. The answer is that we have to bootstrap the validation framework within the test ourselves. We can use the CourseDataOnDemand class's getNewTransientCourse method to generate a valid, transient JPA entity. Then, we can:
* Mock static entity methods, such as findById, to bring back pre-fabricated instances of your entity
* Initialize the validation engine, bootstrapping a JSR-303 bean validation framework engine, and perform validation on your entity
* Set any appropriate properties to apply to a particular test condition
* Initialize a test instance of the entity validator and assert that the appropriate validation results are returned

The concept in action...

Given a Student entity with the following definition:

@RooEntity
@RooJavaBean
@RooToString
public class Student {

    @NotNull
    private String emergencyContactInfo;
    ...
} The listing below shows a unit test method that ensures the NotNull validation fires against missing emergency contact information on the Student entity: @Test public void testStudentMissingEmergencyContactValidation() { // setup our test data StudentDataOnDemand dod = new StudentDataOnDemand(); // tell the mock to expect this call Student.findStudent(1L); // tell the mocking API to expect a return from the prior call in the form of // a new student from the test data generator, dod AnnotationDrivenStaticEntityMockingControl.expectReturn( dod.getNewTransientStudent(0)); // put our mock in playback mode AnnotationDrivenStaticEntityMockingControl.playback(); // Setup the validator API in our unit test LocalValidatorFactoryBean validator = new LocalValidatorFactoryBean(); validator.afterPropertiesSet(); // execute the call from the mock, set the emergency contact field // to an invalid value Student student = Student.findStudent(1L); student.setEmergencyContactInfo(null); // execute validation, check for violations Set> violations = validator.validate(student, Default.class); // do we have one? Assert.assertEquals(1, violations.size()); // now, check the constraint violations to check for our specific error ConstraintViolation violation = violations.iterator().next(); // contains the right message? Assert.assertEquals("{javax.validation.constraints.NotNull.message}", violation.getMessageTemplate()); // from the right field? Assert.assertEquals("emergencyContactInfo", violation.getPropertyPath().toString()); } Analysis The test starts with a declaration of a StudentOnDemand object, which we'll use to generate our test data. We'll get into the more advanced uses of the DataOnDemand Framework later in the chapter. For now, keep in mind that we can use this class to create an instance of an Entity, with randomly assigned, valid data. We then require that the test calls the Student.findStudent method, passing it a key of 1L. 
Next, we'll tell the entity mocking framework that the call should return a new transient Student instance. At this point, we've defined our static mocking behavior, so we'll put the mocking framework into playback mode.

Next, we issue the actual Student.findStudent(1L) call, this time storing the result in the local variable student. This call will trip the mock, which will return a new transient instance. We then set the emergencyContactInfo field to null so that it becomes invalid, as it is annotated with @NotNull.

Now we are ready to set up our Bean Validation framework. We create a LocalValidatorFactoryBean instance, which boots the Bean Validation framework in the afterPropertiesSet() method, defined for any Spring bean implementing InitializingBean. We must call this method ourselves, because Spring is not involved in our unit test.

Now we're ready to run our validation and assert that the proper behavior has occurred. We call our validator's validate method, passing it the student instance and the standard Default validation group, which will trigger validation. We then check that we have only one validation failure, and that the message template for the error is the same as the one for the @NotNull validation. We also check that the field that caused the violation was our emergencyContactInfo field.

Because we bootstrap the Bean Validation framework ourselves and execute the validate method against our entity directly, we can exercise the bean instance any way we want; instead of persisting the entity, we perform the validation phase and exit gracefully.

Caveats...

There are a few things slightly wrong here. First of all, the DataOnDemand classes actually use Spring to inject relationships to each other, which I've logged a bug against as ROO-2497. You can override the setup of the DataOnDemand class and manually create the DoD of the referring one, which is fine.
The fix is slated for Roo 1.2, so it should land sometime in the next few months. Also, realize that this is NOT easy to do compared to writing an integration test; however, this test runs markedly faster. If you have some sophisticated logic attached to an @AssertTrue annotation, this is the way to test it in isolation.

From http://www.rimple.com/tech/2011/7/17/testing-entity-validations-with-a-mock-entity-roo-in-action.html
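To build intuition for what the bootstrapped validator is doing, here is a self-contained toy sketch in plain Java. It is not the real JSR-303 engine or Roo's mocking framework (the @NotNull annotation and MiniValidator class below are hand-rolled stand-ins): it simply reflects over a bean's fields and reports every @NotNull field left null, which is the essence of the check the test above exercises.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hand-rolled stand-in for javax.validation.constraints.NotNull.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface NotNull {}

// Minimal bean mirroring the article's Student entity.
class Student {
    @NotNull
    String emergencyContactInfo;
}

public class MiniValidator {

    // Return the names of all @NotNull fields that are currently null.
    static List<String> validate(Object bean) {
        List<String> violations = new ArrayList<>();
        for (Field field : bean.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(NotNull.class)) {
                field.setAccessible(true);
                try {
                    if (field.get(bean) == null) {
                        violations.add(field.getName());
                    }
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Student student = new Student();
        student.emergencyContactInfo = null; // invalid, as in the test above
        System.out.println(validate(student)); // [emergencyContactInfo]
    }
}
```

A real Bean Validation engine layers constraint metadata, message interpolation, and validation groups on top of exactly this kind of reflective scan.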
July 20, 2011
by Ken Rimple
· 14,180 Views
Software for Gear Design and Manufacturing Simulation on the NetBeans Platform
The "WZL Gear Toolbox", by the Laboratory for Machine Tools and Production Engineering (WZL) at RWTH Aachen University, is a unified graphical user interface containing different simulation programs for gear applications. It supports the following simulations:

  • manufacturing simulation "GearGenerator"
  • process simulation "SPARTApro"
  • process simulation "KegelSpan"
  • tooth contact analysis "ZaKo3D"

The uniform graphical user interface lets the user run the simulations and analyze the calculation results conveniently. The "WZL Gear Toolbox" thus enables the analysis of the running behavior of gears by means of a tooth contact analysis of the manufacturing-related deviations from generating grinding. The long-term goal of the uniform graphical user interface is to provide a software tool covering the whole production chain of gear manufacturing. The "WZL Gear Toolbox" is funded by the WZL Gear Research Circle.

Screenshot
July 19, 2011
by Jens Hofschröer
· 24,370 Views