The Latest Coding Topics

jPlay on the NetBeans Platform
My name is Carlos Hoces. I've never been professionally involved in software development, though it has been a personal passion since my college days. I'm a telecommunications engineer, specialized in electronics and automation equipment, and have spent most of my 25-year career in control systems maintenance for companies like Westinghouse and Philips Medical Systems. My long-term relationship with computers has had more to do with embedded proprietary systems than with software itself. I'm currently unemployed and having extreme difficulty finding a new job due to my age, which is 54. I live in Asturias, in the north of Spain.

About jPlay

jPlay is an open source desktop application for managing and playing music. (Another article about it can be read here.) It currently represents almost two years of development effort. I had been interested in the Java language since it first appeared, but never got very involved hands-on until recently. I have a long background in Assembler and Pascal programming, so once I became unemployed, about four years ago, I saw an opportunity to "spend" enough time bringing myself up to date in the Java world. Those were the early stages of jPlay's development: a project useful for improving my programming skills, this time in Java, as well as being useful in its own right as both an application and a tool set for further development.

Another developer, Salvatore Ciaramella, joined a little over a year ago and has been a source of excellent code and ideas ever since. I must say jPlay is what it is now thanks to his efforts too. This is now a "two players" project.

Enter the NetBeans Platform

Three months ago, Salvatore Ciaramella, my development colleague and project co-administrator, proposed moving the application from the Swing Application Framework (SAF) to the NetBeans Platform.

There were very good reasons to do so: SAF was no longer under development, and our own SAF forge (which is also in the repository) didn't do much more than an overall polishing. There were issues like application update and plug-in support, which could have taken a great amount of development time to implement. Deployment was another main concern: we wanted to make life easier both at installation and at startup. The NetBeans Platform, among other benefits, solves these issues out of the box.

Unique Look and Feel

We use the JTattoo library (http://www.jtattoo.net/index.html) for the application LaF, which gets initialized via the main ModuleInstaller class. This gives the user a great degree of control over the LaF, selectable from the application's main menu under the Aspect menu. You may also notice we hide the tab of our main TopComponent, following this tip. The remaining components you see are plain Swing ones, usually extended to give them some extra functionality. It's really all a visual trick! Some more screenshots are shown below:
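The JTattoo initialization code itself isn't shown in the article, but the general pattern of installing a Swing look and feel by class name at startup can be sketched with nothing but the JDK. This is a minimal, assumed sketch, not jPlay's actual module installer; a JTattoo deployment would pass one of JTattoo's LaF class names instead of the cross-platform default used here.

```java
import javax.swing.UIManager;

public class LafDemo {
    // Install a look and feel by class name, roughly as a module installer
    // might at startup. A JTattoo class name (e.g. something under
    // com.jtattoo.plaf) would go here in jPlay's case; this sketch falls
    // back to whatever LaF was active if the class can't be installed.
    public static String installLaf(String className) {
        try {
            UIManager.setLookAndFeel(className);
        } catch (Exception e) {
            // Unknown or unloadable LaF class: keep the current one.
        }
        return UIManager.getLookAndFeel().getName();
    }

    public static void main(String[] args) {
        // The JDK's cross-platform LaF is always available.
        System.out.println(installLaf(UIManager.getCrossPlatformLookAndFeelClassName()));
    }
}
```

Because the LaF is just a class name, letting the user pick one from a menu (as jPlay's Aspect menu does) reduces to calling such a method and then refreshing the open windows.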
October 31, 2010
by Carlos Hoces
· 12,433 Views
Generate, Rename and Delete Getters/Setters Instantly in Eclipse
Despite the arguments and debates about getters and setters in Java, the fact is that they're a reality and you have to work with them. But managing getters and setters is a time-consuming effort. Creating a getter/setter for 5 fields in a class can take minutes, renaming one is error-prone, and deleting one is just plain inconvenient. There are options like Project Lombok (which implicitly creates getters/setters without the need to code them), and you could avoid getters/setters altogether by redesigning your classes. But these options aren't always available, so it's a good thing Eclipse has some handy features for managing getters and setters. Combined with the ability to generate constructors based on fields, you can get the boilerplate code out of the way in seconds and get on with the real coding.

Generate getters and setters

To generate getters and setters, do the following:

1. Create the fields you want in the class, then press Alt+Shift+S, R. A dialog will pop up allowing you to choose the fields you want to generate getters and setters for.
2. Click Select All to create getters/setters for all fields. Of course you can choose individual fields as required.
3. Change Insertion point to Last member. This tells Eclipse that you want to put the methods at the bottom of the class. This is normally the best option for me, as I want them out of the way.
4. Click OK. Eclipse will create the getters and setters for you.

Here's an example of what the dialog should look like.

Note: By default, Eclipse doesn't allow you to create a setter for a final field – the setter just doesn't appear in the dialog. This can be a nuisance, especially if you've enabled autoformatting to make fields final where possible. To bypass this restriction, enable the checkbox Allow setters for final fields on the dialog. The setter for the field will now appear in the dialog. Once you click OK, Eclipse will remove the final keyword from the field and generate the setter. Eclipse also remembers this setting.

Another way to add just a single getter/setter is to position your cursor anywhere in the class (outside any method), start typing either "get" or "set" and press Ctrl+Space. The options on the autocomplete menu will include any getters/setters of fields that don't have any defined yet. This is a quick way to create a single getter/setter but isn't geared for bulk creation. Here's an example of how the autocomplete looks:

Rename getters and setters

The easiest way to rename getters/setters is to use the Rename refactoring. Place your cursor on the field name (anywhere in the class, not just the declaration) and press Alt+Shift+R. If you're using in-place rename (the default), just rename the field, press Enter, and Eclipse will rename the corresponding getters and setters as well. If you've chosen to use the classic refactor dialog (see note below), make sure you enable Rename getter and Rename setter on the rename dialog.

Note: You can choose to do renaming using the traditional rename dialog by going to Window > Preferences > Java and unchecking Rename in editor without dialog. I prefer using the rename dialog, as it highlights the whole name by default, making it easier to overwrite, and I have the option of not renaming the getters and setters if I don't want to. The Eclipse default these days is to use the new in-place renaming.

Although Eclipse will rename the getter/setter, it won't rename the argument passed to the setter method. If you want consistency, you can navigate to that method (e.g. using Ctrl+O) and rename the argument yourself.

Delete getters and setters

Deleting getters and setters isn't as straightforward as just deleting the field in the editor. However, you can delete a field and its getters/setters from the Outline view. Open the Outline view (Alt+Shift+Q, O), select the field you want to delete and press Delete (or right-click, Delete). Eclipse will ask you whether you want to delete the getters/setters as well. Just choose Yes to All and they will be removed.

You need to have fields visible in the Outline view to use this feature (i.e. untoggle the Hide Fields button). You can select multiple fields simultaneously. And you can delete individual getters/setters (excluding the field) by just selecting the getter/setter and pressing Delete.

From http://eclipseone.wordpress.com/2010/10/26/generate-rename-and-delete-getterssetters-in-eclipse/
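To make the boilerplate concrete, here is a sketch of what Eclipse's "Generate Getters and Setters" (Alt+Shift+S, R) produces for a small class; the Person class and its fields are made-up examples, not taken from the article.

```java
// A hypothetical two-field class; the accessors below are exactly the
// kind of boilerplate Eclipse generates, inserted as "last member".
public class Person {
    private String name;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Ada");
        p.setAge(36);
        System.out.println(p.getName() + " " + p.getAge()); // prints "Ada 36"
    }
}
```

Two fields already mean four trivial methods, which is why generating, renaming, and deleting them through the IDE (or avoiding them with Lombok) pays off quickly.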
October 27, 2010
by Byron M
· 157,076 Views · 2 Likes
Know the JVM Series: Shutdown Hooks
Shutdown hooks are a special construct that allows developers to plug in a piece of code to be executed when the JVM is shutting down.
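A minimal sketch of the mechanism, using the standard Runtime API: a hook is an unstarted Thread registered with the runtime, and its run() method executes when the JVM begins an orderly shutdown.

```java
public class ShutdownHookDemo {
    // Build (but don't start) a thread whose run() executes at JVM
    // shutdown, and register it with the runtime.
    public static Thread registerHook() {
        Thread hook = new Thread(() -> System.out.println("Cleaning up before JVM exit"));
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        registerHook();
        System.out.println("Main finished; hook runs during shutdown");
        // On normal exit the JVM now runs the registered hook.
    }
}
```

Note that hooks run concurrently with each other, and they are not invoked on Runtime.halt() or on abrupt termination (e.g. a SIGKILL), so they are best kept short and used only for best-effort cleanup.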
October 23, 2010
by Yohan Liyanage
· 84,839 Views · 1 Like
SQL Server: How to Insert a Million Numbers Into a Table Fast
Yesterday I attended a local community evening where one of the most famous Estonian MVPs – Henn Sarv – spoke about SQL Server queries and performance. During this session we saw very cool demos, and in this posting I will introduce you to my favorite one – how to insert a million numbers into a table. The problem is: how do you get one million numbers into a table in the least time? We can solve this problem using different approaches, but not all of them are quick. Let's now go step by step and see how the different approaches perform. NB! The code samples here are not the original ones, but were written by me as I wrote this posting.

Using WHILE

The first idea for many people is to use WHILE. It is a robust but primitive approach, and it works if you don't think about better solutions. The solution with WHILE is here:

declare @i as int
set @i = 0

while (@i < 1000000)
begin
    insert into numbers values (@i)
    set @i += 1
end

When we run this code we have to wait. Well... we have to wait a couple of minutes before SQL Server gets done. On my heavily loaded development machine it took 6 minutes to run. Well, maybe we can do something about it.

Using an inline table

As a next step we may think that an inline table kept in memory will boost performance. Okay, let's try out the following code:

declare @t TABLE (number int)
declare @i as int
set @i = 0

while (@i < 1000000)
begin
    insert into @t values (@i)
    set @i += 1
end

insert into numbers
select * from @t

Okay, it is better – it took "only" 01:30 to run. That beats six minutes, but it is still not good. Maybe we can do something more?

Optimizing WHILE

If we investigate the code in the first example we can find one hidden resource eater: each of the million inserts runs in its own separate transaction. Let's try to run the inserts in one transaction:

declare @i as int
set @i = 0

begin transaction

while (@i < 1000000)
begin
    insert into numbers values (@i)
    set @i += 1
end

commit transaction

Okay, it's a lot better – only 18 seconds!

Using only set operations

Now let's write some SQL that doesn't use any sequential constructs like WHILE or other loops. We will write SQL that uses only set operations and none of the long-running stuff from before:

declare @t table (number int)

insert into @t
select 0 union all select 1 union all select 2 union all
select 3 union all select 4 union all select 5 union all
select 6 union all select 7 union all select 8 union all
select 9

insert into numbers
select t1.number + t2.number*10 + t3.number*100 + t4.number*1000 +
       t5.number*10000 + t6.number*100000
from @t as t1, @t as t2, @t as t3, @t as t4, @t as t5, @t as t6

The bad side of this SQL is that it is not as intuitive for application programmers as the previous examples. But when you are working with databases, you need to know some set calculus as well. The result is now seven seconds!

Results

As a last thing, let's see the results as a bar chart to illustrate the difference between the approaches. I think this example shows very well how the usual optimization can give you better results, but when you move to sets – something that SQL Server and other databases understand better – you can get very good performance.
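The set-based query works because a cross join of six copies of a ten-row digit table yields exactly one row per number from 0 to 999,999, each number assembled from its six decimal digits. As an illustration of the same arithmetic outside SQL, here is a small Java sketch (class and method names are mine, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class CrossJoinNumbers {
    // Mirror the T-SQL cross join: six copies of the digits 0..9, combined
    // as d1 + d2*10 + ... + d6*100000, produce every number in 0..999999
    // exactly once.
    public static List<Integer> generate() {
        int[] digits = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
        List<Integer> numbers = new ArrayList<>(1_000_000);
        for (int d6 : digits)
            for (int d5 : digits)
                for (int d4 : digits)
                    for (int d3 : digits)
                        for (int d2 : digits)
                            for (int d1 : digits)
                                numbers.add(d1 + d2 * 10 + d3 * 100
                                        + d4 * 1_000 + d5 * 10_000 + d6 * 100_000);
        return numbers;
    }

    public static void main(String[] args) {
        List<Integer> n = generate();
        System.out.println(n.size());       // 1000000
        System.out.println(n.get(999_999)); // 999999
    }
}
```

The point of the SQL version is that the database evaluates this "six nested loops" shape as a single set operation, which is why it outruns the row-by-row WHILE variants.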
October 22, 2010
by Gunnar Peipman
· 34,545 Views
Manage Hierarchical Data using Spring, JPA and Aspects
Managing hierarchical data using two-dimensional tables is a pain. There are some patterns to reduce this pain; one such solution is described here. This article is about implementing the same using Spring, JPA, annotations, and aspects. Please follow the link to better understand the solution described. The purpose is to come up with a component that removes the boilerplate code in the business layer for handling hierarchical data.

Summary

1. Create a base class for entities used to represent hierarchical data.
2. Create annotation classes.
3. Code the aspect that will execute the additional steps for managing hierarchical data. (The heart of the solution.)
4. Now the aspect can be used everywhere hierarchical data is used.

Detail

Create a base class for entities used to represent hierarchical data. The purpose of the superclass is to encapsulate all the common attributes and operations required for managing hierarchical data in a table. Please note that the class is annotated as @MappedSuperclass. The methods are meant to generate the queries required to perform CRUD operations on the table. Their use will become clearer later in the article when we revisit HierarchicalEntity. Any entity that extends this class will have all the attributes required to manage hierarchical data.

import com.es.clms.aspect.HierarchicalEntity;
import javax.persistence.EntityListeners;
import javax.persistence.MappedSuperclass;

@MappedSuperclass
@EntityListeners({HierarchicalEntity.class})
public abstract class AbstractHierarchyEntity implements Serializable {

    protected Long parentId;
    protected Long lft;
    protected Long rgt;

    public String getMaxRightQuery() {
        return "Select max(e.rgt) from " + this.getClass().getName() + " e";
    }

    public String getQueryForParentRight() {
        return "Select e.rgt from " + this.getClass().getName() + " e where e.id = ?1";
    }

    public String getDeleteStmt() {
        return "Delete from " + this.getClass().getName() + " e Where e.lft between ?1 and ?2";
    }

    public String getUpdateStmtForFirst() {
        return "Update " + this.getClass().getName() + " e set e.lft = e.lft + ?2 Where e.lft >= ?1";
    }

    public String getUpdateStmtForRight() {
        return "Update " + this.getClass().getName() + " e set e.rgt = e.rgt + ?2 Where e.rgt >= ?1";
    }

    // ... getters and setters for all the attributes.
}

Create annotation classes. The following is an annotation class used to annotate the methods that perform CRUD operations on hierarchical data. It is followed by an enum that decides the type of CRUD operation to be performed. These classes will make more sense after the next section.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface HierarchicalOperation {
    HierarchicalOperationType operationType();
}

/**
 * Enum - Type of CRUD operation.
 */
public enum HierarchicalOperationType {
    SAVE, DELETE;
}

Code the aspect that will execute the additional steps for managing hierarchical data. HierarchicalEntity is an aspect that performs the additional logic required to manage hierarchical data as described in the article linked above. This is the first time I have used an aspect, so I am sure there are better ways to do this; those of you who are good at it, please improve this part of the code. The class is annotated as @Aspect. The pointcut intercepts any method annotated with HierarchicalOperation that has an input of type AbstractHierarchyEntity. A sample of its usage is in the next section. The operation method is annotated to execute before the pointcut. Based on the HierarchicalOperationType passed, this method will execute the additional tasks required to either save or delete the hierarchical record. This is where the methods defined in AbstractHierarchyEntity for generating JPA queries are used. GenericDAOHelper is a utility class for using JPA.

import com.es.clms.annotation.HierarchicalOperation;
import com.es.clms.annotation.HierarchicalOperationType;
import com.es.clms.common.GenericDAOHelper;
import com.es.clms.model.AbstractHierarchyEntity;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Service;
import org.springframework.beans.factory.annotation.Autowired;

@Aspect
@Service("hierarchicalEntity")
public class HierarchicalEntity {

    @Autowired
    private GenericDAOHelper genericDAOHelper;

    @Pointcut(value = "execution(@com.es.clms.annotation.HierarchicalOperation * *(..)) "
            + "&& args(AbstractHierarchyEntity)")
    private void hierarchicalOps() {
    }

    @Before("hierarchicalOps() && @annotation(hierarchicalOperation)")
    public void operation(final JoinPoint jp,
            final HierarchicalOperation hierarchicalOperation) {
        if (jp.getArgs().length != 1) {
            throw new IllegalArgumentException(
                    "Expecting only one parameter of type AbstractHierarchyEntity in "
                    + jp.getSignature());
        }
        if (HierarchicalOperationType.SAVE.equals(hierarchicalOperation.operationType())) {
            save(jp);
        } else if (HierarchicalOperationType.DELETE.equals(hierarchicalOperation.operationType())) {
            delete(jp);
        }
    }

    private void save(JoinPoint jp) {
        AbstractHierarchyEntity entity = (AbstractHierarchyEntity) jp.getArgs()[0];
        if (entity == null) {
            return;
        }
        if (entity.getParentId() == null) {
            Long maxRight = (Long) genericDAOHelper.executeSingleResultQuery(
                    entity.getMaxRightQuery());
            if (maxRight == null) {
                maxRight = 0L;
            }
            entity.setLft(maxRight + 1);
            entity.setRgt(maxRight + 2);
        } else {
            Long parentRight = (Long) genericDAOHelper.executeSingleResultQuery(
                    entity.getQueryForParentRight(), entity.getParentId());
            entity.setLft(parentRight);
            entity.setRgt(parentRight + 1);
            genericDAOHelper.executeUpdate(entity.getUpdateStmtForFirst(), parentRight, 2L);
            genericDAOHelper.executeUpdate(entity.getUpdateStmtForRight(), parentRight, 2L);
        }
    }

    private void delete(JoinPoint jp) {
        AbstractHierarchyEntity entity = (AbstractHierarchyEntity) jp.getArgs()[0];
        genericDAOHelper.executeUpdate(entity.getDeleteStmt(), entity.getLft(), entity.getRgt());
        Long width = (entity.getRgt() - entity.getLft()) + 1;
        genericDAOHelper.executeUpdate(entity.getUpdateStmtForFirst(), entity.getRgt(), width * (-1));
        genericDAOHelper.executeUpdate(entity.getUpdateStmtForRight(), entity.getRgt(), width * (-1));
    }
}

Sample usage. From this point on you don't have to worry about the additional tasks required for managing the data. Just use the HierarchicalOperation annotation with the appropriate HierarchicalOperationType. Below is a sample use of the code developed so far.

@HierarchicalOperation(operationType = HierarchicalOperationType.SAVE)
public long save(VariableGroup group) {
    entityManager.persist(group);
    return group.getId();
}

@HierarchicalOperation(operationType = HierarchicalOperationType.DELETE)
public void delete(VariableGroup group) {
    entityManager.remove(entityManager.merge(group));
}

http://rajeshkilango.blogspot.com/2010/10/manage-hierarchical-data-using-spring.html
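The lft/rgt bookkeeping in the aspect follows the nested set model: inserting a node under a parent claims the interval at the parent's right boundary and shifts everything to the right of it by 2. The same arithmetic can be demonstrated on an in-memory list; this is an assumed, self-contained sketch (the Node class and insert method are mine, not the article's), not the JPA-backed implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class NestedSetDemo {
    // Minimal in-memory node mirroring the lft/rgt columns.
    public static class Node {
        public final String name;
        public long lft, rgt;
        Node(String name, long lft, long rgt) { this.name = name; this.lft = lft; this.rgt = rgt; }
    }

    // Insert under a parent (null = new root), applying the same shifts the
    // aspect's save() issues as JPA updates: lft += 2 where lft >= parentRight,
    // and rgt += 2 where rgt >= parentRight.
    public static Node insert(List<Node> tree, String name, Node parent) {
        long left, right;
        if (parent == null) {
            long maxRight = tree.stream().mapToLong(n -> n.rgt).max().orElse(0L);
            left = maxRight + 1;
            right = maxRight + 2;
        } else {
            long parentRight = parent.rgt;
            for (Node n : tree) {
                if (n.lft >= parentRight) n.lft += 2;
                if (n.rgt >= parentRight) n.rgt += 2;
            }
            left = parentRight;
            right = parentRight + 1;
        }
        Node node = new Node(name, left, right);
        tree.add(node);
        return node;
    }

    public static void main(String[] args) {
        List<Node> tree = new ArrayList<>();
        Node a = insert(tree, "A", null); // A: (1, 2)
        insert(tree, "B", a);             // B: (2, 3), A grows to (1, 4)
        insert(tree, "C", a);             // C: (4, 5), A grows to (1, 6)
        System.out.println(a.lft + "," + a.rgt); // 1,6
    }
}
```

After the three inserts, every child's (lft, rgt) interval nests inside its parent's, which is exactly the invariant the DELETE statement exploits with "Where e.lft between ?1 and ?2".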
October 21, 2010
by Rajesh Ilango
· 16,663 Views
Tutorial: Linked/Cascading ExtJS Combo Boxes using Spring MVC 3 and Hibernate 3.5
This post will walk you through how to implement ExtJS linked/cascading/nested combo boxes using Spring MVC 3 and Hibernate 3.5. I am going to use the classic linked combo boxes: state and cities. In this example, I am going to use states and cities from Brazil!

What is our main goal? When we select a state from the first combo box, the application will load the second combo box with the cities that belong to the selected state.

There are two ways to implement it. The first one is to load all the information you need for both combo boxes; when the user selects a state, the application filters the cities combo box according to the selected state. The second one is to load only the information needed to populate the state combo box; when the user selects a state, the application retrieves from the database all the cities that belong to the selected state.

Which one is best? It depends on the amount of data you have to retrieve from your database. For example: you have a combo box that lists all the countries in the world, and a second combo box that represents all the cities in the world. In this case, scenario number 2 is the best option, because otherwise you would have to retrieve a large amount of data from the database.

OK, let's get into the code. I'll show how to implement both scenarios. First, let me explain a little bit of how the project is organized. Let's take a look at the Java code.

BaseDAO: contains the Hibernate template used by CityDAO and StateDAO.
package com.loiane.dao;

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.orm.hibernate3.HibernateTemplate;
import org.springframework.stereotype.Repository;

@Repository
public abstract class BaseDAO {

    private HibernateTemplate hibernateTemplate;

    public HibernateTemplate getHibernateTemplate() {
        return hibernateTemplate;
    }

    @Autowired
    public void setSessionFactory(SessionFactory sessionFactory) {
        hibernateTemplate = new HibernateTemplate(sessionFactory);
    }
}

CityDAO: contains two methods: one to retrieve all cities from the database (used in scenario #1), and one to retrieve all the cities that belong to a state (used in scenario #2).

package com.loiane.dao;

import java.util.List;

import org.hibernate.criterion.DetachedCriteria;
import org.hibernate.criterion.Restrictions;
import org.springframework.stereotype.Repository;

import com.loiane.model.City;

@Repository
public class CityDAO extends BaseDAO {

    public List getCityListByState(int stateId) {
        DetachedCriteria criteria = DetachedCriteria.forClass(City.class);
        criteria.add(Restrictions.eq("stateId", stateId));
        return this.getHibernateTemplate().findByCriteria(criteria);
    }

    public List getCityList() {
        DetachedCriteria criteria = DetachedCriteria.forClass(City.class);
        return this.getHibernateTemplate().findByCriteria(criteria);
    }
}

StateDAO: contains only one method, to retrieve all the states from the database.

package com.loiane.dao;

import java.util.List;

import org.hibernate.criterion.DetachedCriteria;
import org.springframework.stereotype.Repository;

import com.loiane.model.State;

@Repository
public class StateDAO extends BaseDAO {

    public List getStateList() {
        DetachedCriteria criteria = DetachedCriteria.forClass(State.class);
        return this.getHibernateTemplate().findByCriteria(criteria);
    }
}

City: the city POJO; represents the city table.

package com.loiane.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

import org.codehaus.jackson.annotate.JsonAutoDetect;

@JsonAutoDetect
@Entity
@Table(name="city")
public class City {

    private int id;
    private int stateId;
    private String name;

    // getters and setters
}

State: the state POJO; represents the state table.

package com.loiane.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

import org.codehaus.jackson.annotate.JsonAutoDetect;

@JsonAutoDetect
@Entity
@Table(name="state")
public class State {

    private int id;
    private int countryId;
    private String code;
    private String name;

    // getters and setters
}

CityService: contains two methods: one to retrieve all cities from the database (used in scenario #1), and one to retrieve all the cities that belong to a state (used in scenario #2). It only delegates to the CityDAO class.

package com.loiane.service;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.loiane.dao.CityDAO;
import com.loiane.model.City;

@Service
public class CityService {

    private CityDAO cityDAO;

    public List getCityListByState(int stateId) {
        return cityDAO.getCityListByState(stateId);
    }

    public List getCityList() {
        return cityDAO.getCityList();
    }

    @Autowired
    public void setCityDAO(CityDAO cityDAO) {
        this.cityDAO = cityDAO;
    }
}

StateService: contains only one method, to retrieve all the states from the database; it delegates to StateDAO.
package com.loiane.service; import java.util.list; import org.springframework.beans.factory.annotation.autowired; import org.springframework.stereotype.service; import com.loiane.dao.statedao; import com.loiane.model.state; @service public class stateservice { private statedao statedao; public list getstatelist() { return statedao.getstatelist(); } @autowired public void setstatedao(statedao statedao) { this.statedao = statedao; } } citycontroller: contains two methods: one to retrieve all cities from database (used in scenario #1), and one method to retrieve all the cities that belong to a state (used in scenario #2). only makes a call to cityservice class. both methods return a json object that looks like this: {"data":[ {"stateid":1,"name":"acrelândia","id":1}, {"stateid":1,"name":"assis brasil","id":2}, {"stateid":1,"name":"brasiléia","id":3}, {"stateid":1,"name":"bujari","id":4}, {"stateid":1,"name":"capixaba","id":5}, {"stateid":1,"name":"cruzeiro do sul","id":6}, {"stateid":1,"name":"epitaciolândia","id":7}, {"stateid":1,"name":"feijó","id":8}, {"stateid":1,"name":"jordão","id":9}, {"stateid":1,"name":"mâncio lima","id":10}, ]} class: package com.loiane.web; import java.util.hashmap; import java.util.map; import org.springframework.beans.factory.annotation.autowired; import org.springframework.stereotype.controller; import org.springframework.web.bind.annotation.requestmapping; import org.springframework.web.bind.annotation.requestparam; import org.springframework.web.bind.annotation.responsebody; import com.loiane.service.cityservice; @controller @requestmapping(value="/city") public class citycontroller { private cityservice cityservice; @requestmapping(value="/getcitiesbystate.action") public @responsebody map getcitiesbystate(@requestparam int stateid) throws exception { map modelmap = new hashmap(3); try{ modelmap.put("data", cityservice.getcitylistbystate(stateid)); return modelmap; } catch (exception e) { e.printstacktrace(); modelmap.put("success", 
false); return modelmap; } } @requestmapping(value="/getallcities.action") public @responsebody map getallcities() throws exception { map modelmap = new hashmap(3); try{ modelmap.put("data", cityservice.getcitylist()); return modelmap; } catch (exception e) { e.printstacktrace(); modelmap.put("success", false); return modelmap; } } @autowired public void setcityservice(cityservice cityservice) { this.cityservice = cityservice; } } statecontroller: contains only one method to retrieve all the states from database. makes a call to stateservice. the method returns a json object that looks like this: {"data":[ {"countryid":1,"name":"acre","id":1,"code":"ac"}, {"countryid":1,"name":"alagoas","id":2,"code":"al"}, {"countryid":1,"name":"amapá","id":3,"code":"ap"}, {"countryid":1,"name":"amazonas","id":4,"code":"am"}, {"countryid":1,"name":"bahia","id":5,"code":"ba"}, {"countryid":1,"name":"ceará","id":6,"code":"ce"}, {"countryid":1,"name":"distrito federal","id":7,"code":"df"}, {"countryid":1,"name":"espírito santo","id":8,"code":"es"}, {"countryid":1,"name":"goiás","id":9,"code":"go"}, {"countryid":1,"name":"maranhão","id":10,"code":"ma"}, {"countryid":1,"name":"mato grosso","id":11,"code":"mt"}, {"countryid":1,"name":"mato grosso do sul","id":12,"code":"ms"}, {"countryid":1,"name":"minas gerais","id":13,"code":"mg"}, {"countryid":1,"name":"pará","id":14,"code":"pa"}, ]} class: package com.loiane.web; import java.util.hashmap; import java.util.map; import org.springframework.beans.factory.annotation.autowired; import org.springframework.stereotype.controller; import org.springframework.web.bind.annotation.requestmapping; import org.springframework.web.bind.annotation.responsebody; import com.loiane.service.stateservice; @controller @requestmapping(value="/state") public class statecontroller { private stateservice stateservice; @requestmapping(value="/view.action") public @responsebody map view() throws exception { map modelmap = new hashmap(3); try{ modelmap.put("data", 
stateservice.getstatelist()); return modelmap; } catch (exception e) { e.printstacktrace(); modelmap.put("success", false); return modelmap; } } @autowired public void setstateservice(stateservice stateservice) { this.stateservice = stateservice; } } inside the webcontent folder we have: ext-3.2.1 – contains all extjs files js – contains all javascript files i implemented for this example. liked-comboboxes-local.js contains the combo boxes for scenario #1; liked-comboboxes-remote.js contains the combo boxes for scenario #2; linked-comboboxes.js contains a tab panel that contains both scenarios. now let’s take a look at the extjs code. scenario number 1: retrieve all the data from database to populate both combo boxes. will user filter on cities combo box. liked-comboboxes-local.js var localform = new ext.formpanel({ width: 400 ,height: 300 ,style:'margin:16px' ,bodystyle:'padding:10px' ,title:'linked combos - local filtering' ,defaults: {xtype:'combo'} ,items:[{ fieldlabel:'select state' ,displayfield:'name' ,valuefield:'id' ,store: new ext.data.jsonstore({ url: 'state/view.action', remotesort: false, autoload:true, idproperty: 'id', root: 'data', totalproperty: 'total', fields: ['id','name'] }) ,triggeraction:'all' ,mode:'local' ,listeners:{select:{fn:function(combo, value) { var combocity = ext.getcmp('combo-city-local'); combocity.clearvalue(); combocity.store.filter('stateid', combo.getvalue()); } } },{ fieldlabel:'select city' ,displayfield:'name' ,valuefield:'id' ,id:'combo-city-local' ,store: new ext.data.jsonstore({ url: 'city/getallcities.action', remotesort: false, autoload:true, idproperty: 'id', root: 'data', totalproperty: 'total', fields: ['id','stateid','name'] }) ,triggeraction:'all' ,mode:'local' ,lastquery:'' }] }); the state combo box is declared on lines 9 to 28. the city combo box is declared on lines 31 to 46. note that both stores are loaded when we load the page, as we can see in lines 15 and 38 (autoload:true). 
the state combo box has select event listener that, when executed, filters the cities combo (the child combo) based on the currently selected state. you can see it on lines 23 to 28. cities combo has lastquery:”" . this is to fool internal combo filtering routines on the first page load. the cities combo just thinks that it has already been expanded once. scenario number 2: retrieve all the state data from database to populate state combo. when user selects a state, application will retrieve from database only selected information. liked-comboboxes-remote.js: var databaseform = new ext.formpanel({ width: 400 ,height: 200 ,style:'margin:16px' ,bodystyle:'padding:10px' ,title:'linked combos - database' ,defaults: {xtype:'combo'} ,items:[{ fieldlabel:'select state' ,displayfield:'name' ,valuefield:'id' ,store: new ext.data.jsonstore({ url: 'state/view.action', remotesort: false, autoload:true, idproperty: 'id', root: 'data', totalproperty: 'total', fields: ['id','name'] }) ,triggeraction:'all' ,mode:'local' ,listeners: { select: { fn:function(combo, value) { var combocity = ext.getcmp('combo-city'); //set and disable cities combocity.setdisabled(true); combocity.setvalue(''); combocity.store.removeall(); //reload city store and enable city combobox combocity.store.reload({ params: { stateid: combo.getvalue() } }); combocity.setdisabled(false); } } } },{ fieldlabel:'select city' ,displayfield:'name' ,valuefield:'id' ,disabled:true ,id:'combo-city' ,store: new ext.data.jsonstore({ url: 'city/getcitiesbystate.action', remotesort: false, idproperty: 'id', root: 'data', totalproperty: 'total', fields: ['id','stateid','name'] }) ,triggeraction:'all' ,mode:'local' ,lastquery:'' }] }); the state combo box is declared on lines 9 to 38. the city combo box is declared on lines 40 to 55. note that only state combo store is loaded when we load the page, as we can see at the line 15 (autoload:true). 
The state combo box has a select event listener that, when executed, reloads the data for the cities store (passing stateId as a parameter) based on the currently selected state. You can see it on lines 24 to 38. The cities combo has lastQuery:'' — again, this fools the combo's internal filtering routines on the first page load: the cities combo thinks it has already been expanded once. You can download the complete project from my GitHub repository. I used the Eclipse IDE + Tomcat 7 to develop this sample project. References: http://www.sencha.com/learn/tutorial:linked_combos_tutorial_for_ext_2 From http://loianegroner.com/2010/10/tutorial-linkedcascading-extjs-combo-boxes-using-spring-mvc-3-and-hibernate-3-5/
October 20, 2010
by Loiane Groner
· 32,000 Views · 1 Like
Spring 3 WebMVC - Optional Path Variables
Introduction To bind requests to controller methods via request patterns, Spring WebMVC's REST support is a perfect choice. Take a request like http://example.domain/houses/213234 and you can easily bind it to a controller method via annotation and bind path variables: ... @RequestMapping("/houses/{id}") public String handleHouse(@PathVariable long id) { return "viewHouse"; } Problem But the problem was, I needed the optional path segments small and preview. That means, I wanted to handle requests like /houses/preview/small/213234, /houses/small/213234, /houses/preview/213234 and the original /houses/213234. Why can't I use only one @RequestMapping for that? OK, I could introduce three new methods with request mappings: ... @RequestMapping("/houses/{id}") public String handleHouse(@PathVariable long id) { return "viewHouse"; } @RequestMapping("/houses/preview/{id}") ... @RequestMapping("/houses/preview/small/{id}") ... @RequestMapping("/houses/small/{id}") ... But imagine I have 3 or 4 optional path segments: I would have had 8 or 16 methods to handle all requests. So, there must be another option. Browsing through the source code, I found org.springframework.util.AntPathMatcher, which is responsible for parsing the request URI and extracting variables. It seems to be the right place for an extension. Here is how I would like to write my handler method: @RequestMapping("/houses/[preview/][small/]{id}") public String handlePreview(@PathVariable long id, @PathVariable("preview/") boolean preview, @PathVariable("small/") boolean small) { return "view"; } Solution Let's do the extension: package de.herold.spring3; import java.util.HashMap; import java.util.Map; import org.springframework.util.AntPathMatcher; /** * Extends {@link AntPathMatcher} to introduce the feature of optional path * variables.
It supports request mappings like: * * * @RequestMapping("/houses/[preview/][small/]{id}") * public String handlePreview(@PathVariable long id, @PathVariable("preview/") boolean preview, @PathVariable("small/") boolean small) { * ... * } * * */ public class OptionalPathMatcher extends AntPathMatcher { public static final String ESCAPE_BEGIN = "["; public static final String ESCAPE_END = "]"; /** * stores a request mapping pattern and the corresponding variable * configuration. */ protected static class PatternVariant { private final String pattern; private Map<String, String> variables; public Map<String, String> getVariables() { return variables; } public PatternVariant(String pattern) { super(); this.pattern = pattern; } public PatternVariant(PatternVariant parent, int startPos, int endPos, boolean include) { final String p = parent.getPattern(); final String varName = p.substring(startPos + 1, endPos); this.pattern = p.substring(0, startPos) + (include ? varName : "") + p.substring(endPos + 1); this.variables = new HashMap<String, String>(); if (parent.getVariables() != null) { this.variables.putAll(parent.getVariables()); } this.variables.put(varName, Boolean.toString(include)); } public String getPattern() { return pattern; } } /** * here we use {@link AntPathMatcher#doMatch(String, String, boolean, Map)} * to do the real match against the * {@link #getPatternVariants(PatternVariant) calculated patterns}. If * needed, template variables are set. */ @Override protected boolean doMatch(String pattern, String path, boolean fullMatch, Map<String, String> uriTemplateVariables) { for (PatternVariant patternVariant : getPatternVariants(new PatternVariant(pattern))) { if (super.doMatch(patternVariant.getPattern(), path, fullMatch, uriTemplateVariables)) { if (uriTemplateVariables != null && patternVariant.getVariables() != null) { uriTemplateVariables.putAll(patternVariant.getVariables()); } return true; } } return false; } /** * Recursively builds all possible request patterns for the given request * pattern.
For the pattern /houses/[preview/][small/]{id}, it * generates all combinations: /houses/preview/small/{id}, * /houses/preview/{id}, /houses/small/{id} and * /houses/{id} */ protected PatternVariant[] getPatternVariants(PatternVariant variant) { final String pattern = variant.getPattern(); if (!pattern.contains(ESCAPE_BEGIN)) { return new PatternVariant[] { variant }; } else { int startPos = pattern.indexOf(ESCAPE_BEGIN); int endPos = pattern.indexOf(ESCAPE_END, startPos + 1); PatternVariant[] withOptionalParam = getPatternVariants(new PatternVariant(variant, startPos, endPos, true)); PatternVariant[] withOutOptionalParam = getPatternVariants(new PatternVariant(variant, startPos, endPos, false)); return concat(withOptionalParam, withOutOptionalParam); } } /** * utility function for array concatenation */ private static PatternVariant[] concat(PatternVariant[] A, PatternVariant[] B) { PatternVariant[] C = new PatternVariant[A.length + B.length]; System.arraycopy(A, 0, C, 0, A.length); System.arraycopy(B, 0, C, A.length, B.length); return C; } } Now let Spring use our new path matcher in the application context. That's it: just set the pathMatcher property of AnnotationMethodHandlerAdapter. Conclusion I know this implementation is far from elegant or efficient. My intention was to show an extension of the AntPathMatcher. Maybe someone gets inspired and provides an extension to handle patterns like: @RequestMapping("/houses/{**}/{id}") public String handleHouse(@PathVariable long id, @PathVariable("**") String inBetween) { return "viewHouse"; } to get the "**" as a concrete value. The sources for a little proof of concept are attached.
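To make the wiring concrete, here is a minimal sketch of the relevant application-context beans, assuming the OptionalPathMatcher class defined above (Spring 3 style; this bean declaration is illustrative, not the article's original XML):

```xml
<!-- Sketch only: plugs the custom matcher into Spring MVC's handler adapter -->
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
    <property name="pathMatcher">
        <bean class="de.herold.spring3.OptionalPathMatcher"/>
    </property>
</bean>
```

Depending on the handler mapping in use, the same pathMatcher property may also need to be set on DefaultAnnotationHandlerMapping so that URLs with optional segments are routed to the controller in the first place.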
October 19, 2010
by Sebastian Herold
· 51,382 Views · 1 Like
Step-by-Step Instructions for Integrating DJ Native Swing into NetBeans RCP
Here's how to integrate DJ Native Swing into a NetBeans RCP application. We will create multiple operating-system specific modules, each with the JARs and supporting classes needed for the relevant operating system. Then, in a module installer, we will enable only the module that is relevant for the operating system in question. I.e., if the user is on Windows, only the Windows module will be enabled, while all other modules will be disabled. Many thanks to Aljoscha Rittner for all of the code and each of the steps below. Any errors are my own, his instructions are perfect. 1. Download DJ Native Swing. 2. Go to download.eclipse.org/eclipse/downloads/drops/R-3.6-201006080911/. There, under the heading "SWT Binary and Source", download and extract the os-specific ZIPs that you want to support. 3. Unzip your downloaded ZIPs. Rename your swt.jar in the unzipped files to the full name of the zip. For example, instead of multiple "swt.jar" files, you'll now have JAR names such as "swt-3.6-gtk-linux-x86_64.jar" and "swt-3.6-win32-win32-x86_64.jar". Because these JARs will be in the same cluster folder in the NetBeans Platform application, they will need to have different names. 4. Let's start with Linux. Put the two DJ Native Swing JARs ("DJNativeSwing.jar" and "DJNativeSwing-SWT.jar") into a NetBeans library wrapper module named "com.myapp.nativeswing.linux64". Also put the "swt-3.6-gtk-linux-x86_64.jar" into the library wrapper module, while checking the "project.xml" and making sure there's a classpath extension entry for each of the three JARs in your library wrapper module. 5. Do the same for all the operating systems you're supporting, i.e., create a new library wrapper module like the above, with the operating-system specific SWT Jar, together with the two DJ Native Swing JARs. 6. Create a new module named "DJNativeSwingAPI", with code name base "com.myapp.nativeswing.api". 7. 
In the above main package, create a subpackage "browser", where you'll create an API to access the different implementations: public interface Browser { public JComponent getBrowserComponent(); public void browseTo (URL url); public void dispose(); } public interface BrowserProvider { public Browser createBrowser(); } 8. Make the "browser" package public and let all the operating-system specific library wrapper modules depend on the API module. 9. In each of the operating-system specific modules, create an "impl" subpackage with the following content, here specifically for Windows 64 bit: package com.myapp.nativeswing.windows64.impl; import chrriis.dj.nativeswing.swtimpl.NativeInterface; import com.myapp.nativeswing.api.browser.Browser; import com.myapp.nativeswing.api.browser.BrowserProvider; import org.openide.util.lookup.ServiceProvider; @ServiceProvider(service = BrowserProvider.class) public class Win64BrowserProvider implements BrowserProvider { private boolean isInitialized; @Override public Browser createBrowser() { initialize(); return new Win64Browser(); } private synchronized void initialize() { if (!isInitialized) { NativeInterface.open(); isInitialized = true; } } } import chrriis.dj.nativeswing.NSComponentOptions; import chrriis.dj.nativeswing.swtimpl.components.JWebBrowser; import com.myapp.nativeswing.api.browser.Browser; import java.net.URL; import javax.swing.JComponent; class Win64Browser implements Browser { private JWebBrowser webBrowser; public Win64Browser() { //If not this, browser component creates exceptions when you move it around, //this flag is for the native peers to recreate in the new place: webBrowser = new JWebBrowser(NSComponentOptions.destroyOnFinalization()); } public JComponent getBrowserComponent() { return webBrowser; } public void browseTo(URL url) { webBrowser.navigate(url.toString()); } public void dispose() { webBrowser.disposeNativePeer(); webBrowser = null; } } 10. 
Copy the above two classes into all your other operating-system specific library wrapper modules. Rename the classes accordingly. 11. In the DJ Native Swing API module, create a new subpackage named "utils", with this class, which programmatically enables/disables modules using the NetBeans AutoUpdate API: import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Set; import org.netbeans.api.autoupdate.OperationContainer; import org.netbeans.api.autoupdate.OperationContainer.OperationInfo; import org.netbeans.api.autoupdate.OperationException; import org.netbeans.api.autoupdate.OperationSupport; import org.netbeans.api.autoupdate.OperationSupport.Restarter; import org.netbeans.api.autoupdate.UpdateElement; import org.netbeans.api.autoupdate.UpdateManager; import org.netbeans.api.autoupdate.UpdateUnit; import org.openide.LifecycleManager; import org.openide.modules.ModuleInfo; import org.openide.util.Exceptions; import org.openide.util.Lookup; /** * ModuleHandler is a helper class for programmatically enabling/disabling * modules and for analyzing which installed modules are active. * @author rittner */ public class ModuleHandler { private boolean restart = false; private OperationContainer oc; private Restarter restarter; private final boolean directMode; public ModuleHandler() { this(false); } public ModuleHandler(boolean directMode) { this.directMode = directMode; } /** * Returns a sorted list of the code name bases of all active installed * modules. * * Note that this is explicitly a snapshot that may change at any time. * @param startFilter only modules whose code name base starts with this filter are returned (or null for all) * @param includeDisabled if true, disabled modules are included as well.
* @return sorted list of code name bases */ public List<String> getModules(String startFilter, boolean includeDisabled) { List<String> activatedModules = new ArrayList<String>(); Collection<? extends ModuleInfo> lookupAll = Lookup.getDefault().lookupAll(ModuleInfo.class); for (ModuleInfo moduleInfo : lookupAll) { if (includeDisabled || moduleInfo.isEnabled()) { if (startFilter == null || moduleInfo.getCodeNameBase().startsWith(startFilter)) { activatedModules.add(moduleInfo.getCodeNameBase()); } } } Collections.sort(activatedModules); return activatedModules; } /** * Restarts the application if a previous setModulesState call has set the * restart flag; with force, the restart can be enforced. * * Do not assume that this method returns to the caller. * @param force */ public void doRestart(boolean force) { if (force || restart) { if (oc != null && restarter != null) { try { oc.getSupport().doRestart(restarter, null); } catch (OperationException ex) { Exceptions.printStackTrace(ex); } } else { LifecycleManager.getDefault().markForRestart(); LifecycleManager.getDefault().exit(); } } } /** * Enables or disables the given set of modules * @param enable * @param codeNames * @return true, if a restart is strictly required */ public boolean setModulesState(boolean enable, Set<String> codeNames) { boolean restartFlag; if (enable) { restartFlag = setModulesEnabled(codeNames); } else { restartFlag = setModulesDisabled(codeNames); } return restart = restart || restartFlag; } private boolean setModulesDisabled(Set<String> codeNames) { Collection<UpdateElement> toDisable = new HashSet<UpdateElement>(); List<UpdateUnit> allUpdateUnits = UpdateManager.getDefault().getUpdateUnits(UpdateManager.TYPE.MODULE); for (UpdateUnit unit : allUpdateUnits) { if (unit.getInstalled() != null) { UpdateElement el = unit.getInstalled(); if (el.isEnabled()) { if (codeNames.contains(el.getCodeName())) { toDisable.add(el); } } } } if (!toDisable.isEmpty()) { oc = directMode ?
OperationContainer.createForDirectDisable() : OperationContainer.createForDisable(); for (UpdateElement module : toDisable) { if (oc.canBeAdded(module.getUpdateUnit(), module)) { OperationInfo operationInfo = oc.add(module); if (operationInfo == null) { continue; } // get all modules depending on this module Set<UpdateElement> requiredElements = operationInfo.getRequiredElements(); // add all of them to the container for disabling oc.add(requiredElements); } } try { // get operation support to complete the disable operation OperationSupport support = oc.getSupport(); // If support is null, no element can be disabled. if (support != null) { restarter = support.doOperation(null); } } catch (OperationException ex) { Exceptions.printStackTrace(ex); } } return restarter != null; } private boolean setModulesEnabled(Set<String> codeNames) { Collection<UpdateElement> toEnable = new HashSet<UpdateElement>(); List<UpdateUnit> allUpdateUnits = UpdateManager.getDefault().getUpdateUnits(UpdateManager.TYPE.MODULE); for (UpdateUnit unit : allUpdateUnits) { if (unit.getInstalled() != null) { UpdateElement el = unit.getInstalled(); if (!el.isEnabled()) { if (codeNames.contains(el.getCodeName())) { toEnable.add(el); } } } } if (!toEnable.isEmpty()) { oc = OperationContainer.createForEnable(); for (UpdateElement module : toEnable) { if (oc.canBeAdded(module.getUpdateUnit(), module)) { OperationInfo operationInfo = oc.add(module); if (operationInfo == null) { continue; } // get all modules depending on this module Set<UpdateElement> requiredElements = operationInfo.getRequiredElements(); // add all of them to the container for enabling oc.add(requiredElements); } } try { // get operation support to complete the enable operation OperationSupport support = oc.getSupport(); if (support != null) { restarter = support.doOperation(null); } return true; } catch (OperationException ex) { Exceptions.printStackTrace(ex); } } return false; } } 12. Create a ModuleInstall class in the API module.
In this class, we need to create a map connecting each operating system to the code name base of the module relevant to that operating system. For this, we use "os.arch" and "os.name". Then we create an enable list and a disable list of code name bases, and two handlers: one to disable everything else, the other to enable just the relevant module. public class Installer extends ModuleInstall { @Override public void restored() { Map<String, String> modelMap = new HashMap<String, String>(); modelMap.put("Windows.64", "com.myapp.nativeswing.windows64"); modelMap.put("Linux.64", "com.myapp.nativeswing.linux64"); String osArch = System.getProperty("os.arch"); if ("amd64".equals(osArch)) { osArch = "64"; } else { osArch = "32"; } String osName = System.getProperty("os.name"); if (osName.startsWith("Windows")) { osName = "Windows"; } if (osName.startsWith("Mac")) { osName = "Mac"; } Map<String, String> osNameMap = new HashMap<String, String>(); osNameMap.put("Windows", "Windows"); osNameMap.put("Linux", "Linux"); osNameMap.put("Mac", "Mac"); String toEnable = modelMap.get(osNameMap.get(osName) + "." + osArch); Set<String> toDisable = new HashSet<String>(modelMap.values()); if (toEnable != null) { toDisable.remove(toEnable); } ModuleHandler disabler = new ModuleHandler(true); disabler.setModulesState(false, toDisable); ModuleHandler enabler = new ModuleHandler(true); enabler.setModulesState(true, Collections.singleton(toEnable)); } } 13. Finally, create yet another module, where the TopComponent will be found that will host the browser from DJ Native Swing. So, create a new module, add a window where the browser will appear, and set a dependency on the DJ Native Swing API module. In the constructor of the window add the following: setLayout(new BorderLayout()); BrowserProvider bp = Lookup.getDefault().lookup(BrowserProvider.class); if (bp != null) { Browser createBrowser = bp.createBrowser(); add(createBrowser.getBrowserComponent(), BorderLayout.CENTER); } 14.
By default, library wrapper modules are set to "1.4" source code level and to "autoload". You will need to change "1.4" to "1.6" (since you're using annotations above). You will also need to change "autoload" to "regular", otherwise they will never be loaded, since no module depends on them. 15. On Linux, at least on Ubuntu, make sure you have done something like this: export MOZILLA_FIVE_HOME=/usr/lib/mozilla export LD_LIBRARY_PATH=$MOZILLA_FIVE_HOME On Linux (at least on Ubuntu), you also need to set an impl dependency on the "JNA" module. 16. In "platform.properties", add this line: run.args.extra=-J-Dsun.awt.disableMixing=true Hurray, you're done, once you run the application: Note: above I followed these instructions to remove the tab in the browser window.
October 18, 2010
by Geertjan Wielenga
· 51,645 Views
Practical PHP Patterns: Record Set
This is the last article from the Practical PHP Patterns series. Stay tuned on css.dzone.com for the new series, Practical PHP Testing Patterns. The Record Set pattern's goal is to represent a set of relational database rows, with the main purpose of giving access to their values (a data structure), sometimes with the possibility of modification (via single Row Data Gateway instances). Despite the term set, the rows usually have a defined order. The intent is ultimately to represent the result of an SQL query with an object, to gain the usual advantages of objects over scalars and functions: it can be passed around while maintaining its encapsulated behavior, injected, mocked, wrapped and so on. A Record Set is usually not mocked if it is provided by an external extension or library, because instantiating a real one backed by a lightweight database such as SQLite is fast enough to substitute for the production one. Especially in the PHP world, this solution is fast enough to have become a standard. Today, the Record Set is used less on the client side in favor of Object-Relational Mapping approaches, which perform some kind of translation over the raw rows (what a horrible pun). The Record Set instead maintains by definition a one-to-one relationship with the table rows, and it is still widespread in ORM internals (such as Doctrine's own code) or in applications that call PDO directly. It is a fairly basic pattern, but a vast part of legacy PHP applications still uses mysql_query()... Interesting things happen when... Interesting leverages of this pattern happen when something sits between the Record Set creation and the user interface, with the goal of modifying or decorating it. The UI can then explore the Record Set and automatically generate itself, in a form of scaffolding. Continuing on this line of thought, UI components can edit the Record Set without knowing the model it refers to, by building forms driven by the Record Set metadata.
This solution is widespread, but it does not scale to Domain Models with a level of complexity greater than plain arrays. Of course, the Record Set may also encapsulate business logic as a low-cost form of Domain Model. In this case, the ability to unlink it from the database connection is important to its serializability and ease of testing. Examples PDOStatement represents both an SQL query and, after it has been executed, a Record Set implementation. When fetching all the results, it returns an array. In other languages Record Sets are more evolved and can, for example, be used to navigate a table and modify only certain records (via their annexed Row Data Gateway). PDOStatement is used only for reading. If you want further functionality (which obviously depends on your domain), you should create your own Record Set accordingly. It is probably best to wrap the PDOStatement, because extending it is out of the question since its instantiation is not under our control. class TweetsTable { private $connection; public function __construct(PDO $connection) { $this->connection = $connection; } public function getTweetsRecordSet($username) { /** * @var PDOStatement this is a Record Set */ $stmt = $this->connection->prepare('SELECT * FROM tweets WHERE username = :username'); $stmt->bindValue(':username', $username, PDO::PARAM_STR); $stmt->execute(); return $stmt; } } $pdo = new PDO('sqlite::memory:'); $pdo->exec('CREATE TABLE tweets (id INT NOT NULL, username VARCHAR(255) NOT NULL, text VARCHAR(255) NOT NULL, PRIMARY KEY(id))'); $pdo->exec('INSERT INTO tweets (id, username, text) VALUES (42, "giorgiosironi", "Practical PHP Patterns has come to an end")'); $pdo->exec('INSERT INTO tweets (id, username, text) VALUES (43, "giorgiosironi", "Cool series: will continue as Practical PHP Testing Patterns")'); // client code $table = new TweetsTable($pdo); $recordSet = $table->getTweetsRecordSet('giorgiosironi'); while ($row = $recordSet->fetch()) { var_dump($row['text']); }
October 18, 2010
by Giorgio Sironi
· 3,110 Views
Enum Tricks: Customized valueOf
When writing enumerations I very often find myself implementing a static method similar to the standard enum's valueOf(), but based on a field rather than the name: public static TestOne valueOfDescription(String description) { for (TestOne v : values()) { if (v.description.equals(description)) { return v; } } throw new IllegalArgumentException( "No enum const " + TestOne.class + "@description." + description); } Where "description" is yet another String field in my enum. And I am not alone. See this article for example. Obviously this method is very inefficient: every time it is invoked it iterates over all members of the enum. Here is an improved version that uses a cache: private static Map<String, TestTwo> map = null; public static TestTwo valueOfDescription(String description) { synchronized(TestTwo.class) { if (map == null) { map = new HashMap<String, TestTwo>(); for (TestTwo v : values()) { map.put(v.description, v); } } } TestTwo result = map.get(description); if (result == null) { throw new IllegalArgumentException( "No enum const " + TestTwo.class + "@description." + description); } return result; } This is fine if we have only one enum and only one custom field that we use to find the enum value. But if we have 20 enums, and each has 3 such fields, the code becomes very verbose. As I dislike copy/paste programming, I have implemented a utility that helps to create such methods. I called this utility class ValueOf. It has 2 public methods: public static <T extends Enum<T>, V> T valueOf(Class<T> enumType, String fieldName, V value); which finds the required field in the specified enum. It is implemented using reflection and uses a hash table initialized during the first call for better performance. The other overloaded valueOf() looks like: public static <T extends Enum<T>> T valueOf(Class<T> enumType, Comparable<T> comparable); This method does not cache results, so it iterates over enum members on each invocation.
But it is more universal: you can implement comparable as you want, so this method may find enum members using more complicated criteria. Full code with examples and JUnit test case are available here. Conclusions Java Enums provide the ability to locate enum members by name. This article describes a utility that makes it easy to locate enum members by any other field.
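As a variant of the cached lookup above, a static initializer block gives the same constant-time lookup without the synchronized lazy initialization, since class initialization in Java is already thread-safe. A minimal, self-contained sketch (the Planet enum and its descriptions are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

enum Planet {
    MERCURY("swift"), VENUS("bright");

    private final String description;

    Planet(String description) {
        this.description = description;
    }

    // Populated exactly once, when the enum class is initialized;
    // no synchronization needed in the lookup method.
    private static final Map<String, Planet> BY_DESCRIPTION = new HashMap<String, Planet>();
    static {
        for (Planet p : values()) {
            BY_DESCRIPTION.put(p.description, p);
        }
    }

    public static Planet valueOfDescription(String description) {
        Planet result = BY_DESCRIPTION.get(description);
        if (result == null) {
            throw new IllegalArgumentException(
                "No enum const " + Planet.class + "@description." + description);
        }
        return result;
    }
}
```

Planet.valueOfDescription("swift") then returns MERCURY without scanning all constants; the trade-off versus the lazy version is that the map is built even if the lookup is never used.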
October 16, 2010
by Alexander Radzin
· 79,016 Views
Reduce Boilerplate Code for DAO's -- Hades Introduction
Most web applications will have DAOs for accessing the database layer. A DAO provides an interface for some type of database or persistence mechanism, providing CRUD and finder operations without exposing any database details. So, in your application you will have different DAOs for different entities. Most of the time, code that you have written in one DAO gets duplicated in other DAOs, because much of the functionality in DAOs is the same (like CRUD and finder methods). One way of avoiding this problem is to have a generic DAO and have your domain classes inherit this generic DAO implementation. You can also add finders using Spring AOP; this approach is explained by Per Mellqvist in this article. There is a problem with that approach: this boilerplate code becomes part of your application source code and you will have to maintain it. The more code you write, the more chances there are of new bugs being introduced in your application. So, to avoid writing this code in an application, we can use an open source framework called Hades. Hades is a utility library to work with Data Access Objects implemented with Spring and JPA. The main goal is to ease the development and operation of a data access layer in applications. In this article, I will show you how easy it is to write DAOs using Hades without writing any boilerplate code. In order to introduce you to Hades, I will show you how we can manage an entity like Book. Before we write any code we need to add the following dependencies to pom.xml.
org.synyx.hades:org.synyx.hades (version 2.0.0.RC3) and org.hibernate:hibernate-entitymanager (version 3.5.5-Final). So, let's start by creating a Book entity: import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; @Entity public class Book { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column(unique = true) private String title; private String author; private String isbn; private double price; // setters and getters } This is a very simple JPA entity without any relationships. Now that we have modeled our entity, we need to add a DAO interface for handling persistence operations. You need to create a BookDao interface which extends the GenericDao interface provided by Hades. GenericDao is an interface for generic CRUD operations on a DAO for a specific type, so we pass the type parameters Book for the entity and Long for the id. import org.synyx.hades.dao.GenericDao; public interface BookDao extends GenericDao<Book, Long> { } GenericDao has a default implementation called GenericJpaDao which provides implementations of all its operations. Now that we have created the BookDao interface, we configure it in the Spring application context XML. Hades provides a factory bean which supplies the DAO instance for the given interface (in our case BookDao). In that configuration I have used a new feature introduced in Spring 3, embedded databases, to get an HSQL datasource. You can refer to my earlier post on embedded databases in case you are not aware of it. I have used Hibernate as my JPA provider, so you need to configure the provider org.hibernate.ejb.HibernatePersistence in persistence.xml. Next we will write a JUnit test for this code.
@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration @Transactional public class BookDaoTest { @Autowired private BookDao bookDao; private Book book; @Before public void setUp() throws Exception { book = new Book(); book.setAuthor("shekhar"); book.setTitle("Effective Java"); book.setPrice(500); book.setIsbn("1234567890123"); } @Test public void shouldCreateABook() throws Exception { Book persistedBook = bookDao.save(book); assertBook(persistedBook); } @Test public void shouldReadAPersistedBook() throws Exception { Book persistedBook = bookDao.save(book); Book bookReadByPrimaryKey = bookDao.readByPrimaryKey(persistedBook.getId()); assertBook(bookReadByPrimaryKey); } @Test public void shouldDeleteBook() throws Exception { Book persistedBook = bookDao.save(book); bookDao.delete(persistedBook); Book bookReadByPrimaryKey = bookDao.readByPrimaryKey(persistedBook.getId()); assertNull(bookReadByPrimaryKey); } private void assertBook(Book persistedBook) { assertThat(persistedBook, is(notNullValue())); assertThat(persistedBook.getId(), is(not(equalTo(null)))); assertThat(persistedBook.getAuthor(), equalTo(book.getAuthor())); } } Auto Configuration Using Spring Namespaces The way we have configured the DAO can become quite cumbersome as the number of DAOs increases. To overcome this, we can make use of Spring namespaces to configure DAOs. This configuration triggers the auto-detection mechanism for DAOs that extend GenericDao or ExtendedGenericDao, and creates DAO instances for all the DAO interfaces found in the configured package. You can use include or exclude filters to control which interfaces get beans created. Adding Finders and Query Methods So far we have used the built-in operations provided by GenericDao, but most of the time we need to add our own finder methods like findByAuthorAndTitle or findWithPriceLessThan. Hades makes it very easy to add such methods to your domain DAO interface, like BookDao.
Hades provides 3 strategies for creating the JPA query at runtime. These are: CREATE: This creates a JPA query from the method name. This strategy ties you to the method name, so you have to think twice before changing it. public interface BookDao extends GenericDao<Book, Long> { public Book findByAuthorAndTitle(String author, String title); } // test code @Test public void shouldFindByAuthorAndTitle() throws Exception { Book persistedBook = bookDao.save(book); Book bookByAuthorAndTitle = bookDao.findByAuthorAndTitle("shekhar", "Effective Java"); assertBook(bookByAuthorAndTitle); } USE DECLARED QUERY: This lets you define the query using the JPA @NamedQuery or Hades @Query annotation. If no query is found, an exception is thrown. @Query("FROM Book b WHERE b.author = ?1") public Book findBookByAuthorName(String author); // test code public void shouldFindBookByAuthorName() { Book persistedBook = bookDao.save(book); Book bookByAuthor = bookDao.findBookByAuthorName("shekhar"); assertBook(bookByAuthor); } CREATE IF NOT FOUND: This is the combination of the two strategies above. It first looks for a declared query and, if none is found, falls back to the method name. This is the default option; you can change it via the query-lookup-strategy attribute of the hades:dao-config element. Hades queries also support pagination and sorting: you can pass instances of Pageable and Sort to the finder methods created above. public Page<Book> findByAuthor(String author, Pageable pageable); public List<Book> findByAuthor(String author, Sort sort); Hades is not limited to CRUD and custom finder methods; there are other features, like auditing and specifications, that I will discuss in the second part of this article.
October 15, 2010
by Shekhar Gulati
· 20,841 Views
Asynchronous (non-blocking) Execution in JDBC, Hibernate or Spring?
There is no asynchronous execution support in JDBC, mainly because most of the time you want to wait for the result of your DML or DDL, or because there is too much complexity involved between the back-end database and the front-end JDBC driver. Some database vendors do provide such support in their native drivers. For example, Oracle supports non-blocking calls in its native OCI driver. Unfortunately it is based on polling instead of callbacks or interrupts. Neither Hibernate nor Spring supports this feature. But sometimes you do need such a feature. For example, some business logic is still implemented using legacy Oracle PL/SQL stored procedures, and they run pretty long. The front-end UI doesn't want to wait for them to finish; it just needs to check the result later in a database logging table into which the stored procedure writes its execution status. In other cases your front-end application really cares about low latency and doesn't care too much about how an individual DML statement is executed, so you just fire the DML at the database and forget about its status. Nothing stops you from making asynchronous DB calls using multi-threading in your application. (Actually, even Oracle recommends using multiple threads instead of polling OCI, for efficiency.) However, you must think about how to handle transactions and connections (or Hibernate Sessions) in threads. Before continuing, let's assume we are only handling local transactions instead of JTA. 1. JDBC It is straightforward. You just create another thread (the DB thread hereafter) from the calling thread to make the actual JDBC call. If such calls are frequent, you can use a ThreadPoolExecutor to reduce thread creation and destruction overhead. 2. Hibernate You usually use the session context policy "thread" for Hibernate to automatically handle your session and transaction. With this policy, you get one session and transaction per thread. When you commit the transaction, Hibernate automatically closes the session.
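The fire-and-forget approach described above can be sketched with a plain ExecutorService. The actual JDBC stored-procedure call is replaced here by a placeholder task, since the threading pattern is the same regardless of the driver; the procedure name and return value are purely illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FireAndForget {

    // A small pool of "DB threads", avoiding per-call thread creation.
    private final ExecutorService dbPool = Executors.newFixedThreadPool(2);

    // Submit the long-running "database" work and return immediately.
    // The Future can be ignored (fire and forget) or polled later.
    public Future<String> callStoredProcedureAsync(final String procName) {
        return dbPool.submit(new Callable<String>() {
            @Override
            public String call() {
                // In real code: obtain a Connection, prepareCall(procName),
                // execute, and write the status to a logging table.
                return procName + ": OK";
            }
        });
    }

    // Convenience for callers that do want to block on the result.
    public String callAndWait(String procName) {
        try {
            return callStoredProcedureAsync(procName).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        dbPool.shutdown();
    }

    public static void main(String[] args) {
        FireAndForget f = new FireAndForget();
        Future<String> status = f.callStoredProcedureAsync("long_running_proc");
        // The caller is free to do other work here and check `status` later.
        System.out.println(f.callAndWait("another_proc"));
        f.shutdown();
    }
}
```

Each submitted task must manage its own connection and transaction, which is exactly the concern the sections below address for Hibernate and Spring.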
Again you need to create a DB thread for the actual stored procedure call. Some developers may wonder whether the new DB thread inherits its parent calling thread's session and transaction. This is an important question. First of all, you usually don't want to share the same transaction between the calling thread and its spawned DB thread, because you want to return immediately from the calling thread; if both threads shared the same session and transaction, the calling thread couldn't commit the transaction, and long-running transactions should be avoided. Secondly, Hibernate's "thread" policy doesn't support such inheritance: if you look at Hibernate's corresponding ThreadLocalSessionContext, it uses the ThreadLocal class instead of InheritableThreadLocal. Here is sample code for the DB thread:

// Non-managed environment and "thread" policy is in place
// gets a session first
Session sess = factory.getCurrentSession();
Transaction tx = null;
try {
    tx = sess.beginTransaction();
    // call the long running DB stored procedure
    // Hibernate automatically closes the session
    tx.commit();
} catch (RuntimeException e) {
    if (tx != null) tx.rollback();
    throw e;
}

3. Spring's Declarative Transaction
Let's suppose your stored procedure call is wrapped in this method:

@Transactional(readOnly=false)
public void callDBStoredProcedure();

The calling thread has the following method to call the above method asynchronously using Spring's TaskExecutor:

@Transactional(readOnly=false)
public void asynchCallDBStoredProcedure() {
    // creates a DB thread pool
    this.taskExecutor.execute(new Runnable() {
        @Override
        public void run() {
            // call callDBStoredProcedure()
        }
    });
}

You usually configure Spring's HibernateTransactionManager and the default proxy mode (aspectj is the other mode) for declarative transactions. This class binds a transaction and a Hibernate session to each thread, and doesn't support inheritance either, just like Hibernate's "thread" policy.
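The ThreadLocal vs. InheritableThreadLocal distinction discussed above is easy to demonstrate with plain Java: a value bound with ThreadLocal is not visible in a child thread, while an InheritableThreadLocal value is copied to the child when the thread is created. The "session-A" string below is just a stand-in for a bound Hibernate session.

```java
public class ThreadLocalInheritance {

    static final ThreadLocal<String> plain = new ThreadLocal<String>();
    static final InheritableThreadLocal<String> inheritable =
            new InheritableThreadLocal<String>();

    // Bind a value in the parent thread, then report what a child thread sees.
    static String[] childView() {
        plain.set("session-A");
        inheritable.set("session-A");
        final String[] seen = new String[2];
        Thread child = new Thread(new Runnable() {
            public void run() {
                seen[0] = plain.get();        // null: not inherited
                seen[1] = inheritable.get();  // "session-A": copied at creation
            }
        });
        child.start();
        try {
            child.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return seen;
    }

    public static void main(String[] args) {
        String[] seen = childView();
        System.out.println("plain=" + seen[0] + ", inheritable=" + seen[1]);
        // plain=null, inheritable=session-A
    }
}
```

Since ThreadLocalSessionContext uses plain ThreadLocal, a spawned DB thread starts with no session bound, which is exactly the behavior you want here.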
Where you put the above method callDBStoredProcedure() makes a huge difference. If you put the method in the same class as the calling thread, the declared transaction for callDBStoredProcedure() doesn't take effect, because in proxy mode only external or remote method calls coming in through the AOP proxy (an object created by the AOP framework in order to implement the transaction aspect; this object supports your calling thread's class by composing an instance of it) are intercepted. This means that "self-invocation", i.e. a method within the target object (the composed instance of your calling thread's class in the AOP proxy) calling another method of the target object, won't lead to an actual transaction at runtime, even if the invoked method is marked with @Transactional! So you must put callDBStoredProcedure() in a different class, as a Spring bean, so that the DB thread in asynchCallDBStoredProcedure() can load that bean's AOP proxy and call callDBStoredProcedure() through that proxy. About The Author Yongjun Jiao is a technical manager with SunGard Consulting Services. He has been a professional software developer for the past 10 years. His expertise covers Java SE, Java EE, Oracle, application tuning, and high performance and low latency computing.
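The self-invocation pitfall can be illustrated with a plain JDK dynamic proxy standing in for Spring's transactional proxy (a simplified sketch, not Spring's actual machinery): the "transaction" advice runs only when a call enters through the proxy, not when a method calls a sibling method on this.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfInvocationDemo {

    interface Service { void outer(); void inner(); }

    static class ServiceImpl implements Service {
        public void outer() { inner(); }  // self-invocation: bypasses the proxy
        public void inner() { }
    }

    // Counts how many calls the "transaction" advice actually saw.
    static final AtomicInteger txCount = new AtomicInteger();

    // Wrap the target so each call through the proxy "opens a transaction".
    static Service proxied(final Service target) {
        return (Service) Proxy.newProxyInstance(
            Service.class.getClassLoader(),
            new Class<?>[] { Service.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] a) throws Throwable {
                    txCount.incrementAndGet();  // the "transaction" advice
                    return m.invoke(target, a);
                }
            });
    }

    public static void main(String[] args) {
        Service s = proxied(new ServiceImpl());
        s.outer();  // one interception only: inner() runs on the raw target
        System.out.println("intercepted calls: " + txCount.get());  // 1, not 2
    }
}
```

Moving inner() to a separate bean and calling it through that bean's proxy is precisely the fix the article recommends.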
October 15, 2010
by Yongjun Jiao
· 97,546 Views · 4 Likes
Mockito - Pros, Cons, and Best Practices
It's been almost 4 years since I wrote a blog post called "EasyMock - Pros, Cons, and Best Practices", and a lot has happened since. You don't hear about EasyMock much any more, and Mockito seems to have replaced it in mindshare. And for good reason: it is better.

A Good Humane Interface for Stubbing
Just like EasyMock, Mockito allows you to chain method calls together to produce less imperative-looking code. Here's how you can make a stub for the canonical Warehouse object:

Warehouse warehouse = Mockito.mock(Warehouse.class);
Mockito.when(warehouse.hasInventory(TALISKER, 50)).
    thenReturn(true);

I know, I like crazy formatting. Regardless, giving your System Under Test (SUT) indirect input couldn't be easier. There is no big advantage over EasyMock for stubbing behavior and passing a stub off to the SUT. Giving indirect input with mocks and then using standard JUnit asserts afterwards is simple with both tools, and both support the standard Hamcrest matchers.

Class (not just Interface) Mocks
Mockito allows you to mock out classes as well as interfaces. I know the EasyMock ClassExtension allowed you to do this as well, but it is a little nicer to have it all in one package with Mockito.

Supports Test Spies, not just Mocks
There is a difference between spies and mocks. Stubs allow you to give indirect input to a test (the values are read but never written), spies allow you to gather indirect output from a test (the mock is written to and verified, but does not give the test input), and mocks are both (your object gives indirect input to your test through stubbing and gathers indirect output through spying). The difference is illustrated by two code examples. In EasyMock, you only have mocks. You must set all input and output expectations before running the test, then verify afterwards:

// arrange
Warehouse warehouse = EasyMock.createMock(Warehouse.class);
EasyMock.expect(warehouse.hasInventory(TALISKER, 50)).
    andReturn(true).once();
EasyMock.expect(warehouse.remove(TALISKER, 50)).
    andReturn(true).once();
EasyMock.replay(warehouse);

// act
Order order = new Order(TALISKER, 50);
order.fill(warehouse);

// assert
EasyMock.verify(warehouse);

That's a lot of code, and not all of it is needed. The arrange section is setting up a stub (the warehouse has inventory) and setting up a mock expectation (the remove method will be called later). The assertion in all this is actually the little verify() method at the end. The main point of this test is that remove() was called, but that information is buried in a nest of expectations. Mockito improves on this by throwing out both the record/playback mode and a generic verify() method. It is shorter and clearer this way:

// arrange
Warehouse warehouse = Mockito.mock(Warehouse.class);
Mockito.when(warehouse.hasInventory(TALISKER, 50)).
    thenReturn(true);

// act
Order order = new Order(TALISKER, 50);
order.fill(warehouse);

// assert
Mockito.verify(warehouse).remove(TALISKER, 50);

The verify step with Mockito is spying on the results of the test, not recording and verifying. Less code and a clearer picture of what really is expected. Update: There is a separate spy API you can use in Mockito as well: http://mockito.googlecode.com/svn/branches/1.8.3/javadoc/org/mockito/Mockito.html#13

Better Void Method Handling
Mockito handles void methods better than EasyMock. The fluent API works fine with a void method, but in EasyMock there were some special methods you had to write. First, the Mockito code is fairly simple to read:

// arrange
Warehouse warehouse = Mockito.mock(Warehouse.class);

// act
Order order = new Order(TALISKER, 50);
order.fill(warehouse);

// assert
Mockito.verify(warehouse).remove(TALISKER, 50);

Here is the same in EasyMock.
Not as good:

// arrange
Warehouse warehouse = EasyMock.createMock(Warehouse.class);
warehouse.remove(TALISKER, 50);
EasyMock.expectLastCall().once();
EasyMock.replay(warehouse);

// act
Order order = new Order(TALISKER, 50);
order.fill(warehouse);

// assert
EasyMock.verify(warehouse);

Mock Object Organization Patterns
Both Mockito and EasyMock suffer from difficult maintenance. What I said in my original EasyMock post holds true for Mockito: The method chaining style interface is easy to write, but I find it difficult to read. When a test other than the one I'm working on fails, it's often very difficult to determine what exactly is going on. I end up having to examine the production code and the test expectation code to diagnose the issue. Hand-rolled mock objects are much easier to diagnose when something breaks... This problem is especially nasty after refactoring expectation code to reduce duplication. For the life of me, I cannot follow expectation code that has been refactored into shared methods. Now, four years later, I have a solution that works well for me. With a little care you can make your mocks reusable, maintainable, and readable. This approach was battle-tested over many months in an Enterprise Environment(tm). Create a private static method the first time you need a mock. Any important data needs to be passed in as a parameter. Using constants or "magic" fields hides important information and obfuscates tests. For example:

User user = createMockUser("userID", "name");
...
assertEquals("userID", result.id());
assertEquals("name", result.name());

Everything important is visible and in the test; nothing important is hidden. You need to completely hide the replay state behind this factory method if you're still on EasyMock. The mock framework in use is an implementation detail; try not to let it leak. Next, as your dependencies grow, be sure to always pass them in as factory method parameters.
If you need a User and a Role object, then don't create one method that creates both mocks. Each factory method instantiates one object; anything else it needs is a parameter. Compose your mock objects in the test method:

User user = createMockUser(
    "userID",
    "name",
    createMockRole("role1"),
    createMockRole("role2")
);

When each object type has a factory method, it becomes much easier to compose the different types of objects together. Reuse. But you can only reuse the methods when they are simple and have few dependencies; otherwise they become too specific and difficult to understand. The first time you need to reuse one of these methods, move the method to a utility class called "*Mocker", like UserMocker or RoleMocker. Follow a naming convention so that they are always easy to find. If you remembered to make the private factory methods static, then moving them should be very simple. Your client code ends up looking like this, but you can use static imports to clean that up:

User user = UserMocker.createMockUser(
    "userID",
    "name",
    RoleMocker.createMockRole("role1"),
    RoleMocker.createMockRole("role2")
);

Use overloaded methods liberally. Don't create one giant method with every possible parameter in the parameter list. There are good reasons to avoid overloading in production code, but this is test code. Use overloading so that the test methods only display data relevant to that test and nothing more. Using varargs can also help keep a test clean. Lastly, don't use constants. Constants hide the important information out of sight, at the top of the file where you can't see it, or in a Mocker class. It's OK to use constants within the test case, but don't define constants in the Mockers; it just hides relevant information and makes the test harder to read later.

Avoid Abstract Test Cases
Managing mock objects within abstract test cases has been very difficult for me, especially when managing replay and record states. I've given up mixing mock objects and abstract TestCase objects.
When something breaks it simply takes too long to diagnose. An alternative is to create custom assertion methods that can be reused. Beyond that, I've given up on abstract TestCase objects anyway, on the grounds of preferring composition over inheritance.

Don't Replace Asserts with Verify
My original comments about EasyMock are still relevant for Mockito: The easiest methods to understand and test are methods that perform some sort of work. You run the method and then use asserts to make sure everything worked. In contrast, mock objects make it easy to test delegation, which is when some object other than the SUT is doing work. Delegation means the method's purpose is to produce a side-effect, not actually perform work. Side-effect code is sometimes needed, but often more difficult to understand and debug. In fact, some languages don't even allow it! If your test code contains assert methods, then you have a good test. If your code doesn't contain asserts, and instead contains a long list of verify() calls, then you're relying on side effects. This is a unit-test bad smell, especially if there are several objects that need to be verified. Verifying several objects at the end of a unit test is like saying, "My test method needs to do several things: x, y, and z." The charter and responsibility of the method is no longer clear. This is a candidate for refactoring.

No More All or Nothing Testing
Mockito's verify() methods are much more flexible than EasyMock's. You can verify that only one or two methods on the mock were called, while EasyMock had just one coarse verify() method. With EasyMock I ended up littering the code with meaningless expectations, but not so in Mockito. This alone is reason enough to switch.

Failure: Expected X received X
For the most part, Mockito error messages are better than EasyMock's. However, you still sometimes see a failure that reads "Failure. Got X Expected X."
Basically, this means that your toString() methods produce the same results but equals() does not. Every user who starts out gets confused by this message at some point. Be warned.

Don't Stop Handrolling Mocks
Don't throw out hand-rolled mock objects. They have their place. Subclass and Override is a very useful technique for creating a testing seam; use it.

Learn to Write an ArgumentMatcher
Learn to write an ArgumentMatcher. There is a learning curve, but it's over quickly. This post is long enough, so I won't give an example. That's it. See you again in 4 years when the next framework comes out!

From http://hamletdarcy.blogspot.com/2010/09/mockito-pros-cons-and-best-practices.html
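For contrast with the framework examples above, a hand-rolled test spy for the canonical Warehouse/Order scenario might look like the following sketch (Warehouse and Order are minimal hypothetical types, written out here so the example is self-contained):

```java
public class HandRolledMockDemo {

    interface Warehouse {
        boolean hasInventory(String item, int qty);
        boolean remove(String item, int qty);
    }

    // Hand-rolled spy: canned answers in, recorded calls out.
    static class WarehouseSpy implements Warehouse {
        String removedItem;
        int removedQty;

        public boolean hasInventory(String item, int qty) {
            return true;  // stubbed indirect input
        }

        public boolean remove(String item, int qty) {
            removedItem = item;   // record the indirect output
            removedQty = qty;
            return true;
        }
    }

    static class Order {
        private final String item;
        private final int qty;

        Order(String item, int qty) { this.item = item; this.qty = qty; }

        void fill(Warehouse w) {
            if (w.hasInventory(item, qty)) {
                w.remove(item, qty);
            }
        }
    }

    public static void main(String[] args) {
        WarehouseSpy spy = new WarehouseSpy();
        new Order("TALISKER", 50).fill(spy);
        // Diagnosing a failure means reading this class,
        // not a chain of framework expectations.
        System.out.println(spy.removedItem + " " + spy.removedQty);
    }
}
```

When a test breaks, the whole behavior of the double is in one small, debuggable class, which is exactly the advantage the article claims for hand-rolled mocks.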
October 14, 2010
by Hamlet D'Arcy
· 56,547 Views
IntelliJ IDEA Shortcut Wallpaper
The fastest developers use the keyboard almost exclusively. To help you learn the IntelliJ IDEA shortcuts, I created a desktop wallpaper that lists the most common ones for Linux, Mac, and Windows users. Can't remember the command? Just pop up the desktop and check it out. Bored while waiting for a compile? Ditto. There are a few resolutions:

IntelliJ IDEA Linux/Windows 1440x900
IntelliJ IDEA Macintosh 1440x900
IntelliJ IDEA Linux/Windows 1680x1050
IntelliJ IDEA Macintosh 1680x1050
IntelliJ IDEA Linux/Windows 1920x1200
IntelliJ IDEA Macintosh 1920x1200

The shortcuts are based on the great JetBrains "Key Map" document, plus a few more that I like. As an alternative, print out the JetBrains Key Map doc and tape it to the sides of your monitor. If neither of these looks good at your resolution, then leave a comment and I'll scale one just for you. On a Mac? Send me the shortcuts in a text file and I will convert it for you. Not on IDEA? You can download the Eclipse desktop wallpaper from us, or the vim quick reference from our friend Ted Naleid. doce ut discas - Teach, that you yourself may learn.
October 13, 2010
by Hamlet D'Arcy
· 34,849 Views
Practical PHP Patterns: Plugin
The Separated Interface pattern can often be used to provide hook points to client code, in the form of interfaces to implement or classes to extend with client code. The right implementation to use in a part of the system can then be chosen via configuration: the Factory or Dependency Injection container with the largest scope would process the configuration and execute conditionals only one time, and inject the right Plugin as a collaborator of a standard object. This pattern is an evolution of the Separated Interface one, where the implementor package is not even under your maintenance, but is provided by some external developer who links their code to your work. Implementation In PHP the concept of compile time does not exist, apart from the just-in-time cached compilation of the scripts to operation codes, a phrase which you can peacefully ignore if you are not into caching. By the way, even if some checks are performed while loading and parsing the PHP code, PHP is by design a dynamic language where you can write nearly everything and it will not explode until executed. This design leaves open many possibilities for inserting plugins, but due to the lack of a compile phase there is often a lack of a clean separation between code and configuration. For example, database credentials are embedded in PHP code more often than in other languages. Think now of a framework or a library: you cannot change the code, but you must adapt or create a configuration to make it work. To implement a Plugin pattern, your application should strive towards the flexibility of a library: think of your production code as external and untouchable, and try to deploy a particular configuration to make it work and to modify a functionality. For example, extract it in a temporary working copy with svn checkout or git clone and hook in the necessary extensions. When you succeed, and your svn diff or git diff is clean, you'll have implemented a Plugin system.
Modification of vendor code (and you are the vendor here) is out of the question. Future changes Kent Beck says in Implementation Patterns that providing hooks via implementation and inheritance is one of the most effective ways to tie a framework down from future evolution. For example, once you have published an interface, you cannot add methods to it without breaking all the implementors. You can publish versioned interfaces, but this adds complexity to your application. With a published abstract class instead, you can include a default implementation for new methods, but you can't remove methods or refactor protected members without breaking Plugin implementors. This is the mirror image of providing an interface. Zend Framework includes both an interface and an abstract class for most of its components, but it does not get the management of extension points right (at least in the 1.x branch). When including the possibility of Plugins in your application, default as much as possible to private visibility and hide the internals of your Plugin hook point. What is left protected is a seam that screams "extend me", and the interfaces not marked as internal will be implemented by someone else. There is no built-in language mechanism to protect interfaces, so you'll have to rely on some kind of convention (like a particular prefix or namespace), but for private methods left at protected scope we can only blame ourselves. Configuration The configuration of your Plugin system can be managed with solutions of different levels of complexity, each more powerful than the previous one. Of course, you shouldn't provide a needlessly complex system when all you need is a class name. The first solution is indeed to insert class names into configuration files. This is a totally declarative approach, which uses simple INI files. This is commonly done in Zend Framework, for example with bootstrap resources, and in some cases can even manage dependencies of the Plugins.
Bootstrap resources can request other objects of the same kind, but cannot pull in arbitrary collaborators (unless they create them by themselves... ugly if you know what DI is). A second, widely applicable solution is to request Factory objects. This solution still involves writing PHP code, but it is one step towards textual configuration. A Factory object can fetch and inject all the dependencies into a Plugin without cluttering it with this kind of glue code (only a constructor or some setters). The problem with Factories is that they all tend to contain the same boilerplate code. A third solution can be used to provide quick construction of objects: Dependency Injection containers, which have recently been introduced even in PHP. A DI container is configured textually, via an XML or INI file containing parameters like the collaborators each object requires, its lifetime, and so on. DI containers are probably the future of flexible PHP applications, but beware of growing too dependent on them: they are a library like every other open source component, and should be isolated from your code as much as possible, as you would do with your models and Doctrine 2, or your services and Zend Framework. Example The code sample shows how to prepare a class for receiving an injected simple Factory that manages user-defined plugins.

// plugin_view.php
<?php echo $this->formatDate(time()), ".\n"; ?>

interface ViewHelperFactory
{
    public function getHelper($name);
}

class View
{
    private $factory;

    public function __construct(ViewHelperFactory $factory)
    {
        $this->factory = $factory;
    }

    public function render($script)
    {
        include $script;
    }

    /**
     * Forwards the call to the View Helper invoked.
     */
    public function __call($name, $args)
    {
        $callback = $this->factory->getHelper($name);
        return call_user_func_array($callback, $args);
    }
}

/**
 * Extension code.
 */
class UserDefinedFactory implements ViewHelperFactory
{
    private $helpers;

    /**
     * In this example, we only define a simple Plugin for
     * formatting dates using PHP's internal function.
     */
    public function __construct()
    {
        $this->helpers = array(
            'formatDate' => function($time) {
                return date('Y-m-d', $time);
            }
        );
    }

    public function getHelper($name)
    {
        return $this->helpers[$name];
    }
}

// client code
$view = new View(new UserDefinedFactory);
$view->render('plugin_view.php');
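The same Plugin idea translates to any language. Here is a hypothetical Java sketch of the factory-based hook point from the PHP example, with user-defined helpers registered in a map (the names View, ViewHelperFactory, and UserDefinedFactory mirror the PHP listing; the "shout" helper is invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class PluginDemo {

    // The published hook point: implementors supply helpers by name.
    interface ViewHelperFactory { ViewHelper getHelper(String name); }

    interface ViewHelper { String apply(Object arg); }

    // Framework code: forwards helper calls to the injected factory.
    static class View {
        private final ViewHelperFactory factory;

        View(ViewHelperFactory factory) { this.factory = factory; }

        String call(String helperName, Object arg) {
            return factory.getHelper(helperName).apply(arg);
        }
    }

    // Extension code: a user-defined plugin set, analogous to UserDefinedFactory.
    static class UserDefinedFactory implements ViewHelperFactory {
        private final Map<String, ViewHelper> helpers =
                new HashMap<String, ViewHelper>();

        UserDefinedFactory() {
            helpers.put("shout", new ViewHelper() {
                public String apply(Object arg) {
                    return String.valueOf(arg).toUpperCase();
                }
            });
        }

        public ViewHelper getHelper(String name) { return helpers.get(name); }
    }

    public static void main(String[] args) {
        View view = new View(new UserDefinedFactory());
        System.out.println(view.call("shout", "hello"));  // HELLO
    }
}
```

The framework class never names a concrete helper; which plugins exist is decided entirely by the injected factory, which is what makes the hook point configurable.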
October 11, 2010
by Giorgio Sironi
· 4,883 Views
Practical PHP Patterns: Special Case
The Special Case pattern is a very simple base pattern that describes a subclass representing, as the name suggests, a special case of the computation made by your program. Don't let the technical simplicity of the solution make you think the pattern is any less widespread. If vs. polymorphism The idea of the pattern is to implement two classes with the same interface or base superclass, and rely on polymorphism to target the special case, instead of inserting if and switch statements in the original class. The extracted piece of functionality can be a method to override (specialization in the Template Method pattern) or an independent collaborator injected into the client code (the Strategy pattern and many others). Dispatching a method call instead of inserting if statements is simpler to read and understand, as the code of the class has a lower overall cyclomatic complexity (fewer possible execution paths). If you ever tried to debug Doctrine 1 or a similar piece of software where the methods contain many nested ifs, you have probably been forced to insert echo statements to reveal the actually executed path, even when a single, isolated unit test was exercising the code. The alternative to some ifs is to introduce a Special Case. A rule of thumb for discovering whether the substitution is possible is to check if the condition of the if is based on state that is longer-lived than the parameters of the method it resides in. Ifs that depend only on the state of the object's fields or on collaborators are the simplest to replace. Null Object The Null Object pattern is a specialized version of Special Case (what a pun), and probably the most famous one.
Instead of returning false or null when a computation fails to provide a result, you return an object that, as a matter of fact, does nothing:

a User subclass AnonymousUser whose authorize() always returns false
an empty array (it can be thought of as a Null Object, even if it is a primitive value)
an empty ArrayObject.

When a Null Object is returned, it effectively removes the checks for a null value or empty result from the client code. The client object can call methods on the return value without worrying (calling a method on NULL is a fatal error in PHP and would crash a test suite). Or it can execute foreach() over a returned array and skip the cycle altogether if the array is empty. Since null can be passed wherever an object is expected, we may in fact use it as a Test Double in our test code, to ensure a collaborator is never called in a particular scenario. If you have a method that shouldn't refer to a collaborator, you can inject null in the constructor or via a setter. PHP differs from Java in its type hinting behavior: in Java you can pass null to this constructor:

public MyClass(Collaborator c) { ...

while in PHP you have to resort to this:

public function __construct(Collaborator $c = null) { ...

Implementation The Special Case can be a Flyweight, since it usually has no internal state: the behavior that would depend on state is encapsulated in the code itself. You can also have more than one Special Case for each superclass or interface: Fowler gives the example of a MissingCustomer and an AnonymousCustomer as special cases of the Customer class. By the way, every method of a Null Object should return a plain scalar value or another Special Case object. Note that with more than one level of Special Case objects, you may be violating the Law of Demeter: your client code accesses the first Special Case and then the other contained one, navigating the object graph instead of asking for its dependencies or sending a message.
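The AnonymousUser idea above can be sketched in Java (User, RegisteredUser, and AnonymousUser are hypothetical minimal types, since the article names them only informally):

```java
public class NullObjectDemo {

    interface User {
        boolean authorize(String action);
        String name();
    }

    static class RegisteredUser implements User {
        private final String name;

        RegisteredUser(String name) { this.name = name; }

        public boolean authorize(String action) { return true; }
        public String name() { return name; }
    }

    // The Null Object: a do-nothing User returned instead of null.
    static class AnonymousUser implements User {
        public boolean authorize(String action) { return false; }  // always denied
        public String name() { return "anonymous"; }
    }

    // Lookup never returns null, so callers can skip the null checks.
    static User findUser(String id) {
        return "42".equals(id) ? new RegisteredUser("alice") : new AnonymousUser();
    }

    public static void main(String[] args) {
        // Safe to call methods on the result without a null check.
        System.out.println(findUser("missing").authorize("edit"));  // false
        System.out.println(findUser("42").name());                  // alice
    }
}
```

Every call site that previously needed `if (user != null)` now just dispatches the method; the "nothing happens" branch lives in one place, inside AnonymousUser.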
Examples In this example we apply the pattern to go from this situation:

class Car
{
    protected $speed = 0;
    protected $type;

    public function __construct($type)
    {
        $this->type = $type;
    }

    /**
     * This if() is based only on the object state
     * and can probably be modelled differently.
     * You'll need two tests for this method.
     */
    public function accelerate()
    {
        if ($this->type == 'Ferrari') {
            $this->speed += 2;
        } else {
            $this->speed++;
        }
    }

    public function brake()
    {
        $this->speed--;
    }

    public function __toString()
    {
        return $this->type;
    }
}

// client code
$car = new Car('Fiat');
$ferrari = new Car('Ferrari');
$car->accelerate();
$ferrari->accelerate();
var_dump($car, $ferrari);

to this one:

abstract class Car
{
    protected $speed = 0;
    protected $type;

    public function brake()
    {
        $this->speed--;
    }

    public function __toString()
    {
        return $this->type;
    }
}

/**
 * One Special Case: a car with $type parametrized in the constructor
 * and ordinary acceleration properties.
 */
class OrdinaryCar extends Car
{
    public function __construct($type)
    {
        $this->type = $type;
    }

    public function accelerate()
    {
        $this->speed++;
    }
}

/**
 * Another Special Case: a car with fixed $type and greater acceleration.
 */
class FerrariCar extends Car
{
    /**
     * This is state encapsulated in code: you don't have to set it up
     * in tests or with configuration, only instantiate this class.
     */
    protected $type = 'Ferrari';

    public function accelerate()
    {
        $this->speed += 2;
    }
}

// client code
$car = new OrdinaryCar('Fiat');
$ferrari = new FerrariCar();
$car->accelerate();
$ferrari->accelerate();
var_dump($car, $ferrari);
October 7, 2010
by Giorgio Sironi
· 2,704 Views
5 Maven Tips
I have been working with Maven for 3 years now, and over that time I have learned some tips and tricks that help me work faster with Maven. In this article, I am going to talk about 5 of them. Maven rf option Most of us work in a multi-module environment, and it happens very often that a build fails at some module. It's a big pain to rerun the entire build. To save you from going through this pain, Maven has an option called rf (i.e. resume) with which you can resume your build from the module where it failed. So, if your build failed at myproject-commons, you can run the build from this module by typing:

mvn -rf myproject-commons clean install

Maven pl option This next Maven option helps you build specified reactor projects instead of building all projects. For example, if you need to build only two of your modules, myproject-commons and myproject-service, you can type:

mvn -pl myproject-commons,myproject-service clean install

This command will only build the commons and service projects. Maven Classpath ordering In Maven, dependencies are loaded in the order in which they are declared in the pom.xml. As of version 2.0.9, Maven introduced deterministic ordering of dependencies on the classpath. The ordering is now preserved from your pom, with dependencies added by inheritance added last. Knowing this can really help while debugging a NoClassDefFoundError. Recently, I faced NoClassDefFoundError: org/objectweb/asm/CodeVisitor while working on a story where cglib-2.1_3.jar was getting loaded before cglib-nodep-2.1_3.jar. And as cglib-2.1_3.jar does not have this CodeVisitor class, I was getting the error. Maven Classifiers We all know about the coordinates groupId, artifactId, and version that we use to describe a dependency, but there is a fourth qualifier called classifier. It allows distinguishing two artifacts that belong to the same POM but were built differently, and is appended to the filename after the version.
For example, if you want to build two separate jars, one for Java 5 and another for Java 6, you can use a classifier. It lets you locate your dependencies with a further level of granularity.

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <scope>test</scope>
    <classifier>jdk16</classifier>
</dependency>

Dependency Version Ranges Have you ever worked with a library that releases so often that you always want the latest version without changing the pom? It can be done using dependency version ranges. So, if you want to specify that you should always work with a version above a specified one, you will write:

[1.1.0,)

This line means that the version should always be greater than or equal to 1.1.0. You can read more about dependency version ranges at this link. These are my five tips. If you have any tips, please share.
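The semantics of a range like [1.1.0,) can be sketched with a simple numeric comparison. This is a deliberately simplified model of Maven's ordering (real Maven version comparison also handles qualifiers like -SNAPSHOT and -beta, which are ignored here):

```java
public class VersionRangeSketch {

    // Compare dotted numeric versions segment by segment,
    // e.g. "1.2.10" > "1.2.9". Missing segments count as 0.
    static int compare(String a, String b) {
        String[] x = a.split("\\.");
        String[] y = b.split("\\.");
        int n = Math.max(x.length, y.length);
        for (int i = 0; i < n; i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    // Models the half-open range "[lower,)": lower bound inclusive,
    // no upper bound.
    static boolean inRange(String version, String lowerInclusive) {
        return compare(version, lowerInclusive) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(inRange("1.1.0", "1.1.0"));  // true: bound is inclusive
        System.out.println(inRange("1.2.3", "1.1.0"));  // true
        System.out.println(inRange("1.0.9", "1.1.0"));  // false
    }
}
```

The square bracket in [1.1.0,) is what makes the lower bound inclusive; a round bracket, as in (1.1.0,), would exclude 1.1.0 itself.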
October 5, 2010
by Shekhar Gulati
· 65,550 Views · 5 Likes
Disable Javascript error in WPF WebBrowser control
I work with the WebBrowser control in WPF, and one of the most annoying problems I have with it is that sometimes you browse sites that raise a lot of JavaScript errors and the control becomes unusable. Thanks to my friend Marco Campi, yesterday I solved the problem. Marco pointed me to a link that does not deal with the WebBrowser control, but uses a simple JavaScript snippet to disable error handling in a web page. This solution is really simple, and seems to me the right way to solve the problem. The key to the solution is handling the Navigated event raised by the WebBrowser control. First of all, I have my WebBrowser control wrapped in a custom class to add functionality; in that class I declare this constant:

private const string DisableScriptError =
    @"function noError() { return true; } window.onerror = noError;";

This is the very same script as in the previous article. Then I handle the Navigated event:

void browser_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    InjectDisableScript();

Actually I'm doing a lot of other things inside the Navigated event handler, but the very first one is injecting into the page the script that disables JavaScript errors.
private void InjectDisableScript()
{
    HTMLDocumentClass doc = Browser.Document as HTMLDocumentClass;
    HTMLDocument doc2 = Browser.Document as HTMLDocument;
    // This creates the script that suppresses the errors
    IHTMLScriptElement scriptErrorSuppressed = (IHTMLScriptElement)doc2.createElement("SCRIPT");
    scriptErrorSuppressed.type = "text/javascript";
    scriptErrorSuppressed.text = DisableScriptError;
    IHTMLElementCollection nodes = doc.getElementsByTagName("head");
    foreach (IHTMLElement elem in nodes)
    {
        // Append the script to the head so it becomes active
        HTMLHeadElementClass head = (HTMLHeadElementClass)elem;
        head.appendChild((IHTMLDOMNode)scriptErrorSuppressed);
    }
}

This is the code that really solves the problem: the key is creating an IHTMLScriptElement containing the script and injecting it into the head of the page, which effectively disables the JavaScript errors. I have not fully tested it against many sites to verify that it intercepts all errors, but it works very well with many links that gave us a lot of problems in the past. Alk.
October 5, 2010
by Ricci Gian Maria
· 20,589 Views
Practical PHP Patterns: Value Object
Today comes a pattern that I have wanted to write about for a long time: the Value Object. A Value Object is conceptually a small, simple object, but a fundamental piece of Domain-Driven Design and of object-oriented programming.

Identity vs. equality

The definition of a Value Object, which establishes whether we can apply the pattern to a class, is the following: the equality of two Value Objects is based not on identity (them being the same object, checked with the === operator), but on their content. An example of a PHP built-in Value Object is the ArrayObject class, which is the object-oriented equivalent of an array. Two ArrayObjects are equal if they contain the same elements, not if they are the same identical object. Even if they were created in different places, as long as the contained scalars are equal (==) and the contained objects are identical (===), they are effectively "identical" for every purpose. In PHP, the == operator is used for equality testing, while === is used for identity testing. Custom equals() methods can also be created. Basically, the == operator checks that two scalars, like strings or integers, have the same value. When used on objects, it checks that their corresponding fields are equal. The === operator, when used between objects, returns true only if the two objects are actually the same (they refer to the same memory location, and one reflects the changes made to the other.)

Scalars, as objects

Other languages provide Value Objects also for basic values like strings and integers, while PHP limits itself to the ArrayObject and Date classes. Another example of a typical Value Object implementation is the Money class. Continuing with the Date example, we usually don't care whether two Date objects are the same object; we care instead whether they hold the exact same timestamp in their internals (which means they are effectively the same date.)
On the contrary, if two Users are the same, they should be the same object (in the memory graph) residing in a single memory location, or they should have an identity field (in databases) that allows us to distinguish unambiguously between two users. If you and I have the same name and address in the database, we are still not the same User. But if two Dates have the same day, month, and year, they are the same Date for us. This second kind of object, whose equality is not based on an Identity Field, is the Value Object. For simplicity, ORMs automatically enforce the identity rules via an Identity Map. When you retrieve your own User object via Doctrine 2, you can be sure that only one instance will be created and handed to you, even if you ask for it a thousand times (obviously other instances will be created for different User objects, like mine or the Pope's).

In Domain-Driven Design

The Value Object pattern is also a building block of Domain-Driven Design. In practice, it represents an object which is similar to a primitive type, but should be modelled after the domain's business rules. A Date cannot really be a primitive type: Date objects like Feb 31, 2010 or 2010-67-89 cannot exist, and their creation should never be allowed. This is one reason why we may use objects even when we already have basic, old procedural types. Another advantage of objects is that they can add (or prohibit) behavior: we can add traits like immutability, or type hinting. There's no reason to model a string as a class in PHP, due to the lack of support from the str*() functions; however, if those strings are all Addresses or PhoneNumbers... In fact, Value Objects are almost always immutable (not in ArrayObject's case) so that they can be shared between other domain objects. Immutability solves issues known as aliasing bugs: if a User changes its date of birth, and that Date object is shared with two other Users, those Users would incorrectly reflect the change.
But if the Date is a Value Object, it is immutable and the User cannot simply change one of its fields. Mutating a Value Object means creating a new one, often with a Factory Method on the Value Object itself, and placing it in the reference field that pointed to the old object. In a sense, if transitions are required, they can be modelled with a mechanism similar to the State pattern, where the Value Object returns other instances of its class that can be substituted for the original reference. Garbage collection will take care of abandoned Value Objects. To a certain extent, we can say that Value Objects are "passed by value" even though the object reference is passed by handle/reference/pointer like everything else: no method can write over a Value Object you pass to it. The issue with using Value Objects extensively is that there is no support from database mapping tools yet. Maybe we will be able to hack some support with the custom types of Doctrine 2, which include the native Date class, but real support (based on the JPA metadata schema) will come in future releases.

Examples

An example of a Value Object is presented here; my choice fell onto the Paint class, an example treated in the Domain-Driven Design book in discussions of other patterns. On this Value Object we have little but useful behavior (methods such as equals() and mix()), and of course immutability.

<?php
class Paint
{
    private $red;
    private $green;
    private $blue;

    public function __construct($red, $green, $blue)
    {
        $this->red = $red;
        $this->green = $green;
        $this->blue = $blue;
    }

    /**
     * Getters expose the fields of the Value Object we want to leave
     * accessible (often all).
     * There are no setters: once built, the Value Object is immutable.
     * @return integer
     */
    public function getRed()
    {
        return $this->red;
    }

    public function getGreen()
    {
        return $this->green;
    }

    public function getBlue()
    {
        return $this->blue;
    }

    /**
     * Actually, comparing the two objects with == would suffice.
     * @return boolean true if the two Paints are equal
     */
    public function equals($object)
    {
        return get_class($object) == 'Paint'
           and $this->red == $object->red
           and $this->green == $object->green
           and $this->blue == $object->blue;
    }

    /**
     * Every kind of algorithm, just to expand this example.
     * Since the objects are immutable, the resulting Paint is a brand
     * new object, which is returned.
     * @return Paint
     */
    public function mix(Paint $another)
    {
        return new Paint($this->integerAverage($this->red, $another->red),
                         $this->integerAverage($this->green, $another->green),
                         $this->integerAverage($this->blue, $another->blue));
    }

    private function integerAverage($a, $b)
    {
        return (int) (($a + $b) / 2);
    }
}

// client code
$blue = new Paint(0, 0, 255);
$red = new Paint(255, 0, 0);
var_dump($blue->equals($red));
var_dump($violet = $blue->mix($red));
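The same contract translates to other languages. Here is a rough Java analogue of the Paint Value Object (a sketch of mine, not from the article): immutable fields, content-based equals(), and mix() returning a brand new instance instead of mutating state.

```java
// Illustrative Java sketch of the Paint Value Object described above.
final class Paint {
    private final int red;
    private final int green;
    private final int blue;

    Paint(int red, int green, int blue) {
        this.red = red;
        this.green = green;
        this.blue = blue;
    }

    // Getters only; no setters, so the object is immutable once built.
    int getRed()   { return red; }
    int getGreen() { return green; }
    int getBlue()  { return blue; }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Paint)) {
            return false;
        }
        Paint p = (Paint) other;
        // Equality is based on content, not on identity.
        return red == p.red && green == p.green && blue == p.blue;
    }

    @Override
    public int hashCode() {
        return 31 * (31 * red + green) + blue;
    }

    // Mixing produces a new Paint; both operands are left untouched.
    Paint mix(Paint another) {
        return new Paint((red + another.red) / 2,
                         (green + another.green) / 2,
                         (blue + another.blue) / 2);
    }
}
```

Because equals() compares content rather than references, two Paints built independently from the same RGB values are interchangeable, exactly as in the PHP version.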
September 29, 2010
by Giorgio Sironi
· 24,047 Views · 1 Like
Java Barcode API
Originally, barcodes were a 1D representation of data using the width and spacing of bars. Common barcode types are the UPC barcodes seen on product packages. There are 2D barcodes as well (they are still called barcodes even though they don't use bars). A common example of a 2D barcode is the QR code (shown on right), which is commonly used by mobile phone apps. You can read the history and more info about barcodes on Wikipedia. There is an open source Java library called 'zxing' (Zebra Crossing) which can read and write many different barcode formats. I tested zxing and it was able to read a barcode embedded in the middle of a 100 dpi grayscale busy text document! This article demonstrates how to use zxing to read and write barcodes from a Java program.

Getting the library

It would be nice if the jars were hosted in a Maven repo somewhere, but there is no plan to do that (see Issue 88). Since I could not find the binaries available for download, I decided to download the source code and build the binaries, which was actually quite easy. The source code of the library is available on Google Code. At the time of writing, 1.6 is the latest version of zxing.

1. Download the release file ZXing-1.6.zip (which consists mostly of source files) from here.
2. Unzip the file into a local directory.
3. Build 2 jar files from the downloaded source: core.jar and javase.jar.

Building core.jar

cd zxing-1.6/core
mvn install

This will install the jar in your local Maven repo. Though not required, you can also deploy it to your company's private repo by using mvn:deploy or by manually uploading it to your Maven repository. There is an Ant script to build the jar as well.

Building javase.jar

Repeat the same procedure to get javase.jar:

cd zxing-1.6/javase
mvn install

Including the libraries in your project

If you are using Ant, add core.jar and javase.jar to your project's classpath.
If you are using Maven, add the following to your pom.xml:

<dependency>
  <groupId>com.google.zxing</groupId>
  <artifactId>core</artifactId>
  <version>1.6-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>com.google.zxing</groupId>
  <artifactId>javase</artifactId>
  <version>1.6-SNAPSHOT</version>
</dependency>

Once you have the jars included in your project's classpath, you are ready to read and write barcodes from Java!

Reading a Bar Code from Java

You can read a bar code by first loading the image as an input stream and then decoding it:

InputStream barCodeInputStream = new FileInputStream("file.jpg");
BufferedImage barCodeBufferedImage = ImageIO.read(barCodeInputStream);
LuminanceSource source = new BufferedImageLuminanceSource(barCodeBufferedImage);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Reader reader = new MultiFormatReader();
Result result = reader.decode(bitmap);
System.out.println("Barcode text is " + result.getText());

Writing a Bar Code from Java

You can encode a small text string as follows:

String text = "98376373783"; // this is the text that we want to encode
int width = 400;
int height = 300; // change the height and width as per your requirement
// (ImageIO.getWriterFormatNames() returns a list of supported formats)
String imageFormat = "png"; // could be "gif", "tiff", "jpeg"
BitMatrix bitMatrix = new QRCodeWriter().encode(text, BarcodeFormat.QR_CODE, width, height);
MatrixToImageWriter.writeToStream(bitMatrix, imageFormat, new FileOutputStream(new File("qrcode_97802017507991.png")));

In the above example, the encoded bar code is written to the file "qrcode_97802017507991.png" (click to see the output).

JavaDocs and Documentation

The Javadocs are part of the downloaded zip file. You can find a list of supported bar code formats in the Javadocs. Open the following file to see them:

zxing-1.6/docs/javadoc/index.html

From http://www.vineetmanohar.com/2010/09/java-barcode-api/
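The imageFormat string in the writing example must name a format the local JVM can actually write. Since the article mentions ImageIO.getWriterFormatNames(), a small stdlib-only check (no zxing jars required; class name is my own) can print the valid options before you pick one:

```java
import java.util.Arrays;
import javax.imageio.ImageIO;

class ListWriterFormats {
    public static void main(String[] args) {
        // Prints the informal format names this JVM can write,
        // e.g. png, jpg, gif (exact list depends on the JVM).
        System.out.println(Arrays.toString(ImageIO.getWriterFormatNames()));
    }
}
```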
September 27, 2010
by Vineet Manohar
· 111,950 Views · 2 Likes