The Latest Coding Topics

Simple RESTful web services with Glassfish
Here’s a quick guide to creating a RESTful web service with Glassfish using JAX-RS. First create a new Maven project called restwebdemo using the jee6-sandbox-archetype so we have a model and some data to work with. To get this working with Glassfish, open the persistence.xml file and change the jta-data-source name to jdbc/__default. Also, make sure that JavaDB is up and running by going to $glassfish_dir/bin and typing asadmin start-database. Verify that the application is working correctly by going to http://localhost:8080/restwebdemo/ and you should get a list of courses.

Before we start getting to the interesting stuff, we have one more boring piece of configuration to perform specific to web services. We need to add the Jersey servlet container to our web.xml file:

<servlet>
    <servlet-name>Jersey Web Application</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>Jersey Web Application</servlet-name>
    <url-pattern>/rest/*</url-pattern>
</servlet-mapping>

This also tells Jersey to handle URLs starting with /rest and pass them along to our web service methods. Now we can dive right in and create a new service bean that will respond to requests for web services. For now we’ll just return a simple message from a POJO.

@Path("sample")
public class SimpleService {

    @Path("greet")
    @GET
    public String doGreet() {
        return "Hello Stranger, the time is " + new Date();
    }
}

The @Path annotation on the class indicates that this is a root resource class, and the path value given specifies the base URI for all the web service methods contained in the class. On the doGreet method we have @Path, which is used to specify the path template this method should match. The @GET annotation is used to differentiate between a sub-resource method that handles the actual web service request and a sub-resource locator method that returns an object that will instead be used to handle the request. In this case, the method has the @GET annotation, which means this method handles the request and returns the result. If you navigate to http://localhost:8080/restwebdemo/rest/sample/greet/ you should see a welcome message with the current date and time.

Now we’ll look at adding parameterized web services that extract parameters from the request URL and use them to form the output. Add the following method to the web service class:

@Path("sayHello/{name}")
@GET
public String doSayHello(@PathParam("name") String name) {
    return "Hello there " + name;
}

Again we have the @Path annotation to indicate what URLs this method will match, and this time we have {name} added to the URL. This lets us extract a part of the URL and give it a name. This name is used in the @PathParam annotation in the method signature to assign the URL fragment to the name parameter. To test our new method, redeploy the application and go to the URL http://localhost:8080/restwebdemo/rest/sample/sayHello/Andy to get the response "Hello there Andy".

We can also use request parameters to provide values to the method by using the @QueryParam annotation. We’ll create another method that is similar but uses a query parameter instead.

@Path("sayHello")
@GET
public String doSayHelloWithRequestParam(@QueryParam("name") String name) {
    return "Hi there " + name;
}

This time, the URL to use is http://localhost:8080/restwebdemo/rest/sample/sayHello?name=Andy to get the same message. To make things more interesting, let’s add a new page that lets us enter a name in a form and submit it to the web service.
Add a new page called form.html with the following content (a simple form with a text field called name that submits to our service):

<html>
  <body>
    <form method="GET" action="rest/sample/sayHello">
      Name: <input type="text" name="name"/>
      <input type="submit" value="Submit"/>
    </form>
  </body>
</html>

Go to this page at http://localhost:8080/restwebdemo/form.html, enter your name and click submit, and you should be greeted by name on the next page. Notice that we had to set the form method to GET because our web service is only set up to respond to GET requests. If we change the form method to POST we get the following error message:

HTTP Status 405 - Method Not Allowed
type: Status report
message: Method Not Allowed
description: The specified HTTP method is not allowed for the requested resource (Method Not Allowed).

Remember, with REST the HTTP verbs carry meaning, so JAX-RS is strict about how it matches the method to be called. To solve this problem, we can add a new method to handle form POSTs like so:

@Path("sayHello")
@POST
public String doSayHelloWithFormParam(@FormParam("name") String name) {
    return "Hi there " + name;
}

Here we changed the @GET to a @POST to allow the different verb and changed the annotation on the name method parameter to @FormParam. The path remains the same because we can have service methods that match the same path but for different request verbs. We can even have the same verb and path as long as the content type returned is different. The content type specifies the type of output that is returned from the method. It is set by adding a @javax.ws.rs.Produces annotation (not to be confused with the CDI Produces annotation). The annotation takes a string parameter that indicates the type of media returned from the method. Common media types are defined as constants in the MediaType class, so you can use:

@Path("sayHello")
@POST
@Produces(MediaType.APPLICATION_XML)
public String doSayHelloWithFormParam(@FormParam("name") String name) {
    return "<message>Hi there " + name + "</message>";
}

If you run your form again and post it, you will get an XML response as follows:

<message>Hi there Andy</message>

Depending on your browser, if you return just the text you will get an error, because the plain text isn’t valid XML and the browser expects XML, since that is the response type set on the response from the web service. To finish up, we are going to do something a little more interesting: we will create a web service to return the name of a course from the database using the sandbox data built into the archetype. For various reasons, we will take the most direct route to getting data access, which is to make the web service bean a stateless bean and inject a persistence context using the @PersistenceContext annotation. Add the @Stateless annotation to the SimpleService class and an entity manager field annotated with @PersistenceContext along with the getters and setters. Add a new method to return the course name for the given course id parameter. We will return it as text for the time being:

@Path("courseName/{id}")
@GET
public String getCourseNameFromId(@PathParam("id") Long id) {
    Course c = entityManager.find(Course.class, id);
    if (c == null) {
        return "Not Found, try the index page and come back";
    } else {
        return c.getTitle();
    }
}

Note that automatic type conversion takes place and the id value is converted to a Long. If the course is not found, we suggest the user goes to the main page of the demo. We aren’t just being overly helpful; the test data is generated when you request one of the application pages for the first time. With the current persistence settings, when you redeploy, the database is dropped and rebuilt, so it will be empty.
You need to go to the front page to automatically create the data and then go back to your page to view the course. An example URL is http://localhost:8080/restwebdemo/rest/sample/courseName/124. Of course you could grab the course object and build your own XML or JSON response to send back to the client, or use a third party library like Jackson to build the JSON response. However, as we’ll see next time, Java EE 6 has all these goodies built in for us, and with a few annotations, we’ll be slinging objects back and forth in no time at all. You can download the source code for the project from here. Simply unzip, build with maven (mvn clean package) and deploy to Glassfish.
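If you want to exercise these endpoints outside the browser, a plain JDK HTTP client is enough. The following is a minimal sketch, assuming the restwebdemo application is deployed locally as described above; the class name is arbitrary.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestSmokeTest {

    public static void main(String[] args) throws Exception {
        // Call the path-parameter version of the sayHello service shown earlier
        URL url = new URL("http://localhost:8080/restwebdemo/rest/sample/sayHello/Andy");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // expected: Hello there Andy
            }
        } finally {
            reader.close();
            connection.disconnect();
        }
    }
}

The same approach works for the form variant if you set the request method to POST and write a name=... body with the application/x-www-form-urlencoded content type.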
February 4, 2011
by Andy Gibson
· 60,422 Views
Spring Data with Redis
The Spring Data project provides a solution for accessing data stored in new emerging technologies like NoSQL databases and cloud-based services. When we look into the SpringSource git repository we see a lot of spring-data sub-projects:

spring-data-commons: common interfaces and utility classes for the other spring-data projects.
spring-data-column: support for column-based databases. It has not started yet, but there will be support for Cassandra and HBase.
spring-data-document: support for document databases. Currently MongoDB and CouchDB are supported.
spring-data-graph: support for graph-based databases. Currently Neo4j is supported.
spring-data-keyvalue: support for key-value databases. Currently Redis and Riak are supported, and Membase will probably be supported in the future.
spring-data-jdbc-ext: JDBC extensions; for example, Oracle RAC connection failover is implemented.
spring-data-jpa: simplifies JPA-based data access layers.

I would like to share with you how you can use Redis. The first step is to download it from the redis.io web page. try.redis-db.com is a useful site where we can run Redis commands. It also provides a step-by-step tutorial. This tutorial shows us all the structures that Redis supports (list, set, sorted set and hashes) and some useful commands. A lot of reputable sites use Redis today. After downloading and unpacking we should compile Redis (version 2.2; the release candidate is the preferable one to use since some commands do not work in version 2.0.4):

make
sudo make install

Once we run these commands, the following five binaries are available:

redis-benchmark - for benchmarking the Redis server.
redis-check-aof - checks the AOF (append-only file) and can repair it.
redis-check-dump - checks rdb files for unprocessable opcodes.
redis-cli - the Redis client.
redis-server - the Redis server.

We can test the Redis server:

redis-server
[1055] 06 Jan 18:19:15 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[1055] 06 Jan 18:19:15 * Server started, Redis version 2.0.4
[1055] 06 Jan 18:19:15 * The server is now ready to accept connections on port 6379
[1055] 06 Jan 18:19:15 - 0 clients connected (0 slaves), 1074272 bytes in use

and the Redis client:

redis-cli
redis> set my-super-key "my-super-value"
OK

Now we create a simple Java project in order to show how simple the spring-data-redis module really is:

mvn archetype:create -DgroupId=info.pietrowski -DpackageName=info.pietrowski.redis -DartifactId=spring-data-redis -Dpackage=jar

Next we have to add the Spring milestone repository to pom.xml and add spring-data-redis as a dependency; after that all required dependencies will be fetched. Next we create a resources folder under the main folder and create application.xml, which will hold all the configuration. We can configure the JedisConnectionFactory in two different ways: one, we can provide a JedisShardInfo object in the shardInfo property; two, we can provide host (default localhost), port (default 6379), password (default empty) and timeout (default 2000) properties. One thing to keep in mind is that the JedisShardInfo object takes precedence and allows you to set up a weight, but only allows constructor injection. We can set up the factory to use connection pooling by setting the value of the pooling property to 'true' (the default). See the application.xml comments for three different ways of configuring it. Note: there are two different libraries supported, Jedis and JRedis.
They have very similar names and the factory names differ only slightly. See the difference:

org.springframework.data.keyvalue.redis.connection.jedis.JedisConnectionFactory
org.springframework.data.keyvalue.redis.connection.jredis.JredisConnectionFactory

Similar to what we do elsewhere in Spring, we configure the template object by providing it with a connection factory. We will perform all the operations through this template object. By default we need to provide only the connection factory, but there are more properties we can set:

exposeConnection (default false) - whether we return the real connection or a proxy object.
keySerializer, hashKeySerializer, valueSerializer, hashValueSerializer (default JdkSerializationRedisSerializer) - delegate serialization to the Java serialization mechanism.
stringSerializer (default StringRedisSerializer) - a simple String to byte[] (and back) serializer with UTF-8 encoding.

We are ready to execute some code which will talk to the Redis instance. Spring Data provides us with two ways of interacting: the first is by using the execute method and providing a RedisCallback object; the second is by using the *Operations helpers (explained later). When we are using RedisCallback we have access to the low-level Redis commands; see this list of interfaces (I won't put all the methods here because it is a huge list):

RedisConnection - gathers all Redis commands plus connection management.
RedisCommands - gathers all Redis commands (listed below).
RedisHashCommands - hash-specific Redis commands.
RedisListCommands - list-specific Redis commands.
RedisSetCommands - set-specific Redis commands.
RedisStringCommands - key/value specific Redis commands.
RedisTxCommands - transaction/batch specific Redis commands.
RedisZSetCommands - sorted-set-specific Redis commands.

Check the RedisCallbackExample class. This was the hard way, and the problem is that we have to convert our objects into byte arrays in both directions; the second way is easier. Spring Data provides us with Operations objects, so we have a much simpler API and all byte<->object conversion is done by the serializer we set up (or the default one). Higher-level API (you will easily recognize the *Operations equivalents of the *Commands interfaces):

HashOperations - Redis hash operations.
ListOperations - Redis list operations.
SetOperations - Redis set operations.
ValueOperations - Redis 'string' operations.
ZSetOperations - Redis sorted set operations.

Most of the methods take a key as the first parameter, so there is an even better API for multiple operations on the same key:

BoundHashOperations - Redis hash operations for a specific key.
BoundListOperations - Redis list operations for a specific key.
BoundSetOperations - Redis set operations for a specific key.
BoundValueOperations - Redis 'string' operations for a specific key.
BoundZSetOperations - Redis sorted set operations for a specific key.

Check the RedisCallbackExample class to see some easy examples of *Operations usage. One important thing to mention is that you should use string serializers for keys, otherwise you will have problems with other clients, because standard Java serialization adds class information. Otherwise you end up with keys such as:

"\xac\xed\x00\x05t\x00\x05atInt"
"\xac\xed\x00\x05t\x00\nmySuperKey"
"\xac\xed\x00\x05t\x00\bsuperKey"

Up until now we have just looked at the API for Redis, but Spring Data offers more. All the cool stuff is in the org.springframework.data.keyvalue.redis.support package and its sub-packages. We have:

RedisAtomicInteger - atomic integer (CAS operations) backed by Redis.
RedisAtomicLong - same as the previous, for Long.
RedisList - Redis extension of List, Queue, Deque, BlockingDeque and BlockingQueue with two additional methods: List range(start, end) and RedisList trim(start, end).
RedisSet - Redis extension of Set with additional methods: diff, diffAndStore, intersect, intersectAndStore, union, unionAndStore.
RedisZSet - Redis extension for sorted sets. Note that a Comparator is not applicable here, so this interface extends the normal Set and provides methods similar to SortedSet.
RedisMap - Redis extension of Map with an additional Long increment(key, delta) method.

Every interface currently has one default implementation. Check application-support.xml for examples of configuration and RedisSupportClassesExample for examples of use. There is a lot of useful information in the comments as well.

Summary

The library is a first milestone release, so there are minor bugs, the documentation isn't as polished as we are used to, and the current version requires a not-yet-stable Redis server (the 2.2 release candidate). But this is definitely a great library which allows us to use all this cool NoSQL stuff in a "standard" Spring data-access manner. Awesome job! This post is only useful if you check out the code from Bitbucket; for the lazy ones, the spring-data-redis zip file is available as well. This post originally appeared at http://pietrowski.info/2011/01/spring-data-redis-tutorial/
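To make the RedisCallback route described above concrete, here is a minimal standalone sketch. It assumes the milestone-era packages named in this article (org.springframework.data.keyvalue.redis.*); exact class locations and the template's generic signature may differ in later releases, so treat it as an outline rather than a drop-in example.

import org.springframework.dao.DataAccessException;
import org.springframework.data.keyvalue.redis.connection.RedisConnection;
import org.springframework.data.keyvalue.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.keyvalue.redis.core.RedisCallback;
import org.springframework.data.keyvalue.redis.core.RedisTemplate;

public class RedisCallbackSketch {

    public static void main(String[] args) {
        // Connection factory with the defaults mentioned above (localhost:6379, pooling enabled)
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.afterPropertiesSet();

        RedisTemplate<String, String> template = new RedisTemplate<String, String>();
        template.setConnectionFactory(factory);
        template.afterPropertiesSet();

        // The "hard way": low-level commands make us deal with byte arrays in both directions
        String value = template.execute(new RedisCallback<String>() {
            public String doInRedis(RedisConnection connection) throws DataAccessException {
                connection.set("my-super-key".getBytes(), "my-super-value".getBytes());
                return new String(connection.get("my-super-key".getBytes()));
            }
        });
        System.out.println(value); // my-super-value
    }
}

With the *Operations helpers the same write shrinks to a single call, and the byte[] conversion is handled by the configured serializers.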
February 3, 2011
by Sebastian Pietrowski
· 30,623 Views
Work with Multiple Instances of NetBeans IDE
Generally NetBeans IDE works with a default workspace (user directory). In this scenario, if you want to manage multiple sets of projects, you have to create project groups, which allow you to switch between sets of projects as and when required. The problem with this is that if the workspace is damaged and you delete it, all your settings are lost and everything is reset to the default NetBeans IDE settings. The solution is to work with multiple workspaces, that is, multiple instances, of NetBeans IDE. And here is how to do that:

1. Right-click the NetBeans IDE shortcut and choose "Properties".
2. In the Shortcut tab, in Target, append "--userdir [Folder Path]" after netbeans.exe.
3. Make another shortcut of NetBeans IDE and provide a different folder path.

Like this you can create as many workspaces as you want. If one workspace is damaged, it will not affect the settings of the others. Isn't that great? :) Also posted at: http://goo.gl/fb/rrvnA
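For example, two shortcuts pointing at the same installation but different user directories might have Target values like the following (the installation path here is only an illustration; use your own):

"C:\Program Files\NetBeans 6.9\bin\netbeans.exe" --userdir "D:\netbeans\workspace-client"
"C:\Program Files\NetBeans 6.9\bin\netbeans.exe" --userdir "D:\netbeans\workspace-personal"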
February 1, 2011
by Ravindra Gullapalli
· 42,461 Views · 2 Likes
JUnit 4.9 - Class and Suite Level Rules
If you have worked with the JUnit Rule API, which was introduced in version 4.7, then you probably also think that Rules are very useful. For example, there is a Rule called TemporaryFolder which creates files and folders before a test is executed and deletes them after the test method finishes (whether the test passes or fails). For those who are not familiar with the Rule API, a Rule is an alteration in how a test method, or set of methods, is run and reported. Before version 4.9, a Rule could only be applied to a test method, not to a test class or a JUnit test suite. But with JUnit 4.9, which is not yet released, you can use the @ClassRule annotation to apply a rule to a test class or a test suite. If you want to use JUnit 4.9, download it from the JUnit git repository.

The @ClassRule annotation can be applied to any public static field of type org.junit.rules.TestRule. Class-level rules can be applied in scenarios where you would normally use the @BeforeClass and @AfterClass annotations: for example, when you want to start a server or any other external resource before a test class or test suite, or when you want to make sure that your test suite or test class runs within a specified time. The advantage of class-level rules is that they can be reused among different modules and classes.

Let's take an example where we want to make sure that our test suite runs within a given number of seconds, otherwise the tests should time out. As you can see below, the test suite AllTests runs TestCase1 and TestCase2. I have defined a suite-level rule which makes sure that the tests run within three seconds, otherwise the suite will fail.

Test Suite:

@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase2.class })
public class AllTests {

    @ClassRule
    public static Timeout timeout = new Timeout(3000);
}

TestCase1:

public class TestCase1 {

    @Test
    public void test1() throws Exception {
        Thread.sleep(1000);
        Assert.assertTrue(true);
    }
}

TestCase2:

public class TestCase2 {

    @Test
    public void test2() throws Exception {
        Thread.sleep(1000);
        Assert.assertTrue(true);
    }
}

This is the most important feature which will be released in version 4.9 of JUnit.
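As a sketch of the "start an external resource once per class" scenario mentioned above, the following test uses org.junit.rules.ExternalResource (which implements TestRule in 4.9) to start the JDK's built-in HttpServer before the first test method and stop it after the last one; the HTTP server simply stands in for whatever real resource you need.

import java.net.InetSocketAddress;

import org.junit.Assert;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

import com.sun.net.httpserver.HttpServer;

public class ServerRuleTest {

    private static HttpServer server;

    // before() runs once before the first test in the class, after() once after the last test
    @ClassRule
    public static ExternalResource serverResource = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            server = HttpServer.create(new InetSocketAddress(0), 0); // bind to any free port
            server.start();
        }

        @Override
        protected void after() {
            server.stop(0);
        }
    };

    @Test
    public void serverIsListening() {
        Assert.assertTrue(server.getAddress().getPort() > 0);
    }
}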
January 31, 2011
by Shekhar Gulati
· 36,997 Views
HOWTO: Partially Clone an SVN Repo to Git, and Work With Branches
I've blogged a few times now about Git (which I pronounce with a hard 'g' a la "get", as it's supposed to be named for Linus Torvalds, a self-described git, but which I've also heard called pronounced with a soft 'g' like "jet"). Either way, I'm finding it way more efficient and less painful than either CVS or SVN combined. So, to continue this series ([1], [2], [3]), here is how (and why) to pull an SVN repo down as a Git repo, but with the omission of old (irrelevant) revisions and branches. Using SVN for SVN repos In days of yore when working with the JBoss Tools and JBoss Developer Studio SVN repos, I would keep a copy of everything in trunk on disk, plus the current active branch (most recent milestone or stable branch maintenance). With all the SVN metadata, this would eat up substantial amounts of disk space but still require network access to pull any old history of files. The two repos were about 2G of space on disk, for each branch. Sure, there's tooling to be able to diff and merge between branches w/o having both branches physically checked out, but nothing beats the ability to place two folders side by side OFFLINE for deep comparisons. So, at times, I would burn as much as 6-8G of disk simply to have a few branches of source for comparison and merging. With my painfullly slow IDE drive, this would grind my machine to a halt, especially when doing any SVN operation or counting files / disk usage. Using Git for SVN repos naively Recently, I started using git-svn to pull the whole JBDS repo into a local Git repo, but it was slow to create and still unwieldy. And the JBoss Tools repo was too large to even create as a Git repo - the operation would run out of memory while processing old revisions of code to play forward. At this point, I was stuck having individual Git repos for each JBoss Tools component (major source folder) in SVN: archives, as, birt, bpel, build, etc. It worked, but replicating it when I needed to create a matching repo-collection for a branch was painful and time-consuming. As well, all the old revision information was eating even more disk than before: jbosstools' trunk as multiple git-svn clones: 6.1G devstudio's trunk as single git-svn clone: 1.3G So, now, instead of a couple Gb per branch, I was at nearly 4x as much disk usage. But at least I could work offline and not deal w/ network-intense activity just to check history or commit a change. Still, far from ideal. Cloning SVN with standard layout & partial history This past week, I discovered two ways to make the git-svn experience at least an order of magnitude better: Standard layout (-s) - this allows your generated Git repo to contain the usual trunk, branches/* and tags/* layout that's present in the source SVN repo. This is a win because it means your repo will contain the branch information so you can easily switch between branches within the same repo on disk. No more remote network access needed! Revision filter (-r) - this allows your generated Git repo to start from a known revision number instead of starting at its birth. Now instead of taking hours to generate, you can get a repo in minutes by excluding irrelevant (ancient) revisions. So, why is this cool? 
Because now, instead of having 2G of source+metadata to copy when I want to do a local comparison between branches, the size on disk is merely: jbosstools' trunk as single git-svn clone w/ trunk and single branch: 1.3G devstudio's trunk as single git-svn clone w/ trunk and single branch: 0.13G So, not only is the footprint smaller, but the performance is better and I need never do a full clone (or svn checkout) again - instead, I can just copy the existing Git repo, and rebase it to a different branch. Instead of hours, this operation takes seconds (or minutes) and happens without the need for a network connection. Okay, enough blather. Show me the code! Check out the repo, including only the trunk & most recent branch # Figure out the revision number based on when a branch was created, then # from r28571, returns -r28571:HEAD rev=$(svn log --stop-on-copy \ http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x \ | egrep "r[0-9]+" | tail -1 | sed -e "s#\(r[0-9]\+\).\+#-\1:HEAD#") # now, fetch repo starting from the branch's initial commit git svn clone -s $rev http://svn.jboss.org/repos/jbosstools jbosstools_GIT Now you have a repo which contains trunk & a single branch git branch -a # list local (Git) and remote (SVN) branches * master remotes/jbosstools-3.2.x remotes/trunk Switch to the branch git checkout -b local/jbosstools-3.2.x jbosstools-3.2.x # connect a new local branch to remote one Checking out files: 100% (609/609), done. Switched to a new branch 'local/jbosstools-3.2.x' git svn info # verify now working in branch URL: http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x Repository Root: http://svn.jboss.org/repos/jbosstools Switch back to trunk git checkout -b local/trunk trunk # connect a new local branch to remote trunk Switched to a new branch 'local/trunk' git svn info # verify now working in branch URL: http://svn.jboss.org/repos/jbosstools/trunk Repository Root: http://svn.jboss.org/repos/jbosstools Rewind your changes, pull updates from SVN repo, apply your changes; won't work if you have local uncommitted changes git svn rebase Fetch updates from SVN repo (ignoring local changes?) git svn fetch Create a new branch (remotely with SVN) svn copy \ http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x \ http://svn.jboss.org/repos/jbosstools/branches/some-new-branch From http://divby0.blogspot.com/2011/01/howto-partially-clone-svn-repo-to-git.html
January 28, 2011
by Nick Boldt
· 34,704 Views
Apache Solr: Get Started, Get Excited!
we've all seen them on various websites. crappy search utilities. they are a constant reminder that search is not something you should take lightly when building a website or application. search is not just google's game anymore. when a java library called lucene was introduced into the apache ecosystem, and then solr was built on top of that, open source developers began to wield some serious power when it came to customizing search features. in this article you'll be introduced to apache solr and a wealth of applications that have been built with it. the content is divided as follows: introduction setup solr applications summary 1. introduction apache solr is an open source search server. it is based on the full text search engine called apache lucene . so basically solr is an http wrapper around an inverted index provided by lucene. an inverted index could be seen as a list of words where each word-entry links to the documents it is contained in. that way getting all documents for the search query "dzone" is a simple 'get' operation. one advantage of solr in enterprise projects is that you don't need any java code, although java itself has to be installed. if you are unsure when to use solr and when lucene, these answers could help. if you need to build your solr index from websites, you should take a look into the open source crawler called apache nutch before creating your own solution. to be convinced that solr is actually used in a lot of enterprise projects, take a look at this amazing list of public projects powered by solr . if you encounter problems then the mailing list or stackoverflow will help you. to make the introduction complete i would like to mention my personal link list and the resources page which lists books, articles and more interesting material. 2. setup solr 2.1. installation as the very first step, you should follow the official tutorial which covers the basic aspects of any search use case: indexing - get the data of any form into solr. examples: json, xml, csv and sql-database. this step creates the inverted index - i.e. it links every term to its documents. querying - ask solr to return the most relevant documents for the users' query to follow the official tutorial you'll have to download java and the latest version of solr here . more information about installation is available at the official description . next you'll have to decide which web server you choose for solr. in the official tutorial, jetty is used, but you can also use tomcat. when you choose tomcat be sure you are setting the utf-8 encoding in the server.xml . i would also research the different versions of solr, which can be quite confusing for beginners: the current stable version is 1.4.1. use this if you need a stable search and don't need one of the latest features. the next stable version of solr will be 3.x the versions 1.5 and 2.x will be skipped in order to reach the same versioning as lucene. version 4.x is the latest development branch. solr 4.x handles advanced features like language detection via tika, spatial search , results grouping (group by field / collapsing), a new "user-facing" query parser ( edismax handler ), near real time indexing, huge fuzzy search performance improvements, sql join-a like feature and more. 2.2. indexing if you've followed the official tutorial you have pushed some xml files into the solr index. this process is called indexing or feeding. 
there are a lot more possibilities to get data into solr: using the data import handler (dih) is a really powerful language neutral option. it allows you to read from a sql database, from csv, xml files, rss feeds, emails, etc. without any java knowledge. dih handles full-imports and delta-imports. this is necessary when only a small amount of documents were added, updated or deleted. the http interface is used from the post tool, which you have already used in the official tutorial to index xml files. client libraries in different languages also exist. (e.g. for java (solrj) or python ). before indexing you'll have to decide which data fields should be searchable and how the fields should get indexed. for example, when you have a field with html in it, then you can strip irrelevant characters , tokenize the text into 'searchable terms', lower case the terms and finally stem the terms . in contrast, if you would have a field with text in it that should not be interpreted (e.g. urls) you shouldn't tokenize it and use the default field type string. please refer to the official documentation about field and field type definitions in the schema.xml file. when designing an index keep in mind the advice from mauricio : "the document is what you will search for. " for example, if you have tweets and you want to search for similar users, you'll need to setup a user index - created from the tweets. then every document is a user. if you want to search for tweets, then setup a tweet index; then every document is a tweet. of course, you can setup both indices with the multi index options of solr. please also note that there is a project called solr cell which lets you extract the relevant information out of several different document types with the help of tika. 2.3. querying for debugging it is very convenient to use the http interface with a browser to query solr and get back xml. use firefox and the xml will be displayed nicely: you can also use the velocity contribution , a cross-browser tool, which will be covered in more detail in the section about 'search application prototyping' . to query the index you can use the dismax handler or standard query handler . you can filter and sort the results: q=superman&fq=type:book&sort=price asc you can also do a lot more ; one other concept is boosting. in solr you can boost while indexing and while querying. to prefer the terms in the title write: q=title:superman^2 subject:superman when using the dismax request handler write: q=superman&qf=title^2 subject check out all the various query options like fuzzy search , spellcheck query input , facets , collapsing and suffix query support . 3. applications now i will list some interesting use cases for solr - in no particular order. to see how powerful and flexible this open source search server is. 3.1. drupal integration the drupal integration can be seen as generic use case to integrate solr into php projects. for the php integration you have the choice to either use the http interface for querying and retrieving xml or json. or to use the php solr client library . here is a screenshot of a typical faceted search in drupal : for more information about faceted search look into the wiki of solr . more php projects which integrates solr: open source typo3- solr module magento enterprise - solr module . the open source integration is out dated. oxid - solr module . no open source integration available. 3.2. 
hathi trust the hathi trust project is a nice example that proves solr's ability to search big digital libraries. to quote directly from the article : "... the index for our one million book index is over 200 gigabytes ... so we expect to end up with a two terabyte index for 10 million books" other examples for libraries: vufind - aims to replace opac internet archive national library of australia 3.3. auto suggestions mainly, there are two approaches to implement auto-suggestions (also called auto-completion) with solr: via facets or via ngramfilterfactory . to push it to the extreme you can use a lucene index entirely in ram. this approach is used in a large music shop in germany. live examples for auto suggestions: kaufda.de 3.4. spatial search applications when mentioning spatial search, people have geographical based applications in mind. with solr, this ordinary use case is attainable . some examples for this are : city search - city guides yellow pages kaufda.de spatial search can be useful in many different ways : for bioinformatics, fingerprints search, facial search, etc. (getting the fingerprint of a document is important for duplicate detection). the simplest approach is implemented in jetwick to reduce duplicate tweets, but this yields a performance of o(n) where n is the number of queried terms. this is okay for 10 or less terms, but it can get even better at o(1)! the idea is to use a special hash set to get all similar documents. this technique is called local sensitive hashing . read this nice paper about 'near similarity search and plagiarism analysis' for more information. 3.5. duckduckgo duckduckgo is made with open source and its "zero click" information is done with the help of solr using the dismax query handler: the index for that feature contains 18m documents and has a size of ~12gb. for this case had to tune solr: " i have two requirements that differ a bit from most sites with respect to solr: i generally only show one result, with sometimes a couple below if you click on them. therefore, it was really important that the first result is what people expected. false positives are really bad in 0-click, so i needed a way to not show anything if a match wasn't too relevant. i got around these by a) tweaking dismax and schema and b) adding my own relevancy filter on top that would re-order and not show anything in various situations. " all the rest is done with tuned open source products. to quote gabriel again: "the main results are a hybrid of a lot of things, including external apis, e.g. bing, wolframalpha, yahoo, my own indexes and negative indexes (spam removal), etc. there are a bunch of different types of data i'm working with. " check out the other cool features such as privacy or bang searches . 3.6. clustering support with carrot2 carrot2 is one of the "contributed plugins" of solr. with carrot2 you can support clustering : " clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. " see some research papers regarding clustering here . here is one visual example when applying clustering on the search "pannous" - our company : 3.7. near real time search solr isn't real time yet, but you can tune solr to the point where it becomes near real time, which means that the time ('real time latency') that a document takes to be searchable after it gets indexed is less than 60 seconds even if you need to update frequently. to make this work, you can setup two indices. 
one write-only index "w" for the indexer and one read-only index "r" for your application. index r refers to the same data directory of w, which has to be defined in the solrconfig.xml of r via: /pathto/indexw/data/ to make sure your users and the r index see the indexed documents of w, you have to trigger an empty commit every 60 seconds: wget -q http://localhost:port/solr/update?stream.body=%3ccommit/%3e -o /dev/null everytime such a commit is triggered a new searcher without any cache entries is created. this can harm performance for visitors hitting the empty cache directly after this commit, but you can fill the cache with static searches with the help of the newsearcher entry in your solrconfig.xml. additionally, the autowarmcount property needs to be tuned, which fills the cache with a newsearcher from old entries. also, take a look at the article 'scaling lucene and solr' , where experts explain in detail what to do with large indices (=> 'sharding') and what to do for high query volume (=> 'replicating'). 3.8. loggly = full text search in logs feeding log files into solr and searching them at near real-time shows that solr can handle massive amounts of data and queries the data quickly. i've setup a simple project where i'm doing similar things , but loggly has done a lot more to make the same task real-time and distributed. you'll need to keep the write index as small as possible otherwise commit time will increase too great. loggly creates a new solr index every 5 minutes and includes this when searching using the distributed capabilities of solr ! they are merging the cores to keep the number of indices small, but this is not as simple as it sounds. watch this video to get some details about their work. 3.9. solandra = solr + cassandra solandra combines solr and the distributed database cassandra , which was created by facebook for its inbox search and then open sourced. at the moment solandra is not intended for production use. there are still some bugs and the distributed limitations of solr apply to solandra too. tthe developers are working very hard to make solandra better. jetwick can now run via solandra just by changing the solrconfig.xml. solandra also has the advantages of being real-time (no optimize, no commit!) and distributed without any major setup involved. the same is true for solr cloud. 3.10. category browsing via facets solr provides facets , which make it easy to show the user some useful filter options like those shown in the "drupal integration" example. like i described earlier , it is even possible to browse through a deep category tree. the main advantage here is that the categories depend on the query. this way the user can further filter the search results with this category tree provided by you. here is an example where this feature is implemented for one of the biggest second hand stores in germany. a click on 'schauspieler' shows its sub-items: other shops: game-change 3.11. jetwick - open twitter search you may have noticed that twitter is using lucene under the hood . twitter has a very extreme use case: over 1,000 tweets per second, over 12,000 queries per second, but the real-time latency is under 10 seconds! however, the relevancy at that volume is often not that good in my opinion. twitter search often contains a lot of duplicates and noise. reducing this was one reason i created jetwick in my spare time. i'm mentioning jetwick here because it makes extreme use of facets which provides all the filters to the user. 
facets are used for the rss-alike feature (saved searches), the various filters like language and retweet-count on the left, and to get trending terms and links on the right: to make jetwick more scalable i'll need to decide which of the following distribution options to choose: use solr cloud with zookeeper use solandra move from solr to elasticsearch which is also based on apache lucene other examples with a lot of facets: cnet reviews - product reviews. electronics reviews, computer reviews & more. shopper.com - compare prices and shop for computers, cell phones, digital cameras & more. zappos - shoes and clothing. manta.com - find companies. connect with customers. 3.12. plaxo - online address management plaxo.com , which is now owned by comcast, hosts web addresses for more than 40 million people and offers smart search through the addresses - with the help of solr. plaxo is trying to get the latest 'social' information of your contacts through blog posts, tweets, etc. plaxo also tries to reduce duplicates . 3.13. replace fast or google search several users report that they have migrated from a commercial search solution like fast or google search appliance (gsa) to solr (or lucene). the reasons for that migration are different: fast drops linux support and google can make integration problems. the main reason for me is that solr isn't a black box —you can tweak the source code, maintain old versions and fix your bugs more quickly! 3.14. search application prototyping with the help of the already integrated velocity plugin and the data import handler it is possible to create an application prototype for your search within a few hours. the next version of solr makes the use of velocity easier. the gui is available via http://localhost:port/solr/browse if you are a ruby on rails user, you can take a look into flare. to learn more about search application prototyping, check out this video introduction and take a look at these slides. 3.15. solr as a whitelist imagine you are the new google and you have a lot of different types of data to display e.g. 'news', 'video', 'music', 'maps', 'shopping' and much more. some of those types can only be retrieved from some legacy systems and you only want to show the most appropriated types based on your business logic . e.g. a query which contains 'new york' should result in the selection of results from 'maps', but 'new yorker' should prefer results from the 'shopping' type. with solr you can set up such a whitelist-index that will help to decide which type is more important for the search query. for example if you get more or more relevant results for the 'shopping' type then you should prefer results from this type. without the whitelist-index - i.e. having all data in separate indices or systems, would make it nearly impossible to compare the relevancy. the whitelist-index can be used as illustrated in the next steps. 1. query the whitelist-index, 2. decide which data types to display, 3. query the sub-systems and 4. display results from the selected types only. 3.16. future solr is also useful for scientific applications, such as a dna search systems. i believe solr can also be used for completely different alphabets so that you can query nucleotide sequences - instead of words - to get the matching genes and determine which organism the sequence occurs in, something similar to blast . another idea you could harness would be to build a very personalized search. every user can drag and drop their websites of choice and query them afterwards. 
for example, often i only need stackoverflow, some wikis and some mailing lists with the expected results, but normal web search engines (google, bing, etc.) give me results that are too cluttered. my final idea for a future solr-based app could be a lucene/solr implementation of desktop search. solr's facets would be especially handy to quickly filter different sources (files, folders, bookmarks, man pages, ...). it would be a great way to wade through those extra messy desktops. 4. summary the next time you think about a problem, think about solr! even if you don't know java and even if you know nothing about search: solr should be in your toolbox. solr doesn't only offer professional full text search, it could also add valuable features to your application. some of them i covered in this article, but i'm sure there are still some exciting possibilities waiting for you!
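To make the querying section above concrete from the Java side, here is a minimal SolrJ sketch using the 1.4-era client API; the field names type, price and title are only the ones from the example query above and depend entirely on your schema.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SolrJQuerySketch {

    public static void main(String[] args) throws Exception {
        // Points at a locally running Solr instance, as set up in the official tutorial
        CommonsHttpSolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Equivalent of q=superman&fq=type:book&sort=price asc
        SolrQuery query = new SolrQuery("superman");
        query.addFilterQuery("type:book");
        query.addSortField("price", SolrQuery.ORDER.asc);
        query.setRows(10);

        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("title"));
        }
    }
}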
January 25, 2011
by Peter Karussell
· 146,882 Views
Mock Static Methods using Spring Aspects
I am a Spring framework user for last three years and I have really enjoyed working with Spring. One thing that I am seeing these days is the heavy use of AspectJ in most of SpringSource products. Spring Roo is a RAD tool for Java developers which makes use of AspectJ ITD's for separate compilation units. Spring transaction management and exception translation is also done using Aspectj. There are also numerous other usages of Aspectj in Spring products. In this article, I am going to talk about another cool usage of AspectJ by Spring - Mocking Static Methods. These days most of the developers write unit test cases and it is very common to use any of the mocking libraries like EasyMock, Mockito etc. to mock the external dependencies of a class. Using any of these mocking libraries it is very easy to mock calls to other class instance method. But most of these mocking framework does not provide the facility to mock the calls to static methods. Spring provides you the capability to mock static methods by using Spring Aspects library. In order to use this feature you need to add spring-aspects.jar dependency in your pom.xml. org.springframework spring-aspects 3.0.4.RELEASE Next thing you need to do is to convert your project in to a AspectJ project. If you are using Eclipse or STS(SpringSource Tool Suite) you can do that by right-click your project -> configure -> convert to AspectJ project. STS by default has AspectJ plugin, for eclipse users you need to install AspectJ plugin for above to work. I would recommend using STS for developing Spring based applications. The two aspects and one annotation that is of interest in spring-aspects.jar are : AbstractMethodMockingControl : This is an abstract aspect to enable mocking of methods picked out by a pointcut. All the child aspects need to define mockStaticsTestMethod() and methodToMock() pointcuts. The mockStaticTestMethod() pointcut is used to indicate when mocking should be triggered and methodToMock() pointcut is used to define which method invocations to mock. AnnotationDrivenStaticEntityMockingControl : This is the single implementation of AbstractMethodMockingControl aspect which exists in spring-aspects.jar. This is an annotation-based aspect to use in a test build to enable mocking static methods on Entity classes. In this aspect mockStaticTestMethod() pointcut defines that for classes marked with @MockStaticEntityMethods annotation mocking should be triggered and methodToMock() pointcut defines that all the public static methods in the classes marked with @Entity annotation should be mocked. MockStaticEntityMethods : Annotation to indicate a test class for whose @Test methods static methods on Entity classes should be mocked. The AnnotationDrivenStaticEntityMockingControl provide the facility to mock static methods of any class which is marked with @Entity annotation. But usually we would need to mock static method of classes other than marked with @Entity annotation. The only thing we need to do to make it work is to extend AbstractMethodMockingControl aspect and provide definitions for mockStaticsTestMethod() and methodToMock() pointcuts. For example, lets write an aspect which should mock all the public static methods of classes marked with @Component annotation. 
package com.shekhar.javalobby; import org.springframework.mock.staticmock.AbstractMethodMockingControl; import org.springframework.stereotype.Component; import org.springframework.mock.staticmock.MockStaticEntityMethods;; public aspect AnnotationDrivenStaticComponentMockingControl extends AbstractMethodMockingControl { public static void playback() { AnnotationDrivenStaticComponentMockingControl.aspectOf().playbackInternal(); } public static void expectReturn(Object retVal) { AnnotationDrivenStaticComponentMockingControl.aspectOf().expectReturnInternal(retVal); } public static void expectThrow(Throwable throwable) { AnnotationDrivenStaticComponentMockingControl.aspectOf().expectThrowInternal(throwable); } protected pointcut mockStaticsTestMethod() : execution(public * (@MockStaticEntityMethods *).*(..)); protected pointcut methodToMock() : execution(public static * (@Component *).*(..)); } The only difference between AnnotationDrivenStaticEntityMockingControl(comes with spring-aspects.jar) and AnnotationDrivenStaticComponentMockingControl(custom that we have written above) is in methodToMock() pointcut. In methodToMock() pointcut we have specified that it should mock all the static methods in any class marked with @Component annotation. Now that we have written the custom aspect lets test it. I have created a simple ExampleService with one static method. This is the method which we want to mock. @Component public class ExampleService implements Service { /** * Reads next record from input */ public String getMessage() { return myName(); } public static String myName() { return "shekhar"; } } This class will return "shekhar" when getMessage() method will be called. Lets test this without mocking package com.shekhar.javalobby; import org.junit.Assert; import org.junit.Test; public class ExampleConfigurationTests { private ExampleService service = new ExampleService(); @Test public void testSimpleProperties() throws Exception { String myName = service.getMessage(); Assert.assertEquals("shekhar", myName); } } This test will work fine. Now let's add mocking to this test class. There are two things that we need to do in our test We need to annotate our test with @MockStaticEntityMethods to indicate that static methods of @Component classes will be mocked. Please note that it is not required to use @MockStaticEntityMethods annotation you can create your own annotation and use that in mockStaticsTestMethod() pointcut. So, I could have created an annotation called @MockStaticComponentMethods and used that in mockStaticsTestMethod() pointcut. But I just reused the @MockStaticEntityMethods annotation. In our test methods we need to first invoke the static method which we want to mock so that it gets recorded. Next we need to set our expectation i.e. what should be returned from the mock and finally we need to call the playback method to stop recording mock calls and enter playback state. 
To make it more concrete lets apply mocking to the above test import org.junit.Assert; import org.junit.Test; import org.springframework.mock.staticmock.MockStaticEntityMethods; @MockStaticEntityMethods public class ExampleConfigurationTests { private ExampleService service = new ExampleService(); @Test public void testSimpleProperties() throws Exception { ExampleService.myName(); AnnotationDrivenStaticComponentMockingControl.expectReturn("shekhargulati"); AnnotationDrivenStaticComponentMockingControl.playback(); String myName = service.getMessage(); Assert.assertEquals("shekhargulati", myName); } } As you can see we annotated the test class with @MockStaticEntityMethods annotation and in the test method we first recorded the call (ExampleService.myName()), then we set the expectations, then we did the playback and finally called the actual code. In this way you can mock the static method of class.
January 25, 2011
by Shekhar Gulati
· 22,123 Views · 1 Like
Linqer – a nice tool for SQL to LINQ transition
Almost all .NET developers who have been working in several applications up to date are probably familiar with writing SQL queries for specific needs within the application. Before LINQ as a technology came on scene, my daily programming life was about 60-70% of the day writing code either in the front-end (ASPX, JavaScript, jQuery, HTML/CSS etc…) or in the back-end (C#, VB.NET etc…), and about 30-40% writing SQL queries for specific needs used within the application. Now, when LINQ is there, I feel that the percentage for writing SQL queries got down to about 10% per day. I don’t say it won’t change with time depending what technology I use within the projects or what way would be better, but since I’m writing a lot LINQ code in the latest projects, I thought to see if there is a tool that can automatically translate SQL to LINQ so that I can transfer many queries as a LINQ statements within the code. Linqer is a tool that I have tested in the previous two weeks and I see it works pretty good. Even I’m not using it yet to convert SQL to LINQ code because I did it manually before I discovered that Linqer could have really helped me, I would recommend it for those who are just starting with LINQ and have knowledge of writing SQL queries. Let’s pass through several steps so that I will help you get started faster… 1. Go to http://www.sqltolinq.com/ website and download the version you want. There is a Linqer Version 4.0.1 for .NET 4.0 or Linqer Version 3.5.1 for .NET 3.5. 2. Once you download the zip file, extract it and launch the Linqer4Inst.exe then add install location. In the location you will add, the Linqer.exe will be created. 3. Launch the Linqer.exe. Once you run it for first time, the Linqer Connection Pool will be displayed so that you can create connection to your existing Model Click the Add button Right after this, the following window will appear #1 – The name of the connection string you are creating #2 – Click “…” to construct your connection string using Wizard window #3 – Chose your language, either C# or VB #4 – Model LINQ to SQL or LINQ to Entities Right after you select LINQ to SQL, the options to select the files for the Model will be displayed. In our case I will select LINQ to SQL, and here is the current progress So, you can select existing model from your application or you can Generate LINQ to SQL Files so that the *.dbml and *.designer.cs will be automatically filled #5 – At the end, you can chose your context name of the model which will be used when generating the LINQ code Once you are done, click OK. You will get back to the parent window filled with all needed info and click Close. Note: You can later add additional connections in your Linqer Connections Pool from Tools –> Linqer Connections In the root folder where your Linqer.exe is placed, now you have Linqer.ini file containing the Connection string settings. Ok, now lets go to the interesting part. Lets create one (first) simple SQL query and try to translate it to LINQ statement. SQL Query select * from authors a where a.city = 'Oakland' If we add this query to Linqer, here is the result: So, the LINQ code is similar to the SQL code and is easy to read since it’s simple. Also, if you notice, the tool generates class (you can add class name) with prepared code for using in your project. Perfect! 
Now, lets try to translate a query with two joined tables (little bit more complex): SQL Query select * from employee left join publishers on employee.pub_id = publishers.pub_id where employee.fname like '%a' The LINQ generated code is: from employee in db.Employee join publishers in db.Publishers on employee.Pub_id equals publishers.Pub_id into publishers_join from publishers in publishers_join.DefaultIfEmpty() where employee.Fname.EndsWith("a") select new { employee.Emp_id, employee.Fname, employee.Minit, employee.Lname, employee.Job_id, employee.Job_lvl, employee.Pub_id, employee.Hire_date, Column1 = publishers.Pub_id, Pub_name = publishers.Pub_name, City = publishers.City, State = publishers.State, Country = publishers.Country } So, if you can notice the where clause, we said in the SQL query: ... like "%a" and the corresponding LINQ code in C# is ... EndsWith("a"); - Excellent! And the Class automatically generated by the tool is public class EmployeePubClass { private String _Emp_id; private String _Fname; private String _Minit; private String _Lname; private Int16? _Job_id; private Byte? _Job_lvl; private String _Pub_id; private DateTime? _Hire_date; private String _Column1; private String _Pub_name; private String _City; private String _State; private String _Country; public EmployeePubClass( String AEmp_id, String AFname, String AMinit, String ALname, Int16? AJob_id, Byte? AJob_lvl, String APub_id, DateTime? AHire_date, String AColumn1, String APub_name, String ACity, String AState, String ACountry) { _Emp_id = AEmp_id; _Fname = AFname; _Minit = AMinit; _Lname = ALname; _Job_id = AJob_id; _Job_lvl = AJob_lvl; _Pub_id = APub_id; _Hire_date = AHire_date; _Column1 = AColumn1; _Pub_name = APub_name; _City = ACity; _State = AState; _Country = ACountry; } public String Emp_id { get { return _Emp_id; } } public String Fname { get { return _Fname; } } public String Minit { get { return _Minit; } } public String Lname { get { return _Lname; } } public Int16? Job_id { get { return _Job_id; } } public Byte? Job_lvl { get { return _Job_lvl; } } public String Pub_id { get { return _Pub_id; } } public DateTime? Hire_date { get { return _Hire_date; } } public String Column1 { get { return _Column1; } } public String Pub_name { get { return _Pub_name; } } public String City { get { return _City; } } public String State { get { return _State; } } public String Country { get { return _Country; } } } public class List: List { public List(Pubs db) { var query = from employee in db.Employee join publishers in db.Publishers on employee.Pub_id equals publishers.Pub_id into publishers_join from publishers in publishers_join.DefaultIfEmpty() where employee.Fname.EndsWith("a") select new { employee.Emp_id, employee.Fname, employee.Minit, employee.Lname, employee.Job_id, employee.Job_lvl, employee.Pub_id, employee.Hire_date, Column1 = publishers.Pub_id, Pub_name = publishers.Pub_name, City = publishers.City, State = publishers.State, Country = publishers.Country }; foreach (var r in query) Add(new EmployeePubClass( r.Emp_id, r.Fname, r.Minit, r.Lname, r.Job_id, r.Job_lvl, r.Pub_id, r.Hire_date, r.Column1, r.Pub_name, r.City, r.State, r.Country)); } } Great! We have ready-to-use class for our application and we don't need to type all this code. Besides this way to generate code, you can in same time use this tool to see the db results I like this tool because mainly it’s very easy to use, lightweight and does the job pretty straight forward. 
You can try the tool and send me feedback using the comments in this blog post.
January 24, 2011
by Hajan Selmani
· 67,256 Views
Practical PHP Testing Patterns: Chained Tests
Sometimes you have tests where the fixtures can be recycled instead of being recreated; but that's not all: sometimes even the state reset we preach for a Shared Fixture gets in the way instead of being mandatory. For example, think of testing addition and removal of values on a collection object: to test the removal, you need a collection containing something. But this collection has to be populated somewhere, involving other methods which would have their own independent tests. If you look closer, however, you notice that the collection you need for testing value removal is just like the one that testAddsElements() produces when it succeeds, and simply throws away when the Test Method returns...

The pattern
In these cases, the Chained Tests pattern can help you design isolated tests by expressing the dependencies they implicitly have. The application of this pattern results in a form of API encapsulation: a test that exercises Collection::remove() does not need to use Collection::add(), and won't be influenced by a change in the name or the signature of add(), which it does not focus on. The test just needs a collection which has already been filled with values. Before dusting off unserialization and other strange techniques, just consider using the leftovers from a previous test. Let's suppose we have two tests A and B, where B uses a variable produced in A. Of course, if A fails, the dependent Test Method B cannot (and shouldn't) be run; it would only add noise with an obvious failure which derives not from the behavior really under test there, but just from the fixture setup, which happens to be the previous test. If A instead runs correctly, B can be run.

Implementation
In PHPUnit, Chained Tests are supported when they are in the same Testcase Class. In this case, the Testcase Objects are not totally isolated anymore: you can return variables and objects from one test, in order for PHPUnit to pass them to other tests, which simply ask for them as dependencies. You can do so by adding the @depends annotation to the dependent tests. Note that this annotation can also be used without passing fixtures around, just for the sake of not executing complex tests where the base ones have already failed. @depends works by stopping the execution of tests further down the chain when their dependencies fail: they will be marked as skipped and signaled by an S in PHPUnit's ordinary output. The tests which produce a fixture as a leftover must return that fixture at the end of the test. The tests accepting fixtures, tagged with @depends, must accept parameters in their Test Method signatures. In case of multiple parameters to return, you can return an array() and use list() in the dependent method to extract variables from it. There is a known issue in running dependent tests with --filter: dependent tests will not run alone, so if you plan to use the filter you may want to pick similar test names for dependent/dependency pairs, in order to execute both of them when needed.

Example
Here we have a sample Testcase Class where the second and third tests are dependent on the first. Long chains of tests are not encouraged, as it becomes increasingly difficult to tell what the state of the fixture is in the tests at the end of the chain.
<?php
class ArrayTest extends PHPUnit_Framework_TestCase
{
    public function testArrayAcceptsNewValues()
    {
        // the first two values are illustrative; the test just needs to end up
        // with a 3-element array whose third element is 15
        $array = array();
        $array[] = 5;
        $array[] = 10;
        $array[] = 15;
        $this->assertEquals(15, $array[2], 'The third element was not 15.');
        return $array;
    }

    /**
     * @depends testArrayAcceptsNewValues
     */
    public function testArrayCanDeleteValues($array)
    {
        unset($array[2]);
        $this->assertEquals(2, count($array), 'There are more than 2 elements in the array.');
    }

    /**
     * @depends testArrayAcceptsNewValues
     */
    public function testArrayAcceptsDuplicateValues($array)
    {
        $array[] = 15;
        $this->assertEquals(4, count($array), 'There are less than 4 elements in the array.');
    }
}

If we execute the whole test case, we obtain:

[10:40:55][giorgio@Desmond:~]$ phpunit txt/articles/chainedtest.php
PHPUnit 3.5.5 by Sebastian Bergmann.
...
Time: 0 seconds, Memory: 5.00Mb
OK (3 tests, 3 assertions)

If we put a $this->fail() in the first test, instead, we get:

[10:41:40][giorgio@Desmond:~]$ phpunit txt/articles/chainedtest.php
PHPUnit 3.5.5 by Sebastian Bergmann.
FSS
Time: 0 seconds, Memory: 5.00Mb
There was 1 failure:
1) ArrayTest::testArrayAcceptsNewValues
/home/giorgio/Dropbox/txt/articles/chainedtest.php:13
FAILURES!
Tests: 1, Assertions: 1, Failures: 1, Skipped: 2.

If we filter the second test, we cannot run it due to the missing dependency:

[10:49:25][giorgio@Desmond:~]$ phpunit --filter testArrayCan txt/articles/chainedtest.php
PHPUnit 3.5.5 by Sebastian Bergmann.
S
Time: 0 seconds, Memory: 4.75Mb
OK, but incomplete or skipped tests!
Tests: 0, Assertions: 0, Skipped: 1.

But if we filter the third test, thanks to the well-chosen name, we get:

[10:49:16][giorgio@Desmond:~]$ phpunit --filter testArrayAccepts txt/articles/chainedtest.php
PHPUnit 3.5.5 by Sebastian Bergmann.
..
Time: 0 seconds, Memory: 5.00Mb
OK (2 tests, 2 assertions)
January 24, 2011
by Giorgio Sironi
· 4,467 Views
Java Generics Wildcard Capture - A Useful Thing to Know
Recently, I was writing a piece of code where I needed a copy factory for a class. A copy factory is a static factory that constructs a copy of the same type as the argument passed to the factory. The copy factory copies the state of the argument object into the new object, which makes sure that the newly constructed object is equal to the old one. A copy constructor is similar to a copy factory with just one difference - a copy constructor can only exist in the class containing the constructor, whereas a copy factory can exist in any other class as well. For example,

// Copy Factory
public static Field getNewInstance(Field field)

// Copy Constructor
public Field(Field field)

I chose a static copy factory because: (1) I didn't have the source code for the class, and (2) with a copy factory I can code to an interface instead of a class. The class for which I wanted to write the copy factory was similar to the one shown below:

public class Field<T> {
    private String name;
    private T value;

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public T getValue() {
        return value;
    }
    public void setValue(T value) {
        this.value = value;
    }
}

The first implementation of the FieldUtils class that came to my mind was as shown below:

public static Field<?> copy(Field<?> field) {
    Field<?> objField = new Field<?>();   //1
    objField.setName(field.getName());    //2
    objField.setValue(field.getValue());  //3
    return objField;                      //4
}

The above code will not compile because of two compilation errors. The first compilation error is at line 1, because you can't create an instance of Field<?>. Field<?> means Field<? extends Object>, and when you have ? extends Something you can only get values out of it; you can't set values into it, except null. For an object to be useful you should be able to do both, so the compiler does not allow you to create such an object. The second compilation error is at line number 3, and you will get a cryptic error message like this:

The method setValue(capture#3-of ?) in the type Field is not applicable for the arguments (capture#4-of ?)

In simple terms this error message is saying that you are trying to set a wrong value in objField. But what if I had written the following method?

public static Field<?> copy(Field<?> field) {
    field.setName(field.getName());
    field.setValue(field.getValue());
    return field;
}

Will the above code compile? No. You will again get a similar error message to the one mentioned above. To fix this error we write a private helper method which captures the wildcard and assigns it to a type variable. This technique is called Wildcard Capture. I read about this in the Java Generics and Collections book, which is a must-read for understanding Java Generics. Wildcard Capture works by type inference.

public static Field<?> copy(Field<?> field) {
    return copyHelper(field);
}

private static <T> Field<T> copyHelper(Field<T> field) {
    Field<T> objField = new Field<T>();
    objField.setName(field.getName());
    objField.setValue(field.getValue());
    return objField;
}

Wildcard capture is very useful when you work with wildcards, and knowing it will save a lot of your time.
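To see the capture helper in action, here is a minimal usage sketch. It assumes the Field and FieldUtils classes from the article are on the classpath in the same package; the demo class name, field name, and value are just illustrative:

public class FieldUtilsDemo {
    public static void main(String[] args) {
        Field<String> original = new Field<String>();
        original.setName("title");
        original.setValue("Java Generics and Collections");

        // The wildcard is captured inside copyHelper, so this compiles
        // even though the caller only ever sees Field<?>.
        Field<?> copy = FieldUtils.copy(original);
        System.out.println(copy.getName() + " = " + copy.getValue());
    }
}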
January 20, 2011
by Shekhar Gulati
· 52,183 Views · 1 Like
Migrating from JBoss 4 to JBoss 5
I have been using JBoss 4.2.3 for over a year now and really like it (although the clustering / JMS stuff seems WAY overcomplicated - and needing 80 config files - not fun!). But anyway, it is time to upgrade to JBoss 5, and the path has been marred by many stops and starts and has been surprisingly difficult. In any event, I found the following links to be lifesavers and thought I would share them:
http://community.jboss.org/wiki/MigrationfromJBoss4.pdf
http://venugopaal.wordpress.com/2009/02/02/jboss405-to-jboss-5ga/
http://www.tikalk.com/java/migrating-your-application-jboss-4x-jboss-5x
The real kickers have to be:
1) Increased adherence to the Java spec that causes WARs / JARs to no longer deploy
2) Changed location of XML files
3) Changed XML filenames
4) Changed XML file contents
Man, they don't make it easy, do they?
From http://softarc.blogspot.com/2011/01/migrating-from-jboss-4-to-jboss-5.html
January 14, 2011
by Frank Kelly
· 20,193 Views
EMF Tips: Accessing Model Meta Data, Serializing into Element/Attribute
Two tips for the Eclipse Modeling Framework (EMF) 2.2.1:
Accessing the model's meta model – accessing an EClass/attribute by name – so that you can set an attribute when you only know its name and don't have its EAttribute
How to force EMF to serialize an object as an XML element instead of an XML attribute
Excuse my rather liberal and slightly confusing use of the terms model, model instance and meta model.

Tip 1: Accessing the model's meta model – accessing an EClass/attribute by name

Normally you cannot do any operation on a dynamic EMF model instance – such as instantiating an EObject or setting its property via eSet – without the corresponding meta model objects such as EClass and EAttribute. But there is a solution – you can build an ExtendedMetaData instance and use its methods to find the meta model objects based on search criteria such as element name and namespace.

Examples

Create ExtendedMetaData from a model
One way to build a meta data instance is to instantiate BasicExtendedMetaData based on a Registry containing all registered packages. This is usually available either via ResourceSet.getPackageRegistry() or globally, via EPackage.Registry.INSTANCE.

import org.eclipse.emf.ecore.util.*;
...
ExtendedMetaData modelMetaData = new BasicExtendedMetaData(myResourceSet.getPackageRegistry());

Get the EClass for a namespace and name
Provided that your model contains an element with the name Book and namespace http://example.com/book:

EClass bookEClass = (EClass) modelMetaData.getType("http://example.com/book", "Book");

Get an EClass attribute by name
Beware: some properties (such as those described by named complex types) are not represented by an EAttribute but by an EReference (both extend EStructuralFeature) and are accessible as the EClass' elements, not attributes, even though from the developer's point of view they're attributes of the owning class. Let's suppose that the book has the attribute name:

EStructuralFeature nameAttr = modelMetaData.getElement(bookEClass, null, "name");

The namespace is null because normally attributes/nested elements are not classified with a schema. Here is how you would print the name and namespace URI of an attribute/element:

System.out.println("attr: " + modelMetaData.getNamespace(nameAttr) + ":" + nameAttr.getName()); //prints "null:name"

Tip 2: How to force EMF to serialize an object as an XML element instead of an XML attribute

Normally EMF stores simple Java properties as attributes of the XML element representing the owning class, e.g. <book name="The Book of Songs"/>, but you might prefer to have the property as a nested element instead: <book><name>The Book of Songs</name></book>. To achieve that:
Enable the save option OPTION_EXTENDED_META_DATA (so that extended meta data such as annotations and an XML map aren't ignored)
Tell EMF that you want this property to be stored as an element, either by attaching an eAnnotation to it (not shown) or by supplying an XML Map with this information upon save

To enable the extended metadata:

Map<Object, Object> saveOptions = new HashMap<Object, Object>();
saveOptions.put(XMLResource.OPTION_EXTENDED_META_DATA, Boolean.TRUE);

According to some documentation the value should be an implementation of ExtendedMetaData, according to others Boolean.TRUE is the correct choice – I use the latter as it's easier and works for me. To tell EMF to write a property as an element when serializing to XML:

import org.eclipse.emf.ecore.xmi.impl.*;
...
EAttribute bookNameEAttribute = ...; // retrieved e.g. from meta data, see Tip 1
XMLMapImpl map = new XMLMapImpl();
XMLInfoImpl x = new XMLInfoImpl();
x.setXMLRepresentation(XMLInfoImpl.ELEMENT);
map.add(bookNameEAttribute, x);
saveOptions.put(XMLResource.OPTION_XML_MAP, map);

The XMLInfoImpl enables you to customize the namespace, name and representation of the element. When saving you then just supply the save options:

EObject target = ...;
org.eclipse.emf.ecore.resource.Resource outResource = ...;
outResource.getContents().add(target);
outResource.save(saveOptions);

Reference: the Redbook Eclipse Development using the Graphical Editing Framework and the Eclipse Modeling Framework, page 74, section 2.3.4

From http://theholyjava.wordpress.com/2011/01/11/emf-tips-accessing-model-meta-data-serializing-into-elementattribute/
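Putting the two tips together, here is a minimal end-to-end sketch. It assumes the Book package from the examples above is already registered in the global package registry, that the name feature really is an EAttribute rather than an EReference, and that the output file name is just illustrative:

import java.util.HashMap;
import java.util.Map;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.util.BasicExtendedMetaData;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.util.ExtendedMetaData;
import org.eclipse.emf.ecore.xmi.XMLResource;
import org.eclipse.emf.ecore.xmi.impl.XMLInfoImpl;
import org.eclipse.emf.ecore.xmi.impl.XMLMapImpl;
import org.eclipse.emf.ecore.xmi.impl.XMLResourceImpl;

public class BookSerializationSketch {
    public static void main(String[] args) throws Exception {
        // Tip 1: look up the EClass and its feature purely by name
        ExtendedMetaData metaData = new BasicExtendedMetaData(EPackage.Registry.INSTANCE);
        EClass bookEClass = (EClass) metaData.getType("http://example.com/book", "Book");
        EAttribute nameAttr = (EAttribute) metaData.getElement(bookEClass, null, "name");

        // Create a dynamic instance and set the attribute via eSet
        EObject book = EcoreUtil.create(bookEClass);
        book.eSet(nameAttr, "The Book of Songs");

        // Tip 2: ask for element (not attribute) representation of "name"
        XMLMapImpl map = new XMLMapImpl();
        XMLInfoImpl info = new XMLInfoImpl();
        info.setXMLRepresentation(XMLInfoImpl.ELEMENT);
        map.add(nameAttr, info);

        Map<Object, Object> options = new HashMap<Object, Object>();
        options.put(XMLResource.OPTION_EXTENDED_META_DATA, Boolean.TRUE);
        options.put(XMLResource.OPTION_XML_MAP, map);

        XMLResource resource = new XMLResourceImpl(URI.createFileURI("book.xml"));
        resource.getContents().add(book);
        resource.save(options);
    }
}

Saving should then produce a nested <name> element rather than a name attribute on the book element.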
January 12, 2011
by Jakub Holý
· 14,231 Views
Interview: Troy Giunipero, Author of NetBeans E-commerce Tutorial
Troy Giunipero (pictured, right) is a student at the University of Edinburgh studying toward an MSc in Computer Science. Formerly, he was one of the NetBeans Docs writers based in Prague, Czech Republic, where he spent most of his time writing Java web tutorials. In this interview, Troy introduces you to The NetBeans E-commerce Tutorial. This is a very detailed tutorial describing just about everything you need to know when creating an e-commerce web application in Java. It has received a lot of very positive feedback. Let's find out about the background of this tutorial and what Troy learnt in writing it. Hi Troy! During your time on the NetBeans team, you wrote a very large tutorial describing how to create an e-commerce site. How and why did you start writing it? Well, there’s a short answer and a long answer to this. The short answer is that I was lucky to take part in Sun’s SEED (Sun Engineering Enrichment and Development) program. I wanted to focus on technical aspects, so I based my curriculum on developing an e-commerce application using Java technologies. I documented my efforts and applied them toward deliverables for the IDE’s 6.8 and 6.9 releases, resulting in the 13-part NetBeans E-commerce Tutorial. The long answer is that I had previously been tasked with creating an e-commerce application for my degree project (I was studying toward a BSc in IT and Computing), and ran into loads of trouble trying to integrate the various technologies into a cohesive, functioning application. I was coming from a non-technical background and found there was a steep learning curve involved in web development. My work was fraught with problems which I can now attribute to poor time-management, and a lack of good, practical, hands-on learning resources. So in a way, working on the AffableBean project (this is the project used in the NetBeans E-commerce Tutorial) was a way for me to go back and attempt to do the whole thing right. With the tutorial, I had two goals in mind: one, I wanted to consolidate my understanding of everything by writing about it, and two, I wanted to help others avoid the problems and pitfalls that I’d earlier ran into by designing a piece of documentation that puts everything together. Can you run us through the basic parts and what they provide? Certainly. 
First I want to point out that there’s a live demo application (http://dot.netbeans.org:8080/AffableBean/) which I managed to get up and running with help from Honza Pirek from the NetBeans Web Team (thanks Honza!): The application is modeled on the well-known MVC architecture: The tutorial refers to the above diagram at various points, and covers a bunch of different concepts and technologies along the way, including: Project design and planning (unit 2) Designing a data model (using MySQL WorkBench) (unit 4) Forward-engineering the data model into a database schema (unit 4) Database connectivity (units 3, 4, 6) Servlet, JSP/EL and JSTL technologies (units 3, 5, 6) EJB 3 and JPA 2 technologies (unit 7), and transactional support (unit 9) Session management (i.e., for the shopping cart mechanism) (unit 8) Form validation and data conversion (unit 9) Multilingual support (unit 10) Security (i.e., using form-based authentication and encrypted communication) (unit 11) Load testing with JMeter (unit 12) Monitoring the application with the IDE’s Profiler (unit 12) Tips for deployment to a production server (units 12, 13) Also, the tutorial aims to provide ample background information on the whole “Java specifications” concept, with an introduction to the Java Community Process, how final releases include reference implementations, and how these relate to the tutorial application using the IDE’s bundled GlassFish server (units 1, 7). Finally, the tutorial is as much about the above concepts and technologies as it is about learning to make best use of the IDE. I really tried to squeeze as much IDE-centric information in there as possible. So for example you’ll find: An introduction to the IDE’s main windows and their functions (unit 3) A section dedicated to editor tips and tricks (unit 5), and abundant usage of keyboard shortcuts in steps throughout the tutorial Use of the debugger (unit 8) Special “tip boxes” that discuss IDE functionality that is sometimes difficult to fit into conventional documentation. For example, there are tips on using the IDE’s Template Manager (unit 5), GUI support for database tables (unit 6), Javadoc support (unit 8), and support for code templates (unit 9). Did you learn any new things yourself while writing it? Yes! Three things immediately come to mind: EJB 3 technology. Initially this was a big hurdle for me. Using EJB 3 effectively seems to be something of an art form. If you know what you’re doing and understand exactly how to use the EntityManager to handle persistence operations on a database, EJB lets you do some amazingly smart things with just a few lines of code. But there seems to be a lack of good free documentation online—especially since EJB 3 is a significant departure from EJB 2. Therefore, almost all of the tutorial’s information on EJB comes from the very excellent book, EJB in Action by Debu Panda and Reza Rahman. Interpreting the NetBeans Profiler. The final hands-on unit, Testing an Profiling, was the most difficult for me to write, primarily because I just wasn’t familiar with the Profiler at all. I spent an unhealthy amount of time just watching the Telemetry graph run against my JMeter test plan, which is only slightly more stimulating than watching water come to boil. That being said, I feel that by just examining the graphs and other windows over time, critical logical associations start to jump out at you after a while. Likewise with JMeter. Hopefully unit 12 was able to capture and relay some of these. 
How to search online for decent articles and learning materials. The old Sun Technical Articles site was a great resource. Many of the links in the See Also sections at the bottom of tutorial units were found by adding site:java.sun.com/developer/technicalArticles/ to a Google search. Also the official forums (found at forums.sun.com) became a good place for questions I couldn’t find ready answers to. I had both the Java EE 5 and 6 Tutorials bookmarked. And Marty Hall’s Core Servlets and JavaServer Pages became an invaluable resource for the first half of the tutorial. What are your personal favorite features of the technologies discussed in the tutorial? I particularly liked learning about session management—using the HttpSession object to carry user-specific data between requests, and working with JSP’s corresponding implicit objects in the front-end pages. Session management is a defining aspect for e-commerce applications, as they need to provide some sort of shopping cart mechanism... ...and so the Managing Sessions unit (unit 8) was a key chapter in the tutorial. It’s extremely useful to be able to suspend the debugger on a portion of code that includes session-scoped variables, then hover the mouse over a given variable in the editor to determine its current value. I used the debugger continuously during this phase, and so I went so far as to incorporate use of the debugger throughout the Managing Sessions unit. What kind of background does someone starting the tutorial need to have? Someone can come to the tutorial with little or absolutely no experience using NetBeans. I’ve tried to be particularly careful in providing clear and easy-to-follow instructions in this respect. But one would be best off having some background or knowledge in basic web technologies, and at least some exposure to relational databases. With this foundation, I think that the topics covered in the second half of the tutorial, like applying entity classes and session beans, language support and security, won’t seem too daunting. I’ve noticed that the vast majority of feedback that comes in relates to the first half of the tutorial, and I sometimes get the impression that people feel they need to follow the tutorial units consecutively. Not so. The units are 90% modular. In other words, if somebody just wants to run through the security unit (unit 11), they can do so by downloading the associated project snapshot, follow the setup instructions, and then just follow along without needing to even look at other parts of the tutorial. What will they be able to do at the end of it? Naturally, anybody who completes individual tutorial units will be able to apply the concepts and technologies to their own work. But anyone who completes the tutorial in its entirety will gain an insight into the development process as a whole, and I think will also get a certain confidence that comes with knowing how “all the pieces fit together”—from gathering customer requirements all the way to deployment of the completed app to a production server. They’ll also have gained a solid familiarity with the NetBeans IDE, and be in a good position to explore popular Java web frameworks that work on top of servlet technology or impose an MVC architecture on their own, such as JSF, Spring, Struts, or Wicket. Do you see any problems in the technologies discussed and what would be your suggestions for enhancements? Well there’s one thing that comes to mind. 
When I started working on this project, I was studying the Duke’s BookStore example from the Java EE 5 Tutorial. A wonderful example that demonstrates how to progressively implement the same application using various technologies and combinations thereof. So for example you start out with an all-servlet implementation, then move on to a JSP/servlet version. Then there’s a JSTL implementation and ultimately, a version using JavaServer Faces. It’s great learning material, but also terrifically outdated. Right around this time, Sun was gearing up for the big Java EE 6 release (Dec. 2009), and I was also trying to learn about the new upcoming technologies, namely CDI, JSF 2, and EJB 3, for my regular NetBeans documentation work. I was getting the definite sense that JSP and JSTL were slowly being pushed aside—in the case of JavaServer Faces, Facelets templating was the new page authoring technology. So really, the E-commerce Tutorial application has become a sort of EE 5/EE 6 hybrid by combining JSP/JSTL with EJB 3 and JPA 2. Now the problem I see from the perspective of a student trying to learn this stuff from scratch, is that the leap from basic servlet technology to a full-blown JSF/EJB/JPA solution is tremendous, and cannot readily be taught through a single tutorial. Naturally, others may disagree with me here. I’m not sure if there’s a solution other than to compensate by producing a lot of quality learning material that covers lots of different use-cases. I’d suggest that the E-commerce Tutorial puts one in a very advantageous position to begin learning about Java-based frameworks, such as GWT, Spring, and JSF, which is a natural course of action for people looking to get a job with this knowledge. Planning any more parts to the tutorial or a new one? No more parts. The E-commerce Tutorial is done. Upon committing the final installments and changes last November, I rejoiced. However, I’m still actively responding to feedback [the ‘Send Us Your Feedback’ links at the bottom of tutorials] and plan to maintain it indefinitely, so if anyone spots any typos, has questions or comments, recommendations for improvement, etc., please write in! :-)
January 9, 2011
by Geertjan Wielenga
· 29,863 Views
Clojure: select-keys, select-values, and apply-values
Clojure provides the get and get-in functions for returning values from a map and the select-keys function for returning a new map of only the specified keys. Clojure doesn't provide a function that returns a list of values; however, it's very easy to create such a function (which I call select-values). Once you have the ability to select-values it becomes very easy to create a function that applies a function to the selected values (which I call apply-values). The select-keys function returns a map containing only the entries of the specified keys. The following (pasted) REPL session shows a few different select-keys behaviors. user=> (select-keys {:a 1 :b 2} [:a]) {:a 1} user=> (select-keys {:a 1 :b 2} [:a :b]) {:b 2, :a 1} user=> (select-keys {:a 1 :b 2} [:a :b :c]) {:b 2, :a 1} user=> (select-keys {:a 1 :b 2} []) {} user=> (select-keys {:a 1 :b 2} nil) {} The select-keys function is helpful in many occassions; however, sometimes you only care about selecting the values of certain keys in a map. A simple solution is to call select-keys and then vals. Below you can find the results of applying this idea. user=> (def select-values (comp vals select-keys)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b]) (2 1) user=> (select-values {:a 1 :b 2} [:a :b :c]) (2 1) user=> (select-values {:a 1 :b 2} []) nil user=> (select-values {:a 1 :b 2} nil) nil The select-values implementation from above may be sufficient for what you are doing, but there are two things worth noticing: in cases where you might be expecting an empty list you are seeing nil; and, the values are not in the same order that the keys were specified in. Given that (standard) maps are unsorted, you can't be sure of the ordering the values. (side-note: If you are concerned with microseconds, it's also been reported that select-keys is a bit slow/garbage heavy.) An alternative definition of select-values uses the reduce function and pulls the values by key and incrementally builds the (vector) result. user=> (defn select-values [map ks] (reduce #(conj %1 (map %2)) [] ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) [1] user=> (select-values {:a 1 :b 2} [:a :b]) [1 2] user=> (select-values {:a 1 :b 2} [:a :b :c]) [1 2 nil] user=> (select-values {:a 1 :b 2} []) [] user=> (select-values {:a 1 :b 2} nil) [] The new select-values function returns the values in order and returns an empty vector in the cases where previous examples returned nil, but we have a new problem: Keys specified that don't exist in the map are now included in the vector as nil. This issue is easily addressed by adding a call to the remove function. The implementation that includes removing nils can be found below. user=> (defn select-values [map ks] (remove nil? (reduce #(conj %1 (map %2)) [] ks))) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b]) (1 2) user=> (select-values {:a 1 :b 2} [:a :b :c]) (1 2) user=> (select-values {:a 1 :b 2} []) () user=> (select-values {:a 1 :b 2} nil) () There is no "correct" implementation for select-values. If you don't care about ordering and nil is a reasonable return value: the first implementation is the correct choice due to it's concise definition. If you do care about ordering and performance: the second implementation might be the right choice. If you want something that follows the principle of least surprise: the third implementation is probably the right choice. 
You'll have to decide what's best for your context. In fact, here's a few more implementations that might be better based on your context. user=> (defn select-values [m ks] (map #({:a 1 :b 2} %) ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b :c]) (1 2 nil) user=> (defn select-values [m ks] (reduce #(if-let [v ({:a 1 :b 2} %2)] (conj %1 v) %1) [] ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) [1] user=> (select-values {:a 1 :b 2} [:a :b :c]) [1 2] Pulling values from a map is helpful, but it's generally not the end goal. If you find yourself pulling values from a map, it's likely that you're going to want to apply a function to the extracted values. With that in mind, I generally define an apply-values function that returns the result of applying a function to the values returned from specified keys. A good example of this is returning the total for a line item represented as a map. Given a map that specifies a line item costing $5 and having a quantity of 4, you can use (* price quantity) to determine the total price for the line item. Using our previously defined select-values function we can do the work ourselves, as the example below shows. user=> (let [[price quantity] (select-values {:price 5 :quantity 4 :upc 1123} [:price :quantity])] (* price quantity)) 20 The example above works perfectly well; however, applying a function to the values of a map seems like a fairly generic operation that can easily be extracted to it's own function (the apply-values function). The example below shows the definition and usage of my definition of apply-values. user=> (defn apply-values [map f & ks] (apply f (select-values map ks))) #'user/apply-values user=> (apply-values {:price 5 :quantity 4 :upc 1123} * :price :quantity) 20 I find select-keys, select-values, & apply-values to be helpful when writing Clojure applications. If you find you need these functions, feel free to use them in your own code. However, you'll probably want to check the comments - I'm sure someone with more Clojure experience than I have will provide superior implementations. From http://blog.jayfields.com/2011/01/clojure-select-keys-select-values-and.html
January 6, 2011
by Jay Fields
· 15,438 Views
Using Default Values for Properties in Spring
I was looking through the Spring Greenhouse application and discovered an existing feature that I was unaware of: we can set a default value when configuring PropertyPlaceholderConfigurer.
1. Set the default value separator in the config. The same PropertyPlaceholderConfigurer bean definition can also be told to ignore a missing properties file ("Ignore file not found") and unresolved tokens ("Ignore when tokens are not found").
2. Set the default values for your properties by appending the separator and the fallback value inside the placeholder itself.
Notes
If unspecified, the default separator is a colon ":"
This option is not available when using the "context" namespace – (submitted Jira ticket: SPR-7794)
From http://gordondickens.com/wordpress/2010/12/06/using-default-values-for-properties-in-spring/
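As a rough illustration of what such a configurer sets up, here is a minimal sketch in plain Java (the original post configures it in XML); the properties file name and the example placeholder below are assumptions, not taken from the Greenhouse application:

import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.core.io.ClassPathResource;

public class PlaceholderConfigFactory {

    // Builds a configurer equivalent to the bean definition discussed above.
    public static PropertyPlaceholderConfigurer placeholderConfigurer() {
        PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer();
        configurer.setLocation(new ClassPathResource("app.properties")); // assumed file name
        configurer.setIgnoreResourceNotFound(true);          // "Ignore file not found"
        configurer.setIgnoreUnresolvablePlaceholders(true);  // "Ignore when tokens are not found"
        configurer.setValueSeparator(":");                   // ":" is already the default separator
        return configurer;
    }
}

With this in place, a placeholder such as ${app.name:MyApp} resolves to the app.name property when it is defined and falls back to "MyApp" otherwise.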
December 30, 2010
by Gordon Dickens
· 51,835 Views
Domain Objects and Their Variants
I read "Domain Object Dependency Injection with Spring" the other day. Actually, what interested me were those comments about the concepts and patterns around "Domain Object". I had been always confused about those buzzwords related to Domain Object: Domain Object, Business Object, DTO, POJO, Anemic Domain Object, Active Record Pattern. After some research, I found several points critical to help understand those concepts (at least for myself): Among those concepts, POJO is not that relevant. It's a general principal applied to all enterprise beans (service beans, dao beans, etc.) not to be framework-specifc, though framework-specific annotations are commonly used in 'POJO' these days. Domain Object == Business Object. They are entity representitives in business layer, which shoud be understood by non-programming people, such as business analysts. Business Object should have both data and behaviour. Anemic Domain Object are those business objects only have data without behaviour. Data Transfer Object (DTO) is a concept within data access layer. DTOs are often used in conjunction with DAOs to represent data retrieved from a DB. DTOs and Domain Objects can be designed to use the same class, even represented by the same object for the same entity. Personally, I think Domain Objects refer to the combination of DTO and Business Objects for plenty of cases. Active Record Pattern: Domain objects have rich behaviour, for example, inject DAO into domain objects to make it has CRUD ability. Understanding the above concepts is just half way, I would like to going further for things behind these domain object variants: On the one hand, people like Martin Fowler thinks domain object should have data and behaviour, which is consistent with OOD. Anemic Domain Object is anti-pattern, which in effect causes 'procedure script' style thick service layer. On the other hand, anemic domain objects are still popular. For myself, the following two facts explain the phenomenon: Active Record Pattern is really painful. I don't like the idea of putting DAO into Domain Object at all. And the clear multi-layer design, makes coding more intuitive and very easy to maintain. Encapsulating behaviour into business object doesn't mean removing service layer at all. Just don't put all behaviour into service layer, this layer should be as thin as possible. What I learned: I will put behaviour only relevant to the individual entity into domain object, for example: validation, calculation, maybe CRUD (I still prefer having separate DAO to fufill the tasks, don't like any huge object) Service layer should serve as a gateway to all above layer, which means any business atomic operation should find its place in service layer. Then, transaction can be managed in one layer. Behaviour involves different entities regardless same type of entity or different types may go into service layer. This is my personal opinion based on what I understand so far. Any comments or critisim are welcome. PS. This is my first post:) Wish it good luck, and Merry Christmas to everyone. More to say: I've read the "Domain Driven Design" refcardz after writing this article, found out some similar points were already mentioned there, though DDD covers broad aspects. I think my article exposes some facts or problems of this topic, which would help people obtain deep insides via further reading. Actually, I am planning to read some DDD book as the second session after current looking up.
December 21, 2010
by Yaozong Zhu
· 28,646 Views · 3 Likes
jQuery Templates – tmpl(), template() and tmplItem()
In the first blog post, I introduced the jQuery Templates plugin made by Microsoft for the jQuery library. In this post I will briefly go through the three main functions supported by the plugin so far.

- tmpl() and $.tmpl()
The purpose of this function is to render the template.
Syntax: tmpl([data], [options]) and $.tmpl(template, [data], [options])
As you can see, the first tmpl() function takes two parameters, while the second takes three. What is the real difference? Both functions do the same thing. The difference is that the second, $.tmpl (with the jQuery sign), has a template parameter, which can be a string containing markup, an HTML element, or the name of a named template. To see this in a real-world example, see the complete example at the bottom of this post.
tmpl() example: $("#myTemplate").tmpl(myData).appendTo("ul");
$.tmpl() example: $.tmpl("namedTemplate", myData).appendTo("ul");

- template() and $.template()
We can use this function to create named templates. In the $.tmpl() example above, the first parameter is a named template. We can create named templates using template() or $.template() and use them anywhere else. So, with this function we simply compile the template by associating it with the given name.
Syntax: template([name]) and $.template(name, template)
Let's see two simple examples:
template() example: $("#myTemplate").template("attendeeTemplate");
$.template() example: $.template("attendeeTemplate", "${Attendee}");
Usage of the above would be $.tmpl("attendeeTemplate", Data).appendTo("ol"); now you see both $.tmpl and $.template in the same scenario.

- tmplItem() and $.tmplItem()
Using this function, we can access the rendered template item. It retrieves an object which has data and nodes properties.
Syntax: tmplItem() and $.tmplItem(element)
And here are the examples:
tmplItem(): var item = $("font").tmplItem();
$.tmplItem(): var item = $.tmplItem("font");

Here is a complete example where I'm using all three functions in one scenario: a small form with Name, Surname, and Speaker inputs, plus "Add New" and "HighLight Last Speaker" buttons, together with the rendered result. You can simply copy the code and paste it into your VS.NET by creating a new page, and it will work fine. You don't need to add the scripts manually since I use the scripts directly from the Microsoft CDN.
December 15, 2010
by Hajan Selmani
· 33,824 Views
A simple and intuitive approach to interface your database with Java
Introduction In recent years, I have experienced the same developer's need again and again. The need for improved persistence support. After lots of years of experience with Java, I have grown tired with all the solutions that are "standard", "J2EE compliant", but in the end, just ever so complicated. I don't deny, there are many good ideas around, that have eventually brought up excellent tools, such as Hibernate, JPA/EJB3, iBatis, etc. But all of those tools seem to go to a single direction without giving up any of that thought: Object-relational Mapping. So you end up using a performant database that cost's 100k+$ of license every year just to abstract it with a "standard" persistence layer. I wanted to go a different direction. And take the best of OR-Mapping (code generation, type safety, object oriented query construction, SQL dialect abstraction, etc) without denying the fact, that beneath, I'm running an RDBMS. That's right. R like Relational. Read on about how jOOQ (Java Object Oriented Querying) succeeds in bringing the "relational to the object" Abstract Many companies and software projects seem to implement one of the following two approaches to interfacing Java with SQL The very basic approach: Using JDBC directly or adding a home-grown abstraction on top of it. There is a lot of manual work associated with the creation, maintenance, and extension of the data layer code base. Developers can easily use the full functionality of the underlying database, but will always operate on a very low level, concatenating Strings all over the place. The very sophisticated approach: There is a lot of configuration and a steep learning curve associated with the introduction of sophisticated database abstraction layers, such as the ones created by Hibernate, JPA, iBatis, or even plain old EJB entity beans. While the generated objects and API's may allow for easy manipulation of data, the setup and maintenance of the abstraction layer may become very complex. Besides, these abstraction layers provide so much abstraction on top of SQL, that SQL-experienced developers have to rethink. A different paradigm I tried to find a new solution addressing many issues that I think most developers face every day. With jOOQ - Java Object Oriented Querying, I want to embrace the following paradigm: SQL is a good thing. Many things can be expressed quite nicely in SQL. The relational data model is a good thing. It should not be abstracted by OR-Mapping SQL has a structure and syntax. It should not be expressed using "low-level" String concatenation. Variable binding tends to be very complex when dealing with major queries. POJO's (or data transfer objects) are great when writing Java code manipulating database data. POJO's are a pain to write and maintain manually. Source code generation is the way to go The database comes first. Then the code on top of it. Yes, you do have stored procedures and user defined types (UDT's) in your legacy database. Your database-tool should support that. I think that these key ideas are useful for a very specific type of developer. That specific developer interfaces Java with huge legacy databases. knows SQL well and wants to use it extensively. doesn't want to learn any new language (HQL, JPQL, etc) doesn't want to spend one minute fine-tuning some sophisticated XML-configuration. wants little abstraction over SQL, because his software is tightly coupled with his database. Something that I think the guys at Hibernate or JPA seem to have ignored. 
needs a strong but light-weight library for database access. For instance to develop for mobile devices. How does jOOQ fit in this paradigm? Not only does jOOQ completely address the above paradigm, it does so quite elegantly. Let's say you have this database that models your bookstore. And you need to run a query selecting all books by authors born after 1920. You know how to do this in SQL: -- Select all books by authors born after 1920, named "Paulo" from a catalogue: SELECT * FROM t_author a JOIN t_book b ON a.id = b.author_id WHERE a.year_of_birth > 1920 AND a.first_name = 'Paulo' ORDER BY b.title The same query expressed with jOOQ-Objects // Instanciate your factory using a JDBC connection // and specify the SQL dialect you're using. Of course you can // have several factories in your application. Factory create = new Factory(connection, SQLDialect.MYSQL); // Create the query using generated, type-safe objects. You could // write even less code than that with static imports! SelectQuery q = create.selectQuery(); q.addFrom(TAuthor.T_AUTHOR); q.addJoin(TBook.T_BOOK, TAuthor.ID, TBook.AUTHOR_ID); // Note how you do not need to worry about variable binding. // jOOQ does that for you, dynamically q.addCompareCondition(TAuthor.YEAR_OF_BIRTH, 1920, Comparator.GREATER); // The AND operator and EQUALS comparator are implicit here q.addCompareCondition(TAuthor.FIRST_NAME, "Paulo"); q.addOrderBy(TBook.TITLE); The jOOQ query object model uses generated classes, such as TAuthor or TBook. Like many other code generation tools do, jOOQ will generate static final objects for the fields contained in each table. In this case, TAuthor holds a member called TAuthor.T_AUTHOR to represent the table itself, and members such as TAuthor.ID, TAuthor.YEAR_OF_BIRTH, etc to hold the table's fields. But you could also use the jOOQ DSL API to stay closer to SQL // Do it all "on one line". SelectQuery q = create.select() .from(T_AUTHOR) .join(T_BOOK).on(TAuthor.ID.equal(TBook.AUTHOR_ID)) .where(TAuthor.YEAR_OF_BIRTH.greaterThan(1920) .and(TAuthor.FIRST_NAME.equal("Paulo"))) .orderBy(TBook.TITLE).getQuery(); jOOQ ships with a DSL (Domain Specific Language) somewhat similar to Linq that facilitates query creation. The strength of DSL becomes obvious when you are using jOOQ constructs such as the decode function: // Create a case statement. Unfortunately "case" is a reserved word in Java // Hence the method is called DECODE after its related Oracle function Field nationality = create.decode() .when(TAuthor.FIRST_NAME.equal("Paulo"), "brazilian") .when(TAuthor.FIRST_NAME.equal("George"), "english") .otherwise("unknown"); // "else" is also a reserved word ;-) The above will render this SQL code: CASE WHEN T_AUTHOR.FIRST_NAME = 'Paulo' THEN 'brazilian' WHEN T_AUTHOR.FIRST_NAME = 'George' THEN 'english' ELSE 'unknown' END Use the DSL API when: You want your Java code to look like SQL You want your IDE to help you with auto-completion (you will not be able to write select .. order by .. where .. 
join or any of that stuff) Use the regular API when: You want to create your query step-by-step, creating query parts one-by-one You need to assemble your query from various places, passing the query around, adding new conditions and joins on the way In any case, all API's will construct the same underlying implementation object, and in many cases, you can combine the two approaches Once you have established the query, execute it and fetch results // Execute the query and fetch the results q.execute(); Result result = q.getResult(); // Result is Iterable, so you can loop over the resulting records like this: for (Record record : result) { // Type safety assured with generics String firstName = record.getValue(TAuthor.FIRST_NAME); String lastName = record.getValue(TAuthor.LAST_NAME); String title = record.getValue(TBook.TITLE); Integer publishedIn = record.getValue(TBook.PUBLISHED_IN); System.out.println(title + " (published in " + publishedIn + ") by " + firstName + " " + lastName); } Or simply write for (Record record : q.fetch()) { // [...] } Fetch data from a single table and use jOOQ as a simple OR-Mapper // Similar query, but don't join books to authors. // Note the generic record type that is added to your query: SimpleSelectQuery q = create.select(T_AUTHOR) .where(TAuthor.YEAR_OF_BIRTH.greaterThan(1920) .and(TAuthor.FIRST_NAME.equal("Paulo"))) .orderBy(TAuthor.LAST_NAME).getQuery(); // When executing this query, also Result holds a generic type: q.execute(); Result result = q.getResult(); for (TAuthorRecord record : result) { // With generate record classes, you can use generated getters and setters: String firstName = record.getFirstName(); String lastName = record.getLastName(); System.out.println("Author : " + firstName + " " + lastName + " wrote : "); // Use generated foreign key navigation methods for (TBookRecord book : record.getTBooks()) { System.out.println(" Book : " + book.getTitle()); } } jOOQ not only generates code to model your schema, but it also generates domain model classes to represent tuples in your schema. In the above example, you can see how selecting from the TAuthor.T_AUTHOR table will produce results containing well-defined TAuthorRecord types. These types hold getters and setters like any POJO, but also some more advanced OR-code, such as foreign key navigator methods like // Return all books for an author that are obtained through the // T_AUTHOR.ID = T_BOOK.AUTHOR_ID foreign key relationship public List getTBooks() Now, for true OR-mapping, you would probably prefer mature and established frameworks such as Hibernate or iBATIS. Don't panic. Better integration with Hibernate and JPA is on the feature roadmap. The goals of jOOQ should not be to reimplement things that are already well-done, but to bring true SQL to Java Execute CRUD operations with jOOQ as an OR-mapper // Create a new record and insert it into the database TBookRecord book = create.newRecord(T_BOOK); book.setTitle("My first book"); book.store(); // Update it with new values book.setPublishedIn(2010); book.store(); // Delete it book.delete(); Nothing new in the OR-mapping world. These ideas have been around since EJB entity beans or even before. It's still quite useful for simple purposes. Execute CRUD operations the way you're used to You don't need to go into that OR-mapping business. You can create your own INSERT, "INSERT SELECT", UPDATE, DELETE queries. 
Some examples: InsertQuery i = create.insertQuery(T_AUTHOR); i.addValue(TAuthor.FIRST_NAME, "Hermann"); i.addValue(TAuthor.LAST_NAME, "Hesse"); i.execute(); UpdateQuery u = create.updateQuery(T_AUTHOR); u.addValue(TAuthor.FIRST_NAME, "Hermie"); u.addCompareCondition(TAuthor.LAST_NAME.equal("Hesse")); u.execute(); // etc... Now for the advanced stuff Many tools can do similar stuff as what we have seen before. Especially Hibernate and JPA have a feature called criteria query, that provides all of the type-safety and query object building using DSL's while being based on a solid (but blown-up) underlying architecture. An important goal for jOOQ is to provide you with all (or at least: most) SQL features that you are missing in other frameworks but that you would like to use because you think SQL is a great thing but JDBC is too primitive for the year 2010, 2011, or whatever year we're in, when you're reading this. So, jOOQ comes along with aliasing, nested selects, unions and many other SQL features. Check out the following sections: Aliasing That's a very important feature. How could you have self-joins or in/exists clauses without aliasing? Let's say we have a "T_TREE" table with fields "ID", "PARENT_ID", and "NAME". If we want to find all parent/child NAME couples, we will need to execute a self-join on T_TREE. In SQL, this reads: SELECT parent.NAME parent_name, child.NAME child_name FROM T_TREE parent JOIN T_TREE child ON (parent.ID = child.PARENT_ID) No problem for jOOQ. We'll write: // Create table aliases Table parent = TTree.T_TREE.as("parent"); Table child = TTree.T_TREE.as("child"); // Create field aliases from aliased table Field parentName = parent.getField(TTree.NAME).as("parent_name"); Field childName = child.getField(TTree.NAME).as("child_name"); // Execute the above select Record record = create.select(parentName, childName) .from(parent) .join(child).on(parent.getField(TTree.ID).equal(child.getField(TTree.PARENT_ID))) .fetchAny(); // The aliased fields can be read from the record as in the simpler examples: record.getValue(parentName); Functionally, it is easy to see how this works. Look out for future releases of jOOQ for improvements in the DSL support of field and table aliasing IN clause The org.jooq.Field class provides many methods to construct conditions. In previous examples, we have seen how to create regular compare conditions with = < <= >= > != operators. Now Field also has a couple of methods to create IN conditions: // Create IN conditions with constant values that are bound to the // query via JDBC's '?' bind variable placeholders Condition in(T... values); Condition in(Collection values); Condition notIn(T... values); Condition notIn(Collection values); // Create IN conditions with a sub-select Condition in(QueryProvider query) Condition notIn(QueryProvider query) The constant set of values for IN conditions is an obvious feature. But the sub-select is quite nice: -- Select authors with books that are sold out SELECT * FROM T_AUTHOR WHERE T_AUTHOR.ID IN (SELECT DISTINCT T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.STATUS = 'SOLD OUT'); In jOOQ, this translates to create.select(T_AUTHOR) .where (TAuthor.ID.in(create.selectDistinct(TBook.AUTHOR_ID) .from(T_BOOK) .where(TBook.STATUS.equal(TBookStatus.SOLD_OUT)))); EXISTS clause Very similar statements can be expressed with the EXISTS clause. 
The above set of authors could also be obtained with this statement: -- Select authors with books that are sold out SELECT * FROM T_AUTHOR a WHERE EXISTS (SELECT 1 FROM T_BOOK WHERE T_BOOK.STATUS = 'SOLD OUT' AND T_BOOK.AUTHOR_ID = a.ID); In jOOQ (as of version 1.5.0), this translates to // Alias the author table Table a = T_AUTHOR.as("a"); // Use the aliased table in the select statement create.selectFrom(a) .where(create.exists(create.select(create.constant(1)) .from(T_BOOK) .where(TBook.STATUS.equal(TBookStatus.SOLD_OUT) .and(TBook.AUTHOR_ID.equal(a.getField(TAuthor.ID)))))); UNION clauses SQL knows of four types of "UNION operators": UNION UNION ALL EXCEPT INTERSECT All of these operators are supported by all types of select queries. So in order to write things like: SELECT TITLE FROM T_BOOK WHERE PUBLISHED_IN > 1945 UNION SELECT TITLE FROM T_BOOK WHERE AUTHOR_ID = 1 You can write the following jOOQ logic: create.select(TBook.TITLE).from(T_BOOK).where(TBook.PUBLISHED_IN.greaterThan(1945)).union( create.select(TBook.TITLE).from(T_BOOK).where(TBook.AUTHOR_ID.equal(1))); Of course, you can then again nest the union query in another one (but be careful to correctly use aliases): -- alias_38173 is an example of a generated alias, -- generated by jOOQ for union queries SELECT alias_38173.TITLE FROM ( SELECT T_BOOK.TITLE, T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.PUBLISHED_IN > 1945 UNION SELECT T_BOOK.TITLE, T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.AUTHOR_ID = 1 ) alias_38173 ORDER BY alias_38173.AUTHOR_ID DESC In jOOQ: Select union = create.select(TBook.TITLE, TBook.AUTHOR_ID).from(T_BOOK).where(TBook.PUBLISHED_IN.greaterThan(1945)).union( create.select(TBook.TITLE, TBook.AUTHOR_ID).from(T_BOOK).where(TBook.AUTHOR_ID.equal(1))); create.select(union.getField(TBook.TITLE)) .from(union) .orderBy(union.getField(TBook.AUTHOR_ID).descending()); Note that a UNION query will automatically generate an alias if you use it as a nested table. In order to nest this query correctly, you need to get the aliased field from the query as seen in the example abov. Other, non-standard SQL features See more examples about stored procedures, UDT's, enums, etc on https://sourceforge.net/apps/trac/jooq/wiki/Examples Summary jOOQ brings the relational world to Java without trying to cover up its origins. jOOQ is relational. And object oriented. Just in a different way. Try it for yourself and I would be very glad for any feedback you may have. Find jOOQ on http://jooq.sourceforge.net Cheers Lukas Eder
December 14, 2010
by Lukas Eder
· 3,661 Views
Getting MySQL to work with Entity Framework 4.0
Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up an experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data into EF. I will also give some suggestions on how to deploy your applications to hosting and cloud environments.

MySQL stuff
As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop when I'm offline. The other thing you need is the MySQL Connector for the .NET Framework. Currently there is a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET: MySQL Community Server, Connector/NET 6.3.5. If you are not a big fan of phpMyAdmin, you can try out HeidiSQL, a free desktop client for MySQL. I am using it and I am really happy with this program. NB! If you have just set up MySQL, also create a database with a couple of tables in it. To use all features of Entity Framework 4.0, I suggest you use InnoDB or another engine that supports foreign keys.

Connecting MySQL to Entity Framework 4.0
Now create a simple console project using Visual Studio 2010 and go through the following steps.
1. Add a new ADO.NET Entity Data Model to your project. Give the model a name that is informative and that you will recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK.
2. Set up the database connection. Change the data connection and select MySQL Database as the data source. You may also need to set the provider – there is only one choice; select it if the data provider combo shows an empty value. Click OK and insert the connection information you are asked for. Don't forget to click the Test Connection button to see if your connection data is okay. If everything works, click OK.
3. Insert the context name. Now you should see the following dialog. Insert your data model name for the application configuration file and click OK. Click the Next button.
4. Select tables for the model. Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" – it is damn annoying to get them out of the model later. Also insert an informative and easy-to-remember name for your model. Click the Finish button.
5. Define your classes. Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically – that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications, my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members. NB! Don't forget to look at the properties of the classes (Properties window) and modify their set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities).
6. Let's test! Now let's write a simple test program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events I attended.
using(var context = new MySqlEntities())
{
    var myEvents = from e in context.Events
                   from a in e.Attendees
                   where a.Person.FirstName == "Gunnar"
                      && a.Person.LastName == "Peipman"
                   select e;

    Console.WriteLine("My events: ");

    foreach(var e in myEvents)
    {
        Console.WriteLine(e.Title);
    }
}

Console.ReadKey();

When I run it, I get the list of my events. I checked against the database and the results are correct. On the first run the connector seems slow, but this is only a first-run effect: once the connector is loaded into memory by Entity Framework, it works fast from that point on.

Now let's see what we have to do to get our program working in hosting and cloud environments where the MySQL connector is not installed.

Deploying the application to hosting and cloud environments

If your hosting or cloud environment has no MySQL connector installed, you have to ship the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools):

  • MySQL.Data
  • MySQL.Data.Entity
  • MySQL.Web

You can also add references to these assemblies and mark the references as Copy Local so the assemblies are copied to the binary folder of your application. If you have references to these assemblies, you don't have to include them in your project from the bin folder. Also add the following block to your application configuration file: ... ...

Conclusion

It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework, we used the InnoDB engine because it supports foreign keys. It was also easy to query our model. To get our project online, we only needed some easy modifications to our project and configuration files.
December 10, 2010
by Gunnar Peipman
· 24,222 Views
Changing Eclipse Default Encoding to UTF-8 for JSP files
Try creating a new JSP file in Eclipse and you'll notice that the generated JSP page directive specifies a non-UTF-8 encoding (typically ISO-8859-1). For better I18N and L10N support, it is recommended to use UTF-8 encoding wherever possible. So, how do we change the default JSP file encoding to UTF-8 in Eclipse? Simple. Just do this:

  • In Eclipse, go to Window -> Preferences -> Web -> JSP Files
  • Select UTF-8 from the Encoding dropdown box there.

That's it! And if you wonder how this change works, you can see for yourself. In the same Preferences window, go to Preferences -> Web -> JSP Files -> Editor -> Templates. On the right-hand side you'll see a list of templates defined for JSP files, and in the new JSP file template you can see that the page directive uses an ${encoding} variable. As you might have guessed, ${encoding} is replaced by whatever we set in the step above. The same method can be used to change the default encoding for other file types too (CSS, HTML).

From http://veerasundar.com/blog/2010/12/changing-eclipse-default-encoding-to-utf-8-for-jsp-files/
December 8, 2010
by Veera Sundar
· 26,128 Views · 1 Like