The Latest Java Topics

Spring Data with Redis
The Spring Data project provides a solution for accessing data stored in new, emerging technologies like NoSQL databases and cloud-based services. When we look into the SpringSource git repository we see a lot of spring-data sub-projects:

spring-data-commons: common interfaces and utility classes for the other spring-data projects.
spring-data-column: support for column-based databases. It has not started yet, but there will be support for Cassandra and HBase.
spring-data-document: support for document databases. Currently MongoDB and CouchDB are supported.
spring-data-graph: support for graph-based databases. Currently Neo4j is supported.
spring-data-keyvalue: support for key-value databases. Currently Redis and Riak are supported, and Membase will probably be supported in the future.
spring-data-jdbc-ext: JDBC extensions; for example, Oracle RAC connection failover is implemented.
spring-data-jpa: simplifies a JPA-based data access layer.

I would like to share with you how you can use Redis. The first step is to download it from the redis.io web page. try.redis-db.com is a useful site where we can run Redis commands; it also provides a step-by-step tutorial that shows all the structures Redis supports (list, set, sorted set and hashes) and some useful commands. A lot of reputable sites use Redis today.

After downloading and unpacking we should compile Redis (version 2.2; the release candidate is the preferable one to use, since some commands do not work in version 2.0.4):

make
sudo make install

Once we run these commands we are all set to run the following five commands:

redis-benchmark - for benchmarking the Redis server.
redis-check-aof - checks the AOF (append-only file) and can repair it.
redis-check-dump - checks rdb files for unprocessable opcodes.
redis-cli - the Redis client.
redis-server - the Redis server.

We can test the Redis server:

redis-server
[1055] 06 Jan 18:19:15 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[1055] 06 Jan 18:19:15 * Server started, Redis version 2.0.4
[1055] 06 Jan 18:19:15 * The server is now ready to accept connections on port 6379
[1055] 06 Jan 18:19:15 - 0 clients connected (0 slaves), 1074272 bytes in use

and the Redis client:

redis-cli
redis> set my-super-key "my-super-value"
OK

Now we create a simple Java project in order to show how simple a spring-data-redis module really is:

mvn archetype:create -DgroupId=info.pietrowski -DpackageName=info.pietrowski.redis -DartifactId=spring-data-redis -Dpackage=jar

Next we have to add the Spring milestone repository to pom.xml and add spring-data-redis as a dependency; after that all required dependencies will be fetched. Then we create a resources folder under the main folder and create application.xml, which will hold all the configuration.

We can configure the JedisConnectionFactory in two different ways. One: we can provide a JedisShardInfo object in the shardInfo property. Two: we can provide host (default localhost), port (default 6379), password (default empty) and timeout (default 2000) properties. One thing to keep in mind is that the JedisShardInfo object takes precedence and allows you to set up a weight, but it only allows constructor injection. We can set up the factory to use connection pooling by setting the pooling property to 'true' (the default). See the application.xml comments for three different ways of configuration.

Note: there are two different supported libraries, Jedis and JRedis. They have very similar names and both have the same factory name. See the difference:

org.springframework.data.keyvalue.redis.connection.jedis.JedisConnectionFactory
org.springframework.data.keyvalue.redis.connection.jredis.JredisConnectionFactory

Similar to what we do elsewhere in Spring, we configure the template object by providing it with a connection factory, and we perform all the operations through this template object. By default we only need to provide the connection factory, but there are more properties we can set:

exposeConnection (default false) - whether the real connection or a proxy object is returned.
keySerializer, hashKeySerializer, valueSerializer, hashValueSerializer (default JdkSerializationRedisSerializer), which delegate serialization to the Java serialization mechanism.
stringSerializer (default StringRedisSerializer), a simple String to byte[] (and back) serializer with UTF-8 encoding.

We are ready to execute some code that talks to the Redis instance. Spring Data provides us with two ways of interacting: first, by using the execute method and providing a RedisCallback object; second, by using the *Operations helpers (explained later).

When we use RedisCallback we have access to the low-level Redis commands. See this list of interfaces (I won't list all the methods here because it is a huge list):

RedisConnection - gathers all Redis commands plus connection management.
RedisCommands - gathers all Redis commands (listed below).
RedisHashCommands - hash-specific Redis commands.
RedisListCommands - list-specific Redis commands.
RedisSetCommands - set-specific Redis commands.
RedisStringCommands - key/value-specific Redis commands.
RedisTxCommands - transaction/batch-specific Redis commands.
RedisZSetCommands - sorted-set-specific Redis commands.

Check the RedisCallbackExample class. This was the hard way, and the problem is that we have to convert our objects into byte arrays in both directions; the second way is easier. Spring Data also provides *Operations objects, giving us a much simpler API in which all byte <-> object conversion is done by the serializers we set up (or the defaults). The higher-level API (you will easily recognize the *Commands equivalents):

HashOperations - Redis hash operations.
ListOperations - Redis list operations.
SetOperations - Redis set operations.
ValueOperations - Redis 'string' operations.
ZSetOperations - Redis sorted set operations.

Most of the methods take the key as the first parameter, so there is an even better API for multiple operations on the same key:

BoundHashOperations - Redis hash operations for a specific key.
BoundListOperations - Redis list operations for a specific key.
BoundSetOperations - Redis set operations for a specific key.
BoundValueOperations - Redis 'string' operations for a specific key.
BoundZSetOperations - Redis sorted set operations for a specific key.

Check the RedisCallbackExample class to see some easy examples of *Operations usage. One important thing to mention is that you should use string serializers for keys, otherwise you will have problems with other clients, because standard Java serialization adds class information. Otherwise you end up with keys such as:

"\xac\xed\x00\x05t\x00\x05atInt"
"\xac\xed\x00\x05t\x00\nmySuperKey"
"\xac\xed\x00\x05t\x00\bsuperKey"

Up until now we have just looked at the API for Redis, but Spring Data offers more. All the cool stuff is in the org.springframework.data.keyvalue.redis.support package and its sub-packages. We have:

RedisAtomicInteger - an atomic integer (CAS operation) backed by Redis.
RedisAtomicLong - the same as the previous one, for Long.
RedisList - a Redis extension of List, Queue, Deque, BlockingDeque and BlockingQueue with two additional methods, List range(start, end) and RedisList trim(start, end).
RedisSet - a Redis extension of Set with additional methods: diff, diffAndStore, intersect, intersectAndStore, union, unionAndStore.
RedisZSet - a Redis extension for sorted sets. Note that a Comparator is not applicable here, so this interface extends the normal Set and provides methods similar to SortedSet.
RedisMap - a Redis extension of Map with an additional Long increment(key, delta) method.

Every interface currently has one default implementation. Check application-support.xml for examples of configuration and the RedisSupportClassesExample class for examples of use. There is a lot of useful information in the comments as well.

Summary

The library is a first milestone release, so there are minor bugs, the documentation isn't as polished as we are used to, and the current version requires a Redis server that is not yet stable (the 2.2 release candidate). But this is definitely a great library which allows us to use all this cool NoSQL stuff in a "standard" Spring Data Access manner. Awesome job!

This post is only really useful if you check out the code from Bitbucket; for the lazy ones, there is a spring-data-redis zip file as well. This post is originally from http://pietrowski.info/2011/01/spring-data-redis-tutorial/
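To make the configuration and the *Operations API concrete, here is a minimal, self-contained sketch in plain Java (no application.xml). Package names follow the milestone layout described above (they moved to org.springframework.data.redis in later releases) and opsForValue() reflects the later stable naming, so treat this as an illustration rather than a drop-in snippet; it assumes a Redis server on localhost:6379.

import org.springframework.data.keyvalue.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.keyvalue.redis.core.RedisTemplate;
import org.springframework.data.keyvalue.redis.serializer.StringRedisSerializer;

public class RedisTemplateExample {

    public static void main(String[] args) {
        // Defaults per the text above: localhost, port 6379, empty password, 2000 ms timeout
        JedisConnectionFactory connectionFactory = new JedisConnectionFactory();
        connectionFactory.afterPropertiesSet();

        RedisTemplate<String, String> template = new RedisTemplate<String, String>();
        template.setConnectionFactory(connectionFactory);
        // Plain UTF-8 string serializers so other clients can read the keys and values
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.afterPropertiesSet();

        // ValueOperations hides the byte[] conversion you would otherwise do with a RedisCallback
        template.opsForValue().set("my-super-key", "my-super-value");
        System.out.println(template.opsForValue().get("my-super-key"));
    }
}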
February 3, 2011
by Sebastian Pietrowski
· 30,623 Views
JUnit 4.9 - Class and Suite Level Rules
If you have worked with the JUnit Rule API, which was introduced in version 4.8, you probably also think that rules are very useful. For example, there is a Rule called TemporaryFolder which creates files and folders before a test is executed and deletes them after the test method finishes (whether the test passes or fails). For those who are not familiar with the Rule API, a Rule is an alteration in how a test method, or set of methods, is run and reported.

Before version 4.9 a Rule could only be applied to a test method, not to a test class or a JUnit test suite. But with JUnit 4.9, which is not yet released, you can use the @ClassRule annotation to apply a rule to a test class or a test suite. If you want to use JUnit 4.9, download it from the JUnit git repository.

The @ClassRule annotation can be applied to any public static field of type org.junit.rules.TestRule. A class-level rule can be applied in scenarios where you would normally use the @BeforeClass and @AfterClass annotations: for example, when you want to start a server or any other external resource before a test class or test suite, or when you want to make sure that your test suite or test class runs within a specified time. The advantage of class-level rules is that they can be reused among different modules and classes.

Let's take an example where we want to make sure that our test suite runs within x seconds, otherwise the tests should time out. As you can see below, the test suite AllTests runs TestCase1 and TestCase2. I have defined a suite-level rule which makes sure that the tests run within three seconds, otherwise the suite will fail.

Test suite:

@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase2.class })
public class AllTests {

    @ClassRule
    public static Timeout timeout = new Timeout(3000);
}

TestCase1:

public class TestCase1 {

    @Test
    public void test1() throws Exception {
        Thread.sleep(1000);
        Assert.assertTrue(true);
    }
}

TestCase2:

public class TestCase2 {

    @Test
    public void test2() throws Exception {
        Thread.sleep(1000);
        Assert.assertTrue(true);
    }
}

This is the most important feature that will be released in version 4.9 of JUnit.
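For the server scenario mentioned above, a class-level rule can also be built on JUnit's ExternalResource base class. The sketch below is only illustrative: EmbeddedServer is a placeholder for whatever resource you need to start, not a JUnit class.

import org.junit.ClassRule;
import org.junit.rules.ExternalResource;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase2.class })
public class ServerSuite {

    @ClassRule
    public static ExternalResource server = new ExternalResource() {

        @Override
        protected void before() throws Throwable {
            // Started once, before any test class in the suite runs.
            // EmbeddedServer is a placeholder for your own server bootstrap code.
            EmbeddedServer.start();
        }

        @Override
        protected void after() {
            // Stopped once, after the whole suite has finished.
            EmbeddedServer.stop();
        }
    };
}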
January 31, 2011
by Shekhar Gulati
· 36,995 Views
HOWTO: Partially Clone an SVN Repo to Git, and Work With Branches
I've blogged a few times now about Git (which I pronounce with a hard 'g' a la "get", as it's supposed to be named for Linus Torvalds, a self-described git, but which I've also heard pronounced with a soft 'g' like "jet"). Either way, I'm finding it way more efficient and less painful than CVS and SVN combined. So, to continue this series ([1], [2], [3]), here is how (and why) to pull an SVN repo down as a Git repo, but with the omission of old (irrelevant) revisions and branches.

Using SVN for SVN repos

In days of yore when working with the JBoss Tools and JBoss Developer Studio SVN repos, I would keep a copy of everything in trunk on disk, plus the current active branch (most recent milestone or stable branch maintenance). With all the SVN metadata, this would eat up substantial amounts of disk space but still require network access to pull any old history of files. The two repos were about 2G of space on disk, for each branch. Sure, there's tooling to be able to diff and merge between branches w/o having both branches physically checked out, but nothing beats the ability to place two folders side by side OFFLINE for deep comparisons. So, at times, I would burn as much as 6-8G of disk simply to have a few branches of source for comparison and merging. With my painfully slow IDE drive, this would grind my machine to a halt, especially when doing any SVN operation or counting files / disk usage.

Using Git for SVN repos naively

Recently, I started using git-svn to pull the whole JBDS repo into a local Git repo, but it was slow to create and still unwieldy. And the JBoss Tools repo was too large to even create as a Git repo - the operation would run out of memory while processing old revisions of code to play forward. At this point, I was stuck having individual Git repos for each JBoss Tools component (major source folder) in SVN: archives, as, birt, bpel, build, etc. It worked, but replicating it when I needed to create a matching repo-collection for a branch was painful and time-consuming. As well, all the old revision information was eating even more disk than before:

jbosstools' trunk as multiple git-svn clones: 6.1G
devstudio's trunk as single git-svn clone: 1.3G

So, now, instead of a couple Gb per branch, I was at nearly 4x as much disk usage. But at least I could work offline and not deal w/ network-intense activity just to check history or commit a change. Still, far from ideal.

Cloning SVN with standard layout & partial history

This past week, I discovered two ways to make the git-svn experience at least an order of magnitude better:

Standard layout (-s) - this allows your generated Git repo to contain the usual trunk, branches/* and tags/* layout that's present in the source SVN repo. This is a win because it means your repo will contain the branch information, so you can easily switch between branches within the same repo on disk. No more remote network access needed!

Revision filter (-r) - this allows your generated Git repo to start from a known revision number instead of starting at its birth. Now, instead of taking hours to generate, you can get a repo in minutes by excluding irrelevant (ancient) revisions.

So, why is this cool?
Because now, instead of having 2G of source+metadata to copy when I want to do a local comparison between branches, the size on disk is merely:

jbosstools' trunk as single git-svn clone w/ trunk and single branch: 1.3G
devstudio's trunk as single git-svn clone w/ trunk and single branch: 0.13G

So, not only is the footprint smaller, but the performance is better and I need never do a full clone (or svn checkout) again - instead, I can just copy the existing Git repo, and rebase it to a different branch. Instead of hours, this operation takes seconds (or minutes) and happens without the need for a network connection. Okay, enough blather. Show me the code!

Check out the repo, including only the trunk & most recent branch:

# Figure out the revision number based on when a branch was created, then
# from r28571, returns -r28571:HEAD
rev=$(svn log --stop-on-copy \
  http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x \
  | egrep "r[0-9]+" | tail -1 | sed -e "s#\(r[0-9]\+\).\+#-\1:HEAD#")

# now, fetch repo starting from the branch's initial commit
git svn clone -s $rev http://svn.jboss.org/repos/jbosstools jbosstools_GIT

Now you have a repo which contains trunk & a single branch:

git branch -a # list local (Git) and remote (SVN) branches
* master
  remotes/jbosstools-3.2.x
  remotes/trunk

Switch to the branch:

git checkout -b local/jbosstools-3.2.x jbosstools-3.2.x # connect a new local branch to remote one
Checking out files: 100% (609/609), done.
Switched to a new branch 'local/jbosstools-3.2.x'

git svn info # verify now working in branch
URL: http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x
Repository Root: http://svn.jboss.org/repos/jbosstools

Switch back to trunk:

git checkout -b local/trunk trunk # connect a new local branch to remote trunk
Switched to a new branch 'local/trunk'

git svn info # verify now working in branch
URL: http://svn.jboss.org/repos/jbosstools/trunk
Repository Root: http://svn.jboss.org/repos/jbosstools

Rewind your changes, pull updates from the SVN repo, apply your changes (won't work if you have local uncommitted changes):

git svn rebase

Fetch updates from the SVN repo (ignoring local changes?):

git svn fetch

Create a new branch (remotely with SVN):

svn copy \
  http://svn.jboss.org/repos/jbosstools/branches/jbosstools-3.2.x \
  http://svn.jboss.org/repos/jbosstools/branches/some-new-branch

From http://divby0.blogspot.com/2011/01/howto-partially-clone-svn-repo-to-git.html
January 28, 2011
by Nick Boldt
· 34,704 Views
Apache Solr: Get Started, Get Excited!
we've all seen them on various websites. crappy search utilities. they are a constant reminder that search is not something you should take lightly when building a website or application. search is not just google's game anymore. when a java library called lucene was introduced into the apache ecosystem, and then solr was built on top of that, open source developers began to wield some serious power when it came to customizing search features. in this article you'll be introduced to apache solr and a wealth of applications that have been built with it. the content is divided as follows: introduction setup solr applications summary 1. introduction apache solr is an open source search server. it is based on the full text search engine called apache lucene . so basically solr is an http wrapper around an inverted index provided by lucene. an inverted index could be seen as a list of words where each word-entry links to the documents it is contained in. that way getting all documents for the search query "dzone" is a simple 'get' operation. one advantage of solr in enterprise projects is that you don't need any java code, although java itself has to be installed. if you are unsure when to use solr and when lucene, these answers could help. if you need to build your solr index from websites, you should take a look into the open source crawler called apache nutch before creating your own solution. to be convinced that solr is actually used in a lot of enterprise projects, take a look at this amazing list of public projects powered by solr . if you encounter problems then the mailing list or stackoverflow will help you. to make the introduction complete i would like to mention my personal link list and the resources page which lists books, articles and more interesting material. 2. setup solr 2.1. installation as the very first step, you should follow the official tutorial which covers the basic aspects of any search use case: indexing - get the data of any form into solr. examples: json, xml, csv and sql-database. this step creates the inverted index - i.e. it links every term to its documents. querying - ask solr to return the most relevant documents for the users' query to follow the official tutorial you'll have to download java and the latest version of solr here . more information about installation is available at the official description . next you'll have to decide which web server you choose for solr. in the official tutorial, jetty is used, but you can also use tomcat. when you choose tomcat be sure you are setting the utf-8 encoding in the server.xml . i would also research the different versions of solr, which can be quite confusing for beginners: the current stable version is 1.4.1. use this if you need a stable search and don't need one of the latest features. the next stable version of solr will be 3.x the versions 1.5 and 2.x will be skipped in order to reach the same versioning as lucene. version 4.x is the latest development branch. solr 4.x handles advanced features like language detection via tika, spatial search , results grouping (group by field / collapsing), a new "user-facing" query parser ( edismax handler ), near real time indexing, huge fuzzy search performance improvements, sql join-a like feature and more. 2.2. indexing if you've followed the official tutorial you have pushed some xml files into the solr index. this process is called indexing or feeding. 
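as a quick illustration of feeding documents programmatically rather than posting xml files, here is a minimal sketch using the solrj client library (mentioned below); it is only a sketch, assuming a local solr 1.4 instance on the default port, and the field names are examples that must exist in your schema.xml:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class FeedingExample {

    public static void main(String[] args) throws Exception {
        // assumes a local solr 1.4 instance on the default port
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "book-1");        // example fields - they must match your schema.xml
        doc.addField("title", "superman");
        doc.addField("type", "book");

        server.add(doc);
        server.commit();                     // the document becomes searchable after the commit
    }
}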
there are a lot more possibilities to get data into solr: using the data import handler (dih) is a really powerful language neutral option. it allows you to read from a sql database, from csv, xml files, rss feeds, emails, etc. without any java knowledge. dih handles full-imports and delta-imports. this is necessary when only a small amount of documents were added, updated or deleted. the http interface is used from the post tool, which you have already used in the official tutorial to index xml files. client libraries in different languages also exist. (e.g. for java (solrj) or python ). before indexing you'll have to decide which data fields should be searchable and how the fields should get indexed. for example, when you have a field with html in it, then you can strip irrelevant characters , tokenize the text into 'searchable terms', lower case the terms and finally stem the terms . in contrast, if you would have a field with text in it that should not be interpreted (e.g. urls) you shouldn't tokenize it and use the default field type string. please refer to the official documentation about field and field type definitions in the schema.xml file. when designing an index keep in mind the advice from mauricio : "the document is what you will search for. " for example, if you have tweets and you want to search for similar users, you'll need to setup a user index - created from the tweets. then every document is a user. if you want to search for tweets, then setup a tweet index; then every document is a tweet. of course, you can setup both indices with the multi index options of solr. please also note that there is a project called solr cell which lets you extract the relevant information out of several different document types with the help of tika. 2.3. querying for debugging it is very convenient to use the http interface with a browser to query solr and get back xml. use firefox and the xml will be displayed nicely: you can also use the velocity contribution , a cross-browser tool, which will be covered in more detail in the section about 'search application prototyping' . to query the index you can use the dismax handler or standard query handler . you can filter and sort the results: q=superman&fq=type:book&sort=price asc you can also do a lot more ; one other concept is boosting. in solr you can boost while indexing and while querying. to prefer the terms in the title write: q=title:superman^2 subject:superman when using the dismax request handler write: q=superman&qf=title^2 subject check out all the various query options like fuzzy search , spellcheck query input , facets , collapsing and suffix query support . 3. applications now i will list some interesting use cases for solr - in no particular order. to see how powerful and flexible this open source search server is. 3.1. drupal integration the drupal integration can be seen as generic use case to integrate solr into php projects. for the php integration you have the choice to either use the http interface for querying and retrieving xml or json. or to use the php solr client library . here is a screenshot of a typical faceted search in drupal : for more information about faceted search look into the wiki of solr . more php projects which integrates solr: open source typo3- solr module magento enterprise - solr module . the open source integration is out dated. oxid - solr module . no open source integration available. 3.2. 
hathi trust the hathi trust project is a nice example that proves solr's ability to search big digital libraries. to quote directly from the article : "... the index for our one million book index is over 200 gigabytes ... so we expect to end up with a two terabyte index for 10 million books" other examples for libraries: vufind - aims to replace opac internet archive national library of australia 3.3. auto suggestions mainly, there are two approaches to implement auto-suggestions (also called auto-completion) with solr: via facets or via ngramfilterfactory . to push it to the extreme you can use a lucene index entirely in ram. this approach is used in a large music shop in germany. live examples for auto suggestions: kaufda.de 3.4. spatial search applications when mentioning spatial search, people have geographical based applications in mind. with solr, this ordinary use case is attainable . some examples for this are : city search - city guides yellow pages kaufda.de spatial search can be useful in many different ways : for bioinformatics, fingerprints search, facial search, etc. (getting the fingerprint of a document is important for duplicate detection). the simplest approach is implemented in jetwick to reduce duplicate tweets, but this yields a performance of o(n) where n is the number of queried terms. this is okay for 10 or less terms, but it can get even better at o(1)! the idea is to use a special hash set to get all similar documents. this technique is called local sensitive hashing . read this nice paper about 'near similarity search and plagiarism analysis' for more information. 3.5. duckduckgo duckduckgo is made with open source and its "zero click" information is done with the help of solr using the dismax query handler: the index for that feature contains 18m documents and has a size of ~12gb. for this case had to tune solr: " i have two requirements that differ a bit from most sites with respect to solr: i generally only show one result, with sometimes a couple below if you click on them. therefore, it was really important that the first result is what people expected. false positives are really bad in 0-click, so i needed a way to not show anything if a match wasn't too relevant. i got around these by a) tweaking dismax and schema and b) adding my own relevancy filter on top that would re-order and not show anything in various situations. " all the rest is done with tuned open source products. to quote gabriel again: "the main results are a hybrid of a lot of things, including external apis, e.g. bing, wolframalpha, yahoo, my own indexes and negative indexes (spam removal), etc. there are a bunch of different types of data i'm working with. " check out the other cool features such as privacy or bang searches . 3.6. clustering support with carrot2 carrot2 is one of the "contributed plugins" of solr. with carrot2 you can support clustering : " clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. " see some research papers regarding clustering here . here is one visual example when applying clustering on the search "pannous" - our company : 3.7. near real time search solr isn't real time yet, but you can tune solr to the point where it becomes near real time, which means that the time ('real time latency') that a document takes to be searchable after it gets indexed is less than 60 seconds even if you need to update frequently. to make this work, you can setup two indices. 
one write-only index "w" for the indexer and one read-only index "r" for your application. index r refers to the same data directory of w, which has to be defined in the solrconfig.xml of r via: /pathto/indexw/data/ to make sure your users and the r index see the indexed documents of w, you have to trigger an empty commit every 60 seconds: wget -q http://localhost:port/solr/update?stream.body=%3ccommit/%3e -o /dev/null everytime such a commit is triggered a new searcher without any cache entries is created. this can harm performance for visitors hitting the empty cache directly after this commit, but you can fill the cache with static searches with the help of the newsearcher entry in your solrconfig.xml. additionally, the autowarmcount property needs to be tuned, which fills the cache with a newsearcher from old entries. also, take a look at the article 'scaling lucene and solr' , where experts explain in detail what to do with large indices (=> 'sharding') and what to do for high query volume (=> 'replicating'). 3.8. loggly = full text search in logs feeding log files into solr and searching them at near real-time shows that solr can handle massive amounts of data and queries the data quickly. i've setup a simple project where i'm doing similar things , but loggly has done a lot more to make the same task real-time and distributed. you'll need to keep the write index as small as possible otherwise commit time will increase too great. loggly creates a new solr index every 5 minutes and includes this when searching using the distributed capabilities of solr ! they are merging the cores to keep the number of indices small, but this is not as simple as it sounds. watch this video to get some details about their work. 3.9. solandra = solr + cassandra solandra combines solr and the distributed database cassandra , which was created by facebook for its inbox search and then open sourced. at the moment solandra is not intended for production use. there are still some bugs and the distributed limitations of solr apply to solandra too. tthe developers are working very hard to make solandra better. jetwick can now run via solandra just by changing the solrconfig.xml. solandra also has the advantages of being real-time (no optimize, no commit!) and distributed without any major setup involved. the same is true for solr cloud. 3.10. category browsing via facets solr provides facets , which make it easy to show the user some useful filter options like those shown in the "drupal integration" example. like i described earlier , it is even possible to browse through a deep category tree. the main advantage here is that the categories depend on the query. this way the user can further filter the search results with this category tree provided by you. here is an example where this feature is implemented for one of the biggest second hand stores in germany. a click on 'schauspieler' shows its sub-items: other shops: game-change 3.11. jetwick - open twitter search you may have noticed that twitter is using lucene under the hood . twitter has a very extreme use case: over 1,000 tweets per second, over 12,000 queries per second, but the real-time latency is under 10 seconds! however, the relevancy at that volume is often not that good in my opinion. twitter search often contains a lot of duplicates and noise. reducing this was one reason i created jetwick in my spare time. i'm mentioning jetwick here because it makes extreme use of facets which provides all the filters to the user. 
facets are used for the rss-alike feature (saved searches), the various filters like language and retweet-count on the left, and to get trending terms and links on the right: to make jetwick more scalable i'll need to decide which of the following distribution options to choose: use solr cloud with zookeeper use solandra move from solr to elasticsearch which is also based on apache lucene other examples with a lot of facets: cnet reviews - product reviews. electronics reviews, computer reviews & more. shopper.com - compare prices and shop for computers, cell phones, digital cameras & more. zappos - shoes and clothing. manta.com - find companies. connect with customers. 3.12. plaxo - online address management plaxo.com , which is now owned by comcast, hosts web addresses for more than 40 million people and offers smart search through the addresses - with the help of solr. plaxo is trying to get the latest 'social' information of your contacts through blog posts, tweets, etc. plaxo also tries to reduce duplicates . 3.13. replace fast or google search several users report that they have migrated from a commercial search solution like fast or google search appliance (gsa) to solr (or lucene). the reasons for that migration are different: fast drops linux support and google can make integration problems. the main reason for me is that solr isn't a black box —you can tweak the source code, maintain old versions and fix your bugs more quickly! 3.14. search application prototyping with the help of the already integrated velocity plugin and the data import handler it is possible to create an application prototype for your search within a few hours. the next version of solr makes the use of velocity easier. the gui is available via http://localhost:port/solr/browse if you are a ruby on rails user, you can take a look into flare. to learn more about search application prototyping, check out this video introduction and take a look at these slides. 3.15. solr as a whitelist imagine you are the new google and you have a lot of different types of data to display e.g. 'news', 'video', 'music', 'maps', 'shopping' and much more. some of those types can only be retrieved from some legacy systems and you only want to show the most appropriated types based on your business logic . e.g. a query which contains 'new york' should result in the selection of results from 'maps', but 'new yorker' should prefer results from the 'shopping' type. with solr you can set up such a whitelist-index that will help to decide which type is more important for the search query. for example if you get more or more relevant results for the 'shopping' type then you should prefer results from this type. without the whitelist-index - i.e. having all data in separate indices or systems, would make it nearly impossible to compare the relevancy. the whitelist-index can be used as illustrated in the next steps. 1. query the whitelist-index, 2. decide which data types to display, 3. query the sub-systems and 4. display results from the selected types only. 3.16. future solr is also useful for scientific applications, such as a dna search systems. i believe solr can also be used for completely different alphabets so that you can query nucleotide sequences - instead of words - to get the matching genes and determine which organism the sequence occurs in, something similar to blast . another idea you could harness would be to build a very personalized search. every user can drag and drop their websites of choice and query them afterwards. 
for example, often i only need stackoverflow, some wikis and some mailing lists with the expected results, but normal web search engines (google, bing, etc.) give me results that are too cluttered. my final idea for a future solr-based app could be a lucene/solr implementation of desktop search. solr's facets would be especially handy to quickly filter different sources (files, folders, bookmarks, man pages, ...). it would be a great way to wade through those extra messy desktops. 4. summary the next time you think about a problem, think about solr! even if you don't know java and even if you know nothing about search: solr should be in your toolbox. solr doesn't only offer professional full text search, it could also add valuable features to your application. some of them i covered in this article, but i'm sure there are still some exciting possibilities waiting for you!
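as a closing illustration, the query options from the querying section above (q=superman&fq=type:book&sort=price asc, plus a facet) look roughly like this through the solrj client; again just a sketch, assuming a local solr 1.4 instance and matching schema fields:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class QueryExample {

    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("superman");
        query.addFilterQuery("type:book");                 // fq=type:book
        query.addSortField("price", SolrQuery.ORDER.asc);  // sort=price asc
        query.setFacet(true);
        query.addFacetField("type");                       // facet counts, e.g. for category browsing

        QueryResponse response = server.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("title"));
        }
    }
}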
January 25, 2011
by Peter Karussell
· 146,882 Views
Mock Static Methods using Spring Aspects
I have been a Spring framework user for the last three years and I have really enjoyed working with Spring. One thing that I am seeing these days is the heavy use of AspectJ in most SpringSource products. Spring Roo is a RAD tool for Java developers which makes use of AspectJ ITDs for separate compilation units. Spring transaction management and exception translation are also done using AspectJ, and there are numerous other usages of AspectJ in Spring products. In this article, I am going to talk about another cool usage of AspectJ by Spring - mocking static methods.

These days most developers write unit test cases, and it is very common to use one of the mocking libraries like EasyMock or Mockito to mock the external dependencies of a class. Using any of these mocking libraries it is very easy to mock calls to another class's instance methods, but most of these mocking frameworks do not provide the facility to mock calls to static methods. Spring gives you the capability to mock static methods by using the Spring Aspects library. In order to use this feature you need to add the spring-aspects.jar dependency to your pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>

The next thing you need to do is to convert your project into an AspectJ project. If you are using Eclipse or STS (SpringSource Tool Suite) you can do that by right-clicking your project -> Configure -> Convert to AspectJ Project. STS has the AspectJ plugin by default; Eclipse users need to install the AspectJ plugin for the above to work. I would recommend using STS for developing Spring-based applications.

The two aspects and one annotation in spring-aspects.jar that are of interest are:

AbstractMethodMockingControl: an abstract aspect to enable mocking of methods picked out by a pointcut. All child aspects need to define the mockStaticsTestMethod() and methodToMock() pointcuts. The mockStaticsTestMethod() pointcut is used to indicate when mocking should be triggered, and the methodToMock() pointcut is used to define which method invocations to mock.

AnnotationDrivenStaticEntityMockingControl: the single implementation of the AbstractMethodMockingControl aspect which exists in spring-aspects.jar. It is an annotation-based aspect to use in a test build to enable mocking static methods on entity classes. In this aspect the mockStaticsTestMethod() pointcut defines that mocking should be triggered for classes marked with the @MockStaticEntityMethods annotation, and the methodToMock() pointcut defines that all the public static methods in classes marked with the @Entity annotation should be mocked.

MockStaticEntityMethods: an annotation to indicate a test class whose @Test methods should have static methods on entity classes mocked.

AnnotationDrivenStaticEntityMockingControl provides the facility to mock static methods of any class which is marked with the @Entity annotation. But usually we also need to mock static methods of classes other than those marked with @Entity. The only thing we need to do to make that work is to extend the AbstractMethodMockingControl aspect and provide definitions for the mockStaticsTestMethod() and methodToMock() pointcuts. For example, let's write an aspect which mocks all the public static methods of classes marked with the @Component annotation.

package com.shekhar.javalobby;

import org.springframework.mock.staticmock.AbstractMethodMockingControl;
import org.springframework.stereotype.Component;
import org.springframework.mock.staticmock.MockStaticEntityMethods;

public aspect AnnotationDrivenStaticComponentMockingControl extends AbstractMethodMockingControl {

    public static void playback() {
        AnnotationDrivenStaticComponentMockingControl.aspectOf().playbackInternal();
    }

    public static void expectReturn(Object retVal) {
        AnnotationDrivenStaticComponentMockingControl.aspectOf().expectReturnInternal(retVal);
    }

    public static void expectThrow(Throwable throwable) {
        AnnotationDrivenStaticComponentMockingControl.aspectOf().expectThrowInternal(throwable);
    }

    protected pointcut mockStaticsTestMethod() : execution(public * (@MockStaticEntityMethods *).*(..));

    protected pointcut methodToMock() : execution(public static * (@Component *).*(..));
}

The only difference between AnnotationDrivenStaticEntityMockingControl (which comes with spring-aspects.jar) and AnnotationDrivenStaticComponentMockingControl (the custom aspect we have written above) is in the methodToMock() pointcut. In methodToMock() we have specified that it should mock all the static methods in any class marked with the @Component annotation.

Now that we have written the custom aspect, let's test it. I have created a simple ExampleService with one static method. This is the method which we want to mock.

@Component
public class ExampleService implements Service {

    /**
     * Reads next record from input
     */
    public String getMessage() {
        return myName();
    }

    public static String myName() {
        return "shekhar";
    }
}

This class will return "shekhar" when the getMessage() method is called. Let's test this without mocking:

package com.shekhar.javalobby;

import org.junit.Assert;
import org.junit.Test;

public class ExampleConfigurationTests {

    private ExampleService service = new ExampleService();

    @Test
    public void testSimpleProperties() throws Exception {
        String myName = service.getMessage();
        Assert.assertEquals("shekhar", myName);
    }
}

This test will work fine. Now let's add mocking to this test class. There are two things that we need to do in our test:

We need to annotate our test with @MockStaticEntityMethods to indicate that static methods of @Component classes will be mocked. Please note that you are not required to use the @MockStaticEntityMethods annotation; you can create your own annotation and use it in the mockStaticsTestMethod() pointcut. So I could have created an annotation called @MockStaticComponentMethods and used that in the mockStaticsTestMethod() pointcut (a sketch of such an annotation appears at the end of this post), but I just reused the @MockStaticEntityMethods annotation.

In our test methods we need to first invoke the static method which we want to mock so that it gets recorded. Next we need to set our expectation, i.e. what should be returned from the mock, and finally we need to call the playback method to stop recording mock calls and enter the playback state.

To make it more concrete, let's apply mocking to the test above:

import org.junit.Assert;
import org.junit.Test;
import org.springframework.mock.staticmock.MockStaticEntityMethods;

@MockStaticEntityMethods
public class ExampleConfigurationTests {

    private ExampleService service = new ExampleService();

    @Test
    public void testSimpleProperties() throws Exception {
        ExampleService.myName();
        AnnotationDrivenStaticComponentMockingControl.expectReturn("shekhargulati");
        AnnotationDrivenStaticComponentMockingControl.playback();
        String myName = service.getMessage();
        Assert.assertEquals("shekhargulati", myName);
    }
}

As you can see, we annotated the test class with the @MockStaticEntityMethods annotation, and in the test method we first recorded the call (ExampleService.myName()), then set the expectation, then did the playback and finally called the actual code. In this way you can mock the static methods of a class.
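As mentioned above, here is a minimal sketch of what such a custom marker annotation could look like; the name @MockStaticComponentMethods is hypothetical and not part of spring-aspects.jar.

package com.shekhar.javalobby;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation for test classes whose @Test methods should have
// static methods of @Component classes mocked; reference it from the
// mockStaticsTestMethod() pointcut instead of @MockStaticEntityMethods.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface MockStaticComponentMethods {
}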
January 25, 2011
by Shekhar Gulati
· 22,113 Views · 1 Like
Java Generics Wildcard Capture - A Useful Thing to Know
Recently, I was writing a piece of code where I needed to write a copy factory for a class. A copy factory is a static factory that constructs a copy of the same type as the argument passed to the factory. The copy factory copies the state of the argument object into the new object, which makes sure that the newly constructed object is equal to the old one. A copy constructor is similar to a copy factory with just one difference - a copy constructor can only exist in the class containing the constructor, whereas a copy factory can exist in any other class as well. For example:

// Copy Factory
public static Field<?> getNewInstance(Field<?> field)

// Copy Constructor
public Field(Field<T> field)

I chose a static copy factory because I didn't have the source code for the class, and with a copy factory I can code to an interface instead of a class. The class for which I wanted to write the copy factory was similar to the one shown below:

public class Field<T> {

    private String name;
    private T value;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public T getValue() {
        return value;
    }

    public void setValue(T value) {
        this.value = value;
    }
}

The first implementation of the FieldUtils class that came to my mind was as shown below:

public static Field<?> copy(Field<?> field) {
    Field<?> objField = new Field<?>();  //1
    objField.setName(field.getName());   //2
    objField.setValue(field.getValue()); //3
    return objField;                     //4
}

The above code will not compile because of two compilation errors. The first compilation error is at line 1, because you can't create an instance of Field<?>. Field<?> means a Field of some unknown type, and when you have ? extends Something you can only get values out of it; you can't set values in it, except null. For an object to be useful you should be able to do both, so the compiler does not allow you to create such an instance. The second compilation error is at line number 3, where you get a cryptic error message like this:

The method setValue(capture#3-of ?) in the type Field is not applicable for the arguments (capture#4-of ?)

In simple terms this error message is saying that you are trying to set a wrong value in objField. But what if I had written the following method?

public static Field<?> copy(Field<?> field) {
    field.setName(field.getName());
    field.setValue(field.getValue());
    return field;
}

Will the above code compile? No. You will again get a similar error message to the one mentioned above. To fix this error we will write a private helper method which captures the wildcard and assigns it to a type variable. This technique is called wildcard capture. I read about it in the Java Generics and Collections book, which is a must-read for understanding Java generics. Wildcard capture works by type inference.

public static Field<?> copy(Field<?> field) {
    return copyHelper(field);
}

private static <T> Field<T> copyHelper(Field<T> field) {
    Field<T> objField = new Field<T>();
    objField.setName(field.getName());
    objField.setValue(field.getValue());
    return objField;
}

Wildcard capture is very useful when you work with wildcards, and knowing it will save you a lot of time.
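A short usage sketch (with hypothetical field values) showing that the helper's type inference lets copy accept a Field of any type:

public class FieldUtilsDemo {

    public static void main(String[] args) {
        Field<String> original = new Field<String>();
        original.setName("city");
        original.setValue("London");

        // The wildcard in copy() is captured by the helper's type variable T,
        // so the same factory works for Field<String>, Field<Integer>, and so on.
        Field<?> copied = FieldUtils.copy(original);
        System.out.println(copied.getName() + " = " + copied.getValue());
    }
}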
January 20, 2011
by Shekhar Gulati
· 52,182 Views · 1 Like
Migrating from JBoss 4 to JBoss 5
I have been using JBoss 4.2.3 for over a year now and really like it (although the clustering / JMS stuff seems WAY overcomplicated - needing 80 config files is not fun!). But anyway, it is time to upgrade to JBoss 5, and the path has been marred by many stops and starts and has been surprisingly difficult. In any event, I found the following links to be lifesavers and thought I would share them:

http://community.jboss.org/wiki/MigrationfromJBoss4.pdf
http://venugopaal.wordpress.com/2009/02/02/jboss405-to-jboss-5ga/
http://www.tikalk.com/java/migrating-your-application-jboss-4x-jboss-5x

The real kickers have to be:

1) Increased adherence to the Java spec that causes WARs / JARs to no longer deploy
2) Changed location of XML files
3) Changed XML filenames
4) Changed XML file contents

Man, they don't make it easy, do they? From http://softarc.blogspot.com/2011/01/migrating-from-jboss-4-to-jboss-5.html
January 14, 2011
by Frank Kelly
· 20,193 Views
Interview: Troy Giunipero, Author of NetBeans E-commerce Tutorial
Troy Giunipero (pictured, right) is a student at the University of Edinburgh studying toward an MSc in Computer Science. Formerly, he was one of the NetBeans Docs writers based in Prague, Czech Republic, where he spent most of his time writing Java web tutorials. In this interview, Troy introduces you to The NetBeans E-commerce Tutorial. This is a very detailed tutorial describing just about everything you need to know when creating an e-commerce web application in Java. It has received a lot of very positive feedback. Let's find out about the background of this tutorial and what Troy learnt in writing it. Hi Troy! During your time on the NetBeans team, you wrote a very large tutorial describing how to create an e-commerce site. How and why did you start writing it? Well, there’s a short answer and a long answer to this. The short answer is that I was lucky to take part in Sun’s SEED (Sun Engineering Enrichment and Development) program. I wanted to focus on technical aspects, so I based my curriculum on developing an e-commerce application using Java technologies. I documented my efforts and applied them toward deliverables for the IDE’s 6.8 and 6.9 releases, resulting in the 13-part NetBeans E-commerce Tutorial. The long answer is that I had previously been tasked with creating an e-commerce application for my degree project (I was studying toward a BSc in IT and Computing), and ran into loads of trouble trying to integrate the various technologies into a cohesive, functioning application. I was coming from a non-technical background and found there was a steep learning curve involved in web development. My work was fraught with problems which I can now attribute to poor time-management, and a lack of good, practical, hands-on learning resources. So in a way, working on the AffableBean project (this is the project used in the NetBeans E-commerce Tutorial) was a way for me to go back and attempt to do the whole thing right. With the tutorial, I had two goals in mind: one, I wanted to consolidate my understanding of everything by writing about it, and two, I wanted to help others avoid the problems and pitfalls that I’d earlier ran into by designing a piece of documentation that puts everything together. Can you run us through the basic parts and what they provide? Certainly. 
First I want to point out that there’s a live demo application (http://dot.netbeans.org:8080/AffableBean/) which I managed to get up and running with help from Honza Pirek from the NetBeans Web Team (thanks Honza!): The application is modeled on the well-known MVC architecture: The tutorial refers to the above diagram at various points, and covers a bunch of different concepts and technologies along the way, including: Project design and planning (unit 2) Designing a data model (using MySQL WorkBench) (unit 4) Forward-engineering the data model into a database schema (unit 4) Database connectivity (units 3, 4, 6) Servlet, JSP/EL and JSTL technologies (units 3, 5, 6) EJB 3 and JPA 2 technologies (unit 7), and transactional support (unit 9) Session management (i.e., for the shopping cart mechanism) (unit 8) Form validation and data conversion (unit 9) Multilingual support (unit 10) Security (i.e., using form-based authentication and encrypted communication) (unit 11) Load testing with JMeter (unit 12) Monitoring the application with the IDE’s Profiler (unit 12) Tips for deployment to a production server (units 12, 13) Also, the tutorial aims to provide ample background information on the whole “Java specifications” concept, with an introduction to the Java Community Process, how final releases include reference implementations, and how these relate to the tutorial application using the IDE’s bundled GlassFish server (units 1, 7). Finally, the tutorial is as much about the above concepts and technologies as it is about learning to make best use of the IDE. I really tried to squeeze as much IDE-centric information in there as possible. So for example you’ll find: An introduction to the IDE’s main windows and their functions (unit 3) A section dedicated to editor tips and tricks (unit 5), and abundant usage of keyboard shortcuts in steps throughout the tutorial Use of the debugger (unit 8) Special “tip boxes” that discuss IDE functionality that is sometimes difficult to fit into conventional documentation. For example, there are tips on using the IDE’s Template Manager (unit 5), GUI support for database tables (unit 6), Javadoc support (unit 8), and support for code templates (unit 9). Did you learn any new things yourself while writing it? Yes! Three things immediately come to mind: EJB 3 technology. Initially this was a big hurdle for me. Using EJB 3 effectively seems to be something of an art form. If you know what you’re doing and understand exactly how to use the EntityManager to handle persistence operations on a database, EJB lets you do some amazingly smart things with just a few lines of code. But there seems to be a lack of good free documentation online—especially since EJB 3 is a significant departure from EJB 2. Therefore, almost all of the tutorial’s information on EJB comes from the very excellent book, EJB in Action by Debu Panda and Reza Rahman. Interpreting the NetBeans Profiler. The final hands-on unit, Testing an Profiling, was the most difficult for me to write, primarily because I just wasn’t familiar with the Profiler at all. I spent an unhealthy amount of time just watching the Telemetry graph run against my JMeter test plan, which is only slightly more stimulating than watching water come to boil. That being said, I feel that by just examining the graphs and other windows over time, critical logical associations start to jump out at you after a while. Likewise with JMeter. Hopefully unit 12 was able to capture and relay some of these. 
How to search online for decent articles and learning materials. The old Sun Technical Articles site was a great resource. Many of the links in the See Also sections at the bottom of tutorial units were found by adding site:java.sun.com/developer/technicalArticles/ to a Google search. Also the official forums (found at forums.sun.com) became a good place for questions I couldn’t find ready answers to. I had both the Java EE 5 and 6 Tutorials bookmarked. And Marty Hall’s Core Servlets and JavaServer Pages became an invaluable resource for the first half of the tutorial. What are your personal favorite features of the technologies discussed in the tutorial? I particularly liked learning about session management—using the HttpSession object to carry user-specific data between requests, and working with JSP’s corresponding implicit objects in the front-end pages. Session management is a defining aspect for e-commerce applications, as they need to provide some sort of shopping cart mechanism... ...and so the Managing Sessions unit (unit 8) was a key chapter in the tutorial. It’s extremely useful to be able to suspend the debugger on a portion of code that includes session-scoped variables, then hover the mouse over a given variable in the editor to determine its current value. I used the debugger continuously during this phase, and so I went so far as to incorporate use of the debugger throughout the Managing Sessions unit. What kind of background does someone starting the tutorial need to have? Someone can come to the tutorial with little or absolutely no experience using NetBeans. I’ve tried to be particularly careful in providing clear and easy-to-follow instructions in this respect. But one would be best off having some background or knowledge in basic web technologies, and at least some exposure to relational databases. With this foundation, I think that the topics covered in the second half of the tutorial, like applying entity classes and session beans, language support and security, won’t seem too daunting. I’ve noticed that the vast majority of feedback that comes in relates to the first half of the tutorial, and I sometimes get the impression that people feel they need to follow the tutorial units consecutively. Not so. The units are 90% modular. In other words, if somebody just wants to run through the security unit (unit 11), they can do so by downloading the associated project snapshot, follow the setup instructions, and then just follow along without needing to even look at other parts of the tutorial. What will they be able to do at the end of it? Naturally, anybody who completes individual tutorial units will be able to apply the concepts and technologies to their own work. But anyone who completes the tutorial in its entirety will gain an insight into the development process as a whole, and I think will also get a certain confidence that comes with knowing how “all the pieces fit together”—from gathering customer requirements all the way to deployment of the completed app to a production server. They’ll also have gained a solid familiarity with the NetBeans IDE, and be in a good position to explore popular Java web frameworks that work on top of servlet technology or impose an MVC architecture on their own, such as JSF, Spring, Struts, or Wicket. Do you see any problems in the technologies discussed and what would be your suggestions for enhancements? Well there’s one thing that comes to mind. 
When I started working on this project, I was studying the Duke’s BookStore example from the Java EE 5 Tutorial. A wonderful example that demonstrates how to progressively implement the same application using various technologies and combinations thereof. So for example you start out with an all-servlet implementation, then move on to a JSP/servlet version. Then there’s a JSTL implementation and ultimately, a version using JavaServer Faces. It’s great learning material, but also terrifically outdated. Right around this time, Sun was gearing up for the big Java EE 6 release (Dec. 2009), and I was also trying to learn about the new upcoming technologies, namely CDI, JSF 2, and EJB 3, for my regular NetBeans documentation work. I was getting the definite sense that JSP and JSTL were slowly being pushed aside—in the case of JavaServer Faces, Facelets templating was the new page authoring technology. So really, the E-commerce Tutorial application has become a sort of EE 5/EE 6 hybrid by combining JSP/JSTL with EJB 3 and JPA 2. Now the problem I see from the perspective of a student trying to learn this stuff from scratch, is that the leap from basic servlet technology to a full-blown JSF/EJB/JPA solution is tremendous, and cannot readily be taught through a single tutorial. Naturally, others may disagree with me here. I’m not sure if there’s a solution other than to compensate by producing a lot of quality learning material that covers lots of different use-cases. I’d suggest that the E-commerce Tutorial puts one in a very advantageous position to begin learning about Java-based frameworks, such as GWT, Spring, and JSF, which is a natural course of action for people looking to get a job with this knowledge. Planning any more parts to the tutorial or a new one? No more parts. The E-commerce Tutorial is done. Upon committing the final installments and changes last November, I rejoiced. However, I’m still actively responding to feedback [the ‘Send Us Your Feedback’ links at the bottom of tutorials] and plan to maintain it indefinitely, so if anyone spots any typos, has questions or comments, recommendations for improvement, etc., please write in! :-)
January 9, 2011
by Geertjan Wielenga
· 29,863 Views
article thumbnail
Clojure: select-keys, select-values, and apply-values
Clojure provides the get and get-in functions for returning values from a map and the select-keys function for returning a new map of only the specified keys. Clojure doesn't provide a function that returns a list of values; however, it's very easy to create such a function (which I call select-values). Once you have the ability to select-values it becomes very easy to create a function that applies a function to the selected values (which I call apply-values). The select-keys function returns a map containing only the entries of the specified keys. The following (pasted) REPL session shows a few different select-keys behaviors. user=> (select-keys {:a 1 :b 2} [:a]) {:a 1} user=> (select-keys {:a 1 :b 2} [:a :b]) {:b 2, :a 1} user=> (select-keys {:a 1 :b 2} [:a :b :c]) {:b 2, :a 1} user=> (select-keys {:a 1 :b 2} []) {} user=> (select-keys {:a 1 :b 2} nil) {} The select-keys function is helpful on many occasions; however, sometimes you only care about selecting the values of certain keys in a map. A simple solution is to call select-keys and then vals. Below you can find the results of applying this idea. user=> (def select-values (comp vals select-keys)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b]) (2 1) user=> (select-values {:a 1 :b 2} [:a :b :c]) (2 1) user=> (select-values {:a 1 :b 2} []) nil user=> (select-values {:a 1 :b 2} nil) nil The select-values implementation from above may be sufficient for what you are doing, but there are two things worth noticing: in cases where you might be expecting an empty list you are seeing nil; and the values are not in the same order that the keys were specified in. Given that (standard) maps are unsorted, you can't be sure of the ordering of the values. (side-note: If you are concerned with microseconds, it's also been reported that select-keys is a bit slow/garbage heavy.) An alternative definition of select-values uses the reduce function, pulls the values by key and incrementally builds the (vector) result. user=> (defn select-values [map ks] (reduce #(conj %1 (map %2)) [] ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) [1] user=> (select-values {:a 1 :b 2} [:a :b]) [1 2] user=> (select-values {:a 1 :b 2} [:a :b :c]) [1 2 nil] user=> (select-values {:a 1 :b 2} []) [] user=> (select-values {:a 1 :b 2} nil) [] The new select-values function returns the values in order and returns an empty vector in the cases where previous examples returned nil, but we have a new problem: Keys specified that don't exist in the map are now included in the vector as nil. This issue is easily addressed by adding a call to the remove function. The implementation that includes removing nils can be found below. user=> (defn select-values [map ks] (remove nil? (reduce #(conj %1 (map %2)) [] ks))) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b]) (1 2) user=> (select-values {:a 1 :b 2} [:a :b :c]) (1 2) user=> (select-values {:a 1 :b 2} []) () user=> (select-values {:a 1 :b 2} nil) () There is no "correct" implementation for select-values. If you don't care about ordering and nil is a reasonable return value: the first implementation is the correct choice due to its concise definition. If you do care about ordering and performance: the second implementation might be the right choice. If you want something that follows the principle of least surprise: the third implementation is probably the right choice. 
You'll have to decide what's best for your context. In fact, here are a few more implementations that might be better based on your context. user=> (defn select-values [m ks] (map #(m %) ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) (1) user=> (select-values {:a 1 :b 2} [:a :b :c]) (1 2 nil) user=> (defn select-values [m ks] (reduce #(if-let [v (m %2)] (conj %1 v) %1) [] ks)) #'user/select-values user=> (select-values {:a 1 :b 2} [:a]) [1] user=> (select-values {:a 1 :b 2} [:a :b :c]) [1 2] Pulling values from a map is helpful, but it's generally not the end goal. If you find yourself pulling values from a map, it's likely that you're going to want to apply a function to the extracted values. With that in mind, I generally define an apply-values function that returns the result of applying a function to the values returned from specified keys. A good example of this is returning the total for a line item represented as a map. Given a map that specifies a line item costing $5 and having a quantity of 4, you can use (* price quantity) to determine the total price for the line item. Using our previously defined select-values function we can do the work ourselves, as the example below shows. user=> (let [[price quantity] (select-values {:price 5 :quantity 4 :upc 1123} [:price :quantity])] (* price quantity)) 20 The example above works perfectly well; however, applying a function to the values of a map seems like a fairly generic operation that can easily be extracted to its own function (the apply-values function). The example below shows the definition and usage of my definition of apply-values. user=> (defn apply-values [map f & ks] (apply f (select-values map ks))) #'user/apply-values user=> (apply-values {:price 5 :quantity 4 :upc 1123} * :price :quantity) 20 I find select-keys, select-values, & apply-values to be helpful when writing Clojure applications. If you find you need these functions, feel free to use them in your own code. However, you'll probably want to check the comments - I'm sure someone with more Clojure experience than I have will provide superior implementations. From http://blog.jayfields.com/2011/01/clojure-select-keys-select-values-and.html
January 6, 2011
by Jay Fields
· 15,438 Views
article thumbnail
A simple and intuitive approach to interface your database with Java
Introduction In recent years, I have experienced the same developer need again and again: the need for improved persistence support. After many years of experience with Java, I have grown tired of all the solutions that are "standard", "J2EE compliant", but in the end, just ever so complicated. I don't deny that there are many good ideas around, which have eventually brought about excellent tools, such as Hibernate, JPA/EJB3, iBatis, etc. But all of those tools seem to go in a single direction without ever questioning the idea behind them: Object-relational Mapping. So you end up using a high-performance database that costs $100k+ in licenses every year just to abstract it away behind a "standard" persistence layer. I wanted to go in a different direction and take the best of OR-Mapping (code generation, type safety, object oriented query construction, SQL dialect abstraction, etc) without denying the fact that, beneath it all, I'm running an RDBMS. That's right. R as in Relational. Read on about how jOOQ (Java Object Oriented Querying) succeeds in bringing "the relational to the object". Abstract Many companies and software projects seem to implement one of the following two approaches to interfacing Java with SQL: The very basic approach: Using JDBC directly or adding a home-grown abstraction on top of it. There is a lot of manual work associated with the creation, maintenance, and extension of the data layer code base. Developers can easily use the full functionality of the underlying database, but will always operate on a very low level, concatenating Strings all over the place. The very sophisticated approach: There is a lot of configuration and a steep learning curve associated with the introduction of sophisticated database abstraction layers, such as the ones created by Hibernate, JPA, iBatis, or even plain old EJB entity beans. While the generated objects and API's may allow for easy manipulation of data, the setup and maintenance of the abstraction layer may become very complex. Besides, these abstraction layers provide so much abstraction on top of SQL that SQL-experienced developers have to rethink the way they work. A different paradigm I tried to find a new solution addressing many issues that I think most developers face every day. With jOOQ - Java Object Oriented Querying, I want to embrace the following paradigm: SQL is a good thing. Many things can be expressed quite nicely in SQL. The relational data model is a good thing. It should not be abstracted by OR-Mapping. SQL has a structure and syntax. It should not be expressed using "low-level" String concatenation. Variable binding tends to be very complex when dealing with major queries. POJO's (or data transfer objects) are great when writing Java code manipulating database data. POJO's are a pain to write and maintain manually. Source code generation is the way to go. The database comes first. Then the code on top of it. Yes, you do have stored procedures and user defined types (UDT's) in your legacy database. Your database tool should support that. I think that these key ideas are useful for a very specific type of developer. That specific developer interfaces Java with huge legacy databases; knows SQL well and wants to use it extensively; doesn't want to learn any new language (HQL, JPQL, etc.); doesn't want to spend one minute fine-tuning some sophisticated XML configuration; wants little abstraction over SQL, because his software is tightly coupled with his database (something that I think the guys at Hibernate or JPA seem to have ignored); 
and needs a strong but light-weight library for database access, for instance to develop for mobile devices. How does jOOQ fit in this paradigm? Not only does jOOQ completely address the above paradigm, it does so quite elegantly. Let's say you have this database that models your bookstore. And you need to run a query selecting all books by authors born after 1920. You know how to do this in SQL: -- Select all books by authors born after 1920, named "Paulo" from a catalogue: SELECT * FROM t_author a JOIN t_book b ON a.id = b.author_id WHERE a.year_of_birth > 1920 AND a.first_name = 'Paulo' ORDER BY b.title The same query expressed with jOOQ-Objects // Instantiate your factory using a JDBC connection // and specify the SQL dialect you're using. Of course you can // have several factories in your application. Factory create = new Factory(connection, SQLDialect.MYSQL); // Create the query using generated, type-safe objects. You could // write even less code than that with static imports! SelectQuery q = create.selectQuery(); q.addFrom(TAuthor.T_AUTHOR); q.addJoin(TBook.T_BOOK, TAuthor.ID, TBook.AUTHOR_ID); // Note how you do not need to worry about variable binding. // jOOQ does that for you, dynamically q.addCompareCondition(TAuthor.YEAR_OF_BIRTH, 1920, Comparator.GREATER); // The AND operator and EQUALS comparator are implicit here q.addCompareCondition(TAuthor.FIRST_NAME, "Paulo"); q.addOrderBy(TBook.TITLE); The jOOQ query object model uses generated classes, such as TAuthor or TBook. Like many other code generation tools do, jOOQ will generate static final objects for the fields contained in each table. In this case, TAuthor holds a member called TAuthor.T_AUTHOR to represent the table itself, and members such as TAuthor.ID, TAuthor.YEAR_OF_BIRTH, etc to hold the table's fields. But you could also use the jOOQ DSL API to stay closer to SQL // Do it all "on one line". SelectQuery q = create.select() .from(T_AUTHOR) .join(T_BOOK).on(TAuthor.ID.equal(TBook.AUTHOR_ID)) .where(TAuthor.YEAR_OF_BIRTH.greaterThan(1920) .and(TAuthor.FIRST_NAME.equal("Paulo"))) .orderBy(TBook.TITLE).getQuery(); jOOQ ships with a DSL (Domain Specific Language) somewhat similar to LINQ that facilitates query creation. The strength of the DSL becomes obvious when you are using jOOQ constructs such as the decode function: // Create a case statement. Unfortunately "case" is a reserved word in Java // Hence the method is called DECODE after its related Oracle function Field nationality = create.decode() .when(TAuthor.FIRST_NAME.equal("Paulo"), "brazilian") .when(TAuthor.FIRST_NAME.equal("George"), "english") .otherwise("unknown"); // "else" is also a reserved word ;-) The above will render this SQL code: CASE WHEN T_AUTHOR.FIRST_NAME = 'Paulo' THEN 'brazilian' WHEN T_AUTHOR.FIRST_NAME = 'George' THEN 'english' ELSE 'unknown' END Use the DSL API when: You want your Java code to look like SQL You want your IDE to help you with auto-completion (you will not be able to write select .. order by .. where .. 
join or any of that stuff) Use the regular API when: You want to create your query step-by-step, creating query parts one-by-one You need to assemble your query from various places, passing the query around, adding new conditions and joins on the way In any case, all API's will construct the same underlying implementation object, and in many cases, you can combine the two approaches (a short sketch of this appears at the end of this article). Once you have established the query, execute it and fetch results // Execute the query and fetch the results q.execute(); Result result = q.getResult(); // Result is Iterable, so you can loop over the resulting records like this: for (Record record : result) { // Type safety assured with generics String firstName = record.getValue(TAuthor.FIRST_NAME); String lastName = record.getValue(TAuthor.LAST_NAME); String title = record.getValue(TBook.TITLE); Integer publishedIn = record.getValue(TBook.PUBLISHED_IN); System.out.println(title + " (published in " + publishedIn + ") by " + firstName + " " + lastName); } Or simply write for (Record record : q.fetch()) { // [...] } Fetch data from a single table and use jOOQ as a simple OR-Mapper // Similar query, but don't join books to authors. // Note the generic record type that is added to your query: SimpleSelectQuery q = create.select(T_AUTHOR) .where(TAuthor.YEAR_OF_BIRTH.greaterThan(1920) .and(TAuthor.FIRST_NAME.equal("Paulo"))) .orderBy(TAuthor.LAST_NAME).getQuery(); // When executing this query, the Result also holds a generic type: q.execute(); Result result = q.getResult(); for (TAuthorRecord record : result) { // With generated record classes, you can use generated getters and setters: String firstName = record.getFirstName(); String lastName = record.getLastName(); System.out.println("Author : " + firstName + " " + lastName + " wrote : "); // Use generated foreign key navigation methods for (TBookRecord book : record.getTBooks()) { System.out.println(" Book : " + book.getTitle()); } } jOOQ not only generates code to model your schema, but it also generates domain model classes to represent tuples in your schema. In the above example, you can see how selecting from the TAuthor.T_AUTHOR table will produce results containing well-defined TAuthorRecord types. These types hold getters and setters like any POJO, but also some more advanced OR-code, such as foreign key navigator methods like // Return all books for an author that are obtained through the // T_AUTHOR.ID = T_BOOK.AUTHOR_ID foreign key relationship public List getTBooks() Now, for true OR-mapping, you would probably prefer mature and established frameworks such as Hibernate or iBATIS. Don't panic. Better integration with Hibernate and JPA is on the feature roadmap. The goal of jOOQ is not to reimplement things that are already well-done, but to bring true SQL to Java. Execute CRUD operations with jOOQ as an OR-mapper // Create a new record and insert it into the database TBookRecord book = create.newRecord(T_BOOK); book.setTitle("My first book"); book.store(); // Update it with new values book.setPublishedIn(2010); book.store(); // Delete it book.delete(); Nothing new in the OR-mapping world. These ideas have been around since EJB entity beans or even before. It's still quite useful for simple purposes. Execute CRUD operations the way you're used to You don't need to go into that OR-mapping business. You can create your own INSERT, "INSERT SELECT", UPDATE, DELETE queries. 
Some examples: InsertQuery i = create.insertQuery(T_AUTHOR); i.addValue(TAuthor.FIRST_NAME, "Hermann"); i.addValue(TAuthor.LAST_NAME, "Hesse"); i.execute(); UpdateQuery u = create.updateQuery(T_AUTHOR); u.addValue(TAuthor.FIRST_NAME, "Hermie"); u.addCompareCondition(TAuthor.LAST_NAME.equal("Hesse")); u.execute(); // etc... Now for the advanced stuff Many tools can do similar stuff as what we have seen before. Especially Hibernate and JPA have a feature called criteria query, that provides all of the type-safety and query object building using DSL's while being based on a solid (but blown-up) underlying architecture. An important goal for jOOQ is to provide you with all (or at least: most) SQL features that you are missing in other frameworks but that you would like to use because you think SQL is a great thing but JDBC is too primitive for the year 2010, 2011, or whatever year we're in, when you're reading this. So, jOOQ comes along with aliasing, nested selects, unions and many other SQL features. Check out the following sections: Aliasing That's a very important feature. How could you have self-joins or in/exists clauses without aliasing? Let's say we have a "T_TREE" table with fields "ID", "PARENT_ID", and "NAME". If we want to find all parent/child NAME couples, we will need to execute a self-join on T_TREE. In SQL, this reads: SELECT parent.NAME parent_name, child.NAME child_name FROM T_TREE parent JOIN T_TREE child ON (parent.ID = child.PARENT_ID) No problem for jOOQ. We'll write: // Create table aliases Table parent = TTree.T_TREE.as("parent"); Table child = TTree.T_TREE.as("child"); // Create field aliases from aliased table Field parentName = parent.getField(TTree.NAME).as("parent_name"); Field childName = child.getField(TTree.NAME).as("child_name"); // Execute the above select Record record = create.select(parentName, childName) .from(parent) .join(child).on(parent.getField(TTree.ID).equal(child.getField(TTree.PARENT_ID))) .fetchAny(); // The aliased fields can be read from the record as in the simpler examples: record.getValue(parentName); Functionally, it is easy to see how this works. Look out for future releases of jOOQ for improvements in the DSL support of field and table aliasing IN clause The org.jooq.Field class provides many methods to construct conditions. In previous examples, we have seen how to create regular compare conditions with = < <= >= > != operators. Now Field also has a couple of methods to create IN conditions: // Create IN conditions with constant values that are bound to the // query via JDBC's '?' bind variable placeholders Condition in(T... values); Condition in(Collection values); Condition notIn(T... values); Condition notIn(Collection values); // Create IN conditions with a sub-select Condition in(QueryProvider query) Condition notIn(QueryProvider query) The constant set of values for IN conditions is an obvious feature. But the sub-select is quite nice: -- Select authors with books that are sold out SELECT * FROM T_AUTHOR WHERE T_AUTHOR.ID IN (SELECT DISTINCT T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.STATUS = 'SOLD OUT'); In jOOQ, this translates to create.select(T_AUTHOR) .where (TAuthor.ID.in(create.selectDistinct(TBook.AUTHOR_ID) .from(T_BOOK) .where(TBook.STATUS.equal(TBookStatus.SOLD_OUT)))); EXISTS clause Very similar statements can be expressed with the EXISTS clause. 
The above set of authors could also be obtained with this statement: -- Select authors with books that are sold out SELECT * FROM T_AUTHOR a WHERE EXISTS (SELECT 1 FROM T_BOOK WHERE T_BOOK.STATUS = 'SOLD OUT' AND T_BOOK.AUTHOR_ID = a.ID); In jOOQ (as of version 1.5.0), this translates to // Alias the author table Table a = T_AUTHOR.as("a"); // Use the aliased table in the select statement create.selectFrom(a) .where(create.exists(create.select(create.constant(1)) .from(T_BOOK) .where(TBook.STATUS.equal(TBookStatus.SOLD_OUT) .and(TBook.AUTHOR_ID.equal(a.getField(TAuthor.ID)))))); UNION clauses SQL knows of four types of "UNION operators": UNION UNION ALL EXCEPT INTERSECT All of these operators are supported by all types of select queries. So in order to write things like: SELECT TITLE FROM T_BOOK WHERE PUBLISHED_IN > 1945 UNION SELECT TITLE FROM T_BOOK WHERE AUTHOR_ID = 1 You can write the following jOOQ logic: create.select(TBook.TITLE).from(T_BOOK).where(TBook.PUBLISHED_IN.greaterThan(1945)).union( create.select(TBook.TITLE).from(T_BOOK).where(TBook.AUTHOR_ID.equal(1))); Of course, you can then again nest the union query in another one (but be careful to correctly use aliases): -- alias_38173 is an example of a generated alias, -- generated by jOOQ for union queries SELECT alias_38173.TITLE FROM ( SELECT T_BOOK.TITLE, T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.PUBLISHED_IN > 1945 UNION SELECT T_BOOK.TITLE, T_BOOK.AUTHOR_ID FROM T_BOOK WHERE T_BOOK.AUTHOR_ID = 1 ) alias_38173 ORDER BY alias_38173.AUTHOR_ID DESC In jOOQ: Select union = create.select(TBook.TITLE, TBook.AUTHOR_ID).from(T_BOOK).where(TBook.PUBLISHED_IN.greaterThan(1945)).union( create.select(TBook.TITLE, TBook.AUTHOR_ID).from(T_BOOK).where(TBook.AUTHOR_ID.equal(1))); create.select(union.getField(TBook.TITLE)) .from(union) .orderBy(union.getField(TBook.AUTHOR_ID).descending()); Note that a UNION query will automatically generate an alias if you use it as a nested table. In order to nest this query correctly, you need to get the aliased field from the query as seen in the example above. Other, non-standard SQL features See more examples about stored procedures, UDT's, enums, etc on https://sourceforge.net/apps/trac/jooq/wiki/Examples Summary jOOQ brings the relational world to Java without trying to cover up its origins. jOOQ is relational. And object oriented. Just in a different way. Try it for yourself and I would be very glad for any feedback you may have. Find jOOQ on http://jooq.sourceforge.net Cheers Lukas Eder
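As referenced earlier in the section on the two query APIs: here is a short sketch of what combining the DSL and the query-object style might look like. It is an illustration of the idea rather than official jOOQ documentation; it only reuses calls shown above (select()/from()/join()/where()/orderBy()/getQuery(), addCompareCondition(), execute(), getResult()), and the optional first-name filter is an invented example value.

// Build the static part of the query with the DSL...
SelectQuery q = create.select()
    .from(T_AUTHOR)
    .join(T_BOOK).on(TAuthor.ID.equal(TBook.AUTHOR_ID))
    .where(TAuthor.YEAR_OF_BIRTH.greaterThan(1920))
    .orderBy(TBook.TITLE)
    .getQuery();

// ...then add conditions that are only known at runtime with the regular API,
// for instance an optional filter coming from a search form (illustrative value)
String firstNameFilter = "Paulo";
if (firstNameFilter != null) {
    q.addCompareCondition(TAuthor.FIRST_NAME, firstNameFilter);
}

// Execute and iterate exactly as shown above
q.execute();
Result result = q.getResult();
for (Record record : result) {
    System.out.println(record.getValue(TBook.TITLE));
}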
December 14, 2010
by Lukas Eder
· 3,660 Views
article thumbnail
Using Sphinx and Java to Implement Free Text Search
As promised, I am going to provide an article on how we can use Sphinx with Java to perform a full text search. I will begin the article with an introduction to Sphinx. Introduction to Sphinx Databases are continually growing, sometimes holding around 100M records, and need an external solution for full text search. I have picked Sphinx, an open source full-text search engine distributed under GPL version 2, to perform a full text search on such a huge amount of data. Generally, it's a standalone search engine meant to provide fast, size-efficient and relevant full-text search functions to other applications, and it is very much compatible with SQL databases. So my example will be based on the MySQL database; as we cannot produce millions of rows to evaluate the real power of Sphinx, we will work with a small amount of data, and I think that should not be a problem. Here are a few of Sphinx's unique features: high indexing speed (up to 10 MB/sec on modern CPUs) high search speed (avg query is under 0.1 sec on 2-4 GB text collections) high scalability (up to 100 GB of text, up to 100M documents on a single CPU) provides distributed searching capabilities provides searching from within MySQL through a pluggable storage engine supports boolean, phrase, and word proximity queries supports multiple full-text fields per document (up to 32 by default) supports multiple additional attributes per document (i.e. groups, timestamps, etc) supports MySQL natively (MyISAM and InnoDB tables are both supported) The important features that led me to adopt it for full text search are the Java API, which integrates easily with a web application, and the considerably high indexing and searching speed, with an average of 4-10 MB/sec and 20-30 ms/query at 5 GB / 3.5M docs (on Wikipedia data). Sphinx Terms & How It Works The first principal part of Sphinx is the indexer. It is solely responsible for gathering the data that will be searchable. From the Sphinx point of view, the data it indexes is a set of structured documents, each of which has the same set of fields. This is biased towards SQL, where each row corresponds to a document, and each column to a field. Sphinx builds a special data structure optimized for our queries from the data provided. This structure is called an index; the process of building the index from data is called indexing, and the element of Sphinx which carries out these tasks is called the indexer. The indexer can be executed either from a regular script or from the command-line interface. Sphinx documents are equivalent to records in a DB. A document is a set of text fields and numeric attributes plus a unique ID – similar to a row in a DB. The set of fields and attributes is constant for an index – similar to a table in a DB. Fields are searchable in full-text queries; attributes may be used for filtering, sorting and grouping. searchd is the second principal tool in the Sphinx package. It is the part of the system which actually handles searches; it functions as a server and is responsible for receiving queries, processing them and returning a dataset back to the different APIs for client applications. Unlike the indexer, searchd is not designed to be run from a regular script or command-line call, but instead either as a daemon started from init.d (on Unix/Linux type systems) or as a service (on Windows-type systems). I am going to focus on the Windows environment, so later I will show you how we can install Sphinx on Windows as a service. Finally, search is one of the helper tools within the Sphinx package. 
Whereas searchd is responsible for searches in a server-type environment, search is aimed at testing the index quickly without building a framework to make the connection to the server and process its response. This will only be used for testing Sphinx from the command line; for the application's requirements, the searchd service will be used, querying the index we build from the MySQL data. Installation on Windows So now we come to the part of installing Sphinx on Windows: Download Sphinx from the official Sphinx download site, i.e. http://sphinxsearch.com (I downloaded Win32 release binaries with MySQL support: sphinx-0.9.9-win32.zip) Unzip the file to some folder; I unzipped it to C:\devel\sphinx-0.9.9-win32 and added the bin directory to the Windows PATH variable. Well, Sphinx is installed. Nice, simple, easy. Later I will show how to set up indexes and search. Sample Application By now I guess the goal of this article is clear to you; let's move ahead and define our sample application. We all use the Address Book to search for people by using their name or e-mail address when we want to immediately address an e-mail message to a specific person, people, or distribution list. We also search for people by using other basic information, such as e-mail alias, office location, and telephone number. I think most of the people on this planet are quite familiar with this kind of search, so let's base our sample database schema on the Outlook address book. Most of the fields are mapped from Microsoft Outlook; the only additional column is date of joining, so that we can filter our queries based on the joining dates of the employees. The example that I am going to put forth will use Sphinx to search for a particular address entry using free text search, meaning the user is free to type in anything. Here is our search screen; the DOJ (date of joining) search parameter is optional. The screen is self-explanatory, so let's move ahead and define our database. As Sphinx works well with MySQL, and MySQL is also free, let's create our DB scripts around a MySQL database (those who wish to install MySQL can download it from http://www.mysql.com) Let's create our sample database 'addressbook' mysql> create database addressbook; Query OK, 1 row affected (0.03 sec) mysql> use addressbook; Database changed Note: The fields defined in the following tables are for the purpose of learning only and may not contain the complete set of fields that the Microsoft address book or any similar software may provide. mysql> CREATE TABLE addressbook ( Id int(11) NOT NULL, FirstName varchar(30) NOT NULL, LastName varchar(30) NOT NULL, OfficeId int(11) DEFAULT NULL, Title varchar(20) DEFAULT NULL, Alias varchar(20) NOT NULL, Email varchar(50) NOT NULL, DOJ date NOT NULL, PhoneNo varchar(20) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; mysql> CREATE TABLE CompanyLocations ( Id int(11) NOT NULL, Location varchar(60) NOT NULL, Country varchar(20) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; It's time to put some dummy data into the table, so let's fill our tables. Our virtual company 'gogs.it' has six offices across India and Singapore as defined in the following insert script. 
mysql> insert into CompanyLocations (Id, Location, Country) VALUES (1, 'Tower One, Harbour Front, Singapore', 'SG'); insert into CompanyLocations (Id, Location, Country) VALUES (2, 'DLF Phase 3, Gurgaon, India', 'IN'); insert into CompanyLocations (Id, Location, Country) VALUES (3, 'Hiranandani Gardens, Powai, Mumbai, India', 'IN'); insert into CompanyLocations (Id, Location, Country) VALUES (4, 'Hinjwadi, Pune, India', 'IN'); insert into CompanyLocations (Id, Location, Country) VALUES (5, 'Toll Post, Nagrota, Jammu, India', 'IN'); insert into CompanyLocations (Id, Location, Country) VALUES (6, 'Bani (Kathua), India', 'IN'); Now comes the real stuff... The data sphinx is going to index, let's populate that as well...wooooo mysql> INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (1,'Aabheer','Kumar',1,'Mr','u534','Kumar.Aabheer@gogs.it','2008-9-3', '+911234599990'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (2,'Aadarsh','Gupta',6,'Mr','u668','Gupta.Aadarsh@gogs.it','2007-2-23','+911234599991'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (3,'Aachman','Singh',5,'Mr','u2766','Singh.Aachman@gogs.it','2006-12-18','+911234599992'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (4,'Aadesh','Shrivastav',5,'Mr','u3198','Shrivastav.Aadesh@gogs.it','2007-11-23','+911234599993'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (5,'Aadi','manav',1,'Mr','u2686','manav.Aadi@gogs.it','2010-7-20','+911234599994'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (6,'Aadidev','singh',4,'Mr','u572','singh.Aadidev@gogs.it','2010-8-18','+911234599995'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (7,'Aafreen','sheikh',4,'Smt','u1092','sheikh.Aafreen@gogs.it','2007-7-11','+911234599996'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (8,'Aakar','Sherpa',5,'Mr','u1420','Sherpa.Aakar@gogs.it','2009-10-3','+911234599997'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (9,'Aakash','Singh',4,'Mrs','u2884','Singh.Aakash@gogs.it','2008-6-11','+911234599998'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (10,'Aalap','Singhania',4,'Mrs','u609','Singhania.Aalap@gogs.it','2010-10-8','+911234599999'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (11,'Aandaleeb','mahajan',1,'Smt','u131','mahajan.Aandaleeb@gogs.it','2010-10-21','+911234580001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (12,'Mamata','kumari',5,'Sh','u2519','kumari.Mamata@gogs.it','2009-6-12','+911234580002'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (13,'Mamta','sharma',6,'Smt','u4123','sharma.Mamta@gogs.it','2009-2-8','+911234580003'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (14,'Manali','singh',6,'Mr','u1078','singh.Manali@gogs.it','2008-6-14','+911234580004'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(15,'Manda','saxena',1,'Mrs','u196','saxena.Manda@gogs.it','2010-9-4','+911234580005'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (16,'Salila','shetty',3,'Miss','u157','shetty.Salila@gogs.it','2009-11-15','+911234580006'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (17,'Salima','happy',3,'Mrs','u3445','happy.Salima@gogs.it','2006-7-14','+911234580007'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (18,'Salma','haik',5,'Sh','u4621','haik.Salma@gogs.it','2008-6-23','+911234580008'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (19,'Samita','patil',3,'Smt','u3156','patil.Samita@gogs.it','2006-6-7','+911234580009'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (20,'Sameena','sheikh',5,'Mrs','u952','sheikh.Sameena@gogs.it','2008-8-13','+911234580010'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (21,'Ranita','gupta',5,'Mrs','u2664','gupta.Ranita@gogs.it','2008-10-20','+911234580011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (22,'Ranjana','sharma',1,'Sh','u3085','sharma.Ranjana@gogs.it','2010-6-21','+911234580012'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (23,'Ranjini','singh',6,'Mrs','u4200','singh.Ranjini@gogs.it','2007-4-13','+911234580013'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (24,'Ranjita','vyapari',2,'Smt','u1109','vyapari.Ranjita@gogs.it','2008-1-22','+911234580014'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (25,'Rashi','gupta',6,'Mrs','u3492','gupta.Rashi@gogs.it','2006-2-2','+911234580015'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (26,'Rashmi','sehgal',3,'Mr','u3248','sehgal.Rashmi@gogs.it','2008-9-9','+911234580016'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (27,'Rashmika','sexy',1,'Mrs','u4599','sexy.Rashmika@gogs.it','2009-3-12','+911234580017'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (28,'Rasika','dulari',3,'Smt','u2089','dulari.Rasika@gogs.it','2009-1-24','+911234580018'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (29,'Dilber','lover',6,'Mr','u4241','lover.Dilber@gogs.it','2007-10-11','+911234580019'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (30,'Dilshad','happy',1,'Mr','u1564','happy.Dilshad@gogs.it','2007-4-8','+911234580020'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (31,'Dipali','lights',5,'Sh','u1127','lights.Dipali@gogs.it','2006-11-1','+911234580021'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (32,'Dipika','lamp',1,'Sh','u2271','lamp.Dipika@gogs.it','2010-12-17','+911234580022'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (33,'Dipti','brightness',5,'Smt','u422','brightness.Dipti@gogs.it','2010-9-25','+911234580023'); INSERT 
INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (34,'Disha','singh',3,'Sh','u4604','singh.Disha@gogs.it','2006-5-2','+911234580024'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (35,'Maadhav','Krishna',1,'Miss','u2561','Krishna.Maadhav@gogs.it','2007-11-6','+911234580025'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (36,'Maagh','month',5,'Miss','u874','month.Maagh@gogs.it','2008-5-8','+911234580026'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (37,'Maahir','Skilled',4,'Mr','u3372','Skilled.Maahir@gogs.it','2007-8-4','+911234580027'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (38,'Maalolan','Ahobilam',5,'Mrs','u3498','Ahobilam.Maalolan@gogs.it','2007-7-9','+911234580028'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (39,'Maandhata','King',1,'Smt','u2089','King.Maandhata@gogs.it','2009-9-3','+911234580029'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (40,'Maaran','Brave',2,'Miss','u4020','Brave.Maaran@gogs.it','2008-4-5','+9112345606001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (41,'Maari','Rain',2,'Sh','u3593','Rain.Maari@gogs.it','2007-12-5','+9112345606002'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (42,'Madan','Cupid',4,'Mrs','u795','Cupid.Madan@gogs.it','2007-11-11','+9112345606003'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (43,'Madangopal','Krishna',3,'Sh','u438','Krishna.Madangopal@gogs.it','2007-2-19','+9112345606004'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (44,'sahil','gogna',1,'Sh','u2273','gogna.sahil@gogs.it','2007-10-7','+9112345606005'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (45,'nikhil','gogna',2,'Mr','u1240','gogna.nikhil@gogs.it','2009-9-14','+9112345606006'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (46,'amit','gogna',5,'Sh','u3879','gogna.amit@gogs.it','2006-2-8','+9112345606007'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (47,'krishan','gogna',4,'Miss','u3632','gogna.krishan@gogs.it','2010-9-20','+9112345606008'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (48,'anil','kashyap',4,'Smt','u3939','kashyap.anil@gogs.it','2010-3-15','+9112345606009'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (49,'sunil','kashyap',5,'Mrs','u3493','kashyap.sunil@gogs.it','2008-3-16','+9112345606010'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (50,'sandy','singh',6,'Mrs','u4691','singh.sandy@gogs.it','2009-6-2','+9112345606011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (51,'vishal','kapoor',3,'Mr','u1087','kapoor.vishal@gogs.it','2010-5-13','+9112345606012'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(52,'bala','ji',5,'Mrs','u4762','ji.bala@gogs.it','2007-8-9','+9112345606013'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (53,'karan','sarin',4,'Miss','u3030','sarin.karan@gogs.it','2008-4-8','+9112345606014'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (54,'abhishek','kumar',4,'Miss','u1093','kumar.abhishek@gogs.it','2008-12-21','+9112345605001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (55,'babu','the',1,'Miss','u1055','the.babu@gogs.it','2008-7-2','+9112345506001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (56,'sandeep','gainda',3,'Miss','u1320','gainda.sandeep@gogs.it','2010-5-14','+9112345606301'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (57,'dheeraj','kumar',3,'Miss','u3685','kumar.dheeraj@gogs.it','2007-10-14','+9112345606091'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (58,'dharmendra','chauhan',1,'Smt','u3235','chauhan.dharmendra@gogs.it','2008-8-1','+9112345806001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (59,'max','alan',3,'Smt','u3465','alan.max@gogs.it','2009-5-5','+9112345608011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (60,'hidayat','khan',3,'Smt','u958','khan.hidayat@gogs.it','2007-11-18','+911234599101'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (61,'himnashu','singh',4,'Miss','u2027','singh.himnashu@gogs.it','2008-3-2','+911234599102'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (62,'dinesh','kumar',6,'Sh','u3233','kumar.dinesh@gogs.it','2008-5-9','+911234599103'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (63,'toshi','prakash',1,'Mr','u3766','prakash.toshi@gogs.it','2010-9-17','+911234599104'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (64,'niti','puri',3,'Mr','u3575','puri.niti@gogs.it','2009-11-15','+911234599105'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (65,'pawan','tikki',3,'Sh','u3919','tikki.pawan@gogs.it','2006-3-19','+911234599106'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (66,'gaurav','sharma',2,'Sh','u413','sharma.gaurav@gogs.it','2010-4-2','+911234599107'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (67,'himanshu','verma',2,'Mrs','u4732','verma.himanshu@gogs.it','2009-3-20','+911234599108'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (68,'priyanshu','verma',3,'Sh','u183','verma.priyanshu@gogs.it','2010-8-12','+911234599109'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (69,'nitika','luthra',2,'Mrs','u4259','luthra.nitika@gogs.it','2010-7-12','+911234599110'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (70,'neeru','gogna',2,'Sh','u1633','gogna.neeru@gogs.it','2010-6-23','+91532110000'); INSERT INTO 
AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (71,'bindu','gupta',1,'Sh','u1859','gupta.bindu@gogs.it','2006-11-10','+91532110001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (72,'gurleen','bakshi',5,'Miss','u1423','bakshi.gurleen@gogs.it','2007-7-1','+91532110003'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (73,'rahul','gupta',3,'Sh','u1223','gupta.rahul@gogs.it','2009-8-11','+91532110004'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (74,'jagdish','salgotra',3,'Mr','u12','salgotra.jagdish@gogs.it','2008-5-19','+91532110005'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (75,'vikas','sharma',3,'Smt','u465','sharma.vikas@gogs.it','2006-6-2','+91532110006'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (76,'poonam','mahendra',2,'Sh','u1744','mahendra.poonam@gogs.it','2009-12-2','+91532110007'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (77,'pooja','kulkarni',3,'Mrs','u1903','kulkarni.pooja@gogs.it','2008-10-6','+91532110008'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (78,'priya','mahajan',6,'Sh','u4205','mahajan.priya@gogs.it','2010-8-5','+91532110009'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (79,'manoj','zerger',1,'Mrs','u3369','zerger.manoj@gogs.it','2009-12-4','+91532110010'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (80,'mohan','master',5,'Mr','u2841','master.mohan@gogs.it','2010-10-7','+91532110011'); Please note that above employee data is just a data *only data* I created using a small java programme using random number generators and reading some names file, so you may find titles getting messed up :( We next create a procedure that we will use from java to fetch records that we just inserted. DROP PROCEDURE IF EXISTS search_address_book; CREATE PROCEDURE search_address_book(IN address_ids VARCHAR(1000) ) BEGIN DECLARE search_address_query VARCHAR(2000) DEFAULT ''; SET address_ids = CONCAT('\'', REPLACE(address_ids, ',', '\',\''), '\''); SET search_address_query = CONCAT(search_address_query, ' select ab.Id as Id , ab.FirstName as FName, ab.LastName as LName, cl.Location as Location, ab.Title as Title, ab.Alias as Alias, ab.Email as Email, ab.DOJ as DOJ, ab.PhoneNo as PhoneNo ' ); SET search_address_query = CONCAT(search_address_query, ' from AddressBook ab left join CompanyLocations cl on ab.OfficeId=cl.Id '); SET search_address_query = CONCAT(search_address_query, ' where ab.id IN (', address_ids ,') '); SET @statement = search_address_query; PREPARE dynquery FROM @statement; EXECUTE dynquery; DEALLOCATE PREPARE dynquery; END; # To get records for ids 1, 6 and 7, we run following commands: call search_address_book('1,6,7'); Configuring Sphinx It turns out that it was not terribly difficult to setup sphinx, but I had a hard time finding instructions on the web, so I'll post my steps here. 
By default Sphinx looks for a 'sphinx.conf' configuration file that defines the indexes and other settings; let's create and define the source and index for our sample application in addressbook.conf (read between the lines) ############################################################################# ## data source definition ############################################################################# source addressBookSource { ## SQL settings for 'mysql' ## type = mysql # some straightforward parameters for SQL source types sql_host = localhost sql_user = root sql_pass = root sql_db = addressbook sql_port = 3306 # optional, default is 3306 # pre-query, executed before the main fetch query sql_query_pre = SET NAMES utf8 # main document fetch query, integer document ID field MUST be the first selected column sql_query = \ select ab.Id as Id , ab.FirstName as FName, ab.LastName as LName, cl.Location as Location, \ ab.Title as Title, ab.Alias as Alias, ab.Email as Email, UNIX_TIMESTAMP(ab.DOJ) as DOJ, ab.PhoneNo as PhoneNo \ from AddressBook ab left join CompanyLocations cl on ab.OfficeId=cl.Id sql_attr_timestamp = DOJ # document info query, ONLY for CLI search (i.e. testing and debugging), optional, default is empty; must contain the $id macro and must fetch the document by that id sql_query_info = SELECT * FROM AddressBook WHERE id=$id } ############################################################################# ## index definition ############################################################################# # local index example, this is an index which is stored locally in the filesystem index addressBookIndex { # document source(s) to index source = addressBookSource # index files path and file name, without extension, make sure you have this folder path = C:\devel\sphinx-0.9.9-win32\data\addressBookIndex # document attribute values (docinfo) storage mode docinfo = extern # memory locking for cached data (.spa and .spi), to prevent swapping mlock = 0 morphology = none # make sure this file exists exceptions = C:\devel\sphinx-0.9.9-win32\data\exceptions.txt enable_star = 1 } ############################################################################# ## indexer settings ############################################################################# indexer { # memory limit, in bytes, kilobytes (16384K) or megabytes (256M) # optional, default is 32M, max is 2047M, recommended is 256M to 1024M mem_limit = 32M # maximum IO calls per second (for I/O throttling) # optional, default is 0 (unlimited) # # max_iops = 40 # maximum IO call size, bytes (for I/O throttling) # optional, default is 0 (unlimited) # # max_iosize = 1048576 # maximum xmlpipe2 field length, bytes # optional, default is 2M # # max_xmlpipe2_field = 4M # write buffer size, bytes # several (currently up to 4) buffers will be allocated # write buffers are allocated in addition to mem_limit # optional, default is 1M # # write_buffer = 1M } ############################################################################# ## searchd settings ############################################################################# searchd { # hostname, port, or hostname:port, or /unix/socket/path to listen on listen = 9312 # log file, searchd run info is logged here # optional, default is 'searchd.log' log = C:\devel\sphinx-0.9.9-win32\data\log\searchd.log # query log file, all search queries are logged here # optional, default is empty (do not log queries) query_log = C:\devel\sphinx-0.9.9-win32\data\log\query.log # client read timeout, seconds # optional, default is 5 
read_timeout = 5 # request timeout, seconds # optional, default is 5 minutes client_timeout = 300 # maximum amount of children to fork (concurrent searches to run) # optional, default is 0 (unlimited) max_children = 30 # PID file, searchd process ID file name # mandatory pid_file = C:\devel\sphinx-0.9.9-win32\data\log\searchd.pid # max amount of matches the daemon ever keeps in RAM, per-index # WARNING, THERE'S ALSO PER-QUERY LIMIT, SEE SetLimits() API CALL # default is 1000 (just like Google) max_matches = 1000 # seamless rotate, prevents rotate stalls if precaching huge datasets # optional, default is 1 seamless_rotate = 1 # whether to forcibly preopen all indexes on startup # optional, default is 0 (do not preopen) preopen_indexes = 0 } # --eof-- Once the configuration is done, it's time to index our SQL data; the command to use is 'indexer', as shown below. C:\devel\sphinx-0.9.9-win32\bin>indexer.exe --all --config C:\devel\sphinx-0.9.9-win32\addressbook.conf CONSOLE: Sphinx 0.9.9-release (r2117) Copyright (c) 2001-2009, Andrew Aksyonoff using config file 'C:\devel\sphinx-0.9.9-win32\addressbook.conf'... indexing index 'addressBookIndex'... collected 80 docs, 0.0 MB sorted 0.0 Mhits, 100.0% done total 80 docs, 5514 bytes total 0.057 sec, 96386 bytes/sec, 1398.43 docs/sec total 2 reads, 0.000 sec, 3.5 kb/call avg, 0.0 msec/call avg total 7 writes, 0.000 sec, 2.5 kb/call avg, 0.0 msec/call avg Note: As I mentioned earlier, Sphinx creates one document for each row; as we had 80 rows in the database, a total of 80 docs are created. The time taken is also very small; believe me, I tried with half a million rows and it took around 3-4 seconds :) cool, isn't it? Once the index is up, let's try to search a few records; the utility command to perform a search is 'search'. Ok Sphinx maharaj*, please search for the employee whose alias is u4732 C:\devel\sphinx-0.9.9-win32\bin>search.exe --config C:\devel\sphinx-0.9.9-win32\addressbook.conf u4732 CONSOLE: Sphinx 0.9.9-release (r2117) Copyright (c) 2001-2009, Andrew Aksyonoff using config file 'C:\devel\sphinx-0.9.9-win32\addressbook.conf'... index 'addressBookIndex': query 'u4732 ': returned 1 matches of 1 total in 0.001 sec displaying matches: 1. document=67, weight=1, doj=Fri Mar 20 00:00:00 2009 Id=67 FirstName=himanshu LastName=verma OfficeId=2 Title=Mrs Alias=u4732 Email=verma.himanshu@gogs.it DOJ=2009-03-20 PhoneNo=+911234599108 words: 1. 'u4732': 1 documents, 1 hits As you can see above, this is a unique record for Himanshu. Note: You see a lot of information for the result; this is because of the following line in our configuration file sql_query_info = SELECT * FROM AddressBook WHERE id=$id If you want to see fewer columns, you need to change sql_query_info in the configuration file. Let's try another search: Sphinx maharaj*, please tell me which rows have gurleen or toshi in them. C:\devel\sphinx-0.9.9-win32\bin>search.exe --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --any toshi gurleen CONSOLE: displaying matches: 1. document=63, weight=2, doj=Fri Sep 17 00:00:00 2010 Id=63 FirstName=toshi LastName=prakash OfficeId=1 Title=Mr Alias=u3766 Email=prakash.toshi@gogs.it DOJ=2010-09-17 PhoneNo=+911234599104 2. document=72, weight=2, doj=Sun Jul 01 00:00:00 2007 Id=72 FirstName=gurleen LastName=bakshi OfficeId=5 Title=Miss Alias=u1423 Email=bakshi.gurleen@gogs.it DOJ=2007-07-01 PhoneNo=+91532110003 Exactly two records were returned and this is what we were expecting. 
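The same lookup can also be issued from Java once searchd is running (we install it as a Windows service in the next section). The minimal sketch below is mine, not part of the original walk-through; it only reuses the SphinxClient calls that appear in the SphinxInstance class later in this article (the constructor, SetMatchMode, Query, and the matches/docId fields), and it assumes the host, port and index name from the configuration above.

// Minimal sketch: query the 'addressBookIndex' from Java.
// SphinxClient, SphinxResult, SphinxMatch and SphinxException come from the
// sphinxapi Java client bundled with Sphinx (imported later in this article
// as it.gogs.sphinx.api.*). The method name findByAlias is illustrative.
static void findByAlias() throws SphinxException {
    SphinxClient client = new SphinxClient("localhost", 9312);
    // extended matching mode, the same mode the web application uses later
    client.SetMatchMode(SphinxClient.SPH_MATCH_EXTENDED2);
    SphinxResult result = client.Query("@Alias u4732", "addressBookIndex", "alias lookup");
    for (SphinxMatch match : result.matches) {
        // match.docId corresponds to the Id column of the AddressBook table
        System.out.println("Matched document id: " + match.docId);
    }
}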
The following special operators and modifiers can be used when using the extended matching mode: operator OR: nikhil | sahil operator NOT: hello -sandy hello !sandy field search operator: @Email gogna.sahil@gogna.it For a complete set of search features, I advise you to go through the http://sphinxsearch.com/docs/manual-0.9.9.html#searching link. Sphinx as Windows Service Now our main aim is to use Sphinx with the Java API, so let's move towards that now. Before Java can utilize the true power of Sphinx, we need to start 'searchd' as a Windows service so that our Java program can connect to the Sphinx search engine. Let's install Sphinx as a Windows service so that our Java program can use this daemon to query the index that we just created; the command is: C:\devel\sphinx-0.9.9-win32\bin>searchd.exe --install --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --servicename SphinxSearch --port 9312 CONSOLE: Sphinx 0.9.9-release (r2117) Copyright (c) 2001-2009, Andrew Aksyonoff Installing service... Service 'SphinxSearch' installed succesfully. Well, now Sphinx is ready to serve us on port 9312 Note: If you try to install Sphinx without admin rights, you may get the following error message. C:\devel\sphinx-0.9.9-win32\bin>searchd.exe --install --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --servicename SphinxSearch --port 9312 CONSOLE: Installing service... FATAL: OpenSCManager() failed: code=5, error=Access is denied. Once done you can start the service as: c:\>sc start SphinxSearch (or alternatively from the Services screen; run 'services.msc' from the Windows Run dialog) If for some reason you want to delete the service, use c:\>sc delete SphinxSearch Let's create an adapter to fetch data from the database. package it.gogs.sphinx.util; import it.gogs.sphinx.AddressBoook; import it.gogs.sphinx.exception.AddressBookBizException; import it.gogs.sphinx.exception.AddressBookTechnicalException; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.SQLException; import java.util.ArrayList; import java.util.List; import org.apache.log4j.Logger; /** * Adapter to fetch data from the database. * * @author Munish Gogna * */ public class AddressBookAdapter { private static Logger logger = Logger.getLogger(AddressBookAdapter.class); private AddressBookAdapter() { // use in static way.. 
} private static Connection getConnection() throws AddressBookTechnicalException { String userName = "root"; String password = "root"; String url = "jdbc:mysql://localhost/addressbook"; try { Class.forName("com.mysql.jdbc.Driver").newInstance(); return DriverManager.getConnection(url, userName, password); } catch (Exception e) { throw new AddressBookTechnicalException("could not get connection"); } } public static List getAddressBookList(List addressIds) throws AddressBookTechnicalException, AddressBookBizException { List addressBoookList = new ArrayList(); if (addressIds == null || addressIds.size() == 0){ logger.error("AddressIds was null or empty, returning empty list"); return addressBoookList; } Connection connection = null; CallableStatement callableStatement = null; try { connection = getConnection(); callableStatement = connection.prepareCall("{ call search_address_book(?)}"); callableStatement.setString(1, Utils.toCommaString(addressIds)); callableStatement.execute(); ResultSet resultSet = callableStatement.getResultSet(); prepareResults(resultSet, addressBoookList); connection.close(); } catch (SQLException e) { logger.error("Problem connecting MYSQL - " + e.getMessage()); throw new AddressBookTechnicalException(e.getMessage()); } catch (AddressBookTechnicalException e) { logger.error("Problem connecting MYSQL - " + e.getMessage()); throw e; } finally{ if(connection != null){ try { connection.close(); } catch (SQLException e) { logger.error("Problem closing conection - " + e.getMessage()); e.printStackTrace(); } } } return addressBoookList; } private static void prepareResults(ResultSet resultSet, List addressBoookList) throws SQLException { AddressBoook addressBoook; while (resultSet.next()) { addressBoook = new AddressBoook(); addressBoook.setAlias(resultSet.getString("Alias")); addressBoook.setEmail(resultSet.getString("Email")); addressBoook.setfName(resultSet.getString("FName")); addressBoook.setlName(resultSet.getString("LName")); addressBoook.setOfficeLocation(resultSet.getString("Location")); addressBoook.setPhoneNo(resultSet.getString("PhoneNo")); addressBoook.setTitle(resultSet.getString("Title")); addressBoook.setDateOfJoining(resultSet.getDate("DOJ")); addressBoook.setId(resultSet.getLong("Id")); addressBoookList.add(addressBoook); } } } Next we create the SphinxInstance that will parse the keywords and date range and provide us a list of Ids that matches the search. package it.gogs.sphinx.util; import it.gogs.sphinx.DateRange; import it.gogs.sphinx.SearchCriteria; import it.gogs.sphinx.api.SphinxClient; import it.gogs.sphinx.api.SphinxException; import it.gogs.sphinx.api.SphinxMatch; import it.gogs.sphinx.api.SphinxResult; import it.gogs.sphinx.exception.AddressBookBizException; import java.util.ArrayList; import java.util.Date; import java.util.List; import org.apache.log4j.Logger; /** * Instance that will parse our free text and provide the results. 
* * Note: Make sure that 'searchd' is up and running before you use this class * @author Munish Gogna * */ public class SphinxInstance { private static String SPHINX_HOST = "localhost"; private static String SPHINX_INDEX = "addressBookIndex"; private static int SPHINX_PORT = 9312; private static SphinxClient sphinxClient; private static Logger logger = Logger.getLogger(SphinxInstance.class); static { sphinxClient = new SphinxClient(SPHINX_HOST, SPHINX_PORT); } public static List getAddressBookIds(SearchCriteria criteria) throws AddressBookBizException, SphinxException { List addressIdsList = new ArrayList(); try { if (Utils.isNull(criteria)) { logger.error("criteria is null"); throw new AddressBookBizException("criteria is null"); } if (Utils.isNull(criteria.getKeywords())) { logger.error("keyword is a required field"); throw new AddressBookBizException("keyword is a required field"); } DateRange dateRange = criteria.getDateRage(); if (!Utils.isNull(dateRange)) { if (Utils.isDateRangeValid(dateRange)) { // this is to filter results based on joining dates if they are provided sphinxClient.SetFilterRange("DOJ", getTimeInSeconds(dateRange.getFromDate()), getTimeInSeconds(dateRange.getToDate()), false); } else { logger.error(" fromDate/toDate should not be empty and 'fromDate' should be less than equal to 'toDate'"); throw new AddressBookBizException("fromDate/toDate should not be empty and 'fromDate' should be less than equal to 'toDate'"); } } sphinxClient.SetMatchMode(SphinxClient.SPH_MATCH_EXTENDED2); sphinxClient.SetSortMode(SphinxClient.SPH_SORT_RELEVANCE, ""); SphinxResult result = sphinxClient.Query(buildSearchQuery(criteria), SPHINX_INDEX, "buidling query for address book search"); SphinxMatch[] matches = result.matches; for (SphinxMatch match : matches) { addressIdsList.add(String.valueOf(match.docId)); } } catch (SphinxException e) { throw e; } catch (AddressBookBizException e) { throw e; } logger.info("Total record(s):" + addressIdsList.size()); return addressIdsList; } private static long getTimeInSeconds(Date time) { return time.getTime()/1000; } private static String buildSearchQuery(SearchCriteria criteria) throws AddressBookBizException { String keywords[] = criteria.getKeywords().split(" "); StringBuilder searchFor = new StringBuilder(); for (String key : keywords) { if (!Utils.isEmpty(key)) { searchFor.append(key); if (searchFor.length() > 1) { searchFor.append("*|*"); } } } searchFor.delete(searchFor.lastIndexOf("|*"), searchFor.length()); StringBuilder queryBuilder = new StringBuilder(); String query = searchFor.toString(); queryBuilder.append("@FName *" + query + " | "); queryBuilder.append("@LName *" + query + " | "); queryBuilder.append("@Title *" + query + " | "); queryBuilder.append("@Location *"+ query + " | "); queryBuilder.append("@Alias *" + query + " | "); queryBuilder.append("@Email *" + query + " | "); queryBuilder.append("@PhoneNo *" + query); logger.info("Sphinx Query: " + queryBuilder.toString()); return queryBuilder.toString(); } } Here is the interface that I will expose to the outside world (in my future article I will expose this interface as Web Service) import it.gogs.sphinx.AddressBoook; import it.gogs.sphinx.SearchCriteria; import it.gogs.sphinx.api.SphinxException; import it.gogs.sphinx.exception.AddressBookBizException; import it.gogs.sphinx.exception.AddressBookTechnicalException; import java.util.List; /** * * @author Munish Gogna * */ public interface AddressBook { /** * Returns the list of AddressBook objects based on search criteria. 
* * @param criteria * @throws AddressBookTechnicalException * @throws AddressBookBizException * @throws SphinxException */ public List getAddressBookList(SearchCriteria criteria) throws AddressBookTechnicalException, AddressBookBizException, SphinxException; } and here is the implementation class for the same. package it.gogs.sphinx.addressbook.impl; import java.util.List; import it.gogs.sphinx.AddressBoook; import it.gogs.sphinx.SearchCriteria; import it.gogs.sphinx.addressbook.AddressBook; import it.gogs.sphinx.api.SphinxException; import it.gogs.sphinx.exception.AddressBookBizException; import it.gogs.sphinx.exception.AddressBookTechnicalException; import it.gogs.sphinx.util.AddressBookAdapter; import it.gogs.sphinx.util.SphinxInstance; /** * Implementation for our Address Book example * * @author Munish Gogna * */ public class AddressBookImpl implements AddressBook{ public List getAddressBookList(SearchCriteria criteria) throws AddressBookTechnicalException, AddressBookBizException, SphinxException { List addressIds= SphinxInstance.getAddressBookIds(criteria); return AddressBookAdapter.getAddressBookList(addressIds); } } ok so far so good, let's run some tests now ............ package it.gogs.sphinx.test; import java.util.Calendar; import java.util.GregorianCalendar; import java.util.List; import it.gogs.sphinx.AddressBoook; import it.gogs.sphinx.DateRange; import it.gogs.sphinx.SearchCriteria; import it.gogs.sphinx.addressbook.AddressBook; import it.gogs.sphinx.addressbook.impl.AddressBookImpl; import it.gogs.sphinx.api.SphinxException; import it.gogs.sphinx.exception.AddressBookBizException; import it.gogs.sphinx.exception.AddressBookTechnicalException; import junit.framework.TestCase; /** * * @author Munish Gogna * */ public class AddressBookTest extends TestCase { private AddressBook addressBook; @Override protected void setUp() throws Exception { super.setUp(); addressBook = new AddressBookImpl(); } @Override protected void tearDown() throws Exception { super.tearDown(); } /** this should be a unique record for Himanshu */ public void test_search_for_himanshu() throws Exception { SearchCriteria criteria = new SearchCriteria(); // remember the first 'search' example?? criteria.setKeywords("u4732"); List addressList = addressBook.getAddressBookList(criteria); assertTrue(addressList.size() == 1); assertTrue("expecting himanshu here", "himanshu".equals(addressList.get(0).getfName())); } /** only two employees have name gurleen or toshi */ public void test_search_for_gurleen_or_toshi() throws Exception { SearchCriteria criteria = new SearchCriteria(); // remember the second 'search' example?? 
criteria.setKeywords("gurleen toshi"); List addressList = addressBook.getAddressBookList(criteria); assertTrue(addressList.size() == 2); assertTrue("expecting toshi here", "toshi".equals(addressList.get(0).getfName())); assertTrue("expecting gurleen here", "gurleen".equals(addressList.get(1).getfName())); } /** there are 16 people from jammu location */ public void test_search_for_people_from_jammu_location() throws Exception { SearchCriteria criteria = new SearchCriteria(); criteria.setKeywords("jammu"); List addressList = addressBook.getAddressBookList(criteria); assertTrue(addressList.size() == 16); } /** only Aalap, Manda and nitika are having title as Mrs and joined in 2010 */ public void test_joined_in_2010_with_title_Mrs() throws Exception { DateRange dateRange = new DateRange(); GregorianCalendar calendar1 = new GregorianCalendar(); calendar1.set(Calendar.YEAR, 2010); calendar1.set(Calendar.MONTH, Calendar.JANUARY); calendar1.set(Calendar.DAY_OF_MONTH, 1); dateRange.setFromDate(calendar1.getTime()); GregorianCalendar calendar2 = new GregorianCalendar(); calendar2.set(Calendar.YEAR, 2010); calendar2.set(Calendar.MONTH, Calendar.DECEMBER); calendar2.set(Calendar.DAY_OF_MONTH, 31); dateRange.setToDate(calendar2.getTime()); SearchCriteria criteria = new SearchCriteria(); criteria.setKeywords("Mrs"); criteria.setDateRage(dateRange); List addressList = addressBook.getAddressBookList(criteria); assertTrue("expecting 3 records here", addressList.size() == 3); } /** should get a business exception here */ public void test_without_specifying_keywords(){ SearchCriteria criteria = new SearchCriteria(); //criteria.setKeywords("Mrs"); try { addressBook.getAddressBookList(criteria); } catch (Exception e) { assertTrue(e instanceof AddressBookBizException); assertTrue(e.getMessage().indexOf("keyword is a required field") >-1); } } } How we update the Index once database changes? For these kinds of requirements, we can set up two sources and two indexes, with one "main" index for the data which only changes rarely (if ever), and one "delta" for the new documents. First Time data will go in the "main" index and the newly inserted address book entries will go into "delta". Delta index could then be reindexed very frequently, and the documents can be made available to search in a matter of minutes. Also one thing to take from this article is once 'searchd' daemon is running we can't index the data in normal way,we have to use --rotate option in such cases. For some applications where there is a timely batch update for the data, we can configure some cron job to reindex our documents in Sphinx as shown below. C:\devel\sphinx-0.9.9-win32\bin>indexer.exe --all --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --rotate Capsule We asked Sphinx to provide us the Document Ids corresponding to our search parameters and then we used those Ids to fire database query. In case the data we want to return is included in Index (DOJ attribute for example in our case) we can skip the database portion, so choose wisely how much information (attributes) you want to include while you index your sql data. Well that's all ... it's time to say good bye. Take good care of your health and don't forget to vote, its a must :) - Munish Gogna
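To make the main+delta scheme described above concrete, a minimal sphinx.conf sketch might look like the following. It assumes the main source is named addressBook (feeding the addressBookIndex built earlier); the sph_counter helper table and all the delta names are illustrative, not part of the original setup.

source addressBookDelta : addressBook
{
    # pick up only rows added since the last full rebuild of the main index;
    # sph_counter is a one-row helper table you update after each main rebuild
    sql_query = SELECT * FROM AddressBook WHERE id > (SELECT max_id FROM sph_counter WHERE counter_id = 1)
}

index addressBookDeltaIndex : addressBookIndex
{
    source = addressBookDelta
    path   = C:\devel\sphinx-0.9.9-win32\data\addressbookdelta
}

The delta index can then be rebuilt every few minutes (with --rotate, as noted above) while the main index is rebuilt far less often.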
December 7, 2010
by Munish Gogna
· 37,600 Views · 2 Likes
article thumbnail
A Closer Look at JUnit Categories
JUnit 4.8 introduced Categories: a mechanism to label and group tests, giving developers the option to include or exclude groups (or categories.) This post presents a brief overview of JUnit categories and some unexpected behavior I have found while using them. 1. Quick Introduction The following example shows how to use categories: (adapted from JUnit’s release notes) public interface FastTests { /* category marker */ } public interface SlowTests { /* category marker */ } public class A { @Category(SlowTests.class) @Test public void a() {} } @Category(FastTests.class}) public class B { @Test public void b() {} } @RunWith(Categories.class) @IncludeCategory(SlowTests.class) @ExcludeCategory(FastTests.class) @SuiteClasses({ A.class, B.class }) public class SlowTestSuite {} Lines 1, 2: we define two categories, FastTests and SlowTests. JUnit categories can be defined as classes or interfaces. Since a category acts like a label or marker, my intuition tells me to use interfaces. Line 5: we use the annotation @org.junit.experimental.categories.Category to label test classes and test methods with one or more categories. Lines 6, 9: test methods and test classes can be marked as belonging to one or more categories of tests. Labeling a test class with a category automatically includes all its test methods in such category. Lines 14 to 18: currently programmatic test suites (line 17) are the only way to specify which test categories (line 14) should be included (line 15) or excluded (line 16) when the suite is executed. I find this approach (especially the way test classes need to included in the suite) too verbose and not so flexible. Hopefully Ant, Maven and IDEs will provide support for categories (with a simpler configuration) in the very near future. Note: I recently discovered ClasspathSuite, a project that simplifies the creation of programmatic JUnit test suites. For example, we can specify we want to include in a test suite all tests whose names end with “UnitTest.” 2. Category Subtyping Categories also support subtyping. Let’s say we have the category IntegrationTests that extends SlowTests: public interface IntegrationTests extends SlowTests {} Any test class or test method labeled with the category IntegrationTests is also part of the category SlowTests. To be honest, I don’t know how handy category subtyping could be. I’ll need to experiment with it more to have an opinion. 3. Categories and Test Inheritance 3a. Method-level Categories JUnit behaves as expected when test inheritance is combined with method-level categories. For example: public class D { @Category(GuiTest.class) @Test public void d() {} } public class E extends D { @Category(GuiTest.class) @Test public void e() {} } @RunWith(Categories.class) @IncludeCategory(GuiTest.class) @SuiteClasses(E.class) public class TestSuite {} As I expected, when running TestSuite, test methods d and e are executed (both methods belong to the GuiTest category and E inherits method d from superclass D.) Nice! 3b. Class-level Categories On the other hand, unless I’m missing something, I think I found some strange behavior in JUnit in this scenario. Consider the following classes: @Category(GuiTest.class) public class A { @Test public void a() {} } public class B extends A { @Test public void b() {} } @RunWith(Categories.class) @IncludeCategory(GuiTest.class) @SuiteClasses(B.class) public class TestSuite {} As we can see, TestSuite should execute the tests in B that belong to the category GuiTest. 
I was expecting TestSuite to execute test method a, even though B is not marked as a GuiTest. Here is my reasoning: test method a belongs to the category GuiTest because test class A is labeled with such category test class B is an A and it inherits test method a Therefore, TestSuite should execute test method a. But it doesn’t! Here is a screenshot of the results I get (click to see full size.) There are two ways to fix this issue, depending on what test methods we want to actually run: Label class B with GuiTest. In this case, both methods, a and b, will be executed. Label method a with GuiTest. In this case, only method a will be executed. (I’ll be posting a question regarding this issue in the JUnit mailing list shortly.) 4. Categories vs. TestNG Groups (You saw this one coming, didn’t you?) Categories (or groups) have been part of TestNG for long time. Unlike JUnit’s, TestNG’s groups are defined as simple strings, not as classes or interfaces. As a static typing lover, I was pretty happy with JUnit categories. By using an IDE, we could safely rename a category or look for usages of a category within a project. Even though my observation was correct, I was missing one important point: all this works great as long as your test suite is written in Java. In the real world, I’d like to define a test suite in either Ant or Maven (or Gradle, or Rake.) In this scenario, having categories as Java types does not bring any benefit. In fact, I suspect it would be very verbose and error-prone to specify the fully-qualified name of a category in a build script. Renaming a category now would be limited to a text-based “search and replace.” Ant and Maven really need to provide a way to specify JUnit categories, clever enough to be fool-proof. As you may expect, I prefer the simplicity and pragmatism of TestNG’s groups. Update: my good friend (and creator of the TestNG framework,) Cédric, reminded me that we can use regular expressions to include or exclude groups in a test suite (details here.) This is really powerful! 5. My Usage of Categories I’m not using JUnit categories in my test suites yet. I started to look into JUnit categories because I wasn’t completely happy with the way we recognized GUI tests in FEST. We recognize test methods or test classes as “GUI tests” if they have been annotated with the @GUITest (provided by FEST.) When a “GUI test” fails, FEST automatically takes a screenshot of the desktop and includes it in the JUnit or TestNG HTML report. The problem is, our @GUITest annotation is duplicating the functionality of JUnit categories. To solve this issue, I created a JUnit extension that recognizes test methods or test classes as “GUI tests” if they belong to the GuiTest category. At this moment GuiTest is an interface provided by FEST, but I’m thinking about letting users specify their own GuiTest category as well. I also refactored this functionality out of the Swing testing module, expecting to reuse it once I implement a JavaFX testing module :) You can find the FEST code that deals with JUnit categories at github. 6. Conclusion Having the ability to label and group tests via categories is really a great feature. I still have some reservations about the practicality of defining categories as Java types, the lack of support for this feature from Ant and Maven (not JUnit’s fault,) and the unexpected behavior I noticed when combining class-level categories and test inheritance. On the brighter side, categories are still an experimental, non-final feature. 
I’m sure we'll see many improvements in future JUnit releases :) Feedback is always welcome. From http://alexruiz.developerblogs.com/?p=1711
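Coming back to the class-level inheritance surprise in section 3b: the first workaround (labeling class B with GuiTest) is a one-line change. A minimal sketch, reusing the A, B and GuiTest types from the example above:

@Category(GuiTest.class)
public class B extends A {
    @Test
    public void b() {}
}

// With B itself labeled GuiTest, the suite runs both the inherited a() and b().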
December 2, 2010
by Alex Ruiz
· 35,533 Views
article thumbnail
Maven Profile Best Practices
Maven profiles, like chainsaws, are a valuable tool, with whose power you can easily get carried away, wielding them upon problems to which they are unsuited. Whilst you're unlikely to sever a leg misusing Maven profiles, I thought it worthwhile to share some suggestions about when and when not to use them. These three best practices are all born from real-world mishaps: the build must pass when no profile has been activated; never use activeByDefault; use profiles to manage build-time variables, not run-time variables and not (with rare exceptions) alternative versions of your artifact. I'll expand upon these recommendations in a moment. First, though, let's have a brief round-up of what Maven profiles are and do. Maven Profiles 101 A Maven profile is a sub-set of POM declarations that you can activate or deactivate according to some condition. When activated, they override the definitions in the corresponding standard tags of the POM. One way to activate a profile is to simply launch Maven with a -P flag followed by the desired profile name(s), but they can also be activated automatically according to a range of contextual conditions: JDK version, OS name and version, presence or absence of a specific file or property. The standard example is when you want certain declarations to take effect automatically under Windows and others under Linux. Almost all the tags that can be placed directly in a POM can also be enclosed within a profile tag. The easiest place to read up further about the basics is the Build Profiles chapter of Sonatype's Maven book. It's freely available, readable, and explains the motivation behind profiles: making the build portable across different environments. The build must pass when no profile has been activated (Thanks to for this observation.) Why? Good practice is to minimise the effort required to make a successful build. This isn't hard to achieve with Maven, and there's no excuse for a simple mvn clean package not to work. A maintainer coming to the project will not immediately know that profile wibblewibble has to be activated for the build to succeed. Don't make her waste time finding it out. How to achieve it It can be achieved simply by providing sensible defaults in the main POM sections, which will be overridden if a profile is activated. Never use activeByDefault Why not? This flag activates the profile if no other profile is activated. Consequently, it will fail to activate the profile if any other profile is activated. This seems like a simple rule which would be hard to misunderstand, but in fact it's surprisingly easy to be fooled by its behaviour. When you run a multimodule build, the activeByDefault flag will fail to operate when any profile is activated, even if the profile is not defined in the module where the activeByDefault flag occurs. (So if you've got a default profile in your persistence module, and a skinny war profile in your web module... when you build the whole project, activating the skinny war profile because you don't want JARs duplicated between WAR and EAR, you'll find your persistence layer is missing something.) activeByDefault automates profile activation, which is a good thing; it activates implicitly, which is less good; and it has unexpected behaviour, which is thoroughly bad. By all means activate your profiles automatically, but do it explicitly and automatically, with a clearly defined rule. How to avoid it There's another, less documented way to achieve what activeByDefault aims to achieve.
You can activate a profile in the absence of some property, by giving the profile an activation property whose name is prefixed with an exclamation mark: !foo.bar. This will activate the profile "nofoobar" whenever the property foo.bar is not defined (a sketch of this pattern follows at the end of the article). Define that same property in some other profile: nofoobar will automatically become active whenever the other is not. This is admittedly more verbose than activeByDefault, but it's more powerful and, most importantly, surprise-free. Use profiles to adapt to build-time context, not run-time context, and not (with rare exceptions) to produce alternative versions of your artifact Profiles, in a nutshell, allow you to have multiple builds with a single POM. You can use this ability in two ways: Adapt the build to variable circumstances (developer's machine or CI server; with or without integration tests) whilst still producing the same final artifact, or Produce variant artifacts. We can further divide the second option into: structural variants, where the executable code in the variants is different, and variants which vary only in the value taken by some variable (such as a database connection parameter). If you need to vary the value of some variable at run-time, profiles are typically not the best way to achieve this. Producing structural variants is a rarer requirement -- it can happen if you need to target multiple platforms, such as JDK 1.4 and JDK 1.5 -- but it, too, is not recommended by the Maven people, and profiles are not the best way of achieving it. The most common case where profiles seem like a good solution is when you need different database connection parameters for development, test and production environments. It is tempting to meet this requirement by combining profiles with Maven's resource filtering capability to set variables in the deliverable artifact's configuration files (e.g. Spring context). This is a bad idea. Why? It's indirect: the point at which a variable's value is determined is far upstream from the point at which it takes effect. It makes work for the software's maintainers, who will need to retrace the chain of events in reverse. It's error prone: when there are multiple variants of the same artifact floating around, it's easy to generate or use the wrong one by accident. You can only generate one of the variants per build, since the profiles are mutually exclusive. Therefore you will not be able to use the Maven release plugin if you need release versions of each variant (which you typically will). It's against Maven convention, which is to produce a single artifact per project (plus secondary artifacts such as documentation). It slows down feedback: changing the variable's value requires a rebuild. If you configured it at run-time you would only need to restart the application (and perhaps not even that). One should always aim for rapid feedback. Profiles are there to help you ensure your project will build in a variety of environments: a Windows developer's machine and a CI server, for instance. They weren't intended to help you build variant artifacts from the same project, nor to inject run-time configuration into your project. How to achieve it If you need to get variable runtime configuration into your project, there are alternatives: Use JNDI for your database connections. Your project only contains the resource name of the datasource, which never changes. You configure the appropriate database parameters in the JNDI resource on the server. Use system properties: Spring, for example, will pick these up when attempting to resolve variables in its configuration.
Define a standard mechanism for reading values from a configuration file that resides outside the project. For example, you could specify the path to a properties file in a system property. Structural variants are harder to achieve, and I confess I have no first-hand experience with them. I recommend you read this explanation of how to do them and why they're a bad idea, and if you still want to do them, take the option of multiple JAR plugin or assembly plugin executions, rather than profiles. At least that way, you'll be able to use the release plugin to generate all your artifacts in one build, rather than a single one at a time. Further reading Profiles chapter from the Sonatype Maven book. Deploying to multiple environments (prod, test, dev): Stackoverflow.com discussion; see the first and top-rated answer. Short of creating a specific project for the run-time configuration, you could simply use run-time parameters such as system properties. Creating multiple artifacts from one project: How to Create Two JARs from One Project (…and why you shouldn’t) by Tim O'Brien of Sonatype (the Maven people) Blog post explaining the same technique Maven best practices (not specifically about profiles): http://mindthegab.com/2010/10/21/boost-your-maven-build-with-best-practices/ http://blog.tallan.com/2010/09/16/maven-best-practices/ This article is a completely reworked version of a post from my blog.
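As referenced above, a minimal sketch of the property-absence activation pattern might look like the following. The profile ids and the foo.bar property follow the article's example; everything else is illustrative.

<profiles>
  <profile>
    <id>nofoobar</id>
    <activation>
      <property>
        <!-- active whenever foo.bar is NOT defined -->
        <name>!foo.bar</name>
      </property>
    </activation>
    <!-- sensible default declarations go here -->
  </profile>
  <profile>
    <id>other</id>
    <properties>
      <!-- per the article, defining foo.bar here (or passing -Dfoo.bar) keeps nofoobar switched off -->
      <foo.bar>true</foo.bar>
    </properties>
    <!-- overriding declarations go here -->
  </profile>
</profiles>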
November 27, 2010
by Andrew Spencer
· 140,207 Views · 4 Likes
article thumbnail
Java Thread Local – How to Use and Code Sample
Read about what a Thread Local is, and learn how to use it in this awesome tutorial.
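The linked tutorial covers the details; as a quick taste, here is a minimal, self-contained sketch of the usual pattern. The SimpleDateFormat use case is only an illustration, not taken from the article.

import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadLocalDemo {
    // each thread lazily gets its own SimpleDateFormat, so the non-thread-safe formatter is never shared
    private static final ThreadLocal<SimpleDateFormat> FORMATTER = new ThreadLocal<SimpleDateFormat>() {
        @Override
        protected SimpleDateFormat initialValue() {
            return new SimpleDateFormat("yyyy-MM-dd");
        }
    };

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getName() + " -> " + FORMATTER.get().format(new Date()));
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}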
November 23, 2010
by Veera Sundar
· 247,731 Views · 10 Likes
article thumbnail
Real-Time Charts on the Java Desktop
Devoxx, and all similar conferences, is a place where you make new discoveries, continually. One of these, in my case, at last week's Devoxx, started from a discussion with Jaroslav Bachorik from the VisualVM team. He had presented VisualVM's extensibility in a session at Devoxx. I had heard that, when creating extensions for VisualVM, one can also create new charts using VisualVM's own charting API. Jaroslav confirmed this and we created a small demo together to prove it, i.e., there's a charting API in VisualVM. Since VisualVM is based on the NetBeans Platform, I went further and included the VisualVM charts in a generic NetBeans Platform application. Then I wondered what the differences are between JFreeChart and VisualVM charts, so asked the VisualVM chart architect, Jiri Sedlacek. He sent me a very interesting answer: JFreeCharts are great for creating any kind of static graphs (typically for reports). They provide support for all types of existing chart types. The benefit of using JFreeChart is fully customizable appearance and export to various formats. The only problem of this library is that it's not primarily designed for displaying live data. You can hack it to display data in real time, but the performance is poor. That's why I've created the VisualVM charts. The primary (and so far only) goal is to provide charts optimized for displaying live data with minimal performance and memory overhead. You can easily display a fullscreen graph and it will still scroll smoothly while running and adding new values (when running on physical hardware, virtualized environment may give slightly worse results). There's a real rendering engine behind the charts which ensures that only the changed areas of the chart are repainted (no full-repaints because of a 1px change). Scrolling the chart means moving the already rendered image and only painting the newly displayed area. Last but not least, the charts are optimized for displaying over a remote X session - rendering is automatically switched to low-quality ensuring good response times and interactivity. The Tracer engine introduced in VisualVM 1.3 further improves performance of the charts. I've intensively profiled and optimized the charts to minimize the cpu cycles/memory allocations for each repaint. As of now, I believe that the VisualVM charts are the fastest real time Java charts with the lowest cpu/memory footprint. Best of all is that everything described above is in the JDK. That's because VisualVM is in the JDK. Here's a small NetBeans Platform application (though you could also use the VisualVM chart API without using the NetBeans Platform, just include these JARs on your classpath: org-netbeans-lib-profiler-charts.jar, com-sun-tools-visualvm-charts.jar, com-sun-tools-visualvm-uisupport.jar and org-netbeans-lib-profiler-ui.jar) that makes use of the VisualVM chart API outlined above: The chart that you see above is updated in real time and you can change to full screen and you can scroll through it and, at the same time, there is no lag and it is very performant. 
Below is all the code (from the unit test package in the VisualVM sources) that you see in the JPanel above: public class Demo extends JPanel { private static final long SLEEP_TIME = 500; private static final int VALUES_LIMIT = 150; private static final int ITEMS_COUNT = 8; private SimpleXYChartSupport support; public Demo() { createModels(); setLayout(new BorderLayout()); add(support.getChart(), BorderLayout.CENTER); } private void createModels() { SimpleXYChartDescriptor descriptor = SimpleXYChartDescriptor.decimal(0, 1000, 1000, 1d, true, VALUES_LIMIT); for (int i = 0; i < ITEMS_COUNT; i++) { descriptor.addLineFillItems("Item " + i); } descriptor.setDetailsItems(new String[]{"Detail 1", "Detail 2", "Detail 3"}); descriptor.setChartTitle("Demo Chart"); descriptor.setXAxisDescription("X Axis [time]"); descriptor.setYAxisDescription("Y Axis [units]"); support = ChartFactory.createSimpleXYChart(descriptor); new Generator(support).start(); } private static class Generator extends Thread { private SimpleXYChartSupport support; public void run() { while (true) { try { long[] values = new long[ITEMS_COUNT]; for (int i = 0; i < values.length; i++) { values[i] = (long) (1000 * Math.random()); } support.addValues(System.currentTimeMillis(), values); support.updateDetails(new String[]{1000 * Math.random() + "", 1000 * Math.random() + "", 1000 * Math.random() + ""}); Thread.sleep(SLEEP_TIME); } catch (Exception e) { e.printStackTrace(System.err); } } } private Generator(SimpleXYChartSupport support) { this.support = support; } } } Here is the related Javadoc. To get started using the VisualVM charts in your own application, read this blog, and then look in the "lib" folder of the JDK to find the JARs you will need. And then have fun with real-time data in your Java desktop applications.
November 20, 2010
by Geertjan Wielenga
· 71,403 Views
article thumbnail
SOAP/SAAJ/XML Issues When Migrating to Java 6 (with Axis 1.2)
When you migrate an application using Apache Axis 1.2 from Java 4 or 5 to Java 6 (JRE 1.6) you will most likely encounter a handful of strange SOAP/SAAJ/XML errors and ClassCastExceptions. This is due to the fact that Sun’s implementation of SAAJ 1.3 has been integrated directly into the 1.6 JRE. Due to this integration it’s loaded by the bootstrap class loader and thus cannot see various classes that you might be referencing in your old code. As mentioned on Spring pages: Java 1.6 ships with SAAJ 1.3, JAXB 2.0, and JAXP 1.4 (a custom version of Xerces and Xalan). Overriding these libraries by putting different version on the classpath will result in various classloading issues, or exceptions in org.apache.xml.serializer.ToXMLSAXHandler. The only option for using more recent versions is to put the newer version in the endorsed directory (see above). Fortunately, there is a simple solution, at least for Axis 1.2. Some of the exceptions that we’ve encountered Sample Axis code import javax.xml.messaging.URLEndpoint;import javax.xml.soap.MessageFactory;import javax.xml.soap.SOAPConnection;import javax.xml.soap.SOAPConnectionFactory;import javax.xml.soap.SOAPMessage;...public static callAxisWebservice() {SOAPConnectionFactory soapconnectionfactory = SOAPConnectionFactory.newInstance();SOAPConnection soapconnection = soapconnectionfactory.createConnection();MessageFactory messagefactory = MessageFactory.newInstance();SOAPMessage soapmessage = messagefactory.createMessage();...URLEndpoint urlendpoint = new URLEndpoint(string);SOAPMessage soapmessage_18_ = soapconnection.call(soapmessage, urlendpoint);...} SOAPExceptionImpl: Bad endPoint type com.sun.xml.internal.messaging.saaj.SOAPExceptionImpl: Bad endPoint type http://example.com/ExampleAxisService at com.sun.xml.internal.messaging.saaj.client.p2p.HttpSOAPConnection.call(HttpSOAPConnection.java:161) This extremely confusing error is caused by the following, seemingly innocent code above, namely by the ‘… new URLEndpoint(string)’ and the call itself. The problem here is that Sun’s HttpSOAPConnection can’t see the javax.xml.messaging.URLEndpoint because it is not part of the JRE and is contained in another JAR, not visible to the classes loaded by the bootstrap loader. If you check the HttpSOAPConnection’s code (this is not exactly the version I have but close enough) you will see that it calls “Class.forName(“javax.xml.messaging.URLEndpoint”);” on line 101. For the reason mentioned it fails with a ClassNotFoundException (as indicated by the log “URLEndpoint is available only when JAXM is there” when you enable the JDK logging for the finest level) and thus the method isn’t able to recognize the type of the argument and fails with the confusing Bad endPoint message. A soluti0n in this case would be to pass a java.net.URL or a String instead of a URLEndpoint (though it might lead to other errors, like the one below). Related: Oracle saaj:soap1.2 bug SOAPExceptionImpl: Bad endPoint type. DOMException: NAMESPACE_ERR org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces. at org.apache.xerces.dom.AttrNSImpl.setName(Unknown Source) at org.apache.xerces.dom.AttrNSImpl.(Unknown Source) at org.apache.xerces.dom.CoreDocumentImpl.createAttributeNS(Unknown Source) I don’t rembember exactly what we have changed on the classpath to get this confusing exception and I’ve no idea why it is thrown. 
Bonus: Conflict between Axis and IBM WebSphere JAX-RPC “thin client” Additionally, if you happen to have com.ibm.ws.webservices.thinclient_7.0.0.jar somewhere on the classpath, you may get this funny exception: java.lang.ClassCastException: org.apache.axis.Message incompatible with com.ibm.ws.webservices.engine.Message at com.ibm.ws.webservices.engine.soap.SOAPConnectionImpl.call(SOAPConnectionImpl.java:198) You may wonder why Java tries to use Axis Message with WebSphere SOAP connection. Well, it’s because the SAAJ lookup mechanism prefers the websphere implementation, for it declares itself via META-INF/services/javax.xml.soap.SOAPFactory pointing to com.ibm.ws.webservices.engine.soap.SOAPConnectionFactoryImpl, but instantiates the org.apache.axis.soap.MessageFactoryImpl for message creation for the websphere thin client doesn’t provide an implementation of this factory. The solution here is the same as for all the other exception, to use exclusively Axis. But if you are interested, check the description how to correctly create a Message with the websphere runtime on page 119 of the IBM WebSphere Application Server V7.0 Web Services Guide (md = javax.xml.ws.Service.create(serviceName).createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE); ((SOAPBinding) ((BindingProvider) md).getBinding()).getMessageFactory();). Solution The solution that my collegue Jan Nad has found is to force JRE to use the SOAP/SAAJ implementation provided by Axis, something like: java -Djavax.xml.soap.SOAPFactory=org.apache.axis.soap.SOAPFactoryImpl -Djavax.xml.soap.MessageFactory=org.apache.axis.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPConnectionFactory=org.apache.axis.soap.SOAPConnectionFactoryImpl example.MainClass It’s also described in issue AXIS-2777. Check details of the lookup process in the SOAPFactory.newInstance() JavaDoc. From http://theholyjava.wordpress.com/2010/11/19/soapsaajxml-issues-when-migrating-to-java-6-with-axis-1-2/
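If you cannot control the JVM launch flags, the same three properties from the solution above can also be set programmatically, as long as this runs before the first SAAJ factory lookup. A minimal sketch, assuming Axis 1.x is on the classpath:

public class SaajBootstrap {
    // call this before any SOAPFactory/MessageFactory/SOAPConnectionFactory.newInstance() is invoked
    public static void forceAxisSaaj() {
        System.setProperty("javax.xml.soap.SOAPFactory", "org.apache.axis.soap.SOAPFactoryImpl");
        System.setProperty("javax.xml.soap.MessageFactory", "org.apache.axis.soap.MessageFactoryImpl");
        System.setProperty("javax.xml.soap.SOAPConnectionFactory", "org.apache.axis.soap.SOAPConnectionFactoryImpl");
    }
}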
November 20, 2010
by Jakub Holý
· 27,339 Views
article thumbnail
Java Web Start (Jnlp) Hello World Example
this tutorial shows you how to create a java web start (jnlp) file for user download. when the user clicks on the downloaded jnlp file, it launches a simple awt program. here's the summary steps : create a simple awt program and jar it as testjnlp.jar add keystore into testjnlp.jar create a jnlp file put it all into the tomcat folder access testjnlp.jar from web through http://localhost:8080/test.jnlp before starting this tutorial, lets read this brief java web start explanation from oracle. java web start is a mechanism for program delivery through a standard web server. typically initiated through the browser, these programs are deployed to the client and executed outside the scope of the browser. once deployed, the programs do not need to be downloaded again, and they can automatically download updates on startup without requiring the user to go through the whole installation process again. ok, let's go ~ 1. install jdk and tomcat install java jdk/jre version above 1.5 and tomcat. 2. directory structure directory structure of this example. 3. awt + jnlp see the content of testjnlp.java, it's just a simple awt program with jnlp supported. package com.mkyong;import java.awt.*;import javax.swing.*;import java.net.*;import javax.jnlp.*;import java.awt.event.actionlistener;import java.awt.event.actionevent;public class testjnlp { static basicservice basicservice = null; public static void main(string args[]) { jframe frame = new jframe("mkyong jnlp unofficial guide"); frame.setdefaultcloseoperation(jframe.exit_on_close); jlabel label = new jlabel(); container content = frame.getcontentpane(); content.add(label, borderlayout.center); string message = "jnln hello word"; label.settext(message); try { basicservice = (basicservice) servicemanager.lookup("javax.jnlp.basicservice"); } catch (unavailableserviceexception e) { system.err.println("lookup failed: " + e); } jbutton button = new jbutton("http://www.mkyong.com"); actionlistener listener = new actionlistener() { public void actionperformed(actionevent actionevent) { try { url url = new url(actionevent.getactioncommand()); basicservice.showdocument(url); } catch (malformedurlexception ignored) { } } }; button.addactionlistener(listener); content.add(button, borderlayout.south); frame.pack(); frame.show(); } p.s if "import javax.jnlp.*;" is not found, please include the jnlp library which islocated at jre/lib/javaws.jar. 4. jar it located your java classes folder. jar it with following command in command prompt jar -cf testjnlp.jar *.* this will package all the java's classes into a new jar file, named " testjnlp.jar ". 5. create keystore add a new keystore named " testkeys " keytool -genkey -keystore testkeys -alias jdc it will ask for a keystore password, first name, last name , organization's unit...etc..just fill them all. 6. assign keystore to jar file attached newly generated keystore " testkeys " to your " testjnlp.jar " file jarsigner -keystore testkeys testjnlp.jar jdc it will ask password for your newly created keystore 7. deploy jar it copy " testjnlp.jar " to tomcat's default web server folder, for example, in widnows - c:\program files\apache\tomcat 6.0\webapps\root 8. create jnlp file create a new test.jnlp file, content put this yong mook kim testing testing 9. deploy jnlp file copy test.jnlp to your tomcat default web server folder also. c:\program files\apache\tomcat 6.0\webapps\root 10. start tomcat start tomcat , c:\tomcat folder\bin\tomcat6.exe 11. 
test it access the url http://localhost:8080/test.jnlp , it will prompt you to download the test.jnlp file, just accept and double click on it. if everything went fine, you should see the following output. click on the "run" button to launch the awt program. note if jnlp has no response, add a mime mapping for the jnlp extension to the mime type application/x-java-jnlp-file in your web.xml , which is located in the tomcat conf folder (a sketch is shown below).
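for reference, a minimal test.jnlp descriptor (step 8) and the web.xml mime mapping (step 11 note) might look like the following. the codebase, title, vendor and main class follow this article's example; everything else is illustrative.

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://localhost:8080/" href="test.jnlp">
  <information>
    <title>testing</title>
    <vendor>yong mook kim</vendor>
  </information>
  <security>
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.5+"/>
    <jar href="testjnlp.jar" main="true"/>
  </resources>
  <application-desc main-class="com.mkyong.testjnlp"/>
</jnlp>

and in tomcat's web.xml:

<mime-mapping>
  <extension>jnlp</extension>
  <mime-type>application/x-java-jnlp-file</mime-type>
</mime-mapping>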
November 16, 2010
by Yong Mook Kim
· 93,803 Views · 3 Likes
article thumbnail
Generating Client Java Code for WSDL Using SOAP UI
create a soap ui project using your wsdl. set the preferences in soap ui for the axis2 home directory. right click on the wsdl in soap ui and click generate code. select adb binding and the following settings and click generate. the following is the directory structure and code files generated. that’s it, you can now use this code from your ide by importing it. p.s. you will need to add the axis2 jars to your project classpath. for more details visit my blog @ http://nitinaggarwal.wordpress.com/
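if you prefer to skip the gui, axis2 also ships a wsdl2java script that produces the same kind of adb stubs from the command line. a minimal sketch, where the wsdl file name and package are placeholders:

%AXIS2_HOME%\bin\wsdl2java.bat -uri addressbook.wsdl -d adb -s -p com.example.client -o generated

the generated stub classes can then be imported into your ide exactly as described above.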
November 12, 2010
by Nitin Aggarwal
· 209,628 Views · 3 Likes
article thumbnail
An introduction to Spock
Spock is an open source testing framework for Java and Groovy that has been attracting a growing following, especially in the Groovy community. It lets you write concise, expressive tests, using a quite readable BDD-style notation. It even comes with its own mocking library built in. Oh. I thought he was a sci-fi character. Can I see an example? Sure. Here's a simple one from a coding kata I did recently: import spock.lang.Specification; class RomanCalculatorSpec extends Specification { def "I plus I should equal II"() { given: def calculator = new RomanCalculator() when: def result = calculator.add("I", "I") then: result == "II" } } In Spock, you don't have tests, you have specifications. These are normal Groovy classes that extend the Specifications class, which is actually a JUnit class. Your class contains a set of specifications, represented by methods with funny-method-names-in-quotes™. The funny-method-names-in-quotes™ take advantage of some Groovy magic to let you express your requirements in a very readable form. And since these classes are derived from JUnit, you can run them from within Eclipse like a normal Groovy unit test, and they produce standard JUnit reports, which is nice for CI servers. Another thing: notice the structure of this test? We are using given:, when: and then: to express actions and expected outcomes. This structure is common in Behaviour-Driven Development, or BDD, frameworks like Cucumber and easyb. Though Spock-style tests are generally more concise more technically-focused than tools like Cucumber and easyb, which are often used for automating acceptance tests. But I digress... Actually, the example I gave earlier was a bit terse. We could make our intent clearer by adding text descriptions after the when: and then: labels, as I've done here: def "I plus I should equal II"() { when: "I add two roman numbers together" def result = calculator.add("I", "I") then: "the result should be the roman number equivalent of their sum" result == "II" } This is an excellent of clarifying your ideas and documenting your API. But where are the AssertEquals statements? Aha! I'm glad you asked! Spock uses a feature called Power Asserts. The statement after the then: is your assert. If this test fails, Spock will display a detailed analysis of what went wrong, along the following lines: I plus I should equal II(com.wakaleo.training.spocktutorial.RomanCalculatorSpec) Time elapsed: 0.33 sec <<< FAILURE! Condition not satisfied: result == "II" | | I false 1 difference (50% similarity) I(-) I(I) at com.wakaleo.training.spocktutorial .RomanCalculatorSpec.I plus I should equal II(RomanCalculatorSpec.groovy:17) Nice! But in JUnit, I have @Before and @After for fixtures. Can I do that in Spock? Sure, but you don't use annotations. Instead you implement setup() and cleanup() methods (which are run before and after each specification). I've added one here to show you what they look like: import spock.lang.Specification; class RomanCalculatorSpec extends Specification { def calculator def setup() { calculator = new RomanCalculator() } def "I plus I should equal II"() { when: def result = calculator.add("I", "I") then: result == "II" } } You can also define a setupSpec() and cleanupSpec(), which are run just before the first test and just after the last one. I'm a big fan of parameterized tests in JUnit 4. Can I do that in Spock! You sure can! In fact it's one of Spock's killer features! 
def "The lowest number should go at the end"() { when: def result = calculator.add(a, b) then: result == sum where: a | b | sum "X" | "I" | "XI" "I" | "X" | "XI" "XX" | "I" | "XXI" "XX" | "II" | "XXII" "II" | "XX" | "XXII" } This code will run the test 5 times. The variables a, b, and sum are initialized from the rows in the table in the where: clause. And if any of the tests fail, you get That's pretty cool too. What about mocking? Can I use Mockito? Sure, if you want. but Spock actually comes with it's own mocking framework, which is pretty neat. You set up a mock or a stub using the Mock() method. I've shown two possible ways to use this method here: given: Subscriber subscriber1 = Mock() def subscriber2 = Mock(Subscriber) ... You can set these mocks up to behave in certain ways. Here are a few examples. You can say a method should return a certain value using the >> operator: subscriber1.isActive() >> true subscriber2.isActive() >> false Or you could get a method to throw an exception when it is called: subscriber.activate() >> { throw new BlacklistedSubscriberException() } Then you can test outcomes in a few different ways. Here is a more complicated example to show you some of your options: def "Messages published by the publisher should only be received by active subscribers"() { given: "a publisher" def publisher = new Publisher() and: "some active subscribers" Subscriber activeSubscriber1 = Mock() Subscriber activeSubscriber2 = Mock() activeSubscriber1.isActive() >> true activeSubscriber2.isActive() >> true publisher.add activeSubscriber1 publisher.add activeSubscriber2 and: "a deactivated subscriber" Subscriber deactivatedSubscriber = Mock() deactivatedSubscriber.isActive() >> false publisher.add deactivatedSubscriber when: "a message is published" publisher.publishMessage("Hi there") then: "the active subscribers should get the message" 1 * activeSubscriber1.receive("Hi there") 1 * activeSubscriber2.receive({ it.contains "Hi" }) and: "the deactivated subscriber didn't receive anything" 0 * deactivatedSubscriber.receive(_) } That does look neat. So what is the best place to use Spock? Spock is great for unit or integration testing of Groovy or Grails projects. On the other hand, tools like easyb amd cucumber are probably better for automated acceptance tests - the format is less technical and the reporting is more appropriate for non-developers. From http://www.wakaleo.com/blog/303-an-introduction-to-spock
November 4, 2010
by John Ferguson Smart
· 38,156 Views · 4 Likes
article thumbnail
Know the JVM Series: Shutdown Hooks
Shutdown Hooks are a special construct that allow developers to plug in a piece of code to be executed when the JVM is shutting down.
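A minimal sketch of the mechanism described above; the hook and the work it does are illustrative:

public class ShutdownHookDemo {
    public static void main(String[] args) {
        // register a hook; the JVM runs it on normal exit or on SIGINT/SIGTERM, but not on a hard kill
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                System.out.println("Cleaning up before the JVM exits...");
            }
        }));
        System.out.println("Doing some work, then exiting.");
    }
}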
October 23, 2010
by Yohan Liyanage
· 84,838 Views · 1 Like