Changing Eclipse Default Encoding to UTF-8 for JSP files
Try creating a new JSP file in Eclipse and you'll notice that the JSP page directive specifies an encoding other than UTF-8. For better I18N and L10N support, it is recommended to use UTF-8 encoding wherever possible. So, how do we change the default JSP file encoding to UTF-8 in Eclipse? Simple. In Eclipse, go to Window -> Preferences -> Web -> JSP Files and select UTF-8 from the Encoding dropdown box. That's it! And if you wonder how this change works, you can see for yourself: in the same Preferences window, go to Preferences -> Web -> JSP Files -> Editor -> Templates. On the right-hand side you'll see a list of templates defined for JSP files, and the new-JSP-file template uses an ${encoding} variable. As you might have guessed, ${encoding} is replaced by whatever we set in the step above. The same method can be used to change the default encoding for other file types too (CSS, HTML). From http://veerasundar.com/blog/2010/12/changing-eclipse-default-encoding-to-utf-8-for-jsp-files/
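For reference, the page directive in Eclipse's default new-JSP-file template looks roughly like this (a sketch; the exact template text varies by Eclipse version):

```jsp
<%@ page language="java" contentType="text/html; charset=${encoding}"
    pageEncoding="${encoding}" %>
```

When a new JSP file is created, Eclipse substitutes the encoding chosen in Preferences for each occurrence of ${encoding}, which is why changing the dropdown changes every newly created file.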
December 8, 2010
by Veera Sundar
Using Sphinx and Java to Implement Free Text Search
As promised, this article shows how we can use Sphinx with Java to perform full-text search. I will begin with an introduction to Sphinx.

Introduction to Sphinx

Databases continually grow, sometimes to around 100M records, and need an external solution for full-text search. I have picked Sphinx, an open-source full-text search engine distributed under GPL version 2, to perform full-text search on such a large amount of data. It is a standalone search engine meant to provide fast, size-efficient, and relevant full-text search functions to other applications, and it is very much compatible with SQL databases. My example will be based on MySQL; since we cannot produce millions of rows to evaluate the real power of Sphinx, we will work with a small amount of data, which should not be a problem.

Here are a few of Sphinx's unique features:

- high indexing speed (up to 10 MB/sec on modern CPUs)
- high search speed (average query under 0.1 sec on 2-4 GB text collections)
- high scalability (up to 100 GB of text, up to 100M documents on a single CPU)
- distributed searching capabilities
- searching from within MySQL through a pluggable storage engine
- boolean, phrase, and word proximity queries
- multiple full-text fields per document (up to 32 by default)
- multiple additional attributes per document (i.e. groups, timestamps, etc.)
- native MySQL support (both MyISAM and InnoDB tables)

The features that matter most here are the Java API, which integrates easily with a web application, and the considerably high indexing and searching speed, averaging 4-10 MB/sec and 20-30 ms/query at 5 GB / 3.5M docs (Wikipedia).

Sphinx Terms and How It Works

The first principal tool in Sphinx is the indexer. It is solely responsible for gathering the data that will be searchable.
From the Sphinx point of view, the data it indexes is a set of structured documents, each of which has the same set of fields. This maps naturally onto SQL, where each row corresponds to a document and each column to a field. From the data provided, Sphinx builds a special data structure optimized for our queries. This structure is called an index; the process of building an index from data is called indexing, and the component of Sphinx that carries out this task is the indexer. The indexer can be executed either from a regular script or from the command-line interface.

Sphinx documents are equivalent to records in a DB:

- A document is a set of text fields and numeric attributes plus a unique ID, similar to a row in a DB.
- The set of fields and attributes is constant for an index, similar to a table in a DB.
- Fields are searchable with full-text queries.
- Attributes may be used for filtering, sorting, and grouping.

searchd is the second principal tool in Sphinx. It is the part of the system that actually handles searches; it functions as a server and is responsible for receiving queries, processing them, and returning a dataset to the various client APIs. Unlike the indexer, searchd is not designed to be run from a regular script or command-line call, but rather as a daemon called from init.d (on Unix/Linux-type systems) or as a service (on Windows-type systems). I am going to focus on the Windows environment, so later I will show how to install Sphinx on Windows as a service.

Finally, search is one of the helper tools in the Sphinx package. Whereas searchd is responsible for searches in a server-type environment, search is aimed at testing an index quickly without building a framework to connect to the server and process its response. It will only be used for testing Sphinx from the command line; for the application's requirements, the searchd service will be used to query the MySQL data through the index we create.
Installation on Windows

So now we come to installing Sphinx on Windows:

- Download Sphinx from the official download site, http://sphinxsearch.com (I downloaded the Win32 release binaries with MySQL support: sphinx-0.9.9-win32.zip).
- Unzip the file to some folder. I unzipped to C:\devel\sphinx-0.9.9-win32 and added the bin directory to the Windows PATH variable.

That's it; Sphinx is installed. Nice, simple, easy. Later I will show how to set up indexes and search.

Sample Application

By now I guess the goal of this article is clear, so let's define our sample application. We all use the Address Book to search for people by name or e-mail address when we want to immediately address an e-mail message to a specific person, group of people, or distribution list. We also search for people using other basic information, such as e-mail alias, office location, and telephone number. Most people are quite familiar with this kind of search, so let's make the Outlook address book our sample database schema. Most of the fields are mapped from Microsoft Outlook; the only additional column is the date of joining, so that we can filter queries based on employees' joining dates. The example will use Sphinx to search for a particular address entry using free-text search, meaning the user is free to type in anything; the DOJ (date of joining) search parameter is optional, and the search screen is self-explanatory. Let's move ahead and define our database.
Since Sphinx works well with MySQL, and MySQL is free as well, let's create our DB scripts around a MySQL database (those who wish to install MySQL can download it from http://www.mysql.com). First, our sample database 'addressbook':

mysql> create database addressbook;
Query OK, 1 row affected (0.03 sec)

mysql> use addressbook;
Database changed

Note: The fields defined in the following tables are for learning purposes only and may not contain the complete set of fields that the Microsoft address book or any similar software provides.

mysql> CREATE TABLE addressbook (
  Id int(11) NOT NULL,
  FirstName varchar(30) NOT NULL,
  LastName varchar(30) NOT NULL,
  OfficeId int(11) DEFAULT NULL,
  Title varchar(20) DEFAULT NULL,
  Alias varchar(20) NOT NULL,
  Email varchar(50) NOT NULL,
  DOJ date NOT NULL,
  PhoneNo varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

mysql> CREATE TABLE CompanyLocations (
  Id int(11) NOT NULL,
  Location varchar(60) NOT NULL,
  Country varchar(20) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

It's time to put some dummy data into the tables. Our virtual company 'gogs.it' has six offices across India and Singapore, as defined in the following insert script:

mysql> insert into CompanyLocations (Id, Location, Country) VALUES (1, 'Tower One, Harbour Front, Singapore', 'SG');
insert into CompanyLocations (Id, Location, Country) VALUES (2, 'DLF Phase 3, Gurgaon, India', 'IN');
insert into CompanyLocations (Id, Location, Country) VALUES (3, 'Hiranandani Gardens, Powai, Mumbai, India', 'IN');
insert into CompanyLocations (Id, Location, Country) VALUES (4, 'Hinjwadi, Pune, India', 'IN');
insert into CompanyLocations (Id, Location, Country) VALUES (5, 'Toll Post, Nagrota, Jammu, India', 'IN');
insert into CompanyLocations (Id, Location, Country) VALUES (6, 'Bani (Kathua), India', 'IN');

Now comes the real stuff...
The data sphinx is going to index, let's populate that as well...wooooo mysql> INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (1,'Aabheer','Kumar',1,'Mr','u534','Kumar.Aabheer@gogs.it','2008-9-3', '+911234599990'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (2,'Aadarsh','Gupta',6,'Mr','u668','Gupta.Aadarsh@gogs.it','2007-2-23','+911234599991'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (3,'Aachman','Singh',5,'Mr','u2766','Singh.Aachman@gogs.it','2006-12-18','+911234599992'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (4,'Aadesh','Shrivastav',5,'Mr','u3198','Shrivastav.Aadesh@gogs.it','2007-11-23','+911234599993'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (5,'Aadi','manav',1,'Mr','u2686','manav.Aadi@gogs.it','2010-7-20','+911234599994'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (6,'Aadidev','singh',4,'Mr','u572','singh.Aadidev@gogs.it','2010-8-18','+911234599995'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (7,'Aafreen','sheikh',4,'Smt','u1092','sheikh.Aafreen@gogs.it','2007-7-11','+911234599996'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (8,'Aakar','Sherpa',5,'Mr','u1420','Sherpa.Aakar@gogs.it','2009-10-3','+911234599997'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (9,'Aakash','Singh',4,'Mrs','u2884','Singh.Aakash@gogs.it','2008-6-11','+911234599998'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (10,'Aalap','Singhania',4,'Mrs','u609','Singhania.Aalap@gogs.it','2010-10-8','+911234599999'); 
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (11,'Aandaleeb','mahajan',1,'Smt','u131','mahajan.Aandaleeb@gogs.it','2010-10-21','+911234580001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (12,'Mamata','kumari',5,'Sh','u2519','kumari.Mamata@gogs.it','2009-6-12','+911234580002'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (13,'Mamta','sharma',6,'Smt','u4123','sharma.Mamta@gogs.it','2009-2-8','+911234580003'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (14,'Manali','singh',6,'Mr','u1078','singh.Manali@gogs.it','2008-6-14','+911234580004'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (15,'Manda','saxena',1,'Mrs','u196','saxena.Manda@gogs.it','2010-9-4','+911234580005'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (16,'Salila','shetty',3,'Miss','u157','shetty.Salila@gogs.it','2009-11-15','+911234580006'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (17,'Salima','happy',3,'Mrs','u3445','happy.Salima@gogs.it','2006-7-14','+911234580007'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (18,'Salma','haik',5,'Sh','u4621','haik.Salma@gogs.it','2008-6-23','+911234580008'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (19,'Samita','patil',3,'Smt','u3156','patil.Samita@gogs.it','2006-6-7','+911234580009'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (20,'Sameena','sheikh',5,'Mrs','u952','sheikh.Sameena@gogs.it','2008-8-13','+911234580010'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, 
DOJ, PhoneNo) VALUES (21,'Ranita','gupta',5,'Mrs','u2664','gupta.Ranita@gogs.it','2008-10-20','+911234580011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (22,'Ranjana','sharma',1,'Sh','u3085','sharma.Ranjana@gogs.it','2010-6-21','+911234580012'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (23,'Ranjini','singh',6,'Mrs','u4200','singh.Ranjini@gogs.it','2007-4-13','+911234580013'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (24,'Ranjita','vyapari',2,'Smt','u1109','vyapari.Ranjita@gogs.it','2008-1-22','+911234580014'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (25,'Rashi','gupta',6,'Mrs','u3492','gupta.Rashi@gogs.it','2006-2-2','+911234580015'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (26,'Rashmi','sehgal',3,'Mr','u3248','sehgal.Rashmi@gogs.it','2008-9-9','+911234580016'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (27,'Rashmika','sexy',1,'Mrs','u4599','sexy.Rashmika@gogs.it','2009-3-12','+911234580017'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (28,'Rasika','dulari',3,'Smt','u2089','dulari.Rasika@gogs.it','2009-1-24','+911234580018'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (29,'Dilber','lover',6,'Mr','u4241','lover.Dilber@gogs.it','2007-10-11','+911234580019'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (30,'Dilshad','happy',1,'Mr','u1564','happy.Dilshad@gogs.it','2007-4-8','+911234580020'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(31,'Dipali','lights',5,'Sh','u1127','lights.Dipali@gogs.it','2006-11-1','+911234580021'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (32,'Dipika','lamp',1,'Sh','u2271','lamp.Dipika@gogs.it','2010-12-17','+911234580022'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (33,'Dipti','brightness',5,'Smt','u422','brightness.Dipti@gogs.it','2010-9-25','+911234580023'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (34,'Disha','singh',3,'Sh','u4604','singh.Disha@gogs.it','2006-5-2','+911234580024'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (35,'Maadhav','Krishna',1,'Miss','u2561','Krishna.Maadhav@gogs.it','2007-11-6','+911234580025'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (36,'Maagh','month',5,'Miss','u874','month.Maagh@gogs.it','2008-5-8','+911234580026'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (37,'Maahir','Skilled',4,'Mr','u3372','Skilled.Maahir@gogs.it','2007-8-4','+911234580027'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (38,'Maalolan','Ahobilam',5,'Mrs','u3498','Ahobilam.Maalolan@gogs.it','2007-7-9','+911234580028'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (39,'Maandhata','King',1,'Smt','u2089','King.Maandhata@gogs.it','2009-9-3','+911234580029'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (40,'Maaran','Brave',2,'Miss','u4020','Brave.Maaran@gogs.it','2008-4-5','+9112345606001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(41,'Maari','Rain',2,'Sh','u3593','Rain.Maari@gogs.it','2007-12-5','+9112345606002'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (42,'Madan','Cupid',4,'Mrs','u795','Cupid.Madan@gogs.it','2007-11-11','+9112345606003'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (43,'Madangopal','Krishna',3,'Sh','u438','Krishna.Madangopal@gogs.it','2007-2-19','+9112345606004'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (44,'sahil','gogna',1,'Sh','u2273','gogna.sahil@gogs.it','2007-10-7','+9112345606005'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (45,'nikhil','gogna',2,'Mr','u1240','gogna.nikhil@gogs.it','2009-9-14','+9112345606006'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (46,'amit','gogna',5,'Sh','u3879','gogna.amit@gogs.it','2006-2-8','+9112345606007'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (47,'krishan','gogna',4,'Miss','u3632','gogna.krishan@gogs.it','2010-9-20','+9112345606008'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (48,'anil','kashyap',4,'Smt','u3939','kashyap.anil@gogs.it','2010-3-15','+9112345606009'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (49,'sunil','kashyap',5,'Mrs','u3493','kashyap.sunil@gogs.it','2008-3-16','+9112345606010'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (50,'sandy','singh',6,'Mrs','u4691','singh.sandy@gogs.it','2009-6-2','+9112345606011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(51,'vishal','kapoor',3,'Mr','u1087','kapoor.vishal@gogs.it','2010-5-13','+9112345606012'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (52,'bala','ji',5,'Mrs','u4762','ji.bala@gogs.it','2007-8-9','+9112345606013'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (53,'karan','sarin',4,'Miss','u3030','sarin.karan@gogs.it','2008-4-8','+9112345606014'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (54,'abhishek','kumar',4,'Miss','u1093','kumar.abhishek@gogs.it','2008-12-21','+9112345605001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (55,'babu','the',1,'Miss','u1055','the.babu@gogs.it','2008-7-2','+9112345506001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (56,'sandeep','gainda',3,'Miss','u1320','gainda.sandeep@gogs.it','2010-5-14','+9112345606301'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (57,'dheeraj','kumar',3,'Miss','u3685','kumar.dheeraj@gogs.it','2007-10-14','+9112345606091'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (58,'dharmendra','chauhan',1,'Smt','u3235','chauhan.dharmendra@gogs.it','2008-8-1','+9112345806001'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (59,'max','alan',3,'Smt','u3465','alan.max@gogs.it','2009-5-5','+9112345608011'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (60,'hidayat','khan',3,'Smt','u958','khan.hidayat@gogs.it','2007-11-18','+911234599101'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES 
(61,'himnashu','singh',4,'Miss','u2027','singh.himnashu@gogs.it','2008-3-2','+911234599102'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (62,'dinesh','kumar',6,'Sh','u3233','kumar.dinesh@gogs.it','2008-5-9','+911234599103'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (63,'toshi','prakash',1,'Mr','u3766','prakash.toshi@gogs.it','2010-9-17','+911234599104'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (64,'niti','puri',3,'Mr','u3575','puri.niti@gogs.it','2009-11-15','+911234599105'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (65,'pawan','tikki',3,'Sh','u3919','tikki.pawan@gogs.it','2006-3-19','+911234599106'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (66,'gaurav','sharma',2,'Sh','u413','sharma.gaurav@gogs.it','2010-4-2','+911234599107'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (67,'himanshu','verma',2,'Mrs','u4732','verma.himanshu@gogs.it','2009-3-20','+911234599108'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (68,'priyanshu','verma',3,'Sh','u183','verma.priyanshu@gogs.it','2010-8-12','+911234599109'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (69,'nitika','luthra',2,'Mrs','u4259','luthra.nitika@gogs.it','2010-7-12','+911234599110'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (70,'neeru','gogna',2,'Sh','u1633','gogna.neeru@gogs.it','2010-6-23','+91532110000'); INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (71,'bindu','gupta',1,'Sh','u1859','gupta.bindu@gogs.it','2006-11-10','+91532110001'); 
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (72,'gurleen','bakshi',5,'Miss','u1423','bakshi.gurleen@gogs.it','2007-7-1','+91532110003');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (73,'rahul','gupta',3,'Sh','u1223','gupta.rahul@gogs.it','2009-8-11','+91532110004');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (74,'jagdish','salgotra',3,'Mr','u12','salgotra.jagdish@gogs.it','2008-5-19','+91532110005');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (75,'vikas','sharma',3,'Smt','u465','sharma.vikas@gogs.it','2006-6-2','+91532110006');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (76,'poonam','mahendra',2,'Sh','u1744','mahendra.poonam@gogs.it','2009-12-2','+91532110007');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (77,'pooja','kulkarni',3,'Mrs','u1903','kulkarni.pooja@gogs.it','2008-10-6','+91532110008');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (78,'priya','mahajan',6,'Sh','u4205','mahajan.priya@gogs.it','2010-8-5','+91532110009');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (79,'manoj','zerger',1,'Mrs','u3369','zerger.manoj@gogs.it','2009-12-4','+91532110010');
INSERT INTO AddressBook(Id, FirstName, LastName, OfficeId, Title, Alias, Email, DOJ, PhoneNo) VALUES (80,'mohan','master',5,'Mr','u2841','master.mohan@gogs.it','2010-10-7','+91532110011');

Please note that the employee data above is just data, *only data*, that I generated with a small Java program using random number generators and a names file, so you may find the titles getting messed up :(

Next we create a procedure that we will call from Java to fetch the records we just inserted:

DROP PROCEDURE IF EXISTS search_address_book;

CREATE PROCEDURE search_address_book(IN address_ids VARCHAR(1000))
BEGIN
  DECLARE search_address_query VARCHAR(2000) DEFAULT '';
  SET address_ids = CONCAT('\'', REPLACE(address_ids, ',', '\',\''), '\'');
  SET search_address_query = CONCAT(search_address_query, ' select ab.Id as Id, ab.FirstName as FName, ab.LastName as LName, cl.Location as Location, ab.Title as Title, ab.Alias as Alias, ab.Email as Email, ab.DOJ as DOJ, ab.PhoneNo as PhoneNo ');
  SET search_address_query = CONCAT(search_address_query, ' from AddressBook ab left join CompanyLocations cl on ab.OfficeId=cl.Id ');
  SET search_address_query = CONCAT(search_address_query, ' where ab.id IN (', address_ids, ') ');
  SET @statement = search_address_query;
  PREPARE dynquery FROM @statement;
  EXECUTE dynquery;
  DEALLOCATE PREPARE dynquery;
END;

# To get records for ids 1, 6 and 7, we run:
call search_address_book('1,6,7');

Configuring Sphinx

It turns out that it is not terribly difficult to set up Sphinx, but I had a hard time finding instructions on the web, so I'll post my steps here.
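The search_address_book procedure takes the matching IDs as a single comma-separated string, so the Java side has to build that string from the document IDs Sphinx returns and pass it through a CallableStatement. A minimal sketch follows; the JDBC URL and credentials are assumptions for illustration, and only the ID-joining part runs without a database:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class AddressBookDao {

    // Join the document IDs returned by Sphinx into the comma-separated
    // string expected by the search_address_book procedure.
    static String joinIds(long[] ids) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < ids.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(ids[i]);
        }
        return sb.toString();
    }

    // Call the stored procedure for the given Sphinx matches and print
    // the resolved rows. URL/user/password are hypothetical.
    static void printMatches(long[] ids) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/addressbook", "root", "root");
             CallableStatement cs = con.prepareCall("{call search_address_book(?)}")) {
            cs.setString(1, joinIds(ids));
            try (ResultSet rs = cs.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("FName") + " " + rs.getString("LName"));
                }
            }
        }
    }

    public static void main(String[] args) {
        // Same IDs as the manual test: call search_address_book('1,6,7');
        System.out.println(joinIds(new long[] {1, 6, 7}));
    }
}
```

This keeps the SQL in one prepared procedure while the application only ships IDs around, which is exactly the division of labour Sphinx encourages.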
By default Sphinx looks for a 'sphinx.conf' configuration file for its indexes and other settings. Let's create one and define the source and index for our sample application: addressbook.conf (read between the lines).

#############################################################################
## data source definition
#############################################################################

source addressBookSource
{
    ## SQL settings for 'mysql' ##
    type            = mysql

    # some straightforward parameters for SQL source types
    sql_host        = localhost
    sql_user        = root
    sql_pass        = root
    sql_db          = addressbook
    sql_port        = 3306   # optional, default is 3306

    # pre-query, executed before the main fetch query
    sql_query_pre   = SET NAMES utf8

    # main document fetch query; the integer document ID field MUST be the first selected column
    sql_query       = \
        select ab.Id as Id, ab.FirstName as FName, ab.LastName as LName, cl.Location as Location, \
        ab.Title as Title, ab.Alias as Alias, ab.Email as Email, UNIX_TIMESTAMP(ab.DOJ) as DOJ, ab.PhoneNo as PhoneNo \
        from AddressBook ab left join CompanyLocations cl on ab.OfficeId=cl.Id

    sql_attr_timestamp = DOJ

    # document info query, ONLY for CLI search (i.e. testing and debugging)
    # optional, default is empty; must contain the $id macro and fetch the document by that id
    sql_query_info  = SELECT * FROM AddressBook WHERE id=$id
}

#############################################################################
## index definition
#############################################################################

# local index example: an index which is stored locally in the filesystem
index addressBookIndex
{
    # document source(s) to index
    source      = addressBookSource

    # index files path and file name, without extension; make sure this folder exists
    path        = C:\devel\sphinx-0.9.9-win32\data\addressBookIndex

    # document attribute values (docinfo) storage mode
    docinfo     = extern

    # memory locking for cached data (.spa and .spi), to prevent swapping
    mlock       = 0

    morphology  = none

    # make sure this file exists
    exceptions  = C:\devel\sphinx-0.9.9-win32\data\exceptions.txt

    enable_star = 1
}

#############################################################################
## indexer settings
#############################################################################

indexer
{
    # memory limit, in bytes, kilobytes (16384K) or megabytes (256M)
    # optional, default is 32M, max is 2047M, recommended is 256M to 1024M
    mem_limit = 32M

    # maximum IO calls per second (for I/O throttling); optional, default is 0 (unlimited)
    # max_iops = 40

    # maximum IO call size, bytes (for I/O throttling); optional, default is 0 (unlimited)
    # max_iosize = 1048576

    # maximum xmlpipe2 field length, bytes; optional, default is 2M
    # max_xmlpipe2_field = 4M

    # write buffer size, bytes; several (currently up to 4) buffers are allocated
    # in addition to mem_limit; optional, default is 1M
    # write_buffer = 1M
}

#############################################################################
## searchd settings
#############################################################################

searchd
{
    # hostname, port, or hostname:port, or /unix/socket/path to listen on
    listen          = 9312

    # log file, searchd run info is logged here; optional, default is 'searchd.log'
    log             = C:\devel\sphinx-0.9.9-win32\data\log\searchd.log

    # query log file, all search queries are logged here; optional, default is empty (do not log queries)
    query_log       = C:\devel\sphinx-0.9.9-win32\data\log\query.log

    # client read timeout, seconds; optional, default is 5
    read_timeout    = 5

    # request timeout, seconds; optional, default is 5 minutes
    client_timeout  = 300

    # maximum amount of children to fork (concurrent searches to run); optional, default is 0 (unlimited)
    max_children    = 30

    # PID file, searchd process ID file name; mandatory
    pid_file        = C:\devel\sphinx-0.9.9-win32\data\log\searchd.pid

    # max amount of matches the daemon ever keeps in RAM, per-index
    # WARNING, THERE'S ALSO A PER-QUERY LIMIT, SEE SetLimits() API CALL
    # default is 1000 (just like Google)
    max_matches     = 1000

    # seamless rotate, prevents rotate stalls when precaching huge datasets; optional, default is 1
    seamless_rotate = 1

    # whether to forcibly preopen all indexes on startup; optional, default is 0 (do not preopen)
    preopen_indexes = 0
}
# --eof--

Once the configuration is done, it's time to index our SQL data. The command to use is 'indexer', as shown below:

C:\devel\sphinx-0.9.9-win32\bin>indexer.exe --all --config C:\devel\sphinx-0.9.9-win32\addressbook.conf

CONSOLE:
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff

using config file 'C:\devel\sphinx-0.9.9-win32\addressbook.conf'...
indexing index 'addressBookIndex'...
collected 80 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 80 docs, 5514 bytes
total 0.057 sec, 96386 bytes/sec, 1398.43 docs/sec
total 2 reads, 0.000 sec, 3.5 kb/call avg, 0.0 msec/call avg
total 7 writes, 0.000 sec, 2.5 kb/call avg, 0.0 msec/call avg

Note: As mentioned earlier, Sphinx creates one document for each row; since we had 80 rows in the database, a total of 80 docs are created.
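In a real application the index usually needs rebuilding on a schedule rather than by hand. A small sketch of driving indexer.exe from Java with ProcessBuilder; the paths mirror the install location used in this article and are assumptions, and --rotate is indexer's standard flag for telling a running searchd to pick up the rebuilt index:

```java
import java.util.Arrays;
import java.util.List;

public class ReindexJob {

    // Assumed install paths, matching the setup earlier in the article.
    static final String BIN  = "C:\\devel\\sphinx-0.9.9-win32\\bin\\indexer.exe";
    static final String CONF = "C:\\devel\\sphinx-0.9.9-win32\\addressbook.conf";

    // Build the indexer command line for one index.
    static List<String> command(String index) {
        return Arrays.asList(BIN, "--config", CONF, "--rotate", index);
    }

    // Run the indexer and return its exit code (0 means success).
    static int reindex(String index) throws Exception {
        Process p = new ProcessBuilder(command(index)).inheritIO().start();
        return p.waitFor();
    }

    public static void main(String[] args) {
        // Show the command that would be executed, without running it.
        System.out.println(String.join(" ", command("addressBookIndex")));
    }
}
```

Scheduling reindex("addressBookIndex") from a timer or cron-like job keeps the index in step with the AddressBook table.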
The time taken is also very small; believe me, I tried with half a million rows and it took around 3-4 seconds :) cool, isn't it?

Once the index is up, let's try to search a few records; the utility command to perform a search is 'search'. OK Sphinx maharaj*, please search for the employee whose alias is u4732:

C:\devel\sphinx-0.9.9-win32\bin>search.exe --config C:\devel\sphinx-0.9.9-win32\addressbook.conf u4732

CONSOLE:
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff

using config file 'C:\devel\sphinx-0.9.9-win32\addressbook.conf'...
index 'addressBookIndex': query 'u4732 ': returned 1 matches of 1 total in 0.001 sec

displaying matches:
1. document=67, weight=1, doj=Fri Mar 20 00:00:00 2009
	Id=67 FirstName=himanshu LastName=verma OfficeId=2 Title=Mrs Alias=u4732 Email=verma.himanshu@gogs.it DOJ=2009-03-20 PhoneNo=+911234599108

words:
1. 'u4732': 1 documents, 1 hits

As you can see above, this is a unique record for Himanshu.

Note: You see a lot of information for the result; this is because of the following line in our configuration file:

sql_query_info = SELECT * FROM AddressBook WHERE id=$id

If you want to see fewer columns, you need to change the sql_query_info in the configuration file.

Let's try another search. Sphinx maharaj*, please tell me which rows have gurleen or toshi in them.

C:\devel\sphinx-0.9.9-win32\bin>search.exe --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --any toshi gurleen

CONSOLE:
displaying matches:
1. document=63, weight=2, doj=Fri Sep 17 00:00:00 2010
	Id=63 FirstName=toshi LastName=prakash OfficeId=1 Title=Mr Alias=u3766 Email=prakash.toshi@gogs.it DOJ=2010-09-17 PhoneNo=+911234599104
2. document=72, weight=2, doj=Sun Jul 01 00:00:00 2007
	Id=72 FirstName=gurleen LastName=bakshi OfficeId=5 Title=Miss Alias=u1423 Email=bakshi.gurleen@gogs.it DOJ=2007-07-01 PhoneNo=+91532110003

Exactly two records were returned, and this is what we were expecting.
The following special operators and modifiers can be used in the extended matching mode:

operator OR: nikhil | sahil
operator NOT: hello -sandy hello !sandy
field search operator: @Email gogna.sahil@gogna.it

For a complete set of search features, I advise you to go through the http://sphinxsearch.com/docs/manual-0.9.9.html#searching link.

Sphinx as a Windows Service

Our main aim is to use Sphinx with the Java API, so let's move towards that now. Before Java can utilize the true power of Sphinx, we need to start 'searchd' as a Windows service so that our Java program can connect to the Sphinx search engine. Let's install Sphinx as a Windows service so that our Java program can use this daemon to query the index that we just created. The command is:

C:\devel\sphinx-0.9.9-win32\bin>searchd.exe --install --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --servicename --port 9312 SphinxSearch

CONSOLE:
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff

Installing service...
Service 'SphinxSearch' installed succesfully.

Sphinx is now ready to serve us on port 9312.

Note: If you try to install Sphinx without admin rights, you may get the following error message:

C:\devel\sphinx-0.9.9-win32\bin>searchd.exe --install --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --servicename --port 9312 SphinxSearch

CONSOLE:
Installing service...
FATAL: OpenSCManager() failed: code=5, error=Access is denied.

Once done, you can start the service with:

c:\>sc start SphinxSearch

(or alternatively from the services screen: start 'services.msc' from the Windows Run dialog). If at some point you want to delete the service, use:

c:\>sc delete SphinxSearch

Let's create an adapter to fetch data from the database.
package it.gogs.sphinx.util;

import it.gogs.sphinx.AddressBoook;
import it.gogs.sphinx.exception.AddressBookBizException;
import it.gogs.sphinx.exception.AddressBookTechnicalException;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import org.apache.log4j.Logger;

/**
 * Adapter to fetch data from the database.
 *
 * @author Munish Gogna
 */
public class AddressBookAdapter {

    private static Logger logger = Logger.getLogger(AddressBookAdapter.class);

    private AddressBookAdapter() {
        // use in static way..
    }

    private static Connection getConnection() throws AddressBookTechnicalException {
        String userName = "root";
        String password = "root";
        String url = "jdbc:mysql://localhost/addressbook";
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            return DriverManager.getConnection(url, userName, password);
        } catch (Exception e) {
            throw new AddressBookTechnicalException("could not get connection");
        }
    }

    public static List getAddressBookList(List addressIds) throws AddressBookTechnicalException,
            AddressBookBizException {
        List addressBoookList = new ArrayList();
        if (addressIds == null || addressIds.size() == 0) {
            logger.error("AddressIds was null or empty, returning empty list");
            return addressBoookList;
        }
        Connection connection = null;
        CallableStatement callableStatement = null;
        try {
            connection = getConnection();
            callableStatement = connection.prepareCall("{ call search_address_book(?)}");
            callableStatement.setString(1, Utils.toCommaString(addressIds));
            callableStatement.execute();
            ResultSet resultSet = callableStatement.getResultSet();
            prepareResults(resultSet, addressBoookList);
            connection.close();
        } catch (SQLException e) {
            logger.error("Problem connecting MYSQL - " + e.getMessage());
            throw new AddressBookTechnicalException(e.getMessage());
        } catch (AddressBookTechnicalException e) {
            logger.error("Problem connecting MYSQL - " + e.getMessage());
            throw e;
        } finally {
            if (connection != null) {
                try {
                    connection.close();
                } catch (SQLException e) {
                    logger.error("Problem closing connection - " + e.getMessage());
                    e.printStackTrace();
                }
            }
        }
        return addressBoookList;
    }

    private static void prepareResults(ResultSet resultSet, List addressBoookList) throws SQLException {
        AddressBoook addressBoook;
        while (resultSet.next()) {
            addressBoook = new AddressBoook();
            addressBoook.setAlias(resultSet.getString("Alias"));
            addressBoook.setEmail(resultSet.getString("Email"));
            addressBoook.setfName(resultSet.getString("FName"));
            addressBoook.setlName(resultSet.getString("LName"));
            addressBoook.setOfficeLocation(resultSet.getString("Location"));
            addressBoook.setPhoneNo(resultSet.getString("PhoneNo"));
            addressBoook.setTitle(resultSet.getString("Title"));
            addressBoook.setDateOfJoining(resultSet.getDate("DOJ"));
            addressBoook.setId(resultSet.getLong("Id"));
            addressBoookList.add(addressBoook);
        }
    }
}

Next we create the SphinxInstance that will parse the keywords and date range and provide us a list of IDs that match the search.

package it.gogs.sphinx.util;

import it.gogs.sphinx.DateRange;
import it.gogs.sphinx.SearchCriteria;
import it.gogs.sphinx.api.SphinxClient;
import it.gogs.sphinx.api.SphinxException;
import it.gogs.sphinx.api.SphinxMatch;
import it.gogs.sphinx.api.SphinxResult;
import it.gogs.sphinx.exception.AddressBookBizException;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.apache.log4j.Logger;

/**
 * Instance that will parse our free text and provide the results.
 *
 * Note: Make sure that 'searchd' is up and running before you use this class.
 *
 * @author Munish Gogna
 */
public class SphinxInstance {

    private static String SPHINX_HOST = "localhost";
    private static String SPHINX_INDEX = "addressBookIndex";
    private static int SPHINX_PORT = 9312;

    private static SphinxClient sphinxClient;
    private static Logger logger = Logger.getLogger(SphinxInstance.class);

    static {
        sphinxClient = new SphinxClient(SPHINX_HOST, SPHINX_PORT);
    }

    public static List getAddressBookIds(SearchCriteria criteria) throws AddressBookBizException,
            SphinxException {
        List addressIdsList = new ArrayList();
        try {
            if (Utils.isNull(criteria)) {
                logger.error("criteria is null");
                throw new AddressBookBizException("criteria is null");
            }
            if (Utils.isNull(criteria.getKeywords())) {
                logger.error("keyword is a required field");
                throw new AddressBookBizException("keyword is a required field");
            }
            DateRange dateRange = criteria.getDateRage();
            if (!Utils.isNull(dateRange)) {
                if (Utils.isDateRangeValid(dateRange)) {
                    // this is to filter results based on joining dates if they are provided
                    sphinxClient.SetFilterRange("DOJ", getTimeInSeconds(dateRange.getFromDate()),
                            getTimeInSeconds(dateRange.getToDate()), false);
                } else {
                    logger.error("fromDate/toDate should not be empty and 'fromDate' should be less than equal to 'toDate'");
                    throw new AddressBookBizException(
                            "fromDate/toDate should not be empty and 'fromDate' should be less than equal to 'toDate'");
                }
            }
            sphinxClient.SetMatchMode(SphinxClient.SPH_MATCH_EXTENDED2);
            sphinxClient.SetSortMode(SphinxClient.SPH_SORT_RELEVANCE, "");
            SphinxResult result = sphinxClient.Query(buildSearchQuery(criteria), SPHINX_INDEX,
                    "building query for address book search");
            SphinxMatch[] matches = result.matches;
            for (SphinxMatch match : matches) {
                addressIdsList.add(String.valueOf(match.docId));
            }
        } catch (SphinxException e) {
            throw e;
        } catch (AddressBookBizException e) {
            throw e;
        }
        logger.info("Total record(s):" + addressIdsList.size());
        return addressIdsList;
    }

    private static long getTimeInSeconds(Date time) {
        return time.getTime() / 1000;
    }

    private static String buildSearchQuery(SearchCriteria criteria) throws AddressBookBizException {
        String keywords[] = criteria.getKeywords().split(" ");
        StringBuilder searchFor = new StringBuilder();
        for (String key : keywords) {
            if (!Utils.isEmpty(key)) {
                searchFor.append(key);
                if (searchFor.length() > 1) {
                    searchFor.append("*|*");
                }
            }
        }
        searchFor.delete(searchFor.lastIndexOf("|*"), searchFor.length());
        StringBuilder queryBuilder = new StringBuilder();
        String query = searchFor.toString();
        queryBuilder.append("@FName *" + query + " | ");
        queryBuilder.append("@LName *" + query + " | ");
        queryBuilder.append("@Title *" + query + " | ");
        queryBuilder.append("@Location *" + query + " | ");
        queryBuilder.append("@Alias *" + query + " | ");
        queryBuilder.append("@Email *" + query + " | ");
        queryBuilder.append("@PhoneNo *" + query);
        logger.info("Sphinx Query: " + queryBuilder.toString());
        return queryBuilder.toString();
    }
}

Here is the interface that I will expose to the outside world (in a future article I will expose this interface as a Web Service):

import it.gogs.sphinx.AddressBoook;
import it.gogs.sphinx.SearchCriteria;
import it.gogs.sphinx.api.SphinxException;
import it.gogs.sphinx.exception.AddressBookBizException;
import it.gogs.sphinx.exception.AddressBookTechnicalException;

import java.util.List;

/**
 * @author Munish Gogna
 */
public interface AddressBook {

    /**
     * Returns the list of AddressBook objects based on search criteria.
     *
     * @param criteria
     * @throws AddressBookTechnicalException
     * @throws AddressBookBizException
     * @throws SphinxException
     */
    public List getAddressBookList(SearchCriteria criteria) throws AddressBookTechnicalException,
            AddressBookBizException, SphinxException;
}

and here is the implementation class for the same.
package it.gogs.sphinx.addressbook.impl;

import java.util.List;

import it.gogs.sphinx.AddressBoook;
import it.gogs.sphinx.SearchCriteria;
import it.gogs.sphinx.addressbook.AddressBook;
import it.gogs.sphinx.api.SphinxException;
import it.gogs.sphinx.exception.AddressBookBizException;
import it.gogs.sphinx.exception.AddressBookTechnicalException;
import it.gogs.sphinx.util.AddressBookAdapter;
import it.gogs.sphinx.util.SphinxInstance;

/**
 * Implementation for our Address Book example
 *
 * @author Munish Gogna
 */
public class AddressBookImpl implements AddressBook {

    public List getAddressBookList(SearchCriteria criteria) throws AddressBookTechnicalException,
            AddressBookBizException, SphinxException {
        List addressIds = SphinxInstance.getAddressBookIds(criteria);
        return AddressBookAdapter.getAddressBookList(addressIds);
    }
}

OK, so far so good; let's run some tests now.

package it.gogs.sphinx.test;

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.List;

import it.gogs.sphinx.AddressBoook;
import it.gogs.sphinx.DateRange;
import it.gogs.sphinx.SearchCriteria;
import it.gogs.sphinx.addressbook.AddressBook;
import it.gogs.sphinx.addressbook.impl.AddressBookImpl;
import it.gogs.sphinx.api.SphinxException;
import it.gogs.sphinx.exception.AddressBookBizException;
import it.gogs.sphinx.exception.AddressBookTechnicalException;
import junit.framework.TestCase;

/**
 * @author Munish Gogna
 */
public class AddressBookTest extends TestCase {

    private AddressBook addressBook;

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        addressBook = new AddressBookImpl();
    }

    @Override
    protected void tearDown() throws Exception {
        super.tearDown();
    }

    /** this should be a unique record for Himanshu */
    public void test_search_for_himanshu() throws Exception {
        SearchCriteria criteria = new SearchCriteria();
        // remember the first 'search' example??
        criteria.setKeywords("u4732");
        List addressList = addressBook.getAddressBookList(criteria);
        assertTrue(addressList.size() == 1);
        assertTrue("expecting himanshu here", "himanshu".equals(addressList.get(0).getfName()));
    }

    /** only two employees have name gurleen or toshi */
    public void test_search_for_gurleen_or_toshi() throws Exception {
        SearchCriteria criteria = new SearchCriteria();
        // remember the second 'search' example??
        criteria.setKeywords("gurleen toshi");
        List addressList = addressBook.getAddressBookList(criteria);
        assertTrue(addressList.size() == 2);
        assertTrue("expecting toshi here", "toshi".equals(addressList.get(0).getfName()));
        assertTrue("expecting gurleen here", "gurleen".equals(addressList.get(1).getfName()));
    }

    /** there are 16 people from jammu location */
    public void test_search_for_people_from_jammu_location() throws Exception {
        SearchCriteria criteria = new SearchCriteria();
        criteria.setKeywords("jammu");
        List addressList = addressBook.getAddressBookList(criteria);
        assertTrue(addressList.size() == 16);
    }

    /** only Aalap, Manda and nitika are having title as Mrs and joined in 2010 */
    public void test_joined_in_2010_with_title_Mrs() throws Exception {
        DateRange dateRange = new DateRange();

        GregorianCalendar calendar1 = new GregorianCalendar();
        calendar1.set(Calendar.YEAR, 2010);
        calendar1.set(Calendar.MONTH, Calendar.JANUARY);
        calendar1.set(Calendar.DAY_OF_MONTH, 1);
        dateRange.setFromDate(calendar1.getTime());

        GregorianCalendar calendar2 = new GregorianCalendar();
        calendar2.set(Calendar.YEAR, 2010);
        calendar2.set(Calendar.MONTH, Calendar.DECEMBER);
        calendar2.set(Calendar.DAY_OF_MONTH, 31);
        dateRange.setToDate(calendar2.getTime());

        SearchCriteria criteria = new SearchCriteria();
        criteria.setKeywords("Mrs");
        criteria.setDateRage(dateRange);

        List addressList = addressBook.getAddressBookList(criteria);
        assertTrue("expecting 3 records here", addressList.size() == 3);
    }

    /** should get a business exception here */
    public void test_without_specifying_keywords() {
        SearchCriteria criteria = new SearchCriteria();
        // criteria.setKeywords("Mrs");
        try {
            addressBook.getAddressBookList(criteria);
        } catch (Exception e) {
            assertTrue(e instanceof AddressBookBizException);
            assertTrue(e.getMessage().indexOf("keyword is a required field") > -1);
        }
    }
}

How do we update the index once the database changes?

For these kinds of requirements, we can set up two sources and two indexes: one "main" index for the data which changes rarely (if ever), and one "delta" for the new documents. The initial data goes into the "main" index, while newly inserted address book entries go into the "delta". The delta index can then be reindexed very frequently, and the documents can be made available to search in a matter of minutes.

One more thing to take from this article: once the 'searchd' daemon is running, we can't index the data in the normal way; we have to use the --rotate option in such cases. For applications where there is a timely batch update of the data, we can configure a cron job to reindex our documents in Sphinx as shown below:

C:\devel\sphinx-0.9.9-win32\bin>indexer.exe --all --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --rotate

Capsule

We asked Sphinx to provide us the document IDs corresponding to our search parameters, and then we used those IDs to fire the database query. In case the data we want to return is included in the index (the DOJ attribute, for example, in our case), we can skip the database portion, so choose wisely how much information (attributes) you want to include when you index your SQL data.

Well, that's all... it's time to say goodbye. Take good care of your health and don't forget to vote, it's a must :)

- Munish Gogna
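As a sketch only, the main + delta scheme described above typically looks like the following in the configuration file. Note that the sph_counter helper table and the delta source/index names here are illustrative assumptions, not part of the original setup; the source/index inheritance syntax (child : parent) and sql_query_pre are standard Sphinx 0.9.9 features.

```
# illustrative helper table that remembers the last id indexed by "main":
# CREATE TABLE sph_counter (counter_id INT PRIMARY KEY, max_doc_id INT);

source addressBookSource
{
	# ... connection settings as before ...
	sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(Id) FROM AddressBook
	sql_query     = SELECT ... FROM AddressBook \
	                WHERE Id <= (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}

source addressBookDeltaSource : addressBookSource
{
	# clear the inherited pre-query; index only rows added since the last "main" run
	sql_query_pre =
	sql_query     = SELECT ... FROM AddressBook \
	                WHERE Id > (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}

index addressBookDeltaIndex : addressBookIndex
{
	source = addressBookDeltaSource
	path   = C:\devel\sphinx-0.9.9-win32\data\addressBookDeltaIndex
}
```

The delta index can then be rebuilt frequently while 'searchd' keeps running, for example: indexer.exe --config C:\devel\sphinx-0.9.9-win32\addressbook.conf --rotate addressBookDeltaIndex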
December 7, 2010
by Munish Gogna
· 37,603 Views · 2 Likes
Practical PHP Testing Patterns: Assertion Message
In the previous article of this series, we saw how assertions are the key to self-validating tests. Even if an assertion's result is only a pass/fail value, programmers and testers find it very handy to attach to the fail value some kind of additional information about the failure, to allow easier debugging without inserting var_dump() and similar statements into the code. As I showed you in the previous article's code sample, assertions are most useful when they fail, and in case they do, they should explain what happened so that a correction can be made quickly. An assertion that passes does not do anything; an assertion that fails shows an error.

What happens when tests fail

The typical scenario is that of a large test suite, with hundreds or thousands of tests. At the time of a commit to the source code repository (or of a push, if we're talking about git), the whole test suite must be green. Even if only one test is red, the test suite is regarded as red. Sometimes a regression is introduced by a programming error or an unforeseen scenario, and the output of the test suite becomes something like this:

[12:30:02][giorgio@Desmond:~/txt/articles]$ phpunit assertionmessage.php
PHPUnit 3.5.5 by Sebastian Bergmann.

FFFF

Time: 0 seconds, Memory: 3.75Mb

There were 4 failures:

1) AssertionMessagesTest::testAsserTrue
The object is not an Iterator.
Failed asserting that is true.
/home/giorgio/Dropbox/txt/articles/assertionmessage.php:9

2) AssertionMessagesTest::testAssertEquals
The square root of 17 is not 4 but 4.1231056256177.
Failed asserting that matches expected .
/home/giorgio/Dropbox/txt/articles/assertionmessage.php:17

3) AssertionMessagesTest::testAssertContainsFailsWithCustomMessage
The array does not contain 4.
Failed asserting that an array contains .
/home/giorgio/Dropbox/txt/articles/assertionmessage.php:24

4) AssertionMessagesTest::testAssertContains
Failed asserting that an array contains .
/home/giorgio/Dropbox/txt/articles/assertionmessage.php:36

FAILURES!
Tests: 4, Assertions: 4, Failures: 4.

The green tests are not noisy (which would be a smell), since they are not interesting at the moment. The red tests, which our focus is on, are shown with all the associated information: along with the test name, the line of interest, and the kind of problem (failure/error), the assertion message is shown.

How to write a meaningful message

An assertion message can be passed as an argument of the assertion method, to be used in case of failure. Writing good assertion messages is an art, but before diving into some code to show you some messages I wrote, we can see how assertion messages are categorized in literature:

Assertion-Identifying Message: includes information on which of the assertions in the same method caused the failure, for example the names of the checked variables. It's good to understand at a glance which assertion failed, but PHPUnit shows you the line of the test method that caused the failure, so it's not strictly necessary to include this kind of message.

Expectation-Describing Message: tells us what should have happened, but did not, according to the data passed to the assertion method.

Argument-Describing Message: when using assertions which are not very smart, like assertFalse() or assertTrue(), a failure would by default only tell us something like "False was passed to assertTrue()". Adding an Argument-Describing Message will instead tell us "The predicate on the set containing all red cars was false instead of true".

Watch the test fail

I can never say it enough: when you write your tests, watch them fail (this also means that each test should be written in advance with regard to the code it exercises, following the discipline of Test-Driven Development). This practice is not only useful for avoiding false positives, which are rare anyway, but also shows, at least once, the assertion message.
You can check that when the test fails, the error message is meaningful and the test does not cause a Fatal Error. Watch your tests fail to ensure that when they fail due to a regression in the future, you will be pleased by how quickly you learn what's wrong.

Example

The code sample shows how to use PHPUnit's assertion methods with the $message optional argument. Any time you use basic methods like assertTrue(), a message is mandatory. The need for $message is inversely proportional to the specificity of the assertion: for example, when assertContains() fails, PHPUnit already has valuable information in the method itself to produce a readable error message. Note that the usage of "" (double quotes) enables fast composition of error messages via variable substitution.

class AssertionMessagesTest extends PHPUnit_Framework_TestCase
{
    public function testAsserTrue()
    {
        $object = new stdClass;
        $this->assertTrue($object instanceof Iterator, 'The object is not an Iterator.');
    }

    public function testAssertEquals()
    {
        $expected = 4;
        $square = 17;
        $result = sqrt($square);
        $this->assertEquals($expected, $result, "The square root of $square is not $expected but $result.");
    }

    public function testAssertContainsFailsWithCustomMessage()
    {
        $array = array(1, 2, 3);
        $testElement = 4;
        $this->assertContains($testElement, $array, "The array does not contain $testElement.");
    }

    /**
     * This Assertion Message would be enough. It will even be better in some cases,
     * since PHPUnit would display nicely $testElement even if it was not "castable"
     * to a string (an object or array).
     */
    public function testAssertContains()
    {
        $array = array(1, 2, 3);
        $testElement = 4;
        $this->assertContains($testElement, $array);
    }
}
December 6, 2010
by Giorgio Sironi
· 6,576 Views
Microchip's Embedded Software Development on the NetBeans Platform
Vince Sheard (pictured right) is the Manager of the MPLAB® Integrated Development Environment (IDE) team at Microchip Technology Inc. He has been working at Microchip for more than 10 years, and was the lead architect for the MPLAB X IDE version 1.0, prior to stepping into the management role. This, the port of MPLAB to the NetBeans Platform, is the sixth major architecture change of the MPLAB IDE since its inception in 1992.

Microchip Technology Inc., headquartered in Chandler, Arizona, was spun off from General Instrument in 1989. The company went public in 1993, and is a leading provider of microcontroller, analog, and Flash-IP solutions, providing low-risk product development, lower total system cost and faster time to market for thousands of diverse customer applications worldwide. Microchip offers outstanding technical support along with dependable delivery and quality. For more information, visit the Microchip website at http://www.microchip.com. (The MPLAB X landing page is http://www.microchip.com/mplabx, which will be available in the next few days.)

Hi, Vince. What's MPLAB, in a nutshell?

MPLAB® is an integrated development environment, or IDE. It is similar to other IDEs, but there are two important differences. The first difference is that the MPLAB IDE is our customers' window into Microchip's PIC® microcontrollers embedded in their designs. Many people believe they understand their PC, because it is "right here." An embedded device is more difficult to get a handle on. It's the brain of a product "over there." It's not a computer, it's a thing. The MPLAB IDE gives embedded developers an opportunity to dig into the brain of that thing. The second important difference is that the MPLAB IDE seamlessly covers Microchip's entire portfolio of more than 700 8-bit, 16-bit and 32-bit PIC microcontrollers.
The differences between these devices are massive, from a tiny 6-pin, 8-bit microcontroller that could fit under your fingernail, to a huge 32-bit microcontroller that is much more powerful than the iconic IBM mainframe of last century. The MPLAB IDE provides a consistent and supportive environment in which to debug our customers' original, creative works of software that differentiate their products.

What are the MPLAB IDE's main features and how does it distinguish itself from its competitors?

Integrated development environments share many features: project creation and management, programmers' editor, language tool integration and build tools, image preparation and programming, and debug facilities. These are the MPLAB X IDE's main features, too. A large difference comes in the presentation of an embedded target, rather than a PC target, which presents a developer with a less coupled and less easily controlled object for their development. That tightens the focus, but there are other IDEs that support embedded development. The MPLAB X IDE is distinguished by its seamless and timely support of Microchip devices, the vast ecosystem of tightly integrated compilers and hardware tools that also support those devices, and the evolutionary grace of a tool that has grown with our customers and technologies until the three form a smooth and supportive system for developing innovative embedded products.

What are the typical technical challenges of an application of this kind?

The principal challenge we face is how to provide the facilities developers need in the most intuitive and useful way possible. It's easy to provide tons and tons of features, but with the GUI technologies of today, one can only present a small fraction of what is provided and available for use. One of the gauges for how well we've done our job is when our customers present a fine point of usage that we've already discussed and disagreed on how to implement.
Those "complaints" are really wonderful because they come from expert tool users who understand the tools they are using. Of course, we are always hustling to present new Microchip devices in the same light that we present established devices. We’re also challenged to exploit the advanced debug facilities of new devices in a way that is most beneficial for our customers. What's the architecture of the application and why did you make the choices that you did? We’re moving away from a Windows OS-only, COM-based architecture. As our customers’ development sophistication has grown, so have their needs. Our customers now require Linux, Mac, and Windows support. Microchip is a worldwide company with customers who may only be fluent in a single language, so that impacts our choices. We also have a number of educational institutions who place interesting demands on the IDE, along with a number of advanced customers who are very forward-looking and are really pushing the envelope of what was previously thought possible. The most telling choice we had to make, though, was the choice of NetBeans as a fully capable, modern, lightweight and flexible platform for our next-generation MPLAB X IDE: Where does the NetBeans Platform fit into all of this? NetBeans is unique among current open-source IDE platform offerings, in that it is the most advanced for addressing our primary challenge, which I described above. The NetBeans IDE presents standard operations in a way that really minimizes hunting around and wasting time, to find out how to accomplish what you need to do. Take, for instance, the classic edit, compile, debug “cycle.” In NetBeans, like in other IDEs, the edit part is pretty easy: open the source file in the programmers’ editor and make the changes you need. It’s really not much different than editing a document with a word processor. The next step is where the difference really shows. 
In some IDEs, you have to compile or build an image and then figure out how to load it and start a debugging session. In some IDEs, that's just brutal to figure out, especially in embedded systems where, as we said, the target is not "right here," it's "over there." In NetBeans, it's a single button press (Debug Run), even in an embedded context like ours. After that, the rest of the steps are taken care of. If any errors occur during the sequence, we capture them and place the user in context for an easy solution and the ability to move on. That's just one example. NetBeans is way ahead of the curve compared to other menu/toolbar/property sheet IDEs.

What are the 3 main benefits of the NetBeans Platform, in this context?

First and foremost is the ability to really optimize developers' time. That's a major factor in our customers' focus. Second is the fact that NetBeans is a modern, lightweight and fully capable IDE platform. It doesn't suffer from the bloat and outdated aspects of some other IDE platforms. Included with these benefits are the abilities to localize the IDE and to utilize it on multiple operating systems. Last, but not least, is the huge benefit of a professionally executed development effort, driven by a focused organizing committee and maintained by a tight-knit development community, all headed by the Oracle contributions. That's a significant departure from some other high-profile, open-source platforms that are available.

How did you end up choosing the NetBeans Platform over its alternatives?

When you consider all of the benefits I just mentioned, I think it's pretty much a no-brainer.

How did you get started with it?

We first discussed the concepts with Sun Microsystems, as we wanted to provide a benefit to the overall NetBeans IDE from an embedded side. We created a proof of concept by plugging in our existing debuggers to the NetBeans IDE on a Windows operating system.
We were fortunate to find someone who previously worked within Sun Microsystems on the NetBeans Platform, who came in on contract and gave us some one-on-one training. This was mainly because we had an aggressive schedule and wanted to be able to get up to speed faster.

Any tips and tricks for others going down the same road?

The mailing lists are very helpful. We found many things we wanted to do being asked by others on the mailing lists. Many of the developers monitor these lists frequently, and respond more often than not in a timely fashion. The documentation suite, videos, written documents and interaction with the community are all extremely helpful in getting to the root of any matter quickly. The IDE is well organized and uses common techniques within the code base, which quickly become familiar. Other platforms treat information as job security. Some people may know, but they want you to pay for the information, in one way or another. Not so with NetBeans; everyone involved is completely open and collaborative.

Any specific things that surprised you about the NetBeans Platform (in a good or bad way)?

NetBeans prevents development with any form of interdependency between modules. For instance, you cannot have A->B and B->A. This really is a good thing, and ensures that you break your modules up more to create a common module. The way JUnit is integrated into the IDE makes it seamless to create your unit tests while developing the code. Source-code revision control worked quite smoothly from within the IDE. A hindrance for us is that there is no way to view a block of memory in hex form when running under Java (using the JVM debugger). This is something often required when doing embedded development. Since we are developing code for our debug tools, we wanted to see the code blocks being transported from the IDE to the tool in hex, but could not.

Anything else you'd like to share?

It is great how fully featured the editor is.
We are often so deep in developing what is required for the embedded side that we still discover things the editor does that we hadn’t come across. We are using the NetBeans IDE to create our own IDE, which is a completely new, yet compatible, incarnation of our IDE. The NetBeans developers at Oracle/Sun are extremely open to assist in getting value added to the current IDE, which makes our job even easier.
December 3, 2010
by Geertjan Wielenga
· 20,591 Views · 10 Likes
A Closer Look at JUnit Categories
JUnit 4.8 introduced Categories: a mechanism to label and group tests, giving developers the option to include or exclude groups (or categories). This post presents a brief overview of JUnit categories and some unexpected behavior I have found while using them.

1. Quick Introduction

The following example shows how to use categories (adapted from JUnit’s release notes):

1  public interface FastTests { /* category marker */ }
2  public interface SlowTests { /* category marker */ }
3
4  public class A {
5    @Category(SlowTests.class)
6    @Test public void a() {}
7  }
8
9  @Category(FastTests.class)
10 public class B {
11   @Test public void b() {}
12 }
13
14 @RunWith(Categories.class)
15 @IncludeCategory(SlowTests.class)
16 @ExcludeCategory(FastTests.class)
17 @SuiteClasses({ A.class, B.class })
18 public class SlowTestSuite {}

Lines 1, 2: we define two categories, FastTests and SlowTests. JUnit categories can be defined as classes or interfaces. Since a category acts like a label or marker, my intuition tells me to use interfaces.
Line 5: we use the annotation @org.junit.experimental.categories.Category to label test classes and test methods with one or more categories.
Lines 6, 9: test methods and test classes can be marked as belonging to one or more categories of tests. Labeling a test class with a category automatically includes all its test methods in that category.
Lines 14 to 18: currently, programmatic test suites (line 17) are the only way to specify which test categories (line 14) should be included (line 15) or excluded (line 16) when the suite is executed.

I find this approach (especially the way test classes need to be included in the suite) too verbose and not very flexible. Hopefully Ant, Maven and IDEs will provide support for categories (with a simpler configuration) in the very near future. Note: I recently discovered ClasspathSuite, a project that simplifies the creation of programmatic JUnit test suites. 
For example, we can specify that we want to include in a test suite all tests whose names end with “UnitTest.”

2. Category Subtyping

Categories also support subtyping. Let’s say we have the category IntegrationTests that extends SlowTests:

public interface IntegrationTests extends SlowTests {}

Any test class or test method labeled with the category IntegrationTests is also part of the category SlowTests. To be honest, I don’t know how handy category subtyping could be. I’ll need to experiment with it more to have an opinion.

3. Categories and Test Inheritance

3a. Method-level Categories

JUnit behaves as expected when test inheritance is combined with method-level categories. For example:

public class D {
  @Category(GuiTest.class)
  @Test public void d() {}
}

public class E extends D {
  @Category(GuiTest.class)
  @Test public void e() {}
}

@RunWith(Categories.class)
@IncludeCategory(GuiTest.class)
@SuiteClasses(E.class)
public class TestSuite {}

As I expected, when running TestSuite, test methods d and e are executed (both methods belong to the GuiTest category and E inherits method d from superclass D). Nice!

3b. Class-level Categories

On the other hand, unless I’m missing something, I think I found some strange behavior in JUnit in this scenario. Consider the following classes:

@Category(GuiTest.class)
public class A {
  @Test public void a() {}
}

public class B extends A {
  @Test public void b() {}
}

@RunWith(Categories.class)
@IncludeCategory(GuiTest.class)
@SuiteClasses(B.class)
public class TestSuite {}

As we can see, TestSuite should execute the tests in B that belong to the category GuiTest. I was expecting TestSuite to execute test method a, even though B is not marked as a GuiTest. Here is my reasoning:

test method a belongs to the category GuiTest, because test class A is labeled with that category
test class B is an A, and it inherits test method a

Therefore, TestSuite should execute test method a. But it doesn’t! 
Here is a screenshot of the results I get. There are two ways to fix this issue, depending on which test methods we actually want to run:

Label class B with GuiTest. In this case, both methods, a and b, will be executed.
Label method a with GuiTest. In this case, only method a will be executed.

(I’ll be posting a question regarding this issue on the JUnit mailing list shortly.)

4. Categories vs. TestNG Groups

(You saw this one coming, didn’t you?) Categories (or groups) have been part of TestNG for a long time. Unlike JUnit’s, TestNG’s groups are defined as simple strings, not as classes or interfaces. As a static typing lover, I was pretty happy with JUnit categories. By using an IDE, we could safely rename a category or look for usages of a category within a project. Even though my observation was correct, I was missing one important point: all this works great as long as your test suite is written in Java. In the real world, I’d like to define a test suite in either Ant or Maven (or Gradle, or Rake). In this scenario, having categories as Java types does not bring any benefit. In fact, I suspect it would be very verbose and error-prone to specify the fully-qualified name of a category in a build script. Renaming a category would now be limited to a text-based “search and replace.” Ant and Maven really need to provide a way to specify JUnit categories that is clever enough to be fool-proof. As you may expect, I prefer the simplicity and pragmatism of TestNG’s groups. Update: my good friend (and creator of the TestNG framework) Cédric reminded me that we can use regular expressions to include or exclude groups in a test suite (details here). This is really powerful!

5. My Usage of Categories

I’m not using JUnit categories in my test suites yet. I started to look into JUnit categories because I wasn’t completely happy with the way we recognized GUI tests in FEST. 
We recognize test methods or test classes as “GUI tests” if they have been annotated with the @GUITest annotation (provided by FEST). When a “GUI test” fails, FEST automatically takes a screenshot of the desktop and includes it in the JUnit or TestNG HTML report. The problem is, our @GUITest annotation duplicates the functionality of JUnit categories. To solve this issue, I created a JUnit extension that recognizes test methods or test classes as “GUI tests” if they belong to the GuiTest category. At the moment GuiTest is an interface provided by FEST, but I’m thinking about letting users specify their own GuiTest category as well. I also refactored this functionality out of the Swing testing module, expecting to reuse it once I implement a JavaFX testing module :) You can find the FEST code that deals with JUnit categories at github.

6. Conclusion

Having the ability to label and group tests via categories is really a great feature. I still have some reservations about the practicality of defining categories as Java types, the lack of support for this feature from Ant and Maven (not JUnit’s fault), and the unexpected behavior I noticed when combining class-level categories and test inheritance. On the brighter side, categories are still an experimental, non-final feature. I’m sure we’ll see many improvements in future JUnit releases :) Feedback is always welcome. From http://alexruiz.developerblogs.com/?p=1711
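As a footnote to the category-subtyping discussion in section 2 above: JUnit's matching comes down to plain Java type assignability, which can be sketched without JUnit at all. The interface names are taken from the article; the matches helper is illustrative, not JUnit API:

```java
// Category markers, as in the article: IntegrationTests is also a SlowTests.
interface SlowTests {}
interface IntegrationTests extends SlowTests {}

public class CategorySubtypeSketch {

    // Illustrative stand-in for the runner's check: a test labeled with
    // testCategory matches @IncludeCategory(includedCategory) when the label
    // is the included category or a subtype of it.
    static boolean matches(Class<?> testCategory, Class<?> includedCategory) {
        return includedCategory.isAssignableFrom(testCategory);
    }

    public static void main(String[] args) {
        System.out.println(matches(IntegrationTests.class, SlowTests.class)); // true
        System.out.println(matches(SlowTests.class, IntegrationTests.class)); // false
    }
}
```

This is why labeling a test with IntegrationTests makes it show up in a suite that includes SlowTests, but not the other way around.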
December 2, 2010
by Alex Ruiz
· 35,533 Views
Creating and Deploying JAX-WS Web Service on Tomcat 6
Some years back, I had to provide a wrapper around an EJB 3.0 remote service to come up with a simple web service project that would be deployed on Tomcat and accessed over plain HTTP, due to some accessibility issues. As I cannot reveal the actual requirement I implemented at the time, I am presenting a simple demo service with the following signature:

public AccountDetails getAccountDetails(String accountNo, SecurityToken token);

The service will return the account details of a particular account number, provided the token is valid (generated using some security module of the application). In a nutshell, the client will ask for a token from the security module and then invoke this method. The service will validate the token to see whether the caller can invoke the method or not. In general you should use handlers (message interceptors that can be easily plugged in to the JAX-WS runtime to do additional processing of the inbound and outbound messages) to validate such things, freeing the implementation class from that overhead; since this is just an example exercise, our implementation class will check the security token itself. Sounds good? Let's move ahead. The libraries we are going to use include JAXB and JAX-WS; as both of them have sensible defaults, the number of annotations can be kept to a minimum. Also, in my opinion, it is always best to develop the WSDL and schemas by hand, to ensure that the service contract is appropriately defined and that the schema can be re-used (by other services) and extended if necessary. I do not prefer using annotations to automatically produce the WSDL and schema at runtime, so let's start with the definition. I am using an inline schema as we have a very simple requirement here, though normally we should have schema definitions in separate XSD files. 
accounts.wsdl: I am not going to explain the ins and outs of this WSDL; in a nutshell, it defines one operation, 'getAccountDetails', which takes an accountNo and returns the account details like account type, balance, etc. Please note that I have also added a security header token that will validate the caller (left to the implementation of the service). Now that we are done with our WSDL and schema, let's generate the portable artifacts from our service definition. JAX-WS includes a tool that can do this for us; we will use this tool to generate portable artifacts like the Service Endpoint Interface (SEI), Service and Exception classes. These artifacts can be packaged in a WAR file with the WSDL and schema documents, along with the endpoint implementation to be deployed. To generate the artifacts, we run the following command:

wsimport C:\devel\workspace\webservice\WebContent\WEB-INF\wsdl\accounts.wsdl -p com.mg.ws -keep -Xnocompile

This will create the following artifacts in the com.mg.ws package:

AccountDetails.java
AccountDetailsFault.java
AccountDetailsFault_Exception.java
AccountDetailsPortType.java
AccountDetailsTO.java
GeAccountDetailsINTO.java
ObjectFactory.java
package-info.java
SecurityToken.java

Now that we have all the artifacts, we are ready to implement our service. The interface we need to implement is AccountDetailsPortType, so let's do it. 
Here is our dummy implementation class:

package com.mg.ws.impl;

import java.math.BigDecimal;
import javax.jws.WebService;
import com.mg.ws.AccountDetailsFault;
import com.mg.ws.AccountDetailsFault_Exception;
import com.mg.ws.AccountDetailsPortType;
import com.mg.ws.AccountDetailsTO;
import com.mg.ws.GeAccountDetailsINTO;
import com.mg.ws.SecurityToken;

@WebService(name = "AccountDetailsService",
    portName = "AccountDetailsPort",
    endpointInterface = "com.mg.ws.AccountDetailsPortType",
    wsdlLocation = "WEB-INF/wsdl/accounts.wsdl",
    targetNamespace = "http://gognamunish.com/accounts")
public class AccountDetailsServiceImpl implements AccountDetailsPortType {

    public AccountDetailsTO getAccountDetails(GeAccountDetailsINTO parameters,
            SecurityToken requestHeader) throws AccountDetailsFault_Exception {
        AccountDetailsTO detailsTO = new AccountDetailsTO();
        // validate token
        validateToken(requestHeader);
        // populate response
        detailsTO = getDetailsFromSomewhere(parameters.getAccountNo());
        return detailsTO;
    }

    private AccountDetailsTO getDetailsFromSomewhere(String accountNo)
            throws AccountDetailsFault_Exception {
        if (accountNo == null || accountNo.trim().length() == 0) {
            AccountDetailsFault faultInfo = new AccountDetailsFault();
            faultInfo.setFaultInfo("missing account number");
            faultInfo.setMessage("account number is required field");
            throw new AccountDetailsFault_Exception("account no missing", faultInfo);
        }
        AccountDetailsTO detailsTO = new AccountDetailsTO();
        detailsTO.setAccNo(accountNo);
        detailsTO.setAccType("SAVING");
        detailsTO.setBalance(new BigDecimal(10000));
        return detailsTO;
    }

    private void validateToken(SecurityToken requestHeader)
            throws AccountDetailsFault_Exception {
        if ("83711070".equals(requestHeader.getToken())
                && requestHeader.getValidTill() != null) {
            System.out.println("token processed successfully...");
        } else {
            AccountDetailsFault faultInfo = new AccountDetailsFault();
            faultInfo.setFaultInfo("Header token Invalid");
            faultInfo.setMessage("can't help");
            throw new 
AccountDetailsFault_Exception("invalid token", faultInfo);
        }
    }
}

This is just a dummy implementation, for illustration purposes only. Now that our service implementation is done, we proceed to package it in a WAR and deploy it on Tomcat. First we create a standard web.xml, which defines the WSServletContextListener, the WSServlet and the structure of the web project:

<listener>
  <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
</listener>
<servlet>
  <servlet-name>AccountDetailsService</servlet-name>
  <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>AccountDetailsService</servlet-name>
  <url-pattern>/details</url-pattern>
</servlet-mapping>

Next we create a sun-jaxws.xml, which defines the web service implementation class. This file is required regardless of whether we publish our web service on Tomcat, GlassFish or any other server. OK, so far so good; let's build our application and deploy it on Tomcat, here is the ant script. NOTE: the jars in lib should be carefully chosen, otherwise it will make your life hell. Now let's test the application: point your browser to http://localhost:8080/account/details , and if you see something like this, it means you have successfully deployed the service. Now let's test the service. We can use the client generated by the wsimport tool:

public static void main(String[] args) throws Exception {
    AccountDetails accountDetails = new AccountDetails();
    AccountDetailsPortType port = accountDetails.getAccountDetailsPort();
    AccountDetailsTO details = port.getAccountDetails(new GeAccountDetailsINTO(), new SecurityToken());
}

For those who want to invoke the service using SOAP directly, a tool like soapUI (which can be downloaded from www.soapui.org) will do. Let's make some SOAP calls now: case: invalid security header case: valid security header. By default, Tomcat does not come with any JAX-WS dependencies, so you have to include them manually. Go here Download JAX-WS RI distribution, you will find the wsimport tool in its lib directory. 
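For reference, the sun-jaxws.xml mentioned above would look roughly like the following sketch; the implementation class and URL pattern come from the article, while the endpoint name is an assumption:

```xml
<endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime"
           version="2.0">
  <!-- endpoint name is illustrative; class and url-pattern are from the article -->
  <endpoint name="AccountDetailsService"
            implementation="com.mg.ws.impl.AccountDetailsServiceImpl"
            url-pattern="/details"/>
</endpoints>
```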
Please note that for running this project you can just skip this step, as I have already done it for you and included the required libraries in the lib folder of the project. I have included an Eclipse project for this demo; please get it from the resources section. To deploy, just change build.properties to point to TOMCAT_HOME and run the ant build. That's all for now... Please provide your valuable comments or any suggestions, and DON'T forget to vote :) It's Munish Gogna signing off now :)
December 1, 2010
by Munish Gogna
· 68,811 Views · 1 Like
Setting mouse cursor position with WinAPI
Setting the mouse cursor position on a Windows machine with the help of the .NET Framework shouldn't be that big of a problem. After all, there is the built-in Cursor class that lets you do that by executing a simple line of code:

Cursor.Position = new System.Drawing.Point(0, 0);

Of course, here 0 and 0 are the absolute coordinates for the mouse cursor on the screen. One thing to mention about this type of position setting is that Cursor requires a reference to System.Windows.Forms, and in some cases you don't want this extra reference. If that's the case, WinAPI is your solution. It requires some more work compared to the regular .NET way (class instance -> method call), but in the end you get more control than you would expect. When using WinAPI to set the cursor position, there are two ways you can go:

mouse_event
SendInput

mouse_event is the very basic function that is only able to set the mouse coordinates. It was superseded, and Microsoft recommends using SendInput instead. Nonetheless, it still works (although I cannot say for sure whether it will keep working in future releases of Windows). 
So to start, I have a very basic class:

class WINAPI_SUPERSEDED
{
    [DllImport("user32.dll", SetLastError = true)]
    public static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, int dwExtraInfo);

    public enum MouseFlags
    {
        MOUSEEVENTF_ABSOLUTE = 0x8000,
        MOUSEEVENTF_LEFTDOWN = 0x0002,
        MOUSEEVENTF_LEFTUP = 0x0004,
        MOUSEEVENTF_MIDDLEDOWN = 0x0020,
        MOUSEEVENTF_MIDDLEUP = 0x0040,
        MOUSEEVENTF_MOVE = 0x0001,
        MOUSEEVENTF_RIGHTDOWN = 0x0008,
        MOUSEEVENTF_RIGHTUP = 0x0010,
        MOUSEEVENTF_WHEEL = 0x0800,
        MOUSEEVENTF_XDOWN = 0x0080,
        MOUSEEVENTF_XUP = 0x0100
    }

    public enum DataFlags
    {
        XBUTTON1 = 0x0001,
        XBUTTON2 = 0x0002
    }
}

So to set the cursor position to 0,0 I would use this:

WINAPI_SUPERSEDED.mouse_event(
    (int)WINAPI_SUPERSEDED.MouseFlags.MOUSEEVENTF_MOVE | (int)WINAPI_SUPERSEDED.MouseFlags.MOUSEEVENTF_ABSOLUTE,
    0, 0, 0, 0);

If you go through the documentation, you will notice that dx and dy are in fact in no way direct coordinates, but rather normalized values ranging between 0 and 65,535 - and that is only when the MOUSEEVENTF_ABSOLUTE flag is present. Otherwise, the position will be adjusted relative to the current mouse cursor position. This method doesn't return any value, so I cannot be informed whether it was successful or not, and GetLastError won't give much information either. In this case, SendInput comes to the rescue. 
Here is what I have defined as the main class:

class WINAPI
{
    [DllImport("kernel32.dll")]
    public static extern uint GetLastError();

    public enum MouseData
    {
        XBUTTON1 = 0x0001,
        XBUTTON2 = 0x0002
    }

    public enum MouseFlags
    {
        MOUSEEVENTF_ABSOLUTE = 0x8000,
        MOUSEEVENTF_HWHEEL = 0x01000,
        MOUSEEVENTF_MOVE = 0x0001,
        MOUSEEVENTF_MOVE_NOCOALESCE = 0x2000,
        MOUSEEVENTF_LEFTDOWN = 0x0002,
        MOUSEEVENTF_LEFTUP = 0x0004,
        MOUSEEVENTF_RIGHTDOWN = 0x0008,
        MOUSEEVENTF_RIGHTUP = 0x0010,
        MOUSEEVENTF_MIDDLEDOWN = 0x0020,
        MOUSEEVENTF_MIDDLEUP = 0x0040,
        MOUSEEVENTF_VIRTUALDESK = 0x4000,
        MOUSEEVENTF_WHEEL = 0x0800,
        MOUSEEVENTF_XDOWN = 0x0080,
        MOUSEEVENTF_XUP = 0x0100
    }

    [DllImport("user32.dll", SetLastError = true)]
    public static extern uint SendInput(uint nInputs, ref INPUT pInputs, int cbSize);

    [StructLayout(LayoutKind.Explicit)]
    public struct INPUT
    {
        [FieldOffset(0)]
        public int type;
        [FieldOffset(4)]
        public MOUSEINPUT mi;
    }

    public struct MOUSEINPUT
    {
        public int dx;
        public int dy;
        public int mouseData;
        public int dwFlags;
        public int time;
        public int extraInfo;
    }
}

This class is a bit more complicated, but at the same time you have to understand that in some cases SendInput is used for hardware and keyboard input as well; for experimentation purposes I removed those parts from the sample class. Here you have the same MouseFlags enum that will let you pass custom flags defining the mouse behavior. Notice that SendInput has SetLastError set to true, therefore if something goes wrong in this method, the error can easily be obtained via GetLastError, which is implemented as a helper method in the same class. VERY IMPORTANT: When you define the INPUT struct, make sure you use LayoutKind.Explicit, since when passed to an unmanaged call a specific field layout is required - as you can see, every field is decorated with a FieldOffset attribute. Also, talking about StructLayout, you don't have to set StructLayout.Sequential on MOUSEINPUT since it is set automatically by the CLR. 
When I want to call the method above, I can simply use this snippet:

WINAPI.MOUSEINPUT mouseInput = new WINAPI.MOUSEINPUT();
mouseInput.dx = 100;
mouseInput.dy = 10;
mouseInput.dwFlags = (int)WINAPI.MouseFlags.MOUSEEVENTF_ABSOLUTE | (int)WINAPI.MouseFlags.MOUSEEVENTF_MOVE;

WINAPI.INPUT input = new WINAPI.INPUT();
input.type = 0;
input.mi = mouseInput;

uint x = WINAPI.SendInput(1, ref input, Marshal.SizeOf(input));
Console.WriteLine(x);
Console.WriteLine(WINAPI.GetLastError());
Console.ReadLine();

Notice that I have to call the unmanaged version of sizeof (Marshal.SizeOf) and pass the INPUT struct to it in order for the method to execute correctly; the regular C# sizeof won't cut it here. When I am defining the type of input, 0 represents the INPUT_MOUSE flag, since I am only handling the mouse here. Of course, I could re-organize my method to accept a set of INPUT instances - the native call itself allows this by requesting an array of INPUT and the correct indication of the number of INPUT instances passed - but that is not required for testing purposes.
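As a side note on the MOUSEEVENTF_ABSOLUTE flag discussed above: translating a pixel coordinate into the 0..65,535 normalized range is simple arithmetic. A sketch of one common mapping (written in Java for brevity; the exact rounding Windows applies is not specified here, so treat it as an approximation):

```java
public final class AbsoluteCoords {

    // With MOUSEEVENTF_ABSOLUTE set, dx/dy are not pixels but values normalized
    // to the range 0..65535 across the screen. This maps a pixel coordinate
    // onto that range (rounding choice is an assumption).
    static int toNormalized(int pixel, int screenExtent) {
        return (int) ((65535L * pixel) / (screenExtent - 1));
    }

    public static void main(String[] args) {
        System.out.println(toNormalized(0, 1920));    // 0: left/top edge
        System.out.println(toNormalized(1919, 1920)); // 65535: right/bottom edge
    }
}
```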
November 28, 2010
by Denzel D.
· 16,384 Views
Maven Profile Best Practices
Maven profiles, like chainsaws, are a valuable tool, with whose power you can easily get carried away, wielding them upon problems to which they are unsuited. Whilst you're unlikely to sever a leg misusing Maven profiles, I thought it worthwhile to share some suggestions about when and when not to use them. These three best practices are all born from real-world mishaps:

The build must pass when no profile has been activated
Never use <activeByDefault>
Use profiles to manage build-time variables, not run-time variables and not (with rare exceptions) alternative versions of your artifact

I'll expand upon these recommendations in a moment. First, though, let's have a brief round-up of what Maven profiles are and do.

Maven Profiles 101

A Maven profile is a subset of POM declarations that you can activate or deactivate according to some condition. When activated, they override the definitions in the corresponding standard tags of the POM. One way to activate a profile is to simply launch Maven with a -P flag followed by the desired profile name(s), but they can also be activated automatically according to a range of contextual conditions: JDK version, OS name and version, presence or absence of a specific file or property. The standard example is when you want certain declarations to take effect automatically under Windows and others under Linux. Almost all the tags that can be placed directly in a POM can also be enclosed within a <profile> tag. The easiest place to read up further about the basics is the Build Profiles chapter of Sonatype's Maven book. It's freely available, readable, and explains the motivation behind profiles: making the build portable across different environments.

The build must pass when no profile has been activated

(Thanks to for this observation.) Why? Good practice is to minimise the effort required to make a successful build. This isn't hard to achieve with Maven, and there's no excuse for a simple mvn clean package not to work. 
A maintainer coming to the project will not immediately know that profile wibblewibble has to be activated for the build to succeed. Don't make her waste time finding it out.

How to achieve it

It can be achieved simply by providing sensible defaults in the main POM sections, which will be overridden if a profile is activated.

Never use <activeByDefault>

Why not? This flag activates the profile if no other profile is activated. Consequently, it will fail to activate the profile if any other profile is activated. This seems like a simple rule which would be hard to misunderstand, but in fact it's surprisingly easy to be fooled by its behaviour. When you run a multimodule build, the activeByDefault flag will fail to operate when any profile is activated, even if the profile is not defined in the module where the activeByDefault flag occurs. (So if you've got a default profile in your persistence module, and a skinny war profile in your web module... when you build the whole project, activating the skinny war profile because you don't want JARs duplicated between WAR and EAR, you'll find your persistence layer is missing something.) activeByDefault automates profile activation, which is a good thing; it activates implicitly, which is less good; and it has unexpected behaviour, which is thoroughly bad. By all means activate your profiles automatically, but do it explicitly and automatically, with a clearly defined rule.

How to avoid it

There's another, less documented way to achieve what <activeByDefault> aims to achieve. You can activate a profile in the absence of some property:

<activation>
  <property>
    <name>!foo.bar</name>
  </property>
</activation>

This will activate the profile "nofoobar" whenever the property foo.bar is not defined. Define that same property in some other profile: nofoobar will automatically become active whenever the other is not. This is admittedly more verbose than <activeByDefault>, but it's more powerful and, most importantly, surprise-free. 
Use profiles to adapt to build-time context, not run-time context, and not (with rare exceptions) to produce alternative versions of your artifact

Profiles, in a nutshell, allow you to have multiple builds with a single POM. You can use this ability in two ways:

Adapt the build to variable circumstances (developer's machine or CI server; with or without integration tests) whilst still producing the same final artifact, or
Produce variant artifacts.

We can further divide the second option into structural variants, where the executable code in the variants is different, and variants which vary only in the value taken by some variable (such as a database connection parameter). If you need to vary the value of some variable at run-time, profiles are typically not the best way to achieve this. Producing structural variants is a rarer requirement -- it can happen if you need to target multiple platforms, such as JDK 1.4 and JDK 1.5 -- but it, too, is not recommended by the Maven people, and profiles are not the best way of achieving it. The most common case where profiles seem like a good solution is when you need different database connection parameters for development, test and production environments. It is tempting to meet this requirement by combining profiles with Maven's resource filtering capability to set variables in the deliverable artifact's configuration files (e.g. a Spring context). This is a bad idea. Why?

It's indirect: the point at which a variable's value is determined is far upstream from the point at which it takes effect. It makes work for the software's maintainers, who will need to retrace the chain of events in reverse.
It's error-prone: when there are multiple variants of the same artifact floating around, it's easy to generate or use the wrong one by accident.
You can only generate one of the variants per build, since the profiles are mutually exclusive. 
Therefore you will not be able to use the Maven release plugin if you need release versions of each variant (which you typically will).
It's against Maven convention, which is to produce a single artifact per project (plus secondary artifacts such as documentation).
It slows down feedback: changing the variable's value requires a rebuild. If you configured it at run-time, you would only need to restart the application (and perhaps not even that). One should always aim for rapid feedback.

Profiles are there to help you ensure your project will build in a variety of environments: a Windows developer's machine and a CI server, for instance. They weren't intended to help you build variant artifacts from the same project, nor to inject run-time configuration into your project.

How to achieve it

If you need to get variable run-time configuration into your project, there are alternatives:

Use JNDI for your database connections. Your project only contains the resource name of the datasource, which never changes. You configure the appropriate database parameters in the JNDI resource on the server.
Use system properties: Spring, for example, will pick these up when attempting to resolve variables in its configuration.
Define a standard mechanism for reading values from a configuration file that resides outside the project. For example, you could specify the path to a properties file in a system property.

Structural variants are harder to achieve, and I confess I have no first-hand experience with them. I recommend you read this explanation of how to do them and why they're a bad idea, and if you still want to do them, take the option of multiple JAR plugin or assembly plugin executions, rather than profiles. At least that way, you'll be able to use the release plugin to generate all your artifacts in one build, rather than a single one at a time.

Further reading

Profiles chapter from the Sonatype Maven book. 
Deploying to multiple environments (prod, test, dev): Stackoverflow.com discussion; see the first and top-rated answer. Short of creating a specific project for the run-time configuration, you could simply use run-time parameters such as system properties.
Creating multiple artifacts from one project: How to Create Two JARs from One Project (…and why you shouldn’t) by Tim O'Brien of Sonatype (the Maven people); a blog post explaining the same technique.
Maven best practices (not specifically about profiles): http://mindthegab.com/2010/10/21/boost-your-maven-build-with-best-practices/ and http://blog.tallan.com/2010/09/16/maven-best-practices/

This article is a completely reworked version of a post from my blog.
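The last of the alternatives listed above (reading values from a configuration file outside the project, located via a system property) can be sketched in a few lines. The property name app.config and the file layout are assumptions for illustration, not something the article prescribes:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ExternalConfig {

    // Reads configuration from a properties file whose path is given by the
    // "app.config" system property (property name is an assumption), e.g.:
    //   java -Dapp.config=/etc/myapp/app.properties -jar myapp.jar
    // Returns an empty Properties object when the property is not set.
    static Properties load() throws IOException {
        Properties props = new Properties();
        String path = System.getProperty("app.config");
        if (path != null) {
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
        }
        return props;
    }
}
```

Changing a database URL then means editing the external file and restarting the application: no rebuild, no variant artifacts, and the release plugin stays usable.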
November 27, 2010
by Andrew Spencer
· 140,207 Views · 4 Likes
Java Thread Local – How to Use and Code Sample
Read about what a Thread Local is, and learn how to use it in this awesome tutorial.
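Since the tutorial body isn't reproduced here, the core idea in one minimal sketch: a ThreadLocal gives each thread its own, independently initialized copy of a value. This example uses Java 8's ThreadLocal.withInitial; the counter is illustrative, not taken from the tutorial:

```java
public class ThreadLocalDemo {

    // Each thread sees its own copy of this value, starting from the initial 0.
    private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

    static int increment() {
        int next = COUNTER.get() + 1;
        COUNTER.set(next);
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        increment();
        increment();
        System.out.println(COUNTER.get()); // 2: the main thread's own copy

        // A freshly started thread gets the initial value, not the main thread's 2.
        Thread t = new Thread(() -> System.out.println(COUNTER.get())); // prints 0
        t.start();
        t.join();
    }
}
```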
November 23, 2010
by Veera Sundar
· 247,731 Views · 10 Likes
Real-Time Charts on the Java Desktop
Devoxx, and all similar conferences, is a place where you make new discoveries, continually. One of these, in my case, at last week's Devoxx, started from a discussion with Jaroslav Bachorik from the VisualVM team. He had presented VisualVM's extensibility in a session at Devoxx. I had heard that, when creating extensions for VisualVM, one can also create new charts using VisualVM's own charting API. Jaroslav confirmed this and we created a small demo together to prove it, i.e., there's a charting API in VisualVM. Since VisualVM is based on the NetBeans Platform, I went further and included the VisualVM charts in a generic NetBeans Platform application. Then I wondered what the differences are between JFreeChart and VisualVM charts, so I asked the VisualVM chart architect, Jiri Sedlacek. He sent me a very interesting answer:

JFreeCharts are great for creating any kind of static graphs (typically for reports). They provide support for all types of existing chart types. The benefit of using JFreeChart is fully customizable appearance and export to various formats. The only problem of this library is that it's not primarily designed for displaying live data. You can hack it to display data in real time, but the performance is poor. That's why I've created the VisualVM charts. The primary (and so far only) goal is to provide charts optimized for displaying live data with minimal performance and memory overhead. You can easily display a fullscreen graph and it will still scroll smoothly while running and adding new values (when running on physical hardware, virtualized environment may give slightly worse results). There's a real rendering engine behind the charts which ensures that only the changed areas of the chart are repainted (no full-repaints because of a 1px change). Scrolling the chart means moving the already rendered image and only painting the newly displayed area. 
Last but not least, the charts are optimized for displaying over a remote X session - rendering is automatically switched to low-quality ensuring good response times and interactivity. The Tracer engine introduced in VisualVM 1.3 further improves performance of the charts. I've intensively profiled and optimized the charts to minimize the cpu cycles/memory allocations for each repaint. As of now, I believe that the VisualVM charts are the fastest real time Java charts with the lowest cpu/memory footprint. Best of all is that everything described above is in the JDK. That's because VisualVM is in the JDK. Here's a small NetBeans Platform application (though you could also use the VisualVM chart API without using the NetBeans Platform, just include these JARs on your classpath: org-netbeans-lib-profiler-charts.jar, com-sun-tools-visualvm-charts.jar, com-sun-tools-visualvm-uisupport.jar and org-netbeans-lib-profiler-ui.jar) that makes use of the VisualVM chart API outlined above: The chart that you see above is updated in real time and you can change to full screen and you can scroll through it and, at the same time, there is no lag and it is very performant. 
Below is all the code (from the unit test package in the VisualVM sources) that you see in the JPanel above:

public class Demo extends JPanel {

    private static final long SLEEP_TIME = 500;
    private static final int VALUES_LIMIT = 150;
    private static final int ITEMS_COUNT = 8;

    private SimpleXYChartSupport support;

    public Demo() {
        createModels();
        setLayout(new BorderLayout());
        add(support.getChart(), BorderLayout.CENTER);
    }

    private void createModels() {
        SimpleXYChartDescriptor descriptor =
                SimpleXYChartDescriptor.decimal(0, 1000, 1000, 1d, true, VALUES_LIMIT);
        for (int i = 0; i < ITEMS_COUNT; i++) {
            descriptor.addLineFillItems("Item " + i);
        }
        descriptor.setDetailsItems(new String[]{"Detail 1", "Detail 2", "Detail 3"});
        descriptor.setChartTitle("Demo Chart");
        descriptor.setXAxisDescription("X Axis [time]");
        descriptor.setYAxisDescription("Y Axis [units]");
        support = ChartFactory.createSimpleXYChart(descriptor);
        new Generator(support).start();
    }

    private static class Generator extends Thread {

        private SimpleXYChartSupport support;

        private Generator(SimpleXYChartSupport support) {
            this.support = support;
        }

        public void run() {
            while (true) {
                try {
                    long[] values = new long[ITEMS_COUNT];
                    for (int i = 0; i < values.length; i++) {
                        values[i] = (long) (1000 * Math.random());
                    }
                    support.addValues(System.currentTimeMillis(), values);
                    support.updateDetails(new String[]{1000 * Math.random() + "",
                            1000 * Math.random() + "", 1000 * Math.random() + ""});
                    Thread.sleep(SLEEP_TIME);
                } catch (Exception e) {
                    e.printStackTrace(System.err);
                }
            }
        }
    }
}

Here is the related Javadoc. To get started using the VisualVM charts in your own application, read this blog, and then look in the "lib" folder of the JDK to find the JARs you will need. And then have fun with real-time data in your Java desktop applications.
November 20, 2010
by Geertjan Wielenga
· 71,404 Views
SOAP/SAAJ/XML Issues When Migrating to Java 6 (with Axis 1.2)
When you migrate an application using Apache Axis 1.2 from Java 4 or 5 to Java 6 (JRE 1.6), you will most likely encounter a handful of strange SOAP/SAAJ/XML errors and ClassCastExceptions. This is because Sun's implementation of SAAJ 1.3 has been integrated directly into the 1.6 JRE. Due to this integration, it is loaded by the bootstrap class loader and thus cannot see various classes that you might be referencing in your old code. As mentioned on the Spring pages: Java 1.6 ships with SAAJ 1.3, JAXB 2.0, and JAXP 1.4 (a custom version of Xerces and Xalan). Overriding these libraries by putting a different version on the classpath will result in various classloading issues, or exceptions in org.apache.xml.serializer.ToXMLSAXHandler. The only option for using more recent versions is to put the newer version in the endorsed directory (see above). Fortunately, there is a simple solution, at least for Axis 1.2. Some of the exceptions that we've encountered follow. Sample Axis code:

import javax.xml.messaging.URLEndpoint;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;
...
public static void callAxisWebservice() {
    SOAPConnectionFactory soapconnectionfactory = SOAPConnectionFactory.newInstance();
    SOAPConnection soapconnection = soapconnectionfactory.createConnection();
    MessageFactory messagefactory = MessageFactory.newInstance();
    SOAPMessage soapmessage = messagefactory.createMessage();
    ...
    URLEndpoint urlendpoint = new URLEndpoint(string);
    SOAPMessage soapmessage_18_ = soapconnection.call(soapmessage, urlendpoint);
    ...
}

SOAPExceptionImpl: Bad endPoint type

com.sun.xml.internal.messaging.saaj.SOAPExceptionImpl: Bad endPoint type http://example.com/ExampleAxisService
    at com.sun.xml.internal.messaging.saaj.client.p2p.HttpSOAPConnection.call(HttpSOAPConnection.java:161)

This extremely confusing error is caused by the seemingly innocent code above, namely by the '... new 
URLEndpoint(string)' and the call itself. The problem here is that Sun's HttpSOAPConnection can't see javax.xml.messaging.URLEndpoint, because it is not part of the JRE and is contained in another JAR, not visible to classes loaded by the bootstrap loader. If you check HttpSOAPConnection's code (this is not exactly the version I have, but close enough) you will see that it calls "Class.forName("javax.xml.messaging.URLEndpoint");" on line 101. For the reason mentioned, it fails with a ClassNotFoundException (as indicated by the log message "URLEndpoint is available only when JAXM is there" when you enable JDK logging at the finest level), and thus the method isn't able to recognize the type of the argument and fails with the confusing Bad endPoint message. A solution in this case would be to pass a java.net.URL or a String instead of a URLEndpoint (though it might lead to other errors, like the one below). Related: Oracle saaj:soap1.2 bug SOAPExceptionImpl: Bad endPoint type.

DOMException: NAMESPACE_ERR

org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
    at org.apache.xerces.dom.AttrNSImpl.setName(Unknown Source)
    at org.apache.xerces.dom.AttrNSImpl.(Unknown Source)
    at org.apache.xerces.dom.CoreDocumentImpl.createAttributeNS(Unknown Source)

I don't remember exactly what we changed on the classpath to get this confusing exception, and I have no idea why it is thrown.

Bonus: Conflict between Axis and IBM WebSphere JAX-RPC "thin client"

Additionally, if you happen to have com.ibm.ws.webservices.thinclient_7.0.0.jar somewhere on the classpath, you may get this funny exception:

java.lang.ClassCastException: org.apache.axis.Message incompatible with com.ibm.ws.webservices.engine.Message
    at com.ibm.ws.webservices.engine.soap.SOAPConnectionImpl.call(SOAPConnectionImpl.java:198)

You may wonder why Java tries to use the Axis Message with the WebSphere SOAP connection. 
Well, it's because the SAAJ lookup mechanism prefers the WebSphere implementation, since it declares itself via META-INF/services/javax.xml.soap.SOAPFactory pointing to com.ibm.ws.webservices.engine.soap.SOAPConnectionFactoryImpl, but it instantiates org.apache.axis.soap.MessageFactoryImpl for message creation because the WebSphere thin client doesn't provide an implementation of this factory. The solution here is the same as for all the other exceptions: use Axis exclusively. But if you are interested, check the description of how to correctly create a Message with the WebSphere runtime on page 119 of the IBM WebSphere Application Server V7.0 Web Services Guide (md = javax.xml.ws.Service.create(serviceName).createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE); ((SOAPBinding) ((BindingProvider) md).getBinding()).getMessageFactory();).

Solution

The solution that my colleague Jan Nad found is to force the JRE to use the SOAP/SAAJ implementation provided by Axis, something like:

java -Djavax.xml.soap.SOAPFactory=org.apache.axis.soap.SOAPFactoryImpl \
     -Djavax.xml.soap.MessageFactory=org.apache.axis.soap.MessageFactoryImpl \
     -Djavax.xml.soap.SOAPConnectionFactory=org.apache.axis.soap.SOAPConnectionFactoryImpl \
     example.MainClass

It's also described in issue AXIS-2777. Check the details of the lookup process in the SOAPFactory.newInstance() JavaDoc. From http://theholyjava.wordpress.com/2010/11/19/soapsaajxml-issues-when-migrating-to-java-6-with-axis-1-2/
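If you cannot change the launch script, the same three settings can be applied programmatically instead. This is a sketch, not from the original post; only the property names and the Axis 1.x implementation class names are taken from the article, and it must run before the first SAAJ factory lookup:

```java
// Programmatic alternative to the -D flags shown above: force the Axis SAAJ
// implementation before any SAAJ factory is first looked up.
public class AxisSaajBootstrap {

    public static void forceAxisSaaj() {
        System.setProperty("javax.xml.soap.SOAPFactory",
                "org.apache.axis.soap.SOAPFactoryImpl");
        System.setProperty("javax.xml.soap.MessageFactory",
                "org.apache.axis.soap.MessageFactoryImpl");
        System.setProperty("javax.xml.soap.SOAPConnectionFactory",
                "org.apache.axis.soap.SOAPConnectionFactoryImpl");
    }

    public static void main(String[] args) {
        forceAxisSaaj();
        // prints org.apache.axis.soap.MessageFactoryImpl
        System.out.println(System.getProperty("javax.xml.soap.MessageFactory"));
    }
}
```

Note that this only sets the lookup properties; the Axis JARs must still be on the classpath when a factory is actually instantiated.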
November 20, 2010
by Jakub Holý
· 27,341 Views
Java Web Start (Jnlp) Hello World Example
This tutorial shows you how to create a Java Web Start (JNLP) file for user download. When the user clicks on the downloaded JNLP file, it launches a simple AWT program. Here are the summary steps:

1. Create a simple AWT program and jar it as testjnlp.jar
2. Add a keystore to testjnlp.jar
3. Create a JNLP file
4. Put it all into the Tomcat folder
5. Access testjnlp.jar from the web through http://localhost:8080/test.jnlp

Before starting this tutorial, let's read this brief Java Web Start explanation from Oracle: Java Web Start is a mechanism for program delivery through a standard web server. Typically initiated through the browser, these programs are deployed to the client and executed outside the scope of the browser. Once deployed, the programs do not need to be downloaded again, and they can automatically download updates on startup without requiring the user to go through the whole installation process again. OK, let's go ~

1. Install JDK and Tomcat

Install Java JDK/JRE version 1.5 or above, and Tomcat.

2. Directory structure

Directory structure of this example.

3. AWT + JNLP

See the content of TestJnlp.java; it's just a simple AWT program with JNLP support. 
package com.mkyong;

import java.awt.*;
import javax.swing.*;
import java.net.*;
import javax.jnlp.*;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;

public class TestJnlp {

    static BasicService basicService = null;

    public static void main(String[] args) {
        JFrame frame = new JFrame("Mkyong JNLP Unofficial Guide");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JLabel label = new JLabel();
        Container content = frame.getContentPane();
        content.add(label, BorderLayout.CENTER);

        String message = "JNLP Hello World";
        label.setText(message);

        try {
            basicService = (BasicService) ServiceManager.lookup("javax.jnlp.BasicService");
        } catch (UnavailableServiceException e) {
            System.err.println("Lookup failed: " + e);
        }

        JButton button = new JButton("http://www.mkyong.com");
        ActionListener listener = new ActionListener() {
            public void actionPerformed(ActionEvent actionEvent) {
                try {
                    URL url = new URL(actionEvent.getActionCommand());
                    basicService.showDocument(url);
                } catch (MalformedURLException ignored) {
                }
            }
        };
        button.addActionListener(listener);
        content.add(button, BorderLayout.SOUTH);

        frame.pack();
        frame.show();
    }
}

P.S. If "import javax.jnlp.*;" cannot be resolved, please include the JNLP library, which is located at jre/lib/javaws.jar.

4. Jar it

Locate your Java classes folder, then jar it with the following command in a command prompt:

jar -cf testjnlp.jar *.*

This will package all the Java classes into a new jar file, named "testjnlp.jar".

5. Create keystore

Add a new keystore named "testkeys":

keytool -genkey -keystore testkeys -alias jdc

It will ask for a keystore password, first name, last name, organization's unit, etc. Just fill them all in.

6. Assign keystore to jar file

Attach the newly generated keystore "testkeys" to your "testjnlp.jar" file:

jarsigner -keystore testkeys testjnlp.jar jdc

It will ask for the password of your newly created keystore.

7.
Deploy the jar

Copy "testjnlp.jar" to Tomcat's default web server folder, for example, on Windows: C:\Program Files\Apache\Tomcat 6.0\webapps\ROOT

8. Create the JNLP file

Create a new test.jnlp file (the descriptor names the vendor, "yong mook kim", and the title, "testing").

9. Deploy the JNLP file

Copy test.jnlp to your Tomcat default web server folder as well: C:\Program Files\Apache\Tomcat 6.0\webapps\ROOT

10. Start Tomcat

Start Tomcat: c:\tomcat folder\bin\tomcat6.exe

11. Test it

Access the URL http://localhost:8080/test.jnlp; it will prompt you to download the test.jnlp file. Just accept it and double-click on it. If everything went fine, you should see the following output. Click on the "Run" button to launch the AWT program.

Note: if the JNLP gives no response, put the following mapping in your web.xml, which is located in the Tomcat conf folder: extension "jnlp", MIME type "application/x-java-jnlp-file".
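The original test.jnlp listing did not survive in this copy of the article; a minimal descriptor of the kind the tutorial needs looks roughly like this. The title and vendor come from the remnants above; the codebase, version, and main class are placeholders to adapt to your setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://localhost:8080" href="test.jnlp">
    <information>
        <title>testing</title>
        <vendor>yong mook kim</vendor>
    </information>
    <security>
        <!-- signed jar, so full permissions can be requested -->
        <all-permissions/>
    </security>
    <resources>
        <j2se version="1.5+"/>
        <jar href="testjnlp.jar"/>
    </resources>
    <application-desc main-class="com.mkyong.TestJnlp"/>
</jnlp>
```

The href must match the file name under webapps/ROOT, and the jar href must match the signed jar deployed next to it.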
November 16, 2010
by Yong Mook Kim
· 93,804 Views · 3 Likes
Generating Client JAVA code for WSDL using SOAP UI
Create a SOAP UI project using your WSDL. Set the preferences in SOAP UI for the Axis2 home directory. Right-click on the WSDL in SOAP UI and click "Generate Code". Select ADB binding and the following settings, and click "Generate". Following is the directory structure and the code files generated. That's it; you can now use this code from your IDE by importing it. PS: you will need to add the Axis2 jars to your project classpath. For more details visit my blog @ http://nitinaggarwal.wordpress.com/
November 12, 2010
by Nitin Aggarwal
· 209,633 Views · 3 Likes
ASP.NET MVC 3: Building simple image editor using WebImage helper
In my previous posting about the WebImage helper I introduced how to use WebImage for different image manipulations. In this posting I will show you how to build a simple online image editor using the WebImage helper.

Source code

You can find the source code of this example in my Visual Studio 2010 experiments repository at GitHub. The simple image editor belongs to the Experiments.AspNetMvc3NewFeatures solution; the project name is Experiments.AspNetMvc3NewFeatures.Aspx. Here you can see a screenshot of the simple image editor. Four simple operations are enough for the current example.

Building the editor view

Before going to the controller, which is extremely simple – believe me, it is a no-brainer – let's take a look at the view. The editor is made up of the following parts: an image tag that shows the image with the current effects, effect checkboxes, and JavaScript that requests the image with the selected effects from the server. The view, titled "Image Workshop", offers four checkboxes: Flip horizontally, Flip vertically, Rotate left, and Rotate right. The JavaScript function updateImage() checks which checkboxes are checked and creates the query string for the GetImage controller action. This query string contains values only for the applied effects.

Controller actions

When you take a look at the image definition in the view you can see that we need two separate actions – one that returns the editor view and another that returns the image. I said before that the controller is extremely simple. Don't mind my not-so-nice solution for the image action – it is my working draft. 
public class ImageWorkshopController : Controller
{
    [HttpGet]
    public ActionResult Index()
    {
        return View();
    }

    public void GetImage(string horizontalFlip = "", string verticalFlip = "",
                         string rotateLeft = "", string rotateRight = "")
    {
        var imagePath = Server.MapPath("~/images/bunny-peanuts.jpg");
        var image = new WebImage(imagePath);

        if (!string.IsNullOrWhiteSpace(verticalFlip))
            image = image.FlipVertical();
        if (!string.IsNullOrWhiteSpace(horizontalFlip))
            image = image.FlipHorizontal();
        if (!string.IsNullOrWhiteSpace(rotateLeft))
            image = image.RotateLeft();
        if (!string.IsNullOrWhiteSpace(rotateRight))
            image = image.RotateRight();

        image.Write();
    }
}

Applying effects to the image is very easy. We just check the values sent from the view and apply an effect only if the corresponding value is not null or an empty string. That's why I created the query string with effect values as ones ("1") in JavaScript.
November 9, 2010
by Gunnar Peipman
· 16,267 Views
Struts 2 : Creating and Accessing Maps
Today, in this post, I am going to discuss how to create and access HashMaps in Struts 2. My environment has the following jar files: struts2-core-2.1.8.1.jar and ognl-2.7.3.jar. Struts 2 makes extensive use of OGNL in order to retrieve the values of elements. OGNL stands for Object Graph Navigation Language. As the name suggests, OGNL is used to navigate an object graph. In this post, I am going to use the OGNL syntax to create a Map on a JSP page, and show you how to iterate over it to fetch the keys and values from the map. In the following example, I will create a map on the fly in the iterator tag, and will use it in the body of the iterator tag. Note the syntax that is used to create a Map on a JSP page in Struts 2 using OGNL. Once the map is created, the iterator tag can be used to iterate over each element of the map. Now suppose the map that we want to access is inside the HttpRequest. Assume that some action somewhere in the chain has kept a map in the request using the key "myMap". In order to iterate over the elements of the map, we can do the following. Okay now, enough with iterating over maps. I don't think there are any more permutations for accessing maps, but if I do find more, I'll document them here. Consider a case where you don't want to iterate over the entire map. Instead, all that you want to do is to extract a value from the map based upon a key that is already known to you in your JSP page. Assume that you have a variable in page scope called "runtimeKey" that you set using the s:set tag. The value of this variable is a string key that can be used to get a value from the map. Here is how you can fetch the value from the map without iterating over it. As you see, since the variable "runtimeKey" is an OGNL variable and is available on the value stack, it can be referenced using the # notation. Also note that instead of using the dot notation, I have used square brackets to fetch the key. 
This is because my expression "#runtimeKey" will only be evaluated when it's inside the brackets. Also note that the value of the key runtimeKey is contained within single quotes to direct OGNL to evaluate it as a string when setting it as the value for my key. Consider another situation where your keys follow a pattern, for example key_1, key_2, and you have the values 1 and 2 as page-scoped variables. So, now instead of having the string value of the key directly, you may have to construct the key using concatenation. Your key pattern was set above, and your Map has a key called key_1. So, here is how you would have to concatenate your strings in order to construct the key and fetch the value. Huh. So easy! That's all for now, folks. Stay tuned for more! Happy Programming :) From http://mycodefixes.blogspot.com/2010/11/struts-2-creating-and-accessing-maps.html
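The tag snippets referred to above were not preserved in this copy of the post. As a rough sketch (keys, values, and variable names are illustrative, not the author's originals), the three cases look like this with Struts 2's s:iterator and s:property tags:

```jsp
<%-- Case 1: create a map inline with OGNL and iterate it --%>
<s:iterator value="#{'fruit':'apple','drink':'tea'}">
    Key: <s:property value="key"/> - Value: <s:property value="value"/><br/>
</s:iterator>

<%-- Case 2: iterate a map stored in the request under the key "myMap" --%>
<s:iterator value="#request.myMap">
    <s:property value="key"/> = <s:property value="value"/><br/>
</s:iterator>

<%-- Case 3: fetch a single value by a key held in the OGNL variable runtimeKey --%>
<s:property value="#request.myMap[#runtimeKey]"/>
```

Iterating a map yields Map.Entry objects, which is why key and value are available as properties inside the iterator body.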
November 8, 2010
by Ryan Sukale
· 32,144 Views
WCF: The maximum message size quota for incoming messages (65536) has been exceeded
When using WCF services you may get the following error: "The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element." This error is given because of the size limits set for incoming messages in your application configuration. You need to increase the values of two limiting parameters: maxBufferSize and maxReceivedMessageSize. In my example these values are set to 5000000. After increasing these values your service requests should work normally.
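The two parameters live on the binding configuration in web.config or app.config; a sketch for basicHttpBinding (the binding name is illustrative, and 5000000 matches the value mentioned above):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- raise both limits from the 65536 default -->
      <binding name="largeMessages"
               maxBufferSize="5000000"
               maxReceivedMessageSize="5000000" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```

The endpoint must reference the binding via bindingConfiguration="largeMessages" for the new limits to take effect.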
November 8, 2010
by Gunnar Peipman
· 33,219 Views
An introduction to Spock
Spock is an open source testing framework for Java and Groovy that has been attracting a growing following, especially in the Groovy community. It lets you write concise, expressive tests, using a quite readable BDD-style notation. It even comes with its own mocking library built in. Oh. I thought he was a sci-fi character. Can I see an example? Sure. Here's a simple one from a coding kata I did recently:

import spock.lang.Specification

class RomanCalculatorSpec extends Specification {

    def "I plus I should equal II"() {
        given:
        def calculator = new RomanCalculator()

        when:
        def result = calculator.add("I", "I")

        then:
        result == "II"
    }
}

In Spock, you don't have tests, you have specifications. These are normal Groovy classes that extend the Specification class, which is actually a JUnit class. Your class contains a set of specifications, represented by methods with funny-method-names-in-quotes™. The funny-method-names-in-quotes™ take advantage of some Groovy magic to let you express your requirements in a very readable form. And since these classes are derived from JUnit, you can run them from within Eclipse like a normal Groovy unit test, and they produce standard JUnit reports, which is nice for CI servers. Another thing: notice the structure of this test? We are using given:, when: and then: to express actions and expected outcomes. This structure is common in Behaviour-Driven Development, or BDD, frameworks like Cucumber and easyb, though Spock-style tests are generally more concise and more technically focused than tools like Cucumber and easyb, which are often used for automating acceptance tests. But I digress... Actually, the example I gave earlier was a bit terse. 
We could make our intent clearer by adding text descriptions after the when: and then: labels, as I've done here:

def "I plus I should equal II"() {
    when: "I add two roman numbers together"
    def result = calculator.add("I", "I")

    then: "the result should be the roman number equivalent of their sum"
    result == "II"
}

This is an excellent way of clarifying your ideas and documenting your API. But where are the AssertEquals statements? Aha! I'm glad you asked! Spock uses a feature called Power Asserts. The statement after the then: is your assert. If this test fails, Spock will display a detailed analysis of what went wrong, along the following lines:

I plus I should equal II(com.wakaleo.training.spocktutorial.RomanCalculatorSpec)  Time elapsed: 0.33 sec  <<< FAILURE!
Condition not satisfied:

result == "II"
|      |
I      false
       1 difference (50% similarity)
       I(-)
       I(I)

    at com.wakaleo.training.spocktutorial.RomanCalculatorSpec.I plus I should equal II(RomanCalculatorSpec.groovy:17)

Nice! But in JUnit, I have @Before and @After for fixtures. Can I do that in Spock? Sure, but you don't use annotations. Instead you implement setup() and cleanup() methods (which are run before and after each specification). I've added one here to show you what they look like:

import spock.lang.Specification

class RomanCalculatorSpec extends Specification {

    def calculator

    def setup() {
        calculator = new RomanCalculator()
    }

    def "I plus I should equal II"() {
        when:
        def result = calculator.add("I", "I")

        then:
        result == "II"
    }
}

You can also define a setupSpec() and cleanupSpec(), which are run just before the first test and just after the last one. I'm a big fan of parameterized tests in JUnit 4. Can I do that in Spock? You sure can! In fact it's one of Spock's killer features! 
def "The lowest number should go at the end"() {
    when:
    def result = calculator.add(a, b)

    then:
    result == sum

    where:
    a    | b    | sum
    "X"  | "I"  | "XI"
    "I"  | "X"  | "XI"
    "XX" | "I"  | "XXI"
    "XX" | "II" | "XXII"
    "II" | "XX" | "XXII"
}

This code will run the test 5 times. The variables a, b, and sum are initialized from the rows of the table in the where: clause. And if any of the tests fail, you get the same sort of detailed analysis of what went wrong. That's pretty cool too. What about mocking? Can I use Mockito? Sure, if you want, but Spock actually comes with its own mocking framework, which is pretty neat. You set up a mock or a stub using the Mock() method. I've shown two possible ways to use this method here:

given:
Subscriber subscriber1 = Mock()
def subscriber2 = Mock(Subscriber)
...

You can set these mocks up to behave in certain ways. Here are a few examples. You can say a method should return a certain value using the >> operator:

subscriber1.isActive() >> true
subscriber2.isActive() >> false

Or you could get a method to throw an exception when it is called:

subscriber.activate() >> { throw new BlacklistedSubscriberException() }

Then you can test outcomes in a few different ways. 
Here is a more complicated example to show you some of your options:

def "Messages published by the publisher should only be received by active subscribers"() {
    given: "a publisher"
    def publisher = new Publisher()

    and: "some active subscribers"
    Subscriber activeSubscriber1 = Mock()
    Subscriber activeSubscriber2 = Mock()
    activeSubscriber1.isActive() >> true
    activeSubscriber2.isActive() >> true
    publisher.add activeSubscriber1
    publisher.add activeSubscriber2

    and: "a deactivated subscriber"
    Subscriber deactivatedSubscriber = Mock()
    deactivatedSubscriber.isActive() >> false
    publisher.add deactivatedSubscriber

    when: "a message is published"
    publisher.publishMessage("Hi there")

    then: "the active subscribers should get the message"
    1 * activeSubscriber1.receive("Hi there")
    1 * activeSubscriber2.receive({ it.contains "Hi" })

    and: "the deactivated subscriber didn't receive anything"
    0 * deactivatedSubscriber.receive(_)
}

That does look neat. So what is the best place to use Spock? Spock is great for unit or integration testing of Groovy or Grails projects. On the other hand, tools like easyb and Cucumber are probably better for automated acceptance tests - the format is less technical and the reporting is more appropriate for non-developers. From http://www.wakaleo.com/blog/303-an-introduction-to-spock
November 4, 2010
by John Ferguson Smart
· 38,157 Views · 4 Likes
Dynamic Mock Testing
Have you ever had to create a mock object in which most methods do nothing and are not called, but in others something useful needs to be done? EasyMock has some newish functionality to let you stub individual methods. But before I had heard about that, I had built a little framework (one base class) for creating mock objects, which stubs those methods you want to stub, as well as logging every call made to the classes being mocked. It works like this: you choose a class which you need to mock, for example a service class called FooService, and you create a new class called FooServiceMock. You make it extend AbstractMock<T>, where T is the class you are mocking. As an example:

public class FooServiceMock extends AbstractMock<FooService> {

    public FooServiceMock() {
        super(FooService.class);
    }

It needs to have a constructor which calls the super constructor, passing the class being mocked too. Perhaps that could be optimised; I don't have too much time right now. Next, you implement only those methods you expect to be called. For example:

public class FooServiceMock extends AbstractMock<FooService> {

    public FooServiceMock() {
        super(FooService.class);
    }

    /**
     * This is a method which exists in FooService,
     * but I want it to do something else.
     */
    public String sayHello(String name) {
        return "Hello " + name + ", Foo here! This is a stub method!";
    }

To use the mock, you'll notice that it doesn't extend the class which it mocks, which might seem problematic... Well, there are good reasons. To do the mocking, the abstract base class is actually going to create a dynamic proxy which wraps itself behind the interface of the class being mocked. To the caller, it looks like the FooService, but it's not actually anything related to it. Any time a call to the FooService is made, the first thing the proxy does is log that call, using XStream to create an XML representation of the parameters being passed into the method. 
Then the proxy looks in the instance of the mock class to see if it can find the method being called (well, at least a method which takes the same parameters and has the same name and return type). If it finds such a method, it calls it. In our example, the sayHello(String) method would get called, and its result, if there is one, is returned to the caller. In the case where it cannot find the method, it throws an exception, because it assumes that if it was not implemented, you didn't expect it to be called. You could of course change this to suit your needs, maybe even calling the actual FooService. So, how do you use the FooServiceMock to create a FooService instance which you can use to mock your service? In the test, where you set up the class under test, you do this:

FooServiceMock fooService = new FooServiceMock();
// perhaps tell it about objects you would
// like it to return...
instanceOfClassUnderTest.setFooService(fooService.getMock());

The setFooService(FooService) method on the instance of the class you are testing is in my case present, but you might not have it and may need to use reflection instead. It's a question of how testable you write your classes, and is a design choice. The getMock() method on the AbstractMock class is the method which creates the dynamic proxy that wraps the instance of the mock. You can now test the class. There is, however, still something useful you can do after testing, i.e. assert that the right calls were made in the correct order with the right parameters. You do this in the test class too:

assertEquals(1, fooService.getCalls().size());
assertEquals("[sayHello: Ant]", fooService.getCalls().toString());

The above tests that the sayHello(String) method was called just once, and was passed the name "Ant". There are times when you might want to clear the call log between parts of the test. For that, call the clearCalls() method on the mock object:

fooService.clearCalls();

Have fun! 
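The mechanics described above can be sketched with a JDK dynamic proxy. This is my own minimal reconstruction, not the author's original source (which isn't shown here): it logs each call, dispatches to a same-signature method on the mock subclass, and fails on unexpected calls. Note that java.lang.reflect.Proxy requires FooService to be an interface, and the log format here is simplified (no XStream):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AbstractMockDemo {

    interface FooService { String sayHello(String name); }

    static abstract class AbstractMock<T> {
        private final Class<T> mocked;
        private final List<String> calls = new ArrayList<>();

        AbstractMock(Class<T> mocked) { this.mocked = mocked; }

        @SuppressWarnings("unchecked")
        T getMock() {
            return (T) Proxy.newProxyInstance(mocked.getClassLoader(),
                new Class<?>[]{mocked},
                (proxy, method, args) -> {
                    Object[] a = args == null ? new Object[0] : args;
                    // log every call, e.g. "sayHello: [Ant]"
                    calls.add(method.getName() + ": " + Arrays.toString(a));
                    try {
                        // dispatch to a same-signature method on the mock subclass
                        Method stub = getClass().getMethod(method.getName(),
                                method.getParameterTypes());
                        return stub.invoke(this, a);
                    } catch (NoSuchMethodException e) {
                        // not implemented means it was not expected to be called
                        throw new UnsupportedOperationException(
                                "Unexpected call: " + method.getName());
                    }
                });
        }

        List<String> getCalls() { return calls; }
        void clearCalls() { calls.clear(); }
    }

    static class FooServiceMock extends AbstractMock<FooService> {
        FooServiceMock() { super(FooService.class); }
        public String sayHello(String name) { return "Hello " + name + "!"; }
    }

    public static void main(String[] args) {
        FooServiceMock mock = new FooServiceMock();
        FooService service = mock.getMock();
        System.out.println(service.sayHello("Ant")); // prints Hello Ant!
        System.out.println(mock.getCalls());         // prints [sayHello: [Ant]]
    }
}
```

The real framework logged parameters as XML via XStream and could support class (non-interface) mocks only with a bytecode library such as cglib; this sketch sticks to what the JDK provides.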
From http://blog.maxant.co.uk/pebble/2010/11/03/1288813500000.html
November 4, 2010
by Ant Kutschera
· 9,651 Views
ASP.NET: Converting X-Forwarded-For To REMOTE_HOST
Just keeping notes here. If you're using IIS Mod Rewrite, or if you're running a reverse proxy or a non-transparent load balancer, you may find the request URL server variable mismatching and the REMOTE_ADDR server variable coming back as the IP address of the proxy server or the load balancer. This is no good, because all of the user's IP address information is lost. Well, almost lost. What should happen is that the reverse proxy adds an X-Forwarded-For HTTP header to the request headers containing the user's IP address, as well as X-Original-URL. These can then be read back by the web server. The problem is, even with X-Forwarded-For being passed in to the web server, REMOTE_ADDR is still wrong. I'm looking for the easiest path to just fix this once and for all without delving into web application code. So I'm playing with this simple global.asax tweak. I haven't fully tested this yet so, again, just taking notes here ..

void Application_BeginRequest(object sender, EventArgs e)
{
    var ss = Request.ServerVariables["SERVER_SOFTWARE"];
    if (!string.IsNullOrEmpty(ss) && ss.Contains("Microsoft-IIS")) // doesn't work w/ Cassini
    {
        string SourceIP = String.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
            ? Request.ServerVariables["REMOTE_ADDR"]
            : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

        Request.ServerVariables.Set("REMOTE_ADDR", SourceIP);
        Request.ServerVariables.Set("REMOTE_HOST", SourceIP);

        string OrigUrl = Request.ServerVariables["HTTP_X_ORIGINAL_URL"];
        if (!string.IsNullOrEmpty(OrigUrl))
        {
            var url = new Uri(Request.Url, OrigUrl);
            Request.ServerVariables.Set("SERVER_NAME", url.Host);
            Request.ServerVariables.Set("SERVER_PORT", url.Port.ToString());
        }
    }
}
November 4, 2010
by Jon Davis
· 15,803 Views
Specification Pattern by Scala
The specification pattern is a simple and clean way to implement your business rules. It is a mixture of the Composite and Factory GoF patterns, introduced by Eric Evans & Martin Fowler. In this blog, we will not explain how the pattern works; you can refer to Wikipedia and Xebia for more details. Instead, we try here to translate the specification pattern into the Scala language.

Implementation:

package me.jtunisie.specifications

trait TSpecification[T] {
  def ->(candidate: T): Boolean
  def ||(specification: TSpecification[T]): TSpecification[T]
  def &&(specification: TSpecification[T]): TSpecification[T]
  def unary_!(): TSpecification[T]

  // Additions
  def |(specification: TSpecification[T]): TSpecification[T]
  def &(specification: TSpecification[T]): TSpecification[T]
}

The last two methods have been added compared to the classic specification. Sometimes we need to do a full check, as A || B is not always equal to B || A. Take as example true || (1/0 == 0). We encountered this problem when validating an old embedded compiler written in C: on some platforms, like Solaris, (A || B) starts by checking B and then A.

abstract class ASpecification[T] extends TSpecification[T] {
  def ->(candidate: T): Boolean
  def ||(s: TSpecification[T]): TSpecification[T] = OrSpecification(this, s)
  def &&(s: TSpecification[T]): TSpecification[T] = AndSpecification(this, s)
  def unary_!(): TSpecification[T] = NotSpecification(this)

  // Recursive add-ons
  def |(s: TSpecification[T]): TSpecification[T] = ROrSpecification(this, s)
  def &(s: TSpecification[T]): TSpecification[T] = RAndSpecification(this, s)
}

To implement the recursive and non-recursive behaviour, we have used /: and :\. 
According to the documentation, (z /: List(a, b, c)) (op) is equal to op(op(op(z, a), b), c), and (List(a, b, c) :\ z) (op) is equal to op(a, op(b, op(c, z))).

case class AndSpecification[T](s: TSpecification[T]*) extends ASpecification[T] {
  override def ->(candidate: T): Boolean = (true /: s) (_ -> candidate && _ -> candidate)
}

case class OrSpecification[T](s: TSpecification[T]*) extends ASpecification[T] {
  override def ->(candidate: T): Boolean = (false /: s) (_ -> candidate || _ -> candidate)
}

case class NotSpecification[T](s: TSpecification[T]) extends ASpecification[T] {
  override def ->(candidate: T) = !(s -> candidate)
}

case class RAndSpecification[T](s: TSpecification[T]*) extends ASpecification[T] {
  override def ->(candidate: T): Boolean = (s :\ true) (_ -> candidate && _ -> candidate)
}

case class ROrSpecification[T](s: TSpecification[T]*) extends ASpecification[T] {
  override def ->(candidate: T): Boolean = (s :\ false) (_ -> candidate || _ -> candidate)
}

This code will not compile yet. In fact, the Boolean class doesn't have a -> method (isSatisfiedBy). In Scala, we can use the open class technique (adding this method to the Boolean class via an implicit conversion). Here is the implicit implementation:

package me.jtunisie

package object specifications {

  class MyBool(b: Boolean) {
    def ->(candidate: Any): Boolean = b
  }

  implicit def toMyBool(b: Boolean) = new MyBool(b)
}

The source code is under http://github.com/ouertani/Rules. Mode of use (with precedence checked):

object AlwaysOk extends ASpecification[Boolean] {
  override def ->(b: Boolean) = true
}

object AlwaysKo extends ASpecification[Boolean] {
  override def ->(b: Boolean) = false
}

false mustBe ((AlwaysOk || AlwaysKo) && AlwaysKo) -> (true)
true mustBe (AlwaysOk || (AlwaysKo && AlwaysKo)) -> (true)
true mustBe (AlwaysOk || AlwaysKo && AlwaysKo) -> (true)
November 3, 2010
by Slim Ouertani
· 8,610 Views