The Latest Coding Topics

Spring Test with Thymeleaf for Views
I am a recent convert to Thymeleaf for view templating in Spring-based web applications, preferring it over JSPs. All the arguments the Thymeleaf documentation makes for choosing Thymeleaf over JSP hold water, and I am definitely sold. One of the big reasons for me, apart from being able to preview the template, is the way the view is rendered at runtime. Whereas the application stack has to defer the rendering of a JSP to the servlet container, it has full control over the rendering of Thymeleaf templates. To clarify this a little more: with JSP as the view technology, an application only returns the location of the JSP, and it is up to the servlet container to render it. Why is this such a big deal? Because with the MVC test support in the spring-test module, the actual rendered content can now be asserted on, rather than just the name of the view.

Consider a sample Spring MVC controller:

@Controller
@RequestMapping("/shop")
public class ShopController {
    ...
    @RequestMapping("/products")
    public String listProducts(Model model) {
        model.addAttribute("products", this.productRepository.findAll());
        return "products/list";
    }
}

Had the view been JSP based, I would have had a test which looks like this:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = SampleWebApplication.class)
public class ShopControllerWebTests {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
    }

    @Test
    public void testListProducts() throws Exception {
        this.mockMvc.perform(get("/shop/products"))
            .andExpect(status().isOk())
            .andExpect(view().name("products/list"));
    }
}

The assertion is only on the name of the view. Now consider a test with Thymeleaf as the view technology:

@Test
public void testListProducts() throws Exception {
    this.mockMvc.perform(get("/shop/products"))
        .andExpect(status().isOk())
        .andExpect(content().string(containsString("Dummy Book1")));
}

Here I am asserting on the actual rendered content. This is really good: whereas with JSP I would have had to validate at runtime, with a real container, that the JSP renders correctly, with Thymeleaf I can validate that the rendering is clean purely through tests.
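A related variation, not from the original post: if you do not want to load a full application context, a Thymeleaf view resolver can be wired into a standalone MockMvc setup. The sketch below targets the thymeleaf-spring3 API that was current at the time; the templates/ prefix is an assumed template location.

import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.thymeleaf.spring3.SpringTemplateEngine;
import org.thymeleaf.spring3.view.ThymeleafViewResolver;
import org.thymeleaf.templateresolver.ClassLoaderTemplateResolver;

public class ThymeleafMockMvcSupport {

    // Build a MockMvc around a single controller instance, rendering Thymeleaf templates
    // found on the test classpath, without loading a full Spring application context.
    public static MockMvc mockMvcFor(Object controller) {
        ClassLoaderTemplateResolver templateResolver = new ClassLoaderTemplateResolver();
        templateResolver.setPrefix("templates/");   // assumed location of the .html templates
        templateResolver.setSuffix(".html");
        templateResolver.setTemplateMode("HTML5");

        SpringTemplateEngine templateEngine = new SpringTemplateEngine();
        templateEngine.setTemplateResolver(templateResolver);

        ThymeleafViewResolver viewResolver = new ThymeleafViewResolver();
        viewResolver.setTemplateEngine(templateEngine);

        return MockMvcBuilders.standaloneSetup(controller)
                .setViewResolvers(viewResolver)
                .build();
    }
}

A test would then call mockMvcFor(...) with a hand-wired controller instance in its @Before method and keep the same content().string(...) assertions, since the template is still rendered inside the test.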
April 15, 2014
by Biju Kunjummen
· 26,668 Views · 2 Likes
How to Convert C# Object Into JSON String with JSON.NET
Some time ago I wrote a blog post, Converting a C# object into JSON string. In the comments on that post, one reader, Thomas Levesque, pointed out that most people use JSON.NET, a popular, high-performance JSON framework for .NET created by James Newton-King. I agree with him: if you are on .NET Framework 4.0 or a higher version, JSON.NET is the better choice, while for earlier versions JavaScriptSerializer is still fine. So in this post we are going to learn how to convert a C# object into a JSON string with the JSON.NET framework.

What is JSON.NET? JSON.NET is a very high-performance framework, compared to other serializers, for converting C# objects into JSON strings. It was created by James Newton-King. You can find more information about the framework at the following link: http://james.newtonking.com/json

How to convert a C# object into a JSON string with the JSON.NET framework: For this I am going to use the same application I used in the previous post. Following is an Employee class with two properties, first name and last name.

public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

I have created the same "Employee" object as in the previous post, like below.

Employee employee = new Employee { FirstName = "Jalpesh", LastName = "Vadgama" };

Now it's time to add the JSON.NET NuGet package, which you can install from NuGet via the Package Manager Console. Once the package is added, the following code converts the C# object into a JSON string.

string jsonString = Newtonsoft.Json.JsonConvert.SerializeObject(employee);
Console.WriteLine(jsonString);

Run the application and the output is the serialized JSON string, as expected. That's it; it's very easy. Hope you like it. Stay tuned for more.
April 14, 2014
by Jalpesh Vadgama
· 192,979 Views
Creating an Object Pool in Java
In this post, we will take a look at how we can create an object pool in Java. In recent times, JVM performance has been multiplied manifold and so object creation is no longer considered as expensive as it was done earlier. But there are few objects, for which creation of new object still seems to be slight costly as they are not considered as lightweight objects. e.g.: database connection objects, parser objects, thread creation etc. In any application we need to create multiple such objects. Since creation of such objects is costly, it’s a sure hit for the performance of any application. It would be great if we can reuse the same object again and again. Object Pools are used for this purpose. Basically, object pools can be visualized as a storage where we can store such objects so that stored objects can be used and reused dynamically. Object pools also controls the life-cycle of pooled objects. As we understood the requirement, let’s come to real stuff. Fortunately, there are various open source object pooling frameworks available, so we do not need to reinvent the wheel. In this post we will be using apache commons pool to create our own object pool. At the time of writing this post Version 2.2 is the latest, so let us use this. The basic thing we need to create is- 1. A pool to store heavyweight objects (pooled objects). 2. A simple interface, so that client can - a.) Borrow pooled object for its use. b.) Return the borrowed object after its use. Let’s start with Parser Objects. Parsers are normally designed to parse some document like xml files, html files or something else. Creating new xml parser for each xml file (having same structure) is really costly. One would really like to reuse the same (or few in concurrent environment) parser object(s) for xml parsing. In such scenario, we can put some parser objects into pool so that they can be reused as and when needed. Below is a simple parser declaration: package blog.techcypher.parser; /** * Abstract definition of Parser. * * @author abhishek * */ public interface Parser { /** * Parse the element E and set the result back into target object T. * * @param elementToBeParsed * @param result * @throws Exception */ public void parse(E elementToBeParsed, T result) throws Exception; /** * Tells whether this parser is valid or not. This will ensure the we * will never be using an invalid/corrupt parser. * * @return */ public boolean isValid(); /** * Reset parser state back to the original, so that it will be as * good as new parser. * */ public void reset(); } Let’s implement a simple XML Parser over this as below: package blog.techcypher.parser.impl; import blog.techcypher.parser.Parser; /** * Parser for parsing xml documents. * * @author abhishek * * @param * @param */ public class XmlParser implements Parser { private Exception exception; @Override public void parse(E elementToBeParsed, T result) throws Exception { try { System.out.println("[" + Thread.currentThread().getName()+ "]: Parser Instance:" + this); // Do some real parsing stuff. } catch(Exception e) { this.exception = e; e.printStackTrace(System.err); throw e; } } @Override public boolean isValid() { return this.exception == null; } @Override public void reset() { this.exception = null; } } At this point, as we have parser object we should create a pool to store these objects. Here, we will be using GenericObjectPool to store the parse objects. Apache commons pool has already build-in classes for pool implementation. GenericObjectPool can be used to store any object. 
Each pool can contain same kind of object and they have factory associated with them. GenericObjectPool provides a wide variety of configuration options, including the ability to cap the number of idle or active instances, to evict instances as they sit idle in the pool, etc. If you want to create multiple pools for different kind of objects (e.g. parsers, converters, device connections etc.) then you should use GenericKeyedObjectPool . package blog.techcypher.parser.pool; import org.apache.commons.pool2.PooledObjectFactory; import org.apache.commons.pool2.impl.GenericObjectPool; import org.apache.commons.pool2.impl.GenericObjectPoolConfig; import blog.techcypher.parser.Parser; /** * Pool Implementation for Parser Objects. * It is an implementation of ObjectPool. * * It can be visualized as- * +-------------------------------------------------------------+ * | ParserPool | * +-------------------------------------------------------------+ * | [Parser@1, Parser@2,...., Parser@N] | * +-------------------------------------------------------------+ * * @author abhishek * * @param * @param */ public class ParserPool extends GenericObjectPool>{ /** * Constructor. * * It uses the default configuration for pool provided by * apache-commons-pool2. * * @param factory */ public ParserPool(PooledObjectFactory> factory) { super(factory); } /** * Constructor. * * This can be used to have full control over the pool using configuration * object. * * @param factory * @param config */ public ParserPool(PooledObjectFactory> factory, GenericObjectPoolConfig config) { super(factory, config); } } As we can see, the constructor of pool requires a factory to manage lifecycle of pooled objects. So we need to create a parser factory which can create parser objects. Commons pool provide generic interface for defining a factory(PooledObjectFactory). PooledObjectFactory create and manage PooledObjects . These object wrappers maintain object pooling state, enabling PooledObjectFactory methods to have access to data such as instance creation time or time of last use. A DefaultPooledObject is provided, with natural implementations for pooling state methods. The simplest way to implement a PoolableObjectFactory is to have it extend BasePooledObjectFactory . This factory provides a makeObject() that returns wrap(create()) where create and wrap are abstract. We provide an implementation of create to create the underlying objects that we want to manage in the pool and wrap to wrap created instances in PooledObjects. So, here is our factory implementation for parser objects- package blog.techcypher.parser.pool; import org.apache.commons.pool2.BasePooledObjectFactory; import org.apache.commons.pool2.PooledObject; import org.apache.commons.pool2.impl.DefaultPooledObject; import blog.techcypher.parser.Parser; import blog.techcypher.parser.impl.XmlParser; /** * Factory to create parser object(s). 
* * @author abhishek * * @param * @param */ public class ParserFactory extends BasePooledObjectFactory> { @Override public Parser create() throws Exception { return new XmlParser(); } @Override public PooledObject> wrap(Parser parser) { return new DefaultPooledObject>(parser); } @Override public void passivateObject(PooledObject> parser) throws Exception { parser.getObject().reset(); } @Override public boolean validateObject(PooledObject> parser) { return parser.getObject().isValid(); } } Now, at this point we have successfully created our pool to store parser objects and we have a factory as well to manage the life-cycle of parser objects. You should notice that, we have implemented couple of extra methods- 1. boolean validateObject(PooledObject obj): This is used to validate an object borrowed from the pool or returned to the pool based on configuration. By default, validation remains off. Implementing this ensures that client will always get a valid object from the pool. 2. void passivateObject(PooledObject obj): This is used while returning an object back to pool. In the implementation we can reset the object state, so that the object behaves as good as a new object on another borrow. Since, we have everything in place, let’s create a test to test this pool. Pool clients can – 1. Get object by calling pool.borrowObject() 2. Return the object back to pool by calling pool.returnObject(object) Below is our code to test Parser Pool- package blog.techcypher.parser; import static org.junit.Assert.fail; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.ExecutorService; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import junit.framework.Assert; import org.apache.commons.pool2.impl.GenericObjectPoolConfig; import org.junit.Before; import org.junit.Test; import blog.techcypher.parser.pool.ParserFactory; import blog.techcypher.parser.pool.ParserPool; /** * Test case to test- * 1. object creation by factory * 2. object borrow from pool. * 3. returning object back to pool. 
* * @author abhishek * */ public class ParserFactoryTest { private ParserPool pool; private AtomicInteger count = new AtomicInteger(0); @Before public void setUp() throws Exception { GenericObjectPoolConfig config = new GenericObjectPoolConfig(); config.setMaxIdle(1); config.setMaxTotal(1); /*---------------------------------------------------------------------+ |TestOnBorrow=true --> To ensure that we get a valid object from pool | |TestOnReturn=true --> To ensure that valid object is returned to pool | +---------------------------------------------------------------------*/ config.setTestOnBorrow(true); config.setTestOnReturn(true); pool = new ParserPool(new ParserFactory(), config); } @Test public void test() { try { int limit = 10; ExecutorService es = new ThreadPoolExecutor(10, 10, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue(limit)); for (int i=0; i parser = null; try { parser = pool.borrowObject(); count.getAndIncrement(); parser.parse(null, null); } catch (Exception e) { e.printStackTrace(System.err); } finally { if (parser != null) { pool.returnObject(parser); } } } }; es.submit(r); } es.shutdown(); try { es.awaitTermination(1, TimeUnit.MINUTES); } catch (InterruptedException ignored) {} System.out.println("Pool Stats:\n Created:[" + pool.getCreatedCount() + "], Borrowed:[" + pool.getBorrowedCount() + "]"); Assert.assertEquals(limit, count.get()); Assert.assertEquals(count.get(), pool.getBorrowedCount()); Assert.assertEquals(1, pool.getCreatedCount()); } catch (Exception ex) { fail("Exception:" + ex); } } } Result: [pool-1-thread-1]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-2]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-3]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-4]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-5]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-8]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-7]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-9]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-6]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 [pool-1-thread-10]: Parser Instance:blog.techcypher.parser.impl.XmlParser@fcfa52 Pool Stats: Created:[1], Borrowed:[10] You can easily see that single parser object was created and reused dynamically. Commons Pool 2 stands far better in term of performance and scalability over Commons Pool 1. Also, version 2 includes robust instance tracking and pool monitoring. Commons Pool 2 requires JDK 1.6 or above. There are lots of configuration options to control and manage the life-cycle of pooled objects. And so ends our long post… :-) Hope this article helped. Keep learning!
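One refinement the post does not show: if parse() throws, returning the parser to the pool would recycle a possibly corrupted instance, and commons-pool2 lets the caller discard it with invalidateObject() instead. Below is a minimal sketch of that borrow/use/return pattern; it is a hypothetical helper, shown with the Parser and ParserPool types stripped of their generic parameters for brevity, so adapt it to the actual signatures.

// Hypothetical helper, not part of the article: borrow, use, and safely return or discard a parser.
void parseWithPool(ParserPool pool, Object document, Object result) throws Exception {
    Parser parser = null;
    try {
        parser = pool.borrowObject();       // may block or fail, depending on pool configuration
        parser.parse(document, result);
    } catch (Exception e) {
        if (parser != null) {
            pool.invalidateObject(parser);  // discard the broken instance; the factory will create a fresh one
            parser = null;                  // prevent the finally block from also returning it
        }
        throw e;
    } finally {
        if (parser != null) {
            pool.returnObject(parser);      // healthy instances go back for reuse
        }
    }
}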
April 14, 2014
by Abhishek Kumar
· 100,666 Views · 9 Likes
How to Migrate from MySQL to MongoDB
Over the last week I was working on a key project to migrate a BI platform from MySQL to MongoDB. We chose that database for its scalability and support.
April 14, 2014
by Moshe Kaplan
· 115,556 Views · 6 Likes
How JOIN Order Can Increase Performance in SQL Queries
Introduction

All developers care a great deal about performance: if something is said to improve performance, everyone runs after it. That is not a bad practice at all; in my view we should spend real effort on improving query performance. One common question is whether changing the order of tables in an inner join affects or improves performance.

To understand it, let's take a simple example of an inner join between two tables named Table-A and Table-B. We can write the join either way:

FROM [Table-A] AS a INNER JOIN [Table-B] AS b ON a.IDNO = b.IDNO

or

FROM [Table-B] AS a INNER JOIN [Table-A] AS b ON a.IDNO = b.IDNO

Which one performs better? To answer this question, remember that whenever a SQL query is executed, MS SQL Server creates several query plans with different join orders and chooses the best one. That means the join order we write in the query may not be the one the execution plan actually uses; the optimizer picks the join order with the lowest estimated cost. Here, SQL Server knows very well that [Table-A] JOIN [Table-B] and [Table-B] JOIN [Table-A] are the same.

To see the details, let's take an example.

Step 1 [ Create the base tables and insert some records ]

-- Item Master
IF OBJECT_ID(N'dbo.tbl_ITEMDTLS', N'U') IS NOT NULL
BEGIN
    DROP TABLE [dbo].[tbl_ITEMDTLS];
END
GO
CREATE TABLE [dbo].[tbl_ITEMDTLS]
(
    ITEMCD INT NOT NULL IDENTITY PRIMARY KEY,
    ITEMNAME VARCHAR(50) NOT NULL
)
GO
-- Inserting Records
INSERT INTO [dbo].[tbl_ITEMDTLS] (ITEMNAME) VALUES ('ITEM-1'),('ITEM-2'),('ITEM-3');

-- Item UOM Master
IF OBJECT_ID(N'dbo.tbl_UOMDTLS', N'U') IS NOT NULL
BEGIN
    DROP TABLE [dbo].[tbl_UOMDTLS];
END
GO
CREATE TABLE [dbo].[tbl_UOMDTLS]
(
    UOMCD INT NOT NULL IDENTITY PRIMARY KEY,
    UOMNAME VARCHAR(50) NOT NULL
)
GO
-- Inserting Records
INSERT INTO [dbo].[tbl_UOMDTLS] (UOMNAME) VALUES ('KG'),('LTR'),('GRM');
GO

-- Transaction Table
IF OBJECT_ID(N'dbo.tbl_SBILL', N'U') IS NOT NULL
BEGIN
    DROP TABLE [dbo].[tbl_SBILL];
END
GO
CREATE TABLE [dbo].[tbl_SBILL]
(
    TRID INT NOT NULL IDENTITY PRIMARY KEY,
    ITEMCD INT NOT NULL,
    UOMCD INT NOT NULL,
    QTY DECIMAL(18,3) NOT NULL,
    RATE DECIMAL(18,2) NOT NULL,
    AMOUNT AS QTY * RATE
);
GO
-- Foreign Key Constraint
ALTER TABLE [dbo].[tbl_SBILL] ADD CONSTRAINT FK_ITEM_tbl_SBILL FOREIGN KEY(ITEMCD) REFERENCES [dbo].[tbl_ITEMDTLS](ITEMCD);
GO
ALTER TABLE [dbo].[tbl_SBILL] ADD CONSTRAINT FK_UOMCD_tbl_SBILL FOREIGN KEY(UOMCD) REFERENCES [dbo].[tbl_UOMDTLS](UOMCD);

-- Insert Records
INSERT INTO [dbo].[tbl_SBILL] (ITEMCD, UOMCD, QTY, RATE) VALUES (1, 1, 20, 2000),(2, 3, 23, 1400);

Step 2 [ Now make a JOIN ]

SELECT b.TRID, b.ITEMCD, a.ITEMNAME, b.UOMCD, c.UOMNAME, b.QTY, b.RATE, b.AMOUNT
FROM [dbo].[tbl_ITEMDTLS] AS a
INNER JOIN [dbo].[tbl_SBILL] AS b ON a.ITEMCD = b.ITEMCD
INNER JOIN [dbo].[tbl_UOMDTLS] AS c ON b.UOMCD = c.UOMCD;

Here the query is written as [tbl_ITEMDTLS] JOIN [tbl_SBILL] JOIN [tbl_UOMDTLS]. If we look at the execution plan, we find that SQL Server actually joins [tbl_SBILL] JOIN [tbl_ITEMDTLS] JOIN [tbl_UOMDTLS].

Step 3 [ Now force the optimizer to keep the written join order ]

SELECT b.TRID, b.ITEMCD, a.ITEMNAME, b.UOMCD, c.UOMNAME, b.QTY, b.RATE, b.AMOUNT
FROM [dbo].[tbl_ITEMDTLS] AS a
INNER JOIN [dbo].[tbl_SBILL] AS b ON a.ITEMCD = b.ITEMCD
INNER JOIN [dbo].[tbl_UOMDTLS] AS c ON b.UOMCD = c.UOMCD
OPTION ( QUERYRULEOFF JoinCommute );

For this we can use the FORCE ORDER hint, or go one level deeper: the query optimizer uses different rules to evaluate different plans, and one of those rules is called JoinCommute. We can turn it off using the undocumented query hint QUERYRULEOFF, as shown above. Hope you like it.
April 14, 2014
by Joydeep Das
· 26,314 Views
How to Set Up Remote Debugging with WebLogic Server and Eclipse
Here is how I enable remote debugging with WebLogic Server (11g) and the Eclipse IDE. (The Java option is actually the same for any JVM; only the instructions here are WLS specific.)

1. Edit the setDomainEnv.sh file under your domain's bin directory and add this at the top:

JAVA_OPTIONS="$JAVA_OPTIONS -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y"

With suspend=y the server starts and waits for you to connect with the IDE before continuing. If you don't want this, set suspend=n instead.

2. Start/restart WLS with your domain's bin/startWebLogic.sh.

3. Once WLS is running, you can connect to it from Eclipse. Go to Run > Debug Configurations... > Remote Java Application and create a new entry. Make sure the port number matches the one used above.

Read more about Java debugging options here: http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#DebuggingOptions
April 12, 2014
by Zemian Deng
· 72,813 Views
Tracking Exceptions - Part 4 - Spring's Mail Sender
If you've read any of the previous blogs in this series, you may remember that I'm developing a small but almost industrial strength application that searches log files for exceptions. You may also remember that I now have a class that can contain a whole bunch of results that will need sending to any one whose interested. This will be done by implementing my simple Publisher interface shown below. public interface Publisher { public boolean publish(T report);} If you remember, the requirement was: 7 . Publish the report using email or some other technique. In this blog I’m dealing with the concrete part of the requirement: sending a report by email. As this is a Spring app, then the simplest way of sending an email is to use Spring’s email classes. Unlike those stalwarts of the Spring API, template classes such as JdbcTemplate and JmsTemplate, the Spring email classes are based around a couple of interfaces and their implementations. The interfaces are: MailSender JavaMailSender extends MailSender MailMessage …and the implementations are: JavaMailSenderImpl implements JavaMailSender SimpleMailMessage implements MailMessage Note that these are the ‘basic’ classes; you can send nicer looking, more sophisticated email content using classes such as: MimeMailMessage, MimeMailMessageHelper, ConfigurableMimeFileTypeMap and MimeMessagePreparator. Before getting down to some code, there’s the little matter of project configuration. To use the Spring email classes, you need the following entry in your Maven POM file: javax.mail mail 1.4 This ensures that the underlying Java Mail classes are available to your application. Once the Java Mail classes are configured in the build, the next thing to do is to set up the Spring XML config. For the purposes of this app, which is sending out automated reports, I’ve included two Spring beans: mailSender and mailMessage.mailSender, is a JavaMailSenderImpl instance configured to use a specific SMTP mail server, with all other properties, such as TCP port, left as defaults. The second Spring bean is mailMessage, an instance of SimpleMailMessage. This time I’ve pre-configured three properties: ‘to’, ‘from’ and ‘subject’. This is because, being automated messages, these values are always identical. You can of course configure these programatically, something you’d probably need to do if you were creating a mail GUI. All this XML makes the implementation of the Publisher very simple. @Service public class EmailPublisher implements Publisher { private static final Logger logger = LoggerFactory.getLogger(EmailPublisher.class); @Autowired private MailSender mailSender; @Autowired private SimpleMailMessage mailMessage; @Override public boolean publish(T report) { logger.debug("Sending report by email..."); boolean retVal = false; try { String message = (String) report; mailMessage.setText(message); mailSender.send(mailMessage); retVal = true; } catch (Exception e) { logger.error("Can't send email... " + e.getMessage(), e); } return retVal;} } The Publisher class contains one method: publish, which takes a generic argument T report. This, as I’ve said before, has to be the same type as the argument returned by the Formatter implementation from my previous blog. There are only really three steps in this code to consider: firstly, the generic T is cast to a String (this is where it’ll all fall over if the argument T report isn’t a String. The second step is to attach the email’s text to the mailMessage and then to send the message using mailSender.send(…). 
The final step is fulfil the Publisher contract by returning true, unless the email fails to send in which case the exception is logged and the return value is false. In terms of developing the code that’s about it. The next step is to sort out the scheduling, so that the report is generated on time, but more on that later… The code for this blog is available on Github at: https://github.com/roghughe/captaindebug/tree/master/error-track. If you want to look at other blogs in this series take a look here… Tracking Application Exceptions With Spring Tracking Exceptions With Spring - Part 2 - Delegate Pattern Error Tracking Reports - Part 3 - Strategy and Package Private
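The article configures the mailSender and mailMessage beans in Spring XML, but the snippet itself is not reproduced here. As a rough Java-configuration equivalent, here is a sketch of what those two beans could look like; this is not the author's original configuration, and the host name, addresses, and subject are placeholders.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSenderImpl;

@Configuration
public class MailConfig {

    @Bean
    public JavaMailSenderImpl mailSender() {
        JavaMailSenderImpl sender = new JavaMailSenderImpl();
        sender.setHost("smtp.example.com"); // specific SMTP server; other properties such as port left as defaults
        return sender;
    }

    @Bean
    public SimpleMailMessage mailMessage() {
        // 'to', 'from' and 'subject' are pre-configured because the report messages are fully automated
        SimpleMailMessage message = new SimpleMailMessage();
        message.setTo("ops@example.com");
        message.setFrom("error-tracker@example.com");
        message.setSubject("Daily exception report");
        return message;
    }
}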
April 11, 2014
by Roger Hughes
· 12,941 Views · 1 Like
Be a Lazy But Productive Android Developer, Part 5: Image Loading Library
Welcome to part 5 of “Be a lazy but a productive android developer” series. If you are a lazy Android developer and looking for image loading library, which could help you to load image(s) asynchronously without writing a logic for downloading and caching images then this article is for you. This series so far: Part 1: We looked at RoboGuice, a dependency injection library by which we can reduce the boiler plate code, save time and there by achieve productivity during Android app development. Part 2: We saw and explored about Genymotion, which is a rocket speed emulator and super-fast emulator as compared to native emulator. And we can use Genymotion while developing apps and can quickly test apps and there by can achieve productivity. Part 3: We understood and explored about JSON Parsing libraries (GSON and Jackson), using which we can increase app performance, we can decrease boilerplate code and there by can optimize productivity. Part 4: We talked about Card UI and explored card library, also created a basic card and simple card list demo. In this part In this part, we are going to talk about some image libraries using which we can load image(s) asynchronously, can cache images and also can download images into the local storage. Required features for loading images Almost every android app has a need to load remote images. While loading remote images, we have to take care of below things: Image loading process must be done in background (i.e. asynchronously) to avoid blocking UI main thread. Image recycling image should be done. Image should be displayed once its loaded successfully. Images should be cached in local memory for the later use. If remote image gets failed (due to network connection or bad url or any other reasons) to load then it should be managed perfectly for avoiding duplicate requests to load the same again, instead it should load if and only if net connection is available. Memory management should be done efficiently. In short, we have to write a code to manage each and every aspects of image loading but there are some awesome libraries available, using which we can load/download image asynchronously. We just have to call the load image method and success/failure callbacks. Asynchronous image loading Consider a case where we are having 50 images and 50 titles and we try to load all the images/text into the listview, it won’t display anything until all the images get downloaded. Here Asynchronous image loading process comes in picture. Asynchronous image loading is nothing but a loading process which happens in background so that it doesn’t block main UI thread and let user to play with other loaded data on the screen. Images will be getting displayed as and when it gets downloaded from background threads. Asynchronous image loading libraries Nostra’s Universal Image loader – https://github.com/nostra13/Android-Universal-Image-Loader Picasso – http://square.github.io/picasso/ UrlImageViewHelper by Koush Volley - By Android team members @ Google Novoda’s Image loader – https://github.com/novoda/ImageLoader Let’s have a look at examples using Picasso and Universal Image loader libraries. Example 1: Nostra’s Universal Image loader Step 1: Initialize ImageLoader configuration ? 
public class MyApplication extends Application{ @Override public void onCreate() { // TODO Auto-generated method stub super.onCreate(); // Create global configuration and initialize ImageLoader with this configuration ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(getApplicationContext()).build(); ImageLoader.getInstance().init(config); } } Step 2: Declare application class inside Application tag in AndroidManifest.xml file ? Step 3: Load image and display into ImageView ? ImageLoader.getInstance().displayImage(objVideo.getThumb(), holder.imgVideo); Now, Universal Image loader also provides a functionality to implement success/failure callback to check whether image loading is failed or successful. ? ImageLoader.getInstance().displayImage(photoUrl, imgView, new ImageLoadingListener() { @Override public void onLoadingStarted(String arg0, View arg1) { // TODO Auto-generated method stub findViewById(R.id.EL3002).setVisibility(View.VISIBLE); } @Override public void onLoadingFailed(String arg0, View arg1, FailReason arg2) { // TODO Auto-generated method stub findViewById(R.id.EL3002).setVisibility(View.GONE); } @Override public void onLoadingComplete(String arg0, View arg1, Bitmap arg2) { // TODO Auto-generated method stub findViewById(R.id.EL3002).setVisibility(View.GONE); } @Override public void onLoadingCancelled(String arg0, View arg1) { // TODO Auto-generated method stub findViewById(R.id.EL3002).setVisibility(View.GONE); } }); Example 2: Picasso Image loading straight way: ? Picasso.with(context).load("http://postimg.org/image/wjidfl5pd/").into(imageView); Image re-sizing: ? Picasso.with(context) .load(imageUrl) .resize(100, 100) .centerCrop() .into(imageView) Example 3: UrlImageViewHelper library It’s an android library that sets an ImageView’s contents from a url, manages image downloading, caching, and makes your coffee too. UrlImageViewHelper will automatically download and manage all the web images and ImageViews. Duplicate urls will not be loaded into memory twice. Bitmap memory is managed by using a weak reference hash table, so as soon as the image is no longer used by you, it will be garbage collected automatically. Image loading straight way: ? UrlImageViewHelper.setUrlDrawable(imgView, "http://yourwebsite.com/image.png"); Placeholder image when image is being downloaded: ? UrlImageViewHelper.setUrlDrawable(imgView, "http://yourwebsite.com/image.png", R.drawable.loadingPlaceHolder); Cache images for a minute only: ? UrlImageViewHelper.setUrlDrawable(imgView, "http://yourwebsite.com/image.png", null, 60000); Example 4: Volley library Yes Volley is a library developed and being managed by some android team members at Google, it was announced by Ficus Kirkpatrick during the last I/O. I wrote an article about Volley library 10 months back , read it and give it a try if you haven’t used it yet. Let’s look at an example of image loading using Volley. Step 1: Take a NetworkImageView inside your xml layout. ? Step 2: Define a ImageCache class Yes you are reading title perfectly, we have to define an ImageCache class for initializing ImageLoader object. ? 
public class BitmapLruCache extends LruCache implements ImageLoader.ImageCache { public BitmapLruCache() { this(getDefaultLruCacheSize()); } public BitmapLruCache(int sizeInKiloBytes) { super(sizeInKiloBytes); } @Override protected int sizeOf(String key, Bitmap value) { return value.getRowBytes() * value.getHeight() / 1024; } @Override public Bitmap getBitmap(String url) { return get(url); } @Override public void putBitmap(String url, Bitmap bitmap) { put(url, bitmap); } public static int getDefaultLruCacheSize() { final int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024); final int cacheSize = maxMemory / 8; return cacheSize; } } Step 3: Create an ImageLoader object and load image Create an ImageLoader object and initialize it with ImageCache object and RequestQueue object. ? ImageLoader.ImageCache imageCache = new BitmapLruCache(); ImageLoader imageLoader = new ImageLoader(Volley.newRequestQueue(context), imageCache); Step 4: Load an image into ImageView ? NetworkImageView imgAvatar = (NetworkImageView) findViewById(R.id.imgDemo); imageView.setImageUrl(url, imageLoader); Which library to use? Can you decide which library you would use? Let us know which and what are the reasons? Selection of the library is always depends on the requirement. Let’s look at the few fact points about each library so that you would able to compare exactly and can take decision. Picasso: It’s just a one liner code to load image using Picasso. No need to initialize ImageLoader and to prepare a singleton instance of image loader. Picasso allows you to specify exact target image size. It’s useful when you have memory pressure or performance issues, you can trade off some image quality for speed. Picasso doesn’t provide a way to prepare and store thumbnails of local images. Sometimes you need to check image loading process is in which state, loading, finished execution, failed or cancelled image loading. Surprisingly It doesn’t provide a callback functionality to check any state. “fetch()” dose not pass back anything. “get()” is for synchronously read, and “load()” is for asynchronously draw a view. Universal Image loader (UIL): It’s the most popular image loading library out there. Actually, it’s based on the Fedor Vlasov’s project which was again probably a very first complete solution and also a most voted answer (for the image loading solution) on Stackoverflow. UIL library is better in documentation and even there’s a demo example which highlights almost all the features. UIL provides an easy way to download image. UIL uses builders for customization. Almost everything can be configured. UIL doesn’t not provide a way to specify image size directly you want to load into a view. It uses some rules based on the size of the view. Indirectly you can do it by mentioning ImageSize argument in the source code and bypass the view size checking. It’s not as flexible as Picasso. Volley: It’s officially by Android dev team, Google but still it’s not documented. It’s just not an image loading library only but an asynchronous networking library Developer has to define ImageCache class their self and has to initialize ImageLoader object with RequestQueue and ImageCache objects. So now I am sure now you can be able to compare libraries. Choosing library is a bit difficult talk because it always depends on the requirement and type of projects. If the project is large then you should go for Picasso or Universal Image loader. 
If the project is small then you can consider to use Volley librar, because Volley isn’t an image loading library only but it tries to solve a more generic solution.). I suggest you to start with Picasso. If you want more control and customization, go for UIL. Read more: http://blog.bignerdranch.com/3177-solving-the-android-image-loading-problem-volley-vs-picasso/ http://stackoverflow.com/questions/19995007/local-image-caching-solution-for-android-square-picasso-vs-universal-image-load https://plus.google.com/103583939320326217147/posts/bfAFC5YZ3mq Hope you liked this part of “Lazy android developer: Be productive” series. Till the next part, keep exploring image loading libraries mentioned above and enjoy!
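To make the Picasso comparison above concrete, here is a minimal sketch of binding a remote thumbnail into a list row; the URL, drawable resource names, and the ThumbnailBinder class are my own illustration rather than code from the article, while the fluent calls are the standard Picasso 2.x API.

import android.widget.ImageView;
import com.squareup.picasso.Picasso;

public class ThumbnailBinder {

    // Load a remote thumbnail into an ImageView: asynchronous download, memory/disk caching,
    // and cancellation of requests for recycled rows are all handled by Picasso.
    public void bind(ImageView imageView, String thumbnailUrl) {
        Picasso.with(imageView.getContext())
               .load(thumbnailUrl)
               .placeholder(R.drawable.thumb_placeholder) // assumed drawable shown while the download is in flight
               .error(R.drawable.thumb_error)             // assumed drawable shown if the download fails
               .resize(100, 100)                          // request an exact target size
               .centerCrop()
               .into(imageView);
    }
}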
April 11, 2014
by Paresh Mayani
· 63,771 Views · 2 Likes
Integrating Node.js with a C# DLL
Recently I had to integrate a Node.js based server application with a C# DLL. Our software (a web-app) offers the possibility to execute payments over a POS terminal. This latter one is controllable through a dedicated DLL which exposes interfaces like ExecutePayment(operation, amount) and so on. As I mentioned, there is the Node.js server that somehow exposes the functionality of the POS (and some more) as a REST api. (The choice for using Node.js had specific reasons which I wouldn't want to outline right now). When you start with such an undertaking, then there are different possibilities. One is to use Edge.js which allows you to embed, reference and invoke .Net CLR objects from within your Node.js based applications. Something like this: var hello = require('edge').func({ assemblyFile: 'My.Edge.Samples.dll', typeName: 'Samples.FooBar.MyType', methodName: 'MyMethod' // Func> }); hello('Node.js', function (error, result) { ... }); Edge is a very interesting project and has a lot of potential. In fact, I just tried it quickly with a simple DLL and it worked right away. However, when using it from my Node app within node-webkit it didn't work. I'm not yet sure whether it was related to node-webkit or the POS DLL itself (because it might be COM exposed etc..). However, if you need simple integrations this might work well for you. Process invocation A second option that came to my mind is to design the DLL as a self-contained process and to invoke it using Node.js's process api. Turns out this is quite simple. Just prepare your C# application to read it's invocation arguments s.t. you can do something like.. IntegrationConsole.exe ExecutePayment 1 100 ..to "ExecutePayment" with operation number 1 and an amount of 1€. The C# console application needs to communicate it's return values to the STDOUT (you may use JSON for creating a more structured information exchange protocol format). Once you have this, you can simply execute the process from Node.js and read the according STDOUT: var process = require('child_process'); ... process.exec(execCmd, function (error, stdout, stderr) { var result = stdout; ... writeToResponse(stdout); }); execCmd is holds the instructions required to launch the EXE with the required invocation arguments. In this approach you execute the process, it does its job, returns the response and terminates. If for some reason however, you need to keep the process running for having a longer, kind of more interactive communication between the two components, you can communicate through the STDIN/STDOUT of the process. Your C# console application starts and listens on the STDIN.. static void Main(string[] args) { ... string line; do { line = Console.ReadLine(); try { // do something meaningful with the input // write to STDOUT to respond to the caller } catch (Exception e) { Console.WriteLine(e.Message); } } while (line != null); } On the Node.js side you do not exec your process, but instead you spawn a child process. var spawn = require('child_process').spawn; ... var posProc = spawn('IntegrationConsole.exe', ['ExecutePayment', 1, 100]); For getting the responses, you simply register on the STDOUT of the process... posProc.stdout.once('data', function (data) { // write it back on the response object writeToResponse(data); }); ..and you may also want to listen for when the process dies to eventually perform some cleanup. posProc.on('exit', function (code) { ... 
}); Writing to the STDIN of the process is simple as well: posProc.stdin.setEncoding ='utf-8'; posProc.stdin.write('...'); In this way you have a more interactive, "stateful communication", where you send a command to the EXE which responds (STDOUT) and based on the response you again react and send some other command (STDIN). Embedding this in the Request/Response pattern To expose everything as a REST api (on Node), you need to pay some attention on the registration of the event handlers on STDOUT. Suppose you do something like app.post('/someEndpoint',function(req, res){ posProc = spawn('IntegrationConsole.exe',['ExecutePayment',1,100]);... posProc.stdout.on('data',function(data){// return the result of this execution// on the response});}), app.post('/someOtherEndpoint',function(req, res){... posProc.stdout.on('data',function(data){// return the result of this execution// on the response});// write to the stdin of the before created child process posProc.stdin.setEncoding ='utf-8'; posProc.stdin.write('...');}); I excluded proper edge case handling like what happens if your process died before etc.. but the key point here is that you cannot register your events by using on(..), as otherwise you'll end up having multiple data event handlers on the stdout. So you can either register and de-register the event by using the removeListener('event name', callback) syntax or use the more handy once registration mechanism (as I did already in my samples at the beginning of the article): posProc.stdout.once('data',function(data){// write it back on the response object writeToResponse(data);});
April 7, 2014
by Juri Strumpflohner
· 46,962 Views · 2 Likes
Developing an iOS Native App with Ionic
In my current project, I've been helping a client develop a native iOS app for their customers. It's written mostly in Objective-C and talks to a REST API. I talked about how we documented our REST API a couple days ago. We developed a prototype for this application back in December, using AngularJS and Bootstrap. Rather than using PhoneGap, we loaded our app in a UIWebView. It all seemed to work well until we needed to read an activation code with the device's camera. Since we didn't know how to do OCR in JavaScript, we figured a mostly-native app was the way to go. We hired an outside company to do iOS development in January and they've been developing the app since the beginning of February. In the last couple weeks, we encountered some screens that seemed fitting for HTML5, so we turned back to our AngularJS prototype. The prototype used Bootstrap heavily, but we quickly learned it didn't look like an iOS 7 app, which is what our UX Designer requested. A co-worker pointed out Ionic, developed by Drifty. It's basically Bootstrap for Native, so the apps you develop look and behave like a mobile application. What is Ionic? Free and open source, Ionic offers a library of mobile-optimized HTML, CSS and JS components for building highly interactive apps. Built with Sass and optimized for AngularJS. I started developing with Ionic a few weeks ago. Using its CSS classes and AngularJS directives, I was able to create several new screens in a matter of days. Most of the time, I was learning new things: how to override its back button behavior (to launch back into the native app), how to configure routes with ui-router, and how to make the $ionicLoading service look native. Now that I know a lot of the basics, I feel like I can really crank out some code. Tip: I learned how subviews work with ui-router thanks to a YouTube video of Tim Kindberg on Angular UI-Router. However, subviews never fully made sense until I saw Jared Bell's diagram. To demonstrate how easy it is to use Ionic, I whipped up a quick example application. You can get the source on GitHub at https://github.com/mraible/boot-ionic. The app is a refactored version of Josh Long's x-auth-security that uses Ionic instead of raw AngularJS and Bootstrap. To keep things simple, I did not develop the native app that wraps the HTML. Below are the steps I used to convert from AngularJS + Bootstrap to Ionic. If you want to convert a simple AngularJS app to use Ionic, hopefully this will help. 1. Download Ionic and add it to your project. Ionic 1.0 Beta was released earlier this week. You can download it from here. Add its files to your project. In this example, I added them to src/main/resources/public. In my index.html, I removed Bootstrap's CSS and replaced it with Ionic's. - + - + Next, I replaced Angular, Bootstrap and jQuery's JavaScript references. - - - + - What about WebJars? You might ask - why not use WebJars? You can, once this pull request is accepted and an updated version is deployed to Maven central. Here's how the application would change. 2. Change from Angular's Router to ui-router. Ionic uses ui-router for matching URLs and loading particular pages. The raw Angular routing looks pretty similar to how it does with ui-router, except it uses a $stateProvider service instead of $routeProvider. You'll notice I also added 'ionic' as a dependency. 
-angular.module('exampleApp', ['ngRoute', 'ngCookies', 'exampleApp.services']) +angular.module('exampleApp', ['ionic', 'ngCookies', 'exampleApp.services']) .config( - [ '$routeProvider', '$locationProvider', '$httpProvider', function($routeProvider, $locationProvider, $httpProvider) { + [ '$stateProvider', '$urlRouterProvider', '$httpProvider', function($stateProvider, $urlRouterProvider, $httpProvider) { - $routeProvider.when('/create', { templateUrl: 'partials/create.html', controller: CreateController}); + $stateProvider.state('create', {url: '/create', templateUrl: 'partials/create.html', controller: CreateController}) + .state('edit', {url: '/edit/:id', templateUrl: 'partials/edit.html', controller: EditController}) + .state('login', {url: '/login', templateUrl: 'partials/login.html', controller: LoginController}) + .state('index', {url: '/index', templateUrl: 'partials/index.html', controller: IndexController}); - $routeProvider.when('/edit/:id', { templateUrl: 'partials/edit.html', controller: EditController}); - $routeProvider.when('/login', { templateUrl: 'partials/login.html', controller: LoginController}); - $routeProvider.otherwise({templateUrl: 'partials/index.html', controller: IndexController}); - - $locationProvider.hashPrefix('!'); + $urlRouterProvider.otherwise('/index'); 3. Add Ionic elements to your index.html. In contrast to Bootstrap's navbar, Ionic has header and footer elements. Rather than using a ng-view directive, you use an . It's a pretty slick setup once you understand it, especially since they allow you to easily override back-button behavior and nav buttons. - - - - - - {{error} - - + + + {{error} + + + + Logout + + 4. Change your templates to use and . After routes are migrated and basic navigation is working, you'll need to modify your templates to use and . Here's a diff from the most complicated page in the app. - - Create - - - News - + + + + + + + + + + - - - - Remove - Edit - - {{newsEntry.date | date} - {{newsEntry.content} - - + + + {{newsEntry.date | date} + {{newsEntry.content} + + + + I did migrate to use an with delete/options buttons, so some additional JavaScript changes were needed. -function IndexController($scope, NewsService) { +function IndexController($scope, $state, NewsService) { $scope.newsEntries = NewsService.query(); + $scope.data = { + showDelete: false + }; + $scope.deleteEntry = function(newsEntry) { newsEntry.$remove(function() { $scope.newsEntries = NewsService.query(); }); }; + + $scope.itemButtons = [{ + text: 'Edit', + type: 'button-assertive', + onTap: function (item) { + $state.go('edit', {id: item.id}); + } + }]; } Screenshots After making all these changes, the app looks pretty good in Chrome. Tips and Tricks In additional to figuring out how to use Ionic, I discovered a few other tidbits along the way. First of all, we had a different default color for the header. Since Ionic uses generic color names (e.g. light, stable, positive, calm), I found it easy to change the default value for "positive" and then continue to use their class names. Modifying CSS variable colors To modify the base color for "positive", I cloned the source, and modified scss/_variables.scss. $light: #fff !default; $stable: #f8f8f8 !default; -$positive: #4a87ee !default; +$positive: #589199 !default; $calm: #43cee6 !default; $balanced: #66cc33 !default; $energized: #f0b840 !default; After making this change, I ran "grunt" and copied dist/css/ionic.css into our project. 
iOS Native Integration Our app uses a similar token-based authentication mechanism as x-auth-security, except its backed by Crowd. However, since users won't be logging directly into the Ionic app, we added the "else" clause in app.js to allow a token to be passed in via URL. We also allowed the backend API path to be overridden. /* Try getting valid user from cookie or go to login page */ var originalPath = $location.path(); $location.path("/login"); var user = $cookieStore.get('user'); if (user !== undefined) { $rootScope.user = user; $http.defaults.headers.common[xAuthTokenHeaderName] = user.token; $location.path(originalPath); } else { // token passed in from native app var authToken = $location.search().token; if (authToken) { $http.defaults.headers.common['X-Auth-Token'] = authToken; } } // allow overriding the base API path $rootScope.apiPath = '/api/v1.0'; if ($location.search().apiPath) { $rootScope.apiPath = $location.search().apiPath; } By adding this logic, the iOS app can pull up any particular page in a webview and let the Ionic app talk to the API. Here's what the Objective-C code looks like: NSString *versionNumber = @"v1.0"; NSString *apiPath = @"https://server.com/api/"; NSString *authToken = [TemporaryDataStore sharedInstance].authToken; // webapp is a symbolic link to the Ionic app, created with Angular Seed NSString *htmlFilePath = [[NSBundle mainBundle] pathForResource:@"index" ofType:@"html" inDirectory:@"webapp/app"]; // Note: We need to do it this way because 'fileURLWithPath:' would encode the '#' to '%23" which breaks the html page NSURL *htmlFileURL = [NSURL fileURLWithPath:htmlFilePath]; NSString *webappURLPath = [NSString stringWithFormat:@"%@#/news?apiPath=%@%@&token=%@", htmlFileURL.absoluteString, apiPath, versionNumber, authToken]; // Now convert the string to a URL (doesn't seem to encode the '#' this way) NSURL *webappURL = [NSURL URLWithString:webappURLPath]; [super updateWithURL:webappURL]; We also had to write some logic to navigate back to the native app. We used a custom URL scheme to do this, and the Ionic app simply called it. To override the default back button, I added an "ng-controller" attribute to and added a custom back button. To detect if the app was loaded by iOS (vs. a browser, which we tested in), we used the following logic: // set native app indicator if (document.location.toString().indexOf('appName.app') > -1) { $rootScope.isNative = true; } Our Ionic app has three entry points, defined by "stateName1", "stateName2" and "stateName3" in this example. The code for our NavController handles navigating back normally (when in a browser) or back to the native app. The "appName" reference below is a 3-letter acronym we used for our app. .controller('NavController', function($scope, $ionicNavBarDelegate, $state) { $scope.goBack = function() { if ($scope.isNative && backToNative($state)) { location.href='appName-ios://back'; } else { $ionicNavBarDelegate.back(); } }; function backToNative($state) { var entryPoints = ['stateName1', 'stateName2', 'stateName3']; return entryPoints.some(function (entry) { return $state.current === $state.get(entry); }); } }) Summary I've enjoyed working with Ionic over the last month. The biggest change I've had to make to our AngularJS app has been to integrate ui-router. Apart from this, the JavaScript didn't change much. However, the HTML had to change quite a bit. As far as CSS is concerned, I found myself tweaking things to fit our designs, but less so than I did with Bootstrap. 
When I've run into issues with Ionic, the community has been very helpful on their forum. It's the first forum I've used that's powered by Discourse, and I dig it. You can find the source from this article in my boot-ionic project. Clone it and run "mvn spring-boot:run", then open http://localhost:8080. If you're looking to create a native app using HTML5 technologies, I highly recommend you take a look at Ionic. We're glad we did. Angular 2.0 will target mobile apps and Ionic is already making them look pretty damn good.
April 7, 2014
by Matt Raible
· 14,128 Views
Visualizing SQL Statements
Usually, if I concentrate, I am able to understand most SQL statements. There are times, though, when that is harder: when a set of tables is not familiar; when I did not write the SQL statement; when the SQL statement is long and involves many tables and joins; when I want to discuss a statement with a colleague; or all of the above. Having a visual representation of a SQL statement can be helpful in deciphering it. My visualisation tool of choice for SQL is an open-source application called Reverse Snowflake Joins (REVJ). As the name implies, this tool shines when it comes to showing you how your tables are related. I have installed the tool on my workstation, but when I am on the move I use the online version. Using the tool is straightforward: simply paste your SQL statement into the text area and generate the diagram; the online version generates an SVG image. I have at times found that the tool struggles with complex CASE statements. In such cases I remove the CASE statement and just include the fields it uses. Below is a sample statement to show REVJ at work.

SELECT a.prod_cat_name, b.prod_name, c.prod_owner_name, p.promo_id, pt.promo_type,
       sum(s.units) as total_units, sum(s.sale_price) as total_sale_price, sum(prev_s.units) as prev_yr_total_units
FROM product_category a
JOIN product b ON a.product_cat_id = b.product_cat_id
LEFT OUTER JOIN product_owner c ON a.product_cat_id = c.product_cat_id
JOIN sales s ON b.product_id = s.product_id
JOIN sales prev_s ON s.sale_year = prev_s.sale_year-1
LEFT OUTER JOIN promotion p ON s.promo_id = p.promo_id
RIGHT OUTER JOIN promo_type pt ON p.promo_type_id = pt.promo_type_id
WHERE pt.promo_type IN ('Email', 'TV')
  AND a.prod_cat_name = 'Electronics'
  AND s.sale_year >= 2013
GROUP BY a.prod_cat_name, b.prod_name, c.prod_owner_name, p.promo_id, pt.promo_type
HAVING sum(s.units) > 100

In the generated diagram, notice how the filters applied to each table are also shown, further simplifying the task of understanding the SQL statement. For more complex examples, have a look at the big samples page.
April 7, 2014
by Mpumelelo Msimanga
· 18,499 Views
Groovy Goodness: Converting Byte Array to Hex String
To convert a byte[] array to a String we can simply use the new String(byte[]) constructor, but if the array contains non-printable bytes we don't get a good representation. In Groovy we can use the encodeHex() method to transform a byte[] array into a hex String value: the byte elements are converted to their hexadecimal equivalents.

final byte[] printable = [109, 114, 104, 97, 107, 105]
// array with non-printable bytes 6, 27 (ACK, ESC)
final byte[] nonprintable = [109, 114, 6, 27, 104, 97, 107, 105]

assert new String(printable) == 'mrhaki'
assert new String(nonprintable) != 'mr haki'

// encodeHex() returns a Writable
final Writable printableHex = printable.encodeHex()
assert printableHex.toString() == '6d7268616b69'

final nonprintableHex = nonprintable.encodeHex().toString()
assert nonprintableHex == '6d72061b68616b69'

// Convert back
assert nonprintableHex.decodeHex() == nonprintable

Code written with Groovy 2.2.1.
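For comparison, and not part of the original Groovy post, here is a plain-Java sketch of the same idea; it hex-encodes the byte array by hand, since the JDK of that era had no direct equivalent of encodeHex().

public class HexEncode {

    // Convert each byte to its two-character lowercase hex representation
    static String encodeHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] printable = {109, 114, 104, 97, 107, 105};
        byte[] nonprintable = {109, 114, 6, 27, 104, 97, 107, 105};

        System.out.println(encodeHex(printable));    // 6d7268616b69
        System.out.println(encodeHex(nonprintable)); // 6d72061b68616b69
    }
}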
April 6, 2014
by Hubert Klein Ikkink
· 14,081 Views · 2 Likes
Compiling and Running Java Without an IDE
I'm going to demonstrate how to compile and run a simple Java application from the command line, without an IDE.
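The excerpt above is only a teaser; as a minimal, hypothetical illustration of the idea (the file and class names are made up, not taken from the article), here is a class you could compile and run with nothing more than the JDK's command-line tools:

// HelloCli.java - compile and run without an IDE, using only the JDK:
//   javac HelloCli.java   (produces HelloCli.class)
//   java HelloCli         (runs the compiled class on the JVM)
public class HelloCli {
    public static void main(String[] args) {
        System.out.println("Compiled and run without an IDE");
    }
}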
April 4, 2014
by Dustin Marx
· 58,543 Views · 9 Likes
Common Misconceptions About Java
Java is the most widely used language in the world ([citation needed]), and everyone has an opinion about it. Due to it being mainstream, it is usually mocked, sometimes rightly so, but sometimes the criticism just doesn't touch reality. I'll try to explain my favorite 5 misconceptions about Java. Java is slow – that might have been true for Java 1.0, and initially it may sound logical, since Java is not compiled to binary, but to bytecode, which is in turn interpreted. However, modern versions of the JVM are very, very optimized (JVM optimizations are a topic worth not just an article, but a whole book) and this is no longer remotely true. As noted here, Java is even on par with C++ in some cases. And it is certainly not a good idea to make a joke about Java being slow if you are a Ruby or PHP developer. Java is too verbose – here we need to split the language from the SDK and from other libraries. There is some verbosity in the JDK (e.g. java.io), which is: 1. easily overcome with de-facto standard libraries like Guava, and 2. a good thing. As for language verbosity, the only reasonable point was anonymous classes, which are no longer an issue in Java 8 with the functional additions (a short before/after sketch follows at the end of this post). Getters and setters, Foo foo = new Foo() instead of using val – that is (possibly) boilerplate, but it's not verbose – it doesn't add conceptual weight to the code. It doesn't take more time to write, read or understand. Other libraries – it is indeed pretty scary to see a class like AbstractCommonAsyncFacadeFactoryManagerImpl. But that has nothing to do with Java. It can be argued that sometimes these long names make sense; it can also be argued that they are as complex because the underlying abstraction is unnecessarily complicated; but either way, it is a design decision taken per library, and nothing that the language or the SDK imposes per se. It is common to see overengineered stuff, but Java in no way pushes you in that direction – stuff can be done in a simple way with any language. You could certainly have AbstractCommonAsyncFacadeFactoryManagerImpl in Ruby; it's just that no architect who uses Ruby has thought it a good idea. If "big, serious, heavy" companies were using Ruby, I bet we'd see the same. Enterprise Java frameworks are bloatware – that was certainly true back in 2002 when EJB 2 was in use (or "has been", I'm too young to remember). And there are still some overengineered and bloated application servers that you don't really need. The fact that people are using them is their own problem. You can have a perfectly nice, readable, easy-to-configure-and-deploy web application with a DI framework like Spring, Guice or even CDI; with a web framework like Spring MVC, Play, Wicket, or even the latest JSF. Or even without any framework, if you feel like you don't want to reuse the evolved-through-real-world-use frameworks. You can have an application using a message queue, a NoSQL and a SQL database, Amazon S3 file storage, and whatnot, without any accidental complexity. It's true that people still like to overengineer stuff and add a couple of layers where they are not needed, but the fact that frameworks give you this ability doesn't mean they make you do it. For example, here's an application that crawls government documents, indexes them, and provides a UI for searching and subscribing. It sounds sort-of simple, and it is. It is written in Scala (in a very Java way), but uses only Java frameworks – Spring, Spring MVC, Lucene, Jackson, Guava.
You can start maintaining it pretty fast, I guess, because it is straightforward. You can't prototype quickly with Java – this is sort-of related to the previous point – it is assumed that working with Java is slow, and that's why if you are a startup, or a weekend/hackathon project, you should use Ruby (with Rails), Python, Node.js or anything else that allows you to quickly prototype, to save & refresh, to painlessly iterate. Well, that is simply not true, and I don't even know where it comes from. Maybe from the fact that big companies with heavy processes use Java, and so making a Java app takes more time. And save-and-refresh might look daunting to a beginner, but anyone who has programmed in Java (for the web) for a while has to know a way to automate that (otherwise he's a n00b, right?). I've summarized the possible approaches, and all of them are mostly OK. Another example here (which may be used as an example for the above point as well) – I did this project for verifying secure password storage of websites within a weekend, plus one day to fix stuff in the evening, including the security research. Spring MVC, JSP templates, MongoDB. Again – quick and easy. You can do nothing in Java without an IDE – of course you can – you can use Notepad++, vim, emacs. You will just lack refactoring, compile-on-save, call hierarchies. It would be just like programming in PHP or Python or JavaScript. The IDE vs editor debate is a long one, but you can use Java without an IDE. It just doesn't make sense to do so, because you get so much more from the IDE than from a text editor plus command-line tools. You may argue that I'm able to write nice and simple Java applications quickly because I have a lot of experience, I know precisely which tools to use (and which not), and I'm of some rare breed of developers with common sense. And while I'll be flattered by that, I am no different than the good Ruby developer or the Python guru you may be. It's just that Java is too widespread to have only good developers and tools. If so many people were using another language, then probably the same amount of crappy code would have been generated. (And PHP is already way ahead, even with less usage.) I'm the last person not to laugh at jokes about Java, and it certainly isn't a silver-bullet language, but I'd be happier if people had fewer misconceptions, whether from anecdotal evidence or from previous bad experience a la "I hate Java since my previous company, where the project was very bloated". Not only because I don't like people being biased, but because you may start your next project with a language that will not work, just because you've heard "Java is bad".
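To make the earlier point about anonymous classes and Java 8 concrete, here is a small before/after sketch (mine, not the author's) showing how the functional additions remove the ceremony:

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class VerbosityDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Charlie", "Alice", "Bob");

        // Before Java 8: an anonymous class just to pass a comparison
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareTo(b);
            }
        });

        // With Java 8: a lambda expresses the same thing in one line
        names.sort((a, b) -> a.compareTo(b));

        System.out.println(names); // [Alice, Bob, Charlie]
    }
}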
April 4, 2014
by Bozhidar Bozhanov
· 21,617 Views · 1 Like
A Docker ‘Hello World' With Mono
Docker is a lightweight virtualization technology for Linux that promises to revolutionize the deployment and management of distributed applications. Rather than requiring a complete operating system, like a traditional virtual machine, Docker is built on top of Linux containers, a feature of the Linux kernel that allows lightweight Docker containers to share a common kernel while isolating applications and their dependencies. There's a very good Docker SlideShare presentation here that explains the philosophy behind Docker using the analogy of standardized shipping containers. It's interesting that the standard shipping container has done more to create our global economy than all the free-trade treaties and international agreements put together. A Docker image is built from a script, called a 'Dockerfile'. Each Dockerfile starts by declaring a parent image. This is very cool, because it means that you can build up your infrastructure from layers of images, starting with general platform images and then layering successively more application-specific images on top. I'm going to demonstrate this by first building an image that provides a Mono development environment, and then creating a simple 'Hello World' console application image that runs on top of it. Because Dockerfiles are simple text files, you can keep them under source control and version your environment and dependencies alongside the actual source code of your software. This is a game changer for the deployment and management of distributed systems. Imagine developing an upgrade to your software that includes new versions of its dependencies, including pieces that we've traditionally considered the realm of the environment, and not something that you would normally put in your source repository, like the Mono version that the software runs on, for example. You can script all these changes in your Dockerfile, test the new container on your local machine, then simply move the image to test and then production. The possibilities for vastly simplified deployment workflows are obvious. Docker takes concerns that were previously the responsibility of an organization's operations department and makes them a first-class part of the software development lifecycle. Now your infrastructure can be maintained as source code, built as part of your CI cycle and continuously deployed, just like the software that runs inside it. Docker also provides the docker index, an online repository of Docker images. Anyone can create an image and add it to the index, and there are already images for almost any piece of infrastructure you can imagine. Say you want to use RabbitMQ: all you have to do is grab a handy RabbitMQ image such as https://index.docker.io/u/tutum/rabbitmq/ and run it like this: docker run -d -p 5672:5672 -p 55672:55672 tutum/rabbitmq The -p flag maps ports between the image and the host. Let's look at an example. I'm going to show you how to create a Docker image for the Mono development environment and have it built and hosted on the docker index. Then I'm going to build a local Docker image for a simple 'hello world' console application that I can run on my Ubuntu box. First we need to create a Dockerfile for our Mono environment. I'm going to use the Mono Debian packages from directhex. These are maintained by the official Debian/Ubuntu Mono team and are the recommended way of installing the latest Mono versions on Ubuntu.
Here's the Dockerfile: #DOCKER-VERSION 0.9.1 # #VERSION 0.1 # # monoxide mono-devel package on Ubuntu 13.10 FROM ubuntu:13.10 MAINTAINER Mike Hadlow RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q software-properties-common RUN sudo add-apt-repository ppa:directhex/monoxide -y RUN sudo apt-get update RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q mono-devel Notice the first line (after the comments) that reads, 'FROM ubuntu:13.10'. This specifies the parent image for this Dockerfile. This is the official Docker Ubuntu image from the index. When I build this Dockerfile, that image will be automatically downloaded and used as the starting point for my image. But I don't want to build this image locally. Docker provides a build server linked to the docker index. All you have to do is create a public GitHub repository containing your Dockerfile, then link the repository to your profile on the docker index. You can read the documentation for the details. The GitHub repository for my Mono image is at https://github.com/mikehadlow/ubuntu-monoxide-mono-devel. Notice how the Dockerfile is in the root of the repository. That's the default location, but you can have multiple files in sub-directories if you want to support many images from a single repository. Now any time I push a change of my Dockerfile to GitHub, the Docker build system will automatically build the image and update the docker index. You can see the image listed here: https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/ I can now grab my image and run it interactively like this: $ sudo docker pull mikehadlow/ubuntu-monoxide-mono-devel Pulling repository mikehadlow/ubuntu-monoxide-mono-devel f259e029fcdd: Download complete 511136ea3c5a: Download complete 1c7f181e78b9: Download complete 9f676bd305a4: Download complete ce647670fde1: Download complete d6c54574173f: Download complete 6bcad8583de3: Download complete e82d34a742ff: Download complete $ sudo docker run -i mikehadlow/ubuntu-monoxide-mono-devel /bin/bash mono --version Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-1~pre1) Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com TLS: __thread SIGSEGV: altstack Notifications: epoll Architecture: amd64 Disabled: none Misc: softdebug LLVM: supported, not enabled. GC: sgen exit Next let's create a new local Dockerfile that compiles a simple 'hello world' program, and then runs it when we run the image. You can follow along with these steps. All you need is an Ubuntu machine with Docker installed. First, here's our 'hello world'; save this code in a file named hello.cs: using System; namespace Mike.MonoTest { public class Program { public static void Main() { Console.WriteLine("Hello World"); } } } Next we'll create our Dockerfile. Copy this code into a file called 'Dockerfile': #DOCKER-VERSION 0.9.1 FROM mikehadlow/ubuntu-monoxide-mono-devel ADD . /src RUN mcs /src/hello.cs CMD ["mono", "/src/hello.exe"] Once again, notice the 'FROM' line. This time we're telling Docker to start with our Mono image. The next line, 'ADD . /src', tells Docker to copy the contents of the current directory (the one containing our Dockerfile) into a root directory named 'src' in the container. Now our hello.cs file is at /src/hello.cs in the container, so we can compile it with the Mono C# compiler, mcs, which is the line 'RUN mcs /src/hello.cs'. Now we will have the executable, hello.exe, in the src directory.
The line 'CMD ["mono", "/src/hello.exe"]' tells Docker what we want to happen when the container is run: just execute our hello.exe program. As an aside, this exercise highlights some questions around what best practice should be with Docker. We could have done this in several different ways. Should we build our software independently of the Docker build in some CI environment, or does it make sense to do it this way, with the Docker build as a step in our CI process? Do we want to rebuild our container for every commit to our software, or do we want the running container to pull the latest from our build output? Initially I'm quite attracted to the idea of building the image as part of the CI, but I expect that we'll have to wait a while for best practice to evolve. Anyway, for now let's manually build our image: $ sudo docker build -t hello . Uploading context 1.684 MB Uploading context Step 0 : FROM mikehadlow/ubuntu-monoxide-mono-devel ---> f259e029fcdd Step 1 : ADD . /src ---> 6075dee41003 Step 2 : RUN mcs /src/hello.cs ---> Running in 60a3582ab6a3 ---> 0e102c1e4f26 Step 3 : CMD ["mono", "/src/hello.exe"] ---> Running in 3f75e540219a ---> 1150949428b2 Successfully built 1150949428b2 Removing intermediate container 88d2d28f12ab Removing intermediate container 60a3582ab6a3 Removing intermediate container 3f75e540219a You can see Docker executing each build step in turn and storing the intermediate result until the final image is created. Because we used the tag (-t) option and named our image 'hello', we can see it when we list all the docker images: $ sudo docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE hello latest 1150949428b2 10 seconds ago 396.4 MB mikehadlow/ubuntu-monoxide-mono-devel latest f259e029fcdd 24 hours ago 394.7 MB ubuntu 13.10 9f676bd305a4 8 weeks ago 178 MB ubuntu saucy 9f676bd305a4 8 weeks ago 178 MB ... Now let's run our image. Docker will create a container from the image and run it (note that each docker run creates a new container; an existing, stopped container can be restarted with docker start): $ sudo docker run hello Hello World And that's it. Imagine that instead of our little hello.exe, this image contained our web application, or maybe a service in some distributed software. In order to deploy it, we'd simply ask Docker to run it on any server we like: development, test, production, or on many servers in a web farm. This is an incredibly powerful way of doing consistent, repeatable deployments. To reiterate, I think Docker is a game changer for large server-side software. It's one of the most exciting developments to have emerged this year and definitely worth your time to check out.
April 3, 2014
by Mike Hadlow
· 11,043 Views
Multi-Level Argparse in Python (Parsing Commands Like Git)
It's a common pattern for command line tools to have multiple subcommands that run off of a single executable. For example, git fetch origin and git commit --amend both use the same executable /usr/bin/git to run. Each subcommand has its own set of required and optional parameters. This pattern is fairly easy to implement in your own Python command-line utilities using argparse. Here is a script that pretends to be git and provides the above two commands and arguments. #!/usr/bin/env python import argparse import sys class FakeGit(object): def __init__(self): parser = argparse.ArgumentParser( description='Pretends to be git', usage='''git <command> [<args>] The most commonly used git commands are: commit Record changes to the repository fetch Download objects and refs from another repository ''') parser.add_argument('command', help='Subcommand to run') # parse_args defaults to [1:] for args, but you need to # exclude the rest of the args too, or validation will fail args = parser.parse_args(sys.argv[1:2]) if not hasattr(self, args.command): print 'Unrecognized command' parser.print_help() exit(1) # use dispatch pattern to invoke method with same name getattr(self, args.command)() def commit(self): parser = argparse.ArgumentParser( description='Record changes to the repository') # prefixing the argument with -- means it's optional parser.add_argument('--amend', action='store_true') # now that we're inside a subcommand, ignore the first # TWO argvs, ie the command (git) and the subcommand (commit) args = parser.parse_args(sys.argv[2:]) print 'Running git commit, amend=%s' % args.amend def fetch(self): parser = argparse.ArgumentParser( description='Download objects and refs from another repository') # NOT prefixing the argument with -- means it's not optional parser.add_argument('repository') args = parser.parse_args(sys.argv[2:]) print 'Running git fetch, repository=%s' % args.repository if __name__ == '__main__': FakeGit() The argparse library gives you all kinds of great stuff. You can run ./git.py --help and get the following: usage: git <command> [<args>] The most commonly used git commands are: commit Record changes to the repository fetch Download objects and refs from another repository Pretends to be git positional arguments: command Subcommand to run optional arguments: -h, --help show this help message and exit You can get help on a particular subcommand with ./git.py commit --help. usage: git.py [-h] [--amend] Record changes to the repository optional arguments: -h, --help show this help message and exit --amend Want bash completion on your awesome new command line utility? Try argcomplete, a drop-in bash completion for Python + argparse.
April 3, 2014
by Chase Seibert
· 17,821 Views · 1 Like
Docker: Bulk Remove Images and Containers
I've just started looking at Docker. It's a cool new technology that has the potential to make the management and deployment of distributed applications a great deal easier. I'd very much recommend checking it out. I'm especially interested in using it to deploy Mono applications, because it promises to remove the hassle of deploying and maintaining the Mono runtime on a multitude of Linux servers. I've been playing around creating new images and containers and debugging my Dockerfile, and I've wound up with lots of temporary containers and images. It's really tedious repeatedly running 'docker rm' and 'docker rmi', so I've knocked up a couple of bash commands to bulk delete images and containers. Delete all containers: sudo docker ps -a -q | xargs -n 1 -I {} sudo docker rm {} Delete all un-tagged (or intermediate) images, which show up as '<none>' in the docker images listing: sudo docker rmi $( sudo docker images | grep '<none>' | tr -s ' ' | cut -d ' ' -f 3)
April 2, 2014
by Mike Hadlow
· 14,350 Views
Spring-boot and Scala
There is actually nothing very special about writing a Spring-boot web application purely in Scala: it just works! In this blog entry, I will slowly transform a Java based Spring-boot application completely to Scala - the Java based sample is available at this github location - https://github.com/bijukunjummen/spring-boot-mvc-test To start with, I had the option of going with either a maven based build or a gradle based build - I opted for a gradle based build as gradle has a great Scala plugin, so for Scala support the only changes to the build.gradle build script are the following: ... apply plugin: 'scala' ... jar { baseName = 'spring-boot-scala-web' version = '0.1.0' } dependencies { ... compile 'org.scala-lang:scala-library:2.10.2' ... } Essentially adding in the scala plugin and specifying the version of the scala-library. Now, I have one entity, a Hotel class, which transforms to the following in Scala: package mvctest.domain .... @Entity class Hotel { @Id @GeneratedValue @BeanProperty var id: Long = _ @BeanProperty var name: String = _ @BeanProperty var address: String = _ @BeanProperty var zip: String = _ } Every property is annotated with the @BeanProperty annotation to instruct Scala to generate the Java bean based getter and setter for the variables. With the entity in place, a Spring-data repository for CRUD operations on this entity transforms from: import mvctest.domain.Hotel; import org.springframework.data.repository.CrudRepository; public interface HotelRepository extends CrudRepository<Hotel, Long> { } to the following in Scala: import org.springframework.data.repository.CrudRepository import mvctest.domain.Hotel import java.lang.Long trait HotelRepository extends CrudRepository[Hotel, Long] And the Scala based controller which uses this repository to list the Hotels (a plain-Java sketch of the equivalent controller follows at the end of this post): import org.springframework.web.bind.annotation.RequestMapping import org.springframework.stereotype.Controller import mvctest.service.HotelRepository import org.springframework.beans.factory.annotation.Autowired import org.springframework.ui.Model @Controller @RequestMapping(Array("/hotels")) class HotelController @Autowired() (private val hotelRepository: HotelRepository) { @RequestMapping(Array("/list")) def list(model: Model) = { val hotels = hotelRepository.findAll() model.addAttribute("hotels", hotels) "hotels/list" } } Here the constructor autowiring of the HotelRepository just works! Do note the slightly awkward way of specifying the @Autowired annotation for constructor based injection. Finally, a Spring-boot based application requires a main class to bootstrap the entire application; with Java this bootstrap class looks like this: @Configuration @EnableAutoConfiguration @ComponentScan public class SampleWebApplication { public static void main(String[] args) { SpringApplication.run(SampleWebApplication.class, args); } } In Scala, though, I needed to provide two classes, one to specify the annotations and another to bootstrap the application - there may be a better way to do this (blame it on my lack of Scala depth!):
package mvctest import org.springframework.context.annotation.Configuration import org.springframework.boot.autoconfigure.EnableAutoConfiguration import org.springframework.context.annotation.ComponentScan import org.springframework.boot.SpringApplication @Configuration @EnableAutoConfiguration @ComponentScan class SampleConfig object SampleWebApplication extends App { SpringApplication.run(classOf[SampleConfig]); } And that's it: with this set-up the entire application just works. The application can be started up with the following: ./gradlew build && java -jar build/libs/spring-boot-scala-web-0.1.0.jar and the sample endpoint listing the hotels can be accessed at this URL: http://localhost:8080/hotels/list I have the entire git project available at this github location: https://github.com/bijukunjummen/spring-boot-scala-web In conclusion, Scala can be considered a first-class citizen for a Spring-boot based application and no special configuration is required to get a Scala based Spring-boot application to work. It just works!
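To round out the Java-to-Scala comparison in the post above, the Java controller that the Scala HotelController transforms from would look roughly like this. This is a hedged sketch inferred from the snippets shown, not code taken from the linked repository, and it assumes HotelRepository is available in the same package or imported:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
@RequestMapping("/hotels")
public class HotelController {

    private final HotelRepository hotelRepository;

    @Autowired
    public HotelController(HotelRepository hotelRepository) {
        this.hotelRepository = hotelRepository;
    }

    @RequestMapping("/list")
    public String list(Model model) {
        // Fetch all hotels and expose them to the view, mirroring the Scala version above
        model.addAttribute("hotels", hotelRepository.findAll());
        return "hotels/list";
    }
}

Note how the Scala version collapses the field declaration, constructor and assignment into a single constructor parameter list.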
April 2, 2014
by Biju Kunjummen
· 70,617 Views · 11 Likes
IntelliJ, Scala and Gradle: Revisiting Hell
So I finally made the decision to try to learn Scala. Little did I know I was in for another round of IntelliJ integration hell. Let me rephrase that: IntelliJ with Gradle hell. I love Gradle. I love IntelliJ. However, the combination of the two is sometimes enough to drive me utterly crazy. Take, for example, the Scala integration. I made the simplest Gradle build possible that compiles a standard Hello World application. apply plugin: 'scala' apply plugin: 'idea' repositories { mavenCentral() mavenLocal() } dependencies { compile 'org.slf4j:slf4j-api:1.7.5' compile "org.scala-lang:scala-library:2.10.4" compile "org.scala-lang:scala-compiler:2.10.4" testCompile "junit:junit:4.11" } task run(type: JavaExec, dependsOn: classes) { main = 'Main' classpath sourceSets.main.runtimeClasspath classpath configurations.runtime } I immediately stumbled upon the first issue: the Scala Gradle plugin is incompatible with Java 8. Not a big issue, but it meant changing my Java environment for this build, so it is a nuisance. Once this was fixed, the Gradle build succeeded and Hello World was printed out. I opened up IntelliJ and made sure the Scala plugin was installed. Then I imported the project using the Gradle build file. Everything looked okay: IntelliJ recognized the Scala source folder and provided the correct editor for the Scala source file. Then I tried to run the Main class. This resulted in a NoClassDefFoundError. IntelliJ didn't want to compile my source classes. So I started digging. Apparently, the project was lacking a Scala facet. I'd expected IntelliJ to automatically add this once it saw I was using the Scala plugin, but it didn't. So I tried manually adding the facet, and there I got stuck. See, the facet requires you to state which Scala compiler library you want to use. Luckily IntelliJ correctly added the jars to the classpath, so I was able to choose the correct jar. This, however, did not fix the issue, as IntelliJ now complained it could not locate the Scala runtime library (scala-library*.jar). This library was, however, included in the build. If you were to choose the runtime library as the Scala library, it would complain it cannot find the compiler library. And this is where I am now: deadlocked. There is an issue in the IntelliJ bug tracker here, but it's been eerily quiet at JetBrains on this issue. As it is, it's impossible to use IntelliJ with Gradle and Scala unless you're willing to execute every bit of code, including unit tests, with Gradle instead of the IDE (which in effect defeats the purpose of an IDE). And I'll die before adopting yet another build framework (SBT) that is supposed to work. Honestly, I really don't know whether I want to learn Scala anymore. Just the fact that you can't compile Scala in the most popular IDE at the moment when using the most popular build tool at the moment is something I cannot comprehend. Forcing me to adopt a Scala-specific build tool is unacceptable to me. If I were Typesafe, I'd put an engineer on this and fix it, as this would seriously aid in promoting the language. If it were easy to adopt Scala in an existing build cycle, it would pop up on more radars than it does right now. But it's not just Scala and IntelliJ: most newer JVM languages struggle with IntelliJ. This is a real pity, as it either forces me to change my IDE (e.g. Ceylon has its own IDE based on Eclipse) or not consider the language.
As it is, the current viable options with IntelliJ are Java and Groovy (and Kotlin, but it's not even near production-ready quality). Wouldn't it be nice to only need one IDE for all development? I couldn't care less if it cost $500; I just want things to work. I'd love to be able to write my AngularJS front-end that consumes my Scala/Java hybrid backend reading data from a MongoDB that's fed data from my Arduino sensors (for which I've written and uploaded the sketch from that same IDE).
April 1, 2014
by Lieven Doclo
· 23,431 Views · 2 Likes
6 Simple Performance Tips for SQL SELECT Statements
Performance tuning SELECT statements can be a time-consuming task which, in my opinion, follows the Pareto principle: 20% of the effort is likely to give you an 80% performance improvement. To get another 20% performance improvement you probably need to spend 80% of the time. Unless you work on the planet Venus, where each day is equal to 243 Earth days, delivery deadlines are likely to mean you will not have enough time to put into tuning your SQL queries. After years of writing and running SQL statements I began to develop a mental check-list of things I looked at when trying to improve query performance. These are the things I check before moving on to query plans and reading the sometimes complicated documentation of the database I am working on. My check-list is by no means comprehensive or scientific, more like a back-of-the-envelope calculation, but I can say that most of the time I do get performance improvements by following these simple steps. The check-list follows.

Check Indexes. There should be indexes on all fields used in the WHERE and JOIN portions of the SQL statement. Take the 3-Minute SQL performance test. Regardless of your score, be sure to read through the answers as they are informative.

Limit Size of Your Working Data Set. Examine the tables used in the SELECT statement to see if you can apply filters in the WHERE clause of your statement. A classic example is a query that initially worked well when there were only a few thousand rows in the table; as the application grew, the query slowed down. The solution may be as simple as restricting the query to looking at the current month's data. When you have queries that have sub-selects, look to apply filtering to the inner statement of the sub-selects as opposed to the outer statements.

Only Select Fields You Need. Extra fields often increase the grain of the data returned and thus result in more (detailed) data being returned to the SQL client. Additionally: when using reporting and analytical applications, sometimes slow report performance is because the reporting tool has to do the aggregation as data is received in detailed form; occasionally the query may run quickly enough, but your problem could be a network-related issue as large amounts of detailed data are sent to the reporting server across the network; and when using a column-oriented DBMS, only the columns you have selected will be read from disk, so the fewer columns you include in your query, the less I/O overhead.

Remove Unnecessary Tables. The reasons for removing unnecessary tables are the same as the reasons for removing fields not needed in the select statement. Writing SQL statements is a process that usually takes a number of iterations as you write and test them. During development it is possible to add tables to the query that have no impact on the data returned by the SQL code. Once the SQL is correct, I find many people do not review their script and remove tables that have no impact or use in the final data returned. By removing the JOINs to these unnecessary tables you reduce the amount of processing the database has to do. Sometimes, much like removing columns, you may find you reduce the data being brought back by the database.

Remove OUTER JOINS. This can be easier said than done, and depends on how much influence you have in changing table content. One solution is to remove OUTER JOINs by placing placeholder rows in both tables.
Say you have the following tables, with an OUTER JOIN defined to ensure all data is returned:

customer_id  customer_name
1            John Doe
2            Mary Jane
3            Peter Pan
4            Joe Soap

customer_id  sales_person
NULL         Newbee Smith
2            Oldie Jones
1            Another Oldie
NULL         Greenhorn

The solution is to add a placeholder row in the customer table and update all NULL values in the sales table to the placeholder key.

customer_id  customer_name
0            NO CUSTOMER
1            John Doe
2            Mary Jane
3            Peter Pan
4            Joe Soap

customer_id  sales_person
0            Newbee Smith
2            Oldie Jones
1            Another Oldie
0            Greenhorn

Not only have you removed the need for an OUTER JOIN, you have also standardised how sales people with no customers are represented. Other developers will not have to write statements such as ISNULL(customer_id, "No customer yet").

Remove Calculated Fields in JOIN and WHERE Clauses. This is another one of those that may at times be easier said than done, depending on your permissions to make changes to the schema. It can be done by creating a field on the table with the calculated values used in the join. Given the following SQL statement: SELECT * FROM sales a JOIN budget b ON ((year(a.sale_date) * 100) + month(a.sale_date)) = b.budget_year_month Performance can be improved by adding a column with the year and month to the sales table. The updated SQL statement would be as follows: SELECT * FROM sales a JOIN budget b ON a.sale_year_month = b.budget_year_month

Conclusion. The recommendations boil down to a few short pointers: check for indexes, work with the smallest data set required, remove unnecessary fields and tables, and remove calculations in your JOIN and WHERE clauses. If all these recommendations fail to improve your SQL query performance, my last suggestion is that you move to Venus. All you will need is a single day to tune your SQL.
March 31, 2014
by Mpumelelo Msimanga
· 349,032 Views · 5 Likes