The Latest Coding Topics

Learn PHP - How to Write A Class in PHP
This article presents some high-level concepts and a code example showing how to write a PHP class and use it elsewhere. Please feel free to comment or suggest anything important I have missed. The key points described later in this article are: why write a PHP class, key aspects of a PHP class, a PHP class code example, and using the PHP class in PHP files.
Why Write a PHP Class?
As a beginner, I have often come across PHP developers who simply write one or more functions in their PHP files/scripts. As a matter of fact, I have also come across several projects (profitable ones) that were written as a few very large PHP files, with all the code put in them in the form of multiple functions. When learning PHP, it is OK to work this way. However, for web apps going to production, this may not be the recommended way. Following are some of the disadvantages of writing PHP scripts with just functions in them:
Low maintainability: These files full of functions are difficult to maintain and change, and writing unit tests for them is next to impossible. In addition to low testability, there may be several functions that could be reused, but because of the way they are written, the files score low on reusability as well. This also encourages code duplication, which further hurts maintainability, since changing one piece of functionality requires changes in several places.
Low usability: These files are difficult to read and understand.
To take care of some of the above issues, one should learn to write PHP in an object-oriented manner, i.e., as one or more classes. Writing PHP code using classes helps you segregate similar-looking functions into a class (Single Responsibility Principle) and use the class elsewhere in the code (in different PHP scripts). As a matter of fact, one could easily follow the SOLID principles with PHP and make the code well-structured. Working this way promotes high maintainability (high testability, high cohesion, high reusability, etc.) and makes the code readable and understandable.
Key Aspects of a PHP Class
Following are some of the key aspects of a PHP class:
Define a class with the keyword "class" followed by the name of the class.
Define the constructor method using "__construct" followed by its arguments. An object of the class can then be instantiated using "new ClassName( arguments_list )".
Define class variables. You can use access specifiers such as private, public, and protected.
Define methods using the "function" keyword. By default, a PHP method without an access specifier is public.
That is it!
A PHP Class – Code Example
Following is the code example of a PHP class, User. Pay attention to some of the following: "class" followed by "User", the class name; member variables such as $name and $age; member functions such as getName and isAdult.
class User { private $name; private $age; function __construct( $name, $age ) { $this->name = $name; $this->age = $age; } function getName() { return $this->name; } function isAdult() { return $this->age >= 18 ? "an Adult" : "Not an Adult"; } }
Save the file as User.php, and don't forget to wrap the above code in PHP opening and closing tags (<?php ... ?>).
Using the PHP Class in PHP Files
Finally, it's time to use the PHP class. If you are working with a sample project, go to index.php. Assuming that User.php is saved in the same folder as index.php, the code would look roughly like the following.
Pay attention to some of the following: the "require" keyword is used to include the User class written inside User.php, the "new" keyword is used to instantiate the User class, and "->" is used to invoke methods on the object. (The sample name and age below are illustrative.)
require 'User.php'; $h = new User( "John", 25 ); echo $h->getName() . "! You are " . $h->isAdult();
October 9, 2014
by Ajitesh Kumar
· 118,288 Views · 1 Like
R: Filtering data frames by column type ('x' must be numeric)
I’ve been working through the exercises from An Introduction to Statistical Learning and one of them required you to create a pair wise correlation matrix of variables in a data frame. The exercise uses the ‘Carseats’ data set which can be imported like so: > install.packages("ISLR") > library(ISLR) > head(Carseats) Sales CompPrice Income Advertising Population Price ShelveLoc Age Education Urban US 1 9.50 138 73 11 276 120 Bad 42 17 Yes Yes 2 11.22 111 48 16 260 83 Good 65 10 Yes Yes 3 10.06 113 35 10 269 80 Medium 59 12 Yes Yes 4 7.40 117 100 4 466 97 Medium 55 14 Yes Yes 5 4.15 141 64 3 340 128 Bad 38 13 Yes No 6 10.81 124 113 13 501 72 Bad 78 16 No Yes filter the categorical variables from a data frame and If we try to run the ‘cor‘ function on the data frame we’ll get the following error: > cor(Carseats) Error in cor(Carseats) : 'x' must be numeric As the error message suggests, we can’t pass non numeric variables to this function so we need to remove the categorical variables from our data frame. But first we need to work out which columns those are: > sapply(Carseats, class) Sales CompPrice Income Advertising Population Price ShelveLoc Age Education "numeric" "numeric" "numeric" "numeric" "numeric" "numeric" "factor" "numeric" "numeric" Urban US "factor" "factor" We can see a few columns of type ‘factor’ and luckily for us there’s a function which will help us identify those more easily: > sapply(Carseats, is.factor) Sales CompPrice Income Advertising Population Price ShelveLoc Age Education FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE Urban US TRUE TRUE Now we can remove those columns from our data frame and create the correlation matrix: > cor(Carseats[sapply(Carseats, function(x) !is.factor(x))]) Sales CompPrice Income Advertising Population Price Age Education Sales 1.00000000 0.06407873 0.151950979 0.269506781 0.050470984 -0.44495073 -0.231815440 -0.051955242 CompPrice 0.06407873 1.00000000 -0.080653423 -0.024198788 -0.094706516 0.58484777 -0.100238817 0.025197050 Income 0.15195098 -0.08065342 1.000000000 0.058994706 -0.007876994 -0.05669820 -0.004670094 -0.056855422 Advertising 0.26950678 -0.02419879 0.058994706 1.000000000 0.265652145 0.04453687 -0.004557497 -0.033594307 Population 0.05047098 -0.09470652 -0.007876994 0.265652145 1.000000000 -0.01214362 -0.042663355 -0.106378231 Price -0.44495073 0.58484777 -0.056698202 0.044536874 -0.012143620 1.00000000 -0.102176839 0.011746599 Age -0.23181544 -0.10023882 -0.004670094 -0.004557497 -0.042663355 -0.10217684 1.000000000 0.006488032 Education -0.05195524 0.02519705 -0.056855422 -0.033594307 -0.106378231 0.01174660 0.006488032 1.000000000 Be Sociable, Share!
October 8, 2014
by Mark Needham
· 28,773 Views
Using Groovy To Import XML Into MongoDB
This year I've been demonstrating how easy it is to create modern web apps using AngularJS, Java and MongoDB. I also use Groovy during this demo to do the sorts of things Groovy is really good at - writing descriptive tests, and creating scripts. Due to the time pressures in the demo, I never really get a chance to go into the details of the script I use, so the aim of this long-overdue blog post is to go over this Groovy script in a bit more detail. Firstly I want to clarify that this is not my original work - I stole (well, borrowed) most of the ideas for the demo from my colleague Ross Lawley. In this blog post he goes into detail about how he built up an application that finds the most popular pub names in the UK. There's a section in there where he talks about downloading the open street map data and using Python to convert the XML into something more MongoDB-friendly - it's this process that I basically stole, re-worked for coffee shops, and re-wrote for the JVM. I'm assuming if you've worked with Java for any period of time, there has come a moment where you needed to use it to parse XML. Since my demo is supposed to be all about how easy it is to work with Java, I did not want to do this. When I wrote the demo I wasn't really all that familiar with Groovy, but what I did know was that it has built-in support for parsing and manipulating XML, which is exactly what I wanted to do. In addition, creating Maps (the data structures, not the geographical ones) with Groovy is really easy, and this is effectively what we need to insert into MongoDB.
Goal Of The Script
Parse an XML file containing open street map data of all coffee shops. Extract latitude and longitude XML attributes and transform into MongoDB GeoJSON. Perform some basic validation on the coffee shop data from the XML. Insert into MongoDB. Make sure MongoDB knows this contains query-able geolocation data. The script is PopulateDatabase.groovy; the linked version is the one I presented at JavaOne.
Firstly, We Need Data
I used the same service Ross used in his blog post to obtain the XML file containing "all" coffee shops around the world. Now, the open street map data is somewhat… raw and unstructured (which is why MongoDB is such a great tool for storing it), so I'm not sure I really have all the coffee shops, but I obtained enough data for an interesting demo using http://www.overpass-api.de/api/xapi?*[amenity=cafe][cuisine=coffee_shop] The resulting XML file is in the github project, but if you try this yourself you might (in fact, probably will) get different results. Each XML record has the same shape: each coffee shop has a unique identifier and a latitude and longitude as attributes of a node element. Within this node is a series of tag elements, all with k and v attributes. Each coffee shop has a varying number of these attributes, and they are not consistent from shop to shop (other than amenity and cuisine, which we used to select this data).
Initialisation
Before doing anything else we want to prepare the database. The assumption of this script is that the collection we want to store the coffee shops in is either empty or full of stale data. So we're going to use the MongoDB Java Driver to get the collection that we're interested in, and then drop it. There are two interesting things to note here: This Groovy script is simply using the basic Java driver. Groovy can talk quite happily to vanilla Java, it doesn't need to use a Groovy library. There are Groovy-specific libraries for talking to MongoDB (e.g. the MongoDB GORM Plugin), but the Java driver works perfectly well. You don't need to create databases or collections (collections are a bit like tables, but less structured) explicitly in MongoDB. You simply use the database and collection you're interested in, and if it doesn't already exist, the server will create them for you. In this example, we're just using the default constructor for the MongoClient, the class that represents the connection to the database server(s). This default is localhost:27017, which is where I happen to be running the database. However you can specify your own address and port - for more details on this see Getting Started With MongoDB and Java.
Turn The XML Into Something MongoDB-Shaped
So next we're going to use Groovy's XmlSlurper to read the open street map XML data that we talked about earlier. To iterate over every node we use: xmlSlurper.node.each. For those of you who are new to Groovy or new to Java 8, you might notice this is using a closure to define the behaviour to apply for every "node" element in the XML.
Create GeoJSON
Since MongoDB documents are effectively just maps of key-value pairs, we're going to create a Map coffeeShop that contains the document structure that represents the coffee shop that we want to save into the database. Firstly, we initialise this map with the attributes of the node (the shop's unique identifier, latitude and longitude). We're going to save the ID as a value for a new field called openStreetMapId. We need to do something a bit more complicated with the latitude and longitude, since we need to store them as GeoJSON, which looks something like: { 'location' : { 'coordinates': [lon, lat], 'type' : 'Point' } } In lines 12-14 you can see that we create a Map that looks like the GeoJSON, pulling the lat and lon attributes into the appropriate places.
Insert Remaining Fields
Now for every tag element in the XML, we get the k attribute and check if it's a valid field name for MongoDB (it won't let us insert fields with a dot in, and we don't want to override our carefully constructed location field). If so, we simply add this key as the field and its matching v attribute as the value into the map. This effectively copies the OpenStreetMap key/value data into key/value pairs in the MongoDB document so we don't lose any data, but we also don't do anything particularly interesting to transform it.
Save Into MongoDB
Finally, once we've created a simple coffeeShop Map representing the document we want to save into MongoDB, we insert it into MongoDB if the map has a field called name. We could have checked this when we were reading the XML and putting it into the map, but it's actually much easier just to use the pretty Groovy syntax to check for a key called name in coffeeShop. When we want to insert the Map we need to turn this into a BasicDBObject, the Java Driver's document type, but this is easily done by calling the constructor that takes a Map. Alternatively, there's a Groovy syntax which would effectively do the same thing, which you might prefer: collection.insert(coffeeShop as BasicDBObject)
Tell MongoDB That We Want To Perform Geo Queries On This Data
Because we're going to do a nearSphere query on this data, we need to add a "2dsphere" index on our location field. We created the location field as GeoJSON, so all we need to do is call createIndex for this field.
Conclusion
So that's it! Groovy is a nice tool for this sort of script-y thing - not only is it a scripting language, but its built-in support for XML, really nice Map syntax and support for closures make it the perfect tool for iterating over XML data and transforming it into something that can be inserted into a MongoDB collection.
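For readers more comfortable with plain Java than Groovy, here is a rough sketch of the same steps (drop the collection, insert a document with a GeoJSON location, create the 2dsphere index) using the legacy MongoDB Java driver the script relies on. The database and collection names, the document values and the coordinates are made up for illustration and are not taken from the original project.

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

import java.net.UnknownHostException;
import java.util.Arrays;

public class CoffeeShopSketch {
    public static void main(String[] args) throws UnknownHostException {
        // Connects to the default localhost:27017, as in the demo
        MongoClient client = new MongoClient();
        DB db = client.getDB("cafes");                         // illustrative database name
        DBCollection shops = db.getCollection("coffeeShops");  // illustrative collection name
        shops.drop();                                          // start from an empty collection

        // GeoJSON point, matching the structure the Groovy script builds: [lon, lat]
        BasicDBObject location = new BasicDBObject("type", "Point")
                .append("coordinates", Arrays.asList(-0.0982, 51.5141)); // sample coordinates

        BasicDBObject coffeeShop = new BasicDBObject("openStreetMapId", "123456") // sample id
                .append("name", "Sample Coffee")
                .append("location", location);

        if (coffeeShop.containsField("name")) {   // only insert shops that actually have a name
            shops.insert(coffeeShop);
        }

        // Tell MongoDB the location field holds geo data so nearSphere queries work
        shops.createIndex(new BasicDBObject("location", "2dsphere"));

        client.close();
    }
}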
October 8, 2014
by Trisha Gee
· 8,785 Views
How to Allow Only HTTPS on an S3 Bucket
It is possible to disable HTTP access on S3 bucket, limiting S3 traffic to only HTTPS requests. The documentation is scattered around the Amazon AWS documentation, but the solution is actually straightforward. All you need to do to block HTTP traffic on an S3 bucket is add a Condition in your bucket's policy. AWS supports a global condition for verifying SSL. So you can add a condition like this: "Condition": { "Bool": { "aws:SecureTransport": "true" } } Here's a complete example: { "Version": "2008-10-17", "Id": "some_policy", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my_bucket/*", "Condition": { "Bool": { "aws:SecureTransport": "true" } } } ] } Now accessing the contents of my_bucket over HTTP will produce a 403 error, while using HTTPS will work fine.
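If you prefer to apply the policy programmatically rather than through the console, a small sketch with the AWS SDK for Java could look like the following; the client setup (default credentials and region) is an assumption, and the policy string is the same JSON shown above for my_bucket.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class RequireHttpsPolicy {
    public static void main(String[] args) {
        // The JSON policy from above: allow GetObject only when the request uses SSL/TLS
        String policy = "{"
                + "\"Version\":\"2008-10-17\",\"Id\":\"some_policy\",\"Statement\":[{"
                + "\"Sid\":\"AddPerm\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},"
                + "\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my_bucket/*\","
                + "\"Condition\":{\"Bool\":{\"aws:SecureTransport\":\"true\"}}}]}";

        // Uses the default credential/region provider chain; adjust for your environment
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.setBucketPolicy("my_bucket", policy); // attach the policy to the bucket
    }
}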
October 8, 2014
by Matt Butcher
· 17,104 Views
Adding License Information Using Maven
Recently, I got a task where licensing was required to be added. I have done such tasks using ant in the past but this time I was supposed to use maven. Some quick search made it clear that maven provides a plugin to do such activities but the documentation was not upto the mark (or I can say it was a bit confusing or too generic). To save other people from such situation I am going to demonstrate it using a simple example. Lets suppose you want to have licensing information given below in all java files of your project: /** * Copyright (C) 2014 My Coaching Company. All rights reserved This software is the confidential * and proprietary information of My Coaching Company. You shall not disclose such confidential * information and shall use it only in accordance with the terms of the license agreement you * entered into with My Coaching Company. * */ Here are steps to do so: 1. Create a txt file named License.txt and place it in parallel with pom.xml and make sure that your license file should not contain comments like /** ... */. It should look like, Copyright (C) 2014 My Coaching Company. All rights reserved This software is the confidential and proprietary information of My Coaching Company. You shall not disclose such confidential information and shall use it only in accordance with the terms of the license agreement you entered into with My Coaching Company. 2. Add following snippet to pom.xml ${basedir} 3. Now add plugin configuration for adding license to java files in maven project, com.mycila.maven-license-plugin maven-license-plugin 1.10.b1 ${license.dir}/license.txt ${project.name} ${project.organization.name} ${project.inceptionYear} ${founder-website} src/main/java/** src/test/java/** format process-sources com.mycila licenses 1 4. Now you are all set to fire the command mvn license:format This will add license information on top of java code. Note: If you have projects under subproject something like project ---| | --> sub-project | --> sub-project2 then you are required to add following snippet into the pom.xml of sub-projects: ${project.parent.basedir} I hope this should help lots of developers around. This is one of the most simple usage of this plugin for more please refer to the official site.
October 7, 2014
by Prateek Jain
· 15,205 Views · 1 Like
PostgreSQL: ERROR: Column Does Not Exist
I’ve been playing around with PostgreSQL recently and in particular the Northwind dataset typically used as an introductory data set for relational databases. Having imported the data I wanted to take a quick look at the employees table: postgres=# SELECT * FROM employees LIMIT 1; EmployeeID | LastName | FirstName | Title | TitleOfCourtesy | BirthDate | HireDate | Address | City | Region | PostalCode | Country | HomePhone | Extension | Photo | Notes | ReportsTo | PhotoPath ------------+----------+-----------+----------------------+-----------------+------------+------------+-----------------------------+---------+--------+------------+---------+----------------+-----------+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+-------------------------------------- 1 | Davolio | Nancy | Sales Representative | Ms. | 1948-12-08 | 1992-05-01 | 507 - 20th Ave. E.\nApt. 2A | Seattle | WA | 98122 | USA | (206) 555-9857 | 5467 | \x | Education includes a BA IN psychology FROM Colorado State University IN 1970. She also completed "The Art of the Cold Call." Nancy IS a member OF Toastmasters International. | 2 | http://accweb/emmployees/davolio.bmp (1 ROW) That works fine but what if I only want to return the ‘EmployeeID’ field? postgres=# SELECT EmployeeID FROM employees LIMIT 1; ERROR: COLUMN "employeeid" does NOT exist LINE 1: SELECT EmployeeID FROM employees LIMIT 1; I hadn’t realised (or had forgotten) that field names get lower cased so we need to quote the name if it’s been stored in mixed case: postgres=# SELECT "EmployeeID" FROM employees LIMIT 1; EmployeeID ------------ 1 (1 ROW) From my reading the suggestion seems to be to have your field names lower cased to avoid this problem but since it’s just a dummy data set I guess I’ll just put up with the quoting overhead for now.
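The same quoting rule applies when you query from code. Below is a minimal JDBC sketch of the quoted-identifier workaround; the connection URL, database name and credentials are assumptions, not taken from the post.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QuotedColumnExample {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; adjust host, database, user and password
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/northwind", "postgres", "postgres");
             Statement stmt = conn.createStatement();
             // The mixed-case column must be double-quoted, exactly as in psql
             ResultSet rs = stmt.executeQuery("SELECT \"EmployeeID\" FROM employees LIMIT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt("EmployeeID"));
            }
        }
    }
}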
October 7, 2014
by Mark Needham
· 17,113 Views
MockRunner with JMS Spring Unit Test
This article shows how to mock your JMS infrastructure using MockRunner and test it using Spring.
October 6, 2014
by Upender Chinthala
· 58,194 Views · 2 Likes
Simple SecurePasswordVault in Java
There are some instances when you want to store your passwords in files to be used by programs or scripts. But storing your passwords in plain text is not a good idea. Use the SecurePasswordVault to encrypt your passwords before storing and get it decrypted when you want to use it. You can use the SecurePasswordVault described here to store any number of encrypted passwords. Passwords are stored as key value pairs. Key - any name given by the user for the password Value - encrypted password SecurePasswordVault will create a file with the given name in the working directory if it doesn't exist. If a file exists then the information in that file will be read. Passwords are encrypted using the MAC address of the network card. SecurePasswordVault will use the first network card MAC which is not the loop back interface. So the encrypted file can only be decrypted with that particular MAC address. If you want to reset the pass word details, just delete the password file and run the SecurePasswordVault. You can download the sample code from the following GitHub repository https://github.com/jsdjayanga/secure_password com.wso2.devgov; import org.bouncycastle.util.encoders.Base64; import javax.crypto.*; import javax.crypto.spec.SecretKeySpec; import java.io.*; import java.net.NetworkInterface; import java.net.SocketException; import java.security.InvalidKeyException; import java.security.NoSuchAlgorithmException; import java.security.Security; import java.util.*; /** * Created by jayanga on 3/31/14. */ public class SecurePasswordVault { private static final int AES_KEY_LEN = 32; private static final int PASSWORD_LEN = 256; private static boolean initialized; private final String secureFile; private final byte[] networkHardwareHaddress; private Map secureDataMap; private List secureDataList; SecretKeySpec secretKey; public SecurePasswordVault(String filename, String[] secureData) throws IOException { Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); initialized = false; secureFile = filename; networkHardwareHaddress = SecurePasswordVault.readNetworkHardwareAddress(); secureDataMap = new HashMap(); this.secureDataList = new ArrayList(secureData.length); Collections.addAll(secureDataList, secureData); byte[] key = new byte[AES_KEY_LEN]; Arrays.fill(key, (byte)0); for(int index = 0; index < networkHardwareHaddress.length; index++){ key[index] = networkHardwareHaddress[index]; } secretKey = new SecretKeySpec(key, "AES"); if (!isInitialized()){ readSecureData(secureDataList); persistSecureData(); } readSecureDataFromFile(); } private boolean isInitialized(){ if (initialized == true){ return true; }else{ File file = new File(secureFile); if (file.exists()){ initialized = true; return initialized; } } return false; } private static byte[] readNetworkHardwareAddress() throws SocketException { Enumeration networkInterfaceEnumeration = NetworkInterface.getNetworkInterfaces(); if (networkInterfaceEnumeration != null){ NetworkInterface networkInterface = null; while (networkInterfaceEnumeration.hasMoreElements()){ networkInterface = networkInterfaceEnumeration.nextElement(); if (!networkInterface.isLoopback()){ break; } } if (networkInterface == null){ networkInterface = networkInterfaceEnumeration.nextElement(); } byte[] hwaddr = networkInterface.getHardwareAddress(); return hwaddr; }else{ throw new RuntimeException("Cannot initialize. 
Failed to generate unique id."); } } private byte[] encrypt(String word) { byte[] password = new byte[PASSWORD_LEN]; Arrays.fill(password, (byte)0); byte[] pw = new byte[0]; try { pw = word.getBytes("UTF-8"); for(int index = 0; index < pw.length; index++){ password[index] = pw[index]; } byte[] cipherText = new byte[password.length]; Cipher cipher = null; try { cipher = Cipher.getInstance("AES/ECB/NoPadding"); try { cipher.init(Cipher.ENCRYPT_MODE, secretKey); int ctLen = 0; try { ctLen = cipher.update(password, 0, password.length, cipherText, 0); ctLen += cipher.doFinal(cipherText, ctLen); return cipherText; } catch (ShortBufferException e) { e.printStackTrace(); } catch (BadPaddingException e) { e.printStackTrace(); } catch (IllegalBlockSizeException e) { e.printStackTrace(); } } catch (InvalidKeyException e) { e.printStackTrace(); } } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (NoSuchPaddingException e) { e.printStackTrace(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } return null; } private String decrypt(byte[] cipherText) { byte[] plainText = new byte[PASSWORD_LEN]; Cipher cipher = null; try { cipher = Cipher.getInstance("AES/ECB/NoPadding"); try { cipher.init(Cipher.DECRYPT_MODE, secretKey); int plainTextLen = 0; try { plainTextLen = cipher.update(cipherText, 0, PASSWORD_LEN, plainText, 0); try { plainTextLen += cipher.doFinal(plainText, plainTextLen); String password = new String(plainText); return password.trim(); } catch (IllegalBlockSizeException e) { e.printStackTrace(); } catch (BadPaddingException e) { e.printStackTrace(); } } catch (ShortBufferException e) { e.printStackTrace(); } } catch (InvalidKeyException e) { e.printStackTrace(); } } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (NoSuchPaddingException e) { e.printStackTrace(); } return null; } public void readSecureData(List secureDataList) throws IOException { BufferedReader bufferRead = new BufferedReader(new InputStreamReader(System.in)); for(int index = 0; index < secureDataList.size(); index++){ System.out.println("Please enter the value for :" + secureDataList.get(index)); String value = new String(Base64.encode(encrypt(bufferRead.readLine()))); secureDataMap.put(secureDataList.get(index), value); } } public String getSecureData(String key) { String value = secureDataMap.get(key); if (value != null){ return decrypt(Base64.decode(value.getBytes())); } throw new RuntimeException("Given key is unknown. [key=" + key + "]"); } private void readSecureDataFromFile() throws IOException { BufferedReader br = new BufferedReader(new FileReader(secureFile)); String line; while ((line = br.readLine()) != null){ int dividerPoint = line.indexOf("="); if (dividerPoint > 0){ secureDataMap.put(line.substring(0, dividerPoint), line.substring(dividerPoint + 1)); } } } private void persistSecureData() throws IOException { FileWriter fileWriter = new FileWriter(secureFile); for(String key : secureDataMap.keySet()){ fileWriter.append(key + "=" + secureDataMap.get(key) + "\n"); } fileWriter.close(); } }
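Based on the constructor and getSecureData method shown above, using the vault from another class would look roughly like this; the file name and key names are examples only, and the class is assumed to be on the classpath.

import java.io.IOException;

public class VaultUsageExample {
    public static void main(String[] args) throws IOException {
        // First run: prompts on the console for each value and writes the encrypted file.
        // Later runs: reads and decrypts the existing file instead.
        SecurePasswordVault vault = new SecurePasswordVault(
                "passwords.store",                        // example file name
                new String[] { "dbPassword", "apiKey" }); // example keys

        String dbPassword = vault.getSecureData("dbPassword");
        System.out.println("Decrypted length: " + dbPassword.length()); // avoid printing the secret itself
    }
}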
October 5, 2014
by Jayanga Dissanayake
· 14,952 Views
Comparison of SQL Server Compact, SQLite, SQL Server Express and LocalDB
Now that SQL Server 2014 and SQL Server Compact 4 has been released, some developers are curious about the differences between SQL Server Compact 4.0 and SQL Server Express 2014 (including LocalDB) I have updated the comparison table from the excellent discussion of the differences between Compact 3.5 and Express 2005 here to reflect the changes in the newer versions of each product. Information about LocalDB comes from here and SQL Server 2014 Books Online. LocalDB is the full SQL Server Express engine, but invoked directly from the client provider. It is a replacement of the current “User Instance” feature in SQL Server Express. Feature SQL Server Compact 3.5 SP2 SQL Server Compact 4.0 SQLite, incl SQLite ADO.NET Provider SQL Server Express 2012 SQL Server 2012 LocalDB Deployment/ Installation Features Installation size 2.5 MB download size 12 MB expanded on disk 2.5 MB download size 18 MB expanded on disk 10 MB download, 14 MB expanded on disk 120 MB download size > 300 MB expanded on disk 32 MB download size > 160 MB on disk ClickOnce deployment Yes Yes Yes Yes Yes Privately installed, embedded, with the application Yes Yes Yes No No Non-admin installation option Yes Yes Yes No No Runs under ASP.NET No Yes Yes Yes Yes Runs on Windows Mobile / Windows Phone platform Yes No Yes No No Runs on WinRT (Phone/Store Apps) No No Yes No No Runs on non-Microsoft platforms No No Yes No No Installed centrally with an MSI Yes Yes Yes Yes Yes Runs in-process with application Yes Yes Yes No No (as process started by app) 64-bit support Yes Yes Yes Yes Yes Runs as a service No – In process with application No - In process with application No - In process with application Yes No – as launched process Data file features File format Single file Single file Single file Multiple files Multiple files Data file storage on a network share No No No No No Support for different file extensions Yes Yes Yes No No Database size support 4 GB 4 GB 140 TB 10 GB 10 GB XML storage Yes – stored as ntext Yes - stored as ntext Yes, stored as text Yes, native Yes, native Binary (BLOB) storage Yes – stored as image Yes - stored as image Yes Yes Yes FILESTREAM support No No No Yes No Code free, document safe, file format Yes Yes Yes No No Programmability Transact-SQL - Common Query Features Yes Yes No Yes Yes Procedural T-SQL - Select Case, If, features No No Limited Yes Yes Remote Data Access (RDA) Yes No (not supported) No No No ADO.NET Sync Framework Yes No No Yes Yes LINQ to SQL Yes No (not supported) No Yes Yes ADO.NET Entity Framework 4.1 Yes (no Code First) Yes Yes Yes Yes ADO.NET Entity Framework 6 Yes (fully) Yes (fully) Yes (limited) Yes Yes Subscriber for merge replication Yes No No Yes No Simple transactions Yes Yes Yes Yes Yes Distributed transactions No No No Yes Yes Native XML, XQuery/XPath No No No Yes Yes Stored procedures, views, triggers No No Views and triggers Yes Yes Role-based security No No No Yes Yes Number of concurrent connections 256 (100) 256 Unlimited Unlimited Unlimited (but only local) There is also a table here that allows you to determine which Transact-SQL commands, features, and data types are supported by SQL Server Compact 3.5 (which are the same a 4.0 with very few exceptions), compared with SQL Server 2005 and 2008.
October 4, 2014
by Erik Ejlskov Jensen
· 24,094 Views
Checking for Null Values in Java with Objects.requireNonNull()
Checking method/constructor parameters for null values is a common task in Java. To assist you with this, various Java libraries provide validation utilities (see Guava Preconditions, Commons Lang Validate, or Spring's Assert documentation). However, if you only want to validate for non-null values, you can use the static requireNonNull() method of java.util.Objects. This is a little utility introduced in Java 7 that appears to be rarely known. With Objects.requireNonNull(), the following piece of code public void foo(SomeClass obj) { if (obj == null) { throw new NullPointerException("obj must not be null"); } // work with obj } can be replaced with: import java.util.Objects; public void foo(SomeClass obj) { Objects.requireNonNull(obj, "obj must not be null"); // work with obj }
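If you are already on Java 8, there is also an overload worth knowing that takes a Supplier for the message, so the message is only built when the check actually fails. A minimal sketch (the class and parameter names are placeholders):

import java.util.Objects;

public class RequireNonNullDemo {
    public void foo(Object obj) {
        // The message Supplier is only invoked if obj is actually null (overload added in Java 8)
        Objects.requireNonNull(obj, () -> "obj must not be null");
        // work with obj
    }

    public static void main(String[] args) {
        new RequireNonNullDemo().foo("not null"); // passes silently
        new RequireNonNullDemo().foo(null);       // throws NullPointerException with the lazy message
    }
}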
October 3, 2014
by Michael Scharhag
· 65,672 Views · 2 Likes
String Encoding with Mule
Sometimes one would want to handle strings which contain characters not included in UTF-8 or the default encoding (set in mule-deploy.properties). In these scenarios a different encoding which is capable of handling these characters (such as UTF-16 or UTF-32) can be used. To do so the default encoding can be easily changed by making a few modifications according to the type of transformer being used. Changing Encoding with the Datamapper When using the datamapper with data such as XML, one can easily choose the encoding by clicking on the settings button in the mapping (this should be set properly for both input and output) : Settings button datamapper A similar panel to the one below should appear: Changing Encoding when using “simple” transformers When using transformers such as object-to-string or byte-array-to-string, one would think that setting the “encoding” attribute on the transformer would do the trick: Unfortunately this doesn’t work, since the current Mule’s behaviour is to use this property just to set the MULE_ENCODING outbound property after the transformation is done. However, instead we should make sure that MULE_ENCODING outbound property is set properly before invoking the transformer. The transformer would then be able to transform the payload correctly for us.
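One way to set that outbound property before the transformer runs is from a small Java component placed earlier in the flow. This is only a sketch against the Mule 3 API; the component itself and the UTF-16 charset are assumptions rather than part of the original flow.

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;

public class ForceUtf16Encoding implements Callable {
    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        // Set MULE_ENCODING before the object-to-string / byte-array-to-string transformer runs,
        // so the transformer uses this charset instead of the default encoding.
        eventContext.getMessage().setOutboundProperty("MULE_ENCODING", "UTF-16");
        return eventContext.getMessage().getPayload();
    }
}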
October 3, 2014
by Andre Schembri
· 23,148 Views · 3 Likes
Building Projects with Eclipse from the Command Line
Eclipse has a great user interface (UI). But what if I want to do things from the command line, without the GUI? For example, to build one or more projects in the workspace without using the Eclipse UI? With this, I can do automated check-outs and automated builds.
Performing a command line project build with Eclipse
The solution to this: there is a command line version of Eclipse which I can use to run Eclipse from the command line. Inside the Eclipse folder on Windows, there is the eclipsec program, which is the command-line version of Eclipse.
eclipsec program, a command line version of Eclipse
The options of this command line version (for Eclipse Kepler) are described here: http://help.eclipse.org/kepler/index.jsp?topic=%2forg.eclipse.platform.doc.isv%2freference%2fmisc%2fruntime-options.html
For example, eclipsec.exe -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data c:\my_wsp -build k64f will launch Eclipse without a splash screen (-nosplash), use the -application option to load the managed make builder (which is used to build projects), use the workspace specified with -data, and build the project k64f with the -build command. More options and details are shown here: http://stackoverflow.com/questions/344797/build-several-cdt-c-projects-from-commandline and a very good article with additional background information on how to use it with the GNU ARM Eclipse plugins can be found here: http://gnuarmeclipse.livius.net/blog/headless-builds/
Happy headlessing :-)
October 2, 2014
by Erich Styger
· 18,389 Views
Java - Top 5 Exception Handling Coding Practices to Avoid
A look at Java exception handling coding practices that you may want to watch out for, and avoid, when writing exception-handling code.
October 1, 2014
by Ajitesh Kumar
· 112,700 Views · 3 Likes
Embedded Jetty and Apache CXF: Secure REST Services With Spring Security
Recently I ran into very interesting problem which I thought would take me just a couple of minutes to solve: protecting Apache CXF (current release 3.0.1)/ JAX-RS REST services with Spring Security (current stable version 3.2.5) in the application running inside embedded Jetty container (current release 9.2). At the end, it turns out to be very easy, once you understand how things work together and known subtle intrinsic details. This blog post will try to reveal that. Our example application is going to expose a simple JAX-RS / REST service to manage people. However, we do not want everyone to be allowed to do that so the HTTP basic authentication will be required in order to access our endpoint, deployed at http://localhost:8080/api/rest/people. Let us take a look on thePeopleRestService class: package com.example.rs; import javax.json.Json; import javax.json.JsonArray; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path( "/people" ) public class PeopleRestService { @Produces( { "application/json" } ) @GET public JsonArray getPeople() { return Json.createArrayBuilder() .add( Json.createObjectBuilder() .add( "firstName", "Tom" ) .add( "lastName", "Tommyknocker" ) .add( "email", "a@b.com" ) ) .build(); } } As you can see in the snippet above, nothing is pointing out to the fact that this REST service is secured, just couple of familiar JAX-RS annotations. Now, let us declare the desired security configuration following excellent Spring Security documentation. There are many ways to configure Spring Security but we are going to show off two of them: using in-memory authentication and using user details service, both built on top of WebSecurityConfigurerAdapter. Let us start with in-memory authentication as it is the simplest one: package com.example.config; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter; import org.springframework.security.config.http.SessionCreationPolicy; @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity( securedEnabled = true ) public class InMemorySecurityConfig extends WebSecurityConfigurerAdapter { @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication() .withUser( "user" ).password( "password" ).roles( "USER" ).and() .withUser( "admin" ).password( "password" ).roles( "USER", "ADMIN" ); } @Override protected void configure( HttpSecurity http ) throws Exception { http.httpBasic().and() .sessionManagement().sessionCreationPolicy( SessionCreationPolicy.STATELESS ).and() .authorizeRequests().antMatchers("/**").hasRole( "USER" ); } } In the snippet above there two users defined: user with the role USER and admin with the roles USER,ADMIN. We also protecting all URLs (/**) by setting authorization policy to allow access only users with roleUSER. Being just a part of the application configuration, let us plug it into the AppConfig class using @Importannotation. 
package com.example.config; import java.util.Arrays; import javax.ws.rs.ext.RuntimeDelegate; import org.apache.cxf.bus.spring.SpringBus; import org.apache.cxf.endpoint.Server; import org.apache.cxf.jaxrs.JAXRSServerFactoryBean; import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.DependsOn; import org.springframework.context.annotation.Import; import com.example.rs.JaxRsApiApplication; import com.example.rs.PeopleRestService; @Configuration @Import( InMemorySecurityConfig.class ) public class AppConfig { @Bean( destroyMethod = "shutdown" ) public SpringBus cxf() { return new SpringBus(); } @Bean @DependsOn ( "cxf" ) public Server jaxRsServer() { JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class ); factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) ); factory.setAddress( factory.getAddress() ); factory.setProviders( Arrays.< Object >asList( new JsrJsonpProvider() ) ); return factory.create(); } @Bean public JaxRsApiApplication jaxRsApiApplication() { return new JaxRsApiApplication(); } @Bean public PeopleRestService peopleRestService() { return new PeopleRestService(); } } At this point we have all the pieces except the most interesting one: the code which runs embedded Jettyinstance and creates proper servlet mappings, listeners, passing down the configuration we have created. package com.example; import java.util.EnumSet; import javax.servlet.DispatcherType; import org.apache.cxf.transport.servlet.CXFServlet; import org.eclipse.jetty.server.Server; import org.eclipse.jetty.servlet.FilterHolder; import org.eclipse.jetty.servlet.ServletContextHandler; import org.eclipse.jetty.servlet.ServletHolder; import org.springframework.web.context.ContextLoaderListener; import org.springframework.web.context.support.AnnotationConfigWebApplicationContext; import org.springframework.web.filter.DelegatingFilterProxy; import com.example.config.AppConfig; public class Starter { public static void main( final String[] args ) throws Exception { Server server = new Server( 8080 ); // Register and map the dispatcher servlet final ServletHolder servletHolder = new ServletHolder( new CXFServlet() ); final ServletContextHandler context = new ServletContextHandler(); context.setContextPath( "/" ); context.addServlet( servletHolder, "/rest/*" ); context.addEventListener( new ContextLoaderListener() ); context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() ); context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() ); // Add Spring Security Filter by the name context.addFilter( new FilterHolder( new DelegatingFilterProxy( "springSecurityFilterChain" ) ), "/*", EnumSet.allOf( DispatcherType.class ) ); server.setHandler( context ); server.start(); server.join(); } } Most of the code does not require any explanation except the the filter part. This is what I meant by subtle intrinsic detail: the DelegatingFilterProxy should be configured with the filter name which must be exactlyspringSecurityFilterChain, as Spring Security names it. With that, the security rules we have configured are going to apply to any JAX-RS service call (the security filter is executed before the Apache CXF servlet), requiring the full authentication. 
Let us quickly check that by building and running the project: mvn clean package java -jar target/jax-rs-2.0-spring-security-0.0.1-SNAPSHOT.jar Issuing the HTTP GET call without providing username and password does not succeed and returns HTTP status code 401. > curl -i http://localhost:8080/rest/api/people HTTP/1.1 401 Full authentication is required to access this resource WWW-Authenticate: Basic realm="Realm" Cache-Control: must-revalidate,no-cache,no-store Content-Type: text/html; charset=ISO-8859-1 Content-Length: 339 Server: Jetty(9.2.2.v20140723) The same HTTP GET call with username and password provided returns successful response (with some JSON generated by the server). > curl -i -u user:password http://localhost:8080/rest/api/people HTTP/1.1 200 OK Date: Sun, 28 Sep 2014 20:07:35 GMT Content-Type: application/json Content-Length: 65 Server: Jetty(9.2.2.v20140723) [{"firstName":"Tom","lastName":"Tommyknocker","email":"a@b.com"}] Excellent, it works like a charm! Turns out, it is really very easy. Also, as it was mentioned before, the in-memory authentication could be replaced with user details service, here is an example how it could be done: package com.example.config; import java.util.Arrays; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.userdetails.User; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.core.userdetails.UsernameNotFoundException; @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity(securedEnabled = true) public class UserDetailsSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService( userDetailsService() ); } @Bean public UserDetailsService userDetailsService() { return new UserDetailsService() { @Override public UserDetails loadUserByUsername( final String username ) throws UsernameNotFoundException { if( username.equals( "admin" ) ) { return new User( username, "password", true, true, true, true, Arrays.asList( new SimpleGrantedAuthority( "ROLE_USER" ), new SimpleGrantedAuthority( "ROLE_ADMIN" ) ) ); } else if ( username.equals( "user" ) ) { return new User( username, "password", true, true, true, true, Arrays.asList( new SimpleGrantedAuthority( "ROLE_USER" ) ) ); } return null; } }; } @Override protected void configure( HttpSecurity http ) throws Exception { http .httpBasic().and() .sessionManagement().sessionCreationPolicy( SessionCreationPolicy.STATELESS ).and() .authorizeRequests().antMatchers("/**").hasRole( "USER" ); } } Replacing the @Import( InMemorySecurityConfig.class ) with @Import( UserDetailsSecurityConfig.class ) 
in the AppConfig class leads to the same results, as both security configurations define identical sets of users and roles. I hope this blog post saves you some time and gives you a good starting point, as Apache CXF and Spring Security get along very well under the Jetty umbrella! The complete source code is available on GitHub.
September 30, 2014
by Andriy Redko
· 18,540 Views · 1 Like
Spring WebApplicationInitializer and ApplicationContextInitializer confusion
These are two concepts that I mix up occasionally - a WebApplicationInitializer and an ApplicationContextInitializer, and wanted to describe each of them to clarify them for myself. I have previously blogged about WebApplicationInitializerhere and here. It is relevant purely in a Servlet 3.0+ spec compliant servlet container and provides a hook to programmatically configure the servlet context. How does this help - you can have a web application without potentially any web.xml file, typically used in a Spring based web application to describe the root application context and the Spring web front controller called theDispatcherServlet. An example of using WebApplicationInitializer is the following: public class CustomWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer { @Override protected Class[] getRootConfigClasses() { return new Class[]{RootConfiguration.class}; } @Override protected Class[] getServletConfigClasses() { return new Class[]{MvcConfiguration.class}; } @Override protected String[] getServletMappings() { return new String[]{"/"}; } } Now, what is an ApplicationContextInitializer. It is essentially code that gets executed before the Spring application context gets completely created. A good use case for using an ApplicationContextInitializer would be to set a Spring environment profile programmatically, along these lines: public class DemoApplicationContextInitializer implements ApplicationContextInitializer { @Override public void initialize(ConfigurableApplicationContext ac) { ConfigurableEnvironment appEnvironment = ac.getEnvironment(); appEnvironment.addActiveProfile("demo"); } } If you have a Spring-Boot based application then registering an ApplicationContextInitializer is fairly straightforward: @Configuration @EnableAutoConfiguration @ComponentScan public class SampleWebApplication { public static void main(String[] args) { new SpringApplicationBuilder(SampleWebApplication.class) .initializers(new DemoApplicationContextInitializer()) .run(args); } } For a non Spring-Boot Spring application though, it is a little more tricky, if it is a programmatic configuration of web.xml, then the configuration is along these lines: public class CustomWebAppInitializer implements WebApplicationInitializer { @Override public void onStartup(ServletContext container) { AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext(); rootContext.register(RootConfiguration.class); ContextLoaderListener contextLoaderListener = new ContextLoaderListener(rootContext); container.addListener(contextLoaderListener); container.setInitParameter("contextInitializerClasses", "mvctest.web.DemoApplicationContextInitializer"); AnnotationConfigWebApplicationContext webContext = new AnnotationConfigWebApplicationContext(); webContext.register(MvcConfiguration.class); DispatcherServlet dispatcherServlet = new DispatcherServlet(webContext); ServletRegistration.Dynamic dispatcher = container.addServlet("dispatcher", dispatcherServlet); dispatcher.addMapping("/"); } } If it a normal web.xml configuration then the initializer can be specified this way: contextInitializerClasses com.myapp.spring.SpringContextProfileInit org.springframework.web.context.ContextLoaderListener So to conclude, except for the Initializer suffix, both WebApplicationInitializer and ApplicationContextInitializer serve fairly different purposes. 
Whereas the WebApplicationInitializer is used by a Servlet container at startup of the web application and provides a way to create the web application programmatically (a replacement for the web.xml file), ApplicationContextInitializer provides a hook to configure the Spring application context before it gets fully created.
September 30, 2014
by Biju Kunjummen
· 101,059 Views · 36 Likes
How to Setup Eclipse IDE for Sonar Analysis
This article describes the steps required to configure your Eclipse IDE for SonarQube so that developers are not required to leave the Eclipse IDE to manage their source code quality. Please feel free to comment or suggest anything important I have missed. Following are the key points described later in this article: installing and configuring SonarQube in Eclipse, and analyzing code using SonarQube.
Installing and Configuring SonarQube in Eclipse
Follow the steps given on the following pages to install and configure SonarQube in your Eclipse IDE. The page on installing SonarQube in Eclipse lists the steps required to install SonarQube in the IDE. Once it is downloaded, you will be prompted to restart the Eclipse IDE; you must restart Eclipse before proceeding. The page on configuring SonarQube in Eclipse lists the steps required to configure Eclipse for SonarQube. The key thing to note is that when you click Preferences > SonarQube > Servers, you see the default server entry for http://localhost:9000. Initially, as per the instruction page, seeing the default entry I went ahead to the next step of linking the project to the SonarQube server and running the analysis. However, it threw an error along the lines of "unknown version of SonarQube server". After a little research, I figured out that the error was thrown because the SonarQube server was not reachable, even though it appeared to be configured as http://localhost:9000. The solution is to go to Preferences > SonarQube > Servers, click the "Edit" button, and you will see the SonarQube server URL, username and password. The key is to enter the username and password and run the analysis once again.
Analyzing Code Using SonarQube
Once you are done with the installation and configuration of the SonarQube server in the Eclipse IDE, all you need to do for code analysis is right-click on your project and click SonarQube > Analyze, and that is it. The violations appear in the SonarQube analysis, as shown in the screenshot below.
SonarQube analysis in Eclipse
However, do note that these issues may not appear when you access the SonarQube server in the browser. For that to happen, you need to once again run "sonar-runner" from the project root.
September 29, 2014
by Ajitesh Kumar
· 89,057 Views
Java 8 Optional - Avoid Null and NullPointerException Altogether - and Keep It Pretty
There have been a couple of articles on null, NPE's and how to avoid them. They make some point, but could stress the easy, safe, beautiful aspects of Java 8's Optional. This article shows some way of dealing with optional values, without additional utility code. The old way Let's consider this code: String unsafeTypeDirName = project.getApplicationType().getTypeDirName(); System.out.println(unsafeTypeDirName); This can obviously break with NullPointerException if any term is null. A typical way of avoiding this: // safe, ugly, omission-prone if (project != null) { ApplicationType applicationType = project.getApplicationType(); if (applicationType != null) { String typeDirName = applicationType.getTypeDirName(); if (typeDirName != null) { System.out.println(typeDirName); } } } This won't explode, but is just ugly, and it's easy to avoid some null check. Java 8 Let's try with Java 8's Optional: // let's assume you will get this from your model in the future; in the meantime... Optional optionalProject = Optional.ofNullable(project); // safe, java 8, but still ugly and omission-prone if (optionalProject.isPresent()) { ApplicationType applicationType = optionalProject.get().getApplicationType(); Optional optionalApplicationType = Optional.ofNullable(applicationType); if (optionalApplicationType.isPresent()) { String typeDirName = optionalApplicationType.get().getTypeDirName(); Optional optionalTypeDirName = Optional.ofNullable(typeDirName); if (optionalTypeDirName.isPresent()) { System.out.println(optionalTypeDirName); } } As noted in a lot of posts, this isn't a lot better than null checks. Some argue that it makes your intent clear. I don't see any big difference, most null checks being pretty obvious on those kind of situations. Ok, let's use the functional interfaces and get more power from Optional: // safe, prettier Optional optionalTypeDirName = optionalProject .flatMap(project -> project.getApplicationTypeOptional()) .flatMap(applicationType -> applicationType.getTypeDirNameOptional()); optionalTypeDirName.ifPresent(typeDirName -> System.out.println(typeDirName)); flatMap() will always return an Optional, so no nulls possible here, and you avoid having to wrap/unwrap to Optional. Please note that I added *Optional() methods in the types for that. There are other ways to do it (map + flatMap to Optional::ofNullable is one). The best one: only return optional value where it makes sense: if you know the value will always be provided, make it non-optional. By the way, this advice works for old style null checks too. ifPresent() will only run the code if it's there. No default or anything. Let's just use member references to express the same in a tight way: // safe, yet prettier optionalProject .flatMap(Project::getApplicationTypeOptional) .flatMap(ApplicationType::getTypeDirNameOptional) .ifPresent(System.out::println); Or if you know that Project has an ApplicationType anyway: // safe, yet prettier optionalProject .map(Project::getApplicationType) .flatMap(ApplicationType::getTypeDirNameOptional) .ifPresent(System.out::println); Conclusion By using Optional, and never working with null, you could avoid null checks altogether. Since they aren't needed, you also avoid omitting a null check leading to NPEs. Still, make sure that values returned from legacy code (Map, ...), which can be null, are wrapped asap in Optional.
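A minimal sketch of that last point, wrapping a nullable Map lookup the moment it crosses into your code (the map contents here are just an example):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class LegacyWrappingExample {
    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>(); // legacy API that can hand back null
        settings.put("region", "eu-west");

        // Wrap the nullable result as soon as it comes out of the legacy call
        String timeout = Optional.ofNullable(settings.get("timeout")).orElse("30");
        Optional.ofNullable(settings.get("region")).ifPresent(System.out::println);

        System.out.println(timeout); // prints the default "30" since no timeout was configured
    }
}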
September 27, 2014
by Yannick Majoros
· 200,667 Views · 9 Likes
Executing Multiple Commands as Post-Build Steps in Eclipse
The GNU ARM Eclipse plugins from Liviu already offer several built-in actions which can be performed at the end of a build: creating a flash image, creating a listing file, and printing the code and data size.
GNU ARM Eclipse extra post-build steps
But what if I need different things, or even more things?
Post-Build Steps
For this there is the 'post-build steps' setting I can use: that command is executed at the end of the build.
Print size post-build step
Note: the post-build step is only executed if source files have been compiled and linked. If you want to enforce that there is always a 'true' post build, then you need to delete some files in the pre-build step to force a compile and link phase.
Multiple Post-Build Steps
But what if I need more than one action in the post-build step? I could call a batch or script file, but this is probably overkill in many cases, and it adds a dependency on that script file. A better approach is to directly execute multiple commands as the post-build step. Unfortunately, the documentation about the post-build step found with a web search is misleading (e.g. in the Eclipse Luna documentation): "command: specifies one or more commands to execute immediately after the execution of the build. use semicolons to separate multiple commands." Unfortunately, semicolons are plain wrong (at least they did not work for me) :-(. The solution is to use '&' (ampersand) to separate multiple commands on Windows. Tip: on Linux, use ';' to separate commands as noted in the documentation/help, and use '&' on Windows. Unfortunately, this makes the project not cross-platform.
Multiple post-build commands
And this works for me, at least under Windows 7 :-).
September 26, 2014
by Erich Styger
· 11,384 Views
CodePro Integration with Eclipse Kepler
CodePro Analytix is the premier Java software testing tool for Eclipse developers.
September 25, 2014
by Achala Chathuranga Aponso
· 31,777 Views · 5 Likes
Optional and Objects: Null Pointer Saviours!
No one loves NullPointerExceptions! Is there a way we can get rid of them? Maybe... A couple of techniques are discussed in this post: the Optional type (new in Java 8) and the Objects class (old Java 7 stuff ;-)).

Optional Type in Java 8

What is it? A new type (class) introduced in Java 8, meant to act as a 'wrapper' for an object of a specific type, or for scenarios where there is no object (null). In plain words, it's a better substitute for handling nulls (warning: it might not be very obvious at first!).

Basic Usage

It's a type (a class) – so, how do I create an instance of it? Just use one of the three static factory methods in the Optional class:

public static Optional<String> stringOptional(String input) {
    return Optional.of(input);
}

Plain and simple – create an Optional wrapper containing the value. Beware – this will throw an NPE in case the value itself is null!

public static Optional<String> stringNullableOptional(String input) {
    if (!new Random().nextBoolean()) {
        input = null;
    }
    return Optional.ofNullable(input);
}

Slightly better in my personal opinion. There is no risk of an NPE here – in case of a null input, an empty Optional is returned.

public static Optional<String> emptyOptional() {
    return Optional.empty();
}

In case you want to purposefully return an 'empty' value. 'Empty' does not imply null.

Alright – what about consuming/using an Optional?

public static void consumingOptional() {
    Optional<String> wrapped = Optional.of("aString");
    if (wrapped.isPresent()) {
        System.out.println("Got string - " + wrapped.get());
    } else {
        System.out.println("Gotcha !");
    }
}

A simple way is to check whether or not the Optional wrapper has an actual value (use the isPresent method) – this will make you wonder if it's any better than using if (myObj != null) ;-) Don't worry, I'll explain that as well.

public static void consumingNullableOptional() {
    String input = null;
    if (new Random().nextBoolean()) {
        input = "iCanBeNull";
    }
    Optional<String> wrapped = Optional.ofNullable(input);
    System.out.println(wrapped.orElse("default"));
}

One can use orElse to return a default value in case the wrapped value is null – the advantage is obvious: we avoid the verbosity of invoking isPresent before extracting the actual value.

public static void consumingEmptyOptional() {
    String input = null;
    if (new Random().nextBoolean()) {
        input = "iCanBeNull";
    }
    Optional<String> wrapped = Optional.ofNullable(input);
    System.out.println(wrapped.orElseGet(() -> {
        return "defaultBySupplier";
    }));
}

I was a little confused with this. Why two separate methods for similar goals? orElse and orElseGet could well have been overloaded (same name, different parameter). Anyway, the obvious difference is the parameter itself – you have the option of providing a lambda expression representing an instance of a Supplier (a functional interface). The practical difference is that the Supplier passed to orElseGet is only invoked when the wrapped value is absent, which matters when the default is expensive to compute (a small demonstration follows this article).

How is using Optional better than regular null checks? By and large, the major benefit of using Optional is being able to express your intent clearly – simply returning null from a method leaves the consumer in a sea of doubt (when the actual NPE occurs) as to whether or not it was intentional, and requires further introspection into the Javadocs (if any). With Optional, it's crystal clear! There are also ways in which you can completely avoid NPEs with Optional – as shown in the examples above, the use of Optional.ofNullable (during Optional creation) and orElse and orElseGet (during Optional consumption) shields us from NPEs altogether.

Another Savior: the Objects Class (in case you can't use Java 8)

Look at this code snippet:

package com.abhirockzz.wordpress.npesaviors;

import java.util.Map;
import java.util.Objects;

public class UsingObjects {

    String getVal(Map<String, String> aMap, String key) {
        return aMap.containsKey(key) ? aMap.get(key) : null;
    }

    public static void main(String[] args) {
        UsingObjects obj = new UsingObjects();
        obj.getVal(null, "dummy");
    }
}

What can possibly be null? The Map object, the key against which the search is being executed, and the instance on which the method is being called. When an NPE is thrown in this case, we can never be sure what exactly is null.

Enter the Objects class:

package com.abhirockzz.wordpress.npesaviors;

import java.util.Map;
import java.util.Objects;

public class UsingObjects {

    String getValSafe(Map<String, String> aMap, String key) {
        Map<String, String> safeMap = Objects.requireNonNull(aMap, "Map is null");
        String safeKey = Objects.requireNonNull(key, "Key is null");
        return safeMap.containsKey(safeKey) ? safeMap.get(safeKey) : null;
    }

    public static void main(String[] args) {
        UsingObjects obj = new UsingObjects();
        obj.getValSafe(null, "dummy");
    }
}

The requireNonNull method simply returns the value in case it's not null, and throws an NPE with the specified message in case the value is null. Why is this better than if (myObj != null)? The stack trace you see will clearly contain the Objects.requireNonNull method call. This, along with your custom error message, will help you catch bugs faster... much faster, IMO!

You can write your own user-defined checks as well, e.g., implementing a simple check which enforces non-emptiness:

import java.util.Collections;
import java.util.List;
import java.util.Objects;
import java.util.function.Predicate;

public class RandomGist {

    public static <T> T requireNonEmpty(T object, Predicate<T> predicate, String msgToCaller) {
        Objects.requireNonNull(object);
        Objects.requireNonNull(predicate);
        if (predicate.test(object)) {
            throw new IllegalArgumentException(msgToCaller);
        }
        return object;
    }

    public static void main(String[] args) {
        // Usage 1: an empty string (intentional)
        String s = "";
        System.out.println(requireNonEmpty(Objects.requireNonNull(s), (s1) -> s1.isEmpty(), "My String is Empty!"));

        // Usage 2: an empty List (intentional)
        List<String> list = Collections.emptyList();
        System.out.println(requireNonEmpty(Objects.requireNonNull(list), (l) -> l.isEmpty(), "List is Empty!").size());

        // Usage 3: an empty User (intentional)
        User user = new User("");
        System.out.println(requireNonEmpty(Objects.requireNonNull(user), (u) -> u.getName().isEmpty(), "User is Empty!"));
    }

    private static class User {
        private String name;

        public User(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }
}

Don't let NPEs be a pain in the wrong place. We have more than a decent set of tools at our disposal to better handle NPEs, or eradicate them altogether! Cheers! :-)
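As promised above, here is a minimal sketch of the orElse versus orElseGet difference. The class and method names are mine, not from the post; it only demonstrates that the Supplier given to orElseGet is not invoked when a value is present, while the orElse argument is always evaluated:

import java.util.Optional;

public class OrElseVsOrElseGet {

    static String expensiveDefault() {
        System.out.println("computing default...");
        return "default";
    }

    public static void main(String[] args) {
        Optional<String> present = Optional.of("value");

        // orElse: expensiveDefault() runs even though a value is present,
        // so "computing default..." is printed before "value"
        System.out.println(present.orElse(expensiveDefault()));

        // orElseGet: the Supplier is only invoked if the Optional is empty,
        // so nothing extra is printed here
        System.out.println(present.orElseGet(OrElseVsOrElseGet::expensiveDefault));
    }
}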
September 25, 2014
by Abhishek Gupta · DZone Core
· 8,209 Views