
The Latest Coding Topics

Add Java 8 support to Eclipse Kepler
Want to add Java 8 support to Kepler? Java 8 has not yet landed in our standard download packages, but you can add it to your existing Eclipse Kepler package. I've got three different Eclipse installations running Java 8: a brand new Kepler SR2 installation of the Eclipse IDE for Java Developers; a slightly used Kepler SR1 installation of the Eclipse for RCP/RAP Developers (with lots of other features already added); and a nightly build (dated March 24, 2014) of the Eclipse 4.4 SDK. The JDT team recommends that you start from Kepler SR2, the second and final service release for Kepler (but using the exact same steps, I've installed it into Kepler SR1 and SR2 packages). There are detailed instructions for adding Java 8 support by installing a feature patch in the Eclipsepedia wiki. The short version is this:

1. From Kepler SR2, use the "Help > Install New Software…" menu option to open the "Available Software" dialog.
2. Enter http://download.eclipse.org/eclipse/updates/4.3-p-builds/ into the "Work with" field.
3. Put a checkbox next to "Eclipse Java 8 Support (for Kepler SR2)".
4. Click "Next", click "Next", read and accept the license, and click "Finish".
5. Watch the pretty progress bar move relatively quickly across the bottom of the window.
6. Restart Eclipse when prompted.

Voila! Support for Java 8 is installed. If you've already got the Java 8 JDK installed and the corresponding JRE is the default on your system, you're done. If you're not quite ready to make the leap to a Java 8 JRE, there's still hope (my system is still configured with Java 7 as the default):

1. Install the Java 8 JDK.
2. Open the Eclipse preferences and navigate to "Java > Installed JREs".
3. Click "Add…".
4. Select "Standard VM" and click "Next".
5. Enter the path to the Java 8 JRE (note that this varies depending on the platform, and on how you obtain and install the bits).
6. Click "Finish".

Before closing the preferences window, you can set your workspace preference to use the newly installed Java 8 JRE. Or, if you're just planning to experiment with Java 8 for a while, you can configure this on a project-by-project basis: in the "Create a Java Project" dialog, specify that your project will use a JavaSE-1.8 JRE. It's probably better to do this on the project, as this becomes a project setting that will follow the project into your version control system. Next step… learn how wrong my initial impressions of Java 8 were (hint: it's far better). The lambda is so choice. If you have the means, I highly recommend picking one up.
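Once the JRE is configured, a quick way to confirm everything is wired up is to compile something that only Java 8 accepts. This is a minimal smoke test of my own (the class name is arbitrary, not part of the original instructions):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// If this compiles in your project, the JavaSE-1.8 JRE is active:
// lambdas and the streams API are Java 8 only.
public class Java8SmokeTest {

    static List<String> shout(List<String> names) {
        // A lambda passed to the streams API -- rejected by a Java 7 compiler
        return names.stream()
                .map(name -> name.toUpperCase())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(shout(Arrays.asList("kepler", "luna")));
    }
}
```

If the project is still bound to an older JRE, the lambda on the `map` line is the first thing the compiler will flag.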
March 30, 2014
by Wayne Beaton
· 67,303 Views · 1 Like
Servlet 3.0 ServletContainerInitializer and Spring WebApplicationInitializer
Spring WebApplicationInitializer provides a programmatic way to configure the Spring DispatcherServlet and ContextLoaderListener in Servlet 3.0+ compliant servlet containers, rather than adding this configuration through a web.xml file. This is a quick note to show how an implementation of the WebApplicationInitializer interface internally works, given that this interface does not derive from any Servlet-related interface! The answer is the ServletContainerInitializer interface introduced with the Servlet 3.0 specification: implementors of this interface are notified during the context startup phase and can perform any programmatic registration through the provided ServletContext. Spring implements ServletContainerInitializer through the SpringServletContainerInitializer class. Per the Servlet specs, this implementation must be declared in a META-INF/services/javax.servlet.ServletContainerInitializer file of the library's jar file - Spring declares this in the spring-web*.jar file, with the entry `org.springframework.web.SpringServletContainerInitializer`. The SpringServletContainerInitializer class has a @HandlesTypes annotation with a value of WebApplicationInitializer; this means the Servlet container will scan for classes implementing WebApplicationInitializer and call the onStartup method with these classes, and that is where WebApplicationInitializer fits in. A little convoluted, but the good thing is that all these details are totally abstracted away within the spring-web framework: the developer only has to configure an implementation of WebApplicationInitializer and live in a web.xml-free world.
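For context, a minimal WebApplicationInitializer looks something like the sketch below. The package, class names, and the assumed `AppConfig` @Configuration class are hypothetical, and spring-web plus the Servlet 3.0 API must be on the classpath; this is the programmatic equivalent of the DispatcherServlet entry in web.xml:

```java
package example.config;

import javax.servlet.ServletContext;
import javax.servlet.ServletRegistration;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class MyWebAppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) {
        // AppConfig is an assumed @Configuration class, not defined here
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(AppConfig.class);

        // Register the DispatcherServlet programmatically, as web.xml would have
        ServletRegistration.Dynamic dispatcher =
                servletContext.addServlet("dispatcher", new DispatcherServlet(context));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}
```

The container discovers SpringServletContainerInitializer via the META-INF/services mechanism described above, and it in turn calls onStartup on this class.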
March 29, 2014
by Biju Kunjummen
· 15,849 Views
Converting Markdown to PDF with PHP
Recently, I had to take some content in markdown, specifically Markdown Extra, and convert it to a series of PDFs styled with specific branding. While some will argue that PDFs are dead and "long live the web", many of us still need to produce PDFs for one reason or another. In this case I had to take Markdown Extra, with some HTML sprinkled in, clean it up, and convert it to a styled PDF. What follows is how I did that using QueryPath for the cleanup and DOMPDF to make the conversion.

The Setup

At the root of this little app was a PHP script with the dependencies managed through Composer. The composer.json file looked like:

```json
{
    "name": "foo/bar",
    "description": "Convert markdown to PDF.",
    "type": "application",
    "require": {
        "php": ">=5.3.0",
        "michelf/php-markdown": "1.4.*",
        "dompdf/dompdf": "0.6.*",
        "querypath/querypath": "3.*",
        "masterminds/html5": "1.*"
    }
}
```

Turning the Markdown into HTML

Within the script I started with a file we'll call $file. From here it was easy using the official Markdown Extra conversion utility:

```php
$markdown = file_get_contents($file);
$markdownParser = new \Michelf\MarkdownExtra();
$html = $markdownParser->transform($markdown);
```

This produces the HTML needed to go inside the body of an HTML page. From here I wrapped it in a document so I could easily link to a CSS file for styling purposes. DOMPDF supports quite a bit of CSS 2.1.

```php
// Minimal wrapper (reconstructed -- the original markup was stripped
// during publishing): link pdf.css and wrap the generated body.
$html = '<html><head><link rel="stylesheet" href="pdf.css"></head><body>'
      . $html
      . '</body></html>';
```

pdf.css is where you can style the PDF. If you know how to style web pages using CSS, you can manage to style a PDF document.

Cleaning Up the Content

There were a number of places HTML had been injected into the markdown that was either broken, unwanted in a PDF, or an edge case that DOMPDF didn't support. To make these changes I used QueryPath.
For example, I needed to take relative links, normally used in the generation of a website, and add a domain name to them:

```php
$dom = \HTML5::loadHTML($html);
$links = htmlqp($dom, 'a');
foreach ($links as $link) {
    $href = $link->attr('href');
    if (substr($href, 0, 1) == '/' && substr($href, 1, 1) != '/') {
        $link->attr('href', $domain_name . $href);
    }
}
$html = \HTML5::saveHTML($dom);
```

Note, I used the HTML5 parser and writer rather than the built-in one designed for XHTML and HTML 4. This is because DOMPDF attempts to work with HTML5 and I wanted to keep that consistent from the beginning.

Converting to PDF

There is a little setup before using DOMPDF. It has a built-in autoloader which should be disabled, and it needs a config file. In my case I used the default config file and handled this with:

```php
define('DOMPDF_ENABLE_AUTOLOAD', false);
require_once __DIR__ . '/vendor/dompdf/dompdf/dompdf_config.inc.php';
```

The conversion was fairly straightforward. I used a snippet like:

```php
$dompdf = new DOMPDF();
$dompdf->load_html($html);
$dompdf->render();
$output = $dompdf->output();
file_put_contents('path/to/file.pdf', $output);
```

DOMPDF has a lot of options and some quirks. It wasn't exactly designed for Composer. For example, if you want to work with custom fonts you need to get the project from git and install submodules. Despite the quirks, needing to clean up some of the HTML, and having to brand the documents, I was able to write a conversion script that handled dozens of documents quickly. Almost all of my time was spent on HTML cleanup and CSS styling.
March 28, 2014
by Matt Farina
· 9,686 Views
How to Run a SQL Query Across Multiple Databases with One Query
In SQL Server Management Studio, open View > Registered Servers (Ctrl+Alt+G) and set up the servers you want to execute the same query across. Right-click the server group and select New Query. When you execute the query, the results will come back with the first column showing the database instance each row came from.
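As a hypothetical example, a query like the one below run against a server group executes once per registered instance, and SSMS merges the result sets with the originating server's name in the first column:

```sql
-- Run from a server-group query window: SSMS executes this against every
-- registered instance and prefixes each row with the server name.
SELECT
    @@SERVERNAME AS instance_name,  -- redundant with the auto-added column, but explicit
    name,
    state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE';       -- e.g. find non-online databases across the fleet
```

This is handy for fleet-wide checks (version audits, failed jobs, database states) without running the same script server by server.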
March 28, 2014
by Merrick Chaffer
· 48,997 Views
Documenting Your Spring API with Swagger
Over the last several months, I've been developing a REST API using Spring Boot. My client hired an outside company to develop a native iOS app, and my development team was responsible for developing its API. Our main task involved integrating with Epic, a popular software system used in health care. We also developed a Crowd-backed authentication system, based loosely on Philip Sorst's Angular REST Security. To document our API, we used Spring MVC integration for Swagger (a.k.a. swagger-springmvc). I briefly looked into swagger4spring-web, but gave up quickly when it didn't recognize Spring's @RestController. We started with swagger-springmvc 0.6.5 and found it fairly easy to integrate. Unfortunately, it didn't allow us to annotate our model objects and tell clients which fields were required. We were quite pleased when a new version (0.8.2) was released that supports Swagger 1.3 and its @ApiModelProperty.

What is Swagger?

The goal of Swagger is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or network traffic inspection. To demonstrate how Swagger works, I integrated it into Josh Long's x-auth-security project. If you have a Boot-powered project, you should be able to use the same steps.

1. Add the swagger-springmvc dependency to your project:

```xml
<dependency>
    <groupId>com.mangofactory</groupId>
    <artifactId>swagger-springmvc</artifactId>
    <version>0.8.2</version>
</dependency>
```

Note: on my client's project, we had to exclude "org.slf4j:slf4j-log4j12" and add "jackson-module-scala_2.10:2.3.1" as a dependency. I did not need to do either of these in this project.

2. Add a SwaggerConfig class to configure Swagger. The swagger-springmvc documentation has an example of this with a bit more XML.
```java
package example.config;

import com.mangofactory.swagger.configuration.JacksonScalaSupport;
import com.mangofactory.swagger.configuration.SpringSwaggerConfig;
import com.mangofactory.swagger.configuration.SpringSwaggerModelConfig;
import com.mangofactory.swagger.configuration.SwaggerGlobalSettings;
import com.mangofactory.swagger.core.DefaultSwaggerPathProvider;
import com.mangofactory.swagger.core.SwaggerApiResourceListing;
import com.mangofactory.swagger.core.SwaggerPathProvider;
import com.mangofactory.swagger.scanners.ApiListingReferenceScanner;
import com.wordnik.swagger.model.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import static com.google.common.collect.Lists.newArrayList;

@Configuration
@ComponentScan(basePackages = "com.mangofactory.swagger")
public class SwaggerConfig {

    public static final List DEFAULT_INCLUDE_PATTERNS = Arrays.asList("/news/.*");
    public static final String SWAGGER_GROUP = "mobile-api";

    @Value("${app.docs}")
    private String docsLocation;

    @Autowired
    private SpringSwaggerConfig springSwaggerConfig;

    @Autowired
    private SpringSwaggerModelConfig springSwaggerModelConfig;

    /**
     * Adds the Jackson Scala module to the MappingJackson2HttpMessageConverter registered with Spring.
     * Swagger core models are Scala so we need to be able to convert to JSON.
     * Also registers some custom serializers needed to transform Swagger models
     * to the swagger-ui required JSON format.
     */
    @Bean
    public JacksonScalaSupport jacksonScalaSupport() {
        JacksonScalaSupport jacksonScalaSupport = new JacksonScalaSupport();
        // Set to false to disable
        jacksonScalaSupport.setRegisterScalaModule(true);
        return jacksonScalaSupport;
    }

    /**
     * Global Swagger settings
     */
    @Bean
    public SwaggerGlobalSettings swaggerGlobalSettings() {
        SwaggerGlobalSettings swaggerGlobalSettings = new SwaggerGlobalSettings();
        swaggerGlobalSettings.setGlobalResponseMessages(springSwaggerConfig.defaultResponseMessages());
        swaggerGlobalSettings.setIgnorableParameterTypes(springSwaggerConfig.defaultIgnorableParameterTypes());
        swaggerGlobalSettings.setParameterDataTypes(springSwaggerModelConfig.defaultParameterDataTypes());
        return swaggerGlobalSettings;
    }

    /**
     * API info as it appears on the swagger-ui page
     */
    private ApiInfo apiInfo() {
        ApiInfo apiInfo = new ApiInfo(
                "News API",
                "Mobile applications and beyond!",
                "https://helloreverb.com/terms/",
                "matt@raibledesigns.com",
                "Apache 2.0",
                "http://www.apache.org/licenses/LICENSE-2.0.html"
        );
        return apiInfo;
    }

    /**
     * Configure a SwaggerApiResourceListing for each swagger instance within your app,
     * e.g. 1. private 2. external APIs. Required to be a Spring bean as Spring will
     * call the postConstruct method to bootstrap swagger scanning.
     */
    @Bean
    public SwaggerApiResourceListing swaggerApiResourceListing() {
        // The group name is important and should match the group set on ApiListingReferenceScanner;
        // note that swaggerCache() is used by DefaultSwaggerController to serve the Swagger JSON
        SwaggerApiResourceListing swaggerApiResourceListing =
                new SwaggerApiResourceListing(springSwaggerConfig.swaggerCache(), SWAGGER_GROUP);

        // Set the required Swagger settings
        swaggerApiResourceListing.setSwaggerGlobalSettings(swaggerGlobalSettings());

        // Use a custom path provider or springSwaggerConfig.defaultSwaggerPathProvider()
        swaggerApiResourceListing.setSwaggerPathProvider(apiPathProvider());

        // Supply the API info as it should appear on the swagger-ui web page
        swaggerApiResourceListing.setApiInfo(apiInfo());

        // Global authorization - see the Swagger documentation
        swaggerApiResourceListing.setAuthorizationTypes(authorizationTypes());

        // Every SwaggerApiResourceListing needs an ApiListingReferenceScanner
        // to scan the Spring request mappings
        swaggerApiResourceListing.setApiListingReferenceScanner(apiListingReferenceScanner());
        return swaggerApiResourceListing;
    }

    /**
     * The ApiListingReferenceScanner does most of the work.
     * Scans the appropriate Spring RequestMappingHandlerMappings.
     * Applies the correct absolute paths to the generated Swagger resources.
     */
    @Bean
    public ApiListingReferenceScanner apiListingReferenceScanner() {
        ApiListingReferenceScanner apiListingReferenceScanner = new ApiListingReferenceScanner();

        // Picks up all of the registered Spring RequestMappingHandlerMappings for scanning
        apiListingReferenceScanner.setRequestMappingHandlerMapping(springSwaggerConfig.swaggerRequestMappingHandlerMappings());

        // Excludes any controllers with the supplied annotations
        apiListingReferenceScanner.setExcludeAnnotations(springSwaggerConfig.defaultExcludeAnnotations());

        // apiListingReferenceScanner.setResourceGroupingStrategy(springSwaggerConfig.defaultResourceGroupingStrategy());

        // Path provider used to generate the appropriate uri's
        apiListingReferenceScanner.setSwaggerPathProvider(apiPathProvider());

        // Must match the Swagger group set on the SwaggerApiResourceListing
        apiListingReferenceScanner.setSwaggerGroup(SWAGGER_GROUP);

        // Only include paths that match the supplied regular expressions
        apiListingReferenceScanner.setIncludePatterns(DEFAULT_INCLUDE_PATTERNS);

        return apiListingReferenceScanner;
    }

    /**
     * Example of a custom path provider
     */
    @Bean
    public ApiPathProvider apiPathProvider() {
        ApiPathProvider apiPathProvider = new ApiPathProvider(docsLocation);
        apiPathProvider.setDefaultSwaggerPathProvider(springSwaggerConfig.defaultSwaggerPathProvider());
        return apiPathProvider;
    }

    private List authorizationTypes() {
        ArrayList authorizationTypes = new ArrayList<>();
        List authorizationScopeList = newArrayList();
        authorizationScopeList.add(new AuthorizationScope("global", "access all"));
        List grantTypes = newArrayList();
        LoginEndpoint loginEndpoint = new LoginEndpoint(apiPathProvider().getAppBasePath() + "/user/authenticate");
        grantTypes.add(new ImplicitGrant(loginEndpoint, "access_token"));
        return authorizationTypes;
    }

    @Bean
    public SwaggerPathProvider relativeSwaggerPathProvider() {
        return new ApiRelativeSwaggerPathProvider();
    }

    private class ApiRelativeSwaggerPathProvider extends DefaultSwaggerPathProvider {
        @Override
        public String getAppBasePath() {
            return "/";
        }

        @Override
        public String getSwaggerDocumentationBasePath() {
            return "/api-docs";
        }
    }
}
```

The ApiPathProvider class referenced above is as follows:

```java
package example.config;

import com.mangofactory.swagger.core.SwaggerPathProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.util.UriComponentsBuilder;
import javax.servlet.ServletContext;

public class ApiPathProvider implements SwaggerPathProvider {
    private SwaggerPathProvider defaultSwaggerPathProvider;

    @Autowired
    private ServletContext servletContext;

    private String docsLocation;

    public ApiPathProvider(String docsLocation) {
        this.docsLocation = docsLocation;
    }

    @Override
    public String getApiResourcePrefix() {
        return defaultSwaggerPathProvider.getApiResourcePrefix();
    }

    public String getAppBasePath() {
        return UriComponentsBuilder
                .fromHttpUrl(docsLocation)
                .path(servletContext.getContextPath())
                .build()
                .toString();
    }

    @Override
    public String getSwaggerDocumentationBasePath() {
        return UriComponentsBuilder
                .fromHttpUrl(getAppBasePath())
                .pathSegment("api-docs/")
                .build()
                .toString();
    }

    @Override
    public String getRequestMappingEndpoint(String requestMappingPattern) {
        return defaultSwaggerPathProvider.getRequestMappingEndpoint(requestMappingPattern);
    }

    public void setDefaultSwaggerPathProvider(SwaggerPathProvider defaultSwaggerPathProvider) {
        this.defaultSwaggerPathProvider = defaultSwaggerPathProvider;
    }
}
```

In src/main/resources/application.properties, add an "app.docs" property. This will need to be changed as you move your application from local -> test -> staging -> production.
Spring Boot's externalized configuration makes this fairly simple:

```properties
app.docs=http://localhost:8080
```

3. Verify Swagger produces JSON. After completing the above steps, you should be able to see the JSON Swagger generates for your API. Open http://localhost:8080/api-docs in your browser or curl http://localhost:8080/api-docs.

```json
{
    "apiVersion": "1",
    "swaggerVersion": "1.2",
    "apis": [
        {
            "path": "http://localhost:8080/api-docs/mobile-api/example_newscontroller",
            "description": "example.NewsController"
        }
    ],
    "info": {
        "title": "News API",
        "description": "Mobile applications and beyond!",
        "termsOfServiceUrl": "https://helloreverb.com/terms/",
        "contact": "matt@raibledesigns.com",
        "license": "Apache 2.0",
        "licenseUrl": "http://www.apache.org/licenses/LICENSE-2.0.html"
    }
}
```

4. Copy Swagger UI into your project. Swagger UI is a good-looking JavaScript client for Swagger's JSON. I integrated it using the following steps:

```shell
git clone https://github.com/wordnik/swagger-ui
cp -r swagger-ui/dist ~/dev/x-auth-security/src/main/resources/public/docs
```

I modified docs/index.html, deleting its header element, as well as making its URL dynamic:

```javascript
$(function () {
    var apiUrl = window.location.protocol + "//" + window.location.host;
    if (window.location.pathname.indexOf('/api') > 0) {
        apiUrl += window.location.pathname.substring(0, window.location.pathname.indexOf('/api'))
    }
    apiUrl += "/api-docs";
    log('api url: ' + apiUrl);
    window.swaggerUi = new SwaggerUi({
        url: apiUrl,
        dom_id: "swagger-ui-container",
        ...
```

After making these changes, I was able to fire up the app with "mvn spring-boot:run" and view http://localhost:8080/docs/index.html in my browser.

5. Annotate your API. There are two services in x-auth-security: one for authentication and one for news. To provide more information in the "news" service's documentation, add @Api and @ApiOperation annotations.
These annotations aren't necessary to get a service to show up in Swagger UI, but if you don't specify @Api("user"), you'll end up with an ugly-looking class name instead (e.g. example_xauth_userxauthtokencontroller).

```java
@RestController
@Api(value = "news", description = "News API")
class NewsController {

    Map entries = new ConcurrentHashMap();

    @RequestMapping(value = "/news", method = RequestMethod.GET)
    @ApiOperation(value = "Get News", notes = "Returns news items")
    Collection entries() {
        return this.entries.values();
    }

    @RequestMapping(value = "/news/{id}", method = RequestMethod.DELETE)
    @ApiOperation(value = "Delete News item", notes = "Deletes news item by id")
    NewsEntry remove(@PathVariable Long id) {
        return this.entries.remove(id);
    }

    @RequestMapping(value = "/news/{id}", method = RequestMethod.GET)
    @ApiOperation(value = "Get a news item", notes = "Returns a news item")
    NewsEntry entry(@PathVariable Long id) {
        return this.entries.get(id);
    }

    @RequestMapping(value = "/news/{id}", method = RequestMethod.POST)
    @ApiOperation(value = "Update News", notes = "Updates a news item")
    NewsEntry update(@RequestBody NewsEntry news) {
        this.entries.put(news.getId(), news);
        return news;
    }
    ...
}
```

You might notice the screenshot above only shows news. This is because SwaggerConfig.DEFAULT_INCLUDE_PATTERNS only specifies /news. The following will include all APIs:

```java
public static final List DEFAULT_INCLUDE_PATTERNS = Arrays.asList("/.*");
```

After adding these annotations and modifying SwaggerConfig, you should see all available services. In swagger-springmvc 0.8.x, the ability to use @ApiModel and @ApiModelProperty annotations was added. This means you can annotate NewsEntry to specify which fields are required.
```java
@ApiModel("News entry")
public static class NewsEntry {

    @ApiModelProperty(value = "the id of the item", required = true)
    private Long id;

    @ApiModelProperty(value = "content", required = true)
    private String content;

    // getters and setters
}
```

This results in the model's documentation showing up in Swagger UI. If "required" isn't specified, a property shows up as optional.

Parting Thoughts

The QA engineers and third-party iOS developers have been very pleased with our API documentation. I believe this is largely due to Swagger and its nice-looking UI. The Swagger UI also provides an interface to test the endpoints by entering parameters (or JSON) into HTML forms and clicking buttons. This could benefit those QA folks who prefer using Selenium to test HTML (vs. raw REST endpoints). I've been quite pleased with swagger-springmvc, so kudos to its developers. They've been very responsive in fixing issues I've reported. The only thing I'd like is support for recognizing JSR 303 annotations (e.g. @NotNull) as required fields. To see everything running locally, check out my modified x-auth-security project on GitHub and the associated commits for this article.
March 27, 2014
by Matt Raible
· 119,630 Views · 5 Likes
jdeps: JDK 8 Command-line Static Dependency Checker
Here's a great JDK 8 command-line static dependency checker.
March 27, 2014
by Dustin Marx
· 23,724 Views · 1 Like
XA Object Store Location is Relative to the Start Up Location in Mule
A Mule application which uses the JBoss TX transaction manager needs a persistent object store to hold the objects and states of the transactions being processed (further information about the different object stores can be found on the following page). By default Mule uses the ShadowNoFileLockStore, which uses the file system to store the objects. As one can guess, if an application does not have permission to write the object store to the file system, the JBoss transaction manager will not be able to work properly and will throw an exception similar to the following:

```
com.arjuna.ats.arjuna: ARJUNA12218: cant create new instance of {0}
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    ...
Caused by: com.arjuna.ats.arjuna.exceptions.ObjectStoreException: ARJUNA12225: FileSystemStore::setupStore - cannot access root of object store: //ObjectStore/ShadowNoFileLockStore/defaultStore/
    at com.arjuna.ats.internal.arjuna.objectstore.FileSystemStore.<init>(FileSystemStore.java:482)
    at com.arjuna.ats.internal.arjuna.objectstore.ShadowingStore.<init>(ShadowingStore.java:619)
    at com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore.<init>(ShadowNoFileLockStore.java:53)
    ... 36 more
```

Since the object store is not created, the XA transaction manager is not initialised properly. This will throw a 'Could not initialize class' exception whenever the transaction manager is invoked:

```
org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: errorCode: 0
javax.resource.spi.work.WorkCompletedException: errorCode: 0
    at org.mule.work.WorkerContext.run(WorkerContext.java:335)
    at java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2025)
    ...
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.arjuna.ats.arjuna.coordinator.TxControl
    at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.begin(BaseTransaction.java:87)
    at org.mule.transaction.XaTransaction.doBegin(XaTransaction.java:63)
    at org.mule.transaction.AbstractTransaction.begin(AbstractTransaction.java:66)
    at org.mule.transaction.XaTransactionFactory.beginTransaction(XaTransactionFactory.java:32)
    at org.mule.execution.BeginAndResolveTransactionInterceptor.execute(BeginAndResolveTransactionInterceptor.java:51)
    at org.mule.execution.ResolvePreviousTransactionInterceptor.execute(ResolvePreviousTransactionInterceptor.java:48)
    at org.mule.execution.SuspendXaTransactionInterceptor.execute(SuspendXaTransactionInterceptor.java:54)
    at org.mule.execution.ValidateTransactionalStateInterceptor.execute(ValidateTransactionalStateInterceptor.java:44)
    at org.mule.execution.IsolateCurrentTransactionInterceptor.execute(IsolateCurrentTransactionInterceptor.java:44)
    at org.mule.execution.ExternalTransactionInterceptor.execute(ExternalTransactionInterceptor.java:52)
    at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:32)
    at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:17)
    at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:113)
    at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:34)
    at org.mule.transport.jms.XaTransactedJmsMessageReceiver.poll(XaTransactedJmsMessageReceiver.java:214)
    at org.mule.transport.AbstractPollingMessageReceiver.performPoll(AbstractPollingMessageReceiver.java:219)
    at org.mule.transport.PollingReceiverWorker.poll(PollingReceiverWorker.java:84)
    at org.mule.transport.PollingReceiverWorker.run(PollingReceiverWorker.java:53)
    at org.mule.work.WorkerContext.run(WorkerContext.java:311)
    ... 15 more
```

Mule computes the default directory where to write the object store as follows: muleInternalDir = config.getWorkingDirectory(); (see the code for further analysis). If Mule is started from a directory where the user does not have write permissions, the problems mentioned above will be faced. The easiest way to fix this issue is to make sure that the user running Mule has full write permission to the working directory. If that cannot be achieved, fear not: there is a solution. On first analysis, one would be tempted to set the object store directory by using Spring properties. Unfortunately this will not work, since the JBoss transaction manager is a singleton and this property is used in the constructor of the object. Hence behaviour similar to the following will be experienced:

```
Caused by: com.arjuna.ats.arjuna.exceptions.ObjectStoreException: ARJUNA12225: FileSystemStore::setupStore - cannot access root of object store: PutObjectStoreDirHere/ShadowNoFileLockStore/defaultStore/
```

(Please note that "PutObjectStoreDirHere" is the default directory assigned by the JBoss TX transaction manager.) One way to go around this issue is to be sure that these properties are set before the object is initialised. There are at least two ways to achieve this:

1. Set the properties on start up as follows:

```shell
./mule -M-Dcom.arjuna.ats.arjuna.objectstore.objectStoreDir=/path/to/objectstoreDir \
       -M-DObjectStoreEnvironmentBean.objectStoreDir=/path/to/objectstoreDir
```

2. Set the properties in the wrapper.config as follows:

```
wrapper.java.additional.x=-Dcom.arjuna.ats.arjuna.objectstore.objectStoreDir=/path/to/objectstoreDir
wrapper.java.additional.x=-DObjectStoreEnvironmentBean.objectStoreDir=/path/to/objectstoreDir
```

(x is the next number available in the wrapper.config; by default this is 4.) Otherwise, take the easiest route and make sure that Mule can write to the start-up directory.
March 27, 2014
by Andre Schembri
· 6,698 Views
How To Add Images To A GitHub Wiki
Every GitHub repository comes with its own wiki. This is a great place to put the documentation for your project. What isn't clear from the wiki documentation is how to add images to your wiki. Here's my step-by-step guide. I'm going to add a logo to the main page of my WikiDemo repository's wiki: https://github.com/mikehadlow/WikiDemo/wiki/Main-Page

First clone the wiki. You grab the clone URL from the button at the top of the wiki page.

```shell
$ git clone git@github.com:mikehadlow/WikiDemo.wiki.git
Cloning into 'WikiDemo.wiki'...
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (6/6), done.
```

Create a new directory called 'images' (it doesn't matter what you call it, this is just a convention I use):

```shell
$ mkdir images
```

Then copy your picture(s) into the images directory (I've copied my logo_design.png file to my images directory).

```shell
$ ls -l
-rwxr-xr-x 1 mike.hadlow Domain Users 12971 Sep  5  2013 logo_design.png
```

Commit your changes and push back to GitHub:

```shell
$ git add -A
$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#   new file: images/logo_design.png
#
$ git commit -m "Added logo_design.png"
[master 23a1b4a] Added logo_design.png
 1 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100755 images/logo_design.png
$ git push
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 9.05 KiB, done.
Total 4 (delta 0), reused 0 (delta 0)
To git@github.com:mikehadlow/WikiDemo.wiki.git
   333a516..23a1b4a  master -> master
```

Now we can put a link to our image in 'Main Page'. Save and there's your image for all to see.
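The link itself uses the GitHub wiki's double-bracket image syntax; assuming the images directory from the walkthrough above, the line added to the wiki page would look something like:

```markdown
[[images/logo_design.png]]
```

The path is relative to the root of the wiki repository, which is why committing the file into an images directory first makes the reference straightforward.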
March 27, 2014
by Mike Hadlow
· 24,972 Views · 1 Like
article thumbnail
Integration Testing for Spring Applications with JNDI Connection Pools
We all know we need to use connection pools wherever we connect to a database. All of the modern JDBC type 4 drivers support it. In this post we will have a look at an overview of connection pooling in Spring applications and how to deal with the same context in non-JEE environments (like tests). Most examples of connecting to a database in Spring use DriverManagerDataSource. If you don't read the documentation properly, you are going to miss a very important point. NOTE: This class is not an actual connection pool; it does not actually pool Connections. It just serves as simple replacement for a full-blown connection pool, implementing the same standard interface, but creating new Connections on every call. Useful for test or standalone environments outside of a J2EE container, either as a DataSource bean in a corresponding ApplicationContext or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls will simply close the Connection, so any DataSource-aware persistence code should work. Yes, by default Spring applications do not use pooled connections. There are two ways to implement connection pooling, depending on who manages the pool. If you are running in a JEE environment, it is preferred to use the container for it. In a non-JEE setup there are libraries which will help the application manage the connection pools. Let's discuss them in a bit more detail below. 1. Server (container) managed connection pool (using JNDI) When the application connects to the database server, establishing the actual physical connection takes much longer than the execution of the statements. Connection pooling is a technique that was pioneered by database vendors to allow multiple clients to share a cached set of connection objects that provide access to a database resource. The JavaWorld article gives a good overview of this. In a J2EE container, it is recommended to use a JNDI DataSource provided by the container. 
Such a DataSource can be exposed as a DataSource bean in a Spring ApplicationContext via JndiObjectFactoryBean, for seamless switching to and from a local DataSource bean like this class. The articles below helped me in setting up the data source in JBoss AS. 1. DebaJava Post 2. JBoss Installation Guide 3. JBoss Wiki The next step is to use these connections created by the server from the application. As mentioned in the documentation, you can use the JndiObjectFactoryBean for this. It is as simple as below. If you want to write any tests using Spring's SpringJUnit4ClassRunner, it can't load the context because the JNDI resource will not be available. For tests, you can then either set up a mock JNDI environment through Spring's SimpleNamingContextBuilder, or switch the bean definition to a local DataSource (which is simpler and thus recommended). As I was looking for a good solution to this problem (I did not want a separate context for tests), this SO answer helped me. It sort of uses the various tips given in the Javadoc to good effect. The issue with the above solution is the repetition of code to create the JNDI connections. I have solved it using a customized runner, SpringWithJNDIRunner. This class adds JNDI capabilities to the SpringJUnit4ClassRunner. It reads the data source from the "test-datasource.xml" file in the classpath and binds it to the JNDI resource with the name "java:/my-ds". After the execution of this code, the JNDI resource is available for the Spring container to consume. import javax.naming.NamingException; import org.junit.runners.model.InitializationError; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.mock.jndi.SimpleNamingContextBuilder; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; /** * This class adds the JNDI capabilities to the SpringJUnit4ClassRunner. 
* @author mkadicha * */ public class SpringWithJNDIRunner extends SpringJUnit4ClassRunner { public static boolean isJNDIactive; /** * JNDI is activated with this constructor. * * @param klass * @throws InitializationError * @throws NamingException * @throws IllegalStateException */ public SpringWithJNDIRunner(Class<?> klass) throws InitializationError, IllegalStateException, NamingException { super(klass); synchronized (SpringWithJNDIRunner.class) { if (!isJNDIactive) { ApplicationContext applicationContext = new ClassPathXmlApplicationContext( "test-datasource.xml"); SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder(); builder.bind("java:/my-ds", applicationContext.getBean("dataSource")); builder.activate(); isJNDIactive = true; } } } } To use this runner you just need to add the annotation @RunWith(SpringWithJNDIRunner.class) to your test. This class extends SpringJUnit4ClassRunner because there can only be one class in the @RunWith annotation. The JNDI resource is created only once in a test cycle. This class provides a clean solution to the problem. 2. Application managed connection pool If you need a "real" connection pool outside of a J2EE container, consider Apache's Jakarta Commons DBCP or C3P0. Commons DBCP's BasicDataSource and C3P0's ComboPooledDataSource are full connection pool beans, supporting the same basic properties as this class plus specific settings (such as minimal/maximal pool size, etc.). The user guides below can help you configure this. 1. Spring Docs 2. C3P0 User Guide 3. DBCP User Guide The articles below speak about the general guidelines and best practices in configuring connection pools. 1. SO question on Spring JDBC connection pools 2. Connection pool max size in MS SQL Server 2008 3. How to decide the max number of connections 4. Monitoring the number of active connections in SQL Server 2008 Note: All the text in italics is copied from the Spring documentation of the DriverManagerDataSource.
March 26, 2014
by Manu Pk
· 24,948 Views · 1 Like
article thumbnail
Postgres and Oracle Compatibility with Hibernate
There are situations where your JEE application needs to support both Postgres and Oracle as a database. Hibernate should do the job here; however, there are some specifics worth mentioning. While enabling Postgres for an application already running on Oracle, I came across the following tricky parts: BLOB support, CLOB support, Oracle not having a Boolean type (it uses Integer instead) and the DUAL table. These were the tricks I had to apply to make the @Entity classes run on both of these. Please note I’ve used Postgres 9.3 with Hibernate 4.2.1.SP1. BLOBs support The problem with Postgres is that it offers 2 types of BLOB storage: bytea (data stored in the table) and oid (the table holds just an identifier to data stored elsewhere). I guess in most situations you can live with bytea, as I did. The other one, as far as I’ve read, is to be used for huge data (in gigabytes) as it supports streams for IO operations. Well, it sounds nice that there is such support; however, using Hibernate in this case can make things quite problematic (due to the need to use specific annotations), especially if you try to achieve compatibility with Oracle. To see the trouble here, see StackOverflow: proper hibernate annotation for byte[]. All the combinations are described there: annotation postgres oracle works on ------------------------------------------------------------- byte[] + @Lob oid blob oracle byte[] bytea raw(255) postgresql byte[] + @Type(PBA) oid blob oracle byte[] + @Type(BT) bytea blob postgresql where @Type(PBA) stands for: @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType") and @Type(BT) stands for: @Type(type="org.hibernate.type.BinaryType"). 
These result in all sorts of Postgres errors, like: ERROR: column “foo” is of type oid but expression is of type bytea or ERROR: column “foo” is of type bytea but expression is of type oid Well, there seems to be a solution, but it includes patching the Hibernate library (something I see as the last option when playing with a 3rd-party library). There is also a reference to an official blog post from the Hibernate guys on the topic: PostgreSQL and BLOBs. Still, the solution described in the blog post did not work for me and, based on the comments, seems to be invalid for other people too. BLOBs solved OK, so now the optimistic part. After quite some debugging I ended up with the entity definition like this: @Lob private byte[] foo; Oracle has no trouble with that; moreover, I had to customize the Postgres dialect in this way: public class PostgreSQLDialectCustom extends PostgreSQL82Dialect { @Override public SqlTypeDescriptor remapSqlTypeDescriptor(SqlTypeDescriptor sqlTypeDescriptor) { if (sqlTypeDescriptor.getSqlType() == java.sql.Types.BLOB) { return BinaryTypeDescriptor.INSTANCE; } return super.remapSqlTypeDescriptor(sqlTypeDescriptor); } } That’s it! Quite simple, right? That works for persisting to bytea-typed columns in Postgres (as that fits my use case). CLOBs support The errors in misconfiguration looked something like this: org.postgresql.util.PSQLException: Bad value for type long : ... So first I found (on String LOBs on PostgreSQL with Hibernate 3.6) the following solution: @Lob @Type(type = "org.hibernate.type.TextType") private String foo; Well, that works, but for Postgres only. Then there was a suggestion (on StackOverflow: Postgres UTF-8 clobs with JDBC) to go for: @Lob @Type(type="org.hibernate.type.StringClobType") private String foo; That pointed me in the right direction (the funny part was that it was just a comment to some answers). It was quite close, but it didn’t work for me in all cases and still resulted in errors in my tests. 
CLOBs solved The important hint was the @deprecated Javadoc in org.hibernate.type.StringClobType that brought me to the working one: @Lob @Type(type="org.hibernate.type.MaterializedClobType") private String foo; That works for both Postgres and Oracle, without any further hacking (on the Hibernate side) needed. Boolean type Oracle knows no Boolean type and the trouble is that Postgres does. As there was also some plain SQL present, I ended up in Postgres with the error: ERROR: column “foo” is of type boolean but expression is of type integer I decided to enable a cast from Integer to Boolean in Postgres rather than fixing all the plain SQL places (in a way found in Forum: Automatically Casting From Integer to Boolean): update pg_cast set castcontext = 'i' where oid in ( select c.oid from pg_cast c inner join pg_type src on src.oid = c.castsource inner join pg_type tgt on tgt.oid = c.casttarget where src.typname like 'int%' and tgt.typname like 'bool%'); Please note you should run the SQL update as a user with privileges to update catalogs (probably not your postgres user used for the DB connection from your application), as I learned on StackOverflow: Postgres - permission denied on updating pg_catalog.pg_cast. DUAL table There is one more Oracle specific I came across. If you have plain SQL, in Oracle there is the DUAL table provided (see more info on Wikipedia on that) that might harm you in Postgres. Still, the solution is simple. In Postgres, create a view that fills a similar purpose. It can be created like this: create or replace view dual as select 1; Conclusion Well, that should be it. Enjoy your cross-DB-compatible JEE apps.
March 26, 2014
by Peter Butkovic
· 21,232 Views · 1 Like
article thumbnail
Interface Default Methods in Java 8
Want to learn more about interface default methods in Java 8? Check out this tutorial to learn how using this new feature.
March 24, 2014
by Muhammad Ali Khojaye
· 514,060 Views · 33 Likes
article thumbnail
The Economics of Reuse
If you need the same functionality in two projects, you should reuse code between them, right? Or should you? For as long as there has been a profession of software engineering, we have tried to achieve more reuse. But reuse has both a benefit and a cost. Too often, the cost is forgotten. In this article, I examine the economics of reuse. True story: One of the earliest projects to embrace object-oriented programming in the 1990s did so with the goal of maximizing reuse. The team responsible for creating the company-wide framework used the following formula for calculating the value of their work: [value of reuse] = [number of uses of framework] * [value of the framework to reusers] – [cost of developing the framework] This formula is obviously correct, but this is where they went horribly wrong: The organization said [value of framework to reusers] = [cost of developing framework]. In other words: The more expensive it was to create, the more valuable it was to use. We have clearly progressed beyond this thinking. A more updated formula would say: [value of framework to reusers] = [cost of developing the feature in question]. But even this is too optimistic. No library comes for free to its users. At the very least, you have to discover the features and learn about the details. The cost of reusing depends on many factors, such as the quality of the framework and the documentation, and also upon the type of feature. A complex algorithm with a simple interface is cheap to use, while most domain-specific frameworks require relatively more work to reuse. We can express this as a reuse value factor, likely between 90% and 50%. For most cases, my guess would be at about 75%. So we have: [value of reuse] = [number of users] * ([cost of feature] * [reuse value factor]) – [cost of developing the reusable component] What about the other important factor: [cost of developing the reusable component]? 
It’s easy to assume that the cost of developing a feature in a framework is equal to that of developing the feature in an application, but further analysis shows that this is far from true. A reusable component needs more documentation, it needs to handle more special cases, and it has a slower feedback cycle. This cost is actually substantial and may mean that it costs between 150% and 300% or more to develop a feature for reuse. Personally, I think the reusability cost factor lies around 300%. And the lower this number, the higher the cost of reuse is likely to be, because that may mean we skimped on documentation, etc. A revised formula would be: [value of reuse] = [number of users] * ([cost of feature] * [reuse value factor]) – [reusability cost factor] * [cost of feature] Or: [value of reuse] = [cost of feature] * ([number of users] * [reuse value factor] – [reusability cost factor]) The more complex formula actually lets us make a few predictions. Let’s say we assume a reuse value factor of 75% (meaning that it requires 1/4 of the effort to reuse a library rather than creating the feature from scratch) and a reusability cost factor of 300% (meaning that it requires three times the effort to create something that’s worth reusing). This means: [value of reuse] = [cost of feature] * ([number of users] * 75% – 300%) This equation breaks even when [number of users] = 4. That means that to get any value from your reused component, you better have five or more reusers, or you have to find a way to substantially improve the [reuse value factor] or [reusability cost factor]. Very smart people have failed to do this. Improving the value: Increase the number of reusers: Simple enough, but when you do, you risk that the [reuse value factor] goes down as the framework doesn’t suit everybody equally well. 
Reduce the cost of reusing the library: This means investing in documentation, improving your design, improving testing to reduce the number of bugs, and handling bug reports and feature requests from your reusers faster – all of which increase your reusability cost factor. Reduce the extra work in making the library reusable: The most important way to reduce the cost of developing for reuse is to choose the right kind of problem to solve. Problems with a small surface and big volume are best. That means: easy to describe, hard to implement. Sadly, most of the juiciest fruit was picked years ago by the standard library in your programming language and by open source frameworks. On a global scale, reuse has saved the software industry tremendous amounts. In an organization, it can be hard to get the same effect. Reuse comes at a cost to the reuser and to the developer of the reusable library. How do you evaluate and improve your [reuse value factor] and your [reusability cost factor]?
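The break-even claim is easy to check numerically. A small sketch using the article's final formula with its assumed factors (75% reuse value, 300% reusability cost; the class and method names are mine, not from the article):

```java
public class ReuseEconomics {

    // [value of reuse] = [cost of feature] *
    //     ([number of users] * [reuse value factor] - [reusability cost factor])
    static double valueOfReuse(double costOfFeature, int numberOfUsers,
                               double reuseValueFactor, double reusabilityCostFactor) {
        return costOfFeature * (numberOfUsers * reuseValueFactor - reusabilityCostFactor);
    }

    public static void main(String[] args) {
        // Express value as a multiple of [cost of feature] by passing 1.0.
        for (int users = 1; users <= 6; users++) {
            System.out.printf("users=%d -> value=%.2f x [cost of feature]%n",
                    users, valueOfReuse(1.0, users, 0.75, 3.0));
        }
        // Breaks even at exactly 4 users; positive value needs 5 or more reusers.
    }
}
```

With these factors, 4 users yields 4 * 0.75 - 3.0 = 0, matching the article's break-even point.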
March 24, 2014
by Johannes Brodwall
· 12,345 Views · 1 Like
article thumbnail
JavaScript Webapps with Gradle
Gradle, a versatile JVM build tool, effectively handles JavaScript and CSS tasks for web applications and server components.
March 24, 2014
by Kon Soulianidis
· 39,201 Views · 4 Likes
article thumbnail
Clearing the Database with Django Commands
In a previous post, I presented a method of loading initial data into a Django database by using a custom management command. An accompanying task is cleaning the database up. Here I want to discuss a few options for doing that. First, some general design notes on Django management commands. If you run manage.py help you’ll see a whole bunch of commands starting with sql. These all share a common idiom – print SQL statements to the standard output. Almost all DB engines have means to pipe commands from the standard input, so this plays great with the Unix philosophy of building pipes of single-task programs. Django even provides a convenient shortcut for us to access the actual DB that’s being used with a given project – the dbshell command. As an example, we have the sqlflush command, which returns a list of the SQL statements required to return all tables in the database to the state they were in just after they were installed. In a simple blog-like application with "post" and "tag" models, it may return something like: $ python manage.py sqlflush BEGIN; DELETE FROM "auth_permission"; DELETE FROM "auth_group"; DELETE FROM "django_content_type"; DELETE FROM "django_session"; DELETE FROM "blogapp_tag"; DELETE FROM "auth_user_groups"; DELETE FROM "auth_group_permissions"; DELETE FROM "auth_user_user_permissions"; DELETE FROM "blogapp_post"; DELETE FROM "blogapp_post_tags"; DELETE FROM "auth_user"; DELETE FROM "django_admin_log"; COMMIT; Note there are a lot of tables here, because the project also installed the admin and auth applications from django.contrib. We can actually execute these SQL statements, and thus wipe out all the DB tables in our database, by running: $ python manage.py sqlflush | python manage.py dbshell For this particular sequence, since it’s so useful, Django has a special built-in command named flush. But there’s a problem with running flush that may or may not bother you, depending on what your goals are. 
It wipes out all tables, and this means authentication data as well. So if you’ve created a default admin user when jump-starting the application, you’ll have to re-create it now. Perhaps there’s a more gentle way to delete just your app’s data, without messing with the other apps? Yes. In fact, I’m going to show a number of ways. First, let’s see what other existing management commands have to offer. sqlclear will emit the commands needed to drop all tables in a given app. For example: $ python manage.py sqlclear blogapp BEGIN; DROP TABLE "blogapp_tag"; DROP TABLE "blogapp_post"; DROP TABLE "blogapp_post_tags"; COMMIT; So we can use it to target a specific app, rather than using the kill-all approach of flush. There’s a catch, though. While flush runs delete to wipe all data from the tables, sqlclear removes the actual tables. So in order to be able to work with the database, these tables have to be re-created. Worry not, there’s a command for that: $ python manage.py sql blogapp BEGIN; CREATE TABLE "blogapp_post_tags" ( "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "post_id" integer NOT NULL REFERENCES "blogapp_post" ("id"), "tag_id" varchar(50) NOT NULL REFERENCES "blogapp_tag" ("name"), UNIQUE ("post_id", "tag_id") ) ; CREATE TABLE "blogapp_post" ( "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, <.......> ) ; CREATE TABLE "blogapp_tag" ( <.......> ) ; COMMIT; So here’s a first way to do a DB cleanup: pipe sqlclear appname into dbshell. Then pipe sql appname to dbshell. An alternative way, which I like less, is to take the subset of DELETE statements generated by sqlflush, save them in a text file, and pipe it through to dbshell when needed. 
For example, for the blog app discussed above, these statements should do it: BEGIN; DELETE FROM "blogapp_tag"; DELETE FROM "blogapp_post"; DELETE FROM "blogapp_post_tags"; COMMIT; The reason I don’t like it is that it forces you to have explicit table names stored somewhere, which is a duplication of the existing models. If you happen to change some of your foreign keys, for example, tables will need changing, so this file will have to be regenerated. The approach I like best is more programmatic. Django’s model API is flexible and convenient, and we can just use it in a custom management command: from django.core.management.base import BaseCommand from blogapp.models import Post, Tag class Command(BaseCommand): def handle(self, *args, **options): Tag.objects.all().delete() Post.objects.all().delete() Save this code as blogapp/management/commands/clear_models.py, and now it can be invoked with: $ python manage.py clear_models
March 24, 2014
by Eli Bendersky
· 19,023 Views
article thumbnail
Google Maps in Java Swing Application
If you need to embed and display Google Maps in your Java Desktop Swing application, then JxBrowser Java library is what you need.
March 22, 2014
by Vladimir Ikryanov
· 152,465 Views · 6 Likes
article thumbnail
Top 5 Reasons to Choose ScalaTest Over JUnit
Testing is a major part of our development process. After working with JUnit for some time we leaned back and thought: How can we improve our test productivity? Since we were all fond of Scala we looked at ScalaTest. We liked it from the start, so we decided to go with ScalaTest for all new tests. Sure enough, there were and are critics in the team who say “I just want to write my tests without having to bother with a new technology…” To convince even the last person on the team, I will give you my top 5 reasons to choose ScalaTest over JUnit. 1. Multiple Comparisons Simple yet very nice is that you can do multiple comparisons for a single object. Say we have a list of books. Now we want to assure that the list contains exactly one book, which is our book “Ruling the Universe”. The test code allows us to express it just like that: books should { not be empty and have size 1 and contain rulingTheUniverse } 2. Great DSLs There are many great DSLs to make the test code much shorter and nicer to read. These DSLs for Scala are much more powerful than those for Java. I will give you just two small examples for Mockito and Selenium. Mockito Sugar Say I have a book mock and I want to check that the method publish has been called exactly once, but I don’t care with which arguments. So here you go: val book = mock[Book] book expects 'publish withArgs (*) once Selenium We want to open our application in the browser, check that the title is “Awesome Books” and then click on the link to explore books. With the Selenium DSL this is expressed like that: go to "http://localhost/book_app/index.html" pageTitle should be ("Awesome Books") click on linkText("Explore ...") 3. Powerful Matchers Who needs assertions when you can have matchers? When I started out with ScalaTest I used a lot of assertions because that's what I knew. 
When I discovered matchers I started to use those, as they are much more powerful and have a great syntax which allows you to write your test code very close to what you actually want to express. I will give just a few examples to give you a first impression of just what you can do with matchers: Array(3,2,1) should have size 3 // check the size of an array string should include regex "wo.ld" // check string against regular expression temp should be a 'file // check that temp is a file 4. Tag support JUnit has categories and ScalaTest has tags. You can tag your tests as you like and then execute only tests with certain tags, or do other stuff with the tags. And that’s how you tag a test as “DbTest” and “SlowTest”: it must "save the book correctly" taggedAs (SlowTest, DbTest) in { // call to database } 5. JavaBean-style checking of object properties Say you have a book object with properties such as title and authors. Then you write a test where you want to verify the title is “Ruling the Universe” and it was published in 2012. In JUnit you write assertions like assertEquals("Ruling the Universe", book.getTitle()) and you need another assertion for the publication year. ScalaTest allows for JavaBean-style checking of object properties. So in ScalaTest you can declare the expected values for properties of an object. Instead of the assertions you write that the property title of the book should be “Ruling the Universe” and the property publicationYear should be 2012. And that's how this looks in ScalaTest: book should have ( 'title ("Ruling the Universe"), 'author (List("Zaphod", "Ford")), 'publicationYear (2012) ) Are you willing to give ScalaTest a try? You should. I like it more and more with every test I write and maybe you will too!
March 22, 2014
by Jan
· 13,845 Views · 1 Like
article thumbnail
Grails Goodness: Using Hibernate Native SQL Queries
Sometimes we want to use Hibernate native SQL in our code. For example, we might need to invoke a selectable stored procedure that we cannot invoke in another way. To invoke a native SQL query we use the method createSQLQuery(), which is available from the Hibernate session object. In our Grails code we must then first get access to the current Hibernate session. Luckily we only have to inject the sessionFactory bean in our Grails service or controller. To get the current session we invoke the getCurrentSession() method and we are ready to execute a native SQL query. The query itself is defined as a String value and we can use placeholders for variables, just like with other Hibernate queries. In the following sample we create a new Grails service and use a Hibernate native SQL query to execute a selectable stored procedure with the name organisation_breadcrumbs. This stored procedure takes one argument, startId, and will return a list of results with an id, name and level column. // File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy package com.mrhaki.grails import com.mrhaki.grails.Organisation class OrganisationService { // Auto inject SessionFactory we can use // to get the current Hibernate session. def sessionFactory List breadcrumbs(final Long startOrganisationId) { // Get the current Hibernate session. final session = sessionFactory.currentSession // Query string with :startId as parameter placeholder. final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc' // Create native SQL query. final sqlQuery = session.createSQLQuery(query) // Use Groovy with() method to invoke multiple methods // on the sqlQuery object. final results = sqlQuery.with { // Set domain class as entity. // Properties in domain class id, name, level will // be automatically filled. addEntity(Organisation) // Set value for parameter startId. setLong('startId', startOrganisationId) // Get all results. 
list() } results } } In the sample code we use the addEntity() method to map the query results to the domain class Organisation. To transform the results from a query to other objects we can use the setResultTransformer() method. Hibernate (and therefore Grails, if we use the Hibernate plugin) already has a set of transformers we can use. For example, with the org.hibernate.transform.AliasToEntityMapResultTransformer each result row is transformed into a Map where the column aliases are the keys of the map. // File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy package com.mrhaki.grails import org.hibernate.transform.AliasToEntityMapResultTransformer class OrganisationService { def sessionFactory List<Map> breadcrumbs(final Long startOrganisationId) { final session = sessionFactory.currentSession final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc' final sqlQuery = session.createSQLQuery(query) final results = sqlQuery.with { // Assign result transformer. // This transformer will map columns to keys in a map for each row. resultTransformer = AliasToEntityMapResultTransformer.INSTANCE setLong('startId', startOrganisationId) list() } results } } Finally we can execute a native SQL query and handle the raw results ourselves using the Groovy Collection API enhancements. The result of the list() method is a List of Object[] objects. 
In the following sample we use Groovy syntax to handle the results: // File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy package com.mrhaki.grails class OrganisationService { def sessionFactory List<Map> breadcrumbs(final Long startOrganisationId) { final session = sessionFactory.currentSession final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc' final sqlQuery = session.createSQLQuery(query) final queryResults = sqlQuery.with { setLong('startId', startOrganisationId) list() } // Transform resulting rows to a map with key organisationName. final results = queryResults.collect { resultRow -> [organisationName: resultRow[1]] } // Or to only get a list of names. //final List names = queryResults.collect { it[1] } results } } Code written with Grails 2.3.7.
March 20, 2014
by Hubert Klein Ikkink
· 22,873 Views · 1 Like
article thumbnail
Change Font Terminal Tool Window in IntelliJ IDEA
IntelliJ IDEA 13 added the Terminal tool window to the IDE. We can open a terminal window with Tools | Open Terminal.... To change the font of the terminal we must open the preferences and select IDE Settings | Editor | Colors & Fonts | Console Font. Here we can choose a font and change the font size:
March 18, 2014
by Hubert Klein Ikkink
· 35,458 Views · 1 Like
article thumbnail
How HTML5 Apps Can be More Secure than Native Mobile Apps
As businesses accelerate their move toward making B2E applications available to employees on mobile devices, the subject of mobile application security is getting more attention. Mobile Device Management (MDM) solutions are being deployed in the largest enterprises - but there are still application-level security issues that are important to consider. Furthermore, medium-size businesses are moving to mobilize their applications prior to having a formalized MDM solution or policy in place. A key element of a mobile app strategy is whether to go Native, Hybrid, or pure HTML5. As an early proponent of HTML5 platforms, Gizmox has been thinking about the security angle of HTML5 applications for a long time. In a recent webinar, we discussed 4 ways that HTML5 - done right - can be more secure than native apps. 1. Applications should leverage HTML5's basic security model HTML5 represents a revolutionary step for HTML-based browsers as the first truly cross-platform technology for rich, interactive applications. It has earned endorsements from all the major IT vendors (e.g. Google, Microsoft, IBM, Oracle, etc.). Security of applications and websites has been a consideration from the start of HTML5 development. The first element of the security model is that HTML5 applications live within the secure shell of the browser sandbox. Application code is to a large degree insulated from the device. The browser's interaction with the device and any other application on the device is highly limited. This makes it difficult for HTML5 application code to influence other applications/data on the device or for other applications to interact with the application running in the browser. The second element is that, built correctly, HTML5 thin clients are "secure by design." Application logic running on the server insulates sensitive intellectual property from the client. 
Proper design strategies include minimal or no data caching; keeping tokens, passwords, credentials, and security profiles on the server; and minimizing logic on the client, focusing on pure UI interaction with the server. Finally, HTML5 apps should be architected to ensure that no data is left behind in the cache.

2. HTML5 apps can be containerized within secure browsers

Secure browsers are just one element of MDM that can be deployed on its own to enhance application security. HTML5 application security can be extended with the use of secure browsers that restrict access to enterprise-approved URLs, prevent cross-site scripting, and integrate with company VPNs. Secure browsers also further harden the interaction between HTML5 applications and the device, the device OS, and other applications on the device.

3. Integration with Mobile Device Management

MDM solutions play a variety of security roles, including application inventory management (who gets access to what on which device), application distribution (for example, through an enterprise app store), implementation of security standards (passwords, encryption, VPN, authentication), and implementation of enterprise access-control policies. While MDM was in part conceived to enable secure distribution and control of native applications, HTML5 apps can be managed and further secured through it as well. Full MDM solutions are not required for HTML5 security, but HTML5 apps can be integrated into a broader mobile security strategy that incorporates MDM.

4. HTML5 was conceived for the BYOD world

The complexity of managing security for native apps is multiplied as application variants are created for different mobile device form factors and operating systems. With cross-platform HTML5 applications that run on any desktop, tablet, or smartphone, security strategy is implemented and controlled centrally.
Updates and security fixes are implemented on the server, so there is no concern about users failing to apply updates to the apps on their devices. There are many reasons to evaluate HTML5 as the platform for mobile business applications. The security of HTML5 apps (built with good practices and leveraging a full platform like Visual WebGui) is a particularly compelling one. Check out this SlideShare from the recent webinar on HTML5 security strategies: Security Strategies for HTML5 Enterprise Mobile Apps, from Gizmox.
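The "no data left behind in cache" advice above can be made concrete on the server side. As an illustrative sketch (the helper class below is hypothetical, not part of any product mentioned here; the header names themselves are standard HTTP), a server can mark every response carrying application data as non-cacheable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {

    // Response headers that tell the browser and any intermediaries not to
    // persist the response. "no-store" forbids writing it to any cache at all;
    // "Pragma: no-cache" covers legacy HTTP/1.0 proxies.
    static Map<String, String> antiCachingHeaders() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Cache-Control", "no-store, no-cache, must-revalidate");
        h.put("Pragma", "no-cache");
        h.put("Expires", "0");
        return h;
    }

    public static void main(String[] args) {
        antiCachingHeaders().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

Applied to every response that carries business data, these headers keep sensitive content out of the browser's disk cache on a lost or shared device.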
March 15, 2014
by Moran Shayovitch
· 4,726 Views
Signing SOAP Messages - Generation of Enveloped XML Signatures
Digital signing is a widely used mechanism for making digital content authentic. By producing a digital signature over some content, we enable another party to validate that content: a successful validation guarantees that the content has not been altered since it was signed. In this sample I will share how to generate a signature for a SOAP envelope, though the approach is valid for signing any other content as well. Here, I will:

  • sign the SOAP envelope itself
  • sign an attachment
  • place the signature inside the SOAP header

Because the signature is placed inside the SOAP header of the very envelope it signs, this is a demonstration of an enveloped signature. I am using the Apache Santuario library for signing. The following is the code I used; I have shared the complete sample here to be downloaded.

public static void main(String[] unused) throws Exception {
    String keystoreType = "JKS";
    String keystoreFile = "src/main/resources/PushpalankaKeystore.jks";
    String keystorePass = "pushpalanka";
    String privateKeyAlias = "pushpalanka";
    String privateKeyPass = "pushpalanka";
    String certificateAlias = "pushpalanka";
    File signatureFile = new File("src/main/resources/signature.xml");
    String baseURI = signatureFile.toURI().toURL().toString();
    // SOAP envelope to be signed
    File envelopeFile = new File("src/main/resources/sample.xml");

    // get the private key used to sign, from the keystore
    KeyStore ks = KeyStore.getInstance(keystoreType);
    FileInputStream fis = new FileInputStream(keystoreFile);
    ks.load(fis, keystorePass.toCharArray());
    PrivateKey privateKey =
            (PrivateKey) ks.getKey(privateKeyAlias, privateKeyPass.toCharArray());

    // parse the envelope and create the basic structure of the signature
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    dbf.setNamespaceAware(true);
    DocumentBuilder dBuilder = dbf.newDocumentBuilder();
    Document doc = dBuilder.parse(envelopeFile);
    XMLSignature sig =
            new XMLSignature(doc, baseURI, XMLSignature.ALGO_ID_SIGNATURE_RSA_SHA1);

    // optional, but better
    Element element = doc.getDocumentElement();
    element.normalize();

    // place the signature element inside the SOAP header
    element.getElementsByTagName("soap:Header").item(0).appendChild(sig.getElement());

    Transforms transforms = new Transforms(doc);
    transforms.addTransform(Transforms.TRANSFORM_C14N_OMIT_COMMENTS);
    // sign the content of the SOAP envelope
    sig.addDocument("", transforms, Constants.ALGO_ID_DIGEST_SHA1);
    // add the attachment to be signed
    sig.addDocument("../resources/attachment.xml", transforms, Constants.ALGO_ID_DIGEST_SHA1);

    // signing procedure
    X509Certificate cert = (X509Certificate) ks.getCertificate(certificateAlias);
    sig.addKeyInfo(cert);
    sig.addKeyInfo(cert.getPublicKey());
    sig.sign(privateKey);

    // write the signed document to file
    FileOutputStream f = new FileOutputStream(signatureFile);
    XMLUtils.outputDOMc14nWithComments(doc, f);
    f.close();
}

First, the code reads in the private key to be used for signing (to create a key pair of your own, this post will be helpful). Then it creates the signature and adds the SOAP message and the attachment as the documents to be signed. Finally, it performs the signing and writes the signed document to a file. The signed SOAP message looks as follows.
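The Santuario code above requires the library on the classpath. Since Java 6, the JDK itself ships the javax.xml.crypto.dsig (JSR 105) API, so the same enveloped-signature idea can be sketched with no external dependencies. In the sketch below, the inline envelope and the freshly generated key pair are stand-ins chosen to keep the example self-contained; they are not part of the original sample:

```java
import java.io.ByteArrayInputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;

import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class EnvelopedSignatureSketch {
    public static void main(String[] args) throws Exception {
        // A stand-in for the SOAP envelope: any XML document works the same way.
        String xml = "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                   + "<soap:Header/><soap:Body>Ping</soap:Body></soap:Envelope>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        // Fresh RSA key pair instead of a keystore, to keep the sketch self-contained.
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        // URI "" means "the whole document"; the enveloped transform excludes
        // the Signature element itself from its own digest.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(
                        fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo ki = kif.newKeyInfo(
                Collections.singletonList(kif.newKeyValue(kp.getPublic())));

        // Sign and attach the ds:Signature as a child of the document element.
        fac.newXMLSignature(si, ki)
           .sign(new DOMSignContext(kp.getPrivate(), doc.getDocumentElement()));

        int count = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature").getLength();
        System.out.println("Signature elements: " + count);
    }
}
```

The sketch uses RSA-SHA256 rather than the article's RSA-SHA1, since SHA-1 is deprecated for signatures on current JDKs; the structure (reference, transform, canonicalization, key info) mirrors what Santuario builds internally.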
[Signed SOAP envelope, abridged: the output places a ds:Signature element in the SOAP header of the Ping message, carrying the DigestValue for each signed reference (the envelope and the attachment), the Base64-encoded SignatureValue, the signer's X509Certificate, and the RSA public key.]

In a later post, let's see how to verify this signature, so that we can guarantee signed documents have not been changed. Cheers!
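Verification deserves its own post, but the guarantee itself - that any change to the signed bytes is detectable - is easy to demonstrate at the primitive level with plain java.security.Signature (this is the raw RSA operation, not the Santuario XML API):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class TamperDetection {
    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] content = "<soap:Envelope>...</soap:Envelope>".getBytes("UTF-8");

        // Sign the content with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(content);
        byte[] signature = signer.sign();

        // Verify against the original bytes: passes.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(content);
        System.out.println("original valid: " + verifier.verify(signature));

        // Flip a single bit of the content and verify again: fails,
        // which is exactly the tamper-evidence the XML signature provides.
        byte[] tampered = content.clone();
        tampered[0] ^= 1;
        verifier.initVerify(kp.getPublic());
        verifier.update(tampered);
        System.out.println("tampered valid: " + verifier.verify(signature));
    }
}
```

An XML signature adds canonicalization and reference resolution on top of this primitive, but the pass/fail behavior is the same.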
March 14, 2014
by Pushpalanka Jayawardhana
· 36,943 Views · 1 Like