The Latest Software Design and Architecture Topics

Exposing and Consuming SOAP Web Service Using Apache Camel-CXF Component and Spring
Let’s take the customer endpoint from my earlier article. Here I am going to use the Apache Camel-CXF component to expose this customer endpoint as a web service.

@WebService(serviceName = "customerService")
public interface CustomerService {
    public Customer getCustomerById(String customerId);
}

public class CustomerEndpoint implements CustomerService {
    private CustomerEndPointService service;

    @Override
    public Customer getCustomerById(String customerId) {
        Customer customer = service.getCustomerById(customerId);
        return customer;
    }
}

Exposing the service using the Camel-CXF component: remember to specify the schemaLocation and namespace in the Spring context file. Consuming a SOAP web service using Camel-CXF: say you have a SOAP web service at the address http://localhost:8181/OrderManagement/order. You can then invoke this web service from a Camel route (a sketch follows below). In a camel-cxf endpoint you can also specify the data format. I hope this helps you create SOAP web services using the Camel-CXF component.
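The original Spring XML snippets are not reproduced in this excerpt. As a rough sketch only (the package names, endpoint addresses, and the use of the Java DSL instead of Spring XML are my assumptions, not taken from the article), a route that exposes and consumes CXF endpoints could look like this:

import org.apache.camel.builder.RouteBuilder;

public class CustomerRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Expose CustomerService as a SOAP endpoint (camel-cxf consumer)
        from("cxf://http://localhost:8181/CustomerManagement/customer"
                + "?serviceClass=com.example.CustomerService&dataFormat=POJO")
            .to("bean:customerEndpoint");

        // Call the remote order service (camel-cxf producer); if the SEI has more
        // than one operation you would also set the operationName header
        from("direct:placeOrder")
            .to("cxf://http://localhost:8181/OrderManagement/order"
                    + "?serviceClass=com.example.OrderService&dataFormat=POJO");
    }
}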
March 5, 2015
by Roshan Thomas
· 52,428 Views
Quick Way to Open Closed Project in Eclipse
Sometimes it is all about knowing the simple tricks, even if they might be obvious ;-). In my post “Eclipse Performance Improvement Tip: Close Unused Projects” I explained why it is important to close the ‘not used’ projects in the workspace to improve Eclipse performance: Closing Project in Eclipse Workspace. To open the projects (or the selected projects), the ‘Open Project’ context menu (or the menu Project > Open Project) can be used: Open Project Context Menu. An even easier way (and this might not be obvious!) is simply to double-click on the closed project folder: Double Click on the Closed Project to Open It. That’s it! It opens the project, which is much easier, simpler, and faster than using the menu or context menu. Unfortunately, I’m not aware of a similar trick to close it. Anyone? Happy Opening :-)
March 3, 2015
by Erich Styger
· 16,546 Views · 1 Like
Standing Up a Local Netflix Eureka
Here I will consider two different ways of standing up a local instance of Netflix Eureka. If you are not familiar with Eureka, it provides a central registry where (micro)services can register themselves; client applications can use this registry to look up specific instances hosting a service and make the service calls.

Approach 1: Native Eureka Library

The first way is to simply use the archive file generated by the Netflix Eureka build process:

1. Clone the Eureka source repository here: https://github.com/Netflix/eureka
2. Run "./gradlew build" at the root of the repository. This should build cleanly, generating a war file in the eureka-server/build/libs folder.
3. Grab this file, rename it to "eureka.war", and place it in the webapps folder of either Tomcat or Jetty. For this exercise I have used Jetty.
4. Start Jetty. By default Jetty boots up at port 8080; however, I wanted to bring it up at port 8761 instead, so you can start it this way: "java -jar start.jar -Djetty.port=8761"

The server should start up cleanly and can be verified at this endpoint: "http://localhost:8761/eureka/v2/apps"

Approach 2: Spring-Cloud-Netflix

Spring-Cloud-Netflix provides a very neat way to bootstrap Eureka. To bring up a Eureka server using Spring-Cloud-Netflix, the approach I followed was to clone the sample Eureka server application available here: https://github.com/spring-cloud-samples/eureka

1. Clone this repository.
2. From the root of the repository run "mvn spring-boot:run", and that is it!

The server should boot up cleanly and the REST endpoint should come up here: "http://localhost:8761/eureka/apps". As a bonus, Spring-Cloud-Netflix provides a neat UI at the root of the webapp at "http://localhost:8761/" showing the various applications that have registered with Eureka. Just one small issue to be aware of: the context URLs are a little different in the two cases ("eureka/v2/apps" vs. "eureka/apps"); this can be adjusted in the configuration of the services which register with Eureka.

Conclusion

Your mileage with these approaches may vary. I have found Spring-Cloud-Netflix a little unstable at times, but it has mostly worked out well for me. The documentation at the Spring-Cloud site is also far more exhaustive than the one provided at the Netflix Eureka site.
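If you would rather write the Spring-Cloud-Netflix server yourself instead of cloning the sample, the application class is tiny. This is a sketch under my own assumptions (class name, and spring-cloud-starter-eureka-server on the classpath), not code from the article:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        // Boots an embedded Eureka server; configuration comes from application.yml
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

Typically you would also set server.port to 8761 and switch off self-registration (eureka.client.registerWithEureka and eureka.client.fetchRegistry set to false) in application.yml, which is roughly what the sample project does.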
February 26, 2015
by Biju Kunjummen
· 13,047 Views
How to Detect Java Deadlocks Programmatically
Deadlocks are situations in which two or more actions are waiting for each other to finish, leaving all actions in a blocked state forever. They can be very hard to detect during development, and they usually require a restart of the application in order to recover. To make things worse, deadlocks usually manifest in production under the heaviest load and are very hard to spot during testing. The reason for this is that it’s not practical to test all possible interleavings of a program’s threads. Although some static analysis libraries exist that can help us detect possible deadlocks, it is still necessary to be able to detect them during runtime and get some information which can help us fix the issue, or alert us so we can restart our application.

Detect deadlocks programmatically using the ThreadMXBean class

Java 5 introduced ThreadMXBean, an interface that provides various monitoring methods for threads. I recommend you check all of its methods, as there are many useful operations for monitoring the performance of your application in case you are not using an external tool. The method of interest here is findMonitorDeadlockedThreads or, if you are using Java 6, findDeadlockedThreads. The difference is that findDeadlockedThreads can also detect deadlocks caused by ownable locks (java.util.concurrent), while findMonitorDeadlockedThreads can only detect monitor locks (i.e. synchronized blocks). Since the old version is kept for compatibility purposes only, I am going to use the newer version. The idea is to encapsulate periodic checking for deadlocks into a reusable component so we can just fire and forget about it.

One way to implement scheduling is through the executors framework, a set of well-abstracted and very easy to use multithreading classes:

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
this.scheduler.scheduleAtFixedRate(deadlockCheck, period, period, unit);

Simple as that: we have a runnable called periodically after a certain amount of time determined by period and time unit. Next, we want to make our utility extensible and allow clients to supply the behaviour that gets triggered after a deadlock is detected. We need a method that receives a list of objects describing the threads that are in a deadlock:

void handleDeadlock(final ThreadInfo[] deadlockedThreads);

Now we have everything we need to implement our deadlock detector class.

public interface DeadlockHandler {
    void handleDeadlock(final ThreadInfo[] deadlockedThreads);
}

public class DeadlockDetector {
    private final DeadlockHandler deadlockHandler;
    private final long period;
    private final TimeUnit unit;
    private final ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    final Runnable deadlockCheck = new Runnable() {
        @Override
        public void run() {
            long[] deadlockedThreadIds = DeadlockDetector.this.mbean.findDeadlockedThreads();
            if (deadlockedThreadIds != null) {
                ThreadInfo[] threadInfos = DeadlockDetector.this.mbean.getThreadInfo(deadlockedThreadIds);
                DeadlockDetector.this.deadlockHandler.handleDeadlock(threadInfos);
            }
        }
    };

    public DeadlockDetector(final DeadlockHandler deadlockHandler, final long period, final TimeUnit unit) {
        this.deadlockHandler = deadlockHandler;
        this.period = period;
        this.unit = unit;
    }

    public void start() {
        this.scheduler.scheduleAtFixedRate(this.deadlockCheck, this.period, this.period, this.unit);
    }
}

Let’s test this in practice. First, we will create a handler to output deadlocked thread information to System.err. We could use this to send an email in a real-world scenario, for example:

public class DeadlockConsoleHandler implements DeadlockHandler {
    @Override
    public void handleDeadlock(final ThreadInfo[] deadlockedThreads) {
        if (deadlockedThreads != null) {
            System.err.println("Deadlock detected!");
            Map<Thread, StackTraceElement[]> stackTraceMap = Thread.getAllStackTraces();
            for (ThreadInfo threadInfo : deadlockedThreads) {
                if (threadInfo != null) {
                    for (Thread thread : Thread.getAllStackTraces().keySet()) {
                        if (thread.getId() == threadInfo.getThreadId()) {
                            System.err.println(threadInfo.toString().trim());
                            for (StackTraceElement ste : thread.getStackTrace()) {
                                System.err.println("\t" + ste.toString().trim());
                            }
                        }
                    }
                }
            }
        }
    }
}

This iterates through all stack traces and prints the stack trace for each thread info. This way we can know exactly on which line each thread is waiting, and for which lock. This approach has one downside: it can give false alarms if one of the threads is waiting with a timeout, which can actually be seen as a temporary deadlock. Because of that, the original thread may no longer exist when we handle our deadlock, and findDeadlockedThreads will return null for such threads. To avoid possible NullPointerExceptions, we need to guard against such situations.

Finally, let’s force a simple deadlock and see our system in action:

DeadlockDetector deadlockDetector = new DeadlockDetector(new DeadlockConsoleHandler(), 5, TimeUnit.SECONDS);
deadlockDetector.start();

final Object lock1 = new Object();
final Object lock2 = new Object();

Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (lock1) {
            System.out.println("Thread1 acquired lock1");
            try {
                TimeUnit.MILLISECONDS.sleep(500);
            } catch (InterruptedException ignore) {
            }
            synchronized (lock2) {
                System.out.println("Thread1 acquired lock2");
            }
        }
    }
});
thread1.start();

Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (lock2) {
            System.out.println("Thread2 acquired lock2");
            synchronized (lock1) {
                System.out.println("Thread2 acquired lock1");
            }
        }
    }
});
thread2.start();

Output:

Thread1 acquired lock1
Thread2 acquired lock2
Deadlock detected!
“Thread-1” Id=11 BLOCKED on java.lang.Object@68ab95e6 owned by “Thread-0” Id=10
	deadlock.DeadlockTester$2.run(DeadlockTester.java:42)
	java.lang.Thread.run(Thread.java:662)
“Thread-0” Id=10 BLOCKED on java.lang.Object@58fe64b9 owned by “Thread-1” Id=11
	deadlock.DeadlockTester$1.run(DeadlockTester.java:28)
	java.lang.Thread.run(Thread.java:662)

Keep in mind that deadlock detection can be an expensive operation, and you should test it with your application to determine whether you even need to use it and how frequently you will check. I suggest an interval of at least several minutes, since it is not crucial to detect deadlocks more frequently than this; we don’t have a recovery plan anyway, so we can only debug and fix the error, or restart the application and hope it won’t happen again. If you have any suggestions about dealing with deadlocks, or a question about this solution, drop a comment below.
February 25, 2015
by Ivan Korhner
· 52,153 Views · 4 Likes
Per Client Cookie Handling With Jersey
A lot of REST services will use cookies as part of the authentication / authorisation scheme. This is a problem because, by default, the old Jersey client will use the singleton CookieHandler.getDefault, which in most cases will be null, and if not null will not likely work in a multithreaded server environment. (This is because in the background the default Jersey client will use URL.openConnection.) Now you can work around this by using the Apache HTTP Client adapter for Jersey, but this is not always available. So if you want to use the Jersey client with cookies in a server environment, you need to do a little bit of reflection to ensure you use your own private cookie jar.

final CookieHandler ch = new CookieManager();
Client client = new Client(new URLConnectionClientHandler(
    new HttpURLConnectionFactory() {
        @Override
        public HttpURLConnection getHttpURLConnection(URL uRL) throws IOException {
            HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
            try {
                Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                cookieField.setAccessible(true);
                MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                mh.bindTo(connect).invoke(ch);
            } catch (Throwable e) {
                e.printStackTrace();
            }
            return connect;
        }
    }));

This will only work if your environment is using the internal implementation of sun.net.www.protocol.http.HttpURLConnection that comes with the JDK. This appears to be the case for modern versions of WLS. For JAX-RS 2.0 you can make a similar change using the Jersey 2.x specific ClientConfig class and HttpUrlConnectorProvider.

final CookieHandler ch = new CookieManager();
Client client = ClientBuilder.newClient(new ClientConfig()
    .connectorProvider(new HttpUrlConnectorProvider().connectionFactory(
        new HttpUrlConnectorProvider.ConnectionFactory() {
            @Override
            public HttpURLConnection getConnection(URL uRL) throws IOException {
                HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
                try {
                    Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                    cookieField.setAccessible(true);
                    MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                    mh.bindTo(connect).invoke(ch);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
                return connect;
            }
        })));
February 24, 2015
by Gerard Davison
· 7,979 Views
Sneak Peek into the JCache API (JSR 107)
This post covers the JCache API at a high level and provides a teaser – just enough for you to (hopefully) start itching about it ;-) In this post: a JCache overview; the JCache API and its implementations; supported (Java) platforms for the JCache API; a quick look at Oracle Coherence; and fun stuff – Project Headlands (RESTified JCache by Adam Bien), JCache-related talks at JavaOne 2014, and links to resources for learning more about JCache.

What is JCache? JCache (JSR 107) is a standard caching API for Java. It provides an API for applications to create and work with an in-memory cache of objects. The benefits are obvious – one does not need to concentrate on the finer details of implementing the caching, and time is better spent on the core business logic of the application.

JCache components: The specification itself is very compact and surprisingly intuitive. The API defines high-level components (interfaces), some of which are listed below.
Caching Provider – used to control Cache Managers and can deal with several of them
Cache Manager – deals with create, read, destroy operations on a Cache
Cache – stores entries (the actual data) and exposes CRUD interfaces to deal with the entries
Entry – abstraction on top of a key-value pair akin to a java.util.Map
Hierarchy of JCache API components

JCache Implementations: JCache defines the interfaces, which of course are implemented by different vendors, a.k.a. providers: Oracle Coherence, Hazelcast, Infinispan, Ehcache, and the Reference Implementation – the last is more for reference purposes than a production-quality implementation. It is per the specification though, and you can rest assured that it does in fact pass the TCK as well. From the application point of view, all that’s required is for the implementation to be present on the classpath. The API also provides a way to further fine-tune the properties specific to your provider via standard mechanisms. You should be able to track the list of JCache reference implementations from the JCP website link.

public class JCacheUsage {
    public static void main(String[] args) {
        // bootstrap the JCache Provider
        CachingProvider jcacheProvider = Caching.getCachingProvider();
        CacheManager jcacheManager = jcacheProvider.getCacheManager();
        // configure cache
        MutableConfiguration<String, MyPreciousObject> jcacheConfig = new MutableConfiguration<>();
        jcacheConfig.setTypes(String.class, MyPreciousObject.class);
        // create cache
        Cache<String, MyPreciousObject> cache = jcacheManager.createCache("PreciousObjectCache", jcacheConfig);
        // play around
        String key = UUID.randomUUID().toString();
        cache.put(key, new MyPreciousObject());
        MyPreciousObject inserted = cache.get(key);
        cache.remove(key);
        cache.get(key); // the key no longer exists in the cache
    }
}

JCache provider detection: provider detection happens automatically when you only have a single JCache provider on the class path. You can choose from the below options as well:

// set JVM level system property
-Djavax.cache.spi.cachingprovider=org.ehcache.jcache.JCacheCachingProvider

// code level config
System.setProperty("javax.cache.spi.cachingprovider", "org.ehcache.jcache.JCacheCachingProvider");

// choose from multiple JCache providers at runtime
CachingProvider ehcacheJCacheProvider = Caching.getCachingProvider("org.ehcache.jcache.JCacheCachingProvider");

// which JCache providers do I have on the classpath?
Iterable<CachingProvider> jcacheProviders = Caching.getCachingProviders();

Java platform support: JCache is compliant with Java SE 6 and above. It does not define any details in terms of Java EE integration. This does not mean that it cannot be used in a Java EE environment – it’s just not standardized yet. It could not be plugged into Java EE 7 as a tried and tested standard, but it is a candidate for Java EE 8.

Project Headlands: Java EE and JCache in tandem. By none other than Adam Bien himself! Java EE 7, Java SE 8 and JCache in action; it exposes the JCache API via JAX-RS (REST) and uses Hazelcast as the JCache provider. Highly recommended!

Oracle Coherence: This post deals with high-level stuff w.r.t. JCache in general. However, a few lines about Oracle Coherence would help put things in perspective. Oracle Coherence is a part of Oracle’s Cloud Application Foundation stack. It is primarily an in-memory data grid solution, geared towards making applications more scalable in general. What’s important to know is that from version 12.1.3 onwards, Oracle Coherence includes a reference implementation for JCache (more in the next section).

JCache support in Oracle Coherence: Support for JCache implies that applications can now use a standard API to access the capabilities of Oracle Coherence. This is made possible by Coherence simply providing an abstraction over its existing interfaces (NamedCache etc.). The application deals with a standard interface (the JCache API) and the calls to the API are delegated to the existing Coherence core library implementation. Support for the JCache API also means that one does not need to use Coherence-specific APIs in the application, resulting in vendor-neutral code, which equals portability. How ironic – supporting a standard API and always keeping your competitors in the hunt ;-) But hey! That’s what healthy competition and quality software is all about! Talking of healthy competition – Oracle Coherence does support a host of other features in addition to the standard JCache-related capabilities. The Oracle Coherence distribution contains all the libraries for working with the JCache implementation, and the service definition file in coherence-jcache.jar qualifies it as a valid JCache provider implementation. Curious about Oracle Coherence? See the Quick Starter page, the documentation, and the installation guide; for further reading about the Coherence and JCache combo, see the Oracle Coherence documentation.

JCache at JavaOne 2014: a couple of great talks revolving around JCache at JavaOne 2014 – “Come, Code, Cache, Compute!” by Steve Millidge and “Using the New JCache” by Brian Oliver and Greg Luck.

Hope this was fun :-) Cheers!
February 23, 2015
by Abhishek Gupta
· 5,886 Views · 1 Like
Retry-After HTTP Header in Practice
Retry-After is a lesser-known HTTP response header.
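The teaser above does not include the article's examples. As a rough sketch under my own assumptions (the endpoint URL is hypothetical, and only the numeric seconds form of the header is handled), a client honoring Retry-After could look like this:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class RetryAfterClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/api/resource"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();
        if (status == 429 || status == 503) {
            String retryAfter = conn.getHeaderField("Retry-After");
            if (retryAfter != null) {
                // Retry-After may be a number of seconds or an HTTP-date; only seconds are parsed here
                long seconds = Long.parseLong(retryAfter.trim());
                TimeUnit.SECONDS.sleep(seconds);
                // ... re-issue the request here
            }
        }
    }
}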
February 20, 2015
by Tomasz Nurkiewicz
· 15,862 Views
Exception Handling in Spring REST Web Service
Learn how to handle exceptions in a Spring controller using ResponseEntity and HttpStatus, @ResponseStatus on a custom exception class, and more custom methods.
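As a sketch of the two techniques named in the teaser (the class and method names here are my assumptions, not taken from the article):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

class Customer {
    String id;
    Customer(String id) { this.id = id; }
}

@ResponseStatus(HttpStatus.NOT_FOUND)
class CustomerNotFoundException extends RuntimeException {
    CustomerNotFoundException(String id) { super("No customer with id " + id); }
}

@RestController
class CustomerController {

    // Option 1: build the response explicitly with ResponseEntity and HttpStatus
    @RequestMapping("/customers/{id}")
    ResponseEntity<Customer> byId(@PathVariable String id) {
        Customer customer = find(id);
        return customer == null
                ? new ResponseEntity<Customer>(HttpStatus.NOT_FOUND)
                : new ResponseEntity<Customer>(customer, HttpStatus.OK);
    }

    // Option 2: throw an exception annotated with @ResponseStatus and let Spring map it to 404
    @RequestMapping("/customers/strict/{id}")
    Customer strictById(@PathVariable String id) {
        Customer customer = find(id);
        if (customer == null) throw new CustomerNotFoundException(id);
        return customer;
    }

    // placeholder lookup; a real controller would delegate to a service or repository
    private Customer find(String id) {
        return "42".equals(id) ? new Customer(id) : null;
    }
}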
February 18, 2015
by Roshan Thomas
· 284,598 Views · 13 Likes
Converting an Application to JHipster
I've been intrigued by JHipster ever since I first tried it last September. I'd worked with AngularJS and Spring Boot quite a bit, and I liked the idea that someone had combined them, adding some nifty features along the way. When I spoke about AngularJS earlier this month, I included a few slides on JHipster near the end of the presentation. This week, I received an email from someone who attended that presentation:

Hey Matt, We met a few weeks back when you presented at DOSUG. You were talking about JHipster which I had been eyeing for a few months and wanted your quick .02 cents. I have built a pretty heavy application over the last 6 months that is using mostly the same tech as JHipster: Java, Spring, JPA, AngularJS, Compass, Grunt. It's ridiculously close for most of the tech stack. So, I was debating rolling it over into a JHipster app to make it a more familiar stack for folks. My concern is that I will spend months trying to shoehorn it in for not much ROI. Any thoughts on going down this path? What are the biggest issues you've seen in using JHipster? It seems pretty straightforward except for the entity generators. I'm concerned they are totally different than what I am using. The main difference in what I'm doing compared to JHipster is my almost complete use of Groovy instead of old school Java in the app. I would have to be forced into going back to regular Java beans... Thoughts?

I replied with the following advice:

JHipster is great for starting a project, but I don't know that it buys you much value after the first few months. I would stick with your current setup and consider JHipster for your next project. I've only prototyped with it; I haven't created any client apps or put anything in production. I have with Spring Boot and AngularJS though, so I like that JHipster combines them for me. JHipster doesn't generate Scala or Groovy code, but you could still use them in a project as long as you had Maven/Gradle configured properly. You might try generating a new app with JHipster and examine how they're doing this. At the very least, it can be a good learning tool, even if you're not using it directly.

Java Hipsters: Do you agree with this advice? Have you tried migrating an existing app to JHipster? Are any of you using Scala or Groovy in your JHipster projects?
February 13, 2015
by Matt Raible
· 8,289 Views · 2 Likes
Getting Started with Dropwizard: Authentication, Configuration and HTTPS
Basic Authentication is the simplest way to secure access to a resource.
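The article's code is not included in this excerpt. As a hedged sketch only (the User principal class and the hard-coded credential check are my assumptions, and the Optional type differs between Dropwizard versions), a basic-auth Authenticator in Dropwizard looks roughly like this:

import com.google.common.base.Optional;            // newer Dropwizard versions use java.util.Optional instead
import io.dropwizard.auth.AuthenticationException;
import io.dropwizard.auth.Authenticator;
import io.dropwizard.auth.basic.BasicCredentials;

// Minimal principal type for the example
class User {
    private final String name;
    User(String name) { this.name = name; }
    String getName() { return name; }
}

public class ExampleAuthenticator implements Authenticator<BasicCredentials, User> {
    @Override
    public Optional<User> authenticate(BasicCredentials credentials) throws AuthenticationException {
        // Replace this hard-coded check with a lookup against your configuration or user store
        if ("secret".equals(credentials.getPassword())) {
            return Optional.of(new User(credentials.getUsername()));
        }
        return Optional.absent();
    }
}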
February 10, 2015
by Dmitry Noranovich
· 45,922 Views · 1 Like
The API Gateway Pattern: Angular JS and Spring Security Part IV
Written by Dave Syer in the Spring blog. In this article we continue our discussion of how to use Spring Security with Angular JS in a “single page application”. Here we show how to build an API Gateway to control the authentication and access to the backend resources using Spring Cloud. This is the fourth in a series of articles, and you can catch up on the basic building blocks of the application or build it from scratch by reading the first article, or you can just go straight to the source code in GitHub. In the last article we built a simple distributed application that used Spring Session to authenticate the backend resources. In this one we make the UI server into a reverse proxy to the backend resource server, fixing the issues with the last implementation (technical complexity introduced by custom token authentication), and giving us a lot of new options for controlling access from the browser client. Reminder: if you are working through this article with the sample application, be sure to clear your browser cache of cookies and HTTP Basic credentials. In Chrome the best way to do that for a single server is to open a new incognito window.

Creating an API Gateway

An API Gateway is a single point of entry (and control) for front end clients, which could be browser based (like the examples in this article) or mobile. The client only has to know the URL of one server, and the backend can be refactored at will with no change, which is a significant advantage. There are other advantages in terms of centralization and control: rate limiting, authentication, auditing and logging. And implementing a simple reverse proxy is really simple with Spring Cloud. If you were following along in the code, you will know that the application implementation at the end of the last article was a bit complicated, so it’s not a great place to iterate away from. There was, however, a halfway point which we could start from more easily, where the backend resource wasn’t yet secured with Spring Security. The source code for this is a separate project in GitHub, so we are going to start from there. It has a UI server and a resource server and they are talking to each other. The resource server doesn’t have Spring Security yet, so we can get the system working first and then add that layer.

Declarative Reverse Proxy in One Line

To turn it into an API Gateway, the UI server needs one small tweak. Somewhere in the Spring configuration we need to add an @EnableZuulProxy annotation, e.g. in the main (only) application class:

@SpringBootApplication
@RestController
@EnableZuulProxy
public class UiApplication {
  ...
}

and in an external configuration file (“application.yml”) we need to map a local resource in the UI server to a remote one:

zuul:
  routes:
    resource:
      path: /resource/**
      url: http://localhost:9000

This says “map paths with the pattern /resource/** in this server to the same paths in the remote server at localhost:9000”. Simple and yet effective (OK so it’s 6 lines including the YAML, but you don’t always need that)! All we need to make this work is the right stuff on the classpath. For that purpose we have a few new lines in our Maven POM: the spring-cloud-starter-parent (group org.springframework.cloud, version 1.0.0.BUILD-SNAPSHOT, type pom, scope import) in the dependency management section, and a dependency on spring-cloud-starter-zuul. Note the use of “spring-cloud-starter-zuul” - it’s a starter POM just like the Spring Boot ones, but it governs the dependencies we need for this Zuul proxy. We are also using the imported parent BOM because we want to be able to depend on all the versions of transitive dependencies being correct.

Consuming the Proxy in the Client

With those changes in place our application still works, but we haven’t actually used the new proxy yet until we modify the client. Fortunately that’s trivial. We just need to go from this implementation of the “home” controller:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('http://localhost:9000/').success(function(data) {
    $scope.greeting = data;
  })
});

to a local resource:

angular.module('hello', [ 'ngRoute' ])
...
.controller('home', function($scope, $http) {
  $http.get('resource/').success(function(data) {
    $scope.greeting = data;
  })
});

Now when we fire up the servers everything is working and the requests are being proxied through the UI (API Gateway) to the resource server.

Further Simplifications

Even better: we don’t need the CORS filter any more in the resource server. We threw that one together pretty quickly anyway, and it should have been a red light that we had to do anything as technically focused by hand (especially where it concerns security). Fortunately it is now redundant, so we can just throw it away, and go back to sleeping at night!

Securing the Resource Server

You might remember that in the intermediate state we started from there is no security in place for the resource server. Aside: lack of software security might not even be a problem if your network architecture mirrors the application architecture (you can just make the resource server physically inaccessible to anyone but the UI server). As a simple demonstration of that, we can make the resource server only accessible on localhost. Just add this to application.properties in the resource server:

server.address: 127.0.0.1

Wow, that was easy! Do that with a network address that’s only visible in your data center and you have a security solution that works for all resource servers and all user desktops. Suppose that we decide we do need security at the software level (quite likely for a number of reasons). That’s not going to be a problem, because all we need to do is add Spring Security as a dependency (spring-boot-starter-security, in the resource server POM). That’s enough to get us a secure resource server, but it won’t get us a working application yet, for the same reason that it didn’t in Part III: there is no shared authentication state between the two servers.

Sharing Authentication State

We can use the same mechanism to share authentication (and CSRF) state as we did in the last article, i.e. Spring Session. We add the dependencies to both servers as before (spring-session 1.0.0.RELEASE and spring-boot-starter-redis), but this time the configuration is much simpler because we can just add the same filter declaration to both. First the UI server (adding @EnableRedisHttpSession):

@SpringBootApplication
@RestController
@EnableZuulProxy
@EnableRedisHttpSession
public class UiApplication {
  ...
}

and then the resource server. There are two changes to make: one is adding @EnableRedisHttpSession and a HeaderHttpSessionStrategy bean to the ResourceApplication:

@SpringBootApplication
@RestController
@EnableRedisHttpSession
class ResourceApplication {
  ...
  @Bean
  HeaderHttpSessionStrategy sessionStrategy() {
    new HeaderHttpSessionStrategy();
  }
}

and the other is to explicitly ask for a non-stateless session creation policy in application.properties:

security.sessions: NEVER

As long as Redis is still running in the background (use the fig.yml if you like to start it), then the system will work. Load the homepage for the UI at http://localhost:8080, log in, and you will see the message from the backend rendered on the homepage.

How Does it Work?

What is going on behind the scenes now? First we can look at the HTTP requests in the UI server (and API Gateway):

VERB  PATH                        STATUS  RESPONSE
GET   /                           200     index.html
GET   /css/angular-bootstrap.css  200     Twitter bootstrap CSS
GET   /js/angular-bootstrap.js    200     Bootstrap and Angular JS
GET   /js/hello.js                200     Application logic
GET   /user                       302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /resource                   302     Redirect to login page
GET   /login                      200     Whitelabel login page (ignored)
GET   /login.html                 200     Angular login form partial
POST  /login                      302     Redirect to home page (ignored)
GET   /user                       200     JSON authenticated user
GET   /resource                   200     (Proxied) JSON greeting

That’s identical to the sequence at the end of Part II, except for the fact that the cookie names are slightly different (“SESSION” instead of “JSESSIONID”) because we are using Spring Session. But the architecture is different, and that last request to “/resource” is special because it was proxied to the resource server. We can see the reverse proxy in action by looking at the “/trace” endpoint in the UI server (from Spring Boot Actuator, which we added with the Spring Cloud dependencies). Go to http://localhost:8080/trace in a browser and scroll to the end (if you don’t have one already, get a JSON plugin for your browser to make it nice and readable). You will need to authenticate with HTTP Basic (browser popup), but the same credentials are valid as for your login form. At or near the end you should see a pair of requests something like this:

{
  "timestamp": 1420558194546,
  "info": {
    "method": "GET",
    "path": "/",
    "query": "",
    "remote": true,
    "proxy": "resource",
    "headers": {
      "request": {
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260",
        "x-forwarded-prefix": "/resource",
        "x-forwarded-host": "localhost:8080"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
},
{
  "timestamp": 1420558200232,
  "info": {
    "method": "GET",
    "path": "/resource/",
    "headers": {
      "request": {
        "host": "localhost:8080",
        "accept": "application/json, text/plain, */*",
        "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260",
        "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260"
      },
      "response": {
        "Content-Type": "application/json;charset=UTF-8",
        "status": "200"
      }
    }
  }
}

The second entry there is the request from the client to the gateway on “/resource”, and you can see the cookies (added by the browser) and the CSRF header (added by Angular as discussed in Part II). The first entry has remote: true, which means it’s tracing the call to the resource server. You can see it went out to a URI path “/” and you can see that (crucially) the cookies and CSRF headers have been sent too. Without Spring Session these headers would be meaningless to the resource server, but the way we have set it up it can now use those headers to re-constitute a session with authentication and CSRF token data. So the request is permitted and we are in business!

Conclusion

We covered quite a lot in this article, but we got to a really nice place where there is a minimal amount of boilerplate code in our two servers, they are both nicely secure, and the user experience isn’t compromised. That alone would be a reason to use the API Gateway pattern, but really we have only scratched the surface of what it might be used for (Netflix uses it for a lot of things). Read up on Spring Cloud to find out more on how to make it easy to add more features to the gateway. The next article in this series will extend the application architecture a bit by extracting the authentication responsibilities to a separate server (the Single Sign On pattern).
February 9, 2015
by Pieter Humphrey
· 16,021 Views
NetBeans in the Classroom: MySQL JDBC Connection Pool & JDBC Resource for GlassFish
This tutorial assumes that you have installed the Java EE version of NetBeans 8.02. It is further assumed that you have replaced the default GlassFish server instance in NetBeans with a new instance with its own folder for the domain. This is required for all platforms. See my previous article “Creating a New Instance of GlassFish in NetBeans IDE” The other day I presented to my students the steps necessary to make GlassFish responsible for JDBC connections. As I went through the steps I realized that I needed to record the steps for my students to reference. Here then are these steps. Steps 1 through 4 are required, Step 5 is optional. Step 1a: Manually Adding the MySQL driver to the GlassFish Domain Before we even start NetBeans we must do a bit of preliminary work. With the exception of Derby, GlassFish does not include the MySQL driver or any other driver in its distribution. Go to the MySql Connector/J download site at http://dev.mysql.com/downloads/connector/j/ and download the latest version. I recommend downloading the Platform Independent version. If your OS is Windows download the ZIP archive otherwise download the TAR archive. You are looking for the driver file named mysql-connector-java-5.1.34-bin.jar in the archive. Copy the driver file to the lib folder in the directory where you placed your domain. On my system the folder is located at C:\Users\Ken\personal_domain\lib. If GlassFish is already running then you will have to restart it so that it picks up the new library. Step 1b: Automatically Adding the MySQL driver to the GlassFish Domain NetBeans has a feature that deploys the database driver to the domain’s lib folder if that driver is in NetBeans’ folder of drivers. On my Windows 8.1 system the MySQL driver can be found in C:\Program Files\NetBeans 8.0.2\ide\modules\ext. Start NetBeans and go to the Services tab, expand Servers and right mouse click on your GlassFish Server. Click on Properties and the Servers dialog will appear. On this dialog you will see a check box labelled Enable JDBC Driver Deployment. By default it is checked. NetBeans determines the driver to copy to GlassFish from the file glassfish-resources.xml that we will create in Step 4 of this tutorial. Without this file and if you have not copied the driver into GlassFish manually then GlassFish will not be able to connect to the database. Any code in your web application will not work and all you will likely see are blank pages. Step 1a or Step 1b? I recommend Step 1a and manually add the driver. The reason I prefer this approach is that I can be certain that the most recent driver is in use. As of this writing NetBeans contains version 5.1.23 of the connector but the current version is 5.1.34. If you copy a driver into the lib folder then NetBeans will not replace it with an older driver even if the check box on the Server dialog is checked. NetBeans does not replace a driver if one is already in place. If you need a driver that NetBeans does have a copy of then Step 1b is your only choice. Step 2: Create a Database Connection in NetBeans One feature I have always liked in NetBeans is that it has an interface for working with databases. All that is required is that you create a connection to the database. It also has additional features for managing a MySQL server but we won’t need those. If you have not already started your MySQL DBMS then do that now. I assume that the database you wish to connect to already exists. Go to the Services tab and right mouse click on New Connection. 
In the next dialog you must choose the database driver you wish to use. It defaults to Java DB (Embedded). Pull down the combobox labeled Driver: and select MySQL (Connector/J driver). Click on Next and you will now see the Customize Connection dialog. Here you can enter the details of the connection. On my system the server is localhost and the database name is Aquarium. Here is what my dialog looks like. Notice the Test Connection button. I have clicked on mine and so I have the message Connection Succeeded. Click on Next. There is nothing to do on this dialog so click on Next. On this last dialog you have the option of assigning a name to the connection. By default it uses the URL but I prefer a more meaningful name. I have used AquariumMySQL. Click on Finish and the connection will appear under Databases. If the icon next to AquariumMySQL has what looks like a crack in it similar to the jdbc:derby connection then this means that a connection to the database could not be made. Verify that the database is running and is accessible. If it is then delete the connection and start over. Having a connection to the database in NetBeans is invaluable. You can interact with the database directly and issue SQL commands. As a MySQL user this means that I do not need to run the MySQL command line program to interact with the database. Step 3: Create a Web Application Project in NetBeans If you have not already done so create a New Project in NetBeans. I require my students to create a New Project in the Maven category of a Web Application project. Click on Next. In this dialog you can give the project a name and a location in your file system. The Artifact Id, Group Id and Version are used by Maven. The final dialog lets you select the application server that your application will use and the version of Java EE that your code must be compliant with. Here is my project ready for the next step. Step 4: Create the GlassFish JDBC Resource For GlassFish to manage your database connection you need to set up two resources, a JDBC Connection Pool and a JDBC Resource. You can create both in one step by creating a GlassFish JDBC Resource because you can create the Connection Pool as part of the same operation. Right mouse click on the project name and select New and then Other … Scroll down the Categories list and select GlassFish. In the File Types list select JDBC Resource. Click on Next. The next dialog is the General Attributes. Click on the radio button for Create New JDBC Connection Pool. In the text field JNDI Name enter a name that is unique for the project. JNDI names for connection resources always begin with jdbc/ followed by a name that starts with a lower case letter. I have used jdbc/myAquarium. Do not prefix the name with java:app/ as some tutorials suggest. An upcoming article will explain why. Click on Next. There is nothing for us to enter on the Properties dialog. Click on Next. On the Choose Database Connection dialog we will give our connection pool a name and select the database connection we created in Step 2. Notice that in the list of available connections you are shown the connection URL and not the name you assigned to it back in Step 2. Click on Next. On the Add Connection Pool Properties dialog you will see the connection URL and the user name and password. We do need to make one change. The resource type shows javax.sql.DataSource and we must change it to javax.sql.ConnectionPoolDataSource. Click on Next. 
There is nothing we need to change on Add Connection Pool Optional Properties so click on Finish. A new folder has appeared in the Projects view named Other Sources. It contains a sub folder named setup. In this folder is the file glassfish-resources.xml. The glassfish-resources.xml file will contain the following. I have reformatted the file for easier viewing. OPTIONAL Step 5: Configure GlassFish with glassfish-resources.xml The glassfish-resources.xml file, when included in the application’s WAR file in the WEB-INF folder, can configure the resource and pool for the application when it is deployed in GlassFish. When the application is un-deployed the resource and pool are removed. If you want to set up the resource and pool permanently in GlassFish then follow these steps. Go to the Services tab and select Servers and then right mouse click on GlassFish. If GlassFish is not running then click on Start. With the server started click on View Domain Admin Console. Your web browser will now open and show you the GlassFish console. If you assigned a user name and password to the server you will have to enter this information before you see the console. In the Common Tasks tree select Resources. You should now see in the panel adjacent to the tree the following: Click on Add Resources. You should now see: In the Location click on Choose File and locate your glassfish-resources.xml file. Mine is found at D:\NetBeansProjects\GlassFishTutorial\src\main\setup. You should now see: Click on OK. If everything has gone well you should see: The final task in this step is to test if the connection works. In the Common Tasks tree select Resources, JDBC, JDBC Connection Pools and aquariumPool. Click on Ping. You should see: The most common reason for the Ping to fail is that the database driver is not in the domain’s lib folder. Go to Step 1a and manually add the driver. The resources are now visible in NetBeans. Having the resource and pool add to GlassFish permanently will allow other applications to share this same resource and pool. You are now ready to code!
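Once the resource and pool are in place, application code never touches the MySQL driver directly; it just asks the container for the pool by its JNDI name. As a sketch under my own assumptions (the servlet, table name, and query are hypothetical; only the jdbc/myAquarium JNDI name comes from the steps above):

import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/fish")
public class FishServlet extends HttpServlet {

    // GlassFish injects the pooled DataSource registered under this JNDI name;
    // depending on your mapping you may need the lookup attribute instead of name
    @Resource(name = "jdbc/myAquarium")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM fish")) { // table name is hypothetical
            rs.next();
            resp.getWriter().println("Fish in the aquarium: " + rs.getInt(1));
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}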
February 9, 2015
by Ken Fogel
· 51,957 Views · 3 Likes
(C# Tutorial) How to Create an Edge Detection Function
I am really interested in Computer Vision technology, so I started to dig deeper in this topic. This is how I found the code below for edge detection, a method belonging to object detection. If you are interested in implementing edge detection in C#, too, here you can find the code I tried. The code is from a prewritten C# camera library you can all access. To develop the edge detection function you only need to have a Visual C# WPF Application created in Visual Studio and the VOIPSDK.dll and NVA.dll files (from www.camera-sdk.com ) added to the references. Creating the user interface is the first step. It will help you to use edge detection by providing an easy-to-use interface. You will have two fields to display the original image and the processed image of the camera, you can set values for Canny Threshold and Canny Threshold Linking and you can set whether you want the edges of the detected elements to be white or colorized. You can find the code of the GUI under Form1.Designer.cs. Under Form1.cs there is the code for the edge detection function. You can see how to code is built up and what you should to create this function. There will be the different methods you have to call and all the configurations are described. Trust me, guys, this source code will help you a lot, it made my life easier. Good luck! // Form1.Designer.cs namespace EdgeDetection { partial class Form1 { /// /// Required designer variable. /// private System.ComponentModel.IContainer components = null; /// /// Clean up any resources being used. /// /// true if managed resources should be disposed; otherwise, false. protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } #region Windows Form Designer generated code /// /// Required method for Designer support - do not modify /// the contents of this method with the code editor. 
/// private void InitializeComponent() { this.label1 = new System.Windows.Forms.Label(); this.label2 = new System.Windows.Forms.Label(); this.btn_Set = new System.Windows.Forms.Button(); this.tb_CannyThreshold = new System.Windows.Forms.TextBox(); this.groupBox1 = new System.Windows.Forms.GroupBox(); this.chk_Colorized = new System.Windows.Forms.CheckBox(); this.label3 = new System.Windows.Forms.Label(); this.tb_CannyThresholdLinking = new System.Windows.Forms.TextBox(); this.label4 = new System.Windows.Forms.Label(); this.groupBox1.SuspendLayout(); this.SuspendLayout(); // // label1 // this.label1.AutoSize = true; this.label1.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(238))); this.label1.Location = new System.Drawing.Point(30, 265); this.label1.Name = "label1"; this.label1.Size = new System.Drawing.Size(87, 13); this.label1.TabIndex = 0; this.label1.Text = "Original image"; // // label2 // this.label2.AutoSize = true; this.label2.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(238))); this.label2.Location = new System.Drawing.Point(370, 265); this.label2.Name = "label2"; this.label2.Size = new System.Drawing.Size(103, 13); this.label2.TabIndex = 1; this.label2.Text = "Processed image"; // // btn_Set // this.btn_Set.Location = new System.Drawing.Point(182, 188); this.btn_Set.Name = "btn_Set"; this.btn_Set.Size = new System.Drawing.Size(58, 23); this.btn_Set.TabIndex = 2; this.btn_Set.Text = "Set"; this.btn_Set.UseVisualStyleBackColor = true; this.btn_Set.Click += new System.EventHandler(this.btn_Set_Click); // // tb_CannyThreshold // this.tb_CannyThreshold.Location = new System.Drawing.Point(153, 31); this.tb_CannyThreshold.Name = "tb_CannyThreshold"; this.tb_CannyThreshold.Size = new System.Drawing.Size(87, 20); this.tb_CannyThreshold.TabIndex = 4; // // groupBox1 // this.groupBox1.Controls.Add(this.chk_Colorized); this.groupBox1.Controls.Add(this.label3); this.groupBox1.Controls.Add(this.tb_CannyThresholdLinking); this.groupBox1.Controls.Add(this.label4); this.groupBox1.Controls.Add(this.btn_Set); this.groupBox1.Controls.Add(this.tb_CannyThreshold); this.groupBox1.Location = new System.Drawing.Point(677, 12); this.groupBox1.Name = "groupBox1"; this.groupBox1.Size = new System.Drawing.Size(258, 230); this.groupBox1.TabIndex = 12; this.groupBox1.TabStop = false; this.groupBox1.Text = "Settings"; // // chk_Colorized // this.chk_Colorized.AutoSize = true; this.chk_Colorized.CheckAlign = System.Drawing.ContentAlignment.MiddleRight; this.chk_Colorized.Location = new System.Drawing.Point(98, 115); this.chk_Colorized.Name = "chk_Colorized"; this.chk_Colorized.Size = new System.Drawing.Size(69, 17); this.chk_Colorized.TabIndex = 15; this.chk_Colorized.Text = "Colorized"; this.chk_Colorized.UseVisualStyleBackColor = true; // // label3 // this.label3.AutoSize = true; this.label3.Location = new System.Drawing.Point(27, 76); this.label3.Name = "label3"; this.label3.Size = new System.Drawing.Size(121, 13); this.label3.TabIndex = 14; this.label3.Text = "CannyThresholdLinking:"; // // tb_CannyThresholdLinking // this.tb_CannyThresholdLinking.Location = new System.Drawing.Point(153, 73); this.tb_CannyThresholdLinking.Name = "tb_CannyThresholdLinking"; this.tb_CannyThresholdLinking.Size = new System.Drawing.Size(87, 20); this.tb_CannyThresholdLinking.TabIndex = 13; // // label4 // this.label4.AutoSize = true; 
this.label4.Location = new System.Drawing.Point(61, 34); this.label4.Name = "label4"; this.label4.Size = new System.Drawing.Size(87, 13); this.label4.TabIndex = 11; this.label4.Text = "CannyThreshold:"; // // MainForm // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; this.ClientSize = new System.Drawing.Size(947, 307); this.Controls.Add(this.groupBox1); this.Controls.Add(this.label2); this.Controls.Add(this.label1); this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedSingle; this.MaximizeBox = false; this.Name = "MainForm"; this.Text = "Edge Detection"; this.Load += new System.EventHandler(this.MainForm_Load); this.groupBox1.ResumeLayout(false); this.groupBox1.PerformLayout(); this.ResumeLayout(false); this.PerformLayout(); } #endregion private System.Windows.Forms.Label label1; private System.Windows.Forms.Label label2; private System.Windows.Forms.Button btn_Set; private System.Windows.Forms.TextBox tb_CannyThreshold; private System.Windows.Forms.GroupBox groupBox1; private System.Windows.Forms.Label label4; private System.Windows.Forms.Label label3; private System.Windows.Forms.TextBox tb_CannyThresholdLinking; private System.Windows.Forms.CheckBox chk_Colorized; } } // Form1.cs using System; using System.Drawing; using System.Windows.Forms; using Ozeki.Media.MediaHandlers; using Ozeki.Media.MediaHandlers.Video; using Ozeki.Media.MediaHandlers.Video.CV; using Ozeki.Media.MediaHandlers.Video.CV.Processer; using Ozeki.Media.Video.Controls; namespace EdgeDetection { public partial class Form1 : Form { WebCamera _webCamera; MediaConnector _connector; ImageProcesserHandler _imageProcesserHandler; IEdgeDetector _edgeDetector; FrameCapture _frameCapture; VideoViewerWF _originalView; VideoViewerWF _processedView; DrawingImageProvider _originalImageProvider; DrawingImageProvider _processedImageProvider; public Form1() { InitializeComponent(); } void MainForm_Load(object sender, EventArgs e) { Init(); SetVideoViewers(); InitDetectorFields(); ConnectWebcam(); Start(); } void Init() { _frameCapture = new FrameCapture(); _frameCapture.SetInterval(5); _webCamera = WebCamera.GetDefaultDevice(); _connector = new MediaConnector(); _originalImageProvider = new DrawingImageProvider(); _processedImageProvider = new DrawingImageProvider(); _edgeDetector = ImageProcesserFactory.CreateEdgeDetector(); _imageProcesserHandler = new ImageProcesserHandler(); _imageProcesserHandler.AddProcesser(_edgeDetector); } void SetVideoViewers() { _originalView = new VideoViewerWF { BackColor = Color.Black, Location = new Point(10, 20), Size = new Size(320, 240) }; _originalView.SetImageProvider(_originalImageProvider); Controls.Add(_originalView); _processedView = new VideoViewerWF { BackColor = Color.Black, Location = new Point(350, 20), Size = new Size(320, 240) }; _processedView.SetImageProvider(_processedImageProvider); Controls.Add(_processedView); } void InitDetectorFields() { InvokeGUIThread(() => { tb_CannyThreshold.Text = _edgeDetector.CannyThreshold.ToString(); tb_CannyThresholdLinking.Text = _edgeDetector.CannyThresholdLinking.ToString(); chk_Colorized.Checked = _edgeDetector.Colorized; }); } void ConnectWebcam() { _connector.Connect(_webCamera, _originalImageProvider); _connector.Connect(_webCamera, _frameCapture); _connector.Connect(_frameCapture, _imageProcesserHandler); _connector.Connect(_imageProcesserHandler, _processedImageProvider); } void Start() { _originalView.Start(); _processedView.Start(); 
_frameCapture.Start(); _webCamera.Start(); } void btn_Set_Click(object sender, EventArgs e) { InvokeGUIThread(() => { _edgeDetector.CannyThreshold = Double.Parse(tb_CannyThreshold.Text); _edgeDetector.CannyThresholdLinking = Double.Parse(tb_CannyThresholdLinking.Text); _edgeDetector.Colorized = chk_Colorized.Checked; }); } void InvokeGUIThread(Action action) { BeginInvoke(action); } } }
February 6, 2015
by Mahendra Gadhavi
· 3,359 Views
Microservices: Five Architectural Constraints
Microservices is a new software architecture and delivery paradigm, where applications are composed of several small runtime services. The current mainstream approach for software delivery is to build, integrate, and test entire applications as a monolith. This approach requires any software change, however small, to require a full test cycle of the entire application. With Microservices a software module is delivered as an independent runtime service with a well defined API. The Microservices approach allow faster delivery of smaller incremental changes to an application. There are several tradeoffs to consider with the Microservices architecture. On one hand, the Microservices approach builds on several best practices and patterns for software design, architecture, and DevOps style organization. On the other hand, Microservices requires expertise in distributed programming and can become an operational nightmare without proper tooling in place. There are several good posts that highlight the pros-and-cons of Microservices, and I have added in the references section. In the remainder of this post, I will define five architectural constraints (principles that drive desired properties) for the Microservices architectural style. To be a Microservice, a service must be: Elastic Resilient Composable Minimal, and; Complete Microservice Constraint #1 - Elastic A microservice must be able to scale, up or down, independently of other services in the same application. This constraint implies that based on load, or other factors, you can fine tune your applications performance, availability, and resource usage. This constraint can be realized in different ways, but a popular pattern is to architect the system so that you can run multiple stateless instances of each microservice, and there is a mechanism for Service naming, registration, and discovery along with routing and load-balancing of requests. Microservice Constraint #2 - Resilient A microservice must fail without impacting other services in the same application. A failure of a single service instance should have minimal impact on the application. A failure of all instances of a microservice, should only impact a single application function and users should be able to continue using the rest of the application without impact. Adrian Cockroft describes Microservices as loosely coupled service oriented architecture with bounded contexts [3]. To be resilient a service has to be loosely coupled with other services, and a bounded context limits a service’s failure domain. Microservice Constraint #3 - Composable A microservice must offer an interface that is uniform and is designed to support service composition. Microservice APIs should be designed with a common way of identifying, representing, and manipulating resources, describing the API schema and supported API operations. The ‘Uniform Interfaces constraint of the REST architectural style describes this in detail. Service Composition is a SOA principle that has fairly obvious benefits, but few guidelines on how it can be achieved. A Microservice interface should be designed to support composition patterns like aggregation, linking, and higher-level functions such as caching, proxies and gateways. I previously discussed REST constraints and elements in as two part blog post: REST is not about APIs Microservice Constraint #4 - Minimal A microservice must only contain highly cohesive entities In software, cohesion is a measure of whether things belong together. 
A module is said to have high cohesion if all objects and functions in it are focused on the same tasks. Higher cohesion leads to more maintainable software. A Microservice should perform a single business function, which implies that all of its components are highly cohesive. This is also the Single Responsibility Principle (SRP) of object-oriented design [4]. Microservice Constraint #5 - Complete A microservice must be functionally complete. Bjarne Stroustrup, the creator of C++, stated that a good interface must be "minimal but complete," i.e. as small as possible, but no smaller. Similarly, a Microservice must offer a complete function, with minimal dependencies (loose coupling) on other services in the application. This is important, as otherwise it becomes impossible to version and upgrade individual services. This constraint is designed to balance the minimal constraint. Put together, a microservice must be "minimal but complete." Conclusions Designing a Microservices application requires applying several principles, patterns, and best practices of modular design and service-oriented architectures. In this post, I've outlined five architectural constraints which can help guide and retain the key benefits of a Microservices-style architecture. For example, Microservice Constraint #1 - Elastic steers implementations towards separating the data tier from the application tier, and leads to stateless services. At Nirmata we have built our solution, which makes it easy to deploy and operate microservices applications, using these same principles. We believe that Microservices-style applications, running in containers, will power the next generation of software innovation. If you are using, or interested in using, microservices, I would love to hear from you. Jim Bugwadia Founder and CEO Nirmata -- For additional content and articles follow us at @NirmataCloud. -- If you are in the San Francisco Bay Area, come join our Microservices meetup group. References [1] Microservices, Martin Fowler and James Lewis, http://martinfowler.com/articles/microservices.html [2] Microservices Are Not a Free Lunch!, Benjamin Wootton, http://contino.co.uk/microservices-not-a-free-lunch/ [3] State of the Art in Microservices, Adrian Cockcroft, http://thenewstack.io/dockercon-europe-adrian-cockcroft-on-the-state-of-microservices/ [4] The Principles of Object-Oriented Design, Robert C. Martin, http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
February 5, 2015
by Jim Bugwadia DZone Core CORE
· 12,858 Views · 7 Likes
article thumbnail
Date Time Format Conversion with XSLT Mediator in WSO2 ESB
I recently came across a requirement where an xsd:dateTime in the payload needed to be converted to a different date-time format, as follows: Original format: 2015-01-07T09:30:10+02:00 Required format: 2015/01/07 09:30:10 In WSO2 ESB, I found that this transformation can be achieved through an XSLT mediator, a class mediator, or a script mediator. In overview, the XSLT mediator uses an XSL stylesheet to format the XML payload passed to the mediator, whereas with the class mediator and the script mediator we use Java code and JavaScript code, respectively, to manipulate the message context. In this blog post I am going to present how this transformation can be achieved by means of the XSLT mediator. XSL Stylesheet Proxy configuration The dateTime.xsl XSL stylesheet is stored as an inline XML local entry in the ESB. In the proxy, the original date is passed as a parameter ("date_time") to the XSL stylesheet. I have used the format-dateTime function, an XSL 2.0 function, to do the transformation. Sample request 2015-01-07T09:30:10+02:00 Console output 2015/01/07 09:30:10 GMT+2
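For readers who want to sanity-check the target pattern, the same conversion can be sketched in plain Java with java.time; this is only an illustration of the format change, not part of the ESB configuration, and the class name is made up:

    // Illustration only: converts the xsd:dateTime value into the yyyy/MM/dd HH:mm:ss form,
    // i.e. the same transformation the XSL stylesheet performs inside the ESB.
    import java.time.OffsetDateTime;
    import java.time.format.DateTimeFormatter;

    public class DateTimeFormatDemo {
        public static void main(String[] args) {
            OffsetDateTime original = OffsetDateTime.parse("2015-01-07T09:30:10+02:00");
            DateTimeFormatter target = DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss");
            System.out.println(original.format(target)); // prints 2015/01/07 09:30:10
        }
    }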
February 4, 2015
by Kalpa Welivitigoda
· 8,267 Views · 1 Like
article thumbnail
Dropwizard vs Spring Boot—A Comparison Matrix
Of late, I have been looking into Microservice containers that are available out there to help speed up development. Although Microservice is a generic term, there is some consensus as to what it means. Hence, we may conveniently define a Microservice as an "architectural design pattern, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task." There are several Microservice containers out there. However, in my experience Dropwizard and Spring Boot have received more attention and appear to be more widely used than the rest. In my current role, I was asked to create a comparison matrix between the two, so here it is below.

What is it?
Dropwizard: Pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done. [more...]
Spring Boot: Takes an opinionated view of building production-ready Spring applications. Spring Boot favours convention over configuration and is designed to get you up and running as quickly as possible. [more...]

Overview?
Dropwizard: Straddles the line between being a library and a framework. Provides performant, reliable implementations of everything a production-ready web application needs. [more...]
Spring Boot: Takes an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration. [more...]

Out-of-the-box features?
Dropwizard: Out-of-the-box support for sophisticated configuration, application metrics, logging, operational tools, and much more, allowing you and your team to ship a production-quality web service in the shortest time possible. [more...]
Spring Boot: Provides a range of non-functional features that are common to large classes of projects (e.g. embedded servers, security, metrics, health checks, externalized configuration). [more...]

Libraries
Dropwizard: Core: Jetty, Jersey, Jackson and Metrics. Others: Guava, Liquibase and Joda Time.
Spring Boot: Spring, JUnit, Logback, Guava. There are several starter POM files covering various use cases, which can be included in the POM to get started.

Dependency Injection?
Dropwizard: No built-in Dependency Injection. Requires a 3rd-party dependency injection framework such as Guice, CDI or Dagger. [Ref...]
Spring Boot: Built-in Dependency Injection provided by the Spring Dependency Injection container. [Ref...]

Types of Services (i.e. REST, SOAP)
Dropwizard: Has some support for other types of services, but is primarily designed for a performant HTTP/REST layer. If you ever need to integrate SOAP, there is a Dropwizard bundle for building SOAP web services using the JAX-WS API here, but it is not an official Dropwizard sub-project. [more...]
Spring Boot: As well as supporting REST, Spring Boot has support for other types of services such as JMS, Advanced Message Queuing Protocol, and SOAP-based web services, to name a few. [more...]

Deployment? How does it create the executable jar?
Dropwizard: Uses shading to build executable fat jars, where a shaded jar packages all classes, from all jars, into a single 'uber jar'. [Ref...]
Spring Boot: Adopts a different approach and avoids shaded jars, as it becomes hard to see which libraries you are actually using in your application; it can also be problematic if the same filename is used in shaded jars. Instead, it uses a "nested jar" approach: all classes from all jars do not need to be included in a single "uber jar"; all dependent jars sit in a "lib" folder instead, and the Spring loader loads them appropriately. [Ref...]

Contract First Web Services?
Dropwizard: No built-in support. You would have to use a 3rd-party library (CXF or any other JAX-WS implementation) if you needed a solution for Contract First SOAP-based services.
Spring Boot: Contract First services support is available with the help of the spring-boot-starter-ws starter application. [Ref...]

Externalised Configuration (properties and YAML)
Dropwizard: Supports both properties and YAML.
Spring Boot: Supports both properties and YAML.

Concluding Remarks If dealing with only REST microservices, Dropwizard is an excellent choice. Where Spring Boot shines is in the types of services supported, i.e. REST, JMS, messaging, and Contract First services, not to mention a fully built-in Dependency Injection container. Disclaimer: The matrix is purely based on my personal views and experiences, having tried both frameworks, and is by no means an exhaustive guide. Readers are requested to do their own research before making a strategic decision between these two very formidable frameworks.
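As a concrete point of reference for the matrix above, a Spring Boot REST service can be as small as the following sketch (my own illustration, assuming the spring-boot-starter-web dependency is on the classpath; the class and endpoint names are made up):

    // Minimal Spring Boot application with a single REST endpoint.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication   // enables auto-configuration and component scanning
    @RestController
    public class HelloApplication {

        @RequestMapping("/hello")   // served by the embedded servlet container
        public String hello() {
            return "Hello from Spring Boot";
        }

        public static void main(String[] args) {
            SpringApplication.run(HelloApplication.class, args);
        }
    }

A Dropwizard equivalent is comparably small, but needs a Configuration class, an Application subclass, and a JAX-RS resource registered with Jersey, which illustrates the "library vs. framework" distinction noted in the matrix.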
February 2, 2015
by Rizwan Ullah
· 73,432 Views · 9 Likes
article thumbnail
Resource Injection vs. Dependency Injection Explained!
Fellow geeks, the following article provides an overview of injection in Java EE and describes the two injection mechanisms provided by the platform: Resource Injection and Dependency Injection. Java EE provides injection mechanisms that enable our objects to obtain references to resources and other dependencies without having to instantiate them directly (explicitly with the 'new' keyword). We simply declare the needed resources and other dependencies in our classes by decorating fields or methods with annotations that denote the injection points to the container. The container then provides the required instances at runtime. The advantage of injection is that it simplifies our code and decouples it from the implementations of its dependencies. Note that Dependency Injection is a specification (and also a design pattern), and Contexts and Dependency Injection (CDI) is an implementation and Java standard for DI. The following topics are discussed here: · Resource Injection · Dependency Injection · Difference between Resource Injection and Dependency Injection 1. Resource Injection One of the simplification features of Java EE is the implementation of basic Resource Injection to simplify web and EJB components. Resource injection enables you to inject any resource available in the JNDI namespace into any container-managed object, such as a servlet, an enterprise bean, or a managed bean. For example, we can use resource injection to inject data sources, connectors, or any other desired resources available in the JNDI namespace. The type we use for the reference to the injected instance is usually an interface, which decouples our code from the implementation of the resource. For a better understanding of the above statement, let's take a look at an example. Resource injection can be performed in the following three ways: · Field Injection · Method Injection · Class Injection Now, the javax.annotation.Resource annotation is used to declare a reference to a resource. So before proceeding, let's learn a few elements of the @Resource annotation. @Resource has the following elements: · name: The JNDI name of the resource · type: The Java type of the resource · authenticationType: The authentication type to use for the resource · shareable: Indicates whether the resource can be shared · mappedName: A non-portable, implementation-specific name to which the resource should be mapped · description: The description of the resource The name element is the JNDI name of the resource, and is optional for field- and method-based injection. For field-based injection, the default name is the field name. For method-based injection, the default name is the JavaBeans property name based on the method. The name and type elements must be specified for class-based injection. The description element is the description of the resource (optional). Let's hop on to the example now. Field Injection: To use field-based resource injection, declare a field and annotate it with the @Resource annotation. The container will infer the name and type of the resource if the name and type elements are not specified. If you do specify the type element, it must match the field's type declaration. package com.example; public class SomeClass { @Resource private javax.sql.DataSource myDB; ... } In the code above, the container infers the name of the resource based on the class name and the field name: com.example.SomeClass/myDB. The inferred type is javax.sql.DataSource.class. 
package com.example; public class SomeClass { @Resource(name="customerDB") private javax.sql.DataSource myDB; ... } In the code above, the JNDI name is customerDB, and the inferred type is javax.sql.DataSource.class. Method Injection: To use method-based injection, declare a setter method and precede it with the @Resource annotation. The container will infer the name and type of the resource if they are not specified by the programmer. The setter method must follow the JavaBeans conventions for property names: the method name must begin with set, have a void return type, and take only one parameter. If you do specify the type element, it must match the type of the setter's parameter. package com.example; public class SomeClass { private javax.sql.DataSource myDB; ... @Resource private void setMyDB(javax.sql.DataSource ds) { myDB = ds; } ... } In the code above, the container infers the name of the resource from the class name and the property name: com.example.SomeClass/myDB. The inferred type is javax.sql.DataSource.class. package com.example; public class SomeClass { private javax.sql.DataSource myDB; ... @Resource(name="customerDB") private void setMyDB(javax.sql.DataSource ds) { myDB = ds; } ... } In the code above, the JNDI name is customerDB, and the inferred type is javax.sql.DataSource.class. Class Injection: To use class-based injection, decorate the class with a @Resource annotation, and set the required name and type elements. @Resource(name="myMessageQueue", type=javax.jms.ConnectionFactory.class) public class SomeMessageBean { ... } Declaring Multiple Resources The @Resources annotation is used to group together multiple @Resource declarations, for class-based injection only. @Resources({ @Resource(name="myMessageQueue", type=javax.jms.ConnectionFactory.class), @Resource(name="myMailSession", type=javax.mail.Session.class) }) public class SomeMessageBean { ... } The code above shows the @Resources annotation containing two @Resource declarations. One is a JMS (Java Message Service) message queue, and the other is a JavaMail session. 2. Dependency Injection Dependency injection enables us to turn regular Java classes into managed objects and to inject them into any other managed object (objects which are managed by the container). Using DI, our code can declare dependencies on any managed object. The container automatically provides instances of these dependencies at the injection points at runtime, and it also manages the lifecycle of these instances, from class loading to release for garbage collection. Dependency injection in Java EE defines scopes. For example, a managed object that only needs to respond to a single client request (such as a currency converter) has a different scope than a managed object that is needed to process multiple client requests within a session (such as a shopping cart). We can define managed objects (also called managed beans) that we can later inject, by assigning a scope to the class: @javax.enterprise.context.RequestScoped public class CurrencyConverter { ... } Use the javax.inject.Inject annotation to inject managed beans; for example: public class MyServlet extends HttpServlet { @Inject CurrencyConverter cc; ... } Unlike resource injection, dependency injection is typesafe, because it resolves by type. To decouple our code from the implementation of the managed bean, we can reference the injected instances using an interface type and have our managed bean (a regular class controlled by the container) implement that interface. 
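To make that interface-based decoupling concrete, here is a small CDI sketch; it is my own illustration rather than part of the original article, and the interface, bean, and client names are made up:

    // The client depends only on the Converter interface; CDI resolves the
    // injection point by type and supplies the @RequestScoped implementation.
    import javax.enterprise.context.RequestScoped;
    import javax.inject.Inject;

    public interface Converter {
        double convert(double amount);
    }

    @RequestScoped
    class CurrencyConverterBean implements Converter {
        @Override
        public double convert(double amount) {
            return amount * 0.93; // fixed rate, purely for illustration
        }
    }

    class CheckoutService {
        @Inject
        Converter converter; // no reference to the concrete implementation

        double totalInEuros(double usd) {
            return converter.convert(usd);
        }
    }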
I won't discuss DI (or rather, CDI) any further here, since we already have a great article published on it. 3. Difference between Resource Injection and Dependency Injection The differences between RI and DI are listed below. 1. Resource Injection can inject JNDI resources directly, whereas Dependency Injection cannot. 2. Dependency Injection can inject regular classes (managed beans) directly, whereas Resource Injection cannot. 3. Resource Injection resolves by resource name, whereas Dependency Injection resolves by type. 4. Dependency Injection is typesafe, whereas Resource Injection is not. Conclusion: Thus we learned about the types of injection in Java EE and the differences between them. This was just a brief overview; there's more to come.
February 2, 2015
by Lalit Rao
· 68,518 Views · 10 Likes
article thumbnail
How-To: Setup Development Environment for Hadoop MapReduce
This post is intended for folks who are looking for a quick start on developing a basic Hadoop MapReduce application. We will see how to set up a basic MR application for WordCount using Java, Maven and Eclipse, and run a basic MR program in local mode, which is easy for debugging at an early stage. This assumes JDK 1.6+ is already installed, Eclipse has the Maven plugin set up, and downloads from the default Maven repository are not restricted. Problem Statement: To count the occurrence of each word appearing in an input file using MapReduce. Step 1: Adding the Dependency Create a Maven project in Eclipse and use the following code in your pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.saurzcode.hadoop</groupId>
  <artifactId>MapReduce</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
</project>

Upon saving, it should download all required dependencies for running a basic Hadoop MapReduce program. Step 2: Mapper Program The map step involves tokenizing the file, traversing the words, and emitting a count of one for each word that is found. Our mapper class should extend the Mapper class and override its map method. When this method is called, the value parameter of the method will contain a chunk of the lines of the file to be processed, and the context parameter is used to emit word instances. In a real-world clustered setup, this code will run on multiple nodes, and its output will be consumed by a set of reducers for further processing.

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
  private final IntWritable ONE = new IntWritable(1);
  private Text word = new Text();

  public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, ONE);
    }
  }
}

Step 3: Reducer Program Our reducer extends the Reducer class and implements the logic to sum up each occurrence of a word token received from the mappers. Output from the reducers goes to the output folder as a text file (by default, or as configured via the output format in the driver program) named part-r-00000, along with a _SUCCESS file.

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text text, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
    }
    context.write(text, new IntWritable(sum));
  }
}

Step 4: Driver Program Our driver program configures the job by supplying the map and reduce programs we just wrote, along with various input and output parameters.

public class WordCount {
  public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
    Path inputPath = new Path(args[0]);
    Path outputDir = new Path(args[1]);
    // Create configuration
    Configuration conf = new Configuration(true);
    // Create job
    Job job = new Job(conf, "WordCount");
    job.setJarByClass(WordCountMapper.class);
    // Setup MapReduce
    job.setMapperClass(WordCountMapper.class);
    job.setReducerClass(WordCountReducer.class);
    job.setNumReduceTasks(1);
    // Specify key / value
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input
    FileInputFormat.addInputPath(job, inputPath);
    job.setInputFormatClass(TextInputFormat.class);
    // Output
    FileOutputFormat.setOutputPath(job, outputDir);
    job.setOutputFormatClass(TextOutputFormat.class);
    // Delete output if exists
    FileSystem hdfs = FileSystem.get(conf);
    if (hdfs.exists(outputDir))
      hdfs.delete(outputDir, true);
    // Execute job
    int code = job.waitForCompletion(true) ? 0 : 1;
    System.exit(code);
  }
}

That's it! We are all set to execute our first MapReduce program in Eclipse in local mode. Let's assume there is an input text file called input.txt in the folder input, which contains the following text: foo bar is foo count count foo for saurzcode Expected output: foo 3 bar 1 is 1 count 2 for 1 saurzcode 1 Let's run this program in Eclipse as a Java application. We need to give the paths to the input file and the output folder to the program as arguments. Also, note that the output folder shouldn't exist before running this program, else the program will fail. java com.saurzcode.mapreduce.WordCount input/inputfile.txt output If this program runs successfully, emitting a set of lines while it executes the mappers and reducers, we should see an output folder with the following files: output/ _SUCCESS part-r-00000
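One optional tweak, not in the original driver: since the reducer only sums counts, it can also be registered as a combiner, so word counts are pre-aggregated on the map side before the shuffle. It is a single extra line in the driver above (sketch, assuming the classes shown in this post):

    // Optional: pre-aggregate counts on the map side; safe here because
    // summing IntWritable values is associative and commutative.
    job.setCombinerClass(WordCountReducer.class);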
January 30, 2015
by Saurabh Chhajed
· 12,764 Views · 1 Like
article thumbnail
Code Coverage for Embedded Target with Eclipse, gcc and gcov
The great thing with open source tools like Eclipse and GNU (gcc, gdb) is that there is a wealth of excellent tooling: one thing I had in mind to explore for a while is how to generate code coverage for my embedded application. Yes, GNU and Eclipse come with code profiling and code coverage tools, all for free! The only downside is that these tools seem to be rarely used for embedded targets. Maybe that knowledge is not widely available? So here is my attempt to change this :-). Or: How cool is it to see in Eclipse how many times a line in my sources has been executed? Line Coverage in Eclipse And best of all, it does not stop here…. Coverage with Eclipse To see what percentage of my files and functions are covered? gcov in Eclipse Or even to show the data with charts? Coverage Bar Graph View Outline In this tutorial I'm using a Freescale FRDM-K64F board: this board has an ARM Cortex-M4F on it, with 1 MByte of FLASH and 256 KByte of RAM. The approach used in this tutorial can be used with any embedded target, as long as there is enough RAM to store the coverage data on the target. I'm using Eclipse Kepler with the ARM Launchpad GNU tools (q3 2014 release), but with small modifications any Eclipse version or GNU toolchain could be used. To generate the code coverage information, I'm using gcov. Freescale FRDM-K64F Board Generating Code Coverage Information with gcov gcov is an open source program which can generate code coverage information. It tells me how often each line of a program is executed. This is important for testing, as that way I can know which parts of my application actually have been executed by the testing procedures. gcov can be used for profiling as well, but in this post I will use it to generate coverage information only. The general flow to generate code coverage is: Instrument code: Compile the application files with a special option. This will add (hidden) code and hooks which record how many times a piece of code is executed. Generate Instrumentation Information: as part of the previous step, the compiler generates basic block and line information. This information is stored on the host as *.gcno (Gnu Coverage Notes Object?) files. Run the application: While the application is running on the target, the instrumented code will record how many times the lines or blocks in the application are executed. This information is stored on the target (in RAM). Dump the recorded information: At application exit (or at any time), the recorded information needs to be stored and sent to the host. By default gcov stores information in files. As a file system might not always be available, other methods can be used (serial connection, USB, ftp, …) to send and store the information. In this tutorial I show how the debugger can be used for this. The information is stored as *.gcda (Gnu Coverage Data Analysis?) files. Generate the reports and visualize them with gcov. General gcov Flow gcc does the instrumentation and provides the library for code coverage, while gcov is the utility to analyze the generated data. Coverage: Compiler and Linker Options To generate the *.gcno files, the following option has to be added for each file which should generate coverage information: -fprofile-arcs -ftest-coverage :idea: There is also the '--coverage' option (a shortcut option) which can be used both for the compiler and the linker. But I prefer the 'full' options so I know what is behind them. 
-fprofile-arcs Compiler Option The option -fprofile-arcs adds code to the program flow so that the execution of source code lines is counted. It does this by instrumenting the program flow arcs. From https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html: -fprofile-arcs Add code so that program flow arcs are instrumented. During execution the program records how many times each branch and call is executed and how many times it is taken or returns. When the compiled program exits it saves this data to a file called auxname.gcda for each source file. The data may be used for profile-directed optimizations (-fbranch-probabilities), or for test coverage analysis (-ftest-coverage). Each object file’s auxname is generated from the name of the output file, if explicitly specified and it is not the final executable, otherwise it is the basename of the source file. In both cases any suffix is removed (e.g. foo.gcda for input file dir/foo.c, or dir/foo.gcda for output file specified as -o dir/foo.o). See Cross-profiling. If you are not familiar with compiler technology or graph theory: An 'Arc' (alternatively 'edge' or 'branch') is a directed link between a pair of 'Basic Blocks'. A Basic Block is a sequence of code which has no branching in it (it is executed as a single sequence). For example, if you have the following code: k = 0; if (i==10) { i += j; j++; } else { foo(); } bar(); Then this consists of the following four basic blocks: Basic Blocks The 'Arcs' are the directed edges (arrows) of the control flow. It is important to understand that not every line of the source gets instrumented, but only the arcs: This means that the instrumentation overhead (code size and data) depends on how 'complicated' the program flow is, and not on how many lines the source file has. However, there is an important aspect to know about gcov: it provides 'decision coverage', i.e. whether a full expression evaluates to TRUE or FALSE. Consider the following case: if (i==0 || j>=20) { In other words: I get coverage for how many times the 'if' has been executed, but *not* for how many times 'i==0' or 'j>=20' were evaluated individually (which would be 'condition coverage', and is not provided here). See http://www.bullseye.com/coverage.html for all the details. -ftest-coverage Compiler Option The second option for the compiler is -ftest-coverage (from https://gcc.gnu.org/onlinedocs/gcc-3.4.5/gcc/Debugging-Options.html): -ftest-coverage Produce a notes file that the gcov code-coverage utility (see gcov—a Test Coverage Program) can use to show program coverage. Each source file’s note file is called auxname.gcno. Refer to the -fprofile-arcs option above for a description of auxname and instructions on how to generate test coverage data. Coverage data will match the source files more closely if you do not optimize. So this option generates the *.gcno file for each source file I decided to instrument: gcno file generated This file is needed later to visualize the data with gcov. More about this later. Adding Compiler Options So with this knowledge, I need to add -fprofile-arcs -ftest-coverage as compiler options to every file I want to profile. It is not necessary to profile the full application: to save ROM, RAM and resources, I can add this option only to the files needed. As a starter, I recommend instrumenting a single source file only at the beginning. 
For this I select the properties (context menu) of my file Test.c and add the options in 'other compiler flags': Coverage Added to Compilation File -fprofile-arcs Linker Option Profiling not only needs a compiler option: I need to tell the linker that it needs to link with the profiler library. For this I add -fprofile-arcs to the linker options: -fprofile-arcs Linker Option Coverage Stubs Depending on your library settings, you might now get a lot of unresolved symbol linker errors. This is because by default the profiling library assumes it can write the profiling information to a file system. However, most embedded systems do *not* have a file system. To overcome this, I add stubs for all the needed functions. I have added them with a file to my project (see the latest version of that file on GitHub):

/*
 * coverage_stubs.c
 *
 * These stubs are needed to generate coverage from an embedded target.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include "UTIL1.h"
#include "coverage_stubs.h"

/* prototype */
void gcov_exit(void);

/* call the coverage initializers if not done by startup code */
void static_init(void) {
  void (**p)(void);
  extern uint32_t __init_array_start, __init_array_end; /* linker defined symbols, array of function pointers */
  uint32_t beg = (uint32_t)&__init_array_start;
  uint32_t end = (uint32_t)&__init_array_end;

  while(beg<end) { /* walk the constructor array and call each initializer */
    p = (void(**)(void))beg;
    (*p)();
    beg += sizeof(p);
  }
}

int _fstat(int file, struct stat *st) {
  (void)file;
  st->st_mode = S_IFCHR;
  return 0;
}

int _getpid(void) {
  return 1;
}

int _isatty(int file) {
  switch (file) {
    case STDOUT_FILENO:
    case STDERR_FILENO:
    case STDIN_FILENO:
      return 1;
    default:
      errno = EBADF;
      return 0;
  }
}

int _kill(int pid, int sig) {
  (void)pid;
  (void)sig;
  errno = EINVAL;
  return (-1);
}

int _lseek(int file, int ptr, int dir) {
  (void)file;
  (void)ptr;
  (void)dir;
  return 0; /* return offset in file */
}

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wreturn-type"
__attribute__((naked)) static unsigned int get_stackpointer(void) {
  __asm volatile (
    "mrs r0, msp \r\n"
    "bx lr \r\n"
  );
}
#pragma GCC diagnostic pop

void *_sbrk(int incr) {
  extern char __HeapLimit; /* Defined by the linker */
  static char *heap_end = 0;
  char *prev_heap_end;
  char *stack;

  if (heap_end==0) {
    heap_end = &__HeapLimit;
  }
  prev_heap_end = heap_end;
  stack = (char*)get_stackpointer();
  if (heap_end+incr > stack) {
    _write (STDERR_FILENO, "Heap and stack collision\n", 25);
    errno = ENOMEM;
    return (void *)-1;
  }
  heap_end += incr;
  return (void *)prev_heap_end;
}

int _read(int file, char *ptr, int len) {
  (void)file;
  (void)ptr;
  (void)len;
  return 0; /* zero means end of file */
}

:idea: In this code I'm using the UTIL1 (Utility) Processor Expert component, available on SourceForge. If you do not want/need this, you can remove the lines with UTIL1. Coverage Stubs File in Project Coverage Constructors There is one important thing to mention: the coverage data structures need to be initialized, similar to constructors for C++. Depending on your startup code, this might *not* be done automatically. Check your linker .map file for some _GLOBAL__ symbols: .text._GLOBAL__sub_I_65535_0_TEST_Test 0x0000395c 0x10 ./Sources/Test.o Such a symbol should exist for every source file which has been instrumented with coverage information. These are functions which need to be called as part of the startup code. Set a breakpoint in your code at the given address to check if it gets called. If not, you need to call it yourself. :!: Typically I use the linker option '-nostartfiles' and have my own startup code. In that case, these constructors are not called by default, so I need to do it myself. 
See http://stackoverflow.com/questions/6343348/global-constructor-call-not-in-init-array-section In my linker file I have this:

.init_array :
{
  PROVIDE_HIDDEN (__init_array_start = .);
  KEEP (*(SORT(.init_array.*)))
  KEEP (*(.init_array*))
  PROVIDE_HIDDEN (__init_array_end = .);
} > m_text

This means that there is a list of constructor function pointers put together between __init_array_start and __init_array_end. So all I need is to iterate through this array and call the function pointers:

/* call the coverage initializers if not done by startup code */
void static_init(void) {
  void (**p)(void);
  extern uint32_t __init_array_start, __init_array_end; /* linker defined symbols, array of function pointers */
  uint32_t beg = (uint32_t)&__init_array_start;
  uint32_t end = (uint32_t)&__init_array_end;

  while(beg<end) {
    p = (void(**)(void))beg;
    (*p)();
    beg += sizeof(p);
  }
}

The coverage library also needs heap memory for its data, so a working _sbrk() is required; it is the same implementation as in the stubs file above:

void *_sbrk(int incr) {
  extern char __HeapLimit; /* Defined by the linker */
  static char *heap_end = 0;
  char *prev_heap_end;
  char *stack;

  if (heap_end==0) {
    heap_end = &__HeapLimit;
  }
  prev_heap_end = heap_end;
  stack = (char*)get_stackpointer();
  if (heap_end+incr > stack) {
    _write (STDERR_FILENO, "Heap and stack collision\n", 25);
    errno = ENOMEM;
    return (void *)-1;
  }
  heap_end += incr;
  return (void *)prev_heap_end;
}

:!: It might be that several kBytes of heap are needed. So if you are running on a memory-constrained system, be sure that you have enough RAM available. The above implementation assumes that I have space between my heap end and the stack area. :!: If your memory mapping/linker file is different, of course you will need to change that _sbrk() implementation. Compiling and Building Now the application should compile and link without errors. Check that the .gcno files are generated: :idea: You might need to refresh the folder in Eclipse. .gcno files generated In the next steps I'm showing how to get the coverage data as *.gcda files to the host using gdb. Using the Debugger to Get the Coverage Data The coverage data gets dumped when _exit() gets called by the application. Alternatively, I could call gcov_exit() or __gcov_flush() at any time. What it then does is: open the *.gcda file with _open() for every instrumented source file, and write the data to the file with _write(). So I can set a breakpoint in the debugger on both _open() and _write() and have all the data I need :-) With _open() I get the file name, and I store it in a global pointer so I can reference it in _write():

static const unsigned char *fileName; /* file name used for _open() */

int _open (const char *ptr, int mode) {
  (void)mode;
  fileName = (const unsigned char*)ptr; /* store file name for _write() */
  return 0;
}

In _write() I get a pointer to the data and the length of the data. 
Here I can dump the data to a file using the gdb command: dump binary memory <file> <start_addr> <end_addr> I could use a calculator to work out the memory dump range, but it is much easier if I let the program generate the command line for gdb :-):

int _write(int file, char *ptr, int len) {
  static unsigned char gdb_cmd[128]; /* command line which can be used for gdb */
  (void)file;
  /* construct gdb command string */
  UTIL1_strcpy(gdb_cmd, sizeof(gdb_cmd), (unsigned char*)"dump binary memory ");
  UTIL1_strcat(gdb_cmd, sizeof(gdb_cmd), fileName);
  UTIL1_strcat(gdb_cmd, sizeof(gdb_cmd), (unsigned char*)" 0x");
  UTIL1_strcatNum32Hex(gdb_cmd, sizeof(gdb_cmd), (uint32_t)ptr);
  UTIL1_strcat(gdb_cmd, sizeof(gdb_cmd), (unsigned char*)" 0x");
  UTIL1_strcatNum32Hex(gdb_cmd, sizeof(gdb_cmd), (uint32_t)(ptr+len));
  return 0;
}

That way I can copy the string in the gdb debugger: Generated GDB Memory Dump Command That command gets pasted and executed in the gdb console: gdb command line After execution of the program, the *.gcda file gets created (a refresh might be necessary to make it show up): gcda file created Repeat this for all instrumented files as necessary. Showing Coverage Information To show the coverage information, I need the *.gcda and *.gcno files plus the .elf file. :idea: Use Refresh if not all files are shown in the Project Explorer view. Files Ready to Show Coverage Information Then double-click on the gcda file to show coverage results: Double Click on gcda File Press OK, and it opens the gcov view. Double-click on a file in that view to show the details: gcov Views Use the chart icon to create a chart view: Chart view Bar Graph View Video of Steps to Create and Use Coverage The following video summarizes the steps needed: Data and Code Overhead Instrumenting code to generate coverage information is an intrusive method: it impacts the application execution speed and needs extra RAM and ROM. How much depends heavily on the complexity of the control flow and on the number of arcs. Higher compiler optimizations would reduce the code size footprint; however, optimizations are not recommended for coverage sessions, as they might make the job of the coverage tools much harder. I made a quick comparison using my test application. I used the 'size' GNU command (see "Printing Code Size Information in Eclipse"). Without coverage enabled, the application footprint is: arm-none-eabi-size --format=berkeley "FRDM-K64F_Coverage.elf" text data bss dec hex filename 6360 1112 5248 12720 31b0 FRDM-K64F_Coverage.elf With coverage enabled only for Test.c it is: arm-none-eabi-size --format=berkeley "FRDM-K64F_Coverage.elf" text data bss dec hex filename 39564 2376 9640 51580 c97c FRDM-K64F_Coverage.elf Adding main.c to generate coverage gives: arm-none-eabi-size --format=berkeley "FRDM-K64F_Coverage.elf" text data bss dec hex filename 39772 2468 9700 51940 cae4 FRDM-K64F_Coverage.elf So indeed there is some initial add-up because of the coverage library, but afterwards adding more source files does not add up much. Summary It took me a while, and reading many articles and papers, to get code coverage implemented for an embedded target. Clearly, code coverage is easier if I have a file system and plenty of resources available. But I'm now able to retrieve coverage information from a rather small embedded system, using the debugger to dump the data to the host. It is not practical for large sets of files, but at least it is a starting point :-). I have committed the Eclipse Kepler/Launchpad project I used in this tutorial on GitHub. 
Ideas I have in mind: instead of using the debugger/gdb, use FatFS and an SD card to store the data; explore how to use profiling; combine multiple coverage runs. Happy Covering :-) Links: Blog article which helped me explore gcov for embedded targets: http://simply-embedded.blogspot.ch/2013/08/code-coverage-introduction.html Paper about using gcov for embedded systems: http://sysrun.haifa.il.ibm.com/hrl/greps2007/papers/gcov-on-an-embedded-system.pdf Article about coverage options for the GNU compiler and linker: http://bobah.net/d4d/tools/code-coverage-with-gcov How to call static constructor methods manually: http://stackoverflow.com/questions/6343348/global-constructor-call-not-in-init-array-section Article about using gcov with lcov: https://qiaomuf.wordpress.com/2011/05/26/use-gcov-and-lcov-to-know-your-test-coverage/ Explanation of different coverage methods and terminology: http://www.bullseye.com/coverage.html
January 28, 2015
by Erich Styger
· 14,975 Views
article thumbnail
A very quick guide to deadlock diagnosis in SQL Server
Recently I was asked about diagnosing deadlocks in SQL Server. I did a lot of work in this area way back in 2008, so I figure it's time for a refresher. If there's a lot of interest in exploring SQL Server and deadlocks further, I'm happy to write an extended article going into far more detail. Just let me know. Before we get into diagnosis and investigation, it's a good time to pose the question: "what is a deadlock?" From TechNet: A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock. The following graph presents a high level view of a deadlock state where: Task T1 has a lock on resource R1 (indicated by the arrow from R1 to T1) and has requested a lock on resource R2 (indicated by the arrow from T1 to R2). Task T2 has a lock on resource R2 (indicated by the arrow from R2 to T2) and has requested a lock on resource R1 (indicated by the arrow from T2 to R1). Because neither task can continue until a resource is available and neither resource can be released until a task continues, a deadlock state exists. The SQL Server Database Engine automatically detects deadlock cycles within SQL Server. The Database Engine chooses one of the sessions as a deadlock victim and the current transaction is terminated with an error to break the deadlock. Basically, it's a resource contention issue which blocks one process or transaction from performing actions on resources within SQL Server. This can be a serious condition, not just for SQL Server as processes become suspended, but for the applications which rely on SQL Server as well. The T-SQL Approach A fast way to respond is to execute a bit of T-SQL on SQL Server, making use of the system views. The following T-SQL will show you the "victim" processes, much like Activity Monitor does: select * from sys.sysprocesses where blocked > 0 Which is not particularly useful (but good to know, so you can see the blocked count). To get to the heart of the deadlock, this is what you want (courtesy of this SO question/answer): SELECT Blocker.text --, Blocker.*, * FROM sys.dm_exec_connections AS Conns INNER JOIN sys.dm_exec_requests AS BlockedReqs ON Conns.session_id = BlockedReqs.blocking_session_id INNER JOIN sys.dm_os_waiting_tasks AS w ON BlockedReqs.session_id = w.session_id CROSS APPLY sys.dm_exec_sql_text(Conns.most_recent_sql_handle) AS Blocker This will show you line and verse (the actual statement causing the resource block); see the attached screenshot for an example. However, the generally accepted way to determine and diagnose deadlocks is through the use of SQL Server trace flags. SQL Trace Flags They are (usually) set temporarily, and they cause deadlocking information to be dumped to the SQL management logs. The flags that are useful are flags 1204 and 1222. From TechNet: https://technet.microsoft.com/en-us/library/ms178104%28v=sql.105%29.aspx Trace flags are set on or off by using either of the following methods: · Using the DBCC TRACEON and DBCC TRACEOFF commands. For example, DBCC TRACEON 2528: To enable the trace flag globally, use DBCC TRACEON with the -1 argument: DBCC TRACEON (2528, -1). To turn off a global trace flag, use DBCC TRACEOFF with the -1 argument. · Using the -T startup option to specify that the trace flag be set on during startup. The -T startup option enables a trace flag globally. You cannot enable a session-level trace flag by using a startup option. 
So to enable or disable the deadlock trace flags globally, you'd use the following T-SQL: DBCC TRACEON (1204, -1) DBCC TRACEON (1222, -1) DBCC TRACEOFF (1204, -1) DBCC TRACEOFF (1222, -1) Due to the overhead, it's best to enable the flags at runtime rather than at startup. Note that the scope of a non-startup trace flag can be global or session-level. Basic Deadlock Simulation By way of a very simple example, you can make use of SQL Management Studio (and breakpoints) to roughly simulate a deadlock scenario. Given the following basic table schema: CREATE TABLE [dbo].[UploadedFile]( [Id] [int] NOT NULL, [Filename] [nvarchar](50) NOT NULL, [DateCreated] [datetime] NOT NULL, [DateModified] [datetime] NULL, CONSTRAINT [PK_UploadedFile] PRIMARY KEY CLUSTERED ( [Id] ASC ) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ) With some basic test data in it: If you create two separate queries in SQL Management Studio, use the following transaction (Query #1) to lock rows in the table: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE BEGIN TRANSACTION SELECT [Id],[Filename],[DateCreated],[DateModified] FROM [dbo].[UploadedFile] WHERE DateCreated > '2015-01-01' ROLLBACK TRANSACTION Now add a "victim" script (Query #2) in a separate query session: UPDATE [dbo].[UploadedFile] SET [DateModified] = '2014-12-31' WHERE DateCreated > '2015-01-01' As long as you set a breakpoint on the ROLLBACK TRANSACTION statement, you'll block the second query due to the isolation level of the transaction which wraps Query #1. Now you can use the diagnostic T-SQL to examine the victim and the blocking transaction. Enjoy!
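One last note from the application side (my own addition, not part of the walkthrough above): the session chosen as the deadlock victim sees its batch aborted with SQL Server error 1205, and a common client-side mitigation is to catch that error and retry the work a bounded number of times. The following Java/JDBC sketch is purely illustrative, with made-up class and interface names:

    // Illustrative sketch: retry a unit of work when chosen as a deadlock victim (error 1205).
    import java.sql.Connection;
    import java.sql.SQLException;

    public class DeadlockRetry {
        private static final int SQLSERVER_DEADLOCK_ERROR = 1205;
        private static final int MAX_ATTEMPTS = 3;

        public static void runWithRetry(Connection conn, SqlWork work) throws SQLException {
            for (int attempt = 1; ; attempt++) {
                try {
                    conn.setAutoCommit(false);
                    work.execute(conn);   // the transactional work to perform
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    conn.rollback();
                    // Retry only if we were picked as the deadlock victim and attempts remain.
                    if (e.getErrorCode() != SQLSERVER_DEADLOCK_ERROR || attempt >= MAX_ATTEMPTS) {
                        throw e;
                    }
                }
            }
        }

        /** Small functional interface for the work to run inside the transaction. */
        public interface SqlWork {
            void execute(Connection conn) throws SQLException;
        }
    }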
January 27, 2015
by Rob Sanders
· 164,902 Views