The Latest Coding Topics

Lambda Architecture for Big Data
An increasing number of systems are being built to handle the Volume, Velocity and Variety of Big Data, and hopefully help gain new insights and make better business decisions. Here, we will look at ways to deal with Big Data’s Volume and Velocity simultaneously, within a single architecture solution. Volume + Velocity Apache Hadoop provides both reliable storage (HDFS) and a processing system (MapReduce) for large data sets across clusters of computers. MapReduce is a batch query processor that is targeted at long-running background processes. Hadoop can handle Volume. But to handle Velocity, we need real-time processing tools that can compensate for the high-latency of batch systems, and serve the most recent data continuously, as new data arrives and older data is progressively integrated into the batch framework. Therefore we need both batch and real-time to run in parallel, and add a real-time computational system (e.g. Apache Storm) to our batch framework. This architectural combination of batch and real-time computation is referred to as a Lambda Architecture (λ). Generic Lambda λ has three layers: The Batch Layer manages the master data and precomputes the batch views The Speed Layer serves recent data only and increments the real-time views The Serving Layer is responsible for indexing and exposing the views so that they can be queried. The three layers are outlined in the below diagram along with a sample choice of technology stacks: Incoming data is dispatched to both Batch and Speed layers for processing. At the other end, queries are answered by merging both batch and real-time views. Note that real-time views are transient by nature and their data is discarded (making room for newer data) once propagated through the Batch and Serving layers. Most of the complexity is pushed onto the much smaller Speed layer where the results are only temporary, a process known as “complexity isolation“. We are indeed isolating the complexity of concurrent data updates in a layer that is regularly purged and kept small in size. λ is technology agnostic. The data pipeline is broken down into layers with clear demarcation of responsibilities, and at each layer, we can choose from a number of technologies. The Speed layer for instance could use either Apache Storm, or Apache Spark Streaming, or Spring “XD” ( eXtreme Data) etc. How do we recover from mistakes in λ ? Basically, we recompute the views. If that takes too long, we just revert to the previous, non-corrupted versions of our data. We can do that because of data immutability in the master dataset: data is never updated, only appended to (time-based ordering). The system is therefore Human Fault-Tolerant: if we write bad data, we can just remove that data altogether and recompute. Unified Lambda The downside of λ is its inherent complexity. Keeping in sync two already complex distributed systems is quite an implementation and maintenance challenge. People have started to look for simpler alternatives that would bring just about the same benefits and handle the full problem set. There are basically three approaches: 1) Adopt a pure streaming approach, and use a flexible framework such as Apache Samza to provide some type of batch processing. Although its distributed streaming layer is pluggable, Samza typically relies on Apache Kafka. Samza’s streams are replayable, ordered partitions. Samza can be configured for batching, i.e. consume several messages from the same stream partition in sequence. 
2) Take the opposite approach and choose a flexible batch framework that also allows micro-batches small enough to be close to real-time, such as Apache Spark/Spark Streaming or Storm's Trident. Spark Streaming is essentially a sequence of small batch processes that can reach latencies as low as one second. Trident is a high-level abstraction on top of Storm that can process streams as small batches as well as do batch aggregation. 3) Use a technology stack that already combines batch and real-time, such as Spring "XD", Summingbird or Lambdoop. Summingbird ("Streaming MapReduce") is a hybrid system where batch and real-time workflows can be run at the same time and the results merged automatically; its Speed layer runs on Storm and its Batch layer on Hadoop. Lambdoop (Lambda-Hadoop, with HBase, Storm and Redis) also combines batch and real-time processing by offering a single API for both paradigms. The integrated approach (unified λ) seeks to handle Big Data's Volume and Velocity by featuring a hybrid computation model, where both batch and real-time data processing are combined transparently. And with a unified framework, there would be only one system to learn, and one system to maintain.
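To make the serving-layer merge concrete, here is a minimal, hypothetical Java sketch. The page-view counting example and the class and field names are illustrative assumptions, not part of any particular framework:

// Hypothetical sketch: answering a query by merging batch and real-time views.
// The two maps stand in for the views produced by the batch and speed layers.
import java.util.HashMap;
import java.util.Map;

public class PageViewServingLayer {
    private final Map<String, Long> batchView = new HashMap<>();    // precomputed by the batch layer
    private final Map<String, Long> realtimeView = new HashMap<>(); // incremented by the speed layer

    // A query is answered by merging the complete-but-stale batch view
    // with the small, transient real-time view.
    public long pageViews(String url) {
        return batchView.getOrDefault(url, 0L) + realtimeView.getOrDefault(url, 0L);
    }

    // Once a batch run has absorbed the recent data, the corresponding
    // real-time entries are discarded ("complexity isolation").
    public void onBatchViewRefreshed(Map<String, Long> newBatchView) {
        batchView.clear();
        batchView.putAll(newBatchView);
        realtimeView.clear();
    }
}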
January 17, 2015
by Tony Siciliani
· 40,090 Views · 6 Likes
Angular JS: Use an Angular Websocket Client with a Java Websocket Endpoint
In this tip you can see how to use the Angular Websocket module for connecting client applications to servers.
January 16, 2015
by Anghel Leonard
· 40,179 Views · 7 Likes
ORM and Angular -- Make Your App Smarter
Posted by Gilad F on Back& Blog. Current approaches to web development rely upon having two kinds of intelligence built into your application – business intelligence in the server, and presentation intelligence on the client side. This institutes a clear delineation in responsibilities, which is often desirable from an architectural standpoint. However, this approach does have some drawbacks. Processing time for business logic, for example, is centralized on the server. This can introduce bottlenecks in the application’s performance, or add complexity when it comes to cross-server communication. For smaller applications that nonetheless have a large user base, this can often be the single greatest performance concern – the time spent computing solutions by the server. One way this can be offset is through the use of Object-Relational Mapping, or ORM. Below we’ll look at the concept of ORM, and how creating an ORM system in Angular can help make your application smarter. What is an ORM? Simply put, Object-Relational Mapping is the concept of creating representations of your underlying data that know how to manage themselves. Most web applications boil down to four basic actions, known as the “CRUD” approach – Create a record, Retrieve records, Update a record, or Delete a record. With an ORM, you simply encapsulate each of these functions within a class that represents a given record in the database. In essence, the objects you create to represent your data on the front end also know how to manipulate that data on the back end. Why Use an ORM? The primary benefit of an ORM is that it hides a lot of the functional complexity of database integration behind an established API. Communication with the database to implement each of the CRUD methods can be complex, but once it’s been accomplished for one model it can be easily ported to all of the other models in your system. An ORM focuses on hiding as much of this code as possible, allowing your models to care only about how they are represented – and how they interact with other elements in the system. A series of calls to establish a connection to the database, for example, becomes a single call to a method named “Save” on the model instance. This also allows you to centralize your database code, giving you only one location where you need to look for database-related bugs instead of having to search a complex code base for different custom data communication handlers. Why Use an ORM in Angular? While the JavaScript stack is particularly performant when compared to more heavyweight offerings such as Rails and Django, it still faces the issues common to the standard web application architecture – the server has the potential to be a bottleneck, handling the incoming traffic from a number of locations. By focusing your development efforts to create a pure CRUD API in your server, and developing a rudimentary ORM in Angular, you can offload a lot of that processing load to the client machines – in essence parallelizing the process at the expense of increased network communication. This allows you to reduce the overall dependence of your application on the server, making the server a “thin” client that simply updates the database based upon the API calls issued by the client. After a certain point, your back-end can be outsourced completely to an external provider that specializes in providing this type of access – such as Backand – allowing you to completely offload scalability and security concerns. 
In essence, it allows you to focus on your application as opposed to focusing on the attendant resources. Conclusion Object-Relational Mapping is a powerful paradigm that eases communication with a database for the basic CRUD activities associated with web applications. As most existing web development environments focus on implementing ORM on the server side, this can result in performance and communication bottlenecks – not to mention increased infrastructure costs. By offloading some of these ORM tasks to AngularJS, you can parallelize many of these tasks and reduce overall server load, in some cases obviating the need for the server entirely. If your application is facing a bloated back-end communication pattern, it might be worth your time to look at implementing a client-side ORM system. Build your Angular app and connect it to any database with Backand today – get started now.
January 16, 2015
by Itay Herskovits
· 8,487 Views
How to Send SMS Messages in Java Using HTTP Requests
In this article I am going to present a solution about sending SMS messages in Java for those developers and marketers who think – like me – that SMS is not dead.
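The article's full solution is not reproduced in this excerpt. As a rough illustration of the idea, a hedged sketch of calling an HTTP SMS gateway from Java might look like the following; the gateway URL and the to/message parameter names are made-up placeholders, since every real provider documents its own endpoint, parameters and authentication:

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SmsSender {
    // Placeholder endpoint: substitute your SMS provider's documented URL.
    private static final String GATEWAY_URL = "https://sms-gateway.example.com/api/send";

    public static void sendSms(String to, String message) throws IOException {
        // Form-encode the (hypothetical) request parameters.
        String body = "to=" + URLEncoder.encode(to, "UTF-8")
                + "&message=" + URLEncoder.encode(message, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(GATEWAY_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode(); // many gateways return 200/201 on success
        if (status >= 400) {
            throw new IOException("SMS gateway returned HTTP " + status);
        }
    }
}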
January 15, 2015
by Timothy Walker
· 286,592 Views · 5 Likes
Fail-fast Validations Using Java 8 Streams
I've lost count of the number of times I've seen code which fail-fast validates the state of something, using an approach like public class PersonValidator { public boolean validate(Person person) { boolean valid = person != null; if (valid) valid = person.givenName != null; if (valid) valid = person.familyName != null; if (valid) valid = person.age != null; if (valid) valid = person.gender != null; // ...and many more return valid; } } It works, but it's a brute force approach that's filled with repetition due to the valid check. If your code style enforces braces for if statements (+1 for that), your method is also three times longer and growing every time a new check is added to the validator. Using Java 8's new stream API, we can improve this by taking the guard condition of if (valid) and making a generic validator that handles the plumbing for you. import java.util.LinkedList; import java.util.List; import java.util.function.Predicate; public class GenericValidator<T> implements Predicate<T> { private final List<Predicate<T>> validators = new LinkedList<>(); public GenericValidator(List<Predicate<T>> validators) { this.validators.addAll(validators); } @Override public boolean test(final T toValidate) { return validators.parallelStream() .allMatch(predicate -> predicate.test(toValidate)); } } Using this, we can rewrite the Person validator to be a specification of the required validations. public class PersonValidator extends GenericValidator<Person> { private static final List<Predicate<Person>> VALIDATORS = new LinkedList<>(); static { VALIDATORS.add(person -> person.givenName != null); VALIDATORS.add(person -> person.familyName != null); VALIDATORS.add(person -> person.age != null); VALIDATORS.add(person -> person.gender != null); // ...and many more } public PersonValidator() { super(VALIDATORS); } } PersonValidator, and all your other validators, can now focus completely on validation. The behaviour hasn't changed – the validation still fails fast. There's no boilerplate, which is A Good Thing. This one's going in the toolbox.
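A brief usage sketch, assuming a Person class with the public fields referenced above (the field values here are purely illustrative):

// Build a partially populated Person and run it through the validator.
Person person = new Person();
person.givenName = "Ada";   // familyName, age and gender are left null

PersonValidator validator = new PersonValidator();
boolean valid = validator.test(person); // false: allMatch stops as soon as a predicate fails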
January 15, 2015
by Steve Chaloner
· 20,244 Views · 2 Likes
Upload and Download File From Mongo Using Bottle and Flask
If you have a requirement to save and serve files, then there are at least a couple of options: (1) save the file onto the server and serve it from there, or (2) use Mongo's[1] GridFS[2] store, which allows you not only to store files but also metadata related to the file. For example, you can store the author, tags, group, etc. right with the file. You can provide this functionality via option 1 too, but you would need to make your own tables and link the files to the metadata information. Besides, data replication is built into Mongo. Bottle You can upload and download Mongo files using Bottle[3] like so: import json from bottle import run, Bottle, request, response from gridfs import GridFS from pymongo import MongoClient FILE_API = Bottle() MONGO_CLIENT = MongoClient('mongodb://localhost:27017/') DB = MONGO_CLIENT['TestDB'] GRID_FS = GridFS(DB) @FILE_API.put('/upload/<file_name>') def upload(file_name): response.content_type = 'application/json' with GRID_FS.new_file(filename=file_name) as fp: fp.write(request.body) file_id = fp._id if GRID_FS.find_one(file_id) is not None: return json.dumps({'status': 'File saved successfully'}) else: response.status = 500 return json.dumps({'status': 'Error occurred while saving file.'}) @FILE_API.get('/download/<file_name>') def index(file_name): grid_fs_file = GRID_FS.find_one({'filename': file_name}) response.headers['Content-Type'] = 'application/octet-stream' response.headers["Content-Disposition"] = "attachment; filename={}".format(file_name) return grid_fs_file run(app=FILE_API, host='localhost', port=8080) And here's the breakdown of the code: Upload method: Line 12: Sets up the upload method to receive a PUT request for the /upload/<file_name> URL. Line 15-17: Create a new GridFS file with file_name and get the content from request.body. request.body may be a StringIO type or a File type because Python is smart enough to decipher the body type based on the content. Line 18-19: If we can find the file by file name, then it was saved successfully, and therefore we return a success response. Line 20-22: Return an error if the file was not saved successfully. Download method: Line 27: Find the GridFS file. Line 28-29: Set the response Content-Type to application/octet-stream and Content-Disposition to attachment; filename=<file_name>. Line 31: Return the GridOut object. Based on the Bottle documentation quoted below, we can return an object which has a .read() method available and Bottle understands that to be a File object. File objects Everything that has a .read() method is treated as a file or file-like object and passed to the wsgi.file_wrapper callable defined by the WSGI server framework. Some WSGI server implementations can make use of optimized system calls (sendfile) to transmit files more efficiently. In other cases this just iterates over chunks that fit into memory. And we are done (as far as Bottle is concerned).
Flask You can upload/download files using Flask[4] like so: import json from gridfs import GridFS from pymongo import MongoClient from flask import Flask, make_response from flask import request __author__ = 'ravihasija' app = Flask(__name__) mongo_client = MongoClient('mongodb://localhost:27017/') db = mongo_client['TestDB'] grid_fs = GridFS(db) @app.route('/upload/<file_name>', methods=['PUT']) def upload(file_name): with grid_fs.new_file(filename=file_name) as fp: fp.write(request.data) file_id = fp._id if grid_fs.find_one(file_id) is not None: return json.dumps({'status': 'File saved successfully'}), 200 else: return json.dumps({'status': 'Error occurred while saving file.'}), 500 @app.route('/download/<file_name>') def index(file_name): grid_fs_file = grid_fs.find_one({'filename': file_name}) response = make_response(grid_fs_file.read()) response.headers['Content-Type'] = 'application/octet-stream' response.headers["Content-Disposition"] = "attachment; filename={}".format(file_name) return response app.run(host="localhost", port=8081) The Flask upload and download code is very similar to Bottle. It differs only in a few places, detailed below: Line 14: Routing is configured differently in Flask. You mention the URL and the methods that apply for that URL. Line 17: Instead of request.body you use request.data. Line 28-31: Make the response with the file content, set up the appropriate headers, and finally return the response object. Questions? Thoughts? Please feel free to leave me a comment below. Thank you for your time. GitHub repo: https://github.com/RaviH/file-upload-download-mongo References: 1. MongoDB: http://www.mongodb.org/ 2. GridFS: http://docs.mongodb.org/manual/core/gridfs/ 3. Bottle: http://bottlepy.org/docs/dev/tutorial.html 4. Flask: http://flask.pocoo.org/ 5. PyMongo GridFS doc: http://api.mongodb.org/python/current/api/gridfs/index.html?highlight=gridfs#module-gridfs 6. Get to know GridFS: https://dzone.com/articles/get-know-gridfs-mongodb
January 14, 2015
by Ravi Isnab
· 13,761 Views · 1 Like
AngularJS Two-Way Data Binding
Traditional web development builds a bridge between the front end, where the user performs their manipulations of the application's data, and the back end, where that data is stored. In traditional web development, this process is driven by successive networking calls, communicating changes between the server and the client by re-rendering the involved pages. AngularJS enhances this with two-way data binding. Below we'll look at what two-way data binding is, and how it differs from the traditional data processing approach. The Traditional Approach Most web frameworks focus on one-way data binding. This involves reading the input from the DOM, serializing the data, sending it to the server, waiting for the process to finish, then modifying the DOM to indicate any errors, or reloading the DOM if the call is successful. While this gives a traditional web application all the time it needs to perform data processing, this benefit is only really applicable to web apps with highly complex data structures. If your application has a simpler data format, with relatively flat models, then the extra work can needlessly complicate the process. Furthermore, all models need to wait for server confirmation before their data can be updated, meaning that related data depending upon those models won't have the latest information. Tying Together the UI and the Model AngularJS addresses this with two-way data binding. With two-way data binding, user interface changes are immediately reflected in the underlying data model, and vice versa. This allows the data model to serve as an atomic unit that the view of the application can always depend upon to be accurate. Many web frameworks implement this type of data binding with a complex series of event listeners and event handlers – an approach that can quickly become fragile. AngularJS, on the other hand, makes this approach to data a primary part of its architecture. Instead of creating a series of callbacks to handle the changing data, AngularJS does this automatically, without any intervention needed from the programmer. Benefits and Considerations The primary benefit of two-way data binding is that updates to (and retrievals from) the underlying data store happen more or less automatically. When the data store updates, the UI updates as well. This allows you to remove a lot of logic from the front-end display code, particularly when making effective use of AngularJS's declarative approach to UI presentation. In essence, it allows for true data encapsulation on the front end, reducing the need to do complex and destructive manipulation of the DOM. While this solves a lot of problems with a website's presentation architecture, there are some disadvantages to take into consideration. First, AngularJS uses a dirty-checking approach that can be slow in some browsers – not a problem for thin presentation pages, but any page with heavy processing may run into problems in older browsers. Additionally, two-way binding is only truly beneficial for relatively simple objects. Any data that requires heavy parsing work, or extensive manipulation and processing, will simply not work well with two-way binding. Additionally, some uses of Angular – such as using the same binding directive more than once – can break the data binding process.
Conclusion While the traditional approach to data binding has a lot of benefits when it comes to performing complex data manipulations and calculations, it can introduce some problems with respect to the design of the web application’s front-end architecture. With AngularJS’s use of two-way data binding, your application can greatly simplify its presentation layer, allowing the UI to be built off of a cleaner, less-destructive approach to DOM presentation. While it isn’t useful in every situation, the two-way data binding AngularJS provides can greatly ease web application development, and reduce the pain faced by your front-end developers. Build your Angular app and connect it to any database with Backand today. – Get started now.
January 13, 2015
by Itay Herskovits
· 6,452 Views
Using Netflix Hystrix Annotations with Spring
My objective here is to recreate a similar set-up in a smaller unit test mode.
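The article's own test set-up is not shown in this excerpt. As a rough idea of what Hystrix's annotation support looks like, a minimal, hedged sketch of a Spring bean using hystrix-javanica might be the following; RemoteCallService and its method names are invented for illustration, and the surrounding Spring and HystrixCommandAspect configuration is assumed to be in place:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class RemoteCallService {

    // If the wrapped call fails or times out, Hystrix invokes the fallback instead.
    @HystrixCommand(fallbackMethod = "fallbackGreeting")
    public String greeting(String name) {
        // Imagine a remote HTTP call here that may be slow or unavailable.
        return "Hello, " + name;
    }

    // The fallback must match the original method's signature.
    public String fallbackGreeting(String name) {
        return "Hello, fallback";
    }
}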
January 12, 2015
by Biju Kunjummen
· 36,729 Views · 1 Like
Java 8 Stream and Lambda Expressions – Parsing File Example
Recently I wanted to extract certain data from an output log. Here's part of the log file: 2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: eVentToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557] 2015-01-06 11:33:03 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar 2015-01-06 11:33:04 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557] 2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo 2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz I decided to do it using the Java 8 Stream and lambda expression features. Read the file First, I needed to read the log file and put the lines in a Stream: Stream<String> lines = Files.lines(Paths.get(args[1])); Filter relevant lines I needed to get the package names and write them into another file. Not all lines contained the data I needed, hence filter only the relevant ones. lines.filter(line -> line.contains("===---> Loaded package")) Parsing the relevant lines Then, I needed to parse the relevant lines. I did it by first splitting each line into an array of Strings and then taking the last element in that array. In other words, I did a double mapping: first a line to an array and then an array to a String. .map(line -> line.split(" ")) .map(arr -> arr[arr.length - 1]) Writing to the output file The last part was taking each string and writing it to a file. That was the terminal operation. .forEach(packageName -> writeToFile(fw, packageName)); writeToFile is a method I created. The reason is that the Java file system API throws IOException, and you can't throw checked exceptions from lambda expressions. Here's a full example (note, I don't check input) import java.io.FileWriter; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; import java.util.Arrays; import java.util.List; import java.util.stream.Stream; public class App { public static void main(String[] args) throws IOException { Stream<String> lines = null; if (args.length == 2) { lines = Files.lines(Paths.get(args[1])); } else { String s1 = "2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: adEventToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]"; String s2 = "2015-01-06 11:33:03 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]"; String s3 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar"; String s4 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo"; String s5 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz"; List<String> rows = Arrays.asList(s1, s2, s3, s4, s5); lines = rows.stream(); } new App().parse(lines, args[0]); } private void parse(Stream<String> lines, String output) throws IOException { final FileWriter fw = new FileWriter(output); //@formatter:off lines.filter(line -> line.contains("===---> Loaded package")) .map(line -> line.split(" ")) .map(arr -> arr[arr.length - 1]) .forEach(packageName -> writeToFile(fw, packageName)); //@formatter:on fw.close(); lines.close(); } private void writeToFile(FileWriter fw, String packageName) { try { fw.write(String.format("%s%n", packageName)); } catch (IOException e) { throw new RuntimeException(e); } } } If you enjoyed this article and want to learn more about Java Streams, check out this collection of tutorials and articles on all things Java Streams.
January 12, 2015
by Eyal Golan
· 47,782 Views · 1 Like
Byte Buddy, an alternative to cglib and Javassist
Byte Buddy is a code generation library for creating Java classes during the runtime of a Java application and without the help of a compiler. Other than the code generation utilities that ship with the Java Class Library, Byte Buddy allows the creation of arbitrary classes and is not limited to implementing interfaces for the creation of runtime proxies. In order to use Byte Buddy, one does not require an understanding of Java byte code or the class file format. In contrast, Byte Buddy's API aims for code that is concise and easy to understand for everybody. Nevertheless, Byte Buddy remains fully customizable down to the possibility of defining custom byte code. Furthermore, the API was designed to be as non-intrusive as possible and as a result, Byte Buddy does not leave any trace in the classes that were created by it. For this reason, the generated classes can exist without requiring Byte Buddy on the class path. Because of this feature, Byte Buddy's mascot was chosen to be a ghost. Byte Buddy is written in Java 6 but supports the generation of classes for any Java version. Byte Buddy is a light-weight library and only depends on the visitor API of the Java byte code parser library ASM, which does itself not require any further dependencies. At first sight, runtime code generation can appear to be some sort of black magic that should be avoided, and only a few developers write applications that explicitly generate code during their runtime. However, this picture changes when creating libraries that need to interact with arbitrary code and types that are unknown at compile time. In this context, a library implementer must often choose between either requiring a user to implement library-proprietary interfaces or generating code at runtime when the user's types become first known to the library. Many known libraries such as Spring or Hibernate choose the latter approach, which is popular among their users under the term of using Plain Old Java Objects. As a result, code generation has become a ubiquitous concept in the Java space. Byte Buddy is an attempt to innovate the runtime creation of Java types in order to provide a better tool set to those relying on code generation. Hello World Saying Hello World with Byte Buddy is as easy as it can get. Any creation of a Java class starts with an instance of the ByteBuddy class which represents a configuration for creating new types: Class<?> dynamicType = new ByteBuddy() .subclass(Object.class) .method(named("toString")).intercept(FixedValue.value("Hello World!")) .make() .load(getClass().getClassLoader(), ClassLoadingStrategy.Default.WRAPPER) .getLoaded(); assertThat(dynamicType.newInstance().toString(), is("Hello World!")); The default ByteBuddy configuration which is used in the above example creates a Java class in the newest version of the class file format that is understood by the processing Java virtual machine. As hopefully obvious from the example code, the created type will extend the Object class and intercept its toString method, which should return a fixed value of Hello World!. The method to be intercepted is identified by a so-called method matcher. In the above example, a predefined method matcher named(String) is used which identifies a method by its exact name. Byte Buddy comes with numerous predefined and well-tested method matchers which are collected in the MethodMatchers class. The creation of custom matchers is however as simple as implementing the (functional) MethodMatcher interface.
For implementing the toString method, the FixedValue class defines a constant return value for the intercepted method. Defining a constant value is only one example of many method interceptors that ship with Byte Buddy. By implementing the Instrumentation interface, a method could however even be defined by custom byte code. Finally, the described Java class is created and then loaded into the Java virtual machine. For this purpose, a target class loader is required as well as a class loading strategy, where we choose a wrapper strategy. The latter creates a new child class loader which wraps the given class loader and only knows about the newly created dynamic type. Eventually, we can convince ourselves of the result by calling the toString method on an instance of the created class and finding the return value to represent the constant value we expected. Where to go from here? Byte Buddy is a comprehensive library and we only scratched the surface of Byte Buddy's capabilities. However, Byte Buddy aims for being easy to use by providing a domain-specific language for creating classes. Most runtime code generation can be done by writing readable code and without any knowledge of Java's class file format. If you want to learn more about Byte Buddy, you can find a tutorial on Byte Buddy's web page. Furthermore, Byte Buddy comes with detailed in-code documentation and extensive test case coverage which can also serve as example code. You can also find the source code of Byte Buddy on GitHub.
January 11, 2015
by Rafael Winterhalter
· 10,107 Views · 6 Likes
Run your ANTLR DSL as a Groovy Script
Introduction So you're thinking of designing your own DSL or using an existing grammar from ANTLR, but how do you execute it? That is what this tutorial is about. Groovy is great for building DSLs, but the expressiveness is limited to language features already included in the Groovy syntax. ANTLR offers a much larger set of language features that you can use. Just look at the languages already available at: GitHub - ANTLR 4 Grammars. You could pick one grammar file, or two, modify them to your needs and rapidly create your DSL. One thing is missing though from all books and articles on DSLs with Groovy and ANTLR, at least from the ones I've read, and that is: how to run a script written in your newly created DSL? With ANTLR, you can parse the script, but you'll need to write a Java or Groovy program with a main class to read in the file. This may be sufficient for certain use cases, for example when translating between two syntaxes, or when reading in a configuration file, or when you have a command language for your server, but in certain cases you just want the customers of your DSL to be able to use it as a 'real' language. In this article I will demonstrate how the powerful Groovy AST transformations can be used to execute a script written in an ANTLR grammar with a custom extension as a Groovy script. If you're new to Groovy DSLs and AST transformations, I recommend reading the following articles as well: Joe's Groovy Blog on AST Transformations Global AST Transformations Groovy Global AST Transformations Groovy makes a distinction between a global and a local AST Transformation. A global AST Transformation is fired once and has the entire script, or SourceUnit, as scope. A local AST Transformation is fired when found and has the statement where it is found as scope. Global transformations are always executed when present on the classpath, every time the Groovy compiler is invoked. Local transformations are only invoked when they are present in a script. Local transformations are useful when a single statement needs to be transformed. Global transformations are useful when the entire script needs to be transformed. In this example we will use global transformations. Global transformations have an extra cool feature: you can specify your own file extension. This feature will be used to specify a syntax with ANTLR, and then run it with Groovy. Our DSL: the Cymbol language Cymbol is a little language written by Terence Parr and published in his book: The Definitive ANTLR 4 Reference. Cymbol only contains the basic C-style syntax needed to write declarations and functions. It is a good starting point if you want to implement your own DSL with some extra features. The grammar file is available at GitHub: Cymbol.g4. This file is included in the grammar directory of the sample workspace. You can install the ANTLRWorks workbench, if you want to try the examples out for yourself. The workbench can be found at: Tunnel Vision Labs. The ANTLR Workbench was used to generate the necessary classes to parse the DSL. This is however not necessary to run this example and all needed classes are already included in the attached workspace zip file. A sample Cymbol program could look as follows (taken from the ANTLR 4 Reference, p. 99): // Cymbol test int g = 9; // a global variable int fact(int x) { // a factorial function if x==0 then return 1; return x * fact(x-1); } Set up the workspace Eclipse is used as IDE. To start, you can download the Cymbol_workspace.zip file and import it in Eclipse.
After that, you'll need to download a few libraries: antlr_4.4_complete.jar, hamcrest_1.3.jar and junit_4.11.jar. Due to upload size limitations I couldn't include the libraries in the zip file. Links to the download sites are provided in the README.txt file in the libs directory of the Eclipse project. Add the libraries to the build path and the project set up is complete. Now you can execute DslScriptTransformTest.java as a JUnit test and you should see some output in the console. What is going on behind the scenes here? Generate the Parser Open the Cymbol.g4 grammar file in the ANTLR Workbench. Go to: Run > Generate Recognizer... Select the output directory, tick the boxes in front of Generate Listener and Generate Visitor. Don't forget to set the package to com.cymbol.parser. This will generate all classes that you'll need to parse a script written with your DSL's syntax. Create the ASTTransformation The DslScriptTransform.java file contains the ASTTransformation. The transformation class needs to implement the ASTTransformation interface from Groovy. An AST Transformation is annotated with: @GroovyASTTransformation(phase = CompilePhase.SEMANTIC_ANALYSIS) This annotation tells the Groovy compiler that this class is an AST Transformation that should be applied in the specified phase. Global transformations can occur in any compiler phase, so here are a few words on how to select the right phase. The Groovy compiler has nine phases. Of these, only the second, third and fourth are relevant for us: Initialization The resources and environment are initialized. Parsing The script text is parsed and a Concrete Syntax Tree (CST) is constructed according to the Groovy grammar. Conversion The Abstract Syntax Tree (AST) is constructed from the CST. Semantic Analysis In this phase, the AST is checked, and classes, static imports and variables are resolved. After this phase, the AST is complete and the input is seen as a valid source. Our DSL script has been transformed into a Groovy script with one method: getScriptText(). In the semantic analysis phase, all necessary Groovy structures have been initialised and we can safely modify the AST. The only thing that needs to be done now is to implement the run() method to call the ANTLR Parser and parse the script text. First, we check whether the file has the required extension: @Override public void visit(ASTNode[] nodes, SourceUnit source) { if(!source.getName().endsWith(".cymbol")) { return; } ... Second, the main class is retrieved from the AST: private ClassNode getMainClass(ModuleNode moduleNode) { for(ClassNode classNode : moduleNode.getClasses()) { if(classNode.getNameWithoutPackage().equals(moduleNode.getMainClassName())) { return classNode; } } return null; } And the run method's body is implemented. The run method's body will look something like the following: @Override public Object run() { DslScriptTransformHelper helper = new DslScriptTransformHelper(); String scriptText = this.getScriptText(); helper.parse(scriptText); } The @Override annotation is not actually there but is included here for clarity. The annotation could be added as well but it is left out to keep the example simple. The implementation is kept as minimal as possible because writing AST Transformations is rather cumbersome. As much as possible is factored out into a delegate called DslScriptTransformHelper. The helper class contains the actual code to parse the DSL script and to do something with it.
The benefit is that complex logic can be implemented that can be checked by the compiler and by your IDE. Third, the AST Transformation to implement the run method's body is as follows: private void addRunMethod(ClassNode scriptClass) { List<Statement> statements = new ArrayList<>(); /* * initialise the parser helper: * * DslScriptTransformHelper helper = new DslScriptTransformHelper() */ ClassNode helperClassNode = new ClassNode(DslScriptTransformHelper.class); ConstructorCallExpression initParserHelper = new ConstructorCallExpression(helperClassNode, new ArgumentListExpression()); VariableExpression helperVar = new VariableExpression(HELPER_VAR, helperClassNode); DeclarationExpression assignHelperExpr = new DeclarationExpression(helperVar, new Token(Types.EQUALS, "=", -1, -1), initParserHelper); ExpressionStatement initHelperExpr = new ExpressionStatement(assignHelperExpr); statements.add(initHelperExpr); /* * get the script text and assign it to a variable: * * String scriptText = this.getScriptText() */ MethodCallExpression getScriptTextExpr = new MethodCallExpression(new VariableExpression("this"), new ConstantExpression(GET_SCRIPT_TEXT_METHOD), MethodCallExpression.NO_ARGUMENTS); VariableExpression scriptTextVar = new VariableExpression(SCRIPT_TEXT_VAR, new ClassNode(String.class)); DeclarationExpression declareScriptTextExpr = new DeclarationExpression(scriptTextVar, new Token(Types.EQUALS, "=", -1, -1), getScriptTextExpr); ExpressionStatement getScriptTextStmt = new ExpressionStatement(declareScriptTextExpr); statements.add(getScriptTextStmt); /* * call the parse method on the helper: * * helper.parse(scriptText) */ MethodCallExpression callParserExpr = new MethodCallExpression(helperVar, PARSE_METHOD, new ArgumentListExpression(new VariableExpression(SCRIPT_TEXT_VAR))); ExpressionStatement callParserStmt = new ExpressionStatement(callParserExpr); statements.add(callParserStmt); // implement the run method's body BlockStatement methodBody = new BlockStatement(statements, new VariableScope()); MethodNode runMethod = scriptClass.getMethod(RUN_METHOD, new Parameter[0]); runMethod.setCode(methodBody); } All of the above is needed to implement three lines of code! Fifth, the DslScriptTransformHelper class contains a parse method that calls the ANTLR parser and then does something with the ParseTree returned by ANTLR. You will probably want a semantic model, and I included a visitor that can iterate over the model to do something with it. This bit is left unimplemented because it requires a lot of design and coding to get right. Here is a simple example: public class DslScriptTransformHelper { public SemanticModel parse(String scriptText) throws IOException { ANTLRInputStream is = new ANTLRInputStream(scriptText); CymbolLexer lexer = new CymbolLexer(is); CommonTokenStream tokens = new CommonTokenStream(lexer); CymbolParser parser = new CymbolParser(tokens); ParseTree tree = parser.file(); SemanticModel model = initSemanticModel(tree); SemanticModelVisitor visitor = initVisitor(); visitor.visit(model); return model; } Here, the generated Lexer and Parser are called. If you want to create your own DSL with ANTLR, you will have to modify the DslScriptTransformHelper. The AST Transformation, DslScriptTransform, need not be modified. As much logic as possible is factored out of the AST Transformation into a helper class. As you will see in the next section, creating an AST Transformation is difficult, time consuming, error prone, and generally not something that you will enjoy doing;
by factoring out behaviour into a helper you can write logic and still have IDE support, and Unit testability for your logic. All you need to do is to initialize the helper and call a method on the helper with the DSL script's text as argument. I included two interfaces that are left unimplemented: SemanticModel and SemanticModelVisitor. private SemanticModel initSemanticModel(ParseTree tree) { SemanticModel model = new SemanticModel() { @Override public void init(ParseTree tree) { System.out.println("init: " + tree.toStringTree()); } }; model.init(tree); return model; } private SemanticModelVisitor initVisitor() { SemanticModelVisitor visitor = new SemanticModelVisitor() { @Override public void visit(SemanticModel model) { System.out.println("visit"); } }; return visitor; } } The purpose of these is that the semantic model represents, well, your semantic model. This is so to say your domain model. In the design of your DSL, deciding on a semantic model is probably the most important activity. The semantic model models what the DSL means. The grammar specifies how to write a valid script, but the model specifies what it means. Think of it as follows: not all valid English sentences are meaningful. Martin Fowler discusses this in his book: Domain Specific Languages. The visitor is included so that after you've constructed a valid semantic model, you can do something with it, if you want. As you can see from the above example, performing an AST Transformation is rather cumbersome, at least in Java. Surely Groovy offers something better? Groovy does, and there are several excellent examples on this available at: GitHub. To me, the major downside to using the Groovy Builder DSL to specify AST Transformations is that the types can't be checked. It is rather difficult to get everything right, especially without type checking. The Cymbol DSL implementation uses the Groovy compiler, but doesn't actually contain any Groovy code. I think that is a benefit because fiddling with compilers and AST's is difficult enough, even with strict typing and IDE support. The examples can be implemented in Groovy as well if you wish so. Create the Groovy configuration for the AST Transformations Groovy needs the following configuration files to know that an AST Transformation should be called, and that it should accept files with a custom extension. Two files must be added: META-INF/services/org.codehaus.groovy.source.Extensions The extension is just declared in the org.codehaus.groovy.source.Extensions file: .cymbol META-INF/services/org.codehaus.groovy.transform.ASTTransformation This file only contains the name with package of the class that contains the AST Transformation: com.dsl.transformation.DslScriptTransform Create a Source Pre-Processor Groovy supports custom compiler configurations. They are documented at: Advanced Compiler Configuration. We will use a custom compiler configuration to create a source pre-processor that reads the source of a script, a Cymbol script in our case, and that modifies the source. This happens before the Groovy compiler tries to parse the source and thus can be used as a source pre-processor. The text of the source is wrapped in a String and a method that returns that string. 
The Cymbol example is transformed into the following Groovy script: String getScriptText() { return '''// Cymbol test int g = 9; // a global variable int fact(int x) { // a factorial function if x==0 then return 1; return x * fact(x-1); }''' } The method getScriptText() now returns the DSL script as a string. The Groovy source pre-processor is implemented as follows. First, a custom CustomParserPlugin is created. The CustomParserPlugin extends the Groovy AntlrParserPlugin and overrides the parseCST() method. This method is similar to the following example from Guillaume Laforge: Custom antlr parser plugin. Here the example is taken a step further, however, by calling ANTLR's StringTemplate to operate on the entire source, instead of modifying the Groovy source by remapping the bindings, as is done in Guillaume's example. public class CustomParserPlugin extends AntlrParserPlugin { @Override public Reduction parseCST(final SourceUnit sourceUnit, Reader reader) throws CompilationFailedException { StringBuffer bfr = new StringBuffer(); int intValueOfChar; try { while ((intValueOfChar = reader.read()) != -1) { bfr.append((char) intValueOfChar); } } catch (IOException e) { e.printStackTrace(); } String text = modifyTextSource(bfr.toString()); StringReader stringReader = new StringReader(text); SourceUnit modifiedSource = SourceUnit.create(sourceUnit.getName(), text); return super.parseCST(modifiedSource, stringReader); } String modifyTextSource(String text) { ST textAsStringConstant = new ST("String getScriptText() { return '''<text>''' }"); textAsStringConstant.add("text", text); return textAsStringConstant.render(); } } A CompilerConfiguration is equipped with a ParserPluginFactory, and we also create a custom implementation that returns our CustomParserPlugin: public class SourcePreProcessor extends ParserPluginFactory { @Override public ParserPlugin createParserPlugin() { return new CustomParserPlugin(); } } A CompilerConfiguration needs to be created with our implementation of the ParserPluginFactory. Several ways of doing this are available to us: Use Groovy embedded (GroovyShell, GroovyScriptEngine, ...) and add a CompilerConfiguration Use the groovyc compiler with a configscript Both techniques will be presented. Embedded Groovy is used in the unit test, and the Groovy compiler with the configuration script is invoked from the command line. Create a unit test with embedded Groovy The unit test simply evaluates the DSL script with a GroovyShell that has a custom CompilerConfiguration with our SourcePreProcessor: public class DslScriptTransformTest { String scriptPath = "tests"; @Test public void testPreProcessor() throws CompilationFailedException, IOException { String testScript = scriptPath + "/test1.cymbol"; File scriptFile = new File(testScript); ParserPluginFactory ppf = new SourcePreProcessor(); CompilerConfiguration cc = new CompilerConfiguration(); cc.setPluginFactory(ppf); Binding binding = new Binding(); GroovyShell shell = new GroovyShell(binding, cc); shell.evaluate(scriptFile); } } If you now call the testPreProcessor() method you should see some output on the console with the output from ANTLR's tree.toStringTree() method call and a message that the semantic model has been visited. Create a compiler configuration script First, a CompilerConfiguration is created with a custom ParserPluginFactory, and passed to the GroovyShell. Second, the groovyc compiler is invoked with the --configscript flag.
The config script has a CompilerConfiguration injected before any files are compiled. The CompilerConfiguration is exposed as a variable named configuration. The compiler configuration script config.groovy looks as follows: configuration.setPluginFactory(new com.dsl.transformation.SourcePreProcessor()) We can simply set the SourcePreProcessor on the configuration as a PluginFactory. Export the Jar file With Eclipse, you can export the project as a Jar file by selecting: Export > Java > JAR File You don't need the libs folder, because Jar files that are included in another Jar file aren't available on the class path. What you can do is create a Manifest.MF file with the following class path entries: Class-Path: libs/antlr-4.4-complete.jar libs/hamcrest-core-1.3.jar This makes sure that if you run the Jar file from the right location, the required libraries are on the class path. This is not bullet-proof production code, but for testing it is sufficient. Now that you have exported a Jar file, all you need to do is include the Jar file with your AST Transformations on the class path and Groovy will automatically understand what to do. The DSL script is invoked from the command line with the following command: groovy -cp Cymbol.jar --configscript config.groovy tests\test1.cymbol You can also invoke the groovyc compiler and compile your DSL script into a Java class. The command is the same, really: groovyc -cp Cymbol.jar --configscript config.groovy tests\test1.cymbol In case you have a DSL whose customers write scripts with it, you can now also compile those scripts with a build system such as Ant or Gradle, package the results in a JAR file and deliver it to a production system. Conclusion In the previous sections you have seen a technique to develop a DSL and execute it as a Groovy script, or to compile the DSL script into a Java class file. It requires quite a lot of Groovy's DSL features, but it is a good showcase of what you can do with Groovy's superb DSL features and how to successfully apply them. And I didn't even write a single line of Groovy code! The techniques that have been covered are: Generate an ANTLR Parser from a grammar file Create a Groovy source pre-processor Create Global AST Transformations that delegate to a helper class for easy implementations Execute a DSL script with a custom CompilerConfiguration with embedded Groovy or from the command line Compile a DSL script with an ANTLR grammar into a Java .class file with a build system That was a lot of ground to cover. One open point remains: tool support. Now that I have my DSL, can I at least have syntax highlighting? I hope to return to this, and other topics involving DSL design and development, in a future article.
January 11, 2015
by Reinout Korbee
· 12,047 Views · 2 Likes
Java EE Interceptors
History I think it's important to take a look at the evolution of Interceptors in Java EE because of the simple fact that it started as an EJB-specific item and later evolved into a separate spec which is now open for extension by other Java EE specifications. Version 1.0 Interceptors were first introduced in EJB 3.0 (part of Java EE 5). Interceptors did not have a dedicated spec, but they were versioned 1.0 and brought basic AOP-related features to managed beans (POJOs) via simple annotations: @AroundInvoke – to annotate methods containing the interception logic for target class methods @Interceptors – to bind the interceptor classes with their target classes/methods Capability to configure interceptors for an entire module (EJB JAR) via the deployment descriptor @ExcludeDefaultInterceptors – to mute default interceptors defined in the deployment descriptor @ExcludeClassInterceptors – to mute a globally defined (class level) interceptor for a particular method/constructor of the class Interceptors 1.1 Along came Java EE 6 with EJB 3.1 – Interceptors 1.1 was still included in the EJB spec document @InterceptorBinding – a type-safe way of specifying interceptors of a class or a method. Please note that this annotation was leveraged by CDI 1.0 (another specification introduced in Java EE 6) and its details are present in the CDI 1.0 spec doc rather than EJB 3.1 (light bulb moment … at least for me) @Interceptor – used to explicitly declare a class containing interception logic in a specific method (annotated with @AroundInvoke etc.) as an interceptor, along with an appropriate interceptor binding. This too was mentioned in the CDI 1.0 documentation only. @AroundTimeout – used to intercept time-outs of EJB timers, along with a way to obtain an instance of the Timer being intercepted (via javax.interceptor.InvocationContext.getTimer()) Interceptors 1.2 Interceptors were split off into an individual spec in Java EE 7 and thus Interceptors 1.2 came into being Interceptors 1.2 was a maintenance release on top of 1.1 and hence the JSR number still remained the same as EJB 3.1 (JSR 318) Interceptor.Priority (static class) – to provide the capability to define the order (priority) in which the interceptors need to be invoked @AroundConstruct – used to intercept the construction of the target class, i.e. invoke logic before the constructor of the target class is invoked It's important to bear in mind that Interceptors are applicable to managed beans in general. Managed Beans themselves are simple POJOs which are granted basic services by the container – Interceptors are one of them, along with life-cycle callbacks and resource injection. Memory Aid It's helpful to think of Interceptors as components which can interpose on beans throughout their life cycle before they are even constructed – @AroundConstruct after they are constructed – @PostConstruct during their life time (method invocation) – @AroundInvoke prior to destruction – @PreDestroy time-outs of EJBs – @AroundTimeout Let's look at some of the traits of Interceptors in more detail and try to answer questions like where are they applied and what do they intercept? how to bind interceptors to the target (class) they are supposed to intercept?
Interceptor Types (based on the intercepted component) Method Interceptors Achieved by @AroundInvoke public class MethodInterceptor{ @AroundInvoke public Object interceptorMethod(InvocationContext ictx) throws Exception{ //logic goes here return ictx.proceed(); } } @Stateless public class AnEJB{ @Interceptors(MethodInterceptor.class) public void bizMethod(){ //any calls to this method will be intercepted by MethodInterceptor.interceptorMethod() } } The method containing the logic can be part of a separate class as well as the target class (the class to be intercepted) itself. Lifecycle Callback Interceptors Decorate the method with @AroundConstruct in order to intercept the constructor invocation for a class public class ConstructorInterceptor{ @AroundConstruct public Object interceptorMethod(InvocationContext ictx) throws Exception{ //logic goes here return ictx.proceed(); } } public class APOJO{ @Interceptors(ConstructorInterceptor.class) public APOJO(){ //any calls to this constructor will be intercepted by ConstructorInterceptor.interceptorMethod() } } The method annotated with @AroundConstruct cannot be a part of the intercepted class. It has to be defined using a separate interceptor class. Use the @PostConstruct annotation on a method in order to intercept a callback method on a managed bean. Just to clarify again – the Interceptor spec does not define a new annotation as such. One needs to reuse @PostConstruct (part of the Common Annotations spec) on the interceptor method. public class PostConstructInterceptor{ @PostConstruct public void interceptorMethod(InvocationContext ictx) throws Exception{ //logic goes here ictx.proceed(); } } @Interceptors(PostConstructInterceptor.class) public class APOJO{ @PostConstruct public void bizMethod(){ //any calls to this method will be intercepted by PostConstructInterceptor.interceptorMethod() } } The @PreDestroy annotation (another callback annotation defined in the Common Annotations spec) is used in a similar fashion. Time-out Interceptors As mentioned above – @AroundTimeout is used to intercept time-outs of EJB timers, along with a way to obtain an instance of the Timer being intercepted (via javax.interceptor.InvocationContext.getTimer()). Applying/Binding Interceptors Using @Interceptors As shown in the above examples – just use the @Interceptors annotation to specify the interceptor classes @Interceptors can be applied at the class level (automatically applicable to all the methods of a class), to a particular method or multiple methods, and to a constructor in the case of a constructor-specific interceptor using @AroundConstruct Using @InterceptorBinding Interceptor Bindings (explained above) – use the @InterceptorBinding annotation to define a binding annotation which is further used on the interceptor class as well as the target class (whose method, constructor etc. needs to be intercepted) @InterceptorBinding @Target({TYPE, METHOD, CONSTRUCTOR}) @Retention(RUNTIME) public @interface Auditable { } @Auditable @Interceptor public class AuditInterceptor { @AroundInvoke public Object audit(InvocationContext ictx) throws Exception{ //logic goes here return ictx.proceed(); } } @Stateless @Auditable public class AnEJB{ public void bizMethod(){ //any calls to this method will be intercepted by AuditInterceptor.audit() } } Deployment Descriptor One can also use deployment descriptors to bind interceptors and target classes, either explicitly or to override the annotation-based configuration. This was a rather quick overview of Java EE interceptors. Hopefully the right trigger for you to dig deeper :-) Cheers!
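As an addendum to the time-out interceptor type described above (the only one not shown with a snippet), a minimal sketch could look like the following; the class and method names are illustrative:

import javax.interceptor.AroundTimeout;
import javax.interceptor.InvocationContext;

public class TimeoutInterceptor {

    @AroundTimeout
    public Object interceptTimeout(InvocationContext ictx) throws Exception {
        // The intercepted EJB timer is available via ictx.getTimer()
        System.out.println("Timer fired: " + ictx.getTimer());
        return ictx.proceed(); // continue with the actual @Timeout method
    }
}

It would be bound to a bean with @Interceptors(TimeoutInterceptor.class), exactly like the method interceptor shown earlier.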
January 9, 2015
by Abhishek Gupta DZone Core CORE
· 30,803 Views · 8 Likes
An Impatient New User's Introduction to API Management with JBoss apiman 1.0
API Management? Did you say “API Management?”

Software application development models are evolutionary things. New technologies are always being created and require new approaches. It’s frequently the case today that a service-oriented architecture (SOA) model is used and that the end product is a software service that can be used by applications. The explosion in growth of mobile devices has only accelerated this trend. Every new mobile phone sold is another platform onto which applications are deployed. These applications are often built from services provided from multiple sources. The applications often consume these services through their APIs. OK, that’s all interesting, but why does this matter? Here’s why: If you are providing a service, you’d probably like to receive payment when it’s used by an application. For example, let’s say that you’ve spent months creating a new service that provides incredibly accurate and timely driving directions. You can imagine every mobile phone GPS app making use of your service someday. That is, however, assuming that you can find a way to enforce a contract on consumers of the API and provide them with a service level agreement (SLA). Also, you have to find a way to actually track consumers’ use of the API so that you can actually enforce that SLA. Finally, you have to have the means to update a service and publish new versions of services. Likewise, if you are consuming a service, for example, if you want to build the killer app that will use that cool new mapping service, you have to have the means to find the API, identify the API’s endpoint, and register your usage of the API with its provider. The approach that is followed to fulfill both service providers’ and consumers’ needs is...API Management.

JBoss apiman 1.0

apiman is JBoss’ open source API Management system. apiman fulfills service API providers’ and consumers’ needs by implementing:

API Manager - The API Manager provides an easy way for API/service providers to use a web UI to define service contracts for their APIs, apply these contracts across multiple APIs, and control role-based user access and API versioning. These contracts can govern access to services and limits on the rate at which consumers can access services. The same UI enables API consumers to easily locate and access APIs.

API Gateway - The gateway applies the service contract policies of API Management by enforcing at runtime the rules defined in the contracts and tracking the service API consumers’ use of the APIs for every request made to the services. The way that the API Gateway works is that the consumer of the service accesses the service through a URL that designates the API Gateway as a proxy for the service. If the policies defined to govern access to the service are satisfied (see a later section in this post for a discussion of apiman policies), the API Gateway then proxies requests to the service’s backend API implementation.

The best way to understand API Management with apiman is to see it in action. In this post, we’ll install apiman 1.0, configure an API with contracts through the API Manager, and watch the API Gateway control access to the API and track its use.

Prerequisites

We don’t need very much to run apiman out of the box. Before we install apiman, you’ll have to have Java (version 1.7 or newer) installed on your system. You’ll also need git and maven installed to be able to build the example service that we’ll use.
A note on software versions: In this post we’ll use the latest available version of apiman as of December 2014. As of this writing, version 1.0 of apiman was just released (December 2014). Depending on the versions of software that you use, some screen displays may look a bit different.

Getting apiman

Like all JBoss software, installation of apiman is simple. First, you will need an application server on which to install and run apiman. We’ll use the open source JBoss WildFly server release 8.2 (http://www.wildfly.org/). To make things easier, apiman includes a pointer to JBoss WildFly on its download page here: http://www.apiman.io/latest/download.html

To install WildFly, simply download http://download.jboss.org/wildfly/8.2.0.Final/wildfly-8.2.0.Final.zip and unzip the file into the directory in which you want to run the server. Then, download the apiman 1.0 WildFly overlay zip file and unzip it over the directory that was created when you unzipped the WildFly download. The apiman 1.0 WildFly overlay zip file is available here: http://downloads.jboss.org/overlord/apiman/1.0.0.Final/apiman-distro-wildfly8-1.0.0.Final-overlay.zip

The commands that you will execute will look something like this:

mkdir apiman
cd apiman
unzip wildfly-8.2.0.Final.zip
unzip -o apiman-distro-wildfly8-1.0.0.Final-overlay.zip -d wildfly-8.2.0.Final

Then, to start the server, execute these commands:

cd wildfly-8.2.0.Final
./bin/standalone.sh -c standalone-apiman.xml

The server will write logging messages to the screen. When you see some messages that look like this, you’ll know that the server is up and running with apiman installed:

13:57:03,229 INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "apiman-ds.xml" (runtime-name : "apiman-ds.xml")
13:57:03,261 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
13:57:03,262 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
13:57:03,262 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.2.0.Final "Tweek" started in 5518ms - Started 754 of 858 services (171 services are lazy, passive or on-demand)

If this were a production server, the first thing that we’d do is to change the OOTB default admin username and/or password. apiman is configured by default to use JBoss KeyCloak (http://keycloak.jboss.org/) for password security. Also, the default database used by apiman to store contract and service information is the H2 database. For a production server, you’d want to reconfigure this to use a production database. Note: apiman includes DDLs for both MySQL and PostgreSQL. For the purposes of our demo, we’ll keep things simple and use the default configuration.

To access apiman’s API Manager UI, go to: http://localhost:8080/apiman-manager, and log in. The admin user account that we’ll use has a username of “admin” and a password of “admin123!” You should see a screen that looks like this:

Before we start using apiman, let’s take a look at how apiman defines how services and the meta data on which they depend are organized.

Policies, Plans, and Organizations

apiman uses a hierarchical data model that consists of these elements: Policies, Plans, and Organizations.

Policies

Policies are at the lowest level of the data model, and they are the basis on which the higher level elements of the data model are built. A policy defines an action that is performed by the API Gateway at runtime.
Everything defined in the API Manager UI is there to enable apiman to apply policies to requests made to services. When a request to a service is made, apiman creates a chain of policies to be applied to that request. apiman policy chains define a specific sequence order in which the policies defined in the API Manager UI are applied to service requests. The sequence in which incoming service requests have policies applied is:
• First, at the application level. In apiman, an application is contracted to use one or more services.
• Second, at the plan level. In apiman, policies are organized into groups called plans. (We’ll discuss plans in the next section of this post.)
• Third, at the individual service level.

What happens is that when a service request is received by the API Gateway at runtime, the policy chain is applied in the order of application, plan, and service. If no failures, such as a rate counter being exceeded, occur, the API Gateway sends the request to the service’s backend API implementation. As we mentioned earlier in this post, the API Gateway acts as a proxy for the service.

Next, when the API Gateway receives a response from the service’s backend implementation, the policy chain is applied again, but this time in the reverse order. The service policies are applied first, then the plan policies, and finally the application policies. If no failures occur, then the service response is sent back to the consumer of the service. By applying the policy chain twice, both for the originating incoming request and the resulting response, apiman allows policy implementations two opportunities to provide management functionality during the lifecycle. The following diagram illustrates this two-way approach to applying policies:

Plans

In apiman, a “plan” is a set of policies that together define the level of service that apiman provides for a service. Plans enable apiman users to define multiple different levels of service for their APIs, based on policies. It’s common to define different plans for the same service, where the differences depend on configuration options. For example, a group or company may offer both a “gold” and “silver” plan for the same service. The gold plan may be more expensive than the silver plan, but it may offer a higher limit on service requests in a given (and configurable) time period.

Organizations

The “organization” is at the top level of the apiman data model. An organization contains and manages all elements used by a company, university, group inside a company, etc. for API management with apiman. All plans, services, applications, and users for a group are defined in an apiman organization. In this way, an organization acts as a container of other elements. Users must be associated with an organization before they can use apiman to create or consume services. apiman implements role-based access controls for users. The role assigned to a user defines the actions that a user can perform and the elements that a user can manage.

Before we can define a service, the policies that govern how it is accessed, the users who will be able to access it, and the organizations that will create and consume it, we need a service and a client to access that service. Luckily, creating the service and deploying it to our WildFly server, and accessing it through a client are easy.

Getting, Building, and Deploying the Example Service

The source code for the example service is contained in a git repo (http://git-scm.com) hosted at github (https://github.com/apiman).
To download a copy of the example service, navigate to the directory in which you want to build the service and execute this git command: git clone git@github.com:apiman/apiman-quickstarts.git As the source code is downloading, you'll see output that looks like this: git clone git@github.com:apiman/apiman-quickstarts.git Initialized empty Git repository in /tmp/tmp/apiman-quickstarts/.git/ remote: Counting objects: 104, done. remote: Total 104 (delta 0), reused 0 (delta 0) Receiving objects: 100% (104/104), 18.16 KiB, done. Resolving deltas: 100% (40/40), done. And, after the download is complete, you'll see a populated directory tree that looks like this: └── apiman-quickstarts ├── echo-service │ ├── pom.xml │ ├── README.md │ └── src │ └── main │ ├── java │ │ └── io │ │ └── apiman │ │ └── quickstarts │ │ └── echo │ │ ├── EchoResponse.java │ │ └── EchoServlet.java │ └── webapp │ └── WEB-INF │ ├── jboss-web.xml │ └── web.xml ├── LICENSE ├── pom.xml ├── README.md ├── release.sh └── src └── main └── assembly └── dist.xml As we mentioned earlier in the post, the example service is very simple. The only action that the service performs is to echo back in responses the meta data in the REST (http://en.wikipedia.org/wiki/Representational_state_transfer) requests that it receives. Maven is used to build the service. To build the service into a deployable .war file, navigate to the directory into which you downloaded the service example: cd apiman-quickstarts/echo-service And then execute this maven command: mvn package As the service is being built into a .war file, you'll see output that looks like this: [INFO] Scanning for projects... [INFO] [INFO] Using the builder org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder with a thread count of 1 [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building apiman-quickstarts-echo-service 1.0.1-SNAPSHOT [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ apiman-quickstarts-echo-service --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /jboss/local/redhat_git/apiman-quickstarts/echo-service/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ apiman-quickstarts-echo-service --- [INFO] Compiling 2 source files to /jboss/local/redhat_git/apiman-quickstarts/echo-service/target/classes [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ apiman-quickstarts-echo-service --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /jboss/local/redhat_git/apiman-quickstarts/echo-service/src/test/resources [INFO] [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ apiman-quickstarts-echo-service --- [INFO] No sources to compile [INFO] [INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ apiman-quickstarts-echo-service --- [INFO] No tests to run. 
[INFO]
[INFO] --- maven-war-plugin:2.2:war (default-war) @ apiman-quickstarts-echo-service ---
[INFO] Packaging webapp
[INFO] Assembling webapp in [/jboss/local/redhat_git/apiman-quickstarts/echo-service/target/apiman-quickstarts-echo-service-1.0.1-SNAPSHOT]
[INFO] Processing war project
[INFO] Copying webapp resources [/jboss/local/redhat_git/apiman-quickstarts/echo-service/src/main/webapp]
[INFO] Webapp assembled in [23 msecs]
[INFO] Building war: /jboss/local/redhat_git/apiman-quickstarts/echo-service/target/apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war
[INFO] WEB-INF/web.xml already added, skipping
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.184 s
[INFO] Finished at: 2014-12-26T16:11:19-05:00
[INFO] Final Memory: 14M/295M
[INFO] ------------------------------------------------------------------------

If you look closely, near the end of the output, you'll see the location of the .war file: /jboss/local/redhat_git/apiman-quickstarts/echo-service/target/apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war

To deploy the service, we can copy the .war file to our WildFly server's "deployments" directory. After you copy the service's .war file to the deployments directory, you'll see output like this generated by the WildFly server:

16:54:44,313 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war" (runtime-name: "apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war")
16:54:44,397 INFO [org.wildfly.extension.undertow] (MSC service thread 1-16) JBAS017534: Registered web context: /apiman-echo
16:54:44,455 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018559: Deployed "apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war" (runtime-name : "apiman-quickstarts-echo-service-1.0.1-SNAPSHOT.war")

Make special note of this line of output:

16:54:44,397 INFO [org.wildfly.extension.undertow] (MSC service thread 1-16) JBAS017534: Registered web context: /apiman-echo

This output indicates that the URL of the deployed example service is: http://localhost:8080/apiman-echo

Remember, however, that this is the URL of the deployed example service if we access it directly. We'll refer to this as the "unmanaged service" as we are able to connect to the service directly, without going through the API Gateway. The URL to access the service through the API Gateway ("the managed service") at runtime will be different. Now that our example service is installed, it’s time to install and configure our client to access the server.

Accessing the Example Service Through a Client

There are a lot of options available when it comes to what we can use for a client to access our service. We’ll keep the client simple so that we can keep our focus on apiman and simply install a REST client into the FireFox browser. The REST Client FireFox add-on (http://restclient.net/) is available here: https://addons.mozilla.org/en-US/firefox/addon/restclient/versions/2.0.3

After you install the client into FireFox, you can access the deployed service using the URL that we just defined. If you execute a GET command, you’ll see output that looks like this:

Now that our example service is built, deployed and running, it’s time to create the organizations for the service provider and the service consumer.
The differences between the requirements of the two organizations will be evident in their apiman configuration properties.

Creating Users for the Service Provider and Consumer

Before we create the organizations, we have to create a user for each organization. We'll start by creating the service provider user. To do this, logout from the admin account in the API Manager UI. The login dialog will then be displayed. Select the "New user" option and register the service provider user: Then, logout and repeat the process to register a new application developer user too: Now that the new users are registered we can create the organizations.

Creating the Service Producer Organization

To create the service producer organization, log back into the API Manager UI as the servprov user and select “Create a new Organization”: Select a name and description for the organization, and press “Create Organization”: And, here’s our organization: Note that in a production environment, users would request membership in an organization. The approval process for accepting new members into an organization would follow the organization's workflow, but this would be handled outside of the API Manager. For the purposes of our demonstration, we'll keep things simple.

Configuring the Service, its Policies, and Plans

To configure the service, we’ll first create a plan to contain the policies that we want applied by the API Gateway at runtime when requests to the service are made. To create a new plan, select the “Plans” tab. We’ll create a “gold” plan: Once the plan is created, we will add policies to it: apiman provides several OOTB policies. Since we want to be able to demonstrate a policy being applied, we’ll select a Rate Limiting Policy, and set its limit to a very low level. If our service receives more than 10 requests in a day, the policy should block all subsequent requests. So much for a “gold” level of service! After we create the policy and add it to the plan, we have to lock the plan: And here is the finished and locked plan: At this point, additional plans can be defined for the service. We’ll also create a “silver” plan that will offer a lower level of service (i.e., a request rate limit lower than 10 per day) than the gold plan. Since the process to create this silver plan is identical to that of the gold plan, we’ll skip the screenshots.

Now that the two plans are complete and locked, it’s time to define the service. We’ll give the service an appropriate name, so that providers and consumers alike will be able to run a query in the API Manager to find it. After the service is defined, we have to define its implementation. In the context of the API Manager, the API Endpoint is the service’s direct URL. Remember that the API Gateway will act as a proxy for the service, so it must know the service’s actual URL. In the case of our example service, the URL is: http://localhost:8080/apiman-echo

The plans tab shows which plans are available to be applied to the service: Let’s make our service more secure by adding an authentication policy that will require users to log in before they can access the service. Select the Policies tab, and then define a simple authentication policy. Remember the user name and password that you define here as we’ll need them later on when we send requests to the service.
After the authentication policy is added, we can publish the service to the API Gateway: And, here it is, the published service: OK, that finishes the definition of the service provider organization and the publication of the service. Next, we'll switch over to the service consumer side and create the service consumer organization and register an application to connect to the managed service through the proxy of the API Gateway.

The Service Consumer Organization

We'll repeat the process that we used to create the service producer organization. Log in to the API Manager UI as the “appdev” user and create the organization: Unlike the process we used when we created the elements used by the service provider, the first step that we’ll take is to create a new application and then search for the service to be used by the application: Searching for the service is easy, as we were careful to set the service name to something memorable: Select the service name, and then specify the plan to be used. We’ll splurge and use the gold plan: Next, select “create contract” for the plan: Then, agree to the contract terms (which seem to be written in a strange form of Latin in the apiman 1.0 release): The last step is to register the application with the API Gateway so that the gateway can act as a proxy for the service: Congratulations! All the steps necessary to provide and consume the service are complete!

There’s just one more step that we have to take in order for clients to be able to access the service through the API Gateway. Remember the URL that we used to access the unmanaged service directly? Well, forget it. In order to access the managed service through the API Gateway acting as a proxy for the service, we have to obtain the managed service's URL. In the API Manager UI, head on over to the "APIs" tab for the application, click on the “>” character to the left of the service name. This will expose the API Key and the service’s HTTP endpoint in the API Gateway: In order to be able to access the service through the API Gateway, we have to provide the API Key with each request. The API Key can be provided either through an HTTP Header (X-API-Key) or a URL query parameter. Luckily, the API Manager UI does the latter for us. Select the icon to the right of the HTTP Endpoint and this dialog is displayed: Copy the URL into the clipboard. We’ll need to enter this into the client in a bit. The combined API Key and HTTP endpoint should look something like this: http://localhost:8080/apiman-gateway/ACMEServices/echo/1.0?apikey=c374c202-d4b3-4442-b9e4-c6654f406e3d

Accessing the Managed Service Through the apiman API Gateway, Watching the Policies at Runtime

Thanks for hanging in there! The setup is done. Now, we can fire up the client and watch the policies in action as they are applied at runtime by the API Gateway, for example: Open the client, and enter the URL for the managed service (http://localhost:8080/apiman-gateway/ACMEServices/echo/1.0?apikey=c374c202-d4b3-4442-b9e4-c6654f406e3d) What happens first is that the authentication policy is applied and a login dialog is then displayed: Enter the username and password (user1/password) that we defined when we created the authentication policy to access the service. The fact that you are seeing this dialog confirms that you are accessing the managed service and are not accessing the service directly. When you send a GET request to the service, you should see a successful response: So far so good.
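As a quick aside, the same managed calls can be scripted from the command line. This sketch assumes the placeholder API key shown above and the user1/password credentials from the simple authentication policy:

# the API key below is the placeholder value shown above; substitute your own
MANAGED_URL='http://localhost:8080/apiman-gateway/ACMEServices/echo/1.0?apikey=c374c202-d4b3-4442-b9e4-c6654f406e3d'

# one request, authenticating against the simple authentication policy
curl -u user1:password "$MANAGED_URL"

# the API key can also be sent as the X-API-Key header instead of a query parameter
curl -u user1:password -H "X-API-Key: c374c202-d4b3-4442-b9e4-c6654f406e3d" \
     "http://localhost:8080/apiman-gateway/ACMEServices/echo/1.0"

# hammer the endpoint to watch the gold plan's rate limit kick in
for i in $(seq 1 11); do
  curl -s -o /dev/null -w "%{http_code}\n" -u user1:password "$MANAGED_URL"
done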
Now, send 10 more requests and you will see a response that looks like this as the gold plan rate limit is exceeded: And there it is. Your gold plan has been exceeded. Maybe next time you’ll spend a little more and get the platinum plan! ;-)

Wrap-up

Let’s recap what we just accomplished in this demo:
• We installed apiman 1.0 onto a WildFly server instance.
• We used git to download and maven to build the sample REST service.
• As a service provider, we created an organization, defined policies based on service usage rate limits and user authentication, grouped them into plans, and assigned them to a service.
• As a service consumer, we searched for and found that service, and assigned it to an application.
• As a client, we accessed the service and observed how the API Gateway managed the service.

And note that in the process of doing all this, the only code that we had to build was the example service itself. We were able to fully configure the service, policies, plans, and the application in the API Manager UI.

What’s Next?

In this post, we’ve only scratched the surface of API Management with apiman. To learn more about apiman, you can explore its website here: http://www.apiman.io/ Join the project mailing list here: https://lists.jboss.org/mailman/listinfo/apiman-user And, better still, get involved! Contribute bug reports or feature requests. Write about your own experiences with apiman. Download the apiman source code, take a look around, and contribute your own additions. apiman 1.0 was just released, so there’s no better time to join in and contribute!

Acknowledgements

The author would like to acknowledge Eric Wittmann for his (never impatient) review comments and suggestions on writing this post!

Downloads Used in this Article
• REST Client (http://restclient.net/) FireFox Add-On - https://addons.mozilla.org/en-US/firefox/addon/restclient/versions/2.0.3
• Echo service source code - https://github.com/EricWittmann/apiman-quickstarts
• apiman 1.0 - http://downloads.jboss.org/overlord/apiman/1.0.0.Final/apiman-distro-wildfly8-1.0.0.Final-overlay.zip
• WildFly 8.2.0 - http://download.jboss.org/wildfly/8.2.0.Final/wildfly-8.2.0.Final.zip
• Git - http://git-scm.com
• Maven - http://maven.apache.org

References
• http://www.apiman.io/
• apiman tutorial videos - https://vimeo.com/user34396826
• http://www.softwareag.com/blog/reality_check/index.php/soa-what/what-is-api-management/
• http://keycloak.jboss.org/
January 9, 2015
by Len DiMaggio
· 13,044 Views
Getting Spring Boot to work with Papertrail logging
Spring Boot already comes with a great, pre-configured logging system inside, but in real projects it's important to have the ability to search logs, aggregate them and access them easily. One of the easiest options for this is http://papertrailapp.com/. They provide a logging service over the syslog protocol and a 100 MB/month free plan.

Let's prepare Papertrail for our example: create a logging group in the Papertrail dashboard ("create group" button). Then create a log destination in the Papertrail dashboard: go to "Account -> Log Destinations", click the "Create log destination" button, and make sure your group is selected in the "New systems will join" field. You can leave all other fields with their default values, just click "Create". Remember your log destination (it will look like logs2.papertrailapp.com:12345), we will use it later.

Spring Boot uses Logback as its default logging system. It's a powerful logging tool with many options. For our purposes we will use ch.qos.logback.classic.net.SyslogAppender. Add a "logback.xml" file to your "resources" folder that configures a SyslogAppender with the syslog host ${papertrail_host} (1), the port ${papertrail_port} (2), the user facility, a suffix pattern of ${papertrail_app:-app} (3) %highlight([%.-1level]) %35.35logger{35}:\t%m\t%cyan%ex{5}, and throwableExcluded set to true (4); a sketch of the full file is shown at the end of this post.

I'm using environment variables ((1), (2), and (3)) for Papertrail's credentials; this allows you to configure different log destinations in different application environments. Note the excluded throwable at (4). We already have a pattern for the throwable in the suffixPattern (%cyan%ex{5}). Why do we do this? Because otherwise exception stack traces will be printed after the main log message line by line, which will increase traffic and also means you will not be able to see the stack trace in search.

I will demonstrate how it works with a really simple Spring Boot application:

package com.github.bsideup.spring.boot.example.papertrail;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class Application {

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @RequestMapping("/")
    public String index() {
        log.warn("I'm so tired to welcome everyone", new Exception(new Exception()));
        return "Hello World!";
    }
}

Now, run your application with the following environment variables:
• papertrail_host - the host from your log destination (i.e. logs2.papertrailapp.com)
• papertrail_port - the port from your log destination (i.e. 12345)
• [optional] papertrail_app - the application name (default: app)

Your Papertrail output will look like mine: You can find the sample project on GitHub: https://github.com/bsideup/spring-boot-sample-papertrail
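For reference, a minimal logback.xml along these lines might look like the following. The host, port, facility, suffix pattern, and throwableExcluded values are the ones described above; the appender name and the root logger configuration are assumptions of this sketch:

<configuration>
  <appender name="PAPERTRAIL" class="ch.qos.logback.classic.net.SyslogAppender">
    <!-- (1) and (2): destination taken from environment variables -->
    <syslogHost>${papertrail_host}</syslogHost>
    <port>${papertrail_port}</port>
    <facility>USER</facility>
    <!-- (3) application name, falling back to "app" -->
    <suffixPattern>${papertrail_app:-app} %highlight([%.-1level]) %35.35logger{35}:\t%m\t%cyan%ex{5}</suffixPattern>
    <!-- (4) don't append the stack trace line by line; it is already part of suffixPattern -->
    <throwableExcluded>true</throwableExcluded>
  </appender>

  <root level="INFO">
    <appender-ref ref="PAPERTRAIL" />
  </root>
</configuration>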
January 9, 2015
by Sergei Egorov
· 8,598 Views · 2 Likes
Maven - How to Build Jar Files and Obtain Dependencies
This article presents what it takes to build one or more jar files for a given framework/library using Maven, provided the framework's downloadable files include a pom.xml. Please feel free to comment/suggest if I missed one or more important points. Also, sorry for the typos.

So far, whenever I came across a pom.xml file in a framework that I downloaded in order to get the jar file, I hated it. I used to then go to the internet and get the compiled jar file(s) for the framework/library. And, the good thing is that I have been able to get my work done. It was purely out of laziness that I did not build using Maven. Then, I got a chance to work with the Twitter HBC library (Java) for integrating with Twitter. I downloaded it and wanted to get one or more jar files. And, once again, I came across a pom.xml in the root folder and individual pom.xml files in the hbc-core, hbc-twitter4j and hbc-examples folders. This time, I decided to build the hbc jar files on my system.

Following are some of the steps I took to build the hbc jar files and get the dependencies needed to run a program using them (condensed into commands below):
• Download and install Maven. Anyone wanting to install/configure Maven should go to the Maven in 5 Minutes page. It clearly states what needs to be done to install/configure Maven.
• Once configured, open a command prompt and execute the command "mvn -version". If the version information of Maven is displayed, you are all set.
• Go to the folder which contains the pom.xml file. In the present case, go to the hbc root folder, hbc-master.
• Execute the following command to build the hbc jar files and also obtain the dependencies (jar files) required to run the library: "mvn clean install -U dependency:copy-dependencies". This command builds the sources and creates two jar files, in hbc-twitter4j/target (hbc-twitter4j-2.2.1-SNAPSHOT.jar) and hbc-core/target (hbc-core-2.2.1-SNAPSHOT.jar). Further to that, it downloads all the dependent jar files into the respective target/dependency folder.
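Condensed into commands, the steps look roughly like this (directory names follow the hbc-master layout described above):

# confirm Maven is installed and configured
mvn -version

# build the hbc jars and copy the runtime dependencies
cd hbc-master
mvn clean install -U dependency:copy-dependencies

# resulting artifacts (versions as described in the post)
ls hbc-core/target/hbc-core-2.2.1-SNAPSHOT.jar
ls hbc-twitter4j/target/hbc-twitter4j-2.2.1-SNAPSHOT.jar
ls hbc-core/target/dependency/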
January 8, 2015
by Ajitesh Kumar
· 20,129 Views
How to Configure MySQL Metastore for Hive?
This is a step-by-step guide on how to configure a MySQL metastore for Hive in place of the default Derby metastore.
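The heart of such a configuration is typically a hive-site.xml that points the metastore at MySQL instead of the embedded Derby database. The property names below are the standard Hive metastore settings; the host, database name, and credentials are placeholders:

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>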
January 8, 2015
by Saurabh Chhajed
· 81,237 Views · 4 Likes
How to Integrate Jersey in a Spring MVC Application
I have recently started to build a public REST API with Java for Podcastpedia.org, and for the JAX-RS implementation I have chosen Jersey, as I find it “natural” and powerful – you can find out more about it by following the Tutorial – REST API design and implementation in Java with Jersey and Spring. Because Podcastpedia.org is a web application powered by Spring MVC, I wanted to integrate both frameworks in podcastpedia-web, to take advantage of the backend service functionality already present in the project. Anyway, this short post will present the steps I had to take to make the integration between the two frameworks work.

Framework versions

Current versions used: Spring 4.1.0.RELEASE and Jersey 2.14.

Project dependencies

The Jersey Spring extension must be present in your project’s classpath. If you are using Maven, add it to the pom.xml file of your project: org.glassfish.jersey.ext jersey-spring3 ${jersey.version} (excluding org.springframework spring-core, spring-web and spring-beans) and org.glassfish.jersey.media jersey-media-json-jackson ${jersey.version} (excluding com.fasterxml.jackson.jaxrs jackson-jaxrs-base, com.fasterxml.jackson.core jackson-annotations and com.fasterxml.jackson.jaxrs jackson-jaxrs-json-provider). Note: I have explicitly excluded the Spring core and the Jackson implementation libraries as they have already been imported in the project with preferred versions.

Web.xml configuration

In the web.xml, in addition to the Spring MVC servlet configuration, I added the jersey-servlet configuration, which maps all requests starting with /api/. The relevant fragments are: Spring MVC Dispatcher Servlet, org.springframework.web.servlet.DispatcherServlet, contextConfigLocation classpath:spring/application-context.xml, load-on-startup 1, mapped to /; and jersey-serlvet, org.glassfish.jersey.servlet.ServletContainer, javax.ws.rs.Application org.podcastpedia.web.api.JaxRsApplication, load-on-startup 2, mapped to /api/* (see the listing at the end of this post).

Well, that’s pretty much it… If you have any questions drop me a line or comment in the discussion below. In the coming post I will present some of the results of this integration, by showing how to call one method of the public REST API with jQuery, to dynamically load recent episodes of a podcast, so stay tuned.
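Put together, a web.xml along the lines described above might look like this. The servlet names (including the "jersey-serlvet" spelling), class names, init parameters, and mappings are the ones quoted in the post, while the surrounding element layout is a sketch:

<web-app>

  <servlet>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>classpath:spring/application-context.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>

  <servlet>
    <servlet-name>jersey-serlvet</servlet-name>
    <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
    <init-param>
      <param-name>javax.ws.rs.Application</param-name>
      <param-value>org.podcastpedia.web.api.JaxRsApplication</param-value>
    </init-param>
    <load-on-startup>2</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>jersey-serlvet</servlet-name>
    <url-pattern>/api/*</url-pattern>
  </servlet-mapping>

</web-app>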
January 8, 2015
by Adrian Matei
· 19,818 Views
Including Java Agent in Standalone Spring Boot Application
Recently at DevSKiller.com we've decided to move the majority of our stuff to simple containers. It was pretty easy due to the use of Spring Boot uber-jars, but the problem was the NewRelic agent, which has to be included separately. That caused an uncomfortable situation, so we decided to solve it by including the NewRelic agent in our uber-jar applications. If you also want to simplify your life, please follow the instructions provided :)

First we have to add the proper dependency to our pom.xml descriptor: com.newrelic.agent.java newrelic-agent 3.12.1 provided

Now, since we have the proper jar included in our project, it's time to unpack the dependency so that we have all the necessary classes in our application jar file: org.apache.maven.plugins maven-dependency-plugin 2.9 prepare-package unpack-dependencies newrelic-agent ${project.build.outputDirectory}

After this step we have all agent-related classes accessible directly from our jar. But the file still cannot be used as an agent jar. There are some important manifest entries that have to be present in every agent jar. The most important is the Premain-Class attribute specifying the main agent class containing the premain() method. In the case of NewRelic it's also important to include the Can-Redefine-Classes and Can-Retransform-Classes attributes. The easiest way to do that is to extend the maven-jar-plugin configuration: org.apache.maven.plugins maven-jar-plugin 2.5 com.newrelic.bootstrap.BootstrapAgent true true

Now comes the tricky part :) The NewRelic agent also contains a class with a main() method, which means the Spring Boot repackage plugin is unable to find a single main() method, so the build fails. It's not a problem, but we have to remember to specify the proper main class in spring-boot-maven-plugin (or in the gradle plugin): my.custom.Application

That's all! The pom.xml fragments above are pieced together in the sketch at the end of this post. You can execute your application with the following command: java -javaagent:myapp.jar -jar myapp.jar

Last but not least: don't forget to include the NewRelic configuration file (newrelic.yml) in the same directory as your application jar. The other solution is to set the newrelic.config.file system property to point to the fully qualified file name.
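A consolidated pom.xml fragment combining the pieces above might look roughly like this. The coordinates, plugin versions, manifest entries, and main class come from the fragments quoted in the post; the exact element nesting is a sketch:

<dependencies>
  <dependency>
    <groupId>com.newrelic.agent.java</groupId>
    <artifactId>newrelic-agent</artifactId>
    <version>3.12.1</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- unpack the agent classes into the application's own classes directory -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <version>2.9</version>
      <executions>
        <execution>
          <phase>prepare-package</phase>
          <goals>
            <goal>unpack-dependencies</goal>
          </goals>
          <configuration>
            <includeArtifactIds>newrelic-agent</includeArtifactIds>
            <outputDirectory>${project.build.outputDirectory}</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <!-- add the manifest entries required for an agent jar -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <archive>
          <manifestEntries>
            <Premain-Class>com.newrelic.bootstrap.BootstrapAgent</Premain-Class>
            <Can-Redefine-Classes>true</Can-Redefine-Classes>
            <Can-Retransform-Classes>true</Can-Retransform-Classes>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>
    <!-- tell the Spring Boot repackager which main() to use -->
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <mainClass>my.custom.Application</mainClass>
      </configuration>
    </plugin>
  </plugins>
</build>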
January 7, 2015
by Jakub Kubrynski
· 32,982 Views · 1 Like
Spring Retry - Ways to Integrate With Your Project
If you have a need to implement robust retry logic in your code, a proven way would be to use the Spring Retry library. My objective here is not to show how to use the Spring Retry project itself, but to demonstrate different ways that it can be integrated into your codebase.

Consider a service to invoke an external system:

package retry.service;

public interface RemoteCallService {
    String call() throws Exception;
}

Assume that this call can fail and you want the call to be retried thrice with a 2-second delay each time the call fails. To simulate this behavior I have defined a mock service using Mockito this way; note that this is being returned as a mocked Spring bean:

@Bean
public RemoteCallService remoteCallService() throws Exception {
    RemoteCallService remoteService = mock(RemoteCallService.class);
    when(remoteService.call())
            .thenThrow(new RuntimeException("Remote Exception 1"))
            .thenThrow(new RuntimeException("Remote Exception 2"))
            .thenReturn("Completed");
    return remoteService;
}

So essentially this mocked service fails 2 times and succeeds with the third call. And this is the test for the retry logic:

public class SpringRetryTests {

    @Autowired
    private RemoteCallService remoteCallService;

    @Test
    public void testRetry() throws Exception {
        String message = this.remoteCallService.call();
        verify(remoteCallService, times(3)).call();
        assertThat(message, is("Completed"));
    }
}

We are ensuring that the service is called 3 times to account for the first two failed calls and the third call which succeeds. If we were to directly incorporate spring-retry at the point of calling this service, then the code would have looked like this:

@Test
public void testRetry() throws Exception {
    String message = this.retryTemplate.execute(context -> this.remoteCallService.call());
    verify(remoteCallService, times(3)).call();
    assertThat(message, is("Completed"));
}

This is not ideal, however; a better way would be one where the callers don't have to be explicitly aware of the fact that there is retry logic in place. Given this, the following are the approaches to incorporate Spring-retry logic.

Approach 1: Custom Aspect to incorporate Spring-retry

This approach should be fairly intuitive, as the retry logic can be considered a cross-cutting concern and a good way to implement a cross-cutting concern is using Aspects. An aspect which incorporates Spring-retry would look something along these lines:

package retry.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.retry.support.RetryTemplate;

@Aspect
public class RetryAspect {

    private static Logger logger = LoggerFactory.getLogger(RetryAspect.class);

    @Autowired
    private RetryTemplate retryTemplate;

    @Pointcut("execution(* retry.service..*(..))")
    public void serviceMethods() {
        //
    }

    @Around("serviceMethods()")
    public Object aroundServiceMethods(ProceedingJoinPoint joinPoint) {
        try {
            return retryTemplate.execute(retryContext -> joinPoint.proceed());
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }
}

This aspect intercepts the remote service call and delegates the call to the retryTemplate. A full working test is here.
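The aspect above assumes that a RetryTemplate bean is available in the context. A minimal configuration matching the requirement stated earlier (three attempts with a 2-second delay) might look like this; the classes and setters are standard spring-retry API, but the bean wiring itself is a sketch rather than code from the original post:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryConfig {

    @Bean
    public RetryTemplate retryTemplate() {
        // give up after 3 attempts in total
        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3);

        // wait 2 seconds between attempts
        FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(2000);

        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(retryPolicy);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        return retryTemplate;
    }
}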
Approach 2: Using the Spring-retry provided advice

Out of the box, the Spring-retry project provides an advice which takes care of ensuring that targeted services can be retried. The AOP configuration to weave the advice around the service requires dealing with raw XML, as opposed to the previous approach where the aspect can be woven using Spring Java configuration. The XML configuration wires this advice around the service (a sketch of such a configuration appears at the end of this post). The full working test is here.

Approach 3: Declarative retry logic

This is the recommended approach; you will see that the code is far more concise than with the previous two approaches. With this approach, the only thing that needs to be done is to declaratively indicate which methods need to be retried:

package retry.service;

import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;

public interface RemoteCallService {
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000))
    String call() throws Exception;
}

and a full test which makes use of this declarative retry logic, also available here:

package retry;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.annotation.EnableRetry;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import retry.service.RemoteCallService;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.*;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SpringRetryDeclarativeTests {

    @Autowired
    private RemoteCallService remoteCallService;

    @Test
    public void testRetry() throws Exception {
        String message = this.remoteCallService.call();
        verify(remoteCallService, times(3)).call();
        assertThat(message, is("Completed"));
    }

    @Configuration
    @EnableRetry
    public static class SpringConfig {

        @Bean
        public RemoteCallService remoteCallService() throws Exception {
            RemoteCallService remoteService = mock(RemoteCallService.class);
            when(remoteService.call())
                    .thenThrow(new RuntimeException("Remote Exception 1"))
                    .thenThrow(new RuntimeException("Remote Exception 2"))
                    .thenReturn("Completed");
            return remoteService;
        }
    }
}

The @EnableRetry annotation activates the processing of @Retryable annotated methods and internally uses logic along the lines of approach 2, without the end user needing to be explicit about it. I hope this gives you a slightly better taste of how to incorporate Spring-retry in your project. All the code that I have demonstrated here is also available in my github project here: https://github.com/bijukunjummen/test-spring-retry
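For Approach 2, the XML wiring that weaves spring-retry's out-of-the-box advice around the service might look roughly like the following. The pointcut expression mirrors the one used in the custom aspect above; the bean id and overall layout are assumptions of this sketch rather than details from the original post:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop.xsd">

    <!-- the advice shipped with spring-retry; a customized RetryTemplate
         can be injected via its retryOperations property if needed -->
    <bean id="retryAdvice"
          class="org.springframework.retry.interceptor.RetryOperationsInterceptor"/>

    <!-- weave the advice around the remote service methods -->
    <aop:config>
        <aop:pointcut id="remoteServiceMethods"
                      expression="execution(* retry.service..*(..))"/>
        <aop:advisor pointcut-ref="remoteServiceMethods" advice-ref="retryAdvice"/>
    </aop:config>

</beans>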
January 5, 2015
by Biju Kunjummen
· 76,483 Views · 4 Likes
Remote JMX access to WildFly (or JBoss AS7) using JConsole
One of the goals of JBoss AS7 was to make it much more secure by default when compared to previous versions. One of the areas which was directly impacted by this goal was that you could no longer expect the server to expose some service on a port and get access to it without any authentication/authorization. Remember that in previous versions of JBoss AS you could access the JNDI port or the JMX port without any authentication/authorization, as long as those ports were opened for communication remotely. Finer-grained authorization on such ports, in JBoss AS7, allows the server to control who gets to invoke operations over a port. Of course, this is not just limited to JBoss AS7 but continues to be the goal in WildFly (which is the renamed JBoss Application Server). In fact, WildFly has gone one step further and now has the feature of "one single port" for all communication.

JMX communication in JBoss AS7 and WildFly

With that background, we'll now focus on JMX communication in JBoss AS7 and WildFly. I'll use WildFly (8.2.0 Final) as a reference for the rest of this article, but the same details apply (with minor changes) to other major versions of JBoss AS7 and WildFly that have been released to date. The WildFly server is composed of "subsystems", each of which exposes a particular set of functionality. For example, there's the EE subsystem which supports the Java EE feature set. Then there's the Undertow subsystem which supports web/HTTP server functionality. Similarly, there's a JMX subsystem which exposes the JMX feature set on the server. As you are all aware, I'm sure, the JMX service is commonly used for monitoring and even managing Java servers, and this includes managing the servers remotely. The JMX subsystem in WildFly allows remote access to the JMX service, and port 9990 is what is used for that remote JMX communication.

JConsole for remote JMX access against JBoss AS7 and WildFly

Java (JDK) comes bundled with the JConsole tool which allows connecting to local or remote Java runtimes which expose the JMX service. The tool is easy to use: all you have to do is run the jconsole command and it will show a graphical menu listing any local Java processes and also an option to specify a remote URL to connect to a remote process:

# Start the JConsole
$JAVA_HOME/bin/jconsole

Let's assume that you have started a WildFly standalone server locally. Now when you start jconsole, you'll notice that the WildFly Java process is listed among the local running processes to which you can connect. When you select the WildFly Java instance, you'll be auto-connected to it and you'll notice the MBeans that are exposed by the server. However, in the context of this article, this "local process" mode in JConsole isn't what we are interested in. Let's use the "Remote process" option in that JConsole menu which allows you to specify the remote URL to connect to the Java runtime and the username and password to use to connect to that instance. Even though our WildFly server is running locally, we can use this "Remote process" option to try and connect to it. So let's try it out. Before that though, let's consider the following few points:
• Remember that the JMX subsystem in WildFly allows remote access on port 9990
• For remote access to JMX, the URL is of the format service:jmx:[vendor-specific-protocol]://[host]:[port]. The vendor-specific protocol is the interesting bit here. In the case of WildFly that vendor-specific-protocol is http-remoting-jmx.
Remember that WildFly is secure by default, which means that just because the JMX subsystem exposes the 9990 port for remote communication, it doesn't mean it's open for communication to anyone. In order to be allowed to communicate over this port, the caller client is expected to be authenticated and authorized. This is backed by the "ManagementRealm" in WildFly. Users authenticated and authorized against this realm are allowed access to that port.

Keeping those points in mind, let's first create a user in the Management Realm. This can be done using the add-user command line script (which is present in the JBOSS_HOME/bin folder). I won't go into the details of that since there's enough documentation for it. Let's just assume that I created a user named "wflyadmin" with an appropriate password in the Management Realm. To verify that the user has been properly created, in the right realm, let's access the WildFly admin console at the URL http://localhost:9990/console. You'll be asked for a username and password for access. Use the same username and password of the newly created user. If the login works, then you are good. If not, then make sure you have done things right while adding the new user (as I said, I won't go into the details of adding a new user since it would just stretch this article unnecessarily).

So at this point we have created a user named "wflyadmin" belonging to the ManagementRealm. We'll be using this same user account for accessing the JMX service on WildFly, through JConsole. So let's now bring up jconsole as usual:

$JAVA_HOME/bin/jconsole

On the JConsole menu let's again select the "Remote process" option and use the following URL in the URL text box:

service:jmx:http-remoting-jmx://localhost:9990

Note: For JBoss AS 7.x and JBoss EAP 6.x, the vendor-specific protocol is remoting-jmx and the port for communication is 9999. So the URL will be service:jmx:remoting-jmx://localhost:9999

In the username and password textboxes, use the same user/pass that you newly created. Finally, click on Connect. What do you see? It doesn't work! The connection fails. So what went wrong? Why isn't the JConsole remote access to WildFly working? You did all the obvious things necessary to access the WildFly JMX service remotely, but you keep seeing that JConsole can't connect to it. What could be the reason? Remember, in one of those points earlier, I noted that the "vendor specific protocol" is an interesting bit? We use http-remoting-jmx, and that protocol internally relies on certain WildFly/JBoss specific libraries, primarily for remote communication and authentication and authorization. These libraries are WildFly server specific and hence aren't part of the standard Java runtime environment. When you start jconsole, it uses a standard classpath which just has the relevant libraries that are part of the JDK/JRE. To solve this problem, what you need to do is bring the WildFly server specific libraries into the classpath of JConsole. Before looking into how to do that, let's see which WildFly-specific libraries are needed. All the necessary classes for this to work are part of the jboss-cli-client.jar which is present in the JBOSS_HOME/bin/client/ folder. So all we need to do is include this jar in the classpath of the jconsole tool. To do that we use the -J option of the jconsole tool, which allows passing parameters to the Java runtime of jconsole.
The command to do that is:

$JAVA_HOME/bin/jconsole -J-Djava.class.path=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/jconsole.jar:/opt/wildfly-8.2.0.Final/bin/client/jboss-cli-client.jar

(Note that for Windows the classpath separator is the semi-colon character instead of the colon.)

Note, the server specific jar for JBoss AS 7.x and JBoss EAP 6.x is named jboss-client.jar and is present at the same JBOSS_HOME/bin/client directory location.

So we are passing -Djava.class.path as the parameter to the jconsole Java runtime, using the -J option. Notice that we have specified more than just our server specific jar in that classpath. That's because using -Djava.class.path is expected to provide the complete classpath. We are including the jars from the Java JDK lib folder that are necessary for JConsole and also our server specific jar in that classpath. Running that command should bring up JConsole as usual. Let's go ahead and select the "Remote process" option and specify the same URL as before:

service:jmx:http-remoting-jmx://localhost:9990

and the same username and password as before, and click Connect. This time you should be able to connect and should start seeing the MBeans and other services exposed over JMX.

How about providing a script which does this necessary classpath setup? Since it's a common thing to try and use JConsole for remote access against WildFly, it's reasonable to expect to have a script which sets up the classpath (as above), and you could then just use that script. That's why WildFly ships such a script. It's in the JBOSS_HOME/bin folder and is called jconsole.sh (and jconsole.bat for Windows). This is just a wrapper script which internally invokes the jconsole tool present in the Java JDK, after setting up the classpath appropriately. All you have to do is run:

$JBOSS_HOME/bin/jconsole.sh

What about using JConsole from a really remote machine, against WildFly? So far we were using the jconsole tool that was present on the same machine as the WildFly instance, which meant that we had filesystem access to the WildFly server specific jars present in the WildFly installation directory on the filesystem. This allowed us to set up the classpath for jconsole to point to the jar on the local filesystem. What if you wanted to run jconsole from a remote machine against a WildFly server which is installed and running on a different machine? In that case, your remote client machine won't have filesystem access to the WildFly installation directory. So to get jconsole running in such a scenario, you will have to copy JBOSS_HOME/bin/client/jboss-cli-client.jar to a directory of your choice on your remote client machine, then set up the classpath for the jconsole tool as explained earlier and point it to that jar location. That should get you access to the JMX services of WildFly from jconsole on a remote machine.

More questions? If you still have problems getting this to work or have other questions, please start a discussion in the JBoss community forums here https://developer.jboss.org/en/wildfly/content.
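For completeness, here is a plain-Java sketch of connecting to the same http-remoting-jmx endpoint programmatically. It assumes jboss-cli-client.jar is on the classpath, just as with JConsole; the user name matches the one created earlier, and the password is a placeholder:

import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class WildFlyJmxClient {

    public static void main(String[] args) throws Exception {
        // same URL as used in the JConsole "Remote process" dialog
        JMXServiceURL url = new JMXServiceURL("service:jmx:http-remoting-jmx://localhost:9990");

        // credentials of the ManagementRealm user created with add-user
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "wflyadmin", "the-password" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + connection.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}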
January 5, 2015
by Jaikiran Pai
· 61,297 Views · 2 Likes