Pieter Humphrey

DZone Core

Director of Developer Relations at MariaDB (@PieterHumphrey)

San Francisco, US

Joined Aug 2012

About

Pieter Humphrey has been in the tech industry for 20 years, working in development, marketing, sales, and developer relations to advance Java technology in the enterprise and, more recently, in the cloud. Developers have been a consistent focus across his experiences at companies like BEA, Oracle, VMware, Pivotal, Elastic, and DataStax. He is currently the Director of Developer Relations at MariaDB.

Stats

Reputation: 4418
Pageviews: 2.3M
Articles: 27
Comments: 16

Articles

Create a Full-Stack App Using Nuxt.js, NestJS, and DataStax Astra DB (With a Little Help From GitHub Copilot)
In this post, learn how to create a full-stack application, complete with dynamic data retrieved from a cloud database by an API.
July 15, 2022
· 6,776 Views · 1 Like
Python, NoSQL and FastAPI Tutorial: Web Scraping on a Schedule
Learn how to web scrape on a schedule by integrating the Python framework called FastAPI with Astra DB.
March 23, 2022
· 3,670 Views · 2 Likes
The Search for a Cloud-Native Database
Cloud-native applications need cloud-native databases. In this post, let's try to understand what a cloud-native database is and how it relates to Kubernetes.
Updated February 9, 2022
· 11,514 Views · 2 Likes
Best Practices for Data Pipeline Error Handling in Apache NiFi
Learn actionable strategies for error management modeling in Apache NiFi data pipelines, and understand the benefits of planning for error handling.
May 19, 2021
· 16,994 Views · 8 Likes
Understanding When to Use RabbitMQ or Apache Kafka
RabbitMQ and Apache Kafka are two of the most popular messaging technologies on the market today. Get the insight you need to choose the right software for you.
May 1, 2017
· 67,487 Views · 46 Likes
12 Factors and Beyond in Java
Get a sense of how to estimate the skills and knowledge of your users and developers, and invest adequately in understanding the existing codebase.
October 4, 2016
· 31,125 Views · 15 Likes
A Geospatial Messenger With Kotlin, Spring Boot and PostgreSQL
Sebastien Deleuze walks us through a sample application, written in Kotlin, that uses some essential technologies.
March 29, 2016
· 10,952 Views · 2 Likes
Migrating a Spring Web MVC Application from JSP to AngularJS
Moving from server-side rendering view technologies to client-side ones can be tricky. Here are some considerations to make before starting the migration.
August 19, 2015
· 33,484 Views · 3 Likes
Microservices with Spring
How to put Spring, Spring Boot, and Spring Cloud together to create a microservice.
July 15, 2015
· 20,190 Views · 6 Likes
Hibernate, Jackson, Jetty etc Support in Spring 4.2
[This article was written by Juergen Hoeller.]

Spring is well-known to actively support the latest versions of common open source projects out there, e.g. Hibernate and Jackson, but also common server engines such as Tomcat and Jetty. We usually do this in a backwards-compatible fashion, supporting older versions at the same time - either through reflective adaptation or through separate support packages. This allows applications to selectively decide about upgrades, e.g. upgrading to the latest Spring and Jackson versions while preserving an existing Hibernate 3 investment.

With the upcoming Spring Framework 4.2, we are taking the opportunity to support quite a list of new open source project versions, including some rather major ones:

  • Hibernate ORM 5.0
  • Hibernate Validator 5.2
  • Undertow 1.2 / WildFly 9
  • Jackson 2.6
  • Jetty 9.3
  • Reactor 2.0
  • SockJS 1.0 final
  • Moneta 1.0 (the JSR-354 Money & Currency reference implementation)

While early support for the above is shipping in the Spring Framework 4.2 RCs already, the ultimate point that we're working towards is of course 4.2 GA - scheduled for July 15th. At this point, we're eagerly waiting for Hibernate ORM 5.0 and Hibernate Validator 5.2 to GA (both of them are currently at RC1), as well as WildFly 9.0 (currently at RC2) and Jackson 2.6 (currently at RC3). Tight timing… By our own 4.2 GA on July 15th, we'll keep supporting the latest release candidates, rolling any remaining GA support into our 4.2.1 if necessary. If you'd like to give some of those current release candidates a try with Spring, let us know how it goes. Now is a perfect time for such feedback towards Spring Framework 4.2 GA!

P.S.: Note that you may of course keep using e.g. Hibernate ORM 3.6+ and Hibernate Validator 4.3+ even with Spring Framework 4.2. A migration to Hibernate ORM 5.0 in particular is likely to affect quite a bit of your setup, so we only recommend it in a major revision of your application, whereas Spring Framework 4.2 itself is designed as a straightforward upgrade path with no impact on existing code and is therefore immediately recommended to all users.
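As a rough illustration of the kind of upgrade described above, here is a minimal, hypothetical Java configuration that wires Hibernate ORM 5 through the orm.hibernate5 support package that ships with Spring Framework 4.2. The entity package name and the injected DataSource are placeholders, not something from the original announcement.

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.orm.hibernate5.LocalSessionFactoryBean;

    @Configuration
    public class HibernateConfig {

        // Builds a Hibernate 5 SessionFactory from whatever DataSource is already defined.
        // "com.example.domain" is a placeholder entity package.
        @Bean
        public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
            LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
            sessionFactory.setDataSource(dataSource);
            sessionFactory.setPackagesToScan("com.example.domain");
            return sessionFactory;
        }
    }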
July 2, 2015
· 2,022 Views
Spring Integration Kafka 1.2 is Available, With 0.8.2 Support and Performance Enhancements
Spring Integration Kafka 1.2 is out with a major performance overhaul.
June 25, 2015
· 2,680 Views
Spring XD 1.2 GA, Spring XD 1.1.3 and Flo for Spring XD Beta Released
Written by Mark Pollack. Today, we are pleased to announce the general availability of Spring XD 1.2 and Spring XD 1.1.3, and the release of Flo for Spring XD Beta.

  • 1.2.0.GA: zip
  • 1.1.3.RELEASE: zip
  • Flo for Spring XD Beta

You can also install XD 1.2 using brew and rpm. The 1.2 release includes a wide range of new features and improvements. The release journey was an eventful one, mainly due to Spring XD's popularity with so many different groups, each with their respective request priorities. However, the Spring XD team rose to the challenge, and it is rewarding to look back and review the amount of innovation delivered to meet our commitments toward simplifying big data complexity. Here is a summary of what we have been busy with for the last three months and the value created for the community and our customers.

Flo for Spring XD and UI Improvements

Flo for Spring XD is an HTML5 canvas application that runs on top of the Spring XD runtime, offering a graphical interface for creating, managing, and monitoring streaming data pipelines. Here is a short screencast showing you how to build an advanced stream definition. You can browse the documentation for additional information and links to additional screencasts of Flo in action. The XD admin screen also includes a new Analytics section that allows you to easily view gauges, counters, field-value counters, and aggregate counters.

Performance Improvements

Anticipating increased high-throughput and low-latency IoT requirements, we've made several performance optimizations within the underlying message-bus implementation to deliver several million messages per second transported between Spring XD containers using Kafka as a transport. With these optimizations, we are now on par with the performance of Kafka's own testing tools. However, we are using the more feature-rich Spring Integration Kafka client instead of Kafka's high-level consumer library. For anyone who is interested in reproducing these numbers, please refer to the XD benchmarking blog, which describes the tests performed and the infrastructure used in detail.

Apache Ambari and Pivotal HD

To help automate the deployment of Spring XD on an Apache Hadoop® cluster, we added an Apache Ambari® plugin for Spring XD. The plugin is supported on both Pivotal HD 3.0 and Hortonworks HDP 2.2 distributions. We also added support in Spring XD for Pivotal HD 3.0, bringing the total number of Hadoop versions supported to five.

New Sources, Processors, Sinks, and Batch Jobs

One of Spring XD's biggest value propositions is its complete set of out-of-the-box data connectivity adapters that can be used to create real-time and batch-based data pipelines, and these require little to no user code for common use cases. With the help of community contributions, we now have MongoDB, VideCap, and FTP source modules, an XSLT-transformer processor, and an FTP sink module. The XD team also developed a Cassandra sink and a language-detection processor. Recognizing its important role in the Pivotal Big Data portfolio, we have also added native integration with Pivotal Greenplum Database and Pivotal HAWQ through a gpfdist sink for real-time streaming, as well as support for gpload-based batch jobs.

Adding to our developer productivity theme and the use of Spring XD in production for high-volume data ingest use cases, we are delighted to recognize Simon Tao and Yu Cao (EMC² Office of the CTO & Labs China), who have been operationalizing Spring XD data pipelines in production since 2014, and also for the VideCap source module contribution. Their use case and implementation specifics (in their own words) are below.

"There are significant demands to extract insights from a large magnitude of unstructured video streams for the video surveillance industry. Prior to being analyzed by data scientists, the video surveillance data needs to be ingested in the first place. To tackle this challenge, we built a highly scalable and extensible video-data ingestion platform using Spring XD. This platform is operationally ready to ingest different kinds of video sources into a centralized Big Data Lake. Given the out-of-the-box features within Spring XD, the platform is designed to allow rich video content processing capabilities such as video transcoding and object detection, etc. The platform also supports various types of video sources—data processors and data exporting destinations (e.g. HDFS, Gemfire XD and Spark)—which are built as custom modules in Spring XD and are highly reusable and composable. With a declarative DSL, a video ingestion stream will be handled by a video ingestion pipeline defined as a Directed Acyclic Graph of modules. The pipeline is designed to be deployed in a clustered environment with upstream modules transferring data to downstream ones efficiently via the message bus. The Spring XD distributed runtime allows each module in the pipeline to have multiple instances that run in parallel on different nodes. By scaling out horizontally, our system is capable of supporting large-scale video surveillance deployments with high volumes of video data and complex data processing workloads."

Custom Module Registry and HA Support

Though we have had the flexibility to configure a shared network location for distributed availability of custom modules (via xd.customModule.home), we also recognized the importance of having the module registry resilient under failure scenarios—hence, we now have an HDFS-backed module registry. Having this setup for production deployment provides consistent availability of custom module bits and the flexibility of choices, as needed by the business requirements.

Pivotal Cloud Foundry Integration

Furthering the Pivotal Cloud Foundry integration efforts, we have made several foundation-level changes to the Spring XD runtime, so we are able to run Spring XD modules as cloud-native apps in Lattice and Diego. We have aggressive roadmap plans to launch Spring XD on Diego proper. While studying Diego's Receptor API (written in Go!), we created a Java Receptor API, which is now proposed to Cloud Foundry for incubation.

Next Steps

We have some very interesting developments on the horizon. Perhaps most important, we will be launching new projects that focus on message-driven and batch-oriented "data microservices". These will be built directly on Spring Boot as well as Spring Integration and Spring Batch, respectively. Our main goal is to provide the simplest possible developer experience for creating cloud-native, data-centric microservice apps.

In turn, Spring XD 2.0 will be refactored as a layer above those projects, to support the composition of those data microservices into streams and jobs as well as all of the "as a service" aspects that it provides today, but it will have a major focus on deployment to Cloud Foundry and Lattice. We will be posting more on these new projects soon, so stay tuned! Feedback is very important, so please get in touch with questions and comments via:

  • the spring-xd tag on Stack Overflow
  • Spring JIRA or GitHub Issues

Editor's Note: ©2015 Pivotal Software, Inc. All rights reserved. Pivotal, Pivotal HD, Pivotal Greenplum Database, Pivotal Gemfire and Pivotal Cloud Foundry are trademarks and/or registered trademarks of Pivotal Software, Inc. in the United States and/or other countries. Apache, Apache Hadoop, Hadoop and Apache Ambari are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
June 21, 2015
· 3,302 Views
Binding to Data Services with Spring Boot in Cloud Foundry
Written by Dave Syer on the Spring blog.

In this article we look at how to bind a Spring Boot application to data services (JDBC, NoSQL, messaging, etc.) and the various sources of default and automatic behaviour in Cloud Foundry, providing some guidance about which ones to use and which ones will be active under what conditions. Spring Boot provides a lot of autoconfiguration and external binding features, some of which are relevant to Cloud Foundry, and many of which are not. Spring Cloud Connectors is a library that you can use in your application if you want to create your own components programmatically, but it doesn't do anything "magical" by itself. And finally there is the Cloud Foundry Java buildpack, which has an "auto-reconfiguration" feature that tries to ease the burden of moving simple applications to the cloud. The key to correctly configuring middleware services, like JDBC or AMQP or Mongo, is to understand what each of these tools provides, how they influence each other at runtime, and how to switch parts of them on and off. The goal should be a smooth transition from local execution of an application on a developer's desktop to a test environment in Cloud Foundry, and ultimately to production in Cloud Foundry (or otherwise) with no changes in source code or packaging, per the twelve-factor application guidelines.

There is some simple source code accompanying this article. To use it you can clone the repository and import it into your favourite IDE. You will need to remove two dependencies from the complete project to get to the same point where we start discussing concrete code samples, namely spring-boot-starter-cloud-connectors and auto-reconfiguration.

NOTE: The current co-ordinates for all the libraries being discussed are org.springframework.boot:spring-boot-*:1.2.3.RELEASE, org.springframework.boot:spring-cloud-*-connector:1.1.1.RELEASE, org.cloudfoundry:auto-reconfiguration:1.7.0.RELEASE.

TIP: The source code in GitHub includes a docker-compose.yml file (docs here). You can use that to create a local MySQL database if you don't have one running already. You don't actually need it to run most of the code below, but it might be useful to validate that it will actually work.

Punchline for the Impatient

If you want to skip the details, and all you need is a recipe for running locally with H2 and in the cloud with MySQL, then start here and read the rest later when you want to understand in more depth. (Similar options exist for other data services, like RabbitMQ, Redis, Mongo, etc.)

Your first and simplest option is to simply do nothing: do not define a DataSource at all but put H2 on the classpath. Spring Boot will create the H2 embedded DataSource for you when you run locally. The Cloud Foundry buildpack will detect a database service binding and create a DataSource for you when you run in the cloud. If you add Spring Cloud Connectors as well, your app will also work in other cloud platforms, as long as you include a connector.

That might be good enough if you just want to get something working. If you want to run a serious application in production you might want to tweak some of the connection pool settings (e.g. the size of the pool, various timeouts, the important test-on-borrow flag). In that case the buildpack auto-reconfiguration DataSource will not meet your requirements and you need to choose an alternative, and there are a number of more or less sensible choices. The best choice is probably to create a DataSource explicitly using Spring Cloud Connectors, but guarded by the "cloud" profile:

@Configuration
@Profile("cloud")
public class DataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    @Bean
    @ConfigurationProperties(DataSourceProperties.PREFIX)
    public DataSource dataSource() {
        return cloud().getSingletonServiceConnector(DataSource.class, null);
    }
}

You can use spring.datasource.* properties (e.g. in application.properties or a profile-specific version of that) to set the additional properties at runtime. The "cloud" profile is automatically activated for you by the buildpack.

Now for the details. We need to build up a picture of what's going on in your application at runtime, so we can learn from that how to make a sensible choice for configuring data services.

Layers of Autoconfiguration

Let's take a simple app with a DataSource (similar considerations apply to RabbitMQ, Mongo, Redis):

@SpringBootApplication
public class CloudApplication {

    @Autowired
    private DataSource dataSource;

    public static void main(String[] args) {
        SpringApplication.run(CloudApplication.class, args);
    }
}

This is a complete application: the DataSource can be @Autowired because it is created for us by Spring Boot. The details of the DataSource (concrete class, JDBC driver, connection URL, etc.) depend on what is on the classpath. Let's assume that the application uses Spring JDBC via the spring-boot-starter-jdbc (or spring-boot-starter-data-jpa), so it has a DataSource implementation available from Tomcat (even if it isn't a web application), and this is what Spring Boot uses. Consider what happens when:

  • Classpath contains H2 (only) in addition to the starters: the DataSource is the Tomcat high-performance pool from DataSourceAutoConfiguration and it connects to an in-memory database "testdb".

  • Classpath contains H2 and MySQL: the DataSource is still H2 (same as before) because we didn't provide any additional configuration for MySQL and Spring Boot can't guess the credentials for connecting.

  • Add spring-boot-starter-cloud-connectors to the classpath: no change in the DataSource because the Spring Cloud Connectors do not detect that they are running in a cloud platform. The providers that come with the starter all look for specific environment variables, which they won't find unless you set them, or run the app in Cloud Foundry, Heroku, etc.

  • Run the application in the "cloud" profile with spring.profiles.active=cloud: no change yet in the DataSource, but this is one of the things that the Java buildpack does when your application runs in Cloud Foundry.

  • Run in the "cloud" profile and provide some environment variables to simulate running in Cloud Foundry and binding to a MySQL service:

    VCAP_APPLICATION={"name":"application","instance_id":"FOO"}
    VCAP_SERVICES={"mysql":[{"name":"mysql","tags":["mysql"],"credentials":{"uri":"mysql://root:root@localhost/test"}}]}

    (The "tags" provide a hint that we want to create a MySQL DataSource, the "uri" provides the location, and the "name" becomes a bean ID.) The DataSource is now using MySQL with the credentials supplied by the VCAP_* environment variables. Spring Boot has some autoconfiguration for the Connectors, so if you looked at the beans in your application you would see a CloudFactory bean, and also the DataSource bean (with ID "mysql"). The autoconfiguration is equivalent to adding @ServiceScan to your application configuration. It is only active if your application runs in the "cloud" profile, and only if there is no existing @Bean of type Cloud, and the configuration flag spring.cloud.enabled is not "false".

  • Add the "auto-reconfiguration" JAR from the Java buildpack (Maven co-ordinates org.cloudfoundry:auto-reconfiguration:1.7.0.RELEASE). You can add it as a local dependency to simulate running an application in Cloud Foundry, but it wouldn't be normal to do this with a real application (this is just for experimenting with autoconfiguration). The auto-reconfiguration JAR now has everything it needs to create a DataSource, but it doesn't (yet) because it detects that you already have a bean of type CloudFactory, one that was added by Spring Boot.

  • Remove the explicit "cloud" profile. The profile will still be active when your app starts because the auto-reconfiguration JAR adds it back again. There is still no change to the DataSource because Spring Boot has created it for you via the @ServiceScan.

  • Remove the spring-boot-starter-cloud-connectors dependency, so that Spring Boot backs off creating a CloudFactory. The auto-reconfiguration JAR actually has its own copy of Spring Cloud Connectors (all the classes with different package names) and it now uses them to create a DataSource (in a BeanFactoryPostProcessor). The Spring Boot autoconfigured DataSource is replaced with one that binds to MySQL via the VCAP_SERVICES. There is no control over pool properties, but it does still use the Tomcat pool if available (no support for Hikari or DBCP2).

  • Remove the auto-reconfiguration JAR and the DataSource reverts to H2.

TIP: Use the web and actuator starters with endpoints.health.sensitive=false to inspect the DataSource quickly through "/health". You can also use the "/beans", "/env" and "/autoconfig" endpoints to see what is going on in the autoconfigurations and why.

NOTE: Running in Cloud Foundry or including the auto-reconfiguration JAR in the classpath locally both activate the "cloud" profile (for the same reason). The VCAP_* env vars are the thing that makes Spring Cloud and/or the auto-reconfiguration JAR create beans.

NOTE: The URL in the VCAP_SERVICES is actually not a "jdbc" scheme, which should be mandatory for JDBC connections. This is, however, the format that Cloud Foundry normally presents it in because it works for nearly every language other than Java. Spring Cloud Connectors or the buildpack auto-reconfiguration, if they are creating a DataSource, will translate it into a jdbc:* URL for you.

NOTE: The MySQL URL also contains user credentials and a database name which are valid for the Docker container created by the docker-compose.yml in the sample source code. If you have a local MySQL server with different credentials you could substitute those.

TIP: If you use a local MySQL server and want to verify that it is connected, you can use the "/health" endpoint from the Spring Boot Actuator (included in the sample code already). Or you could create a schema-mysql.sql file in the root of the classpath and put a simple keep-alive query in it (e.g. SELECT 1). Spring Boot will run that on startup, so if the app starts successfully you have configured the database correctly.

The auto-reconfiguration JAR is always on the classpath in Cloud Foundry (by default) but it backs off creating any DataSource if it finds a org.springframework.cloud.CloudFactory bean (which is provided by Spring Boot if the CloudAutoConfiguration is active). Thus the net effect of adding it to the classpath, if the Connectors are also present in a Spring Boot application, is only to enable the "cloud" profile. You can see it making the decision to skip auto-reconfiguration in the application logs on startup:

2015-04-14 15:11:11.765 INFO 12727 --- [ main] urceCloudServiceBeanFactoryPostProcessor : Skipping auto-reconfiguring beans of type javax.sql.DataSource
2015-04-14 15:11:57.650 INFO 12727 --- [ main] ongoCloudServiceBeanFactoryPostProcessor : Skipping auto-reconfiguring beans of type org.springframework.data.mongodb.MongoDbFactory
2015-04-14 15:11:57.650 INFO 12727 --- [ main] bbitCloudServiceBeanFactoryPostProcessor : Skipping auto-reconfiguring beans of type org.springframework.amqp.rabbit.connection.ConnectionFactory
2015-04-14 15:11:57.651 INFO 12727 --- [ main] edisCloudServiceBeanFactoryPostProcessor : Skipping auto-reconfiguring beans of type org.springframework.data.redis.connection.RedisConnectionFactory
... etc.

Create Your Own DataSource

The last section walked through most of the important autoconfiguration features in the various libraries. If you want to take control yourself, one thing you could start with is to create your own instance of DataSource. You could do that, for instance, using a DataSourceBuilder, which is a convenience class and comes as part of Spring Boot (it chooses an implementation based on the classpath):

@SpringBootApplication
public class CloudApplication {

    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    ...
}

The DataSource as we've defined it is useless because it doesn't have a connection URL or any credentials, but that can easily be fixed. Let's run this application as if it was in Cloud Foundry: with the VCAP_* environment variables and the auto-reconfiguration JAR, but not Spring Cloud Connectors on the classpath, and no explicit "cloud" profile. The buildpack activates the "cloud" profile, creates a DataSource and binds it to the VCAP_SERVICES. As already described briefly, it removes your DataSource completely and replaces it with a manually registered singleton (which doesn't show up in the "/beans" endpoint in Spring Boot).

Now add Spring Cloud Connectors back into the classpath of the application and see what happens when you run it again. It actually fails on startup! What has happened? The @ServiceScan (from Connectors) goes and looks for bound services, and creates bean definitions for them. That's a bit like the buildpack, but different because it doesn't attempt to replace any existing bean definitions of the same type. So you get an autowiring error because there are 2 DataSources and no way to choose one to inject into your application in various places where one is needed. To fix that we are going to have to take control of the Cloud Connectors (or simply not use them).

Using a CloudFactory to Create a DataSource

You can disable the Spring Boot autoconfiguration and the Java buildpack auto-reconfiguration by creating your own Cloud instance as a @Bean:

@Bean
public Cloud cloud() {
    return new CloudFactory().getCloud();
}

@Bean
@ConfigurationProperties(DataSourceProperties.PREFIX)
public DataSource dataSource() {
    return cloud().getSingletonServiceConnector(DataSource.class, null);
}

Pros: The Connectors autoconfiguration in Spring Boot backed off, so there is only one DataSource. It can be tweaked using application.properties via spring.datasource.* properties, per the Spring Boot User Guide.

Cons: It doesn't work without VCAP_* environment variables (or some other cloud platform). It also relies on the user remembering to create the Cloud as a @Bean in order to disable the autoconfiguration.

Summary: we are still not in a comfortable place (an app that doesn't run without some intricate wrangling of environment variables is not much use in practice).

Dual Running: Local With H2, in the Cloud With MySQL

There is a local configuration file option in Spring Cloud Connectors, so you don't have to be in a real cloud platform to use them, but it's awkward to set up despite being boilerplate, and you also have to somehow switch it off when you are in a real cloud platform. The last point there is really the important one, because you end up needing a local file to run locally, but only when running locally, and it can't be packaged with the rest of the application code (it violates, for instance, the twelve-factor guidelines). So to move forward with our explicit @Bean definition it's probably better to stick to mainstream Spring and Spring Boot features, e.g. using the "cloud" profile to guard the explicit creation of a DataSource:

@Configuration
@Profile("cloud")
public class DataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    @Bean
    @ConfigurationProperties(DataSourceProperties.PREFIX)
    public DataSource dataSource() {
        return cloud().getSingletonServiceConnector(DataSource.class, null);
    }
}

With this in place we have a solution that works smoothly both locally and in Cloud Foundry. Locally, Spring Boot will create a DataSource with an H2 embedded database. In Cloud Foundry it will bind to a singleton service of type DataSource and switch off the autoconfigured one from Spring Boot. It also has the benefit of working with any platform supported by Spring Cloud Connectors, so the same code will run on Heroku and Cloud Foundry, for instance. Because of the @ConfigurationProperties you can bind additional configuration to the DataSource to tweak connection pool properties and things like that if you need to in production.

NOTE: We have been using MySQL as an example database server, but actually PostgreSQL is at least as compelling a choice, if not more. When paired with H2 locally, for instance, you can put H2 into its "Postgres compatibility" mode and use the same SQL in both environments.

Manually Creating a Local and a Cloud DataSource

If you like creating DataSource beans, and you want to do it both locally and in the cloud, you could use 2 profiles ("cloud" and "local"), for example. But then you would have to find a way to activate the "local" profile by default when not in the cloud. There is already a way to do that built into Spring, because there is always a default profile called "default" (by default). So this should work:

@Configuration
@Profile("default") // or "!cloud"
public class LocalDataSourceConfiguration {

    @Bean
    @ConfigurationProperties(DataSourceProperties.PREFIX)
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }
}

@Configuration
@Profile("cloud")
public class CloudDataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    @Bean
    @ConfigurationProperties(DataSourceProperties.PREFIX)
    public DataSource dataSource() {
        return cloud().getSingletonServiceConnector(DataSource.class, null);
    }
}

The "default" DataSource is actually identical to the autoconfigured one in this simple example, so you wouldn't do this unless you needed to, e.g. to create a custom concrete DataSource of a type not supported by Spring Boot. You might think it's all getting a bit complicated, but in fact Spring Boot is not making it any harder; we are just dealing with the consequences of needing to control the DataSource construction in 2 environments.

Using a Non-Embedded Database Locally

If you don't want to use H2 or any in-memory database locally, then you can't really avoid having to configure it (Spring Boot can guess a lot from the URL, but it will need that at least). So at a minimum you need to set some spring.datasource.* properties (the URL, for instance). That isn't hard to do, and you can easily set different values in different environments using additional profiles, but as soon as you do that you need to switch off the default values when you go into the cloud. To do that you could define the spring.datasource.* properties in a profile-specific file (or document in YAML) for the "default" profile, e.g. application-default.properties, and these will not be used in the "cloud" profile.

A Purely Declarative Approach

If you prefer not to write Java code, or don't want to use Spring Cloud Connectors, you might want to try and use Spring Boot autoconfiguration and external properties (or YAML) files for everything. For example, Spring Boot creates a DataSource for you if it finds the right stuff on the classpath, and it can be completely controlled through application.properties, including all the granular features on the DataSource that you need in production (like pool sizes and validation queries). So all you need is a way to discover the location and credentials for the service from the environment. The buildpack translates Cloud Foundry VCAP_* environment variables into usable property sources in the Spring Environment. Thus, for instance, a DataSource configuration might look like this:

spring.datasource.url: ${cloud.services.mysql.connection.jdbcurl:jdbc:h2:mem:testdb}
spring.datasource.username: ${cloud.services.mysql.connection.username:sa}
spring.datasource.password: ${cloud.services.mysql.connection.password:}
spring.datasource.testOnBorrow: true

The "mysql" part of the property names is the service name in Cloud Foundry (so it is set by the user). And of course the same pattern applies to all kinds of services, not just a JDBC DataSource. Generally speaking it is good practice to use external configuration and in particular @ConfigurationProperties, since they allow maximum flexibility, for instance to override using System properties or environment variables at runtime.

Note: similar features are provided by Spring Boot, which provides vcap.services.* instead of cloud.services.*, so you actually end up with more than one way to do this. However, the JDBC URLs are not available from the vcap.services.* properties (non-JDBC services work fine with the corresponding vcap.services.*.credentials.url).

One limitation of this approach is that it doesn't apply if the application needs to configure beans that are not provided by Spring Boot out of the box (e.g. if you need 2 DataSources), in which case you have to write Java code anyway, and may or may not choose to use properties files to parameterize it. Before you try this yourself, though, beware that actually it doesn't work unless you also disable the buildpack auto-reconfiguration (and Spring Cloud Connectors if they are on the classpath). If you don't do that, then they create a new DataSource for you and Spring Boot cannot bind it to your properties file. Thus even for this declarative approach, you end up needing an explicit @Bean definition, and you need this part of your "cloud" profile configuration:

@Configuration
@Profile("cloud")
public class CloudDataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }
}

This is purely to switch off the buildpack auto-reconfiguration (and the Spring Boot autoconfiguration, but that could have been disabled with a properties file entry).

Mixed Declarative and Explicit Bean Definition

You can also mix the two approaches: declare a single @Bean definition so that you control the construction of the object, but bind additional configuration to it using @ConfigurationProperties (and do the same locally and in Cloud Foundry). Example:

@Configuration
public class LocalDataSourceConfiguration {

    @Bean
    @ConfigurationProperties(DataSourceProperties.PREFIX)
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }
}

(where the DataSourceBuilder would be replaced with whatever fancy logic you need for your use case). And the application.properties would be the same as above, with whatever additional properties you need for your production settings.

A Third Way: Discover the Credentials and Bind Manually

Another approach that lends itself to platform and environment independence is to declare explicit bean definitions for the @ConfigurationProperties beans that Spring Boot uses to bind its autoconfigured connectors. For instance, to set the default values for a DataSource you can declare a @Bean of type DataSourceProperties:

@Bean
@Primary
public DataSourceProperties dataSourceProperties() {
    DataSourceProperties properties = new DataSourceProperties();
    properties.setInitialize(false);
    return properties;
}

This sets a default value for the "initialize" flag, and allows other properties to be bound from application.properties (or other external properties). Combine this with the Spring Cloud Connectors and you can control the binding of the credentials when a cloud service is detected:

@Autowired(required = false)
Cloud cloud;

@Bean
@Primary
public DataSourceProperties dataSourceProperties() {
    DataSourceProperties properties = new DataSourceProperties();
    properties.setInitialize(false);
    if (cloud != null) {
        List<ServiceInfo> infos = cloud.getServiceInfos(RelationalServiceInfo.class);
        if (infos.size() == 1) {
            RelationalServiceInfo info = (RelationalServiceInfo) infos.get(0);
            properties.setUrl(info.getJdbcUrl());
            properties.setUsername(info.getUserName());
            properties.setPassword(info.getPassword());
        }
    }
    return properties;
}

and you still need to define the Cloud bean in the "cloud" profile. It ends up being quite a lot of code, and is quite unnecessary in this simple use case, but might be handy if you have more complicated bindings, or need to implement some logic to choose a DataSource at runtime. Spring Boot has similar *Properties beans for the other middleware you might commonly use (e.g. RabbitProperties, RedisProperties, MongoProperties). An instance of such a bean marked as @Primary is enough to reset the defaults for the autoconfigured connector.

Deploying to Multiple Cloud Platforms

So far, we have concentrated on Cloud Foundry as the only cloud platform in which to deploy the application. One of the nice features of Spring Cloud Connectors is that it supports other platforms, either out of the box or as extension points. The spring-boot-starter-cloud-connectors even includes Heroku support. If you do nothing at all, and rely on the autoconfiguration (the lazy programmer's approach), then your application will be deployable in all clouds where you have a connector on the classpath (i.e. Cloud Foundry and Heroku if you use the starter). If you take the explicit @Bean approach then you need to ensure that the "cloud" profile is active in the non-Cloud Foundry platforms, e.g. through an environment variable. And if you use the purely declarative approach (or any combination involving properties files) you need to activate the "cloud" profile and probably also another profile specific to your platform, so that the right properties files end up in the Environment at runtime.

Summary of Autoconfiguration and Provided Behaviour

  • Spring Boot provides a DataSource (also a RabbitMQ or Redis ConnectionFactory, Mongo, etc.) if it finds all the right stuff on the classpath. Using the "spring-boot-starter-*" dependencies is sufficient to activate the behaviour.

  • Spring Boot also provides an autowirable CloudFactory if it finds Spring Cloud Connectors on the classpath (but switches off only if it finds a @Bean of type Cloud).

  • The CloudAutoConfiguration in Spring Boot also effectively adds a @CloudScan to your application, which you would want to switch off if you ever needed to create your own DataSource (or similar).

  • The Cloud Foundry Java buildpack detects a Spring Boot application and activates the "cloud" profile, unless it is already active. Adding the buildpack auto-reconfiguration JAR does the same thing if you want to try it locally.

  • Through the auto-reconfiguration JAR, the buildpack also kicks in and creates a DataSource (ditto RabbitMQ, Redis, Mongo, etc.) if it does not find a CloudFactory bean or a Cloud bean (amongst others). So including Spring Cloud Connectors in a Spring Boot application switches off this part of the "auto-reconfiguration" behaviour (the bean creation).

  • Switching off the Spring Boot CloudAutoConfiguration is easy, but if you do that, you have to remember to switch off the buildpack auto-reconfiguration as well if you don't want it. The only way to do that is to define a bean definition (it can be of type Cloud or CloudFactory, for instance).

  • Spring Boot binds application.properties (and other sources of external properties) to @ConfigurationProperties beans, including but not limited to the ones that it autoconfigures. You can use this feature to tweak pool properties and other settings that need to be different in production environments.

General Advice and Conclusion

We have seen quite a few options and autoconfigurations in this short article, and we've only really used three libraries (Spring Boot, Spring Cloud Connectors, and the Cloud Foundry buildpack auto-reconfiguration JAR) and one platform (Cloud Foundry), not counting local deployment. The buildpack features are really only useful for very simple applications because there is no flexibility to tune the connections in production. That said, it is a nice thing to be able to do when prototyping. There are only three main approaches if you want to achieve the goal of deploying the same code locally and in the cloud, yet still being able to make necessary tweaks in production:

1. Use Spring Cloud Connectors to explicitly create DataSource and other middleware connections and protect those @Beans with @Profile("cloud"). This approach always works, but leads to more code than you might need for many applications.

2. Use the Spring Boot default autoconfiguration and declare the cloud bindings using application.properties (or YAML). To take full advantage you have to explicitly switch off the buildpack auto-reconfiguration as well.

3. Use Spring Cloud Connectors to discover the credentials, and bind them to the Spring Boot @ConfigurationProperties as default values if present.

The three approaches are actually not incompatible, and can be mixed using @ConfigurationProperties to provide profile-specific overrides of default configuration (e.g. for setting up connection pools in a different way in a production environment). If you have a relatively simple Spring Boot application, the only way to choose between the approaches is probably personal taste. If you have a non-Spring Boot application then the explicit @Bean approach will win, and it may also win if you plan to deploy your application in more than one cloud platform (e.g. Heroku and Cloud Foundry).

NOTE: This blog has been a journey of discovery (who knew there was so much to learn?). Thanks go to all those who helped with reviews and comments, in particular Scott Frederick, who spotted most of the mistakes in the drafts and always had time to look at a new revision.
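The article notes that Spring Boot exposes similar *Properties beans for other middleware (RabbitProperties, RedisProperties, MongoProperties) and that a @Primary instance is enough to reset the autoconfigured defaults. As a purely illustrative sketch of that idea, not something from the original post, the following hypothetical configuration copies an AMQP service binding discovered by Spring Cloud Connectors into RabbitProperties, mirroring the single-service assumption and cast used in the DataSourceProperties example above.

    import java.util.List;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.autoconfigure.amqp.RabbitProperties;
    import org.springframework.cloud.Cloud;
    import org.springframework.cloud.service.ServiceInfo;
    import org.springframework.cloud.service.common.AmqpServiceInfo;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Primary;

    @Configuration
    public class RabbitBindingConfiguration {

        // Present only when a Cloud bean is defined in the "cloud" profile, as in the article.
        @Autowired(required = false)
        private Cloud cloud;

        @Bean
        @Primary
        public RabbitProperties rabbitProperties() {
            RabbitProperties properties = new RabbitProperties();
            if (cloud != null) {
                List<ServiceInfo> infos = cloud.getServiceInfos(AmqpServiceInfo.class);
                if (infos.size() == 1) {
                    // Copy the broker coordinates from the bound service into Boot's properties.
                    AmqpServiceInfo info = (AmqpServiceInfo) infos.get(0);
                    properties.setHost(info.getHost());
                    properties.setPort(info.getPort());
                    properties.setUsername(info.getUserName());
                    properties.setPassword(info.getPassword());
                }
            }
            return properties;
        }
    }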
May 6, 2015
· 26,573 Views · 2 Likes
Using Apache Kafka for Integration and Data Processing Pipelines with Spring
written by josh long on the spring blog applications generated more and more data than ever before and a huge part of the challenge - before it can even be analyzed - is accommodating the load in the first place. apache’s kafka meets this challenge. it was originally designed by linkedin and subsequently open-sourced in 2011. the project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. the design is heavily influenced by transaction logs. it is a messaging system, similar to traditional messaging systems like rabbitmq, activemq, mqseries, but it’s ideal for log aggregation, persistent messaging, fast (_hundreds_ of megabytes per second!) reads and writes, and can accommodate numerous clients. naturally, this makes it perfect for cloud-scale architectures! kafka powers many large production systems . linkedin uses it for activity data and operational metrics to power the linkedin news feed, and linkedin today, as well as offline analytics going into hadoop. twitter uses it as part of their stream-processing infrastructure. kafka powers online-to-online and online-to-offline messaging at foursquare. it is used to integrate foursquare monitoring and production systems with hadoop-based offline infrastructures. square uses kafka as a bus to move all system events through square’s various data centers. this includes metrics, logs, custom events, and so on. on the consumer side, it outputs into splunk, graphite, or esper-like real-time alerting. netflix uses it for 300-600bn messages per day. it’s also used by airbnb, mozilla, goldman sachs, tumblr, yahoo, paypal, coursera, urban airship, hotels.com, and a seemingly endless list of other big-web stars. clearly, it’s earning its keep in some powerful systems! installing apache kafka there are many different ways to get apache kafka installed. if you’re on osx, and you’re using homebrew, it can be as simple as brew install kafka . you can also download the latest distribution from apache . i downloaded kafka_2.10-0.8.2.1.tgz , unzipped it, and then within you’ll find there’s a distribution of apache zookeeper as well as kafka, so nothing else is required. i installed apache kafka in my $home directory, under another directory, bin , then i created an environment variable, kafka_home , that points to $home/bin/kafka . start apache zookeeper first, specifying where the configuration properties file it requires is: $kafka_home/bin/zookeeper-server-start.sh $kafka_home/config/zookeeper.properties the apache kafka distribution comes with default configuration files for both zookeeper and kafka, which makes getting started easy. you will in more advanced use cases need to customize these files. then start apache kafka. it too requires a configuration file, like this: $kafka_home/bin/kafka-server-start.sh $kafka_home/config/server.properties the server.properties file contains, among other things, default values for where to connect to apache zookeeper ( zookeeper.connect ), how much data should be sent across sockets, how many partitions there are by default, and the broker id ( broker.id - which must be unique across a cluster). there are other scripts in the same directory that can be used to send and receive dummy data, very handy in establishing that everything’s up and running! now that apache kafka is up and running, let’s look at working with apache kafka from our application. some high level concepts.. 
a kafka broker cluster consists of one or more servers where each may have one or more broker processes running. apache kafka is designed to be highly available; there are no master nodes. all nodes are interchangeable. data is replicated from one node to another to ensure that it is still available in the event of a failure. in kafka, a topic is a category, similar to a jms destination or both an amqp exchange and queue. topics are partitioned, and the choice of which of a topic’s partition a message should be sent to is made by the message producer. each message in the partition is assigned a unique sequenced id, its offset . more partitions allow greater parallelism for consumption, but this will also result in more files across the brokers. producers send messages to apache kafka broker topics and specify the partition to use for every message they produce. message production may be synchronous or asynchronous. producers also specify what sort of replication guarantees they want. consumers listen for messages on topics and process the feed of published messages. as you’d expect if you’ve used other messaging systems, this is usually (and usefully!) asynchronous. like spring xd and numerous other distributed system, apache kafka uses apache zookeeper to coordinate cluster information. apache zookeeper provides a shared hierarchical namespace (called znodes ) that nodes can share to understand cluster topology and availability (yet another reason that spring cloud has forthcoming support for it..). zookeeper is very present in your interactions with apache kafka. apache kafka has, for example, two different apis for acting as a consumer. the higher level api is simpler to get started with and it handles all the nuances of handling partitioning and so on. it will need a reference to a zookeeper instance to keep the coordination state. let’s turn now turn to using apache kafka with spring. using apache kafka with spring integration the recently released apache kafka 1.1 spring integration adapter is very powerful, and provides inbound adapters for working with both the lower level apache kafka api as well as the higher level api. the adapter, currently, is xml-configuration first, though work is already underway on a spring integration java configuration dsl for the adapter and milestones are available. we’ll look at both here, now. to make all these examples work, i added the libs-milestone-local maven repository and used the following dependencies: org.apache.kafka:kafka_2.10:0.8.1.1 org.springframework.boot:spring-boot-starter-integration:1.2.3.release org.springframework.boot:spring-boot-starter:1.2.3.release org.springframework.integration:spring-integration-kafka:1.1.1.release org.springframework.integration:spring-integration-java-dsl:1.1.0.m1 using the spring integration apache kafka with the spring integration xml dsl first, let’s look at how to use the spring integration outbound adapter to send message instances from a spring integration flow to an external apache kafka instance. the example is fairly straightforward: a spring integration channel named inputtokafka acts as a conduit that forwards message messages to the outbound adapter, kafkaoutboundchanneladapter . the adapter itself can take its configuration from the defaults specified in the kafka:producer-context element or it from the adapter-local configuration overrides. there may be one or many configurations in a given kafka:producer-context element. 
here’s the java code from a spring boot application to trigger message sends using the outbound adapter by sending messages into the incoming inputtokafka messagechannel . package xml; import org.apache.commons.logging.log; import org.apache.commons.logging.logfactory; import org.springframework.beans.factory.annotation.qualifier; import org.springframework.boot.commandlinerunner; import org.springframework.boot.springapplication; import org.springframework.boot.autoconfigure.springbootapplication; import org.springframework.context.annotation.bean; import org.springframework.context.annotation.dependson; import org.springframework.context.annotation.importresource; import org.springframework.integration.config.enableintegration; import org.springframework.messaging.messagechannel; import org.springframework.messaging.support.genericmessage; @springbootapplication @enableintegration @importresource("/xml/outbound-kafka-integration.xml") public class demoapplication { private log log = logfactory.getlog(getclass()); @bean @dependson("kafkaoutboundchanneladapter") commandlinerunner kickoff(@qualifier("inputtokafka") messagechannel in) { return args -> { for (int i = 0; i < 1000; i++) { in.send(new genericmessage<>("#" + i)); log.info("sending message #" + i); } }; } public static void main(string args[]) { springapplication.run(demoapplication.class, args); } } using the new apache kafka spring integration java configuration dsl shortly after the spring integration 1.1 release, spring integration rockstar artem bilan got to work on adding a spring integration java configuration dsl analog and the result is a thing of beauty! it’s not yet ga (you need to add the libs-milestone repository for now), but i encourage you to try it out and kick the tires. it’s working well for me and the spring integration team are always keen on getting early feedback whenever possible! here’s an example that demonstrates both sending messages and consuming them from two different integrationflow s. the producer is similar to the example xml above. new in this example is the polling consumer. it is batch-centric, and will pull down all the messages it sees at a fixed interval. in our code, the message received will be a map that contains as its keys the topic and as its value another map with the partition id and the batch (in this case, of 10 records), of records read. there is a messagelistenercontainer -based alternative that processes messages as they come. 
package jc;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.SourcePollingChannelAdapterSpec;
import org.springframework.integration.dsl.kafka.Kafka;
import org.springframework.integration.dsl.kafka.KafkaHighLevelConsumerMessageSourceSpec;
import org.springframework.integration.dsl.kafka.KafkaProducerMessageHandlerSpec;
import org.springframework.integration.dsl.support.Consumer;
import org.springframework.integration.kafka.support.ZookeeperConnect;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;

/**
 * Demonstrates using the Spring Integration Apache Kafka Java configuration DSL.
 * Thanks to Spring Integration ninja Artem Bilan
 * for getting the Java configuration DSL working so quickly!
 *
 * @author Josh Long
 */
@EnableIntegration
@SpringBootApplication
public class DemoApplication {

    public static final String TEST_TOPIC_ID = "event-stream";

    @Component
    public static class KafkaConfig {

        @Value("${kafka.topic:" + TEST_TOPIC_ID + "}")
        private String topic;

        @Value("${kafka.address:localhost:9092}")
        private String brokerAddress;

        @Value("${zookeeper.address:localhost:2181}")
        private String zookeeperAddress;

        KafkaConfig() {
        }

        public KafkaConfig(String t, String b, String zk) {
            this.topic = t;
            this.brokerAddress = b;
            this.zookeeperAddress = zk;
        }

        public String getTopic() {
            return topic;
        }

        public String getBrokerAddress() {
            return brokerAddress;
        }

        public String getZookeeperAddress() {
            return zookeeperAddress;
        }
    }

    @Configuration
    public static class ProducerConfiguration {

        @Autowired
        private KafkaConfig kafkaConfig;

        private static final String OUTBOUND_ID = "outbound";

        private Log log = LogFactory.getLog(getClass());

        @Bean
        @DependsOn(OUTBOUND_ID)
        CommandLineRunner kickOff(@Qualifier(OUTBOUND_ID + ".input") MessageChannel in) {
            return args -> {
                for (int i = 0; i < 1000; i++) {
                    in.send(new GenericMessage<>("#" + i));
                    log.info("sending message #" + i);
                }
            };
        }

        @Bean(name = OUTBOUND_ID)
        IntegrationFlow producer() {

            log.info("starting producer flow..");

            return flowDefinition -> {

                Consumer<KafkaProducerMessageHandlerSpec.ProducerMetadataSpec> spec =
                    (KafkaProducerMessageHandlerSpec.ProducerMetadataSpec metadata) ->
                        metadata.async(true)
                            .batchNumMessages(10)
                            .valueClassType(String.class)
                            .valueEncoder(String::getBytes);

                KafkaProducerMessageHandlerSpec messageHandlerSpec =
                    Kafka.outboundChannelAdapter(props -> props.put("queue.buffering.max.ms", "15000"))
                        .messageKey(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
                        .addProducer(this.kafkaConfig.getTopic(), this.kafkaConfig.getBrokerAddress(), spec);

                flowDefinition.handle(messageHandlerSpec);
            };
        }
    }

    @Configuration
    public static class ConsumerConfiguration {

        @Autowired
        private KafkaConfig kafkaConfig;

        private Log log = LogFactory.getLog(getClass());

        @Bean
        IntegrationFlow consumer() {

            log.info("starting consumer..");

            KafkaHighLevelConsumerMessageSourceSpec messageSourceSpec =
                Kafka.inboundChannelAdapter(new ZookeeperConnect(this.kafkaConfig.getZookeeperAddress()))
                    .consumerProperties(props -> props.put("auto.offset.reset", "smallest")
                        .put("auto.commit.interval.ms", "100"))
                    .addConsumer("myGroup", metadata -> metadata.consumerTimeout(100)
                        .topicStreamMap(m -> m.put(this.kafkaConfig.getTopic(), 1))
                        .maxMessages(10)
                        .valueDecoder(String::new));

            Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer = e -> e.poller(p -> p.fixedDelay(100));

            return IntegrationFlows
                .from(messageSourceSpec, endpointConfigurer)
                .<Map<String, List<String>>>handle((payload, headers) -> {
                    payload.entrySet().forEach(e -> log.info(e.getKey() + '=' + e.getValue()));
                    return null;
                })
                .get();
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

The example makes heavy use of Java 8 lambdas. The producer spends a bit of time establishing how many messages will be sent in a single send operation, how keys and values are encoded (Kafka only knows about byte[] arrays, after all) and whether messages should be sent synchronously or asynchronously. In the next line, we configure the outbound adapter itself and then define an IntegrationFlow such that all messages get sent out via the Kafka outbound adapter. The consumer spends a bit of time establishing which ZooKeeper instance to connect to, how many messages to receive in a batch (10), etc. Once the message batches are received, they’re handed to the handle method, where I’ve passed in a lambda that enumerates the payload’s body and prints it out. Nothing fancy.

Using Apache Kafka with Spring XD

Apache Kafka is a message bus, and it can be very powerful when used as an integration bus. However, it really comes into its own because it’s fast enough and scalable enough to be used to route big data through processing pipelines. And if you’re doing data processing, you really want Spring XD! Spring XD makes it dead simple to use Apache Kafka (the support is built on the Apache Kafka Spring Integration adapter!) in complex stream-processing pipelines. Apache Kafka is exposed as a Spring XD source - where data comes from - and a sink - where data goes to.

Spring XD exposes a super convenient DSL for creating bash-like pipes-and-filters flows. Spring XD is a centralized runtime that manages, scales, and monitors data processing jobs. It builds on top of Spring Integration, Spring Batch, Spring Data and Spring for Hadoop to be a one-stop data-processing shop. Spring XD jobs read data from sources, run it through processing components that may count, filter, enrich or transform the data, and then write it to sinks. Spring Integration and Spring XD ninja Marius Bogoevici, who did a lot of the recent work on the Spring Integration and Spring XD implementations of Apache Kafka, put together a really nice example demonstrating how to get a full Spring XD and Kafka flow working. The README walks you through getting Apache Kafka, Spring XD and the requisite topics all set up. The essence, however, is when you use the Spring XD shell and the shell DSL to compose a stream. Spring XD components are named components that are pre-configured, but they have lots of parameters that you can override with -- arguments via the XD shell and DSL.
(That DSL, by the way, is written by the amazing Andy Clement of Spring Expression Language fame!) Here’s an example that configures a stream to read data from an Apache Kafka source and then write the messages to a component called log, which is a sink. log, in this case, could be syslogd, Splunk, HDFS, etc.

xd> stream create kafka-source-test --definition "kafka --zkconnect=localhost:2181 --topic=event-stream | log" --deploy

And that’s it! Naturally, this is just a taste of Spring XD, but hopefully you’ll agree the possibilities are tantalizing.

Deploying a Kafka Server with Lattice and Docker

It’s easy to get an example Kafka installation set up using Lattice, a distributed runtime that supports, among other container formats, the very popular Docker image format. There’s a Docker image provided by Spotify that sets up a collocated ZooKeeper and Kafka image. You can easily deploy it to a Lattice cluster as follows:

ltc create --run-as-root m-kafka spotify/kafka

From there, you can easily scale the Apache Kafka instances and, even more easily still, consume Apache Kafka from your cloud-based services. (If you just want to run the same image with plain Docker for local testing, see the sketch at the end of this post.)

Next Steps

You can find the code for this blog on my GitHub account. We’ve only scratched the surface! If you want to learn more (and why wouldn’t you?), then be sure to check out Marius Bogoevici and Dr. Mark Pollack’s upcoming webinar on reactive data pipelines using Spring XD and Apache Kafka, where they’ll demonstrate how easy it can be to use RxJava, Spring XD and Apache Kafka!
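A footnote to the Lattice section above: the same Spotify image can be run with plain Docker for local experimentation. This is a sketch based on that image’s documented conventions; the ADVERTISED_HOST and ADVERTISED_PORT environment variables are what the image expects, but verify against the image’s README for your version.

# run a collocated ZooKeeper + Kafka broker locally (name and ports are illustrative)
docker run -d --name m-kafka \
    -p 2181:2181 -p 9092:9092 \
    --env ADVERTISED_HOST=localhost \
    --env ADVERTISED_PORT=9092 \
    spotify/kafka

# the Spring examples above then point at localhost:9092 (broker) and localhost:2181 (ZooKeeper)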
April 18, 2015
· 28,731 Views
article thumbnail
Using Google Protocol Buffers with Spring MVC-based REST Services
Written by Josh Long on the Spring blog. This week I’m in São Paulo, Brazil presenting at QCon SP. I had an interesting discussion with someone who loves Spring’s REST stack, but wondered if there was something more efficient than plain-ol’ JSON. Indeed, there is! I often get asked about Spring’s support for high-speed, binary-based encoding of messages. Spring has long supported RPC encoding with the likes of Hessian, Burlap, etc., and Spring Framework 4.1 introduced support for Google Protocol Buffers, which can be used with REST services as well. From the Google Protocol Buffer website: Protocol buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages… Google uses Protocol Buffers extensively in their own, internal, service-centric architecture. A .proto document describes the types (messages) to be encoded and contains a definition language that should be familiar to anyone who’s used C structs. In the document, you define types, fields in those types, and their ordering (memory offsets!) in the type relative to each other. The .proto files aren’t implementations - they’re declarative descriptions of messages that may be conveyed over the wire. They can prescribe and validate constraints - the type of a given field, or the cardinality of that field - on the messages that are encoded and decoded. You must use the Protobuf compiler to generate the appropriate client for your language of choice. You can use Google Protocol Buffers any way you like, but in this post we’ll look at using them as a way to encode REST service payloads. This approach is powerful: you can use content negotiation to serve high-speed Protocol Buffer payloads to the clients (in any number of languages) that accept them, and something more conventional like JSON for those that don’t. Protocol Buffer messages offer a number of improvements over typical JSON-encoded messages, particularly in a polyglot system where microservices are implemented in various technologies but need to be able to reason about communication between services in a consistent, long-term manner. Protocol Buffers have several nice features that promote stable APIs: Protocol Buffers offer backward compatibility for free. Each field is numbered in a Protocol Buffer, so you don’t have to change the behavior of the code going forward to maintain backward compatibility with older clients. Clients that don’t know about new fields won’t bother trying to parse them. Protocol Buffers provide a natural place to specify validation using the required, optional, and repeated keywords. Each client enforces these constraints in its own way. Protocol Buffers are polyglot, and work with all manner of technologies. In the example code for this blog alone there are Ruby, Python and Java clients for the Java service demonstrated. It’s just a matter of using one of the numerous supported compilers. You might think that you could just use Java’s inbuilt serialization mechanism in a homogeneous service environment but, as the Protocol Buffers team were quick to point out when they first introduced the technology, there are some problems even with that. Java language luminary Josh Bloch’s epic tome, Effective Java, on page 213, provides further details.
Let’s first look at our .proto document: package demo; option java_package = "demo"; option java_outer_classname = "CustomerProtos"; message Customer { required int32 id = 1; required string firstName = 2; required string lastName = 3; enum EmailType { PRIVATE = 1; PROFESSIONAL = 2; } message EmailAddress { required string email = 1; optional EmailType type = 2 [default = PROFESSIONAL]; } repeated EmailAddress email = 5; } message Organization { required string name = 1; repeated Customer customer = 2; } You then pass this definition to the protoc compiler and specify the output type, like this: protoc -I=$IN_DIR --java_out=$OUT_DIR $IN_DIR/customer.proto Here’s the little Bash script I put together to code-generate my various clients: #!/usr/bin/env bash SRC_DIR=`pwd` DST_DIR=`pwd`/../src/main/ echo source: $SRC_DIR echo destination root: $DST_DIR function ensure_implementations(){ # Ruby and Go aren't natively supported it seems # Java and Python are gem list | grep ruby-protocol-buffers || sudo gem install ruby-protocol-buffers go get -u github.com/golang/protobuf/{proto,protoc-gen-go} } function gen(){ D=$1 echo $D OUT=$DST_DIR/$D mkdir -p $OUT protoc -I=$SRC_DIR --${D}_out=$OUT $SRC_DIR/customer.proto } ensure_implementations gen java gen python gen ruby This will generate the appropriate client classes in the src/main/{java,ruby,python}folders. Let’s first look at the Spring MVC REST service itself. A Spring MVC REST Service In our example, we’ll register an instance of Spring framework 4.1’s org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter. This type is an HttpMessageConverter. HttpMessageConverters encode and decode the requests and responses in REST service calls. They’re usually activated after some sort of content negotiation has occurred: if the client specifies Accept: application/x-protobuf, for example, then our REST service will send back the Protocol Buffer-encoded response. 
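Once the service shown in the next listing is running, the content negotiation described above can be exercised from the command line. A quick sketch - the /customers/{id} endpoint and port 8080 are the ones used later in this post; whether JSON is also produced depends on having a JSON converter that can handle the generated types:

# ask for the Protocol Buffer representation
curl -H "Accept: application/x-protobuf" http://localhost:8080/customers/2 --output customer-2.bin

# ask for a conventional representation instead; the same endpoint is chosen by the Accept header
curl -H "Accept: application/json" http://localhost:8080/customers/2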
package demo; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; import org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter; import org.springframework.web.bind.annotation.PathVariable; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; import java.util.Arrays; import java.util.Collection; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.stream.Collectors; @SpringBootApplication public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } @Bean ProtobufHttpMessageConverter protobufHttpMessageConverter() { return new ProtobufHttpMessageConverter(); } private CustomerProtos.Customer customer(int id, String f, String l, Collection emails) { Collection emailAddresses = emails.stream().map(e -> CustomerProtos.Customer.EmailAddress.newBuilder() .setType(CustomerProtos.Customer.EmailType.PROFESSIONAL) .setEmail(e).build()) .collect(Collectors.toList()); return CustomerProtos.Customer.newBuilder() .setFirstName(f) .setLastName(l) .setId(id) .addAllEmail(emailAddresses) .build(); } @Bean CustomerRepository customerRepository() { Map customers = new ConcurrentHashMap<>(); // populate with some dummy data Arrays.asList( customer(1, "Chris", "Richardson", Arrays.asList("[email protected]")), customer(2, "Josh", "Long", Arrays.asList("[email protected]")), customer(3, "Matt", "Stine", Arrays.asList("[email protected]")), customer(4, "Russ", "Miles", Arrays.asList("[email protected]")) ).forEach(c -> customers.put(c.getId(), c)); // our lambda just gets forwarded to Map#get(Integer) return customers::get; } } interface CustomerRepository { CustomerProtos.Customer findById(int id); } @RestController class CustomerRestController { @Autowired private CustomerRepository customerRepository; @RequestMapping("/customers/{id}") CustomerProtos.Customer customer(@PathVariable Integer id) { return this.customerRepository.findById(id); } } Most of this code is pretty straightforward. It’s a Spring Boot application. Spring Boot automatically registers HttpMessageConverter beans so we need only define the ProtobufHttpMessageConverter bean and it gets configured appropriately. The @Configuration class seeds some dummy date and a mock CustomerRepository object. I won’t reproduce the Java type for our Protocol Buffer, demo/CustomerProtos.java, here as it is code-generated bit twiddling and parsing code; not all that interesting to read. One convenience is that the Java implementation automatically provides builder methods for quickly creating instances of these types in Java. The code-generated types are dumb struct like objects. They’re suitable for use as DTOs, but should not be used as the basis for your API. Do not extend them using Java inheritance to introduce new functionality; it’ll break the implementation and it’s bad OOP practice, anyway. If you want to keep things cleaner, simply wrapt and adapt them as appropriate, perhaps handling conversion from an ORM entity to the Protocol Buffer client type as appropriate in that wrapper. HttpMessageConverters may also be used with Spring’s REST client, the RestTemplate. 
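As a sketch of the "wrap and adapt" advice above (the names here are hypothetical and not part of the sample project), a thin adapter keeps the generated CustomerProtos.Customer out of your domain API while still using it as the wire format:

// hypothetical adapter: converts an assumed ORM entity (CustomerEntity) into the generated Protocol Buffer type
class CustomerAdapter {

    CustomerProtos.Customer toProtobuf(CustomerEntity entity) {
        return CustomerProtos.Customer.newBuilder()
                .setId(entity.getId())
                .setFirstName(entity.getFirstName())
                .setLastName(entity.getLastName())
                .build();
    }
}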
Here’s the appropriate Java-language unit test: package demo; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.IntegrationTest; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.http.ResponseEntity; import org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.web.WebAppConfiguration; import org.springframework.web.client.RestTemplate; import java.util.Arrays; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = DemoApplication.class) @WebAppConfiguration @IntegrationTest public class DemoApplicationTests { @Configuration public static class RestClientConfiguration { @Bean RestTemplate restTemplate(ProtobufHttpMessageConverter hmc) { return new RestTemplate(Arrays.asList(hmc)); } @Bean ProtobufHttpMessageConverter protobufHttpMessageConverter() { return new ProtobufHttpMessageConverter(); } } @Autowired private RestTemplate restTemplate; private int port = 8080; @Test public void contextLoaded() { ResponseEntity customer = restTemplate.getForEntity( "http://127.0.0.1:" + port + "/customers/2", CustomerProtos.Customer.class); System.out.println("customer retrieved: " + customer.toString()); } } Things just work as you’d expect, not only in Java and Spring, but also in Ruby and Python. For completeness, here is a simple client using Ruby (client types omitted): #!/usr/bin/env ruby require './customer.pb' require 'net/http' require 'uri' uri = URI.parse('http://localhost:8080/customers/3') body = Net::HTTP.get(uri) puts Demo::Customer.parse(body) ..and here’s a client in Python (client types omitted): #!/usr/bin/env python import urllib import customer_pb2 if __name__ == '__main__': customer = customer_pb2.Customer() customers_read = urllib.urlopen('http://localhost:8080/customers/1').read() customer.ParseFromString(customers_read) print customer Where to go from Here If you want very high speed message encoding that works with multiple languages, Protocol Buffers are a compelling option. There are other encoding technologies like Avro or Thrift, but none nearly so mature and entrenched as Protocol Buffers. You don’t necessarily need to use Protocol Buffers with REST, either. You could plug it into some sort of RPC service, if that’s your style. There are almost as many client implementations as there are buildpacks for Cloud Foundry - so you could run almost anything on Cloud Foundry and enjoy the same high speed, consistent messaging across all your services! The code for this example is available online, as well, so don’t hesitate to check it out! Also.. Hi gang, in 2015, I’ve been trying to do a random tech-tip style post every week based on things that I see garnering interest in the community, either here or on the Pivotal blog. I use these weekly-_ish_ (OK! OK! - it’s not been easy doing them as regularly as This Week in Spring, but so far I haven’t missed a week! :-) ) posts as a chance to focus not on a specific new release, per se, but on the application of Spring in service to some community use case that might be cross-cutting or just might benefit from having a spotlight shined on it. 
So far we’ve looked at all manner of things - Vaadin, Activiti, 12-Factor App Style Configuration, Smarter Service to Service Invocations, Couchbase, and much more, etc. - and we’ve got some interesting stuff lined up, too. I wondered what else you want to see talked about, however. If you’ve got some ideas about what you’d like to see covered, or a community post of your own to contribute, reach out to me on Twitter (@starbuxman) or via email (jlong [at] pivotal [dot] io). I remain, as always, at your service.
March 27, 2015
· 14,824 Views
article thumbnail
Getting Started with Couchbase and Spring Data Couchbase
Written by Josh Long on the Spring blog. This blog was inspired by a talk that Laurent Doguin, a developer advocate over at Couchbase, and I gave at Couchbase Connect last year. Merci Laurent! This is a demo of the Spring Data Couchbase integration. From the project page, Spring Data Couchbase is: The Spring Data Couchbase project provides integration with the Couchbase Server database. Key functional areas of Spring Data Couchbase are a POJO centric model for interacting with Couchbase Buckets and easily writing a Repository style data access layer. What is Couchbase? Couchbase is a distributed data-store that enjoys true horizontal scaling. I like to think of it as a mix of Redis and MongoDB: you work with documents that are accessed through their keys. There are numerous client APIs for all languages. If you’re using Couchbase for your backend and using the JVM, you’ll love Spring Data Couchbase. The bullets on the project home page best enumerate its many features: Spring configuration support using Java based @Configuration classes or an XML namespace for the Couchbase driver. CouchbaseTemplate helper class that increases productivity performing common Couchbase operations. Includes integrated object mapping between documents and POJOs. Exception translation into Spring’s portable Data Access Exception hierarchy. Feature Rich Object Mapping integrated with Spring’s Conversion Service. Annotation based mapping metadata but extensible to support other metadata formats. Automatic implementation of Repository interfaces including support for custom finder methods (backed by Couchbase Views). JMX administration and monitoring Transparent @Cacheable support to cache any objects you need for high performance access. Running Couchbase Use Vagrant to Run Couchbase Locally You will need to have Couchbase installed if you don’t already (naturally). Michael Nitschinger (@daschl, also lead of the Spring Data Couchbase project), blogged about how to get a simple4-node Vagrant cluster up and running here. I’ve reproduced his example here in the vagrantdirectory. To use it, you’ll need to install Virtual Box and Vagrant, of course, but then simply run vagrant up in the vagrant directory. To get the most up-to-date version of this configuration script, I went to Michael’s GitHub vagrants project and found that, beyond this example, there are numerous other Vagrant scripts available. I have a submodule in this code’s project directory that points to that, but be sure to consult that for the latest-and-greatest. To get everything running on my machine, I chose the Ubuntu 12 installation of Couchbase 3.0.2. You can change how many nodes are started by configuring the VAGRANT_NODES environment variable before startup: VAGRANT_NODES=2 vagrant up You’ll need to administer and configure Couchbase on initial setup. Point your browser to the right IP for each node. The rules for determining that IP are well described in the README. The admin interface, in my case, was available at 192.168.105.101:8091 and192.168.105.102:8091. For more on this process, I recommend that you follow theguidelines here for the details. Here’s how I did it. I hit the admin interface on the first node and created a new cluster. I usedadmin for the username and password for the password. On all subsequent management pages, I simply joined the existing cluster by pointing the nodes to 192.168.105.101 and using the aforementioned admin credential. 
Once you’ve joined all nodes, look for theRebalance button in the Server Nodes panel and trigger a cluster rebalance. If you are done with your Vagrant cluster, you can use the vagrant halt command to shut it down cleanly. Very handy is also vagrant suspend, which will save the state of the nodes instead of shutting them down completely. If you want to administer the Couchbase cluster from the command line there is the handycouchbase-cli. You can simply use the vagrant ssh command to get into each of the nodes (by their node-names: node1, node2, etc..). Once there, you can run cluster configuration commands. For example the server-list command will enumerate cluster nodes. /opt/couchbase/bin/couchbase-cli server-list -c 192.168.56.101-u admin -p password It’s easy to trigger a rebalance using: /opt/couchbase/bin/couchbase-cli rebalance -c 192.168.56.101-u admin -p password Couchbase In the Cloud and on Cloud Foundry Couchbase lends itself to use in the cloud. It’s horizontally scalable (like Gemfire or Cassandra) in that there’s no single point of failure. It does not employ a master-slave or active/passive system. There are a few ways to get it up and running where your applications are running. If you’re running a Cloud Foundry installation, then you can install the the Cumulogic Service Broker which then lets your Cloud Foundry installation talk to the Cumulogic platform which itself can manage Couchbase instances. Service brokers are the bit of integration code that teach Cloud Foundry how to provision, destroy and generally interact with a managed service, like Couchbase, in this case. Using Spring Data Couchbase to Store Facebook Places Let’s look at a simple example that reads data (in this case from the Facebook Places API using Spring Social Facebook’s FacebookTemplate API) and then loads it into the Couchbase server. Get a Facebook Access Token You’ll also need a Facebook access token. The easiest way to do this is to go to the Facebook Developer Portal and create a new application and then get an application ID and an application secret. Take these two values and concatenate them with a pike character (|). Thus, you’ll have something of the form: appID|appSecret. The sample application uses Spring’s Environment mechanism to resolve the facebook.accessToken key. You can provide a value for it in the src/main/resources/application.properties file or using any of the other supported Spring Boot property resolution mechanisms. You could even provide the value as a -D argument: -Dfacebook.accessToken=...|... Telling Spring Data Couchbase About our Cluster Data in Couchbase is stored in buckets. It’s logically the same as a database in a SQL RDBMS. It is typically replicated across nodes and has its own configuration. We’ll be using the defaultbucket, but it’s a snap to create more buckets. 
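Pulling the configuration bits mentioned above together, src/main/resources/application.properties might look roughly like this. The couchbase.* keys match the @Value placeholders in the configuration class shown in the next section; the values are placeholders for your own cluster and Facebook credentials:

# Facebook Places access token of the form appId|appSecret
facebook.accessToken=123456789|abcdef0123456789

# Couchbase cluster settings consumed by the CouchbaseConfiguration below
couchbase.cluster.ip=192.168.105.101
couchbase.cluster.bucket=default
couchbase.cluster.password=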
Let’s look at the basic configuration required to use Spring Data Couchbase (in this case, in terms of a Spring Boot application): @SpringBootApplication @EnableScheduling @EnableCaching public class Application { @EnableCouchbaseRepositories @Configuration static class CouchbaseConfiguration extends AbstractCouchbaseConfiguration { @Value("${couchbase.cluster.bucket}") private String bucketName; @Value("${couchbase.cluster.password}") private String password; @Value("${couchbase.cluster.ip}") private String ip; @Override protected List bootstrapHosts() { return Arrays.asList(this.ip); } @Override protected String getBucketName() { return this.bucketName; } @Override protected String getBucketPassword() { return this.password; } } // more beans } A Spring Data Couchbase Repository Spring Data provides the notion of repositories - objects that handle typical data-access logic and provide convention-based queries. They can be used to map POJOs to data in the backing data store. Our example simply stores the information on businesses it reads from Facebook’s Places API. To acheive this we’ve created a simple Place entity that Spring Data Couchbase repositories will know how to persist: @Document(expiry = 0) class Place { @Id private String id; @Field private Location location; @Field @NotNull private String name; @Field private String affilitation, category, description, about; @Field private Date insertionDate; // .. getters, constructors, toString, etc } The Place entity references another entity, Location, which is basically the same. In the case of Spring Data Couchbase, repository finder methods map to views - queries written in JavaScript - in a Couchbase server. You’ll need to setup views on the Couchbase servers. Go to any Couchbase server’s admin console and visit the Views screen, then clickCreate Development View and name it place, as our entity will be demo.Place (the development view name is adapted from the entity’s class name by default). We’ll create two views, the generic all, which is required for any Spring Data Couchbase POJO, and the byName view, which will be used to drive the repository’s findByName finder method. This mapping is by convention, though you can override which view is employed with the @View annotation on the finder method’s declaration. First, all: Now, byName: When you’re done, be sure to Publish each view! Now you can use Spring Data repositories as you’d expect. The only thing that’s a bit different about these repositories is that we’re declaring a Spring Data Couchbase Query type for the argument to the findByName finder method, not a String. Using the @Query is straightforward: Query query = new Query(); query.setKey("Philz Coffee"); Collection places = placeRepository.findByName(query); places.forEach(System.out::println); Where to go from Here We’ve only covered some of the basics here. Spring Data Couchbase supports the Java bean validation API, and can be configured to honor validation constraints on its entities. Spring Data Couchbase also provides lower-level access to the CouchbaseClient API, if you want it. Spring Data Couchbase also implements the Spring CacheManager abstraction - you can use@Cacheable and friends with data on service methods and it’ll be transparently persisted to Couchbase for you. The code for this example is in my Github repository, co-developed with my pal Laurent Doguin (@ldoguin) over at Couchbase.
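As an appendix to the Views discussion above (the view definitions were shown as console screenshots in the original post and didn’t survive here), the two map functions typically look something like this - a sketch, assuming Spring Data Couchbase’s default _class attribute on stored documents:

// 'all' view: emits every document of the mapped entity type
function (doc, meta) {
  if (doc._class == "demo.Place") {
    emit(meta.id, null);
  }
}

// 'byName' view: drives the findByName finder, keyed by the name field
function (doc, meta) {
  if (doc._class == "demo.Place" && doc.name) {
    emit(doc.name, null);
  }
}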
March 24, 2015
· 19,011 Views
article thumbnail
Getting Started With Activiti and Spring Boot
This post is a guest post by Activiti co-founder and community member Joram Barrez (@jbarrez) who works for Alfresco. Thanks Joram! I’d like to see more of these community guest posts, so - as usual - don’t hesitate to ping me (@starbuxman) with ideas and contributions! -Josh Introduction Activiti is an Apache-licensed business process management (BPM) engine. Such an engine has as core goal to take a process definition comprised of human tasks and service calls and execute those in a certain order, while exposing various API’s to start, manage and query data about process instances for that definition. Contrary to many of its competitors, Activiti is lightweight and integrates easily with any Java technology or project. All that, and it works at any scale - from just a few dozen to many thousands or even millions of process executions. The source code of Activiti can be found on Github. The project was founded and is sponsored by Alfresco, but enjoys contributions from all across the globe and industries. A process definition is typically visualized as a flow-chart-like diagram. In recent years, the BPMN 2.0 standard (an OMG standard, like UML) has become the de-facto ‘language’ of these diagrams. This standard defines how a certain shape on the diagram should be interpreted, both technically and business-wise and how it is stored, as a not-so-hipster XML file.. but luckily most of the tooling hides this for you. This is a standard, and you can use any number of compliant tools to design (and even run) your BPMN processes. That said, if you’re asking me, there is no better choice than Activiti! Spring Boot integration Activiti and Spring play nicely together. The convention-over-configuration approach in Spring Boot works nicely with Activiti’s process engine is setup and use. Out of the box, you only need a database, as process executions can span anywhere from a few seconds to a couple of years. Obviously, as an intrinsic part of a process definition is calling and consuming data to and from various systems with all kinds of technologies. The simplicity of adding the needed dependencies and integrating various pieces of (boiler-plate) logic with Spring Boot really makes this child’s play. Using Spring Boot and Activiti in a microservice approach also makes a lot of sense. Spring Boot makes it easy to get a production-ready service up and running in no time and - in a distributed microservice architecture - Activiti processes can glue together various microservices while also weaving in human workflow (tasks and forms) to achieve a certain goal. The Spring Boot integration in Activiti was created by Spring expert Josh Long. Josh and I did a webinar a couple of months ago that should give you a good insight into the basics of the Activiti integration for Spring Boot. The Activiti user guide section on Spring Boot is also a great starting place to get more information. Getting Started The code for this example can be found in my Github repository. The process we’ll implement here is a hiring process for a developer. It’s simplified of course (as it needs to fit on this web page), but you should get the core concepts. Here’s the diagram: As said in the introduction, all shapes here have a very specific interpretation thanks to the BPMN 2.0 standard. But even without knowledge of BPMN, the process is pretty easy to understand: When the process starts, the resume of the job applicant is stored in an external system. The process then waits until a telephone interview has been conducted. 
This is done by a user (see the little icon of a person in the corner). If the telephone interview wasn’t all that, a polite rejection email is sent. Otherwise, both a tech interview and financial negotiation should happen. Note that at any point, the applicant can cancel. That’s shown in the diagram as the event on the boundary of the big rectangle. When the event happens, everything inside will be killed and the process halts. If all goes well, a welcome email is sent. This is the BPMN for this process Let’s create a new Maven project, and add the dependencies needed to get Spring Boot, Activiti and a database. We’ll use an in memory database to keep things simple. org.activiti spring-boot-starter-basic ${activiti.version} com.h2database h2 1.4.185 So only two dependencies is what is needed to create a very first Spring Boot + Activiti application: @SpringBootApplication public class MyApp { public static void main(String[] args) { SpringApplication.run(MyApp.class, args); } } You could already run this application, it won’t do anything functionally but behind the scenes it already creates an in-memory H2 database creates an Activiti process engine using that database exposes all Activiti services as Spring Beans configures tidbits here and there such as the Activiti async job executor, mail server, etc. Let’s get something running. Drop the BPMN 2.0 process definition into thesrc/main/resources/processes folder. All processes placed here will automatically be deployed (ie. parsed and made to be executable) to the Activiti engine. Let’s keep things simple to start, and create a CommandLineRunner that will be executed when the app boots up: @Bean CommandLineRunner init( final RepositoryService repositoryService, final RuntimeService runtimeService, final TaskService taskService) { return new CommandLineRunner() { public void run(String... strings) throws Exception { Map variables = new HashMap(); variables.put("applicantName", "John Doe"); variables.put("email", "[email protected]"); variables.put("phoneNumber", "123456789"); runtimeService.startProcessInstanceByKey("hireProcess", variables); } }; } So what’s happening here is that we create a map of all the variables needed to run the process and pass it when starting process. If you’d check the process definition you’ll see we reference those variables using ${variableName} in many places (such as the task description). The first step of the process is an automatic step (see the little cogwheel icon), implemented using an expression that uses a Spring Bean: which is implemented with activiti:expression="${resumeService.storeResume()}" Of course, we need that bean or the process would not start. So let’s create it: @Component public class ResumeService { public void storeResume() { System.out.println("Storing resume ..."); } } When running the application now, you’ll see that the bean is called: . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.2.0.RELEASE) 2015-02-16 11:55:11.129 INFO 304 --- [ main] MyApp : Starting MyApp on The-Activiti-Machine.local with PID 304 ... Storing resume ... 2015-02-16 11:55:13.662 INFO 304 --- [ main] MyApp : Started MyApp in 2.788 seconds (JVM running for 3.067) And that’s it! Congrats with running your first process instance using Activiti in Spring Boot! 
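For reference, the automatic "store resume" step described above is declared in the BPMN 2.0 XML along these lines - a sketch where the id and name attributes are illustrative, and the expression is the one quoted above (the activiti namespace prefix must be declared on the definitions element):

<serviceTask id="storeResumeTask"
             name="Store resume"
             activiti:expression="${resumeService.storeResume()}"/>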
Let’s spice things up a bit, and add following dependency to our pom.xml: org.activiti spring-boot-starter-rest-api ${activiti.version} Having this on the classpath does a nifty thing: it takes the Activiti REST API (which is written in Spring MVC) and exposes this fully in your application. The REST API of Activiti is fully documented in the Activiti User Guide. The REST API is secured by basic auth, and won’t have any users by default. Let’s add an admin user to the system as shown below (add this to the MyApp class). Don’t do this in a production system of course, there you’ll want to hook in the authentication to LDAP or something else. @Bean InitializingBean usersAndGroupsInitializer(final IdentityService identityService) { return new InitializingBean() { public void afterPropertiesSet() throws Exception { Group group = identityService.newGroup("user"); group.setName("users"); group.setType("security-role"); identityService.saveGroup(group); User admin = identityService.newUser("admin"); admin.setPassword("admin"); identityService.saveUser(admin); } }; } Start the application. We can now start a process instance as we did in the CommandLineRunner, but now using REST: curl -u admin:admin -H "Content-Type: application/json" -d '{"processDefinitionKey":"hireProcess", "variables": [ {"name":"applicantName", "value":"John Doe"}, {"name":"email", "value":"[email protected]"}, {"name":"phoneNumber", "value":"1234567"} ]}' http://localhost:8080/runtime/process-instances Which returns us the json representation of the process instance: { "tenantId": "", "url": "http://localhost:8080/runtime/process-instances/5", "activityId": "sid-42BAE58A-8FFB-4B02-AAED-E0D8EA5A7E39", "id": "5", "processDefinitionUrl": "http://localhost:8080/repository/process-definitions/hireProcess:1:4", "suspended": false, "completed": false, "ended": false, "businessKey": null, "variables": [], "processDefinitionId": "hireProcess:1:4" } I just want to stand still for a moment how cool this is. Just by adding one dependency, you’re getting the whole Activiti REST API embedded in your application! Let’s make it even cooler, and add following dependency org.activiti spring-boot-starter-actuator ${activiti.version} This adds a Spring Boot actuator endpoint for Activiti. If we restart the application, and hithttp://localhost:8080/activiti/, we get some basic stats about our processes. With some imagination that in a live system you’ve got many more process definitions deployed and executing, you can see how this is useful. The same actuator is also registered as a JMX bean exposing similar information. { completedTaskCountToday: 0, deployedProcessDefinitions: [ "hireProcess (v1)" ], processDefinitionCount: 1, cachedProcessDefinitionCount: 1, runningProcessInstanceCount: { hireProcess (v1): 0 }, completedTaskCount: 0, completedActivities: 0, completedProcessInstanceCount: { hireProcess (v1): 0 }, openTaskCount: 0 } To finish our coding, let’s create a dedicated REST endpoint for our hire process, that could be consumed by for example a javascript web application (out of scope for this article). So most likely, we’ll have a form for the applicant to fill in the details we’ve been passing programmatically above. And while we’re at it, let’s store the applicant information as a JPA entity. In that case, the data won’t be stored in Activiti anymore, but in a separate table and referenced by Activiti when needed. 
You probably guessed it by now, JPA support is enabled by adding a dependency: org.activiti spring-boot-starter-jpa ${activiti.version} and add the entity to the MyApp class: @Entity class Applicant { @Id @GeneratedValue private Long id; private String name; private String email; private String phoneNumber; // Getters and setters We’ll also need a Repository for this Entity (put this in a separate file or also in MyApp). No need for any methods, the Repository magic from Spring will generate the methods we need for us. public interface ApplicantRepository extends JpaRepository { // .. } And now we can create the dedicated REST endpoint: @RestController public class MyRestController { @Autowired private RuntimeService runtimeService; @Autowired private ApplicantRepository applicantRepository; @RequestMapping(value="/start-hire-process", method= RequestMethod.POST, produces= MediaType.APPLICATION_JSON_VALUE) public void startHireProcess(@RequestBody Map data) { Applicant applicant = new Applicant(data.get("name"), data.get("email"), data.get("phoneNumber")); applicantRepository.save(applicant); Map variables = new HashMap(); variables.put("applicant", applicant); runtimeService.startProcessInstanceByKey("hireProcessWithJpa", variables); } } Note we’re now using a slightly different process called ‘hireProcessWithJpa’, which has a few tweaks in it to cope with the fact the data is now in a JPA entity. So for example, we can’t use ${applicantName} anymore, but we now have to use ${applicant.name}. Let’s restart the application and start a new process instance: curl -u admin:admin -H "Content-Type: application/json" -d '{"name":"John Doe", "email": "[email protected]", "phoneNumber":"123456789"}' http://localhost:8080/start-hire-process We can now go through our process. You could create a custom endpoints for this too, exposing different task queries with different forms … but I’ll leave this to your imagination and use the default Activiti REST end points to walk through the process. Let’s see which task the process instance currently is at (you could pass in more detailed parameters here, for example the ‘processInstanceId’ for better filtering): curl -u admin:admin -H "Content-Type: application/json" http://localhost:8080/runtime/tasks which returns { "order": "asc", "size": 1, "sort": "id", "total": 1, "data": [{ "id": "14", "processInstanceId": "8", "createTime": "2015-02-16T13:11:26.078+01:00", "description": "Conduct a telephone interview with John Doe. Phone number = 123456789", "name": "Telephone interview" ... }], "start": 0 } So, our process is now at the Telephone interview. In a realistic application, there would be a task list and a form that could be filled in to complete this task. Let’s complete this task (we have to set the telephoneInterviewOutcome variable as the exclusive gateway uses it to route the execution): curl -u admin:admin -H "Content-Type: application/json" -d '{"action" : "complete", "variables": [ {"name":"telephoneInterviewOutcome", "value":true} ]}' http://localhost:8080/runtime/tasks/14 When we get the tasks again now, the process instance will have moved on to the two tasks in parallel in the subprocess (big rectangle): { "order": "asc", "size": 2, "sort": "id", "total": 2, "data": [ { ... "name": "Tech interview" }, { ... "name": "Financial negotiation" } ], "start": 0 } We can now continue the rest of the process in a similar fashion, but I’ll leave that to you to play around with. 
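To keep going from here, completing the two parallel tasks looks just like completing the telephone interview. A sketch - the task id is illustrative, and which variable each task expects depends on your process definition; the techOk and financialOk variables are the ones used by the gateways, as in the unit test below:

# complete one of the parallel tasks, supplying the variable its gateway expects
curl -u admin:admin -H "Content-Type: application/json" \
  -d '{"action" : "complete", "variables": [ {"name":"techOk", "value":true} ]}' \
  http://localhost:8080/runtime/tasks/15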
Testing One of the strengths of using Activiti for creating business processes is that everything is simply Java. As a consequence, processes can be tested as regular Java code with unit tests. Spring Boot makes writing such test a breeze. Here’s how the unit test for the “happy path” looks like (while omitting @Autowired fields and test e-mail server setup). The code also shows the use of the Activiti API’s for querying tasks for a given group and process instance. @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = {MyApp.class}) @WebAppConfiguration @IntegrationTest public class HireProcessTest { @Test public void testHappyPath() { // Create test applicant Applicant applicant = new Applicant("John Doe", "[email protected]", "12344"); applicantRepository.save(applicant); // Start process instance Map variables = new HashMap(); variables.put("applicant", applicant); ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("hireProcessWithJpa", variables); // First, the 'phone interview' should be active Task task = taskService.createTaskQuery() .processInstanceId(processInstance.getId()) .taskCandidateGroup("dev-managers") .singleResult(); Assert.assertEquals("Telephone interview", task.getName()); // Completing the phone interview with success should trigger two new tasks Map taskVariables = new HashMap(); taskVariables.put("telephoneInterviewOutcome", true); taskService.complete(task.getId(), taskVariables); List tasks = taskService.createTaskQuery() .processInstanceId(processInstance.getId()) .orderByTaskName().asc() .list(); Assert.assertEquals(2, tasks.size()); Assert.assertEquals("Financial negotiation", tasks.get(0).getName()); Assert.assertEquals("Tech interview", tasks.get(1).getName()); // Completing both should wrap up the subprocess, send out the 'welcome mail' and end the process instance taskVariables = new HashMap(); taskVariables.put("techOk", true); taskService.complete(tasks.get(0).getId(), taskVariables); taskVariables = new HashMap(); taskVariables.put("financialOk", true); taskService.complete(tasks.get(1).getId(), taskVariables); // Verify email Assert.assertEquals(1, wiser.getMessages().size()); // Verify process completed Assert.assertEquals(1, historyService.createHistoricProcessInstanceQuery().finished().count()); } Next steps We haven’t touched any of the tooling around Activiti. There is a bunch more than just the engine, like the Eclipse plugin to design processes, a free web editor in the cloud (also included in the .zip download you can get from Activiti's site, a web application that showcases many of the features of the engine, … The current release of Activiti (version 5.17.0) has integration with Spring Boot 1.1.6. However, the current master version is compatible with 1.2.1. Using Spring Boot 1.2.0 brings us sweet stuff like support for XA transactions with JTA. This means you can hook up your processes easily with JMS, JPA and Activiti logic all in the same transaction! ..Which brings us to the next point … In this example, we’ve focussed heavily on human interactions (and barely touched it). But there’s many things you can do around orchestrating systems too. The Spring Boot integration also has Spring Integration support you could leverage to do just that in a very neat way! And of course there is much much more about the BPMN 2.0 standard. Read more about itin the Activiti docs.
March 20, 2015
· 50,351 Views · 4 Likes
article thumbnail
Better Application Events in Spring Framework 4.2
Application events have been available since the very beginning of the Spring Framework as a means for loosely coupled components to exchange information.
February 27, 2015
· 20,911 Views · 2 Likes
article thumbnail
The API Gateway Pattern: Angular JS and Spring Security Part IV
Written by Dave Syer in the Spring blog In this article we continue our discussion of how to use Spring Security with Angular JS in a “single page application”. Here we show how to build an API Gateway to control the authentication and access to the backend resources using Spring Cloud. This is the fourth in a series of articles, and you can catch up on the basic building blocks of the application or build it from scratch by reading the first article, or you can just go straight to the source code in Github. In the last article we built a simple distributed application that used Spring Session to authenticate the backend resources. In this one we make the UI server into a reverse proxy to the backend resource server, fixing the issues with the last implementation (technical complexity introduced by custom token authentication), and giving us a lot of new options for controlling access from the browser client. Reminder: if you are working through this article with the sample application, be sure to clear your browser cache of cookies and HTTP Basic credentials. In Chrome the best way to do that for a single server is to open a new incognito window. Creating an API Gateway An API Gateway is a single point of entry (and control) for front end clients, which could be browser based (like the examples in this article) or mobile. The client only has to know the URL of one server, and the backend can be refactored at will with no change, which is a significant advantage. There are other advantages in terms of centralization and control: rate limiting, authentication, auditing and logging. And implementing a simple reverse proxy is really simple with Spring Cloud. If you were following along in the code, you will know that the application implementation at the end of the last article was a bit complicated, so it’s not a great place to iterate away from. There was, however, a halfway point which we could start from more easily, where the backend resource wasn’t yet secured with Spring Security. The source code for this is a separate project in Github so we are going to start from there. It has a UI server and a resource server and they are talking to each other. The resource server doesn’t have Spring Security yet so we can get the system working first and then add that layer. Declarative Reverse Proxy in One Line To turn it into an API Gateawy, the UI server needs one small tweak. Somewhere in the Spring configuration we need to add an @EnableZuulProxy annotation, e.g. in the main (only)application class: @SpringBootApplication @RestController @EnableZuulProxy public class UiApplication { ... } and in an external configuration file we need to map a local resource in the UI server to a remote one in the external configuration (“application.yml”): security: ... zuul: routes: resource: path: /resource/** url: http://localhost:9000 This says “map paths with the pattern /resource/** in this server to the same paths in the remote server at localhost:9000”. Simple and yet effective (OK so it’s 6 lines including the YAML, but you don’t always need that)! All we need to make this work is the right stuff on the classpath. For that purpose we have a few new lines in our Maven POM: org.springframework.cloud spring-cloud-starter-parent 1.0.0.BUILD-SNAPSHOT pom import org.springframework.cloud spring-cloud-starter-zuul ... Note the use of the “spring-cloud-starter-zuul” - it’s a starter POM just like the Spring Boot ones, but it governs the dependencies we need for this Zuul proxy. 
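The POM snippet above lost its XML tags in formatting; reconstructed, it is roughly the following, using the artifact coordinates named in the text:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-parent</artifactId>
            <version>1.0.0.BUILD-SNAPSHOT</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zuul</artifactId>
    </dependency>
</dependencies>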
We are also using because we want to be able to depend on all the versions of transitive dependencies being correct. Consuming the Proxy in the Client With those changes in place our application still works, but we haven’t actually used the new proxy yet until we modify the client. Fortunately that’s trivial. We just need to go from this implementation of the “home” controller: angular.module('hello', [ 'ngRoute' ]) ... .controller('home', function($scope, $http) { $http.get('http://localhost:9000/').success(function(data) { $scope.greeting = data; }) }); to a local resource: angular.module('hello', [ 'ngRoute' ]) ... .controller('home', function($scope, $http) { $http.get('resource/').success(function(data) { $scope.greeting = data; }) }); Now when we fire up the servers everything is working and the requests are being proxied through the UI (API Gateway) to the resource server. Further Simplifications Even better: we don’t need the CORS filter any more in the resource server. We threw that one together pretty quickly anyway, and it should have been a red light that we had to do anything as technically focused by hand (especially where it concerns security). Fortunately it is now redundant, so we can just throw it away, and go back to sleeping at night! Securing the Resource Server You might remember in the intermediate state that we started from there is no security in place for the resource server. Aside: Lack of software security might not even be a problem if your network architecture mirrors the application architecture (you can just make the resource server physically inaccessible to anyone but the UI server). As a simple demonstration of that we can make the resource server only accessible on localhost. Just add this to application.properties in the resource server: server.address: 127.0.0.1 Wow, that was easy! Do that with a network address that’s only visible in your data center and you have a security solution that works for all resource servers and all user desktops. Suppose that we decide we do need security at the software level (quite likely for a number of reasons). That’s not going to be a problem, because all we need to do is add Spring Security as a dependency (in the resource server POM): org.springframework.boot spring-boot-starter-security That’s enough to get us a secure resource server, but it won’t get us a working application yet, for the same reason that it didn’t in Part III: there is no shared authentication state between the two servers. Sharing Authentication State We can use the same mechanism to share authentication (and CSRF) state as we did in the last, i.e. Spring Session. We add the dependency to both servers as before: org.springframework.session spring-session 1.0.0.RELEASE org.springframework.boot spring-boot-starter-redis but this time the configuration is much simpler because we can just add the same Filterdeclaration to both. First the UI server (adding @EnableRedisHttpSession): @SpringBootApplication @RestController @EnableZuulProxy @EnableRedisHttpSession public class UiApplication { ... } and then the resource server. There are two changes to make: one is adding@EnableRedisHttpSession and a HeaderHttpSessionStrategy bean to theResourceApplication: @SpringBootApplication @RestController @EnableRedisHttpSession class ResourceApplication { ... 
@Bean HeaderHttpSessionStrategy sessionStrategy() { new HeaderHttpSessionStrategy(); } } and the other is to explicitly ask for a non-stateless session creation policy inapplication.properties: security.sessions: NEVER As long as redis is still running in the background (use the fig.yml if you like to start it) then the system will work. Load the homepage for the UI at http://localhost:8080 and login and you will see the message from the backend rendered on the homepage. How Does it Work? What is going on behind the scenes now? First we can look at the HTTP requests in the UI server (and API Gateway): VERB PATH STATUS RESPONSE GET / 200 index.html GET /css/angular-bootstrap.css 200 Twitter bootstrap CSS GET /js/angular-bootstrap.js 200 Bootstrap and Angular JS GET /js/hello.js 200 Application logic GET /user 302 Redirect to login page GET /login 200 Whitelabel login page (ignored) GET /resource 302 Redirect to login page GET /login 200 Whitelabel login page (ignored) GET /login.html 200 Angular login form partial POST /login 302 Redirect to home page (ignored) GET /user 200 JSON authenticated user GET /resource 200 (Proxied) JSON greeting That’s identical to the sequence at the end of Part II except for the fact that the cookie names are slightly different (“SESSION” instead of “JSESSIONID”) because we are using Spring Session. But the architecture is different and that last request to “/resource” is special because it was proxied to the resource server. We can see the reverse proxy in action by looking at the “/trace” endpoint in the UI server (from Spring Boot Actuator, which we added with the Spring Cloud dependencies). Go tohttp://localhost:8080/trace in a browser and scroll to the end (if you don’t have one already get a JSON plugin for your browser to make it nice and readable). You will need to authenticate with HTTP Basic (browser popup), but the same credentials are valid as for your login form. At or near the end you should see a pair of requests something like this: { "timestamp": 1420558194546, "info": { "method": "GET", "path": "/", "query": "" "remote": true, "proxy": "resource", "headers": { "request": { "accept": "application/json, text/plain, */*", "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260", "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260", "x-forwarded-prefix": "/resource", "x-forwarded-host": "localhost:8080" }, "response": { "Content-Type": "application/json;charset=UTF-8", "status": "200" } }, } }, { "timestamp": 1420558200232, "info": { "method": "GET", "path": "/resource/", "headers": { "request": { "host": "localhost:8080", "accept": "application/json, text/plain, */*", "x-xsrf-token": "542c7005-309c-4f50-8a1d-d6c74afe8260", "cookie": "SESSION=c18846b5-f805-4679-9820-cd13bd83be67; XSRF-TOKEN=542c7005-309c-4f50-8a1d-d6c74afe8260" }, "response": { "Content-Type": "application/json;charset=UTF-8", "status": "200" } } } }, The second entry there is the request from the client to the gateway on “/resource” and you can see the cookies (added by the browser) and the CSRF header (added by Angular as discussed inPart II). The first entry has remote: true and that means it’s tracing the call to the resource server. You can see it went out to a uri path “/” and you can see that (crucially) the cookies and CSRF headers have been sent too. 
Without Spring Session these headers would be meaningless to the resource server, but the way we have set it up it can now use those headers to re-constitute a session with authentication and CSRF token data. So the request is permitted and we are in business! Conclusion We covered quite a lot in this article but we got to a really nice place where there is a minimal amount of boilerplate code in our two servers, they are both nicely secure and the user experience isn’t compromised. That alone would be a reason to use the API Gateway pattern, but really we have only scratched the surface of what that might be used for (Netflix uses it for a lot of things). Read up on Spring Cloud to find out more on how to make it easy to add more features to the gateway. The next article in this series will extend the application architecture a bit by extracting the authentication responsibilities to a separate server (the Single Sign On pattern).
February 9, 2015
· 16,033 Views
article thumbnail
Latest Jackson Integration Improvements in Spring
Originally written by Sébastien Deleuze on the SpringSource blog. Spring's Jackson support has been improved lately to be more flexible and powerful. This blog post gives you an update on the most useful Jackson-related features available in Spring Framework 4.x and Spring Boot. All the code samples come from the spring-jackson-demo sample application; feel free to have a look at the code.

JSON Views

It can sometimes be useful to contextually filter the objects serialized to the HTTP response body. In order to provide such capabilities, Spring MVC now has built-in support for Jackson's Serialization Views. The following example illustrates how to use @JsonView to filter fields depending on the context of serialization, e.g. getting a "summary" view when dealing with collections, and getting a full representation when dealing with a single resource:

public class View {
    interface Summary {}
}

public class User {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private String firstname;
    @JsonView(View.Summary.class) private String lastname;
    private String email;
    private String address;
    private String postalCode;
    private String city;
    private String country;
}

public class Message {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private LocalDate created;
    @JsonView(View.Summary.class) private String title;
    @JsonView(View.Summary.class) private User author;
    private List<User> recipients;
    private String body;
}

Thanks to Spring MVC @JsonView support, it is possible to choose, on a per-handler-method basis, which fields should be serialized:

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.Summary.class)
    @RequestMapping("/")
    public List<Message> getAllMessages() {
        return messageService.getAll();
    }

    @RequestMapping("/{id}")
    public Message getMessage(@PathVariable Long id) {
        return messageService.get(id);
    }
}

In this example, if all messages are retrieved, only the most important fields are serialized, thanks to the getAllMessages() method being annotated with @JsonView(View.Summary.class):

[
  { "id" : 1, "created" : "2014-11-14", "title" : "Info", "author" : { "id" : 1, "firstname" : "Brian", "lastname" : "Clozel" } },
  { "id" : 2, "created" : "2014-11-14", "title" : "Warning", "author" : { "id" : 2, "firstname" : "Stéphane", "lastname" : "Nicoll" } },
  { "id" : 3, "created" : "2014-11-14", "title" : "Alert", "author" : { "id" : 3, "firstname" : "Rossen", "lastname" : "Stoyanchev" } }
]

In the Spring MVC default configuration, MapperFeature.DEFAULT_VIEW_INCLUSION is set to false. That means that when a JSON View is enabled, non-annotated fields or properties like body or recipients are not serialized. 
When a specific Message is retrieved using the getMessage() handler method (no JSON View specified), all fields are serialized as expected:

{
  "id" : 1,
  "created" : "2014-11-14",
  "title" : "Info",
  "body" : "This is an information message",
  "author" : { "id" : 1, "firstname" : "Brian", "lastname" : "Clozel", "email" : "[email protected]", "address" : "1 Jaures street", "postalCode" : "69003", "city" : "Lyon", "country" : "France" },
  "recipients" : [
    { "id" : 2, "firstname" : "Stéphane", "lastname" : "Nicoll", "email" : "[email protected]", "address" : "42 Obama street", "postalCode" : "1000", "city" : "Brussel", "country" : "Belgium" },
    { "id" : 3, "firstname" : "Rossen", "lastname" : "Stoyanchev", "email" : "[email protected]", "address" : "3 Warren street", "postalCode" : "10011", "city" : "New York", "country" : "USA" }
  ]
}

Only one class or interface can be specified with the @JsonView annotation, but you can use inheritance to represent JSON View hierarchies (if a field is part of a JSON View, it will also be part of the views that extend it). For example, this handler method will serialize fields annotated with @JsonView(View.Summary.class) and @JsonView(View.SummaryWithRecipients.class):

public class View {
    interface Summary {}
    interface SummaryWithRecipients extends Summary {}
}

public class Message {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private LocalDate created;
    @JsonView(View.Summary.class) private String title;
    @JsonView(View.Summary.class) private User author;
    @JsonView(View.SummaryWithRecipients.class) private List<User> recipients;
    private String body;
}

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.SummaryWithRecipients.class)
    @RequestMapping("/with-recipients")
    public List<Message> getAllMessagesWithRecipients() {
        return messageService.getAll();
    }
}

JSON Views can also be specified when using the RestTemplate HTTP client or MappingJackson2JsonView, by wrapping the value to serialize in a MappingJacksonValue, as shown in this code sample.

JSONP

As described in the reference documentation, you can enable JSONP for @ResponseBody and ResponseEntity methods by declaring an @ControllerAdvice bean that extends AbstractJsonpResponseBodyAdvice, as shown below:

@ControllerAdvice
public class JsonpAdvice extends AbstractJsonpResponseBodyAdvice {
    public JsonpAdvice() {
        super("callback");
    }
}

With such an @ControllerAdvice bean registered, it is possible to request the JSON web service from another domain using a <script> tag. In this example, the received payload would be:

parseResponse({ "id" : 1, "created" : "2014-11-14", ... });

JSONP is also supported and automatically enabled when using MappingJackson2JsonView with a request that has a query parameter named jsonp or callback. The JSONP query parameter name(s) can be customized through the jsonpParameterNames property.

XML support

Since its 2.0 release, Jackson has provided first-class support for data formats other than JSON. Spring Framework and Spring Boot provide built-in support for Jackson-based XML serialization/deserialization. As soon as you include the jackson-dataformat-xml dependency in your project, it is automatically used instead of JAXB2. 
Using the Jackson XML extension has several advantages over JAXB2:

- Both Jackson and JAXB annotations are recognized
- JSON Views are supported, allowing you to easily build REST web services with the same filtered output for both XML and JSON data formats
- There is no need to annotate your class with @XmlRootElement; each class serializable in JSON will also be serializable in XML

You usually also want to make sure that the XML library in use is Woodstox, since:

- It is faster than the StAX implementation provided with the JDK
- It avoids some known issues, like adding unnecessary namespace prefixes
- Some features, like pretty print, don't work without it

In order to use it, simply add the latest woodstox-core-asl dependency available to your project.

Customizing the Jackson ObjectMapper

Prior to Spring Framework 4.1.1, Jackson HttpMessageConverters were using the ObjectMapper default configuration. In order to provide a better and easily customizable default configuration, a new Jackson2ObjectMapperBuilder has been introduced. It is the JavaConfig equivalent of the well-known Jackson2ObjectMapperFactoryBean used in XML configuration. Jackson2ObjectMapperBuilder provides a nice API to customize various Jackson settings while retaining the defaults provided by the Spring Framework. It also allows you to create ObjectMapper and XmlMapper instances based on the same configuration. Both Jackson2ObjectMapperBuilder and Jackson2ObjectMapperFactoryBean define a better Jackson default configuration. For example, the DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES property is set to false, in order to allow deserialization of JSON objects with unmapped properties. Jackson support for Java 8 Date & Time API data types is automatically registered when Java 8 is used and jackson-datatype-jsr310 is on the classpath. Joda-Time support is registered as well when jackson-datatype-joda is part of your project dependencies. These classes also allow you to easily register Jackson mixins, modules, serializers, or even a property naming strategy like PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES if you want your userName Java property translated to user_name in JSON.

With Spring Boot

As described in the Spring Boot reference documentation, there are various ways to customize the Jackson ObjectMapper. You can, for example, enable/disable Jackson features easily by adding properties like spring.jackson.serialization.indent_output=true to application.properties. As an alternative, in the upcoming 1.2 release Spring Boot also allows you to customize the Jackson configuration (JSON and XML) used by Spring MVC HttpMessageConverters by declaring a Jackson2ObjectMapperBuilder @Bean:

@Bean
public Jackson2ObjectMapperBuilder jacksonBuilder() {
    Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
    builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
    return builder;
}

This is useful if you want to use advanced Jackson configuration not exposed through regular configuration keys. 
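To illustrate the earlier point that a single builder can produce both an ObjectMapper and an XmlMapper with the same settings, here is a small standalone sketch (my own illustration, not code from the original post):

public class JacksonBuilderDemo {

    public static void main(String[] args) {
        Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
        builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));

        // One configuration, two mappers sharing the same settings
        ObjectMapper jsonMapper = builder.build();
        XmlMapper xmlMapper = builder.createXmlMapper(true).build();

        System.out.println(jsonMapper.getClass() + " / " + xmlMapper.getClass());
    }
}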
Without Spring Boot

In a plain Spring Framework application, you can also use Jackson2ObjectMapperBuilder to customize the XML and JSON HttpMessageConverters, as shown below:

@Configuration
@EnableWebMvc
public class WebConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
        builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
        converters.add(new MappingJackson2HttpMessageConverter(builder.build()));
        converters.add(new MappingJackson2XmlHttpMessageConverter(builder.createXmlMapper(true).build()));
    }
}

More to come

With the upcoming Spring Framework 4.1.3 release, thanks to the addition of a Spring-context-aware HandlerInstantiator (see SPR-10768 for more details), you will be able to autowire Jackson handlers (serializers, deserializers, type and type id resolvers). This will allow you to build, for example, a custom deserializer that replaces a field containing only a reference in the JSON payload with the full entity retrieved from the database.
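To make that last idea concrete, here is a rough sketch of the kind of Spring-managed deserializer this would enable; UserRepository and the findOne() lookup are hypothetical stand-ins, not part of the post:

// Assumed sketch: the point is only that the deserializer could be a Spring bean
// with injected collaborators once the context-aware HandlerInstantiator is in place.
@Component
public class UserReferenceDeserializer extends JsonDeserializer<User> {

    @Autowired
    private UserRepository userRepository; // hypothetical Spring Data repository

    @Override
    public User deserialize(JsonParser parser, DeserializationContext context) throws IOException {
        Long id = parser.getLongValue();   // the JSON payload contains only the user id
        return userRepository.findOne(id); // replace the reference with the full entity
    }
}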
December 9, 2014
· 32,184 Views · 1 Like
article thumbnail
Spring Integration Java DSL (pre Java 8): Line by Line Tutorial
Originally written by Artem Bilan on the SpringSource blog. Dear Spring Community! Recently we published the Spring Integration Java DSL: Line by line tutorial, which uses Java 8 Lambdas extensively. We received some feedback that this is good introduction to the DSL, but a similar tutorial is needed for those users, who can't move to the Java 8 or aren't yet familiar with Lambdas, but wish to take advantage So, to help those Spring Integration users who want to moved from XML configuration to Java & Annotation configuration, we provide this line-by-line tutorial to demonstrate that, even without Lambdas, we gain a lot from Spring Integration Java DSL usage. Although, most will agree that the lambda syntax provides for a more succinct definition. We analyse here the same Cafe Demo sample, but using the pre Java 8 variant for configuration. Many options are the same, so we just copy/paste their description here to achieve a complete picture. Since this Spring Integration Java DSL configuration is quite different to the Java 8 lambda style, it will be useful for all users to get a knowlage how we can achieve the same result with a rich variety of options provided by the Spring Integration Java DSL. The source code for our application is placed in a single class, which is a Boot application; significant lines are annotated with a number corresponding to the comments, which follow: @SpringBootApplication // 1 @IntegrationComponentScan // 2 public class Application { public static void main(String[] args) throws Exception { ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args); // 3 Cafe cafe = ctx.getBean(Cafe.class); // 4 for (int i = 1; i <= 100; i++) { // 5 Order order = new Order(i); order.addItem(DrinkType.LATTE, 2, false); order.addItem(DrinkType.MOCHA, 3, true); cafe.placeOrder(order); } System.out.println("Hit 'Enter' to terminate"); // 6 System.in.read(); ctx.close(); } @MessagingGateway // 7 public interface Cafe { @Gateway(requestChannel = "orders.input") // 8 void placeOrder(Order order); // 9 } private final AtomicInteger hotDrinkCounter = new AtomicInteger(); private final AtomicInteger coldDrinkCounter = new AtomicInteger(); // 10 @Autowired private CafeAggregator cafeAggregator; // 11 @Bean(name = PollerMetadata.DEFAULT_POLLER) public PollerMetadata poller() { // 12 return Pollers.fixedDelay(1000).get(); } @Bean @SuppressWarnings("unchecked") public IntegrationFlow orders() { // 13 return IntegrationFlows.from("orders.input") // 14 .split("payload.items", (Consumer) null) // 15 .channel(MessageChannels.executor(Executors.newCachedThreadPool()))// 16 .route("payload.iced", // 17 new Consumer>() { // 18 @Override public void accept(RouterSpec spec) { spec.channelMapping("true", "iced") .channelMapping("false", "hot"); // 19 } }) .get(); // 20 } @Bean public IntegrationFlow icedFlow() { // 21 return IntegrationFlows.from(MessageChannels.queue("iced", 10)) // 22 .handle(new GenericHandler() { // 23 @Override public Object handle(OrderItem payload, Map headers) { Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); System.out.println(Thread.currentThread().getName() + " prepared cold drink #" + coldDrinkCounter.incrementAndGet() + " for order #" + payload.getOrderNumber() + ": " + payload); return payload; // 24 } }) .channel("output") // 25 .get(); } @Bean public IntegrationFlow hotFlow() { // 26 return IntegrationFlows.from(MessageChannels.queue("hot", 10)) .handle(new GenericHandler() { @Override public Object handle(OrderItem payload, Map 
headers) { Uninterruptibles.sleepUninterruptibly(5, TimeUnit.SECONDS); // 27 System.out.println(Thread.currentThread().getName() + " prepared hot drink #" + hotDrinkCounter.incrementAndGet() + " for order #" + payload.getOrderNumber() + ": " + payload); return payload; } }) .channel("output") .get(); } @Bean public IntegrationFlow resultFlow() { // 28 return IntegrationFlows.from("output") // 29 .transform(new GenericTransformer() { // 30 @Override public Drink transform(OrderItem orderItem) { return new Drink(orderItem.getOrderNumber(), orderItem.getDrinkType(), orderItem.isIced(), orderItem.getShots()); // 31 } }) .aggregate(new Consumer() { // 32 @Override public void accept(AggregatorSpec aggregatorSpec) { aggregatorSpec.processor(cafeAggregator, null); // 33 } }, null) .handle(CharacterStreamWritingMessageHandler.stdout()) // 34 .get(); } @Component public static class CafeAggregator { // 35 @Aggregator // 36 public Delivery output(List drinks) { return new Delivery(drinks); } @CorrelationStrategy // 37 public Integer correlation(Drink drink) { return drink.getOrderNumber(); } } } Examining the code line by line... 1. @SpringBootApplication This new meta-annotation from Spring Boot 1.2. Includes @Configuration and@EnableAutoConfiguration. Since we are in a Spring Integration application and Spring Boot has auto-configuration for it, the @EnableIntegration is automatically applied, to initialize the Spring Integration infrastructure including an environment for the Java DSL -DslIntegrationConfigurationInitializer, which is picked up by theIntegrationConfigurationBeanFactoryPostProcessor from /META-INF/spring.factories. 2. @IntegrationComponentScan The Spring Integration analogue of @ComponentScan to scan components based on interfaces, (the Spring Framework's @ComponentScan only looks at classes). Spring Integration supports the discovery of interfaces annotated with @MessagingGateway (see #7 below). 3. ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args); The main method of our class is designed to start the Spring Boot application using the configuration from this class and starts an ApplicationContext via Spring Boot. In addition, it delegates command line arguments to the Spring Boot. For example you can specify --debug to see logs for the boot auto-configuration report. 4. Cafe cafe = ctx.getBean(Cafe.class); Since we already have an ApplicationContext we can start to interact with application. AndCafe is that entry point - in EIP terms a gateway. Gateways are simply interfaces and the application does not interact with the Messaging API; it simply deals with the domain (see #7 below). 5. for (int i = 1; i <= 100; i++) { To demonstrate the cafe "work" we intiate 100 orders with two drinks - one hot and one iced. And send the Order to the Cafe gateway. 6. System.out.println("Hit 'Enter' to terminate"); Typically Spring Integration application are asynchronous, hence to avoid early exit from themain Thread we block the main method until some end-user interaction through the command line. Non daemon threads will keep the application open but System.read()provides us with a mechanism to close the application cleanly. 7. @MessagingGateway The annotation to mark a business interface to indicate it is a gateway between the end-application and integration layer. It is an analogue of component from Spring Integration XML configuration. Spring Integration creates a Proxy for this interface and populates it as a bean in the application context. 
The purpose of this Proxy is to wrap parameters in a Message object and send it to the MessageChannel according to the provided options. 8. @Gateway(requestChannel = "orders.input") The method level annotation to distinct business logic by methods as well as by the target integration flows. In this sample we use a requestChannel reference of orders.input, which is a MessageChannel bean name of our IntegrationFlow input channel (see below #14). 9. void placeOrder(Order order); The interface method is a central point to interact from end-application with the integration layer. This method has a void return type. It means that our integration flow is one-wayand we just send messages to the integration flow, but don't wait for a reply. 10. private AtomicInteger hotDrinkCounter = new AtomicInteger(); private AtomicInteger coldDrinkCounter = new AtomicInteger(); Two counters to gather the information how our cafe works with drinks. 11. @Autowired private CafeAggregator cafeAggregator; The POJO for the Aggregator logic (see #33 and #35 below). Since it is a Spring bean, we can simply inject it even to the current @Configuration and use in any place below, e.g. from the .aggregate() EIP-method. 12. @Bean(name = PollerMetadata.DEFAULT_POLLER) public PollerMetadata poller() { The default poller bean. It is a analogue of component from Spring Integration XML configuration. Required for endpoints where the inputChannelis a PollableChannel. In this case, it is necessary for the two Cafe queues - hot and iced (see below #18). Here we use the Pollers factory from the DSL project and use its method-chain fluent API to build the poller metadata. Note that Pollers can be used directly from an IntegrationFlow definition, if a specific poller (rather than the default poller) is needed for an endpoint. 13. @Bean public IntegrationFlow orders() { The IntegrationFlow bean definition. It is the central component of the Spring Integration Java DSL, although it does not play any role at runtime, just during the bean registration phase. All other code below registers Spring Integration components (MessageChannel,MessageHandler, EventDrivenConsumer, MessageProducer, MessageSource etc.) in theIntegrationFlow object, which is parsed by the IntegrationFlowBeanPostProcessor to process those components and register them as beans in the application context as necessary (some elements, such as channels may already exist). 14. return IntegrationFlows.from("orders.input") The IntegrationFlows is the main factory class to start the IntegrationFlow. It provides a number of overloaded .from() methods to allow starting a flow from aSourcePollingChannelAdapter for a MessageSource implementations, e.g.JdbcPollingChannelAdapter; from a MessageProducer, e.g.WebSocketInboundChannelAdapter; or simply a MessageChannel. All ".from()" options have several convenient variants to configure the appropriate component for the start of theIntegrationFlow. Here we use just a channel name, which is converted to aDirectChannel bean definition during the bean definition phase while parsing theIntegrationFlow. In the Java 8 variant, we used here a Lambda definition - and thisMessageChannel has been implicitly created with the bean name based on theIntegrationFlow bean name. 15. .split("payload.items", (Consumer) null) Since our integration flow accepts messages through the orders.input channel, we are ready to consume and process them. The first EIP-method in our scenario is .split(). 
We know that the message payload from orders.input channel is an Order domain object, so we can simply use here a Spring (SpEL) Expression to return Collection. So, this performs the split EI pattern, and we send each collection entry as a separate message to the next channel. In the background, the .split() method registers aExpressionEvaluatingSplitter MessageHandler implementation and anEventDrivenConsumer for that MessageHandler, wiring in the orders.input channel as the inputChannel. The second argument for the .split() EIP-method is for an endpointConfigurer to customize options like autoStartup, requiresReply, adviceChain etc. We use herenull to show that we rely on the default options for the endpoint. Many of EIP-methods provide overloaded versions with and without endpointConfigurer. Currently.split(String expression) EIP-method without the endpointConfigurer argument is not available; this will be addressed in a future release. 16. .channel(MessageChannels.executor(Executors.newCachedThreadPool())) The .channel() EIP-method allows the specification of concrete MessageChannels between endpoints, as it is done via output-channel/input-channel attributes pair with Spring Integration XML configuration. By default, endpoints in the DSL integration flow definition are wired with DirectChannels, which get bean names based on theIntegrationFlow bean name and index in the flow chain. In this case we select a specificMessageChannel implementation from the Channels factory class; the selected channel here is an ExecutorChannel, to allow distribution of messages from the splitter to separate Threads, to process them in parallel in the downstream flow. 17. .route("payload.iced", The next EIP-method in our scenario is .route(), to send hot/iced order items to different Cafe kitchens. We again use here a SpEL expression to get the routingKey from the incoming message. In the Java 8 variant, we used a method-reference Lambda expression, but for pre Java 8 style we must use SpEL or an inline interface implementation. Many anonymous classes in a flow can make the flow difficult to read so we prefer SpEL in most cases. 18. new Consumer>() { The second argument of .route() EIP-method is a functional interface Consumer to specify ExpressionEvaluatingRouter options using a RouterSpec Builder. Since we don't have any choice with pre Java 8, we just provide here an inline implementation for this interface. 19. spec.channelMapping("true", "iced") .channelMapping("false", "hot"); With the Consumer>#accept()implementation we can provide desired AbstractMappingMessageRouter options. One of them is channelMappings, when we specify the routing logic by the result of router expresion and the target MessageChannel for the apropriate result. In this case iced andhot are MessageChannel names for IntegrationFlows below. 20. .get(); This finalizes the flow. Any IntegrationFlows.from() method returns anIntegrationFlowBuilder instance and this get() method extracts an IntegrationFlowobject from the IntegrationFlowBuilder configuration. Everything starting from the.from() and up to the method before the .get() is an IntegrationFlow definition. All defined components are stored in the IntegrationFlow and processed by theIntegrationFlowBeanPostProcessor during the bean creation phase. 21. @Bean public IntegrationFlow icedFlow() { This is the second IntegrationFlow bean definition - for iced drinks. Here we demonstrate that several IntegrationFlows can be wired together to create a single complex application. 
Note: it isn't recommended to inject one IntegrationFlow to another; it might cause unexpected behaviour. Since they provide Integration components for the bean registration and MessageChannels one of them, the best way to wire and inject is viaMessageChannel or @MessagingGateway interfaces. 22. return IntegrationFlows.from(MessageChannels.queue("iced", 10)) The iced IntegrationFlow starts from a QueueChannel that has a capacity of 10messages; it is registered as a bean with the name iced. As you remember we use this name as one of the route mappings (see above #19). In our sample, we use here a restricted QueueChannel to reflect the Cafe kitchen busy state from real life. And here is a place where we need that global poller for the next endpoint which is listening on this channel. 23. .handle(new GenericHandler() { The .handle() EIP-method of the iced flow demonstrates the concrete Cafe kitchen work. Since we can't minimize the code with something like Java 8 Lambda expression, we provide here an inline implementation for the GenericHandler functional interface with the expected payload type as the generic argument. With the Java 8 example, we distribute this.handle() between several subscriber subflows for a PublishSubscribeChannel. However in this case, the logic is all implemented in the one method. 24. Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); System.out.println(Thread.currentThread().getName() + " prepared cold drink #" + coldDrinkCounter.incrementAndGet() + " for order #" + payload.getOrderNumber() + ": " + payload); return payload; The business logic implementation for the current .handle() EIP-component. WithUninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); we just block the current Thread for some timeout to demonstrate how quickly the Cafe kitchen prepares a drink. After that we just report to STDOUT that the drink is ready and return the currentOrderItem from the GenericHandler for the next endpoint in our IntegrationFlow. In the background, the DSL framework registers a ServiceActivatingHandler for theMethodInvokingMessageProcessor to invoke the GenericHandler#handle at runtime. In addition, the framework registers a PollingConsumer endpoint for the QueueChannelabove. This endpoint relies on the default poller to poll messages from the queue. Of course, we always can use a specific poller for any concrete endpoint. In that case, we would have to provide a second endpointConfigurer argument to the .handle() EIP-method. 25. .channel("output") Since it is not the end of our Cafe scenario, we send the result of the current flow to theoutput channel using the convenient EIP-method .channel() and the name of theMessageChannel bean (see below #29). This is the logical end of the current iced drink subflow, so we use the .get() method to return the IntegrationFlow. Flows that end with a reply-producing handler that don't have a final .channel() will return the reply to the message replyChannel header. 26. @Bean public IntegrationFlow hotFlow() { The IntegrationFlow definition for hot drinks. It is similar to the previous iced drinks flow, but with specific hot business logic. It starts from the hot QueueChannel which is mapped from the router above. 27. Uninterruptibles.sleepUninterruptibly(5, TimeUnit.SECONDS); The sleepUninterruptibly for hot drinks. Right, we need more time to boil the water! 28. @Bean public IntegrationFlow resultFlow() { One more IntegrationFlow bean definition to prepare the Delivery for the Cafe client based on the Drinks. 29. 
return IntegrationFlows.from("output") The resultFlow starts from the DirectChannel, which is created during the bean definition phase with this provided name. You should remember that we use the outputchannel name from the Cafe kitchens flows in the last .channel() in those definitions. 30. .transform(new GenericTransformer() { The .transform() EIP-method is for the appropriate pattern implementation and expects some object to convert one payload to another. In our sample we use an inline implementation of the GenericTransformer functional interface to convert OrderItem to Drink and we specify that using generic arguments. In the background, the DSL framework registers aMessageTransformingHandler and an EventDrivenConsumer endpoint with default options to consume messages from the output MessageChannel. 31. public Drink transform(OrderItem orderItem) { return new Drink(orderItem.getOrderNumber(), orderItem.getDrinkType(), orderItem.isIced(), orderItem.getShots()); } The business-specific GenericTransformer#transform() implementation to demonstrate how we benefit from Java Generics to transform one payload to another. Note: Spring Integration uses ConversionService before any method invocation and if you provide some specific Converter implementation, some domain payload can be converted to another automatically, when the framework has an appropriate registered Converter. 32. .aggregate(new Consumer() { The .aggregate() EIP-method provides options to configure anAggregatingMessageHandler and its endpoint, similar to what we can do with the component when using Spring Integration XML configuration. Of course, with the Java DSL we have more power to configure the aggregator in place, without any other extra beans. However we demonstrate here an aggregator configuration with annotations (see below #35). From the Cafe business logic perspective we compose the Delivery for the initial Order, since we .split() the original order to the OrderItems near the beginning. 33. public void accept(AggregatorSpec aggregatorSpec) { aggregatorSpec.processor(cafeAggregator, null); } An inline implementation of the Consumer for the AggregatorSpec. Using theaggregatorSpec Builder we can provide desired options for the aggregator component, which will be registered as an AggregatingMessageHandler bean. Here we just provide theprocessor as a reference to the autowired (see #11 above) CafeAggregator component (see #35 below). The second argument of the .processor() option is methodName. Since we are relying on the aggregator annotation configuration for the POJO, we don't need to provide the method here and the framework will determine the correct POJO methods in the background. 34. .handle(CharacterStreamWritingMessageHandler.stdout()) It is the end of our flow - the Delivery is delivered to the client! We just print here the message payload to STDOUT using out-of-the-boxCharacterStreamWritingMessageHandler from Spring Integration Core. This is a case to show how existing components from Spring Integration Core (and its modules) can be used from the Java DSL. 35. @Component public static class CafeAggregator { The bean to specify the business logic for the aggregator above. This bean is picked up by the @ComponentScan, which is a part of the @SpringBootApplication meta-annotation (see above #1). So, this component becomes a bean and we can automatically wire (@Autowired) it to other components in the application context (see #11 above). 36. 
@Aggregator public Delivery output(List drinks) { return new Delivery(drinks); } The POJO-specific MessageGroupProcessor to build the output payload based on the payloads from aggregated messages. Since we mark this method with the @Aggregatorannotation, the target AggregatingMessageHandler can extract this method for theMethodInvokingMessageGroupProcessor. 37. @CorrelationStrategy public Integer correlation(Drink drink) { return drink.getOrderNumber(); } The POJO-specific CorrelationStrategy to extract the custom correlationKey from each inbound aggregator message. Since we mark this method with @CorrelationStrategyannotation the target AggregatingMessageHandler can extract this method for theMethodInvokingCorrelationStrategy. There is a similar self-explained@ReleaseStrategy annotation, but we rely in our Cafe sample just on the defaultSequenceSizeReleaseStrategy, which is based on the sequenceDetails message header populated by the splitter from the beginning of our integration flow. Well, we have finished describing the Cafe Demo sample based on the Spring Integration Java DSL when Java Lambda support is not available. Compare it with XML sample and also seeLambda support tutorial to get more information regarding Spring Integration. As you can see, using the DSL without lambdas is a little more verbose because you need to provide boilerplate code for inline anonymous implementations of functional interfaces. However, we believe it is important to support the use of the DSL for users who can't yet move to Java 8. Many of the DSL benefits (fluent API, compile-time validation etc) are available for all users. The use of lambdas continues the Spring Framework tradition of reducing or eliminating boilerplate code, so we encourage users to try Java 8 and lambdas and to encourage their organizations to consider allowing the use of Java 8 for Spring Integration applications. In addition see the Reference Manual for more information. As always, we look forward to your comments and feedback (StackOverflow (spring-integration tag), Spring JIRA, GitHub) and we very much welcome contributions! Thank you for your time and patience to read this!
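As a final compact recap of the non-lambda style, the essential shape of any such flow is a builder chain plus inline implementations of the callback interfaces. The bean, channel and payload types below are illustrative only, not taken from the Cafe sample:

@Bean
public IntegrationFlow echoFlow() {
    return IntegrationFlows.from("echo.input")                       // input channel by name
            .handle(new GenericHandler<String>() {                   // inline GenericHandler instead of a lambda
                @Override
                public Object handle(String payload, Map<String, Object> headers) {
                    return payload.toUpperCase();                    // any business logic could go here
                }
            })
            .handle(CharacterStreamWritingMessageHandler.stdout())   // print the result
            .get();                                                  // extract the IntegrationFlow
}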
December 8, 2014
· 12,339 Views
article thumbnail
Spring Integration Java DSL: Line by Line Tutorial
Originally authored by Artem Bilan on the SpringSource blog Dear Spring Community! Just after the Spring Integration Java DSL 1.0 GA release announcement I want to introduce the Spring Integration Java DSL to you as a line by line tutorial based on the classic Cafe Demo integration sample. We describe here Spring Boot support, Spring Framework Java and Annotation configuration, the IntegrationFlow feature and pay tribute to Java 8 Lambdasupport which was an inspiration for the DSL style. Of course, it is all backed by the Spring Integration Core project. But, before we launch into the description of the Cafe demonstration app here's a shorter example just to get started... @Configuration @EnableAutoConfiguration @IntegrationComponentScan public class Start { public static void main(String[] args) throws InterruptedException { ConfigurableApplicationContext ctx = SpringApplication.run(Start.class, args); List strings = Arrays.asList("foo", "bar"); System.out.println(ctx.getBean(Upcase.class).upcase(strings)); ctx.close(); } @MessagingGateway public interface Upcase { @Gateway(requestChannel = "upcase.input") Collection upcase(Collection strings); } @Bean public IntegrationFlow upcase() { return f -> f .split() // 1 .transform(String::toUpperCase) // 2 .aggregate(); // 3 } } We will leave the description of the infrastructure (annotations etc) to the main cafe flow description. Here, we want you to concentrate on the last @Bean, the IntegrationFlow as well as the gateway method which sends messages to that flow. In the main method we send a collection of strings to the gateway and print the results to STDOUT. The flow first splits the collection into individual Strings (1); each string is then transformed to upper case (2) and finally we re-aggregate them back into a collection (3) Since that's the end of the flow, the framework returns the result of the aggregation back to the gateway and the new payload becomes the return value from the gateway method. The equivalent XML configuration might be... or... Cafe Demo The purpose of the Cafe Demo application is to demonstrate how Enterprise Integration Patterns (EIP) can be used to reflect the order-delivery scenario in a real life cafe. With this application, we handle several drink orders - hot and iced. After running the application we can see in the standard output (System.out.println) how cold drinks are prepared quicker than hot. However the delivery for the whole order is postponed until the hot drink is ready. To reflect the domain model we have several classes: Order, OrderItem, Drink andDelivery. They all are mentioned in the integration scenario, but we won't analyze them here, because they are simple enough. 
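For orientation, the two domain classes used most heavily below might look roughly like this; the exact shape is an assumption inferred from the flow definition, not copied from the sample:

public class Order {
    private final int number;
    private final List<OrderItem> items = new ArrayList<>();

    public Order(int number) { this.number = number; }

    public void addItem(DrinkType drinkType, int shots, boolean iced) {
        this.items.add(new OrderItem(this.number, drinkType, shots, iced));
    }

    public List<OrderItem> getItems() { return this.items; }
}

public class OrderItem {
    private final int orderNumber;
    private final DrinkType drinkType;
    private final int shots;
    private final boolean iced;

    public OrderItem(int orderNumber, DrinkType drinkType, int shots, boolean iced) {
        this.orderNumber = orderNumber;
        this.drinkType = drinkType;
        this.shots = shots;
        this.iced = iced;
    }

    public int getOrderNumber() { return this.orderNumber; }
    public DrinkType getDrinkType() { return this.drinkType; }
    public int getShots() { return this.shots; }
    public boolean isIced() { return this.iced; }
}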
The source code for our application is placed only in a single class; significant lines are annotated with a number corresponding to the comments, which follow: @SpringBootApplication // 1 @IntegrationComponentScan // 2 public class Application { public static void main(String[] args) throws Exception { ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);// 3 Cafe cafe = ctx.getBean(Cafe.class); // 4 for (int i = 1; i <= 100; i++) { // 5 Order order = new Order(i); order.addItem(DrinkType.LATTE, 2, false); //hot order.addItem(DrinkType.MOCHA, 3, true); //iced cafe.placeOrder(order); } System.out.println("Hit 'Enter' to terminate"); // 6 System.in.read(); ctx.close(); } @MessagingGateway // 7 public interface Cafe { @Gateway(requestChannel = "orders.input") // 8 void placeOrder(Order order); // 9 } private AtomicInteger hotDrinkCounter = new AtomicInteger(); private AtomicInteger coldDrinkCounter = new AtomicInteger(); // 10 @Bean(name = PollerMetadata.DEFAULT_POLLER) public PollerMetadata poller() { // 11 return Pollers.fixedDelay(1000).get(); } @Bean public IntegrationFlow orders() { // 12 return f -> f // 13 .split(Order.class, Order::getItems) // 14 .channel(c -> c.executor(Executors.newCachedThreadPool()))// 15 .route(OrderItem::isIced, mapping -> mapping // 16 .subFlowMapping("true", sf -> sf // 17 .channel(c -> c.queue(10)) // 18 .publishSubscribeChannel(c -> c // 19 .subscribe(s -> // 20 s.handle(m -> sleepUninterruptibly(1, TimeUnit.SECONDS)))// 21 .subscribe(sub -> sub // 22 .transform(item -> Thread.currentThread().getName() + " prepared cold drink #" + this.coldDrinkCounter.incrementAndGet() + " for order #" + item.getOrderNumber() + ": " + item) // 23 .handle(m -> System.out.println(m.getPayload())))))// 24 .subFlowMapping("false", sf -> sf // 25 .channel(c -> c.queue(10)) .publishSubscribeChannel(c -> c .subscribe(s -> s.handle(m -> sleepUninterruptibly(5, TimeUnit.SECONDS)))// 26 .subscribe(sub -> sub .transform(item -> Thread.currentThread().getName() + " prepared hot drink #" + this.hotDrinkCounter.incrementAndGet() + " for order #" + item.getOrderNumber() + ": " + item) .handle(m -> System.out.println(m.getPayload())))))) .transform(orderItem -> new Drink(orderItem.getOrderNumber(), orderItem.getDrinkType(), orderItem.isIced(), orderItem.getShots())) // 27 .aggregate(aggregator -> aggregator // 28 .outputProcessor(group -> // 29 new Delivery(group.getMessages() .stream() .map(message -> (Drink) message.getPayload()) .collect(Collectors.toList()))) // 30 .correlationStrategy(m -> ((Drink) m.getPayload()).getOrderNumber()), null) // 31 .handle(CharacterStreamWritingMessageHandler.stdout()); // 32 } } Examining the code line by line... 1. @SpringBootApplication This new meta-annotation from Spring Boot 1.2. Includes @Configuration and@EnableAutoConfiguration. Since we are in a Spring Integration application and Spring Boot has auto-configuration for it, the @EnableIntegration is automatically applied, to initialize the Spring Integration infrastructure including an environment for the Java DSL -DslIntegrationConfigurationInitializer, which is picked up by theIntegrationConfigurationBeanFactoryPostProcessor from /META-INF/spring.factories. 2. @IntegrationComponentScan The Spring Integration analogue of @ComponentScan to scan components based on interfaces, (the Spring Framework's @ComponentScan only looks at classes). Spring Integration supports the discovery of interfaces annotated with @MessagingGateway (see #7 below). 3. 
ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args); The main method of our class is designed to start the Spring Boot application using the configuration from this class and starts an ApplicationContext via Spring Boot. In addition, it delegates command line arguments to the Spring Boot. For example you can specify --debug to see logs for the boot auto-configuration report. 4. Cafe cafe = ctx.getBean(Cafe.class); Since we already have an ApplicationContext we can start to interact with application. AndCafe is that entry point - in EIP terms a gateway. Gateways are simply interfaces and the application does not interact with the Messaging API; it simply deals with the domain (see #7 below). 5. for (int i = 1; i <= 100; i++) { To demonstrate the cafe "work" we intiate 100 orders with two drinks - one hot and one iced. And send the Order to the Cafe gateway. 6. System.out.println("Hit 'Enter' to terminate"); Typically Spring Integration application are asynchronous, hence to avoid early exit from themain Thread we block the main method until some end-user interaction through the command line. Non daemon threads will keep the application open but System.read()provides us with a mechanism to close the application cleanly. 7. @MessagingGateway The annotation to mark a business interface to indicate it is a gateway between the end-application and integration layer. It is an analogue of component from Spring Integration XML configuration. Spring Integration creates a Proxy for this interface and populates it as a bean in the application context. The purpose of this Proxy is to wrap parameters in a Message object and send it to the MessageChannel according to the provided options. 8. @Gateway(requestChannel = "orders.input") The method level annotation to distinct business logic by methods as well as by the target integration flows. In this sample we use a requestChannel reference of orders.input, which is a MessageChannel bean name of our IntegrationFlow input channel (see below #13). 9. void placeOrder(Order order); The interface method is a central point to interact from end-application with the integration layer. This method has a void return type. It means that our integration flow is one-wayand we just send messages to the integration flow, but don't wait for a reply. 10. private AtomicInteger hotDrinkCounter = new AtomicInteger(); private AtomicInteger coldDrinkCounter = new AtomicInteger(); Two counters to gather the information how our cafe works with drinks. 11. @Bean(name = PollerMetadata.DEFAULT_POLLER) public PollerMetadata poller() { The default poller bean. It is a analogue of component from Spring Integration XML configuration. Required for endpoints where the inputChannelis a PollableChannel. In this case, it is necessary for the two Cafe queues - hot and iced (see below #18). Here we use the Pollers factory from the DSL project and use its method-chain fluent API to build the poller metadata. Note that Pollers can be used directly from an IntegrationFlow definition, if a specific poller (rather than the default poller) is needed for an endpoint. 12. @Bean public IntegrationFlow orders() { The IntegrationFlow bean definition. It is the central component of the Spring Integration Java DSL, although it does not play any role at runtime, just during the bean registration phase. All other code below registers Spring Integration components (MessageChannel,MessageHandler, EventDrivenConsumer, MessageProducer, MessageSource etc.) 
in theIntegrationFlow object, which is parsed by the IntegrationFlowBeanPostProcessor to process those components and register them as beans in the application context as necessary (some elements, such as channels may already exist). 13. return f -> f The IntegrationFlow is a Consumer functional interface, so we can minimize our code and concentrate just only on the integration scenario requirements. Its Lambda acceptsIntegrationFlowDefinition as an argument. This class offers a comprehensive set of methods which can be composed to the chain. We call these EIP-methods, because they provide implementations for EI patterns and populate components from Spring Integration Core. During the bean registration phase, the IntegrationFlowBeanPostProcessor converts this inline (Lambda) IntegrationFlow to a StandardIntegrationFlow and processes its components. The same we can achieve using IntegrationFlows factory (e.g.IntegrationFlow.from("channelX"). ... .get()), but we find the Lambda definition more elegant. An IntegrationFlow definition using a Lambda populates DirectChannel as an inputChannel of the flow and it is registered in the application context as a bean with the name orders.input in this our sample (flow bean name + ".input"). That's why we use that name for the Cafe gateway. 14. .split(Order.class, Order::getItems) Since our integration flow accepts message through the orders.input channel, we are ready to consume and process them. The first EIP-method in our scenario is .split(). We know that the message payload from orders.input channel is an Order domain object, so we can simply use its type here and use the Java 8 method-reference feature. The first parameter is a type of message payload we expect, and the second is a method reference to the getItems() method, which returns Collection. So, this performs thesplit EI pattern, when we send each collection entry as a separate message to the next channel. In the background, the .split() method registers a MethodInvokingSplitterMessageHandler implementation and the EventDrivenConsumer for thatMessageHandler, and wiring in the orders.input channel as the inputChannel. 15. .channel(c -> c.executor(Executors.newCachedThreadPool())) The .channel() EIP-method allows the specification of concrete MessageChannels between endpoints, as it is done via output-channel/input-channel attributes pair with Spring Integration XML configuration. By default, endpoints in the DSL integration flow definition are wired with DirectChannels, which get the bean names based on theIntegrationFlow bean name and index in the flow chain. In this case we use anotherLambda expression, which selects a specific MessageChannel implementation from itsChannels factory and configures it with the fluent API. The current channel here is anExecutorChannel, to allow to distribute messages from the splitter to separateThreads, to process them in parallel in the downstream flow. 16. .route(OrderItem::isIced, mapping -> mapping The next EIP-method in our scenario is .route(), to send hot/iced order items to different Cafe kitchens. We again use here a method reference (isIced()) to get theroutingKey from the incoming message. The second Lambda parameter represents arouter mapping - something similar to sub-element for the component from Spring Integration XML configuration. However since we are using Java we can go a bit further with its Lambda support! The Spring Integration Java DSL introduced thesubflow definition for routers in addition to traditional channel mapping. 
Each subflow is executed depending on the routing and, if the subflow produces a result, it is passed to the next element in the flow definition after the router. 17. .subFlowMapping("true", sf -> sf Specifies the integration flow for the current router's mappingKey. We have in this samples two subflows - hot and iced. The subflow is the same IntegrationFlow functional interface, therefore we can use its Lambda exactly the same as we do on the top levelIntegrationFlow definition. The subflows don't have any runtime dependency with its parent, it's just a logical relationship. 18. .channel(c -> c.queue(10)) We already know that a Lambda definition for the IntegrationFlow starts from[FLOW_BEAN_NAME].input DirectChannel, so it may be a question "how does it work here if we specify .channel() again?". The DSL takes care of such a case and wires those two channels with a BridgeHandler and endpoint. In our sample, we use here a restrictedQueueChannel to reflect the Cafe kitchen busy state from real life. And here is a place where we need that global poller for the next endpoint which is listening on this channel. 19. .publishSubscribeChannel(c -> c The .publishSubscribeChannel() EIP-method is a variant of the .channel() for aMessageChannels.publishSubscribe(), but with the .subscribe() option when we can specify subflow as a subscriber to the channel. Right, subflow one more time! So, subflows can be specified to any depth. Independently of the presence .subscribe() subflows, the next endpoint in the parent flow is also a subscriber to this .publishSubscribeChannel(). Since we are in the .route() subflow already, the last subscriber is an implicit BridgeHandlerwhich just pops the message to the top level - to a similar implicit BridgeHandler to pop message to the next .transform() endpoint in the main flow. And one more note about this current position of our flow: the previous EIP-method is .channel(c -> c.queue(10)) and this one is for MessageChannel too. So, they are again tied with an implicit BridgeHandleras well. In a real application we could avoid this .publishSubscribeChannel() just with the single .handle() for the Cafe kitchen, but our goal here to cover DSL features as much as possible. That's why we distribute the kitchen work to several subflows for the samePublishSubscribeChannel. 20. .subscribe(s -> The .subscribe() method accepts an IntegrationFlow as parameter, which can be specified as Lambda to configure subscriber as subflow. We use here several subflow subscribers to avoid multi-line Lambdas and cover some DSL as we as Spring Integration capabilities. 21. s.handle(m -> sleepUninterruptibly(1, TimeUnit.SECONDS))) Here we use a simple .handle() EIP-method just to block the current Thread for some timeout to demonstrate how quickly the Cafe kitchen prepares a drink. Here we use Google Guava Uninterruptibles.sleepUninterruptibly, to avoid using a try...catch block within the Lambda expression, although you can do that and your Lambda will be multi-line. Or you can move that code to a separate method and use it here as method reference. Since we don't use any Executor on the .publishSubscribeChannel() all subscribers will beperformed sequentially on the same Thread; in our case it is one of TaskScheduler's Threads from poller on the previous QueueChannel. That's why this sleep blocks all downstream process and allows to demonstrate the busy state for that restricted to 10QueueChannel. 22. 
.subscribe(sub -> sub The next subflow subscriber which will be performed only after that sleep with 1 second foriced drink. We use here one more subflow because .handle() of previous one is one-way with the nature of the Lambda for MessageHandler. That's why, to go ahead with process of our whole flow, we have several subscribers: some of subflows finish after their work and don't return anything to the parent flow. 23. .transform(item -> Thread.currentThread().getName() + " prepared cold drink #" + this.coldDrinkCounter.incrementAndGet() + " for order #" + item.getOrderNumber() + ": " + item) The transformer in the current subscriber subflow is to convert the OrderItem to the friendly STDOUT message for the next .handle. Here we see the use of generics with the Lambda expression. This is implemented using the GenericTransformer functional interface. 24. .handle(m -> System.out.println(m.getPayload()))))) The .handle() here just to demonstrate how to use Lambda expression to print thepayload to STDOUT. It is a signal that our drink is ready. After that the final (implicit) subscriber to the PublishSubscribeChannel just sends the message with the OrderItemto the .transform() in the main flow. 25. .subFlowMapping("false", sf -> sf The .subFlowMapping() for the hot drinks. Actually it is similar to the previous iceddrinks subflow, but with specific hot business logic. 26. s.handle(m -> sleepUninterruptibly(5, TimeUnit.SECONDS))) The sleepUninterruptibly for hot drinks. Right, we need more time to boil the water! 27. .transform(orderItem -> new Drink(orderItem.getOrderNumber(), orderItem.getDrinkType(), orderItem.isIced(), orderItem.getShots())) The main OrderItem to Drink transformer, which is performed when the .route()subflow returns its result after the Cafe kitchen subscribers have finished preparing the drink. 28. .aggregate(aggregator -> aggregator The .aggregate() EIP-method provides similar options to configure anAggregatingMessageHandler and its endpoint, like we can do with the component when using Spring Integration XML configuration. Of course, with the Java DSL we have more power to configure the aggregator just in place, without any other extra beans. And Lambdas come to the rescue again! From the Cafe business logic perspective we compose theDelivery for the initial Order, since we .split() the original order to the OrderItems near the beginning. 29. .outputProcessor(group -> The .outputProcessor() of the AggregatorSpec allows us to emit a custom result after aggregator completes the group. It's an analogue of ref/method from the component or the @Aggregator annotation on a POJO method. Our goal here to compose aDelivery for all Drinks. 30. new Delivery(group.getMessages() .stream() .map(message -> (Drink) message.getPayload()) .collect(Collectors.toList()))) As you see we use here the Java 8 Stream feature for Collection. We iterate over messages from the released MessageGroup and convert (map) each of them to its Drinkpayload. The result of the Stream (.collect()) (a list of Drinks) is passed to theDelivery constructor. The Message with this new Delivery payload is sent to the next endpoint in our Cafe scenario. 31. .correlationStrategy(m -> ((Drink) m.getPayload()).getOrderNumber()), null) The .correlationStrategy() Lambda demonstrates how we can customize an aggregator behaviour. 
Of course, we can rely here just only on a built-in SequenceDetails from Spring Integration, which is populated by default from .split() in the beginning of our flow to each split message, but the Lambda sample for the CorrelationStrategy is included for illustration. (With XML, we could have used a correlation-expression or a customCorrelationStrategy). The second argument in this line for the .aggregate() EIP-method is for the endpointConfigurer to customize options like autoStartup,requiresReply, adviceChain etc. We use here null to show that we rely on the default options for the endpoint. Many of EIP-methods provide overloaded versions with and withoutendpointConfigurer, but .aggregate() requires an endpoint argument, to avoid an explicit cast for the AggregatorSpec Lambda argument. 32. .handle(CharacterStreamWritingMessageHandler.stdout()); It is the end of our flow - the Delivery is delivered to the client! We just print here the message payload to STDOUT using out-of-the-boxCharacterStreamWritingMessageHandler from Spring Integration Core. This is a case to show how existing components from Spring Integration Core (and its modules) can be used from the Java DSL. Well, we have finished describing the Cafe Demo sample based on the Spring Integration Java DSL. Compare it with XML sample to get more information regarding Spring Integration. This is not an overall tutorial to the DSL stuff. We don't review here theendpointConfigurer options, Transformers factory, the IntegrationComponentSpechierarchy, the NamespaceFactories, how we can specify several IntegrationFlow beans and wire them to a single application etc., see the Reference Manual for more information. At least this line-by-line tutorial should show you Spring Integration Java DSL basics and its seamless fusion between Spring Framework Java & Annotation configuration, Spring Integration foundation and Java 8 Lambda support! Also see the si4demo to see the evolution of Spring Integration including the Java DSL, as shown at the 2014 SpringOne/2GX Conference. (Video should be available soon). As always, we look forward to your comments and feedback (StackOverflow (spring-integration tag), Spring JIRA, GitHub) and we very much welcome contributions! P.S. Even if this tutorial is fully based on the Java 8 Lambda support, we don't want to miss pre Java 8 users, we are going to provide similar non-Lambda blog post. Stay tuned!
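As a final illustration of how much the lambda style trims away, here is a trivially small flow in the same spirit; the names are illustrative and not taken from the Cafe sample:

@Bean
public IntegrationFlow echoFlow() {
    // For a lambda-based flow the input channel "echoFlow.input" is created implicitly
    return f -> f
            .<String>handle((payload, headers) -> payload.toUpperCase())  // GenericHandler as a lambda
            .handle(CharacterStreamWritingMessageHandler.stdout());       // print the result
}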
December 1, 2014
· 19,788 Views
article thumbnail
Spring Integration Java DSL 1.0 GA Released
[This article was written by Artem Bilan.] Dear Spring community, As we promised in the Release Candidate blog post, we are pleased to announce that the Spring Integration Java DSL 1.0 GA is now available. As usual, use the Release Repository with Maven or Gradle, or download a distribution archive, to give it a spin. See the project home page for more information. First of all, we are glad to share with you that on Nov 12, 2014, DZone research recognized Spring Integration as the leader in the ESB / Integration framework space, leading with 42% market share, in a publication of their recent survey results. And the report is the most popular DZone Guide in November, with more than 12,000 downloads already! Don't miss it: very exciting. We hope the release of the Spring Integration Java DSL adds more excitement! Many thanks to all contributors, including several who are new to the community. The release includes just a few bug fixes since the release candidate, and a lot of JavaDocs! Not specifically related to the release, I want to present here some resources on the matter:

- We are observing many valuable DSL questions on Stack Overflow.
- Josh Long's tech tip showing how we can use Spring Boot, REST, Spring Integration 4.1 WebSocket support and the Spring Integration Java DSL together, plus Java 8 features.
- The Jdbc Splitter implementation in the project tests.
- My gist demonstrating how we can use Reactor Streams together with the Spring Integration Java DSL.
- Dave Syer has started to use the Spring Integration Java DSL in the Spring Cloud Bus project.
- Don't miss the si4demo to see the evolution of Spring Integration, including the Java DSL, as shown at the 2014 SpringOne/2GX Conference. (Video should be available soon.)

Special thanks to Biju Kunjummen, who has written some nice articles on DZone introducing the Spring Integration Java DSL: https://dzone.com/articles/spring-integration-java-dsl, https://dzone.com/articles/spring-integration-java-dsl-0. And of course, with the latest Spring XD, we can build Modules based on @Configuration, including Spring Integration Java DSL IntegrationFlow definitions. Just after this announcement I'm going to publish a DSL Tutorial to explain concepts and features using the Java DSL version of the Cafe Demo sample as material. As always, we look forward to your comments and feedback (StackOverflow (spring-integration tag), Spring JIRA, GitHub) and we very much welcome contributions!
November 25, 2014
· 4,918 Views
article thumbnail
Introducing the Spring YARN framework for Developing Apache Hadoop YARN Applications
Originally posted on the SpringSource blog by Janne Valkealahti. We're super excited to let the cat out of the bag and release support for writing YARN-based applications as part of the Spring for Apache Hadoop 2.0 M1 release. In this blog post I'll introduce you to YARN, what you can do with it, and how Spring simplifies the development of YARN-based applications.

If you have been following the Hadoop community over the past year or two, you've probably seen a lot of discussion around YARN and the next version of Hadoop's MapReduce, called MapReduce v2. YARN (Yet Another Resource Negotiator) is a component of the MapReduce project created to overcome some performance issues in Hadoop's original design. The fundamental idea of MapReduce v2 is to split the functionalities of the JobTracker, Resource Management and Job Scheduling/Monitoring, into separate daemons. The idea is to have a global Resource Manager (RM) and a per-application Application Master (AM). A generic diagram of YARN component dependencies can be found in the YARN architecture documentation. MapReduce Version 2 is an application running on top of YARN. It is also possible to build similar custom YARN-based applications that have nothing to do with MapReduce; they are simply applications running on YARN. However, writing a custom YARN-based application is difficult. The YARN APIs are low-level infrastructure APIs and not developer APIs. Take a look at the documentation for writing a YARN application to get an idea of what is involved.

Starting with the 2.0 version, Spring for Apache Hadoop introduces the Spring YARN sub-project to provide support for building Spring-based YARN applications. This support steps in to make development easier. "Spring handles the infrastructure so you can focus on your application" applies to writing Hadoop applications as well as other types of Java applications. Spring's YARN support also makes it easier to test your YARN application. With Spring's YARN support, you use the familiar concepts of the Spring Framework itself, including configuration and, generally speaking, everything you can already do in your application. At a high level, Spring YARN provides three different components, YarnClient, YarnAppmaster and YarnContainer, which together can be called a Spring YARN Application. We provide default implementations for all components while still giving the end user the option to customize as much as they want. Let's take a quick look at a very simplistic Spring YARN application which runs some custom code in a Hadoop cluster.

The YarnClient is used to communicate with YARN's Resource Manager. It provides management actions like submitting new application instances, listing applications and killing running applications. When submitting applications from the YarnClient, the main concerns relate to how the Application Master is configured and launched. Both the YarnAppmaster and YarnContainer share the same common launch context configuration logic, so you'll see a lot of similarities in the YarnClient and YarnAppmaster configuration. Similar to how the YarnClient defines the launch context for the YarnAppmaster, the YarnAppmaster defines the launch context for the YarnContainer. The launch context defines the commands to start the container, localized files, command line parameters, environment variables and resource limits (memory, CPU). The YarnContainer is a worker that does the heavy lifting of what a YARN application will actually do.
The YarnAppmaster communicates with the YARN Resource Manager and starts and stops YarnContainers accordingly. You can create a Spring application that launches an ApplicationMaster by using the YARN XML namespace to define a Spring Application Context. The context configuration for the YarnClient defines the launch context for the YarnAppmaster. This includes resources and libraries needed by the YarnAppmaster and its environment settings. An example of this is shown below. Note: Future releases will provide a Java-based API for configuration, similar to what is done in Spring Security 3.2.

The purpose of the YarnAppmaster is to control the instance of a running application. The YarnAppmaster is responsible for controlling the lifecycle of all its YarnContainers and of the whole running application after the application is submitted, as well as of itself. The example above defines a context configuration for the YarnAppmaster. Similar to what we saw in the YarnClient configuration, we define local resources for the YarnContainer and its environment. The classpath setting picks up Hadoop jars as well as your own application jars in default locations; change the setting if you want to use non-default directories. Also within the YarnAppmaster we define the components handling container allocation and bootstrapping. The allocator component interacts with the YARN Resource Manager to handle resource scheduling. The runner component is responsible for bootstrapping the allocated containers.

The example above defines a simple YarnContainer context configuration. To implement the functionality of the container, you implement the YarnContainer interface. The YarnContainer interface is similar to Java's Runnable interface: it has a run() method, as well as two additional methods for receiving environment and command line information. Below is a simple hello world application that will be run inside of a YARN container:

public class MyCustomYarnContainer implements YarnContainer {

    private static final Log log = LogFactory.getLog(MyCustomYarnContainer.class);

    @Override
    public void run() {
        log.info("Hello from MyCustomYarnContainer");
    }

    @Override
    public void setEnvironment(Map environment) {}

    @Override
    public void setParameters(Properties parameters) {}
}

We just showed the configuration of a Spring YARN Application and the core application logic, so what remains is how to bootstrap the application to run inside the Hadoop cluster. The utility class CommandLineClientRunner provides this functionality. You can use CommandLineClientRunner either manually from a command line or from your own code.

# java -cp org.springframework.yarn.client.CommandLineClientRunner application-context.xml yarnClient -submit

A Spring YARN Application is packaged into a jar file which can then be transferred into HDFS with the rest of the dependencies. A YarnClient can transfer all needed libraries into HDFS during the application submit process, but generally speaking it is more advisable to do this manually in order to avoid unnecessary network I/O. Your application won't change until a new version is created, so it can be copied into HDFS prior to the first application submission. You can, for example, use Hadoop's hdfs dfs -copyFromLocal command. Below you can see an example of a typical project setup.
src/main/java/org/example/MyCustomYarnContainer.java
src/main/resources/application-context.xml
src/main/resources/appmaster-context.xml
src/main/resources/container-context.xml

We'll make a bet that you have figured out by now that you are not actually configuring YARN itself; instead, you are configuring Spring Application Contexts for all three components: YarnClient, YarnAppmaster and YarnContainer. We have just scratched the surface of what we can do with Spring YARN. While we're preparing more blog posts, go ahead and check the existing samples on GitHub. To see the concepts we described in this blog post in action, see the multi-context example in our samples repository. Future blog posts will cover topics like unit testing and more advanced YARN application development.
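As a hedged aside (not part of the original post), invoking the same submit step from your own code can be as simple as delegating to the runner's main method with the arguments shown in the command line above; the context file and bean name simply follow that example:

import org.springframework.yarn.client.CommandLineClientRunner;

public class ClientLauncher {

    public static void main(String[] args) {
        // Equivalent to the command-line invocation above: submit the application
        // defined in application-context.xml using the bean named "yarnClient".
        CommandLineClientRunner.main(new String[] {
                "application-context.xml", "yarnClient", "-submit"
        });
    }
}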
September 11, 2013
· 13,356 Views
article thumbnail
Spring Security 3.2.0 RC1 Highlights: Security Headers
This post was originally authored by Rob Winch from SpringSource. This is my last post in a two-part series on Spring Security 3.2.0.RC1. My previous post discussed Spring Security's CSRF protection. In this post we will discuss how to use Spring Security to add various response headers to help secure your application.

SECURITY HEADERS

Many of the new Spring Security features in 3.2.0.RC1 are implemented by adding headers to the response. The foundation for these features came from hard work by Marten Deinum. If the name sounds familiar, it may be because one of his 10K+ posts on the Spring Forums has helped you out. If you are using XML configuration, you can add all of the default headers to the response using Spring Security's <headers /> element with no child elements. If you are using Spring Security's Java configuration, all of the default security headers are added by default. They can be disabled using the Java configuration below:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers().disable()
            ...;
    }
}

The remainder of this post will discuss each of the default headers in more detail: Cache Control, Content Type Options, HTTP Strict Transport Security, X-Frame-Options and X-XSS-Protection.

Cache Control

In the past Spring Security required you to provide your own cache control for your web application. This seemed reasonable at the time, but browser caches have evolved to include caches for secure connections as well. This means that a user may view an authenticated page, log out, and then a malicious user can use the browser history to view the cached page. To help mitigate this, Spring Security has added cache control support which will insert the following headers into your response:

Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache

Simply adding the <headers /> element with no child elements will automatically add Cache-Control and quite a few other protections. However, if you only want cache control, you can enable just this feature with the <cache-control /> child element in Spring Security's XML namespace. Similarly, you can enable only cache control within Java configuration with the following:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers()
                .cacheControl()
                .and()
            ...;
    }
}

If you actually want to cache specific responses, your application can selectively invoke HttpServletResponse.setHeader(String, String) to override the header set by Spring Security. This is useful to ensure things like CSS, JavaScript, and images are properly cached. When using Spring Web MVC, this is typically done within your configuration. For example, the following configuration will ensure that the cache headers are set for all of your resources:

@EnableWebMvc
public class WebMvcConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
            .addResourceHandler("/resources/**")
            .addResourceLocations("/resources/")
            .setCachePeriod(31556926);
    }

    // ...
}
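As a hedged illustration of the setHeader override described above (not from the original post; the path and max-age value are made up), a controller might opt a single response back into caching like this:

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class ReportController {

    @RequestMapping("/reports/summary.pdf")
    public void summary(HttpServletResponse response) throws IOException {
        // Replace the Cache-Control header Spring Security set for this one response
        // so the generated report may be cached for an hour (value is illustrative).
        response.setHeader("Cache-Control", "max-age=3600, public");
        // ... write the report to the response ...
    }
}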
Content Type Options

Uploading files: There are many additional things one should do (i.e. only display the document in a distinct domain, ensure the Content-Type header is set, sanitize the document, etc.) when allowing content to be uploaded. However, these measures are out of the scope of what Spring Security provides. It is also important to point out that when disabling content sniffing, you must specify the content type in order for things to work properly.

Historically browsers, including Internet Explorer, would try to guess the content type of a request using content sniffing. This allowed browsers to improve the user experience by guessing the content type of resources that had not specified the content type. For example, if a browser encountered a JavaScript file that did not have the content type specified, it would be able to guess the content type and then execute it. The problem with content sniffing is that it allowed malicious users to use polyglots (i.e. a file that is valid as multiple content types) to execute XSS attacks. For example, some sites may allow users to submit a valid PostScript document to a website and view it. A malicious user might create a PostScript document that is also a valid JavaScript file and execute an XSS attack with it. Content sniffing can be disabled by adding the following header to our response:

X-Content-Type-Options: nosniff

Just as with cache control, the nosniff directive is added by default when using the <headers /> element with no child elements. However, if you want more control over which headers are added, you can use the <content-type-options /> child element. The X-Content-Type-Options header is added by default with Spring Security Java configuration. If you want more control over the headers, you can explicitly specify the content type options with the following:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers()
                .contentTypeOptions()
                .and()
            ...;
    }
}

HTTP Strict Transport Security (HSTS)

When you type in your bank's website, do you enter mybank.example.com or do you enter https://mybank.example.com? If you omit the https protocol, you are potentially vulnerable to man-in-the-middle attacks. Even if the website performs a redirect to https://mybank.example.com, a malicious user could intercept the initial HTTP request and manipulate the response (i.e. redirect to https://mibank.example.com and steal their credentials). Many users omit the https protocol, and this is why HTTP Strict Transport Security (HSTS) was created. Once mybank.example.com is added as an HSTS host, a browser can know ahead of time that any request to mybank.example.com should be interpreted as https://mybank.example.com. This greatly reduces the possibility of a man-in-the-middle attack occurring.

HSTS Notes: In accordance with RFC 6797, the HSTS header is only injected into HTTPS responses. In order for the browser to acknowledge the header, the browser must first trust the CA that signed the SSL certificate used to make the connection (not just the SSL certificate).

One way for a site to be marked as an HSTS host is to have the host preloaded into the browser. Another is to add the "Strict-Transport-Security" header to the response. For example, the following would instruct the browser to treat the domain as an HSTS host for a year (there are approximately 31536000 seconds in a year):

Strict-Transport-Security: max-age=31536000 ; includeSubDomains

The optional includeSubDomains directive instructs the browser that subdomains (i.e. secure.mybank.example.com) should also be treated as HSTS domains.
As with the other headers, Spring Security adds the previous header to the response when the <headers /> element is specified with no child elements. It is also automatically added when you are using Java configuration. You can also enable only the HSTS header with the <hsts /> child element. Similarly, you can enable only HSTS headers with Java configuration:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers()
                .hsts()
                .and()
            ...;
    }
}

X-Frame-Options

Content Security Policy note: Another modern approach to dealing with clickjacking is using a Content Security Policy. Spring Security does not provide support for this, as the specification is not released and it is quite a bit more complicated. To stay up to date with this issue and to see how you can implement it with Spring Security, refer to SEC-2117.

Allowing your website to be added to a frame can be a security issue. For example, using clever CSS styling, users could be tricked into clicking on something they did not intend to (video demo). For example, a user that is logged into their bank might click a button that grants access to other users. This sort of attack is known as clickjacking. There are a number of ways to mitigate clickjacking attacks. For example, to protect legacy browsers from clickjacking attacks you can use frame-breaking code. While not perfect, the frame-breaking code is the best you can do for legacy browsers. A more modern approach to addressing clickjacking is to use the X-Frame-Options header:

X-Frame-Options: DENY

The X-Frame-Options response header instructs the browser to prevent any page with this header in the response from being rendered within a frame. As with the other response headers, this is automatically included when the <headers /> element is specified with no child elements. You can also explicitly specify the <frame-options /> element to control which headers are added to the response. Similarly, you can enable only frame options within Java configuration with the following:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers()
                .frameOptions()
                .and()
            ...;
    }
}

X-XSS-Protection

Some browsers have built-in support for filtering out reflected XSS attacks. This is by no means foolproof, but it does assist in XSS protection. The filtering is typically enabled by default, so adding the header typically just ensures it is enabled and instructs the browser what to do when an XSS attack is detected. For example, the filter might try to change the content in the least invasive way to still render everything. At times, this type of replacement can become an XSS vulnerability in itself. Instead, it is best to block the content rather than attempt to fix it. To do this we can add the following header:

X-XSS-Protection: 1; mode=block

This header is included by default when the <headers /> element is specified with no child elements. We can enable it explicitly using the <xss-protection /> child element.
Similarly, you can enable only XSS protection within Java configuration with the following:

@EnableWebSecurity
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .headers()
                .xssProtection()
                .and()
            ...;
    }
}

FEEDBACK PLEASE

If you encounter a bug, have an idea for improvement, etc., please do not hesitate to bring it up! We want to hear your thoughts so we can ensure we get it right before the code is generally available. Trying out new features early is a good and simple way to give back to the community. This also ensures that the features you want are present and working as you think they should. Please log any issues or feature requests in the Spring Security JIRA. After logging a JIRA, we encourage (but do not require) you to submit your changes in a pull request. You can read more about how to do this in the Contributor Guidelines. If you have questions on how to do something, please use the Spring Security forums or Stack Overflow with the tag spring-security (I will be monitoring them closely). If you have specific comments or questions about this blog post, feel free to leave a comment. Using the appropriate tools will help make it easier for everyone.

CONCLUSION

You should now have a good understanding of the new features present in Spring Security 3.2.0.RC1.
August 26, 2013
· 16,632 Views
article thumbnail
Content Negotiation Using Spring MVC
Originally written by Paul Chapman. There are two ways to generate output using Spring MVC: you can use the RESTful @ResponseBody approach and HTTP message converters, typically to return data formats like JSON or XML. Programmatic clients, mobile apps and AJAX-enabled browsers are the usual clients. Alternatively, you may use view resolution. Although views are perfectly capable of generating JSON and XML if you wish (more on that in my next post), views are normally used to generate presentation formats like HTML for a traditional web application. Actually there is a third possibility: some applications require both, and Spring MVC supports such combinations easily. We will come back to that right at the end. In either case you'll need to deal with multiple representations (or views) of the same data returned by the controller. Working out which data format to return is called content negotiation.

There are three situations where we need to know what type of data format to send in the HTTP response: HttpMessageConverters (determine the right converter to use), request mappings (map an incoming HTTP request to different methods that return different formats), and view resolution (pick the right view to use). Determining what format the user has requested relies on a ContentNegotiationStrategy. There are default implementations available out of the box, but you can also implement your own if you wish. In this post I want to discuss how to configure and use content negotiation with Spring, mostly in terms of RESTful controllers using HTTP message converters. In a later post I will show how to set up content negotiation specifically for use with views using Spring's ContentNegotiatingViewResolver.

How does content negotiation work? Getting the right content: when making a request via HTTP it is possible to specify what type of response you would like by setting the Accept header property. Web browsers have this preset to request HTML (among other things). In fact, if you look, you will see that browsers actually send very confusing Accept headers, which makes relying on them impractical. See http://www.gethifi.com/blog/browser-rest-http-accept-headers for a nice discussion of this problem. Bottom line: Accept headers are messed up and you can't normally change them either (unless you use JavaScript and AJAX). So, for those situations where the Accept header property is not desirable, Spring offers some conventions to use instead. (This was one of the nice changes in Spring 3.2, making a flexible content selection strategy available across all of Spring MVC, not just when using views.) You can configure a content negotiation strategy centrally once and it will apply wherever different formats (media types) need to be determined.

Enabling content negotiation in Spring MVC: Spring supports a couple of conventions for selecting the format required: URL suffixes and/or a URL parameter. These work alongside the use of Accept headers. As a result, the content type can be requested in any of three ways. By default they are checked in this order. First, add a path extension (suffix) to the URL. So, if the incoming URL is something like http://myserver/myapp/accounts/list.html then HTML is required; for a spreadsheet the URL should be http://myserver/myapp/accounts/list.xls. The suffix-to-media-type mapping is automatically defined via the JavaBeans Activation Framework, or JAF (so activation.jar must be on the classpath). Second, a URL parameter like this: http://myserver/myapp/accounts/list?format=xls.
The name of the parameter is format by default, but this may be changed. Using a parameter is disabled by default, but when enabled, it is checked second. Third and finally, the Accept HTTP header property is checked. This is how HTTP is actually defined to work but, as previously mentioned, it can be problematic to use.

The Java configuration to set this up looks like this: simply customize the predefined content negotiation manager via its configurer. Note that the MediaType helper class has predefined constants for most well-known media types.

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    /**
     * Setup a simple strategy: use all the defaults and return XML by default when not sure.
     */
    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        configurer.defaultContentType(MediaType.APPLICATION_XML);
    }
}

When using XML configuration, the content negotiation strategy is most easily set up via the ContentNegotiationManagerFactoryBean. The ContentNegotiationManager created by either setup is an implementation of ContentNegotiationStrategy that implements the PPA strategy (path extension, then parameter, then Accept header) described above.

Additional configuration options: in Java configuration, the strategy can be fully customized using methods on the configurer:

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    /**
     * Total customization - see below for explanation.
     */
    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        configurer.favorPathExtension(false).
                   favorParameter(true).
                   parameterName("mediaType").
                   ignoreAcceptHeader(true).
                   useJaf(false).
                   defaultContentType(MediaType.APPLICATION_JSON).
                   mediaType("xml", MediaType.APPLICATION_XML).
                   mediaType("json", MediaType.APPLICATION_JSON);
    }
}

In XML, the strategy can be configured using properties on the factory bean. What we did, in both cases: disabled path extension (note that favor does not mean use one approach in preference to another, it just enables or disables it; the order of checking is always path extension, parameter, Accept header); enabled the use of the URL parameter, but instead of using the default parameter, format, we use mediaType instead; ignored the Accept header completely (this is often the best approach if most of your clients are actually web browsers, typically making REST calls via AJAX); and did not use the JAF, instead specifying the media type mappings manually, since we only wish to support JSON and XML.

Listing user accounts example: to demonstrate, I have put together a simple account listing application as our worked example; the screenshot shows a typical list of accounts in HTML. The complete code can be found on GitHub: https://github.com/paulc4/mvc-content-neg. To return a list of accounts in JSON or XML, I need a controller like this. We will ignore the HTML-generating methods for now.

@Controller
class AccountController {

    @RequestMapping(value="/accounts", method=RequestMethod.GET)
    @ResponseStatus(HttpStatus.OK)
    public @ResponseBody List list(Model model, Principal principal) {
        return accountManager.getAccounts(principal);
    }

    // other methods ...
}

Here is the content negotiation strategy setup; using Java configuration, the code looks like this:

    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        // Simple strategy: only path extension is taken into account
        configurer.favorPathExtension(true).
                   ignoreAcceptHeader(true).
                   useJaf(false).
                   defaultContentType(MediaType.TEXT_HTML).
                   mediaType("html", MediaType.TEXT_HTML).
                   mediaType("xml", MediaType.APPLICATION_XML).
                   mediaType("json", MediaType.APPLICATION_JSON);
    }

Provided I have JAXB2 and Jackson on my classpath, Spring MVC will automatically set up the necessary HttpMessageConverters. My domain classes must also be marked up with JAXB2 and Jackson annotations to enable conversion (otherwise the message converters don't know what to do). In response to comments (below), the annotated Account class is shown in the addendum at the end of this post. Here is the JSON output from our accounts application (note the path extension in the URL). How does the system know whether to convert to XML or JSON? Because of content negotiation: any one of the three (PPA strategy) options discussed above will be used, depending on how the ContentNegotiationManager is configured. In this case the URL ends in accounts.json because the path extension is the only strategy enabled. In the sample code you can switch between XML or Java configuration of MVC by setting an active profile in the web.xml. The profiles are "xml" and "javaconfig" respectively.

Combining data and presentation formats: Spring MVC's REST support builds on the existing MVC controller framework, so it is possible to have the same web application return information both as raw data (like JSON) and using a presentation format (like HTML). Both techniques can easily be used side by side in the same controller, like this:

@Controller
class AccountController {

    // RESTful method
    @RequestMapping(value="/accounts", produces={"application/xml", "application/json"})
    @ResponseStatus(HttpStatus.OK)
    public @ResponseBody List listWithMarshalling(Principal principal) {
        return accountManager.getAccounts(principal);
    }

    // View-based method
    @RequestMapping("/accounts")
    public String listWithView(Model model, Principal principal) {
        // Call RESTful method to avoid repeating account lookup logic
        model.addAttribute(listWithMarshalling(principal));
        // Return the view to use for rendering the response
        return "accounts/list";
    }
}

There is a simple pattern here: the @ResponseBody method handles all data access and integration with the underlying service layer (the AccountManager). The second method calls the first and sets up the response in the model for use by a view. This avoids duplicated logic. To determine which of the two @RequestMapping methods to pick, we are again using our PPA content negotiation strategy; it allows the produces option to work. URLs ending with accounts.xml or accounts.json map to the first method; any other URLs ending in accounts.anything map to the second.

Another approach: alternatively, we could do the whole thing with just one method if we used views to generate all possible content types. This is where the ContentNegotiatingViewResolver comes in, and that will be the subject of my next post.

Acknowledgements: I would like to thank Rossen Stoyanchev for his help in writing this post. Any errors are my own.

Addendum: the annotated Account class (added 2 June 2013). Since there were some questions on how to annotate a class for JAXB, here is part of the Account class. For brevity I have omitted the data members and all methods except the annotated getters. I could annotate the data members directly if preferred (just like JPA annotations, in fact). Remember that Jackson can marshal objects to JSON using these same annotations.

/**
 * Represents an account for a member of a financial institution.
 * An account has zero or more {@link Transaction}s and belongs to a {@link Customer}. An aggregate entity.
 */
@Entity
@Table(name = "t_account")
@XmlRootElement
public class Account {

    // data members omitted ...

    public Account(Customer owner, String number, String type) {
        this.owner = owner;
        this.number = number;
        this.type = type;
    }

    /**
     * Returns the number used to uniquely identify this account.
     */
    @XmlAttribute
    public String getNumber() {
        return number;
    }

    /**
     * Get the account type.
     *
     * @return one of "credit", "savings", "check".
     */
    @XmlAttribute
    public String getType() {
        return type;
    }

    /**
     * Get the credit card, if any, associated with this account.
     *
     * @return the credit-card number or null if there isn't one.
     */
    @XmlAttribute
    public String getCreditCardNumber() {
        return StringUtils.hasText(creditCardNumber) ? creditCardNumber : null;
    }

    /**
     * Get the balance of this account in local currency.
     *
     * @return current account balance.
     */
    @XmlAttribute
    public MonetaryAmount getBalance() {
        return balance;
    }

    /**
     * Returns a single account transaction. Callers should not attempt to hold
     * on to or modify the returned object. This method should only be used
     * transitively; for example, called to facilitate reporting or testing.
     *
     * @param name
     *            the name of the transaction account e.g. "Fred Smith"
     * @return the beneficiary object
     */
    @XmlElement // make these a nested element
    public Set getTransactions() {
        return transactions;
    }

    // setters and other methods ...
}
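As a quick, hedged usage sketch (not part of the original post; the host and path simply follow the examples above), you can see the negotiated JSON representation from any HTTP client, for example with Spring's RestTemplate:

import org.springframework.web.client.RestTemplate;

public class AccountsClient {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        // The .json path extension triggers the path-extension part of the PPA strategy,
        // so the accounts are returned as JSON rather than HTML or XML.
        String json = rest.getForObject("http://myserver/myapp/accounts.json", String.class);
        System.out.println(json);
    }
}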
July 26, 2013
· 27,721 Views
article thumbnail
Spring Tool Suite (STS) and Groovy/Grails Tool Suite (GGTS) 3.0.0 releases
We are proud to announce that the newest major release of our Eclipse-based developer tooling is now available. This is a major release not only in terms of new features, but also because of other significant changes like project componentization, open-sourcing, and the fact that for the first time we are making multiple distributions available, each tailored for a different kind of developer. Check out the release announcement on Martin Lippert's blog.

100% open sourced: All STS features that were previously under a free commercial license have been donated under the Eclipse Public License (EPL) at GitHub!

Intelligent repackaging: Repackaging the product itself makes identifying the tools you need, and getting started with them, much easier. In the past, Groovy/Grails developers had to install several extensions manually into Eclipse to get started. Now there are two full Eclipse distributions, one targeted at Spring developers, the other at Groovy/Grails developers: just download, install and go, no assembly required.

Componentized projects: Componentizing allows installation and configuration flexibility; developers can install components individually into their existing, plain Eclipse Java EE installations if they wish, preserving the hard work of configuring their Eclipse IDEs just the way they like them.

Downloads, more information and FAQ: You can find the downloads as well as more information on the project websites for the tool suites: Spring Tool Suite, Groovy/Grails Tool Suite, Installation Instructions, FAQ.

Feedback and discussions: If you have feedback or questions for us, please do not hesitate to contact us via our SpringSource Tool Suite forum. Bugs and feature requests are always welcome as tickets in our JIRA or, even better, as pull requests on GitHub.
August 22, 2012
· 3,204 Views

Comments

Understanding When to Use RabbitMQ or Apache Kafka

Jan 12, 2018 · Arran Glen

Hi folks, this ACM Queue paper is about $15.00 USD and well worth the read IMO.

https://dl.acm.org/citation.cfm?id=3093908

Why Spring Boot?

Sep 20, 2017 · Tim Spann

I think you'll have better luck on StackOverflow with this :)

Spring Boot Reactive Tutorial

Sep 06, 2017 · Mohit Sinha

my old friends at ORCL claim they are working on a reactive JDBC driver... FYI. Spring Data will have reactive drivers for Mongo, Cassandra, Redis at GA on Sept 21.

JAX-RS vs. Spring for REST Endpoints

Sep 06, 2017 · Thomas Martin

When Spring Boot 2.0 comes out this winter, you won't have to choose - JAX-RS will be supported natively

Java Memory Consumption in Docker and How We Employed Spring Boot

Aug 03, 2017 · Serhii Povisenko

Gaspar below is correct. This whole article has less to do with Docker than it does with understanding how Java memory regions work. Another reason to use Cloud Foundry buildpacks to create containers, as the memory calculator helps a lot here.

Java Memory Consumption in Docker and How We Employed Spring Boot

Jul 25, 2017 · Serhii Povisenko

also, see https://github.com/cloudfoundry/jvmkill

Java Memory Consumption in Docker and How We Employed Spring Boot

Jul 25, 2017 · Serhii Povisenko

YourKit is the way to go for analyzing memory usage. Also, try using a Cloud Foundry Java buildpack to automate container construction with intelligent, automatic initial memory settings! Read more about the memory calculator here:

https://www.cloudfoundry.org/just-released-java-buildpack-4-0/

Why Spring Boot?

Jul 14, 2017 · Tim Spann

that's funny! good one

Understanding When to Use RabbitMQ or Apache Kafka

May 09, 2017 · Arran Glen

That's a great point. I have a few other corrections I'm going to make, as I'm getting more feedback from Kafka users. I'll add that to the list. TY!

Why Spring Boot?

May 02, 2017 · Tim Spann

Looking for more detail about how Spring Boot really works?

Check out the short version:

https://www.youtube.com/watch?v=u1QnlAbCFys

the medium length version:

https://content.pivotal.io/webinars/spring-boot-under-the-hood

the long in depth version:

https://www.youtube.com/watch?v=uof5h-j0IeE

Monitoring Microservices With Spring Cloud Sleuth, ELK, and Zipkin

Apr 14, 2017 · Piotr Mińkowski

Try Spring Cloud Sleuth on Cloud Foundry; it removes a lot of the pain here. Check out http://run.pivotal.io/spring

Deploying Microservices: Spring Cloud vs. Kubernetes

Dec 27, 2016 · Duncan Brown

This article is comparing apples and oranges; the entire premise is strange. Spring Cloud is a development stack. Kubernetes is a cloud platform. You even sort of acknowledge this in the article when you say "The two platforms, Spring Cloud and Kubernetes, are very different and there is no direct feature parity between them". Your mistake is calling Spring Cloud a platform; it is not. If you wanted a meaningful comparison, I would compare running Spring Cloud on Cloud Foundry vs. Spring Cloud on Kubernetes.

Why I'm Using Java EE (Instead of Spring)

Oct 28, 2016 · Dave Fecak

I think many would agree that Spring is first and foremost an integration framework, and it puts effort into curating and testing a huge subset of those thousands of options you mention. I don't think Java EE tests against anywhere near as much of the Java landscape outside Java EE; it's a much smaller box.

http://docs.spring.io/platform/docs/current/reference/htmlsingle/#appendix-dependency-versions

Why I'm Using Java EE (Instead of Spring)

Oct 28, 2016 · Dave Fecak

Some companies are stuck on last decade's tech, and there are jobs to be had there, sure, similar to mainframe market dynamics. But that runway is even more finite now: the priority of digital business is increasing, and the pace of innovation has increased dramatically with cloud. Java EE's glacial pace of updates will force those companies to abandon those stacks for cloud-native ones faster than ever before; it typically takes at least three years from finalization of an EE spec to the arrival of supported-in-production commercial servers from the few remaining vendors. And that will only deliver a small part of what Spring has already been shipping since 2015.

http://www.theregister.co.uk/2016/09/20/java_ee_8_delayed_new_projects_focus/

The good news is that much of what you learn about JPA, JAX-RS, JMX, and Servlets is highly transferable and worth learning. The rest has become fairly irrelevant to modern Java development.

Why I'm Using Java EE (Instead of Spring)

Oct 28, 2016 · Dave Fecak

I suggest that the Java EE commenters on this thread read these articles, and then comment, so they aren't publicly displaying their ignorance / bias and inviting non-productive discourse from both sides.

https://spring.io/blog/2015/11/29/how-not-to-hate-spring-in-2016

And this one from 2015 has much that is still relevant: https://dzone.com/articles/java-doesnt-suck-rockin-jvm

http://www.theregister.co.uk/2016/09/20/java_ee_8_delayed_new_projects_focus/

Exploring Message Brokers: RabbitMQ, Kafka, ActiveMQ, and Kestrel

Sep 30, 2016 · Benjamin Ball

Yes, it sounds like some misconfigurations are present in the RabbitMQ instance used in this benchmark. To get the best possible performance from Rabbit, you do have to do some homework, both in config and in app/client code. A properly configured Rabbit can handle 1M messages/second on GCE:

https://blog.pivotal.io/pivotal/products/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine

https://cloudplatform.googleblog.com/2014/06/rabbitmq-on-google-compute-engine.html
