The Latest Coding Topics

AWS Activate: Pros, Cons, and Everything in Between
First and foremost, it is important to define what AWS Activate is and what it is used for before we take a deeper look. Exactly one year ago, Amazon created a program specifically designed for a particular group of customers that often needs as much help as it can get: startups. The program supports startups in the initial phase of building their businesses by providing AWS credits, participation in startup contests, and benefits from third-party solutions on the AWS cloud. Activate also allows AWS partners that want a presence within the Activate community to offer perks to member startups, including discounts and extended free tiers.

Some startups that have attained high levels of success with AWS include Spotify, Pinterest, and Dropbox. With the big shots maintaining their places in startup stardom, Amazon has opened its doors to the next generation of innovators. As such, Amazon offers two different Activate packages. The Self-Starter package comprises a limited amount of each of the offerings listed above, whereas the Portfolio package adds bonuses such as higher-profile technical support and more in-depth training.

On his blog, AWS CTO Werner Vogels reiterated the importance of startups: "Startups will forever be a very important customer segment of AWS. They were among our first customers and along the way some amazing businesses have been built by these startups, many of which running 100% on AWS." "We're excited to be a part of this global momentum in the startup ecosystem. The challenge now is to support and assist an increasing number of startups across the world."

The fun doesn't stop there. In April of this year, AWS expanded the Activate package to offer much more than general support, sponsoring solution architects to take startups through step-by-step consultations in the fields of security, architecture, and performance. Amazon's professional services teams were established for customers, so it was natural to have them take part in Activate, nurturing new startups and encouraging them to build on the AWS cloud. As we can see today, companies that started with AWS four years ago are now worth billions of dollars. Airbnb and Dropbox, for example, thoroughly enjoy the flexibility Amazon offers, as well as the fact that they no longer have to maintain cumbersome IT operations.

Why not from the get-go?

So the question is: if Amazon essentially built AWS on startups, why hasn't Activate been around from the get-go, six years ago? AWS owes a great deal of its success to scalable startups that wanted and needed servers to run their businesses, yet didn't have the initial capital to build their own data centers. No one really knows why Amazon did not provide startups back then with the kind of support it does today. However, as the market matured, Amazon evidently realized that an increasing number of startups could use its help. Amazon also discovered that marketing its support services through venture capitalists and incubators around the world would bring them in as partners in the program and help market the service to startups of all kinds.

"AWS Activate requires a special registration that allows startup customers with a valid AWS account to apply for either a self-starter package or a portfolio package.
If a startup is a member of one of the accelerators, seed funds, or startup organizations that Amazon already works with, they may apply for the more exclusive AWS Activate Portfolio Package."

Incubators and Accelerators

It was a natural step for Amazon to partner with accelerators all over the world through the Activate package. In addition to supporting startups, as mentioned above, these accelerators act as channels into the startup scene. At the first AWS re:Invent, Bezos joked to his fellow investors that eventually some of their investments would return to him because of how heavily the startup scene relies on Amazon. Activate and the approximately 150 accelerators across the world, including White Accel, Techstars, Appwest, and Battery Ventures, genuinely support and understand the values of the AWS service, and they are happy to use the Activate platform to help their startups flourish within the AWS cloud.

Third-Party Partners

Aside from the accelerators, Amazon partners can extend special offers to Activate members. For example, members of the Self-Starter package may receive a three-month free trial for Chef, whereas Portfolio members may receive a six-month trial. Most partners provide an extended free trial or credits via Activate. For instance, Trend Micro, one of Amazon's biggest partners in the security domain, provides $2,500 in credit for Activate members in the Portfolio package. While there are not many partners on the list, the ones that are there are very helpful and provide nice benefits for Activate members.

Reviews of the program from both the partners' and the startups' side show that Activate is ideal for startups with resource constraints. While members of the Self-Starter package can use the AWS Free Usage Tier, Portfolio members can receive anywhere from $1,000 to $15,000 in AWS Promotional Credit. The credit is perhaps the most important value for these startups. Bearing in mind that Google also has its own line of packages and credits for new companies, it makes sense for AWS to give these companies more room above the free tier: everyone has access to the free tier; these startups simply get more of it.

It seems there is no downside to participating. There is no obligation, and the worst that can happen is that you find the services are great and simply continue using them, which may leave you locked in to the point where you eventually need to pay. The April announcement, "meet our architects," points the same way: the knowledge Amazon's architects share with startups in their consultation sessions helps them get a better grasp of the ecosystem, and greater resource utilization is ultimately the next logical step for growth.

All in all, although Amazon didn't offer this program four years ago, the AWS cloud was still the natural choice for startups: it offered all of the benefits a startup can get from an online, on-demand, effectively unlimited pool of resources. As a result, it is the clear choice for web-scale startups. There are many reasons why Amazon only recently decided to offer free benefits to its prized potential customers. It could have stemmed from competition with Microsoft and Google, or Amazon may simply want to show support for potential customers by demonstrating its cloud's benefits at an early stage.
Aside from that, Amazon understands, and is built on, companies with long-term goals and possibilities. Amazon therefore sees startups as a long-term investment, one that starts off with little risk.
December 15, 2014
by Ofir Nachmani
· 10,027 Views
XAML and Converters Chaining
Converters are an essential building block in XAML interfaces with one simple task: converting values of one type to another. Since they have an input, usually a view model property, and an output, it would be wonderful if we could somehow chain them to create a new converter that processes all internal converters. Luckily, this is quite simple to do, but we do need to create a new converter which will hold the other converters and whose implementation will iterate over the nested converters. Full code can be found at the Github repository linked at the end of this post; only the interesting parts are highlighted here.

Our combining converter class is also a converter itself, but it can contain other converters inside it:

[ContentProperty("Converters")]
public class ChainingConverter : IValueConverter
{
    public Collection<IValueConverter> Converters { get; set; }
}

The converter functions are trivially implemented: they iterate through the converters list and apply each converter to the previous value.

public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
    foreach (var converter in Converters)
    {
        value = converter.Convert(value, targetType, parameter, culture);
    }
    return value;
}

ConvertBack is implemented in the same fashion. This allows us to create new converters in XAML by nesting converters inside a ChainingConverter element (a sketch of the markup appears at the end of this post). But what if we need to send parameters to some of the converters? How can we do that when the same parameter is used throughout the ChainingConverter implementation? To provide a custom parameter for individual converters, we can create a wrapper converter around an existing converter and specify the parameter on that wrapper. Here is a skeleton for such a wrapper converter; notice that the wrapper is also a converter:

[ContentProperty("Converter")]
public class ParameterizedConverterWrapper : DependencyObject, IValueConverter
{
    // IValueConverter Converter dependency property
    // object Parameter dependency property
    // object DefaultReturnValue dependency property

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (Converter != null)
            return Converter.Convert(value, targetType, Parameter ?? parameter, culture);
        return DefaultReturnValue;
    }
}

Converter wrappers allow us to compose complex converters out of parameterized pieces (again, see the sketch at the end of this post). The resulting converter should be self-explanatory even if you haven't seen the individual converters before. Note that, unlike other converters, the wrapper is a dependency object, which allows us to use bindings on the Parameter property, since it is in fact a dependency property. More complex converters should be created from ordinary converters whenever possible, especially when working with primitive types such as bool, string, enums, and null values.

What's next? The last example looks like a small DSL embedded in XAML. We could create converters that simulate flow control or conditionals. We could even create converters that switch depending on the value before them, essentially coding logic inside such converters. Whether that is desirable is debatable, but it can be done. The full code with a sample application can be found at the following Github repository: MassivePixel/wp-common.
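Since the XAML snippets from the original post did not survive reformatting, here is a minimal sketch of what the chaining and wrapping syntax could look like. ChainingConverter and ParameterizedConverterWrapper come from the code above; NegateConverter, BoolToVisibilityConverter, GreaterThanConverter, and the converters: namespace prefix are hypothetical stand-ins for your own converters.

<!-- Hypothetical sketch: chain two converters into a single resource -->
<converters:ChainingConverter x:Key="InvertedBoolToVisibility">
    <converters:NegateConverter />
    <converters:BoolToVisibilityConverter />
</converters:ChainingConverter>

<!-- Hypothetical sketch: give one converter in the chain its own parameter -->
<converters:ChainingConverter x:Key="ThresholdToVisibility">
    <converters:ParameterizedConverterWrapper Parameter="{StaticResource Threshold}">
        <converters:GreaterThanConverter />
    </converters:ParameterizedConverterWrapper>
    <converters:BoolToVisibilityConverter />
</converters:ChainingConverter>

Because ChainingConverter declares Converters as its content property, the nested elements land in the collection without any extra markup.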
December 15, 2014
by Toni Petrina
· 4,803 Views
Using GeoJSON With Spring Data for MongoDB and Spring Boot
In my previous articles I compared four frameworks commonly used for communicating with MongoDB from the JVM, and found that for that use case, Spring Data for MongoDB was the easiest solution. However, I did remark that it doesn't use the GeoJSON format to store geolocation coordinates and geometries. I had tried to add GeoJSON support before, but couldn't get the conversion to work properly. After some extensive searching I found out that the reason it wasn't working was my use of Spring Boot: its autoconfiguration for MongoDB does not support custom conversions out of the box. Luckily, the solution is simple: provide an extra configuration class that extends AbstractMongoConfiguration and import it in the Boot application. In that configuration you can override customConversions() and add your converters.

When comparing the geo classes in Spring Data with GeoJSON, I noticed that only a subset of GeoJSON geometries can be mapped onto Spring Data geo classes: Point and Polygon. Spring Data has no counterparts for LineString, MultiLineString, MultiPolygon, or MultiPoint. In your mapped domain classes, however, you normally won't use these. Creating converters that adhere to the GeoJSON format is quite straightforward:

import com.mongodb.BasicDBObject
import com.mongodb.DBObject
import org.springframework.core.convert.converter.Converter
import org.springframework.data.convert.ReadingConverter
import org.springframework.data.convert.WritingConverter
import org.springframework.data.geo.Point
import org.springframework.data.geo.Polygon

final class GeoJsonConverters {
    static List<Converter<?, ?>> getConvertersToRegister() {
        return [
            GeoJsonDBObjectToPointConverter.INSTANCE,
            GeoJsonDBObjectToPolygonConverter.INSTANCE,
            GeoJsonPointToDBObjectConverter.INSTANCE,
            GeoJsonPolygonToDBObjectConverter.INSTANCE
        ]
    }

    @WritingConverter
    static enum GeoJsonPointToDBObjectConverter implements Converter<Point, DBObject> {
        INSTANCE;

        @Override
        DBObject convert(Point source) {
            return new BasicDBObject([type: 'Point', coordinates: [source.x, source.y]])
        }
    }

    @ReadingConverter
    static enum GeoJsonDBObjectToPointConverter implements Converter<DBObject, Point> {
        INSTANCE;

        @Override
        Point convert(DBObject source) {
            def coordinates = source.coordinates as double[]
            return new Point(coordinates[0], coordinates[1])
        }
    }

    @WritingConverter
    static enum GeoJsonPolygonToDBObjectConverter implements Converter<Polygon, DBObject> {
        INSTANCE;

        @Override
        DBObject convert(Polygon source) {
            def coordinates = source.points.collect { [it.x, it.y] }
            return new BasicDBObject([type: 'Polygon', coordinates: coordinates])
        }
    }

    @ReadingConverter
    static enum GeoJsonDBObjectToPolygonConverter implements Converter<DBObject, Polygon> {
        INSTANCE;

        @Override
        Polygon convert(DBObject source) {
            // mirrors the writing converter: a list of [x, y] pairs becomes a Polygon
            def points = source.coordinates.collect { new Point(it[0] as double, it[1] as double) }
            return new Polygon(points)
        }
    }
}

To add those converters to the Spring context, you'll have to override some methods in your MongoDB Spring configuration class.
import com.mongodb.Mongo
import org.springframework.beans.factory.annotation.*
import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.EnableAutoConfiguration
import org.springframework.context.annotation.*
import org.springframework.data.mongodb.config.AbstractMongoConfiguration
import org.springframework.data.mongodb.core.convert.*

@EnableAutoConfiguration
@ComponentScan
@Configuration
@Import([MongoComparisonMongoConfiguration])
class MongoComparison {
    static void main(String[] args) {
        SpringApplication.run(MongoComparison, args)
    }
}

@Configuration
class MongoComparisonMongoConfiguration extends AbstractMongoConfiguration {
    @Autowired
    Mongo mongo

    @Value("\${spring.data.mongodb.database}")
    String databaseName

    @Override
    protected String getDatabaseName() {
        return databaseName
    }

    @Override
    Mongo mongo() throws Exception {
        return mongo
    }

    @Override
    CustomConversions customConversions() {
        def customConverters = []
        customConverters << GeoJsonConverters.convertersToRegister
        return new CustomConversions(customConverters.flatten())
    }
}

As Spring Boot already provides the Mongo instance and the name of the database, we can reuse them in the MongoDB configuration class. The custom conversions take precedence over the existing ones for Point and Polygon.

I'll be writing a library this weekend to add support for all GeoJSON geometries in Spring Data for MongoDB. I've already noticed that it will be very hard to support those geometries in generated query methods in repositories, but since annotated queries are possible, I don't think this will be a big issue. We'll see.
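In the meantime, to make the effect concrete, here is a sketch (in Java) of what a mapped domain class could look like once the converters are registered. The Venue class, its fields, and the index choice are my own illustrative assumptions, not part of the original post.

import org.springframework.data.annotation.Id;
import org.springframework.data.geo.Point;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexType;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexed;
import org.springframework.data.mongodb.core.mapping.Document;

// Hypothetical domain class: with the converters above registered, "location"
// is persisted as GeoJSON ({type: 'Point', coordinates: [x, y]}), which is the
// format a MongoDB 2dsphere index expects.
@Document
public class Venue {

    @Id
    private String id;

    private String name;

    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    private Point location;

    // getters and setters omitted for brevity
}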
December 13, 2014
by Lieven Doclo
· 22,587 Views · 1 Like
An Introduction to BDD Test Automation with Serenity and JUnit
Serenity BDD (previously known as Thucydides) is an open source reporting library that helps you write better structured, more maintainable automated acceptance criteria, and also produces rich, meaningful test reports (or "living documentation") that not only report on the test results, but also on what features have been tested. And when your automated acceptance tests exercise a web interface, Serenity comes with a host of features that make writing your automated web tests easier and faster.

1. BDD Fundamentals

Before we get into the nitty-gritty details, let's talk about Behaviour Driven Development, which is a core concept underlying many of Serenity's features. Behaviour Driven Development, or BDD, is an approach where teams use conversations around concrete examples to build up a shared understanding of the features they are supposed to build. For example, suppose you are building a site where artists and craftspeople can sell their goods online. One important feature for such a site would be the search feature. You might express this feature using the story-card format commonly used in agile projects, like this:

In order for buyers to find what they are looking for more efficiently
As a seller
I want buyers to be able to search for articles by keywords

To build up a shared understanding of this requirement, you could talk through a few concrete examples. The conversation might go something like this:

"So give me an example of how a search might work."
"Well, if I search for wool, then I should see only woolen products."
"Sounds simple enough. Are there any other variations on the search feature that would produce different outcomes?"
"Well, I could also filter the search results; for example, I could look for only handmade woolen products."

And so on. In practice, many of the examples that get discussed become "acceptance criteria" for the features, and many of these acceptance criteria become automated acceptance tests. Automating acceptance tests provides valuable feedback to the whole team, as these tests, unlike unit and integration tests, are typically expressed in business terms and can be easily understood by non-developers. And, as we will see later in this article, the reports produced when these tests are executed give a clear picture of the state of the application.

2. Serenity BDD and JUnit

In this article, we will learn how to use Serenity BDD with nothing more than JUnit, Serenity BDD, and a little Selenium WebDriver. Automated acceptance tests can use more specialized BDD tools such as Cucumber or JBehave, but many teams like to keep it simple and use more conventional unit testing tools like JUnit. This is fine: the essence of the BDD approach lies in the conversations the team has to discuss the requirements and discover the acceptance criteria.

2.1. Writing the Acceptance Test

Let's start off with a simple example. The first example discussed was searching for wool. The corresponding automated acceptance test for this example in JUnit looks like this:

@RunWith(SerenityRunner.class)
public class WhenSearchingByKeyword {

    @Managed(driver = "chrome", uniqueSession = true)
    WebDriver driver;

    @Steps
    BuyerSteps buyer;

    @Test
    public void should_see_a_list_of_items_related_to_the_specified_keyword() {
        // Given
        buyer.opens_etsy_home_page();
        // When
        buyer.searches_for_items_containing("wool");
        // Then
        buyer.should_see_items_related_to("wool");
    }
}

The Serenity test runner sets up the test and records the test results. This is a web test, so Serenity will manage the WebDriver instance for us. We hide the implementation details of how the test is executed in a "step library", so the test itself is reduced to the bare essential business logic that we want to demonstrate.

There are several things to point out here. When you use Serenity with JUnit, you need to use the SerenityRunner test runner. This instruments the JUnit class and instantiates the WebDriver driver (if it is a web test), as well as any step libraries and page objects that you use in your test (more on these later).

The @Managed annotation tells Serenity that this is a web test. Serenity takes care of instantiating the WebDriver instance, opening the browser, and shutting it down at the end of the test. You can also use this annotation to specify what browser you want to use, or whether you want to keep the browser open during all of the tests in this test case.

The @Steps annotation tells Serenity that this variable is a step library. In Serenity, we use step libraries to add a layer of abstraction between the "what" and the "how" of our acceptance tests. At the top level, the step methods document "what" the acceptance test is doing, in fairly implementation-neutral, business-friendly terms. So we say "searches for items containing wool", not "enters wool into the search field and clicks on the search button". This layered approach makes the tests both easier to understand and easier to maintain, and helps build up a great library of reusable business-level steps that we can use in other tests.

2.2. The Step Library

The step library class is just an ordinary Java class, with methods annotated with the @Step annotation:

public class BuyerSteps {

    HomePage homePage;
    SearchResultsPage searchResultsPage;

    @Step
    public void opens_etsy_home_page() {
        homePage.open();
    }

    @Step
    public void searches_for_items_containing(String keywords) {
        homePage.searchFor(keywords);
    }

    @Step
    public void should_see_items_related_to(String keywords) {
        List<String> resultTitles = searchResultsPage.getResultTitles();
        resultTitles.stream().forEach(title -> assertThat(title).contains(keywords));
    }
}

Step libraries often use page objects, which are automatically instantiated. The @Step annotation indicates a method that will appear as a step in the test reports. For automated web tests, the step library methods do not call WebDriver directly; rather, they typically interact with page objects.

2.3. The Page Objects

Page objects encapsulate how a test interacts with a particular web page. They hide the WebDriver implementation details of how elements on a page are accessed and manipulated behind more business-friendly methods. Like steps, page objects are reusable components that make the tests easier to understand and to maintain. Serenity automatically instantiates page objects for you and injects the current WebDriver instance. All you need to worry about is the WebDriver code that interacts with the page, and Serenity provides a few shortcuts to make this easier as well.
For example, here is the page object for the home page:

@DefaultUrl("http://www.etsy.com")
public class HomePage extends PageObject {

    @FindBy(css = "button[value='search']")
    WebElement searchButton;

    public void searchFor(String keywords) {
        $("#search-query").sendKeys(keywords);
        searchButton.click();
    }
}

The @DefaultUrl annotation says what URL should be used by default when we call the open() method. A Serenity page object must extend the PageObject class. You can use the $ method to access elements directly using CSS or XPath expressions, or you can use a member variable annotated with the @FindBy annotation.

And here is the second page object we use:

public class SearchResultsPage extends PageObject {

    @FindBy(css = ".listing-card")
    List<WebElement> listingCards;

    public List<String> getResultTitles() {
        return listingCards.stream()
                           .map(element -> element.getText())
                           .collect(Collectors.toList());
    }
}

In both cases, we are hiding the WebDriver details of how we access the page elements inside the page object methods. This makes the code easier to read and reduces the number of places you need to change if a page is modified.

This approach encourages a very high degree of reuse. For example, the second example mentioned at the start of this article involved filtering results by type. The corresponding automated acceptance criteria might look like this:

@Test
public void should_be_able_to_filter_by_item_type() {
    // Given
    buyer.opens_etsy_home_page();
    // When
    buyer.searches_for_items_containing("wool");
    int unfilteredItemCount = buyer.get_matching_item_count();
    // And
    buyer.filters_results_by_type("handmade");
    // Then
    buyer.should_see_items_related_to("wool");
    // And
    buyer.should_see_item_count(lessThan(unfilteredItemCount));
}

@Test
public void should_be_able_to_view_details_about_a_searched_item() {
    // Given
    buyer.opens_etsy_home_page();
    // When
    buyer.searches_for_items_containing("wool");
    buyer.selects_item_number(5);
    // Then
    buyer.should_see_matching_details();
}

Notice how most of the methods here are reused from the previous steps: in fact, only two new methods are required.

3. Reporting and Living Documentation

Reporting is one of Serenity's fortes. Serenity not only reports on whether a test passes or fails, but documents what it did, in a step-by-step narrative format that includes test data and screenshots for web tests. For example, the following page illustrates the test results for our first acceptance criterion:

Figure 1. Test results reported in Serenity

But test outcomes are only part of the picture. It is also important to know what work has been done, and what work is in progress. Serenity provides the @Pending annotation, which lets you indicate that a scenario is not yet completed but has been scheduled for work, as illustrated here:

@RunWith(SerenityRunner.class)
public class WhenPuttingItemsInTheShoppingCart {

    @Pending
    @Test
    public void shouldUpdateShippingPriceForDifferentDestinationCountries() {}
}

This test will appear in the reports as pending (blue in the graphs):

Figure 2. Test result overview

We can also organize our acceptance tests in terms of the features or requirements they are testing.
One simple approach is to organize your requirements in suitably-named packages:

|----net
|    |----serenity_bdd
|    |    |----samples
|    |    |    |----etsy
|    |    |    |    |----features
|    |    |    |    |    |----search
|    |    |    |    |    |    |----WhenSearchingByKeyword.java
|    |    |    |    |    |    |----WhenViewingItemDetails.java
|    |    |    |    |    |----shopping_cart
|    |    |    |    |    |    |----WhenPuttingItemsInTheShoppingCart.java
|    |    |    |    |----pages
|    |    |    |    |    |----HomePage.java
|    |    |    |    |    |----ItemDetailsPage.java
|    |    |    |    |    |----RegisterPage.java
|    |    |    |    |    |----SearchResultsPage.java
|    |    |    |    |    |----ShoppingCartPage.java
|    |    |    |    |----steps
|    |    |    |    |    |----BuyerSteps.java

All the test cases are organized under the features directory: test cases related to the search feature, and test cases related to the shopping cart feature. Serenity can use this package structure to group and aggregate the test results for each feature. You need to tell Serenity the root package that you are using, and what terms you use for your requirements. You do this in a special file called (for historical reasons) thucydides.properties, which lives in the root directory of your project:

thucydides.test.root=net.serenity_bdd.samples.etsy.features
thucydides.requirement.types=feature,story

With this configured, Serenity will report on how well each requirement has been tested, and will also tell you about the requirements that have not been tested:

Figure 3. Serenity reports on requirements as well as tests

4. Conclusion

Hopefully this is enough to get you started with Serenity. That said, we have barely scratched the surface of what Serenity can do for your automated acceptance tests. You can read more about Serenity, and the principles behind it, by reading the users manual, or by reading BDD in Action, which devotes several chapters to these practices. And be sure to check out the online courses at Parleys. You can get the source code for the project discussed in this article on GitHub.
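If you want to try the examples yourself, Serenity and JUnit need to be on the test classpath. The following Maven fragment is a sketch only: the net.serenity-bdd coordinates and the version property are assumptions to verify against the Serenity documentation, since the library had only recently been renamed from Thucydides when this was written.

<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-core</artifactId>
    <version>${serenity.version}</version> <!-- assumed property; define it in your POM -->
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.serenity-bdd</groupId>
    <artifactId>serenity-junit</artifactId>
    <version>${serenity.version}</version>
    <scope>test</scope>
</dependency>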
December 12, 2014
by John Ferguson Smart
· 59,580 Views · 6 Likes
Using Azure AD SSO Tokens for Multiple AAD Resources from Native Mobile Apps
This blog post is the third in a series covering Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications:

1. Authenticating iOS app users with Azure Active Directory
2. How to best handle AAD access tokens in native mobile apps
3. Using Azure SSO tokens for multiple AAD resources from native mobile apps (this post)
4. Sharing Azure SSO access tokens across multiple native mobile apps

Brief Start

In an enterprise context, it is highly likely that you have multiple web services that your native mobile app needs to consume. I had exactly this scenario: one of my clients asked if they could maintain the same token in the background in the mobile app and use it for accessing multiple web services. I spent some time digging through the documentation and conducting experiments to confirm a few points, so this post shares my findings on accessing multiple Azure AD resources from native mobile apps using ADAL.

In the previous two posts, we looked at implementing Azure AD SSO login in native mobile apps, and then at how to best maintain these access tokens. This post discusses how to use Azure AD SSO tokens to manage access to multiple AAD resources. Let's assume that we have two web services sitting in Azure (i.e., WebApi1 and WebApi2), both of which are set to use Azure AD authentication, and a native mobile app that needs access to both. Let's look at what we can and cannot do.

Cannot Use the Same Azure AD Access Token for Multiple Resources

The first thing that comes to mind is to use the same access token for multiple Azure AD resources, and that is what the client asked about. However, this is not allowed. Azure AD issues a token for a certain resource (which is mapped to an Azure AD app). When we call AcquireToken(), we need to provide a resourceId, and only ONE resourceId; the result is a token that can only be used for the supplied resource (ID). There are ways you could use the same token (as we will see later in this post), but doing so is not recommended, as it complicates operations logging, authentication process tracing, and so on. Therefore, it is better to look at the other options provided by Azure and the ADAL library.

Use the Refresh Token to Acquire Tokens for Multiple Resources

The ADAL library supports acquiring multiple access tokens for multiple resources using a refresh token. This means that once a user is authenticated, ADAL's authentication context can generate an access token for multiple resources without authenticating the user again. This is mentioned briefly in the MSDN documentation: "The refresh token issued by Azure AD can be used to access multiple resources. For example, if you have a client application that has permission to call two web APIs, the refresh token can be used to get an access token to the other web API as well."
(MSDN documentation)

public async Task<string> RefreshTokens()
{
    var tokenEntry = await tokensRepository.GetTokens();
    var authorizationParameters = new AuthorizationParameters(_controller);
    var result = "Refreshed an existing Token";
    bool hasARefreshToken = true;

    if (tokenEntry == null)
    {
        var localAuthResult = await _authContext.AcquireTokenAsync(
            resourceId1, clientId, new Uri(redirectUrl),
            authorizationParameters, UserIdentifier.AnyUser, null);

        tokenEntry = new Tokens
        {
            WebApi1AccessToken = localAuthResult.AccessToken,
            RefreshToken = localAuthResult.RefreshToken,
            Email = localAuthResult.UserInfo.DisplayableId,
            ExpiresOn = localAuthResult.ExpiresOn
        };
        hasARefreshToken = false;
        result = "Acquired a new Token";
    }

    var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
        tokenEntry.RefreshToken, clientId, resourceId2);

    tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
    tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
    tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

    if (hasARefreshToken)
    {
        // This is only called when we are refreshing the tokens,
        // not when we are acquiring new tokens.
        refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
            refreshAuthResult.RefreshToken, clientId, resourceId1);

        tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
        tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
        tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
    }

    await tokensRepository.InsertOrUpdateAsync(tokenEntry);
    return result;
}

As you can see above, we check if we have an access token from previous runs, and if we do, we refresh the access tokens for both web services. Notice how _authContext.AcquireTokenByRefreshTokenAsync() provides an overload that takes a resourceId. This enables us to get multiple access tokens for multiple resources without having to re-authenticate the user. The rest of the code is similar to what we saw in the previous two posts.

The ADAL Library Can Produce New Tokens for Other Resources

In the previous two posts, we looked at the ADAL library and how it uses the TokenCache. Although ADAL does not yet support persistent caching of tokens on mobile apps, it still uses the TokenCache for in-memory caching. This enables the ADAL library to generate new access tokens if the context (AuthenticationContext) still exists from previous authentications. Remember that in the previous post we said it is recommended to keep a reference to the authentication context? Here is where it comes in handy, as it enables us to generate new access tokens for multiple Azure AD resources.

var localAuthResult = await _authContext.AcquireTokenAsync(
    resourceId2, clientId, new Uri(redirectUrl),
    authorizationParameters, UserIdentifier.AnyUser, null);

Calling AcquireToken() (even with no refresh token at hand) gives us a new access token for WebApi2. This is thanks to ADAL checking whether it has a refresh token in memory (managed by ADAL) and, if so, using it to generate a new access token for WebApi2.

An Alternative

The third option is the simplest, but not necessarily the best. In this option, we use the same access token to consume multiple Azure AD resources. To do this, we need to use the same Azure AD app ID when setting up the web applications' authentication. This requires some understanding of how Azure AD authentication happens in our web apps.
If you refer to Taiseer Joudeh's tutorial, which we mentioned before, you will see that in our web app we need to tell the authentication framework our Authority and our Audience (the Azure AD app ID). If we set up both of our web apps to use the same Audience (Azure AD app ID), meaning that we link them both to the same Azure AD application, then we can use the same access token for both web services.

// Linking our web app authentication to an Azure AD application
private void ConfigureAuth(IAppBuilder app)
{
    app.UseWindowsAzureActiveDirectoryBearerAuthentication(
        new WindowsAzureActiveDirectoryBearerAuthenticationOptions
        {
            Audience = ConfigurationManager.AppSettings["Audience"],
            Tenant = ConfigurationManager.AppSettings["Tenant"]
        });
}

As we said before, this is very simple and requires less code, but it can cause complications in terms of security logging and maintenance. At the end of the day, it depends on your context and what you are trying to achieve; I thought it was worth mentioning, and I will leave the judgement on which option to choose to you.

Conclusions

We looked at how we can use Azure AD SSO with ADAL to access multiple resources from native mobile apps. As we saw, there are three main options, and the choice can be made based on the context of your app. I hope you find this useful, and if you have any questions or need help with some development you are doing, just get in touch.
December 12, 2014
by Has Altaiar
· 11,114 Views · 1 Like
Latest Jackson Integration Improvements in Spring
Originally written by Sébastien Deleuze on the SpringSource blog.

Spring's Jackson support has been improved lately to be more flexible and powerful. This blog post gives you an update on the most useful Jackson-related features available in Spring Framework 4.x and Spring Boot. All the code samples come from this spring-jackson-demo sample application; feel free to have a look at the code.

JSON Views

It can sometimes be useful to contextually filter the objects serialized to the HTTP response body. In order to provide such capabilities, Spring MVC now has built-in support for Jackson's Serialization Views. The following example illustrates how to use @JsonView to filter fields depending on the context of serialization, for example getting a "summary" view when dealing with collections and a full representation when dealing with a single resource:

public class View {
    interface Summary {}
}

public class User {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private String firstname;
    @JsonView(View.Summary.class) private String lastname;
    private String email;
    private String address;
    private String postalCode;
    private String city;
    private String country;
}

public class Message {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private LocalDate created;
    @JsonView(View.Summary.class) private String title;
    @JsonView(View.Summary.class) private User author;
    private List<User> recipients;
    private String body;
}

Thanks to Spring MVC's @JsonView support, it is possible to choose, on a per-handler-method basis, which fields should be serialized:

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.Summary.class)
    @RequestMapping("/")
    public List<Message> getAllMessages() {
        return messageService.getAll();
    }

    @RequestMapping("/{id}")
    public Message getMessage(@PathVariable Long id) {
        return messageService.get(id);
    }
}

In this example, if all messages are retrieved, only the most important fields are serialized, thanks to the getAllMessages() method being annotated with @JsonView(View.Summary.class):

[ { "id" : 1, "created" : "2014-11-14", "title" : "Info",
    "author" : { "id" : 1, "firstname" : "Brian", "lastname" : "Clozel" } },
  { "id" : 2, "created" : "2014-11-14", "title" : "Warning",
    "author" : { "id" : 2, "firstname" : "Stéphane", "lastname" : "Nicoll" } },
  { "id" : 3, "created" : "2014-11-14", "title" : "Alert",
    "author" : { "id" : 3, "firstname" : "Rossen", "lastname" : "Stoyanchev" } } ]

In Spring MVC's default configuration, MapperFeature.DEFAULT_VIEW_INCLUSION is set to false. That means that when a JSON View is enabled, non-annotated fields or properties like body or recipients are not serialized.
When a specific Message is retrieved using the getMessage() handler method (no JSON View specified), all fields are serialized as expected:

{
  "id" : 1,
  "created" : "2014-11-14",
  "title" : "Info",
  "body" : "This is an information message",
  "author" : {
    "id" : 1, "firstname" : "Brian", "lastname" : "Clozel", "email" : "bclozel@pivotal.io",
    "address" : "1 Jaures street", "postalCode" : "69003", "city" : "Lyon", "country" : "France"
  },
  "recipients" : [ {
    "id" : 2, "firstname" : "Stéphane", "lastname" : "Nicoll", "email" : "snicoll@pivotal.io",
    "address" : "42 Obama street", "postalCode" : "1000", "city" : "Brussel", "country" : "Belgium"
  }, {
    "id" : 3, "firstname" : "Rossen", "lastname" : "Stoyanchev", "email" : "rstoyanchev@pivotal.io",
    "address" : "3 Warren street", "postalCode" : "10011", "city" : "New York", "country" : "USA"
  } ]
}

Only one class or interface can be specified with the @JsonView annotation, but you can use inheritance to represent JSON View hierarchies (if a field is part of a JSON View, it will also be part of the parent view). For example, this handler method will serialize fields annotated with @JsonView(View.Summary.class) and @JsonView(View.SummaryWithRecipients.class):

public class View {
    interface Summary {}
    interface SummaryWithRecipients extends Summary {}
}

public class Message {
    @JsonView(View.Summary.class) private Long id;
    @JsonView(View.Summary.class) private LocalDate created;
    @JsonView(View.Summary.class) private String title;
    @JsonView(View.Summary.class) private User author;
    @JsonView(View.SummaryWithRecipients.class) private List<User> recipients;
    private String body;
}

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.SummaryWithRecipients.class)
    @RequestMapping("/with-recipients")
    public List<Message> getAllMessagesWithRecipients() {
        return messageService.getAll();
    }
}

JSON Views can also be specified when using the RestTemplate HTTP client or MappingJackson2JsonView, by wrapping the value to serialize in a MappingJacksonValue, as shown in this code sample.

JSONP

As described in the reference documentation, you can enable JSONP for @ResponseBody and ResponseEntity methods by declaring an @ControllerAdvice bean that extends AbstractJsonpResponseBodyAdvice, as shown below:

@ControllerAdvice
public class JsonpAdvice extends AbstractJsonpResponseBodyAdvice {
    public JsonpAdvice() {
        super("callback");
    }
}

With such an @ControllerAdvice bean registered, it is possible to request the JSON web service from another domain using a <script> tag. In this example, the received payload would be:

parseResponse({ "id" : 1, "created" : "2014-11-14", ... });

JSONP is also supported and automatically enabled when using MappingJackson2JsonView with a request that has a query parameter named jsonp or callback. The JSONP query parameter name(s) can be customized through the jsonpParameterNames property.

XML Support

Since its 2.0 release, Jackson provides first-class support for some data formats other than JSON. Spring Framework and Spring Boot provide built-in support for Jackson-based XML serialization and deserialization. As soon as you include the jackson-dataformat-xml dependency in your project, it is automatically used instead of JAXB2.
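For reference, wiring in the extension is a one-line build change. Here is a minimal Maven sketch; the version can be omitted when the Spring Boot parent POM manages your Jackson dependencies, otherwise align it with your other Jackson modules:

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
    <!-- version managed by the Spring Boot parent POM; specify it explicitly otherwise -->
</dependency>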
Using the Jackson XML extension has several advantages over JAXB2:

- Both Jackson and JAXB annotations are recognized.
- JSON Views are supported, allowing you to easily build REST web services with the same filtered output for both XML and JSON data formats.
- There is no need to annotate your class with @XmlRootElement: each class serializable as JSON will be serializable as XML.

You usually also want to make sure that the XML library in use is Woodstox, since:

- It is faster than the StAX implementation provided with the JDK.
- It avoids some known issues, like adding unnecessary namespace prefixes.
- Some features, like pretty print, don't work without it.

In order to use it, simply add the latest available woodstox-core-asl dependency to your project.

Customizing the Jackson ObjectMapper

Prior to Spring Framework 4.1.1, the Jackson HttpMessageConverters used the default ObjectMapper configuration. In order to provide a better and easily customizable default configuration, a new Jackson2ObjectMapperBuilder has been introduced. It is the JavaConfig equivalent of the well-known Jackson2ObjectMapperFactoryBean used in XML configuration.

Jackson2ObjectMapperBuilder provides a nice API to customize various Jackson settings while retaining the defaults provided by Spring Framework. It also allows creating ObjectMapper and XmlMapper instances based on the same configuration. Both Jackson2ObjectMapperBuilder and Jackson2ObjectMapperFactoryBean define a better Jackson default configuration. For example, the DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES property is set to false, in order to allow deserialization of JSON objects with unmapped properties.

Jackson support for Java 8 Date & Time API data types is automatically registered when Java 8 is used and jackson-datatype-jsr310 is on the classpath. Joda-Time support is registered as well when jackson-datatype-joda is part of your project dependencies. These classes also allow you to easily register Jackson mixins, modules, serializers, or even a property naming strategy such as PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES, if you want your userName Java property translated to user_name in JSON.

With Spring Boot

As described in the Spring Boot reference documentation, there are various ways to customize the Jackson ObjectMapper. You can, for example, enable or disable Jackson features easily by adding properties like spring.jackson.serialization.indent_output=true to application.properties. As an alternative, in the upcoming 1.2 release, Spring Boot also allows customizing the Jackson configuration (JSON and XML) used by Spring MVC's HttpMessageConverters by declaring a Jackson2ObjectMapperBuilder @Bean:

@Bean
public Jackson2ObjectMapperBuilder jacksonBuilder() {
    Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
    builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
    return builder;
}

This is useful if you want to use advanced Jackson configuration not exposed through the regular configuration keys.
Without Spring Boot

In a plain Spring Framework application, you can also use Jackson2ObjectMapperBuilder to customize the XML and JSON HttpMessageConverters, as shown below:

@Configuration
@EnableWebMvc
public class WebConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
        builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
        converters.add(new MappingJackson2HttpMessageConverter(builder.build()));
        converters.add(new MappingJackson2XmlHttpMessageConverter(builder.createXmlMapper(true).build()));
    }
}

More to Come

With the upcoming Spring Framework 4.1.3 release, thanks to the addition of a Spring-context-aware HandlerInstantiator (see SPR-10768 for more details), you will be able to autowire Jackson handlers (serializers, deserializers, type and type ID resolvers). This will allow you to build, for example, a custom deserializer that replaces a field containing only a reference in the JSON payload with the full entity retrieved from the database.
December 9, 2014
by Pieter Humphrey
· 32,167 Views · 1 Like
High Availability, Disaster Recovery, and Microsoft Azure
Both high availability (HA) and disaster recovery (DR) have long been essential IT topics. Fundamentally, HA is about fault tolerance relevant to the availability of an examined subject such as an application, database, or VM, while DR is rooted in the ability to resume operations in the aftermath of a catastrophic event. A fundamental difference between the two is that HA expects no downtime and no data loss, while DR does. They are different issues and should be addressed separately.

Background

For many IT shops, both HA and DR have been high-risk, high-cost items. Each is essential to business continuity, yet traditionally a tough technical problem to solve, demanding very significant and long-term commitments of resources. Not only are they technically challenging, but the continual cost-cutting that has become standard IT practice over the past two decades has pushed purchasing hardware and software and constructing an HA or DR solution on premises even further from IT's financial and technical realities.

Sense of Urgency

Too often, the technical challenges and resource commitments overwhelm IT and turn HA and DR into academic discussions, or symbolic items on a project checklist. At the same time, information is exploding as the internet, mobility, and social networks become integral to our daily lives and businesses, and there is progressively more data to process and store. For many businesses, the need for HA and DR is urgent for better managing risk, and the continual availability and on-demand recoverability of IT are becoming increasingly critical. This is the reality.

Now the good news: the introduction of cloud computing has fundamentally changed how an HA or DR solution can be implemented. Microsoft Azure is a vivid example, offering HA and DR solutions with significantly reduced financial commitments and technical complexities. The traditional approach of establishing redundancy and acquiring a physical DR site, with its long-term resource and financial commitments, is now largely replaced by consumable services that can be configured in minutes with a few mouse clicks, at a manageable, usage-based cost. HA and DR have become financially realistic and technically feasible for businesses of all sizes.

HA, Redundancy, and Microsoft Azure LRS

HA is about eliminating a single point of failure of an examined component, an application for example. It denotes a strategy of employing redundancy such that a target application can and will continue to be available, without downtime, while experiencing a failure of hosting hardware or software. There are various well-developed HA solutions, such as a Hyper-V host cluster using redundant hardware to eliminate a single point of failure in the hosting OS or hardware, or an application cluster eliminating a single point of failure by running the application in multiple VM instances with synchronized state. Although HA implementations vary, the fundamental principle remains the same: HA expects neither downtime nor data loss while experiencing an outage of the target hardware or software.

HA has become dramatically simple in Microsoft Azure. Basically, all data written to disk in Microsoft Azure is kept in at least so-called LRS, locally redundant storage. LRS replicates a transaction synchronously to three different storage nodes across fault domains and upgrade domains within the same region for durability.
In layman's terms, Microsoft Azure by default maintains at least three copies of user data to achieve HA.

DR, Replication, and Microsoft Azure GRS

DR is about having a plan and backups in place to resume operations in the aftermath of a catastrophic event. An unplanned outage is assumed in a DR scenario, so some data loss is also expected. Notice that HA and DR are different business problems and are addressed differently. Both are based on redundancy, i.e., a source and replicas, or multiple identical nodes of an examined component such as an application instance, database, or VM. There are, however, differences between the two. A DR solution generally employs replicas or backups, is implemented with asynchronous processes, and expects an outage of the source with some data loss in transit while the outage occurs. HA, by contrast, requires a logical representation with real-time integrity using synchronous processes across all participating nodes, and expects neither downtime nor data loss during an outage of a participating node.

For a critical workload, one approach to DR is to establish geo-replication to address an outage of an entire geographic area, caused by a natural disaster for example. The concern is that a catastrophic event may impact an entire geographic area, making the datacenter hosting a mission-critical application unavailable for an extended period of time. In Microsoft Azure, geo-redundant storage, or GRS, is the default (though optional) setting when configuring a storage account. GRS queues a transaction committed to LRS as an asynchronous replication to a secondary region, a few hundred miles away from the primary region where the storage account originates. At the secondary region, the data is also stored in LRS, i.e., made durable by replicating it to three storage nodes.

Specifically, a Microsoft Azure storage account configured with GRS maintains three replicas locally for high availability, and replicates the content to maintain three more replicas at a secondary datacenter a few hundred miles away for DR. That is six copies in all: three local and three remote. All of this is configured with one, yes one, mouse click from a dropdown list while creating a storage account.

GRS replication has little performance impact on an application, since application data is committed to LRS in real time while replication to GRS is queued, i.e., asynchronous. A write to LRS is synchronous and in real time; once committed, the changes are expected to be asynchronously replicated to the secondary site within 15 minutes. For an RA-GRS storage account, in addition to the primary endpoint for read/write operations as in GRS, a secondary, read-only endpoint also becomes available. The cost implications of GRS or RA-GRS include the additional storage and the transmission costs for egress traffic, as applicable, at the secondary datacenter; ingress traffic is free. The Microsoft Azure Storage SLA offers 99.9% availability, and a cost calculator is also available.

Microsoft Azure Recovery Services

So far, much has been about backing up or replicating data. To successfully restore, a DR plan must be put in place and its availability ensured while a DR scenario is in progress. Placing a DR plan either at the primary site, where the source is, or at a secondary site, where a replica stays, raises some issues and concerns.
Keeping the DR plan at the source site, where all the resources and on-the-job training are in place, seems logical. Or does it? DR assumes a catastrophic event over an extended geographic area in which the source site is experiencing an outage; in such a case, keeping the DR plan at the source site defeats the purpose. Maintaining the DR plan at the secondary site is the choice, then. In a DR scenario, a recovery site is to be brought online within an expected period of time according to the DR plan, so having the plan right there at the recovery site makes all the sense. Or does it? This decision introduces a number of requirements, including physical readiness, timeliness, and the financial implications of securing and maintaining a DR plan at a remote physical facility.

For a VMM server running on System Center 2012 SP1 or later, an ideal, reliable, and straightforward way is to use Azure Recovery Services to maintain a DR plan. And for any backup needs, using the cloud as a backup site makes backing up and restoring data an anytime, anywhere operation.

Azure Site Recovery Vault

This service essentially acts as the director of a DR process. It orchestrates and manages the protection and failover of VMs in clouds managed by Virtual Machine Manager 2012 SP1 or later. A noticeable advantage is the ability to test a recovery configuration, exercise a proactive failover and recovery, and automate recovery in the event of a site outage. The SLA of Site Recovery is 99.9% availability, to ensure a configured DR plan is always in place with expected updates. This is a DR solution that IT can implement, simulate, verify, bring online, and be absolutely confident in regarding its readiness.

Azure Backup Vault

This is a reliable, scalable, and inexpensive data protection solution with zero capital investment and extremely low operational expense. As with other secure communications with Microsoft Azure, you first upload a public certificate to Microsoft Azure, then download the backup agent to register a target server with the backup vault, and then select what is to be backed up. Both the Microsoft Azure Backup SLA (99.9% availability) and a cost calculator are available for better assessing the solution.

Closing Thoughts

From an application's view, HA is an ongoing event, while DR is an anticipation. HA and DR are different business problems and should be addressed differently. Nevertheless, Microsoft Azure provides a single platform to gracefully address HA with LRS, DR with GRS, and DR orchestration with Recovery Services, all with published SLAs and a predictable cost structure. Going forward, IT pros can include HA and DR as a reliable, scalable, and relatively inexpensive proposition by employing Microsoft Azure as a solution platform.

Call to Action

Register at Microsoft Virtual Academy, http://aka.ms/mva1, and train yourself on Microsoft Azure by taking the track of courses. Go to http://aka.ms/azure200 to acquire a free trial subscription and assess Microsoft Azure for HA and DR solutions. Review my recommended content at http://aka.ms/recommended.
December 9, 2014
by Yung Chou
· 11,321 Views · 2 Likes
article thumbnail
Configuring RBAC in JBoss EAP and Wildfly - Part One
In this blog post I will look into the basics of configuring Role Based Access Control (RBAC) in EAP and WildFly. RBAC was introduced in EAP 6.2 and WildFly 8, so you will need either of those if you wish to use RBAC. For the purposes of this blog I will be using the following: OS - Ubuntu 14, Java - 1.7.0_67, JBoss - EAP 6.3. Although I'm using EAP, these instructions should work just the same on WildFly.

What is RBAC?

Role Based Access Control is designed to restrict system access by specifying permissions for management users. Each user with management access is given a role, and that role defines what they can and cannot access. In EAP 6.2+ and WildFly 8+ there are seven predefined roles, each of which has different permissions. Details on each of the roles can be found here: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.2/html/Security_Guide/Supported_Roles.html

In order to authenticate users, one of the three standard authentication providers must be used. These are: Local User - the local user is automatically added as a SuperUser, so a user on the server machine has full access; this user should be removed in a production system and access locked down to named users. Username/Password - using either the mgmt-users.properties file or an LDAP server. Client Certificate - using a trust store. For the purposes of this blog, and to keep things simple, we will use username/passwords and the mgmt-users.properties file.

Why do we need RBAC?

The easiest way to show this is through a practical demo. Configuration can be done either via the Management Console or via the Command Line Interface (CLI). However, only a limited set of tasks can be done via the management console, whereas all tasks are available via the CLI. Therefore, for the purposes of this blog, I will be doing all configuration via the CLI.

In our test scenario we have 4 users: Andy - this user is the main sys-admin, and therefore we want him to be able to access everything. Bob - this user is a lead developer, and therefore will need to be able to deploy apps and make changes to certain application resources. Clare & Dave - these users are standard developers and will need to be able to view application resources, but should not be able to make changes.

First of all we will set up a number of users. In order to do so we will use the add-user.sh script, which can be found in: /bin. Create the following users in the stated groups (enter No for the final question for all users): Andy - no group; Bob - lead-developers; Clare - standard-developers; Dave - standard-developers.

In /domain/configuration you will find a file called mgmt-users.properties. At the bottom of this file you will see a list of the users we've created, similar to this: Andy=82153e0297590cceb14e7620ccd3b6ed Bob=06a61e836d9d2d5be98517b468ab72cc Clare=63a8ff615a122c56b1d47fc098ff5124 Dave=2df8d1e02e7f3d13dcea7f4b022d0165

In the same directory you will find a file called mgmt-groups.properties; at the bottom of this file you will see a list of users and the groups they are in, like so: Andy= Bob=lead-developers Clare=developers Dave=developers

Now point a browser at http://localhost:9990 and log in as the user Dave. Navigate around and you will see you have full access to everything. This is precisely why RBAC is needed! Allowing all users to not only access the management console but to access and alter anything is a recipe for disaster and guaranteed to cause issues further down the line.
Often users don't understand the implications of the changes they have made; it may just be a quick fix to resolve an immediate issue, but it may have long-term consequences that are not noticed until much further down the line, when the changes that were made have been forgotten about or were never documented. As someone who works in support, we see these kinds of issues on a regular basis, and they can be difficult to track down with no audit trail and users not realising that the minor change they made to one part of the system is now causing a major issue in some other part.

OK, so we now have our users set up, but at the moment they have full access to everything. Next up we will configure these users and assign them to roles. First of all, start up the CLI by running the following command: /bin/jboss-cli.sh -c

Change directory to the authorization node: cd /core-service=management/access=authorization

Running the following command lists the current role names and the standard role names, along with two other attributes: ls -l

The two we are interested in here are permission-combination-policy and provider. The permission-combination-policy defines how permissions are determined if a user is assigned more than one role. The default setting is permissive. This means that if a user is assigned to any role that allows a particular action, then the user can perform that action. The opposite of this is rejecting. This means that if a user is assigned to multiple roles, then all those roles must permit an action for the user to be able to perform it. The other attribute of interest here is provider. This can be set to either simple (which is the default) or rbac. In simple mode, all management users can access everything and make changes, as we have seen. In rbac mode, users are assigned roles and each of those roles has different privileges.

Switching on RBAC

OK, let's turn on RBAC... Run the following commands: cd /core-service=management/access=authorization :write-attribute(name=provider, value=rbac)

Restart JBoss. Now point a browser at http://localhost:9990 and try to log in as the user Andy (who should be able to access everything). You should see the following message: Insufficient privileges to access this interface. This is because at the moment the user Andy isn't mapped to any role. Let's fix that now. If you look in domain.xml in the management element, you will see that at the moment only the local user is mapped to the SuperUser role.

Mapping users and groups to roles

We need to map our users to the relevant roles to allow them access. In order to do this we need the following command: role-mapping=ROLENAME/include=ALIAS:add(name=USERNAME, type=USER) where ROLENAME is one of the pre-configured roles, ALIAS is a unique name for the mapping, and USERNAME is the name of the user to map. So, let's map the user Andy to the SuperUser role: ./role-mapping=SuperUser/include=user-Andy:add(name=Andy, type=USER)

In domain.xml you will see that our user has been added to the SuperUser role. Now point a browser at http://localhost:9990; you should now be able to log in as the user Andy and have full access to everything. Next we need to add mappings for the other roles we want to use: ./role-mapping=Deployer:add ./role-mapping=Monitor:add

Now we need to give role mappings to all our other users. As we have them in groups, we can assign the groups to roles rather than mapping by user.
The command is basically the same as for a user, but the type is GROUP rather than USER. Here we are mapping lead developers to the Deployer role and standard developers to the Monitor role: ./role-mapping=Deployer/include=group-lead-devs:add(name=lead-developers, type=GROUP) ./role-mapping=Monitor/include=group-standard-devs:add(name=developers, type=GROUP)

If you look in domain.xml, you should now see that the user Andy is mapped to the SuperUser role and the two groups are mapped to the Deployer and Monitor roles. You can also view the role mappings in the admin console: click on the Administration tab, expand the Access Control item on the left, and select Role Assignment. The Users tab shows users that are mapped to roles; select the Groups tab and you will see the mapping between groups and roles. Log in as the different users and see the differences between what you can and can't access.

Conclusion

So, that's it for Part One. We have switched on RBAC, set up a number of users and groups, and mapped those users and groups to particular roles to give them different levels of access. In Part Two of this blog I will look at constraints, which allow more fine-grained permission setting; scoped roles, which allow you to set permissions on individual servers; and audit logging, which allows you to see who is accessing the management console and what changes they are making.
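Pulling the CLI steps above together, the whole Part One configuration looks like this (the user, group, and alias names are the demo ones from this post):

    /bin/jboss-cli.sh -c

    cd /core-service=management/access=authorization
    :write-attribute(name=provider, value=rbac)

    # restart JBoss, reconnect to the CLI, then add the role mappings
    cd /core-service=management/access=authorization
    ./role-mapping=SuperUser/include=user-Andy:add(name=Andy, type=USER)
    ./role-mapping=Deployer:add
    ./role-mapping=Monitor:add
    ./role-mapping=Deployer/include=group-lead-devs:add(name=lead-developers, type=GROUP)
    ./role-mapping=Monitor/include=group-standard-devs:add(name=developers, type=GROUP)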
December 9, 2014
by Andy Overton
· 10,752 Views
article thumbnail
Spring Integration Java DSL (pre Java 8): Line by Line Tutorial
Originally written by Artem Bilan on the SpringSource blog. Dear Spring Community! Recently we published the Spring Integration Java DSL: Line by line tutorial, which uses Java 8 Lambdas extensively. We received some feedback that this is a good introduction to the DSL, but that a similar tutorial is needed for those users who can't move to Java 8 or aren't yet familiar with Lambdas, but wish to take advantage of the DSL. So, to help those Spring Integration users who want to move from XML configuration to Java and annotation configuration, we provide this line-by-line tutorial to demonstrate that, even without Lambdas, we gain a lot from Spring Integration Java DSL usage. Although, most will agree that the lambda syntax provides for a more succinct definition. We analyse here the same Cafe Demo sample, but using the pre-Java 8 variant for configuration. Many options are the same, so we just copy/paste their description here to achieve a complete picture. Since this Spring Integration Java DSL configuration is quite different to the Java 8 lambda style, it will be useful for all users to get a knowledge of how we can achieve the same result with the rich variety of options provided by the Spring Integration Java DSL.

The source code for our application is placed in a single class, which is a Boot application; significant lines are annotated with a number corresponding to the comments, which follow (some generic type arguments, stripped in the original rendering, have been restored from the surrounding commentary):

    @SpringBootApplication // 1
    @IntegrationComponentScan // 2
    public class Application {

        public static void main(String[] args) throws Exception {
            ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args); // 3
            Cafe cafe = ctx.getBean(Cafe.class); // 4
            for (int i = 1; i <= 100; i++) { // 5
                Order order = new Order(i);
                order.addItem(DrinkType.LATTE, 2, false);
                order.addItem(DrinkType.MOCHA, 3, true);
                cafe.placeOrder(order);
            }
            System.out.println("Hit 'Enter' to terminate"); // 6
            System.in.read();
            ctx.close();
        }

        @MessagingGateway // 7
        public interface Cafe {

            @Gateway(requestChannel = "orders.input") // 8
            void placeOrder(Order order); // 9

        }

        private final AtomicInteger hotDrinkCounter = new AtomicInteger();

        private final AtomicInteger coldDrinkCounter = new AtomicInteger(); // 10

        @Autowired
        private CafeAggregator cafeAggregator; // 11

        @Bean(name = PollerMetadata.DEFAULT_POLLER)
        public PollerMetadata poller() { // 12
            return Pollers.fixedDelay(1000).get();
        }

        @Bean
        @SuppressWarnings("unchecked")
        public IntegrationFlow orders() { // 13
            return IntegrationFlows.from("orders.input") // 14
                    .split("payload.items", (Consumer) null) // 15
                    .channel(MessageChannels.executor(Executors.newCachedThreadPool())) // 16
                    .route("payload.iced", // 17
                            new Consumer<RouterSpec<ExpressionEvaluatingRouter>>() { // 18
                                @Override
                                public void accept(RouterSpec<ExpressionEvaluatingRouter> spec) {
                                    spec.channelMapping("true", "iced")
                                        .channelMapping("false", "hot"); // 19
                                }
                            })
                    .get(); // 20
        }

        @Bean
        public IntegrationFlow icedFlow() { // 21
            return IntegrationFlows.from(MessageChannels.queue("iced", 10)) // 22
                    .handle(new GenericHandler<OrderItem>() { // 23
                        @Override
                        public Object handle(OrderItem payload, Map<String, Object> headers) {
                            Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS);
                            System.out.println(Thread.currentThread().getName()
                                    + " prepared cold drink #" + coldDrinkCounter.incrementAndGet()
                                    + " for order #" + payload.getOrderNumber() + ": " + payload);
                            return payload; // 24
                        }
                    })
                    .channel("output") // 25
                    .get();
        }

        @Bean
        public IntegrationFlow hotFlow() { // 26
            return IntegrationFlows.from(MessageChannels.queue("hot", 10))
                    .handle(new GenericHandler<OrderItem>() {
                        @Override
                        public Object handle(OrderItem payload, Map<String, Object> headers) {
                            Uninterruptibles.sleepUninterruptibly(5, TimeUnit.SECONDS); // 27
                            System.out.println(Thread.currentThread().getName()
                                    + " prepared hot drink #" + hotDrinkCounter.incrementAndGet()
                                    + " for order #" + payload.getOrderNumber() + ": " + payload);
                            return payload;
                        }
                    })
                    .channel("output")
                    .get();
        }

        @Bean
        public IntegrationFlow resultFlow() { // 28
            return IntegrationFlows.from("output") // 29
                    .transform(new GenericTransformer<OrderItem, Drink>() { // 30
                        @Override
                        public Drink transform(OrderItem orderItem) {
                            return new Drink(orderItem.getOrderNumber(),
                                    orderItem.getDrinkType(),
                                    orderItem.isIced(),
                                    orderItem.getShots()); // 31
                        }
                    })
                    .aggregate(new Consumer<AggregatorSpec>() { // 32
                        @Override
                        public void accept(AggregatorSpec aggregatorSpec) {
                            aggregatorSpec.processor(cafeAggregator, null); // 33
                        }
                    }, null)
                    .handle(CharacterStreamWritingMessageHandler.stdout()) // 34
                    .get();
        }

        @Component
        public static class CafeAggregator { // 35

            @Aggregator // 36
            public Delivery output(List<Drink> drinks) {
                return new Delivery(drinks);
            }

            @CorrelationStrategy // 37
            public Integer correlation(Drink drink) {
                return drink.getOrderNumber();
            }

        }

    }

Examining the code line by line...

1. @SpringBootApplication - This new meta-annotation from Spring Boot 1.2 includes @Configuration and @EnableAutoConfiguration. Since we are in a Spring Integration application and Spring Boot has auto-configuration for it, @EnableIntegration is automatically applied to initialize the Spring Integration infrastructure, including an environment for the Java DSL - the DslIntegrationConfigurationInitializer, which is picked up by the IntegrationConfigurationBeanFactoryPostProcessor from /META-INF/spring.factories.

2. @IntegrationComponentScan - The Spring Integration analogue of @ComponentScan to scan components based on interfaces (the Spring Framework's @ComponentScan only looks at classes). Spring Integration supports the discovery of interfaces annotated with @MessagingGateway (see #7 below).

3. ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args); - The main method of our class is designed to start the Spring Boot application using the configuration from this class, and it starts an ApplicationContext via Spring Boot. In addition, it delegates command line arguments to Spring Boot; for example, you can specify --debug to see logs for the boot auto-configuration report.

4. Cafe cafe = ctx.getBean(Cafe.class); - Since we already have an ApplicationContext, we can start to interact with the application, and Cafe is that entry point - in EIP terms, a gateway. Gateways are simply interfaces, and the application does not interact with the Messaging API; it simply deals with the domain (see #7 below).

5. for (int i = 1; i <= 100; i++) { - To demonstrate the cafe "work", we initiate 100 orders with two drinks each - one hot and one iced - and send each Order to the Cafe gateway.

6. System.out.println("Hit 'Enter' to terminate"); - Typically Spring Integration applications are asynchronous, hence to avoid an early exit from the main Thread we block the main method until some end-user interaction through the command line. Non-daemon threads will keep the application open, but System.in.read() provides us with a mechanism to close the application cleanly.

7. @MessagingGateway - The annotation to mark a business interface to indicate it is a gateway between the end-application and the integration layer. It is an analogue of the <gateway> component from Spring Integration XML configuration. Spring Integration creates a Proxy for this interface and populates it as a bean in the application context. The purpose of this Proxy is to wrap parameters in a Message object and send it to the MessageChannel according to the provided options.

8. @Gateway(requestChannel = "orders.input") - The method-level annotation to distinguish business logic by methods as well as by the target integration flows. In this sample we use a requestChannel reference of orders.input, which is the MessageChannel bean name of our IntegrationFlow input channel (see #14 below).

9. void placeOrder(Order order); - The interface method is the central point of interaction between the end-application and the integration layer. This method has a void return type. It means that our integration flow is one-way: we just send messages to the integration flow, but don't wait for a reply.

10. private final AtomicInteger hotDrinkCounter = new AtomicInteger(); private final AtomicInteger coldDrinkCounter = new AtomicInteger(); - Two counters to gather information on how our cafe works with drinks.

11. @Autowired private CafeAggregator cafeAggregator; - The POJO for the Aggregator logic (see #33 and #35 below). Since it is a Spring bean, we can simply inject it into the current @Configuration and use it in any place below, e.g. from the .aggregate() EIP-method.

12. @Bean(name = PollerMetadata.DEFAULT_POLLER) public PollerMetadata poller() { - The default poller bean. It is an analogue of the <poller> component from Spring Integration XML configuration. It is required for endpoints where the inputChannel is a PollableChannel. In this case, it is necessary for the two Cafe queues - hot and iced (see #18 below). Here we use the Pollers factory from the DSL project and its method-chain fluent API to build the poller metadata. Note that Pollers can be used directly from an IntegrationFlow definition if a specific poller (rather than the default poller) is needed for an endpoint.

13. @Bean public IntegrationFlow orders() { - The IntegrationFlow bean definition. It is the central component of the Spring Integration Java DSL, although it does not play any role at runtime, just during the bean registration phase. All other code below registers Spring Integration components (MessageChannel, MessageHandler, EventDrivenConsumer, MessageProducer, MessageSource etc.) in the IntegrationFlow object, which is parsed by the IntegrationFlowBeanPostProcessor to process those components and register them as beans in the application context as necessary (some elements, such as channels, may already exist).

14. return IntegrationFlows.from("orders.input") - IntegrationFlows is the main factory class to start the IntegrationFlow. It provides a number of overloaded .from() methods to allow starting a flow from a SourcePollingChannelAdapter for MessageSource implementations, e.g. JdbcPollingChannelAdapter; from a MessageProducer, e.g. WebSocketInboundChannelAdapter; or simply from a MessageChannel. All .from() options have several convenient variants to configure the appropriate component for the start of the IntegrationFlow. Here we use just a channel name, which is converted to a DirectChannel bean definition during the bean definition phase while parsing the IntegrationFlow. In the Java 8 variant we used a Lambda definition here, and this MessageChannel was implicitly created with a bean name based on the IntegrationFlow bean name.

15. .split("payload.items", (Consumer) null) - Since our integration flow accepts messages through the orders.input channel, we are ready to consume and process them. The first EIP-method in our scenario is .split(). We know that the message payload from the orders.input channel is an Order domain object, so we can simply use a Spring (SpEL) Expression here to return the Collection of OrderItems. So, this performs the split EI pattern, and we send each collection entry as a separate message to the next channel. In the background, the .split() method registers an ExpressionEvaluatingSplitter MessageHandler implementation and an EventDrivenConsumer for that MessageHandler, wiring in the orders.input channel as the inputChannel. The second argument of the .split() EIP-method is an endpointConfigurer to customize options like autoStartup, requiresReply, adviceChain etc. We use null here to show that we rely on the default options for the endpoint. Many EIP-methods provide overloaded versions with and without an endpointConfigurer. Currently the .split(String expression) EIP-method without the endpointConfigurer argument is not available; this will be addressed in a future release.

16. .channel(MessageChannels.executor(Executors.newCachedThreadPool())) - The .channel() EIP-method allows the specification of concrete MessageChannels between endpoints, as is done via the output-channel/input-channel attribute pair in Spring Integration XML configuration. By default, endpoints in a DSL integration flow definition are wired with DirectChannels, which get bean names based on the IntegrationFlow bean name and the index in the flow chain. In this case we select a specific MessageChannel implementation from the MessageChannels factory class; the selected channel here is an ExecutorChannel, to allow distribution of messages from the splitter to separate Threads, to process them in parallel in the downstream flow.

17. .route("payload.iced", - The next EIP-method in our scenario is .route(), to send hot/iced order items to different Cafe kitchens. We again use a SpEL expression here to get the routingKey from the incoming message. In the Java 8 variant we used a method-reference Lambda expression, but for pre-Java 8 style we must use SpEL or an inline interface implementation. Many anonymous classes in a flow can make the flow difficult to read, so we prefer SpEL in most cases.

18. new Consumer<RouterSpec<ExpressionEvaluatingRouter>>() { - The second argument of the .route() EIP-method is a functional interface Consumer to specify ExpressionEvaluatingRouter options using a RouterSpec Builder. Since we don't have any choice with pre-Java 8, we just provide an inline implementation of this interface.

19. spec.channelMapping("true", "iced") .channelMapping("false", "hot"); - With the Consumer#accept() implementation we can provide the desired AbstractMappingMessageRouter options. One of them is channelMappings, where we specify the routing logic by the result of the router expression and the target MessageChannel for the appropriate result. In this case iced and hot are MessageChannel names for the IntegrationFlows below.

20. .get(); - This finalizes the flow. Any IntegrationFlows.from() method returns an IntegrationFlowBuilder instance, and this get() method extracts an IntegrationFlow object from the IntegrationFlowBuilder configuration. Everything starting from .from() and up to the method before .get() is an IntegrationFlow definition. All defined components are stored in the IntegrationFlow and processed by the IntegrationFlowBeanPostProcessor during the bean creation phase.

21. @Bean public IntegrationFlow icedFlow() { - This is the second IntegrationFlow bean definition - for iced drinks. Here we demonstrate that several IntegrationFlows can be wired together to create a single complex application. Note: it isn't recommended to inject one IntegrationFlow into another; it might cause unexpected behaviour. Since IntegrationFlows provide Integration components for bean registration, and MessageChannels are among them, the best way to wire and inject is via a MessageChannel or @MessagingGateway interfaces.

22. return IntegrationFlows.from(MessageChannels.queue("iced", 10)) - The iced IntegrationFlow starts from a QueueChannel that has a capacity of 10 messages; it is registered as a bean with the name iced. As you remember, we use this name as one of the route mappings (see #19 above). In our sample we use a restricted QueueChannel here to reflect the real-life busy state of a Cafe kitchen. And this is the place where we need that global poller, for the next endpoint listening on this channel.

23. .handle(new GenericHandler<OrderItem>() { - The .handle() EIP-method of the iced flow demonstrates the concrete Cafe kitchen work. Since we can't minimize the code with something like a Java 8 Lambda expression, we provide an inline implementation of the GenericHandler functional interface with the expected payload type as the generic argument. In the Java 8 example we distribute this .handle() between several subscriber subflows of a PublishSubscribeChannel; however, in this case the logic is all implemented in the one method.

24. Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); ... return payload; - The business logic implementation for the current .handle() EIP-component. With Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS); we just block the current Thread for some timeout to demonstrate how quickly the Cafe kitchen prepares a drink. After that we just report to STDOUT that the drink is ready and return the current OrderItem from the GenericHandler for the next endpoint in our IntegrationFlow. In the background, the DSL framework registers a ServiceActivatingHandler for the MethodInvokingMessageProcessor to invoke GenericHandler#handle at runtime. In addition, the framework registers a PollingConsumer endpoint for the QueueChannel above. This endpoint relies on the default poller to poll messages from the queue. Of course, we can always use a specific poller for any concrete endpoint; in that case, we would have to provide a second endpointConfigurer argument to the .handle() EIP-method.

25. .channel("output") - Since it is not the end of our Cafe scenario, we send the result of the current flow to the output channel using the convenient EIP-method .channel() and the name of the MessageChannel bean (see #29 below). This is the logical end of the current iced drink subflow, so we use the .get() method to return the IntegrationFlow. Flows that end with a reply-producing handler and don't have a final .channel() will return the reply to the message's replyChannel header.

26. @Bean public IntegrationFlow hotFlow() { - The IntegrationFlow definition for hot drinks. It is similar to the previous iced drinks flow, but with hot-specific business logic. It starts from the hot QueueChannel, which is mapped from the router above.

27. Uninterruptibles.sleepUninterruptibly(5, TimeUnit.SECONDS); - The sleepUninterruptibly for hot drinks. Right, we need more time to boil the water!

28. @Bean public IntegrationFlow resultFlow() { - One more IntegrationFlow bean definition, to prepare the Delivery for the Cafe client based on the Drinks.

29. return IntegrationFlows.from("output") - The resultFlow starts from the DirectChannel, which is created during the bean definition phase with the provided name. You should remember that we use the output channel name in the last .channel() of the Cafe kitchen flows.

30. .transform(new GenericTransformer<OrderItem, Drink>() { - The .transform() EIP-method is for the appropriate pattern implementation and expects some object to convert one payload to another. In our sample we use an inline implementation of the GenericTransformer functional interface to convert OrderItem to Drink, and we specify that using generic arguments. In the background, the DSL framework registers a MessageTransformingHandler and an EventDrivenConsumer endpoint with default options to consume messages from the output MessageChannel.

31. public Drink transform(OrderItem orderItem) { return new Drink(orderItem.getOrderNumber(), orderItem.getDrinkType(), orderItem.isIced(), orderItem.getShots()); } - The business-specific GenericTransformer#transform() implementation, to demonstrate how we benefit from Java Generics to transform one payload to another. Note: Spring Integration uses the ConversionService before any method invocation, and if you provide some specific Converter implementation, one domain payload can be converted to another automatically when the framework has an appropriate registered Converter.

32. .aggregate(new Consumer<AggregatorSpec>() { - The .aggregate() EIP-method provides options to configure an AggregatingMessageHandler and its endpoint, similar to what we can do with the <aggregator> component when using Spring Integration XML configuration. Of course, with the Java DSL we have more power to configure the aggregator in place, without any extra beans. However, we demonstrate here an aggregator configuration with annotations (see #35 below). From the Cafe business logic perspective, we compose the Delivery for the initial Order, since we .split() the original order into OrderItems near the beginning.

33. public void accept(AggregatorSpec aggregatorSpec) { aggregatorSpec.processor(cafeAggregator, null); } - An inline implementation of the Consumer for the AggregatorSpec. Using the aggregatorSpec Builder we can provide the desired options for the aggregator component, which will be registered as an AggregatingMessageHandler bean. Here we just provide the processor as a reference to the autowired (see #11 above) CafeAggregator component (see #35 below). The second argument of the .processor() option is methodName. Since we are relying on the aggregator annotation configuration for the POJO, we don't need to provide the method here, and the framework will determine the correct POJO methods in the background.

34. .handle(CharacterStreamWritingMessageHandler.stdout()) - It is the end of our flow - the Delivery is delivered to the client! We just print the message payload to STDOUT using the out-of-the-box CharacterStreamWritingMessageHandler from Spring Integration Core. This is a case to show how existing components from Spring Integration Core (and its modules) can be used from the Java DSL.

35. @Component public static class CafeAggregator { - The bean to specify the business logic for the aggregator above. This bean is picked up by the @ComponentScan, which is a part of the @SpringBootApplication meta-annotation (see #1 above). So, this component becomes a bean and we can automatically wire (@Autowired) it to other components in the application context (see #11 above).

36. @Aggregator public Delivery output(List<Drink> drinks) { return new Delivery(drinks); } - The POJO-specific MessageGroupProcessor to build the output payload based on the payloads of the aggregated messages. Since we mark this method with the @Aggregator annotation, the target AggregatingMessageHandler can extract this method for the MethodInvokingMessageGroupProcessor.

37. @CorrelationStrategy public Integer correlation(Drink drink) { return drink.getOrderNumber(); } - The POJO-specific CorrelationStrategy to extract the custom correlationKey from each inbound aggregator message. Since we mark this method with the @CorrelationStrategy annotation, the target AggregatingMessageHandler can extract this method for the MethodInvokingCorrelationStrategy. There is a similar, self-explanatory @ReleaseStrategy annotation, but in our Cafe sample we rely on the default SequenceSizeReleaseStrategy, which is based on the sequenceDetails message header populated by the splitter at the beginning of our integration flow.

Well, we have finished describing the Cafe Demo sample based on the Spring Integration Java DSL when Java Lambda support is not available. Compare it with the XML sample, and also see the Lambda support tutorial to get more information regarding Spring Integration. As you can see, using the DSL without lambdas is a little more verbose, because you need to provide boilerplate code for inline anonymous implementations of functional interfaces. However, we believe it is important to support the use of the DSL for users who can't yet move to Java 8. Many of the DSL benefits (fluent API, compile-time validation etc.) are available for all users. The use of lambdas continues the Spring Framework tradition of reducing or eliminating boilerplate code, so we encourage users to try Java 8 and lambdas, and to encourage their organizations to consider allowing the use of Java 8 for Spring Integration applications. In addition, see the Reference Manual for more information. As always, we look forward to your comments and feedback (StackOverflow (spring-integration tag), Spring JIRA, GitHub) and we very much welcome contributions! Thank you for your time and patience in reading this!
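To make the verbosity point concrete, here is roughly what the router step (#17-#19) of the orders() flow shrinks to once Java 8 lambdas are available; a sketch equivalent to the anonymous Consumer above, not a new feature:

    @Bean
    public IntegrationFlow orders() {
        return IntegrationFlows.from("orders.input")
                .split("payload.items", (Consumer) null)
                .channel(MessageChannels.executor(Executors.newCachedThreadPool()))
                // the inline Consumer implementation collapses to a lambda
                .route("payload.iced", spec -> spec
                        .channelMapping("true", "iced")
                        .channelMapping("false", "hot"))
                .get();
    }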
December 8, 2014
by Pieter Humphrey
· 12,321 Views
article thumbnail
Comparing Constants Safely
When comparing two objects, the equals method is used to return true if they are identical. Typically, this leads to the following code: if (name.equals("Jim")) { } The problem here is that, whether intended or not, it is quite possible that the name value is null, in which case a NullPointerException would be thrown. A better practice is to execute the equals method of the string constant "Jim" instead: if ("Jim".equals(name)) { } Since the constant is never null, a null pointer exception will not be thrown, and if the other value is null, the equals comparison will simply fail. If you are using Java 7 or above, the new Objects class has an equals static method to compare two objects while taking null values into account: if (Objects.equals(name, "Jim")) { } Alternatively, if you are using a Java version prior to Java 7 but are using the Guava library, you can use its Objects class, which has a static equal() method that takes two objects and handles null cases for you. It should also be noted that there are probably a number of other implementations in various libraries (e.g. Apache Commons).
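A minimal, self-contained illustration of the three approaches side by side (the variable names here are ours):

    import java.util.Objects;

    public class SafeEqualsDemo {
        public static void main(String[] args) {
            String name = null; // simulate a value that unexpectedly turned out null

            System.out.println("Jim".equals(name));          // false - constant first, no NPE
            System.out.println(Objects.equals(name, "Jim")); // false - null-safe since Java 7

            try {
                System.out.println(name.equals("Jim"));      // throws NullPointerException
            } catch (NullPointerException e) {
                System.out.println("NPE when calling equals on a null reference");
            }
        }
    }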
December 8, 2014
by Andy Gibson
· 6,918 Views
article thumbnail
JVM and Garbage Collection Interview Questions: The Beginners Guide
Have an interview coming up? Let us help you prep with these JVM and garbage collection basics.
December 8, 2014
by Sam Atkinson
· 84,136 Views · 9 Likes
article thumbnail
Learn R: How to Extract Rows and Columns From Data Frame
This article presents the command set in the R programming language that can be used to extract rows and columns from a given data frame.
December 8, 2014
by Ajitesh Kumar
· 1,103,441 Views · 5 Likes
article thumbnail
Spring RestTemplate with a Linked Resource
Spring Data REST is an awesome project that provides mechanisms to expose the resources underlying a Spring Data based repository as REST resources.

Exposing a service with a linked resource

Consider two simple JPA based entities, Course and Teacher (generic type arguments, stripped in the original rendering, have been restored from the field declarations):

    @Entity
    @Table(name = "teachers")
    public class Teacher {
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "id")
        private Long id;

        @Size(min = 2, max = 50)
        @Column(name = "name")
        private String name;

        @Column(name = "department")
        @Size(min = 2, max = 50)
        private String department;
        ...
    }

    @Entity
    @Table(name = "courses")
    public class Course {
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "id")
        private Long id;

        @Size(min = 1, max = 10)
        @Column(name = "coursecode")
        private String courseCode;

        @Size(min = 1, max = 50)
        @Column(name = "coursename")
        private String courseName;

        @ManyToOne
        @JoinColumn(name = "teacher_id")
        private Teacher teacher;
        ....
    }

Essentially, the relation is a many-to-one from Course to Teacher. Now, all it takes to expose these entities as REST resources is adding a @RepositoryRestResource annotation on their JPA based Spring Data repositories, first for the Teacher resource:

    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;
    import univ.domain.Teacher;

    @RepositoryRestResource
    public interface TeacherRepo extends JpaRepository<Teacher, Long> {
    }

and for exposing the Course resource:

    @RepositoryRestResource
    public interface CourseRepo extends JpaRepository<Course, Long> {
    }

With this done, and assuming a few teachers and a few courses are already in the datastore, a GET on courses would yield a response of the following type: { "_links" : { "self" : { "href" : "http://localhost:8080/api/courses{?page,size,sort}", "templated" : true } }, "_embedded" : { "courses" : [ { "courseCode" : "Course1", "courseName" : "Course Name 1", "version" : 0, "_links" : { "self" : { "href" : "http://localhost:8080/api/courses/1" }, "teacher" : { "href" : "http://localhost:8080/api/courses/1/teacher" } } }, { "courseCode" : "Course2", "courseName" : "Course Name 2", "version" : 0, "_links" : { "self" : { "href" : "http://localhost:8080/api/courses/2" }, "teacher" : { "href" : "http://localhost:8080/api/courses/2/teacher" } } } ] }, "page" : { "size" : 20, "totalElements" : 2, "totalPages" : 1, "number" : 0 } } and a specific course looks like this: { "courseCode" : "Course1", "courseName" : "Course Name 1", "version" : 0, "_links" : { "self" : { "href" : "http://localhost:8080/api/courses/1" }, "teacher" : { "href" : "http://localhost:8080/api/courses/1/teacher" } } } If you are wondering what the "_links" and "_embedded" are - Spring Data REST uses Hypertext Application Language (or HAL for short) to represent the links, say the one between a course and a teacher.

HAL Based REST service - Using RestTemplate

Given this HAL based REST service, the question that I had in my mind was how to write a client for this service. I am sure there are better ways of doing this, but what follows worked for me, and I welcome any cleaner ways of writing the client.
First, I modified the RestTemplate to register a custom JSON converter that understands HAL based links:

    public RestTemplate getRestTemplateWithHalMessageConverter() {
        RestTemplate restTemplate = new RestTemplate();
        List<HttpMessageConverter<?>> existingConverters = restTemplate.getMessageConverters();
        List<HttpMessageConverter<?>> newConverters = new ArrayList<>();
        newConverters.add(getHalMessageConverter());
        newConverters.addAll(existingConverters);
        restTemplate.setMessageConverters(newConverters);
        return restTemplate;
    }

    private HttpMessageConverter<?> getHalMessageConverter() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.registerModule(new Jackson2HalModule());
        MappingJackson2HttpMessageConverter halConverter =
                new TypeConstrainedMappingJackson2HttpMessageConverter(ResourceSupport.class);
        halConverter.setSupportedMediaTypes(Arrays.asList(HAL_JSON));
        halConverter.setObjectMapper(objectMapper);
        return halConverter;
    }

The Jackson2HalModule is provided by the Spring HATEOAS project and understands the HAL representation. Given this shiny new RestTemplate, first let us create a Teacher entity:

    Teacher teacher1 = new Teacher();
    teacher1.setName("Teacher 1");
    teacher1.setDepartment("Department 1");
    URI teacher1Uri = testRestTemplate.postForLocation("http://localhost:8080/api/teachers", teacher1);

Note that when the entity is created, the response is an HTTP status code of 201 with the Location header pointing to the URI of the newly created resource; Spring RestTemplate provides a neat way of posting and getting hold of this Location header through an API. So now we have a teacher1Uri representing the newly created teacher. Given this teacher URI, let us now retrieve the teacher. The raw JSON for the teacher resource looks like the following: { "name" : "Teacher 1", "department" : "Department 1", "version" : 0, "_links" : { "self" : { "href" : "http://localhost:8080/api/teachers/1" } } } and to retrieve this using RestTemplate:

    ResponseEntity<Resource<Teacher>> teacherResponseEntity =
            testRestTemplate.exchange("http://localhost:8080/api/teachers/1", HttpMethod.GET, null,
                    new ParameterizedTypeReference<Resource<Teacher>>() { });
    Resource<Teacher> teacherResource = teacherResponseEntity.getBody();
    Link teacherLink = teacherResource.getLink("self");
    String teacherUri = teacherLink.getHref();
    Teacher teacher = teacherResource.getContent();

Jackson2HalModule is the one which helps unpack the links this cleanly and get hold of the Teacher entity itself. I have previously explained ParameterizedTypeReference here. Now, to a more tricky part: creating a Course. Creating a course is tricky, as it has a relation to the Teacher, and representing this relation using HAL is not that straightforward.
A raw POST to create the course would look like this: { "courseCode" : "Course1", "courseName" : "Course Name 1", "version" : 0, "teacher" : "http://localhost:8080/api/teachers/1" } Note how the reference to the teacher is a URI; this is how HAL represents an embedded reference, specifically for POSTed content. So now, to get this form through RestTemplate - first create a Course:

    Course course1 = new Course();
    course1.setCourseCode("Course1");
    course1.setCourseName("Course Name 1");

At this point, it is easier to handle providing the teacher link by dealing with a JSON tree representation and adding in the teacher link as the teacher URI:

    ObjectMapper objectMapper = getObjectMapperWithHalModule();
    ObjectNode jsonNodeCourse1 = (ObjectNode) objectMapper.valueToTree(course1);
    jsonNodeCourse1.put("teacher", teacher1Uri.getPath());

and posting this should create the course with the linked teacher:

    URI course1Uri = testRestTemplate.postForLocation(coursesUri, jsonNodeCourse1);

and to retrieve this newly created Course:

    ResponseEntity<Resource<Course>> courseResponseEntity =
            testRestTemplate.exchange(course1Uri, HttpMethod.GET, null,
                    new ParameterizedTypeReference<Resource<Course>>() { });
    Resource<Course> courseResource = courseResponseEntity.getBody();
    Link teacherLinkThroughCourse = courseResource.getLink("teacher");

This concludes how to use the RestTemplate to create and retrieve a linked resource; alternate ideas are welcome. If you are interested in exploring this further, the entire sample is available at this github repo - and the test is here
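As a small follow-up in the same style, the teacher link obtained through the course can itself be dereferenced with the HAL-aware RestTemplate (reusing the types from the snippets above):

    ResponseEntity<Resource<Teacher>> teacherViaCourse =
            testRestTemplate.exchange(teacherLinkThroughCourse.getHref(), HttpMethod.GET, null,
                    new ParameterizedTypeReference<Resource<Teacher>>() { });
    Teacher linkedTeacher = teacherViaCourse.getBody().getContent();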
December 6, 2014
by Biju Kunjummen
· 28,348 Views · 1 Like
article thumbnail
Black Box Testing of Spring Boot Microservice is so Easy
When I needed to do prototyping, a proof of concept, or play with some new technology in my free time, starting a new project was always a little annoying barrier with Maven. I have to say that setting up a Maven project is not hard, and you can use Maven Archetypes. But Archetypes are often out of date, and who wants to play with old technologies? So I always ended up wiring in the dependencies I wanted to play with. Not very productively spent time. But then Spring Boot came my way, and I fell in love. In the last few months I have created at least 50 small playground projects and prototypes with Spring Boot, and have also incorporated it at work. It's just perfect for prototyping, learning, microservices, web, batch, enterprise, message flow or command line applications. You have to be a dinosaur or be blind not to evaluate Spring Boot for your next Spring project. And when you finish evaluating it, you will go for it. I promise.

I feel a need to highlight how easy Black Box Testing of a Spring Boot microservice is. Black Box Testing refers to testing without any poking at the application artifact; such testing can also be called integration testing. You can also perform performance or stress testing the way I am going to demonstrate. A Spring Boot microservice is usually a web application with embedded Tomcat, so it is executed as a JAR from the command line. There is the possibility to convert a Spring Boot project into a WAR artifact that can be hosted on a shared Servlet container, but we don't want that now; it's better when a microservice has its own little embedded container.

I used Spring's existing REST service guide as the testing target. The focus is mostly on the testing project, so it is handy to use this "Hello World" REST application as the example. I expect these two common tools are set up and installed on your machine: Maven 3 and Git.

So we'll need to download the source code and install the JAR artifact into our local repository. I am going to use the command line to download and install the microservice. Let's go to some directory where we download the source code and use these commands:

    git clone git@github.com:spring-guides/gs-rest-service.git
    cd gs-rest-service/complete
    mvn clean install

If everything went OK, the Spring Boot microservice JAR artifact is now installed in our local Maven repository. In serious Java development, it would rather be installed into a shared repository (e.g. Artifactory, Nexus, ...). When our microservice is installed, we can focus on the testing project. It is also Maven and Spring Boot based. Black box testing will be achieved by downloading the artifact from the Maven repository (it doesn't matter if it is local or remote). The maven-dependency-plugin can help us this way (the XML was flattened in the original rendering; this is the reconstructed configuration matching the elements listed there):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
            <execution>
                <id>copy-dependencies</id>
                <phase>compile</phase>
                <goals>
                    <goal>copy-dependencies</goal>
                </goals>
                <configuration>
                    <includeArtifactIds>gs-rest-service</includeArtifactIds>
                    <stripVersion>true</stripVersion>
                </configuration>
            </execution>
        </executions>
    </plugin>

It downloads the microservice artifact into the target/dependency directory by default. As you can see, it's hooked to the compile phase of the Maven lifecycle, so that the downloaded artifact is available during the test phase. The artifact file name is stripped of version information, since we use the latest version; this makes usage of the JAR artifact easier during testing. Readers skilled with Maven may notice the missing plugin version. A Spring Boot driven project inherits from a parent Maven project called spring-boot-starter-parent, which contains the versions of the main Maven plugins. This is one of Spring Boot's opinionated aspects. I like it, because it provides a stable dependency matrix. You can change the version if you need to. When we have the artifact in our file system, we can start testing.
We need to be able to execute the JAR file from the command line. I used the standard Java ProcessBuilder this way:

    public class ProcessExecutor {
        public Process execute(String jarName) throws IOException {
            ProcessBuilder pb = new ProcessBuilder("java", "-jar", jarName);
            pb.directory(new File("target/dependency"));
            File log = new File("log");
            pb.redirectErrorStream(true);
            pb.redirectOutput(Redirect.appendTo(log));
            return pb.start();
        }
    }

This class executes the given JAR as a process, based on the given file name. The location is hard-coded to the target/dependency directory, where the maven-dependency-plugin placed our artifact. Standard and error outputs are redirected to a file. The next class needed for testing is a DTO (Data Transfer Object). It is a simple POJO that will be used for deserialization from JSON. I use the Lombok project to reduce the boilerplate code needed for getters, setters, hashCode and equals:

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class Greeting {
        private long id;
        private String content;
    }

The test itself looks like this:

    public class BlackBoxTest {
        private static final String RESOURCE_URL = "http://localhost:8080/greeting";

        @Test
        public void contextLoads() throws InterruptedException, IOException {
            Process process = null;
            Greeting actualGreeting = null;
            try {
                process = new ProcessExecutor().execute("gs-rest-service.jar");
                RestTemplate restTemplate = new RestTemplate();
                waitForStart(restTemplate);
                actualGreeting = restTemplate.getForObject(RESOURCE_URL, Greeting.class);
            } finally {
                if (process != null) {
                    process.destroyForcibly();
                }
            }
            Assert.assertEquals(new Greeting(2L, "Hello, World!"), actualGreeting);
        }

        private void waitForStart(RestTemplate restTemplate) {
            while (true) {
                try {
                    Thread.sleep(500);
                    restTemplate.getForObject(RESOURCE_URL, String.class);
                    return;
                } catch (Throwable throwable) {
                    // ignoring errors until the service responds
                }
            }
        }
    }

It executes the Spring Boot microservice process first and waits until it starts. To verify that the microservice is started, it sends an HTTP request to the URL where it's expected; the service is ready for testing after the first successful response. The microservice should send a simple greeting JSON response for an HTTP GET request. Deserialization from JSON into our Greeting DTO is verified at the end of the test. The source code is shared on Github.
December 5, 2014
by Lubos Krnac
· 11,408 Views · 1 Like
article thumbnail
Headless Setup of a Java Project with Tomcat, IntelliJ Community Edition and Tomcat Maven Plugin
Use IntelliJ Community Edition, Tomcat, and the Tomcat Maven Plugin.
December 5, 2014
by Taimur Mirza
· 46,483 Views · 2 Likes
article thumbnail
A Look Into HTML6 - What Is It, and What Does it Have to Offer?
HTML is a simple web development language that keeps rolling out new versions, and work has now started on its sixth revision. HTML5, the current revision of HTML, is considered one of the most sought-after revisions compared to all previous HTML versions.

Let's Have an Overview of HTML5

HTML5 gave us some very exciting features, like audio and video support, offline local storage, and, most importantly, the ability to build mobile-optimized websites. In addition, it gave us freedom from using the type attribute in tags such as <script> and <link>
December 5, 2014
by Andrey Prikaznov
· 12,668 Views
article thumbnail
Avoid Unwanted Component Scanning of Spring Configuration
I came across an interesting problem on Stack Overflow. Brett Ryan had a problem where his Spring Security configuration was initialized twice. When I looked into his code, I spotted the problem. Let me show the code. He has a pretty standard Spring application (not using Spring Boot), using the more modern Java servlet configuration based on Spring's AbstractAnnotationConfigDispatcherServletInitializer:

    import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

    public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

        @Override
        protected Class<?>[] getRootConfigClasses() {
            return new Class[]{SecurityConfig.class};
        }

        @Override
        protected Class<?>[] getServletConfigClasses() {
            return new Class[]{WebConfig.class};
        }

        @Override
        protected String[] getServletMappings() {
            return new String[]{"/"};
        }
    }

As you can see, there are two configuration classes: SecurityConfig - holds the Spring Security configuration; WebConfig - the main Spring IoC container configuration.

    package net.lkrnac.blog.dontscanconfigurations;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
    import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
    import org.springframework.security.config.annotation.web.servlet.configuration.EnableWebMvcSecurity;

    @Configuration
    @EnableWebMvcSecurity
    public class SecurityConfig extends WebSecurityConfigurerAdapter {

        @Autowired
        public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
            System.out.println("Spring Security init...");
            auth
                .inMemoryAuthentication()
                .withUser("user").password("password").roles("USER");
        }
    }

    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

    @Configuration
    @EnableWebMvc
    @ComponentScan(basePackages = "net.lkrnac.blog.dontscanconfigurations")
    public class WebConfig extends WebMvcConfigurerAdapter {
    }

Pay attention to the component scanning in WebConfig. It is scanning the package where all three classes are located. When you run this on a servlet container, the text "Spring Security init..." is written to the console twice. This means the SecurityConfig configuration is loaded twice. It was loaded: 1. during creation of the root context in the method AppInitializer.getRootConfigClasses(), and 2. by the component scan in the class WebConfig; this second instance is created as part of the servlet context creation in the method AppInitializer.getServletConfigClasses(). Why? I found this explanation in Spring's documentation: Remember that @Configuration classes are meta-annotated with @Component, so they are candidates for component-scanning! So this is a feature of Spring, and therefore we want to avoid component scanning of the Spring @Configuration classes used by the servlet configuration. Brett Ryan independently found this problem and showed his solution in the mentioned Stack Overflow question:

    @ComponentScan(basePackages = "com.acme.app",
        excludeFilters = {
            @Filter(type = ASSIGNABLE_TYPE, value = {
                WebConfig.class,
                SecurityConfig.class
            })
    })

I don't like this solution. The annotation is too verbose for me. Also, a developer can create a new @Configuration class and forget to include it in this filter.
I would rather specify a special package to be excluded from Spring's component scanning. An even better solution would be not to define separate contexts at all and use only the servlet context, as described in the Spring Reference Documentation. By far the most optimal solution is using Spring Boot with an embedded servlet container, where you don't need to define AbstractAnnotationConfigDispatcherServletInitializer at all. I created a sample project on Github so that you can play with it.
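For completeness, one way to express that idea without enumerating every configuration class is to filter out everything annotated with @Configuration from the web context's scan. A sketch, with the caveat that it also hides any @Configuration class you actually wanted the servlet context to pick up:

    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.FilterType;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

    @Configuration
    @EnableWebMvc
    @ComponentScan(basePackages = "net.lkrnac.blog.dontscanconfigurations",
            // skip all @Configuration classes during this scan
            excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, value = Configuration.class))
    public class WebConfig extends WebMvcConfigurerAdapter {
    }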
December 4, 2014
by Lubos Krnac
· 47,478 Views · 1 Like
article thumbnail
Java vs. Other Programming Languages: Does Java Come Out on Top?
Java is, arguably, one of the most popular programming languages amongst developers and is used to create web applications, customized software and web portals, including eCommerce and m-Commerce solutions. For many developers, programming languages begin and end with Java. While there is no doubt Java has been going strong over the years and therefore must be doing a whole lot of things right, it would be a mistake to think there is no other language as good as Java. The fact is, every language has strengths and weaknesses; yes, even Java has a bunch of lacunae that get overlooked by programmers because of the truckload of benefits it brings to the table. As a programmer, it's important to compare Java with other programming languages so that you are able to choose the best language for a particular project. This article compares Java to some other commonly used languages and tries to find out whether Java comes out on top. (Note: We have not drawn comparisons with each and every feature offered by the languages covered in this article. We have identified certain key features offered by them and talk about how they compare with similar features in Java.)

1. Python

Python is a high-level language which fully supports object-oriented programming. Java, on the other hand, is not a pure object-oriented language. Python is a powerful, easy-to-use scripting language that excels as a "glue" language because it connects system components, whereas Java is characterized as a low-level implementation language. One of the key differences between the two is that Python programs are shorter than comparable Java programs. Let's, for instance, look at 'Hello World'.

'Hello World' in Java:

    public class Example {
        public static void main(String[] args) {
            System.out.println("hello world");
        }
    }

'Hello World' in Python:

    print "hello world"

Python has rich built-in high-level data types and even supports dynamic typing; this makes it one of the preferred choices of newbie programmers, as they have to write less code. But the same is not the case with Java, as developers are required to define the type of each variable before using it. Swift, a programming language created by Apple this year for iOS and OS X development, has some Python-inspired syntax. Many large organizations like Google, Yahoo, NASA, etc. are making use of Python. If they can trust Python, you can too! All said and done, Python does have some flaws. Python programs are generally expected to run slower than Java programs, making Java a favorable choice for enterprise-level application development. Moreover, Java has much better library support for some use cases than Python.

2. C++

Java was basically derived from C++. However, there are a surprising number of differences between the two, as the objectives were different for these languages. C++ was designed mainly for systems programming and for extending the C programming language, whereas Java was created initially to support network computing. Though Java is fast compared to Python, it runs significantly slower than C++. If we compare the libraries of the two languages, the C++ standard libraries are simple and robust, providing containers and associative arrays, whereas Java has a powerful cross-platform library. The other crucial difference between the two is that in Java garbage collection happens automatically, but there is no automatic garbage collection in C++; all objects must be destroyed manually through code.
There is a pretty high chance of a developer forgetting to delete all objects at the end. This leads to an increase in the size and memory footprint of the software, which can lead to an increase in cost.

3. Ruby

Ruby and Java have a lot in common, beginning with the fact that both are object-oriented languages and are strongly typed. The main difference between the two programming languages lies in the method of executing the code. Java code is first translated into virtual machine code, which runs faster than Ruby's interpreted code. Just as with Python, the biggest reason developers prefer Ruby over Java is that a function implemented in Ruby takes fewer lines of code than in Java. This makes it easier for Ruby developers to manage the code. Generally, high-traffic sites use Java rather than Ruby; a few years back, Twitter migrated from Ruby to Java and Scala. Java and Ruby can be used together, and they complement each other. JRuby, basically written in Java, is an implementation of the Ruby programming language atop the Java Virtual Machine.

4. C#

For the last few years, there has been a raging debate in the development community as to which language outperforms the other - Java or C#. If security or performance is being considered, then both languages receive a similar score. However, Java has a comparative advantage over C# because it is a platform-independent language. It is supported on more operating systems than C# without recompiling code. On the other hand, C# is not quite platform independent, as it can run on Windows and Mac OS X but not Linux. The two languages are quite similar in syntax and programming style. Developers should opt for a language that is a perfect fit for their project requirements; the focus should be on using a language that ensures a project can be developed easily and efficiently. For instance, if you are developing an application for the Windows desktop or a Windows phone, then pick C#, but if developing for an Android phone, go with Java.

5. PHP

PHP is a server-side scripting language, whereas Java is a general-purpose language. These two languages are structurally different and mutually inclusive. PHP is a weakly typed language, whereas Java is a strongly typed language where a programmer is required to declare a data type for each variable and/or value. This may make PHP more attractive to programmers, as it does not adhere to fixed standards like Java, but in turn it may complicate certain tasks. Apart from the structural difference, a major difference between the two is that, unlike Java, where the JVM keeps running between requests, PHP effectively starts from a clean slate on every request; this per-request reinitialization can result in extra performance problems. A programmer should choose PHP if he/she doesn't have a lot of time to complete a project, but should go for Java if the project lays emphasis on features like scalability, performance and security.

CONCLUSION

After comparing Java with five languages, do we now have a clear answer as to whether Java is superior to all other languages? The answer is 'YES' and 'NO'. YES, because it is a language low-level enough to let you understand the basics by implementing algorithms in the simplest possible form, and at the same time high-level enough to implement any task efficiently. And NO, because everything that can be written in Java can be written in other languages (like C#), but the reverse is not true. Java has evolved a lot since its inception and holds the lead in many areas of software development, so its survivability is not in doubt. In fact, die-hard Java folks are expected to stick with it for years!
However, it is advisable that programmers adopt a 'horses for courses' policy when picking a programming language: the choice should depend on the project's needs and requirements, not on a language's popularity.
December 4, 2014
by Michael Georgiou
· 59,921 Views
Hibernate: @Where Clause
Recently I worked on a part of a project with a lot of entities. As in many other projects of this kind, a "soft delete" approach was implemented: when someone deletes an entity, it remains in the database, but a special field (e.g. 'isDeleted') is set to true. As you have already guessed, every SELECT operation for this kind of entity then needs the extra condition:

WHERE isDeleted = false

It is a little redundant and tedious to append this condition to every SQL query, so I started looking for an elegant solution to the problem. Fortunately, a colleague of mine gave me a hint on how to deal with such cases. The answer is hidden behind Hibernate's @Where annotation. Let's look at how we can decorate an entity with the @Where annotation to avoid the extra condition in regular SQL queries:

import org.hibernate.annotations.Where;

import javax.persistence.*;

@Entity
@Table
@Where(clause = "isDeleted='false'")
public class Customer {

    @Id
    @GeneratedValue
    @Column
    private Integer id;

    @Column
    private String name;

    @Column
    private Boolean isDeleted;

    // Getters and setters

}

Now, whenever you select Customer entities at the JPA level, you will only ever get records with isDeleted=false. This is very convenient when you are working with "soft delete" or in any other situation that requires the permanent application of some condition. I hope it will be useful for your projects.
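To round this off, here is a minimal, hypothetical usage sketch (the persistence unit name "demo-unit" and the bootstrap code are my own, not from the article). Note that the JPQL query contains no isDeleted predicate; Hibernate appends the @Where clause to the generated SQL by itself:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class CustomerQueryDemo {
    public static void main(String[] args) {
        // "demo-unit" is a hypothetical persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-unit");
        EntityManager em = emf.createEntityManager();
        try {
            // No isDeleted condition here: the @Where clause from the entity
            // (isDeleted='false') is added to the generated SQL automatically.
            List<Customer> active =
                em.createQuery("select c from Customer c", Customer.class)
                  .getResultList();
            System.out.println("Active customers: " + active.size());
        } finally {
            em.close();
            emf.close();
        }
    }
}

Run against a table containing both deleted and active rows, this should return only the active ones.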
December 2, 2014
by Alexey Zvolinskiy
· 54,325 Views · 7 Likes
Tutorial: Web Server with the ESP8266 WiFi Module
It has been a while since my first post about the ESP8266 (see "Cheap and Simple WiFi with ESP8266 for the FRDM Board"). The ESP8266 is a new, inexpensive ($4.50) WiFi module which makes it easy to connect to a network or the internet. This weekend I finally found the time to write up a tutorial: how to implement a WiFi web server with the ESP8266 WiFi module and the Freescale FRDM-KL25Z board:

ESP8266 Web Server

FRDM-KL25Z with ESP8266 WiFi Module

Outline

In this tutorial I'm using a Freescale FRDM-KL25Z board as a web server, together with the ESP8266 board. The ESP8266 is a 'less than $4.50' WiFi board that is getting more and more popular as an IoT board. There is even a way to run the ESP8266 standalone (because it has a full processor on board), but that development is still in flux and rather unstable, so I'm using a serial connection to the ESP8266 instead. With this, any small microcontroller can send and receive data from the internet: connect the board to a microcontroller with 3.3V, GND, Tx and Rx, and you have a WLAN connection! In this tutorial I'm using Eclipse with GNU/GDB and Processor Expert, but with the steps shown you should be able to use any other toolchain too. Things might change in the future with different firmware on the ESP8266; the firmware on my board is version 00160901.

Board Connections

Since my first post on the ESP8266 I have cleaned up the wiring. The pins of the ESP8266 are as below:

ESP8266 Pins

Because the ESP8266 can draw more than 200 mA, I'm using a 5V-to-3.3V DC-DC converter. I measured around 70 to 90 mA, so the converter is not (yet) really needed, but I wanted to use it to protect the board. The ESP8266 Rx and Tx are connected to the microcontroller Tx and Rx pins. A general frustration point with the ESP8266 module is the connection of the remaining pins. What worked for me is to connect CH_PD to 3.3V and leave RST, GPIO0 and GPIO2 unconnected/floating.

Wiring Setup with FRDM-KL25Z and ESP8266

Communication Protocol

I recommend using a logic analyzer to verify the communication between the ESP8266 and the microcontroller. My module communicates at 115200 baud, but there are reports that other modules (with other firmware) use a different baud rate. The module uses an AT command set. The simplest command is to send "AT\r\n", and the module responds with "AT\r\r\n\r\nOK\r\n":

AT Command Sent to ESP8266

In this tutorial I'm using a command line shell (see "A Shell for the Freedom KL25Z Board") to have a manual way to send commands to the module. More about this later.

Project Creation

You can use my project and source files available on GitHub (see the link at the end of this article), or create your own project. My project uses the Kinetis Design Studio and targets the FRDM-KL25Z board (MKL25Z128VLK4). I have created a Processor Expert project, as I'm using several of its components:

Processor Expert Project

The project has several files added:

ESP8266 Project in Eclipse

It uses the following source files:

Application.c/.h: runs the application and the web server program
ESP8266.c/.h: driver for the ESP8266
Events.c/.h: Processor Expert event hooks
main.c: main entry point
Shell.c/.h: command line interface

Sources

Project and source files are available on GitHub here: https://github.com/ErichStyger/mcuoneclipse/tree/master/Examples/KDS/FRDM-KL25Z/FRDM-KL25Z_ESP8266

Please check the latest source files on GitHub; at the time of writing this article, I'm using the following. Shell.h is the interface to the command line shell:
/*
 * Shell.h
 *
 * Author: Erich Styger
 */

#ifndef SHELL_H_
#define SHELL_H_

/*!
 * \brief Shell parse routine
 */
void SHELL_Parse(void);

/*!
 * \brief Shell initialization
 */
void SHELL_Init(void);

#endif /* SHELL_H_ */

Shell.c implements the application part of the shell:

/*
 * Shell.c
 *
 * Author: Erich Styger
 */

#include "Shell.h"
#include "CLS1.h"
#include "ESP8266.h"

/* table with shell parser/handler */
static const CLS1_ParseCommandCallback CmdParserTable[] =
{
  CLS1_ParseCommand,
  ESP_ParseCommand,
  NULL /* sentinel */
};

static unsigned char localConsole_buf[48]; /* buffer for command line */

void SHELL_Parse(void) {
  (void)CLS1_ReadAndParseWithCommandTable(localConsole_buf, sizeof(localConsole_buf), CLS1_GetStdio(), CmdParserTable);
}

void SHELL_Init(void) {
  localConsole_buf[0] = '\0'; /* initialize buffer */
}

ESP8266.h is the interface to the WiFi module:

/*
 * ESP8266.h
 *
 * Author: Erich Styger
 */

#ifndef ESP8266_H_
#define ESP8266_H_

#include "CLS1.h"

#define ESP_DEFAULT_TIMEOUT_MS (100)
  /*!< Default timeout value in milliseconds */

/*!
 * \brief Command line parser routine
 * \param cmd Pointer to command line string
 * \param handled Return value if command has been handled
 * \param io Standard Shell I/O handler
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_ParseCommand(const unsigned char *cmd, bool *handled, const CLS1_StdIOType *io);

/*!
 * \brief Send a string to the ESP8266 module
 * \param str String to send, "\r\n" will be appended
 * \param io Shell I/O handler or NULL if not used
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_SendStr(const uint8_t *str, CLS1_ConstStdIOType *io);

/*!
 * \brief Used to send an AT command to the ESP8266 module
 * \param cmd Command string to send
 * \param rxBuf Buffer for the response, can be NULL
 * \param rxBufSize Size of response buffer
 * \param expectedTailStr Expected response from the module, can be NULL
 * \param msTimeout Timeout time in milliseconds
 * \param io Shell I/O handler or NULL if not used
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_SendATCommand(uint8_t *cmd, uint8_t *rxBuf, size_t rxBufSize, uint8_t *expectedTailStr, uint16_t msTimeout, const CLS1_StdIOType *io);

/*!
 * \brief Read from the serial line from the module until a sentinel char is received
 * \param buf Buffer for the received characters
 * \param bufSize Size of the buffer
 * \param sentinelChar Sentinel character to wait for
 * \param timeoutMs Timeout time in milliseconds
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_ReadCharsUntil(uint8_t *buf, size_t bufSize, uint8_t sentinelChar, uint16_t timeoutMs);

/*!
 * \brief Sends an AT command to test the connection
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_TestAT(void);

/*!
 * \brief Restarts the ESP8266 module
 * \param io Shell I/O handler or NULL if not used
 * \param timeoutMs Timeout time in milliseconds
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_Restart(const CLS1_StdIOType *io, uint16_t timeoutMs);

/*!
 * \brief Set the current mode of the module
 * \param mode Mode, where 1=Sta, 2=AP or 3=both
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_SelectMode(uint8_t mode);

/*!
 * \brief Returns the firmware version string
 * \param fwBuf Buffer for the string
 * \param fwBufSize Size of buffer in bytes
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_GetFirmwareVersionString(uint8_t *fwBuf, size_t fwBufSize);

/*!
 * \brief Join an access point.
 * \param ssid SSID of access point
 * \param pwd Password of access point
 * \param nofRetries Number of connection retries
 * \param io Shell I/O or NULL if not used
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_JoinAP(const uint8_t *ssid, const uint8_t *pwd, int nofRetries, CLS1_ConstStdIOType *io);

/*!
 * \brief Scans for an IPD message sent by the module
 * \param msgBuf Pointer to the message buffer where to store the message
 * \param msgBufSize Size of message buffer
 * \param ch_id Pointer to where to store the channel/id
 * \param size Pointer where to store the size of the message
 * \param isGet TRUE if it is a GET message, FALSE for a POST message
 * \param timeoutMs Timeout time in milliseconds
 * \param io Shell I/O handler or NULL if not used
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_GetIPD(uint8_t *msgBuf, size_t msgBufSize, uint8_t *ch_id, uint16_t *size, bool *isGet, uint16_t timeoutMs, const CLS1_StdIOType *io);

/*!
 * \brief Closes a connection
 * \param channel Channel ID
 * \param io Shell I/O handler or NULL if not used
 * \param timeoutMs Timeout time in milliseconds
 * \return Error code, ERR_OK for no failure
 */
uint8_t ESP_CloseConnection(uint8_t channel, const CLS1_StdIOType *io, uint16_t timeoutMs);

/*!
 * \brief Used to determine if the web server is running or not.
 * \return TRUE if the web server has been started
 */
bool ESP_IsServerOn(void);

/*!
 * \brief Driver initialization
 */
void ESP_Init(void);

/*!
 * \brief Driver de-initialization
 */
void ESP_Deinit(void);

#endif /* ESP8266_H_ */

The ESP8266 driver itself is in ESP8266.c, which implements the low-level serial access functions, the functional implementation and a command line shell interface:

/*
 * ESP8266.c
 *
 * Author: Erich Styger
 */

#include "ESP8266.h"
#include "Shell.h"
#include "UTIL1.h"
#include "CLS1.h"
#include "AS2.h"
#include "WAIT1.h"

static bool ESP_WebServerIsOn = FALSE;

bool ESP_IsServerOn(void) {
  return ESP_WebServerIsOn;
}

static void Send(unsigned char *str) {
  while(*str!='\0') {
    AS2_SendChar(*str);
    str++;
  }
}

static void SkipNewLines(const unsigned char **p) {
  while(**p=='\n' || **p=='\r') {
    (*p)++; /* skip new lines */
  }
}

uint8_t ESP_ReadCharsUntil(uint8_t *buf, size_t bufSize, uint8_t sentinelChar, uint16_t timeoutMs) {
  uint8_t ch;
  uint8_t res = ERR_OK;

  if (bufSize<=1) {
    return ERR_OVERRUN; /* buffer too small */
  }
  buf[0] = '\0'; buf[bufSize-1] = '\0'; /* always terminate */
  bufSize--;
  for(;;) { /* breaks */
    if (bufSize==0) {
      res = ERR_OVERRUN;
      break;
    }
    if (AS2_GetCharsInRxBuf()>0) {
      (void)AS2_RecvChar(&ch);
      *buf = ch;
      buf++;
      bufSize--;
      if (ch==sentinelChar) {
        *buf = '\0'; /* terminate string */
        break; /* sentinel found */
      }
    } else {
      if (timeoutMs>10) {
        WAIT1_WaitOSms(5);
        timeoutMs -= 5;
      } else {
        res = ERR_NOTAVAIL; /* timeout */
        break;
      }
    }
  }
  return res;
}

static uint8_t RxResponse(unsigned char *rxBuf, size_t rxBufLength, unsigned char *expectedTail, uint16_t msTimeout) {
  unsigned char ch;
  uint8_t res = ERR_OK;
  unsigned char *p;

  if (rxBufLength < sizeof("x\r\n")) {
    return ERR_OVERFLOW; /* not enough space in buffer */
  }
  p = rxBuf;
  p[0] = '\0';
  for(;;) { /* breaks */
    if (msTimeout == 0) {
      break; /* will decide outside of loop if it is a timeout. */
    } else if (rxBufLength == 0) {
      res = ERR_OVERFLOW; /* not enough space in buffer */
      break;
    } else if (AS2_GetCharsInRxBuf() > 0) {
#if 0
      if (AS2_RecvChar(&ch) != ERR_OK) {
        res = ERR_RXEMPTY;
        break;
      }
#else
      /* might get an overrun OVERRUN_ERR error here? Ignoring error for now */
      (void)AS2_RecvChar(&ch);
#endif
      *p++ = ch;
      *p = '\0'; /* always terminate */
      rxBufLength--;
    } else if (expectedTail!=NULL && expectedTail[0]!='\0'
          && UTIL1_strtailcmp(rxBuf, expectedTail) == 0) {
      break; /* finished */
    } else {
      WAIT1_WaitOSms(1);
      msTimeout--;
    }
  } /* for */
  if (msTimeout==0) { /* timeout! */
    if (expectedTail[0] != '\0' /* timeout, and we expected something: an error for sure */
        || rxBuf[0] == '\0' /* timeout, did not know what to expect, but received nothing? There has to be a response. */
       )
    {
      res = ERR_FAULT;
    }
  }
  return res;
}

uint8_t ESP_SendATCommand(uint8_t *cmd, uint8_t *rxBuf, size_t rxBufSize, uint8_t *expectedTailStr, uint16_t msTimeout, const CLS1_StdIOType *io) {
  uint16_t snt;
  uint8_t res = ERR_OK;

  if (rxBuf!=NULL) {
    rxBuf[0] = '\0';
  }
  if (io!=NULL) {
    CLS1_SendStr("sending>>:\r\n", io->stdOut);
    CLS1_SendStr(cmd, io->stdOut);
  }
  if (AS2_SendBlock(cmd, (uint16_t)UTIL1_strlen((char*)cmd), &snt) != ERR_OK) {
    return ERR_FAILED;
  }
  if (rxBuf!=NULL) {
    res = RxResponse(rxBuf, rxBufSize, expectedTailStr, msTimeout);
    if (io!=NULL) {
      CLS1_SendStr("received<<:\r\n", io->stdOut);
      CLS1_SendStr(rxBuf, io->stdOut);
    }
  }
  return res;
}

uint8_t ESP_TestAT(void) {
  /* AT */
  uint8_t rxBuf[sizeof("AT\r\r\n\r\nOK\r\n")];
  uint8_t res;

  res = ESP_SendATCommand("AT\r\n", rxBuf, sizeof(rxBuf), "AT\r\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, NULL);
  return res;
}

uint8_t ESP_Restart(const CLS1_StdIOType *io, uint16_t timeoutMs) {
  /* AT+RST */
  uint8_t rxBuf[sizeof("AT+RST\r\r\n\r\nOK\r\n")];
  uint8_t res;
  uint8_t buf[64];

  AS2_ClearRxBuf(); /* clear buffer */
  res = ESP_SendATCommand("AT+RST\r\n", rxBuf, sizeof(rxBuf), "AT+RST\r\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, io);
  if (res==ERR_OK) {
    for(;;) {
      ESP_ReadCharsUntil(buf, sizeof(buf), '\n', 1000);
      if (io!=NULL) {
        CLS1_SendStr(buf, io->stdOut);
      }
      if (UTIL1_strncmp(buf, "ready", sizeof("ready")-1)==0) { /* wait until ready message from module */
        break; /* module has restarted */
      }
    }
  }
  AS2_ClearRxBuf(); /* clear buffer */
  return res;
}

uint8_t ESP_CloseConnection(uint8_t channel, const CLS1_StdIOType *io, uint16_t timeoutMs) {
  /* AT+CIPCLOSE=<channel> */
  uint8_t res;
  uint8_t cmd[64];

  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPCLOSE=");
  UTIL1_strcatNum8u(cmd, sizeof(cmd), channel);
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, NULL, 0, "Unlink\r\n", timeoutMs, io);
  return res;
}

uint8_t ESP_SetNumberOfConnections(uint8_t nof, const CLS1_StdIOType *io, uint16_t timeoutMs) {
  /* AT+CIPMUX=<mode>; 0: single connection, 1: multiple connections */
  uint8_t res;
  uint8_t cmd[sizeof("AT+CIPMUX=12\r\n")];
  uint8_t rxBuf[sizeof("AT+CIPMUX=12\r\n\r\nOK\r\n")+10];

  if (nof>1) { /* only 0 and 1 allowed */
    if (io!=NULL) {
      CLS1_SendStr("Wrong number of connection parameter!\r\n", io->stdErr);
    }
    return ERR_FAILED;
  }
  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPMUX=");
  UTIL1_strcatNum8u(cmd, sizeof(cmd), nof);
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, rxBuf, sizeof(rxBuf), "OK\r\n", timeoutMs, io);
  return res;
}

uint8_t ESP_SetServer(bool startIt, uint16_t port, const CLS1_StdIOType *io, uint16_t timeoutMs) {
  /* AT+CIPSERVER=<mode>,<port>, where <mode> 0: stop, 1: start */
  uint8_t res;
  uint8_t cmd[sizeof("AT+CIPSERVER=1,80\r\n\r\nOK\r\n")+sizeof("no change")];
  uint8_t rxBuf[sizeof("AT+CIPSERVER=1,80\r\n\r\nOK\r\n")+sizeof("no change")];

  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPSERVER=");
  if (startIt) {
    UTIL1_strcat(cmd, sizeof(cmd), "1,");
  } else {
    UTIL1_strcat(cmd, sizeof(cmd), "0,");
  }
  UTIL1_strcatNum16u(cmd, sizeof(cmd), port);
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, rxBuf, sizeof(rxBuf), "OK\r\n", timeoutMs, io);
  if (res!=ERR_OK) { /* accept "no change" too */
    UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPSERVER=");
    if (startIt) {
      UTIL1_strcat(cmd, sizeof(cmd), "1,");
    } else {
      UTIL1_strcat(cmd, sizeof(cmd), "0,");
    }
    UTIL1_strcatNum16u(cmd, sizeof(cmd), port);
    UTIL1_strcat(cmd, sizeof(cmd), "\r\r\nno change\r\n");
    if (UTIL1_strcmp(rxBuf, cmd)==0) {
      res = ERR_OK;
    }
  }
  return res;
}

uint8_t ESP_SelectMode(uint8_t mode) {
  /* AT+CWMODE=<mode>, where <mode> is 1=Sta, 2=AP or 3=both */
  uint8_t txBuf[sizeof("AT+CWMODE=x\r\n")];
  uint8_t rxBuf[sizeof("AT+CWMODE=x\r\r\nno change\r\n")];
  uint8_t expected[sizeof("AT+CWMODE=x\r\r\nno change\r\n")];
  uint8_t res;

  if (mode<1 || mode>3) {
    return ERR_RANGE; /* only 1, 2 or 3 */
  }
  UTIL1_strcpy(txBuf, sizeof(txBuf), "AT+CWMODE=");
  UTIL1_strcatNum16u(txBuf, sizeof(txBuf), mode);
  UTIL1_strcat(txBuf, sizeof(txBuf), "\r\n");
  UTIL1_strcpy(expected, sizeof(expected), "AT+CWMODE=");
  UTIL1_strcatNum16u(expected, sizeof(expected), mode);
  UTIL1_strcat(expected, sizeof(expected), "\r\r\n\n");
  res = ESP_SendATCommand(txBuf, rxBuf, sizeof(rxBuf), expected, ESP_DEFAULT_TIMEOUT_MS, NULL);
  if (res!=ERR_OK) {
    /* answer could be as well "AT+CWMODE=x\r\r\nno change\r\n"!! */
    UTIL1_strcpy(txBuf, sizeof(txBuf), "AT+CWMODE=");
    UTIL1_strcatNum16u(txBuf, sizeof(txBuf), mode);
    UTIL1_strcat(txBuf, sizeof(txBuf), "\r\n");
    UTIL1_strcpy(expected, sizeof(expected), "AT+CWMODE=");
    UTIL1_strcatNum16u(expected, sizeof(expected), mode);
    UTIL1_strcat(expected, sizeof(expected), "\r\r\nno change\r\n");
    if (UTIL1_strcmp(rxBuf, expected)==0) {
      res = ERR_OK;
    }
  }
  return res;
}

uint8_t ESP_GetFirmwareVersionString(uint8_t *fwBuf, size_t fwBufSize) {
  /* AT+GMR */
  uint8_t rxBuf[32];
  uint8_t res;
  const unsigned char *p;

  res = ESP_SendATCommand("AT+GMR\r\n", rxBuf, sizeof(rxBuf), "\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, NULL);
  if (res!=ERR_OK) {
    if (UTIL1_strtailcmp(rxBuf, "\r\n\r\nOK\r\n")) {
      res = ERR_OK;
    }
  }
  if (res==ERR_OK) {
    if (UTIL1_strncmp(rxBuf, "AT+GMR\r\r\n", sizeof("AT+GMR\r\r\n")-1)==0) { /* check for beginning of response */
      UTIL1_strCutTail(rxBuf, "\r\n\r\nOK\r\n"); /* cut tailing response */
      p = rxBuf+sizeof("AT+GMR\r\r\n")-1; /* skip beginning */
      UTIL1_strcpy(fwBuf, fwBufSize, p); /* copy firmware information string */
    } else {
      res = ERR_FAILED;
    }
  }
  if (res!=ERR_OK) {
    UTIL1_strcpy(fwBuf, fwBufSize, "ERROR"); /* default error */
  }
  return res;
}

uint8_t ESP_GetIPAddrString(uint8_t *ipBuf, size_t ipBufSize) {
  /* AT+CIFSR */
  uint8_t rxBuf[32];
  uint8_t res;
  const unsigned char *p;

  res = ESP_SendATCommand("AT+CIFSR\r\n", rxBuf, sizeof(rxBuf), NULL, ESP_DEFAULT_TIMEOUT_MS, NULL);
  if (res!=ERR_OK) {
    if (UTIL1_strtailcmp(rxBuf, "\r\n")) {
      res = ERR_OK;
    }
  }
  if (res==ERR_OK) {
    if (UTIL1_strncmp(rxBuf, "AT+CIFSR\r\r\n", sizeof("AT+CIFSR\r\r\n")-1)==0) { /* check for beginning of response */
      UTIL1_strCutTail(rxBuf, "\r\n"); /* cut tailing response */
      p = rxBuf+sizeof("AT+CIFSR\r\r\n")-1; /* skip beginning */
      SkipNewLines(&p);
      UTIL1_strcpy(ipBuf, ipBufSize, p); /* copy IP information string */
    } else {
      res = ERR_FAILED;
    }
  }
  if (res!=ERR_OK) {
    UTIL1_strcpy(ipBuf, ipBufSize, "ERROR");
  }
  return res;
}

uint8_t ESP_GetModeString(uint8_t *buf, size_t bufSize) {
  /* AT+CWMODE? */
  uint8_t rxBuf[32];
  uint8_t res;
  const unsigned char *p;

  res = ESP_SendATCommand("AT+CWMODE?\r\n", rxBuf, sizeof(rxBuf), "\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, NULL);
  if (res==ERR_OK) {
    if (UTIL1_strncmp(rxBuf, "AT+CWMODE?\r\r\n+CWMODE:", sizeof("AT+CWMODE?\r\r\n+CWMODE:")-1)==0) { /* check for beginning of response */
      UTIL1_strCutTail(rxBuf, "\r\n\r\nOK\r\n"); /* cut tailing response */
      p = rxBuf+sizeof("AT+CWMODE?\r\r\n+CWMODE:")-1; /* skip beginning */
      UTIL1_strcpy(buf, bufSize, p); /* copy information string */
    } else {
      res = ERR_FAILED;
    }
  }
  if (res!=ERR_OK) {
    UTIL1_strcpy(buf, bufSize, "ERROR");
  }
  return res;
}

uint8_t ESP_GetCIPMUXString(uint8_t *cipmuxBuf, size_t cipmuxBufSize) {
  /* AT+CIPMUX? */
  uint8_t rxBuf[32];
  uint8_t res;
  const unsigned char *p;

  res = ESP_SendATCommand("AT+CIPMUX?\r\n", rxBuf, sizeof(rxBuf), "\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, NULL);
  if (res==ERR_OK) {
    if (UTIL1_strncmp(rxBuf, "AT+CIPMUX?\r\r\n+CIPMUX:", sizeof("AT+CIPMUX?\r\r\n+CIPMUX:")-1)==0) { /* check for beginning of response */
      UTIL1_strCutTail(rxBuf, "\r\n\r\nOK\r\n"); /* cut tailing response */
      p = rxBuf+sizeof("AT+CIPMUX?\r\r\n+CIPMUX:")-1; /* skip beginning */
      UTIL1_strcpy(cipmuxBuf, cipmuxBufSize, p); /* copy IP information string */
    } else {
      res = ERR_FAILED;
    }
  }
  if (res!=ERR_OK) {
    UTIL1_strcpy(cipmuxBuf, cipmuxBufSize, "ERROR");
  }
  return res;
}

uint8_t ESP_GetConnectedAPString(uint8_t *apBuf, size_t apBufSize) {
  /* AT+CWJAP? */
  uint8_t rxBuf[48];
  uint8_t res;
  const unsigned char *p;

res = ESP_SendATCommand("AT+CWJAP?\r\n", rxBuf, sizeof(rxBuf), "\r\n\r\nOK\r\n", ESP_DEFAULT_TIMEOUT_MS, NULL); 373. if(res==ERR_OK) { 374. if(UTIL1_strncmp(rxBuf, "AT+CWJAP?\r\r\n+CWJAP:\"", sizeof("AT+CWJAP?\r\r\n+CWJAP:\"")-1)==0) { /* check for beginning of response */ 375. UTIL1_strCutTail(rxBuf, "\"\r\n\r\nOK\r\n"); /* cut tailing response */ 376. p = rxBuf+sizeof("AT+CWJAP?\r\r\n+CWJAP:\"")-1; /* skip beginning */ 377. UTIL1_strcpy(apBuf, apBufSize, p); /* copy IP information string */ 378. } else{ 379. res = ERR_FAILED; 380. } 381. } 382. if(res!=ERR_OK) { 383. UTIL1_strcpy(apBuf, apBufSize, "ERROR"); 384. } 385. returnres; 386. 387.} 388. 389.staticuint8_t JoinAccessPoint(constuint8_t *ssid, constuint8_t *pwd, CLS1_ConstStdIOType *io) { 390. /* AT+CWJAP="","" */ 391. uint8_t txBuf[48]; 392. uint8_t rxBuf[64]; 393. uint8_t expected[48]; 394. 395. UTIL1_strcpy(txBuf, sizeof(txBuf), "AT+CWJAP=\""); 396. UTIL1_strcat(txBuf, sizeof(txBuf), ssid); 397. UTIL1_strcat(txBuf, sizeof(txBuf), "\",\""); 398. UTIL1_strcat(txBuf, sizeof(txBuf), pwd); 399. UTIL1_strcat(txBuf, sizeof(txBuf), "\"\r\n"); 400. 401. UTIL1_strcpy(expected, sizeof(expected), "AT+CWJAP=\""); 402. UTIL1_strcat(expected, sizeof(expected), ssid); 403. UTIL1_strcat(expected, sizeof(expected), "\",\""); 404. UTIL1_strcat(expected, sizeof(expected), pwd); 405. UTIL1_strcat(expected, sizeof(expected), "\"\r\r\n\r\nOK\r\n"); 406. 407. returnESP_SendATCommand(txBuf, rxBuf, sizeof(rxBuf), expected, ESP_DEFAULT_TIMEOUT_MS, io); 408.} 409. 410.uint8_t ESP_JoinAP(constuint8_t *ssid, constuint8_t *pwd, intnofRetries, CLS1_ConstStdIOType *io) { 411. uint8_t buf[32]; 412. uint8_t res; 413. 414. do{ 415. res = JoinAccessPoint(ssid, pwd, io); 416. if(res==ERR_OK) { 417. break; 418. } 419. WAIT1_WaitOSms(1000); 420. nofRetries--; 421. } while(nofRetries>0); 422. returnres; 423.} 424. 425.staticuint8_t ReadIntoIPDBuffer(uint8_t *buf, size_t bufSize, uint8_t *p, uint16_t msgSize, uint16_t msTimeout, constCLS1_StdIOType *io) { 426. uint8_t ch; 427. size_t nofInBuf; 428. inttimeout; 429. 430. nofInBuf = p-buf; 431. bufSize -= nofInBuf; /* take into account what we already have in buffer */ 432. timeout = msTimeout; 433. while(msgSize>0&& bufSize>0) { 434. if(AS2_GetCharsInRxBuf()>0) { 435. (void)AS2_RecvChar(&ch); 436. *p = ch; 437. if(io!=NULL) { /* copy on console */ 438. io->stdOut(ch); 439. } 440. p++; 441. *p = '\0'; /* terminate */ 442. nofInBuf++; msgSize--; bufSize--; 443. } else{ 444. /* check in case we recveive less characters than expected, happens for POST? */ 445. if(nofInBuf>6&& UTIL1_strncmp(&p[-6], "\r\nOK\r\n", sizeof("\r\nOK\r\n")-1)==0) { 446. break; 447. } else{ 448. timeout -= 10; 449. WAIT1_WaitOSms(10); 450. if(timeout<0) { 451. returnERR_BUSY; 452. } 453. } 454. } 455. } 456. returnERR_OK; 457.} 458. 459.uint8_t ESP_GetIPD(uint8_t *msgBuf, size_t msgBufSize, uint8_t *ch_id, uint16_t *size, bool *isGet, uint16_t timeoutMs, constCLS1_StdIOType *io) { 460. /* scan e.g. for 461. * +IPD,0,404:POST / HTTP/1.1 462. * and return ch_id (0), size (404) 463. */ 464. uint8_t res = ERR_OK; 465. constuint8_t *p; 466. bool isIPD = FALSE; 467. uint8_t cmd[24], rxBuf[48]; 468. uint16_t ipdSize; 469. 470. *ch_id = 0; *size = 0; *isGet = FALSE; /* init */ 471. for(;;) { /* breaks */ 472. res = ESP_ReadCharsUntil(msgBuf, msgBufSize, '\n', timeoutMs); 473. if(res!=ERR_OK) { 474. break; /* timeout */ 475. } 476. if(res==ERR_OK) { /* line read */ 477. if(io!=NULL) { 478. CLS1_SendStr(msgBuf, io->stdOut); /* copy on console */ 479. } 480. 
      isIPD = UTIL1_strncmp(msgBuf, "+IPD,", sizeof("+IPD,")-1)==0;
      if (isIPD) { /* start of IPD message */
        p = msgBuf+sizeof("+IPD,")-1;
        if (UTIL1_ScanDecimal8uNumber(&p, ch_id)!=ERR_OK) {
          if (io!=NULL) {
            CLS1_SendStr("ERR: wrong channel?\r\n", io->stdErr); /* error on console */
          }
          res = ERR_FAILED;
          break;
        }
        if (*p!=',') {
          res = ERR_FAILED;
          break;
        }
        p++; /* skip comma */
        if (UTIL1_ScanDecimal16uNumber(&p, size)!=ERR_OK) {
          if (io!=NULL) {
            CLS1_SendStr("ERR: wrong size?\r\n", io->stdErr); /* error on console */
          }
          res = ERR_FAILED;
          break;
        }
        if (*p!=':') {
          res = ERR_FAILED;
          break;
        }
        ipdSize = p-msgBuf; /* length of "+IPD,<ch_id>,<size>" string */
        p++; /* skip ':' */
        if (UTIL1_strncmp(p, "GET", sizeof("GET")-1)==0) {
          *isGet = TRUE;
        } else if (UTIL1_strncmp(p, "POST", sizeof("POST")-1)==0) {
          *isGet = FALSE;
        } else {
          res = ERR_FAILED;
        }
        while(*p!='\0') {
          p++; /* skip to the end */
        }
        /* read the rest of the message */
        res = ReadIntoIPDBuffer(msgBuf, msgBufSize, (uint8_t*)p, (*size)-ipdSize, ESP_DEFAULT_TIMEOUT_MS, io);
        break;
      }
    }
  }
  return res;
}

uint8_t ESP_StartWebServer(const CLS1_StdIOType *io) {
  uint8_t buf[32];
  uint8_t res;

  res = ESP_SetNumberOfConnections(1, io, ESP_DEFAULT_TIMEOUT_MS);
  if (res!=ERR_OK) {
    CLS1_SendStr("ERR: failed to set multiple connections.\r\n", io->stdErr);
    return res;
  }
  res = ESP_SetServer(TRUE, 80, io, ESP_DEFAULT_TIMEOUT_MS);
  if (res!=ERR_OK) {
    CLS1_SendStr("ERR: failed to set server.\r\n", io->stdErr);
    return res;
  }
  CLS1_SendStr("INFO: Web Server started, waiting for connection on ", io->stdOut);
  if (ESP_GetIPAddrString(buf, sizeof(buf))==ERR_OK) {
    CLS1_SendStr(buf, io->stdOut);
    CLS1_SendStr(":80", io->stdOut);
  } else {
    CLS1_SendStr("(ERROR!)", io->stdOut);
  }
  CLS1_SendStr("\r\n", io->stdOut);

  return ERR_OK;
}

uint8_t ESP_SendStr(const uint8_t *str, CLS1_ConstStdIOType *io) {
  uint8_t buf[32];
  uint8_t rxBuf[48];
  uint8_t res;
  uint16_t timeoutMs;
  #define RX_TIMEOUT_MS 3000
  AS2_TComData ch;

  UTIL1_strcpy(buf, sizeof(buf), str);
  UTIL1_strcat(buf, sizeof(buf), "\r\n");
  res = ESP_SendATCommand(buf, rxBuf, sizeof(rxBuf), NULL, ESP_DEFAULT_TIMEOUT_MS, io);
  timeoutMs = 0;
  while(timeoutMs<RX_TIMEOUT_MS) {
    WAIT1_WaitOSms(100);
    timeoutMs += 100;
    while(AS2_GetCharsInRxBuf()>0) {
      (void)AS2_RecvChar(&ch);
      CLS1_SendChar(ch);
    }
  }
  return ERR_OK;
}

static uint8_t ESP_PrintHelp(const CLS1_StdIOType *io) {
  CLS1_SendHelpStr("ESP", "ESP8266 commands\r\n", io->stdOut);
  CLS1_SendHelpStr("  help|status", "Print help or status information\r\n", io->stdOut);
  CLS1_SendHelpStr("  send <string>", "Sends a string to the module\r\n", io->stdOut);
  CLS1_SendHelpStr("  test", "Sends a test AT command\r\n", io->stdOut);
  CLS1_SendHelpStr("  restart", "Restart module\r\n", io->stdOut);
  CLS1_SendHelpStr("  listAP", "List available Access Points\r\n", io->stdOut);
  CLS1_SendHelpStr("  connectAP \"ssid\",\"pwd\"", "Connect to an Access Point\r\n", io->stdOut);
  CLS1_SendHelpStr("  server (start|stop)", "Start or stop web server\r\n", io->stdOut);
  return ERR_OK;
}

static uint8_t ESP_PrintStatus(const CLS1_StdIOType *io) {
  uint8_t buf[48];

  CLS1_SendStatusStr("ESP8266", "\r\n", io->stdOut);
  CLS1_SendStatusStr("  Webserver", ESP_WebServerIsOn?"ON\r\n":"OFF\r\n", io->stdOut);

  if (ESP_GetFirmwareVersionString(buf, sizeof(buf)) != ERR_OK) {
    UTIL1_strcpy(buf, sizeof(buf), "FAILED\r\n");
  } else {
    UTIL1_strcat(buf, sizeof(buf), "\r\n");
  }
  CLS1_SendStatusStr("  AT+GMR", buf, io->stdOut);

  if (ESP_GetModeString(buf, sizeof(buf)) != ERR_OK) {
    UTIL1_strcpy(buf, sizeof(buf), "FAILED\r\n");
  } else {
    if (UTIL1_strcmp(buf, "1")==0) {
      UTIL1_strcat(buf, sizeof(buf), " (device)");
    } else if (UTIL1_strcmp(buf, "2")==0) {
      UTIL1_strcat(buf, sizeof(buf), " (AP)");
    } else if (UTIL1_strcmp(buf, "3")==0) {
      UTIL1_strcat(buf, sizeof(buf), " (device+AP)");
    } else {
      UTIL1_strcat(buf, sizeof(buf), " (ERROR)");
    }
    UTIL1_strcat(buf, sizeof(buf), "\r\n");
  }
  CLS1_SendStatusStr("  AT+CWMODE?", buf, io->stdOut);

  if (ESP_GetIPAddrString(buf, sizeof(buf)) != ERR_OK) {
    UTIL1_strcpy(buf, sizeof(buf), "FAILED\r\n");
  } else {
    UTIL1_strcat(buf, sizeof(buf), "\r\n");
  }
  CLS1_SendStatusStr("  AT+CIFSR", buf, io->stdOut);

  if (ESP_GetConnectedAPString(buf, sizeof(buf)) != ERR_OK) {
    UTIL1_strcpy(buf, sizeof(buf), "FAILED\r\n");
  } else {
    UTIL1_strcat(buf, sizeof(buf), "\r\n");
  }
  CLS1_SendStatusStr("  AT+CWJAP?", buf, io->stdOut);

  if (ESP_GetCIPMUXString(buf, sizeof(buf)) != ERR_OK) {
    UTIL1_strcpy(buf, sizeof(buf), "FAILED\r\n");
  } else {
    if (UTIL1_strcmp(buf, "0")==0) {
      UTIL1_strcat(buf, sizeof(buf), " (single connection)");
    } else if (UTIL1_strcmp(buf, "1")==0) {
      UTIL1_strcat(buf, sizeof(buf), " (multiple connections)");
    } else {
      UTIL1_strcat(buf, sizeof(buf), " (ERROR)");
    }
    UTIL1_strcat(buf, sizeof(buf), "\r\n");
  }
  CLS1_SendStatusStr("  CIPMUX", buf, io->stdOut);
  return ERR_OK;
}

uint8_t ESP_ParseCommand(const unsigned char *cmd, bool *handled, const CLS1_StdIOType *io) {
  uint32_t val;
  uint8_t res = ERR_OK;
  const unsigned char *p;
  uint8_t pwd[24], ssid[24];

  if (UTIL1_strcmp((char*)cmd, CLS1_CMD_HELP)==0 || UTIL1_strcmp((char*)cmd, "ESP help")==0) {
    *handled = TRUE;
    res = ESP_PrintHelp(io);
  } else if (UTIL1_strcmp((char*)cmd, CLS1_CMD_STATUS)==0 || UTIL1_strcmp((char*)cmd, "ESP status")==0) {
    *handled = TRUE;
    res = ESP_PrintStatus(io);
  } else if (UTIL1_strncmp((char*)cmd, "ESP send ", sizeof("ESP send ")-1)==0) {
    *handled = TRUE;
    p = cmd+sizeof("ESP send ")-1;
    (void)ESP_SendStr(p, io);
  } else if (UTIL1_strcmp((char*)cmd, "ESP test")==0) {
    *handled = TRUE;
    if (ESP_TestAT()!=ERR_OK) {
      CLS1_SendStr("TEST failed!\r\n", io->stdErr);
      res = ERR_FAILED;
    } else {
      CLS1_SendStr("TEST ok!\r\n", io->stdOut);
    }
  } else if (UTIL1_strcmp((char*)cmd, "ESP listAP")==0) {
    *handled = TRUE;
    (void)ESP_SendStr("AT+CWLAP", io);
    /* AT+CWLAP
       response:
       +CWLAP:(<ecn>,<ssid>,<rssi>[,<mode>])
       OK, or on failure: ERROR
       <ecn>  0: OPEN, 1: WEP, 2: WPA_PSK, 3: WPA2_PSK, 4: WPA_WPA2_PSK
       <ssid> string parameter, the access point name
       <rssi> signal strength
       <mode> 0: manually connect, 1: automatic connection
     */
    return ERR_OK;
  } else if (UTIL1_strncmp((char*)cmd, "ESP connectAP ", sizeof("ESP connectAP ")-1)==0) {
    *handled = TRUE;
    p = cmd+sizeof("ESP connectAP ")-1;
    ssid[0] = '\0'; pwd[0] = '\0';
    res = UTIL1_ScanDoubleQuotedString(&p, ssid, sizeof(ssid));
    if (res==ERR_OK && *p!='\0' && *p==',') {
      p++; /* skip comma */
      res = UTIL1_ScanDoubleQuotedString(&p, pwd, sizeof(pwd));
    } else {
      CLS1_SendStr("Comma expected between strings!\r\n", io->stdErr);
      res = ERR_FAILED;
    }
    if (res==ERR_OK) {
      res = ESP_JoinAP(ssid, pwd, 3, io);
    } else {
      CLS1_SendStr("Wrong command format!\r\n", io->stdErr);
      res = ERR_FAILED;
    }
  } else if (UTIL1_strcmp((char*)cmd, "ESP server start")==0) {
    *handled = TRUE;
    res = ESP_StartWebServer(io);
    ESP_WebServerIsOn = res==ERR_OK;
  } else if (UTIL1_strcmp((char*)cmd, "ESP server stop")==0) {
    *handled = TRUE;
    ESP_WebServerIsOn = FALSE;
  } else if (UTIL1_strcmp((char*)cmd, "ESP restart")==0) {
    *handled = TRUE;
    ESP_Restart(io, 2000);
  }
  return res;
}

void ESP_Deinit(void) {
  /* nothing to do */
}

void ESP_Init(void) {
  AS2_ClearRxBuf(); /* clear buffer */
}

The application interface in Application.h is rather short :-):

/*
 * Application.h
 *
 * Author: Erich Styger
 */

#ifndef APPLICATION_H_
#define APPLICATION_H_

/*!
 * \brief Application main routine
 */
void APP_Run(void);

#endif /* APPLICATION_H_ */

The main loop of the application is in Application.c, along with the application-specific web server code. As the SendWebPage() function contains the HTML code, I'm posting it here separately:

static uint8_t SendWebPage(uint8_t ch_id, bool ledIsOn, uint8_t temperature, const CLS1_StdIOType *io) {
  static uint8_t http[1024];
  uint8_t cmd[24], rxBuf[48], expected[48];
  uint8_t buf[16];
  uint8_t res = ERR_OK;

  /* construct web page content */
  UTIL1_strcpy(http, sizeof(http), (uint8_t*)"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\nPragma: no-cache\r\n\r\n");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n\r\n");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"Web Server using ESP8266\r\n");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"Temp: OC");
  if (ledIsOn) {
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED off");
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED on");
  } else {
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED off");
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED on");
  }
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n\r\n");

  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPSEND="); /* parameters are <ch_id>,<length> */
  UTIL1_strcatNum8u(cmd, sizeof(cmd), ch_id);
  UTIL1_chcat(cmd, sizeof(cmd), ',');
  UTIL1_strcatNum16u(cmd, sizeof(cmd), UTIL1_strlen(http));
  UTIL1_strcpy(expected, sizeof(expected), cmd); /* we expect the echo of our command */
  UTIL1_strcat(expected, sizeof(expected), "\r\r\n> "); /* expect "> " */
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, rxBuf, sizeof(rxBuf), expected, ESP_DEFAULT_TIMEOUT_MS, io);
  if (res!=ERR_OK) {
    if (io!=NULL) {
      CLS1_SendStr("INFO: TIMEOUT, closing connection!\r\n", io->stdOut);
    }
  } else {
    if (io!=NULL) {
      CLS1_SendStr("INFO: Sending http page...\r\n", io->stdOut);
    }
    UTIL1_strcat(http, sizeof(http), "\r\n\r\n"); /* need to add this to end the command! */
    res = ESP_SendATCommand(http, NULL, 0, NULL, ESP_DEFAULT_TIMEOUT_MS, io);
    if (res!=ERR_OK) {
CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */ 49. } else{ 50. for(;;) { /* breaks */ 51. res = ESP_ReadCharsUntil(buf, sizeof(buf), '\n', 1000); 52. if(res==ERR_OK) { /* line read */ 53. if(io!=NULL) { 54. CLS1_SendStr(buf, io->stdOut); /* copy on console */ 55. } 56. } 57. if(UTIL1_strncmp(buf, "SEND OK\r\n", sizeof("SEND OK\r\n")-1)==0) { /* ok from module */ 58. break; 59. } 60. } 61. } 62. } 63. returnres; 64.} The rest of Application.c is rather simple: view source print? 01./* 02. * Application.c 03. * 04. * Author: Erich Styger 05. */ 06.#include "PE_Types.h" 07.#include "CLS1.h" 08.#include "WAIT1.h" 09.#include "Shell.h" 10.#include "UTIL1.h" 11.#include "ESP8266.h" 12.#include "LEDR.h" 13.#include "LEDG.h" 14.#include "AS2.h" 15. 16.staticuint8_t APP_EspMsgBuf[512]; /* buffer for messages from ESP8266 */ 17. 18.staticvoidWebProcess(void) { 19. uint8_t res=ERR_OK; 20. bool isGet; 21. uint8_t ch_id=0; 22. uint16_t size=0; 23. constuint8_t *p; 24. constCLS1_StdIOType *io; 25. 26. if(ESP_IsServerOn()) { 27. io = CLS1_GetStdio(); 28. res = ESP_GetIPD(APP_EspMsgBuf, sizeof(APP_EspMsgBuf), &ch_id, &size, &isGet, 1000, io); 29. if(res==ERR_OK) { 30. if(isGet) { /* GET: put web page */ 31. res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 21/*dummy temperature*/, io); 32. if(res!=ERR_OK && io!=NULL) { 33. CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */ 34. } 35. } else{ /* POST: received info */ 36. intpos; 37. 38. pos = UTIL1_strFind(APP_EspMsgBuf, "radio="); 39. if(pos!=-1) { /* found */ 40. if(UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=0", sizeof("radio=0")-1)) { 41. LEDR_On(); 42. } elseif(UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=1", sizeof("radio=1")-1)) { 43. LEDR_Off(); 44. } 45. } 46. res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 20/*dummy temperature*/, io); 47. if(res!=ERR_OK && io!=NULL) { 48. CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */ 49. } 50. } 51. CLS1_SendStr("INFO: Closing connection...\r\n", io->stdOut); 52. res = ESP_CloseConnection(ch_id, io, ESP_DEFAULT_TIMEOUT_MS); 53. } 54. } else{ /* copy messages we receive to console */ 55. while(AS2_GetCharsInRxBuf()>0) { 56. uint8_t ch; 57. 58. (void)AS2_RecvChar(&ch); 59. CLS1_SendChar(ch); 60. } 61. } 62.} 63. 64.voidAPP_Run(void) { 65. CLS1_ConstStdIOType *io; 66. 67. WAIT1_Waitms(1000); /* wait after power-on */ 68. ESP_Init(); 69. SHELL_Init(); 70. io = CLS1_GetStdio(); 71. CLS1_SendStr("\r\n------------------------------------------\r\n", io->stdOut); 72. CLS1_SendStr("ESP8266 with FRDM-KL25Z\r\n", io->stdOut); 73. CLS1_SendStr("------------------------------------------\r\n", io->stdOut); 74. CLS1_PrintPrompt(io); 75. for(;;) { 76. WebProcess(); 77. SHELL_Parse(); 78. WAIT1_Waitms(10); 79. LEDG_Neg(); 80. } 81.} In main.c I call the application part: view source print? 01./* ################################################################### 02.** Filename : main.c 03.** Project : FRDM-KL25Z_ESP8266 04.** Processor : MKL25Z128VLK4 05.** Version : Driver 01.01 06.** Compiler : GNU C Compiler 07.** Date/Time : 2014-10-15, 14:28, # CodeGen: 0 08.** Abstract : 09.** Main module. 10.** This module contains user's application code. 11.** Settings : 12.** Contents : 13.** No public methods 14.** 15.** ###################################################################*/ 16./*! 17.** @file main.c 18.** @version 01.01 19.** @brief 20.** Main module. 21.** This module contains user's application code. 22.*/ 23./*! 
** @addtogroup main_module main module documentation
** @{
*/
/* MODULE main */

/* Including needed modules to compile this module/procedure */
#include "Cpu.h"
#include "Events.h"
#include "WAIT1.h"
#include "UTIL1.h"
#include "AS1.h"
#include "ASerialLdd1.h"
#include "CLS1.h"
#include "CS1.h"
#include "AS2.h"
#include "ASerialLdd2.h"
#include "LEDR.h"
#include "LEDpin1.h"
#include "BitIoLdd1.h"
#include "LEDG.h"
#include "LEDpin2.h"
#include "BitIoLdd2.h"
#include "LEDB.h"
#include "LEDpin3.h"
#include "BitIoLdd3.h"
/* Including shared modules, which are used for whole project */
#include "PE_Types.h"
#include "PE_Error.h"
#include "PE_Const.h"
#include "IO_Map.h"
/* User includes (#include below this line is not maintained by Processor Expert) */
#include "Application.h"

/*lint -save -e970 Disable MISRA rule (6.3) checking. */
int main(void)
/*lint -restore Enable MISRA rule (6.3) checking. */
{
  /* Write your local variable definition here */

  /*** Processor Expert internal initialization. DON'T REMOVE THIS CODE!!! ***/
  PE_low_level_init();
  /*** End of Processor Expert internal initialization. ***/

  APP_Run();

  /*** Don't write any code pass this line, or it will be deleted during code generation. ***/
  /*** RTOS startup code. Macro PEX_RTOS_START is defined by the RTOS component. DON'T MODIFY THIS CODE!!! ***/
  #ifdef PEX_RTOS_START
    PEX_RTOS_START(); /* Startup of the selected RTOS. Macro is defined by the RTOS component. */
  #endif
  /*** End of RTOS startup code. ***/
  /*** Processor Expert end of main routine. DON'T MODIFY THIS CODE!!! ***/
  for(;;){}
  /*** Processor Expert end of main routine. DON'T WRITE CODE BELOW!!! ***/
} /*** End of main routine. DO NOT MODIFY THIS TEXT!!! ***/

/* END main */
/*!
** @}
*/
/*
** ###################################################################
**
** This file was created by Processor Expert 10.4 [05.10]
** for the Freescale Kinetis series of microcontrollers.
**
** ###################################################################
*/

Processor Expert Components

In addition, I'm using several Processor Expert components which are available from SourceForge:

Processor Expert Components

Wait: busy-waiting component, e.g. to wait for a few milliseconds.
Utility: string manipulation and utility functions.
AsynchroSerial (AS1): serial interface to the host for the shell command line interface.
Shell: command line shell implementation.
CriticalSection: for creating critical sections.
AsynchroSerial (AS2): serial interface to the ESP8266 module.
LEDR, LEDG and LEDB: red, green and blue LEDs on the FRDM-KL25Z board.

AS1 is configured as a UART connection (over OpenSDA) for the shell:

Shell UART Settings

There are no special settings for the Shell component:

Shell Settings

Important are the correct settings for the ESP8266 UART: 115200 baud, and the correct pins on the board connected to the Rx and Tx lines of the ESP8266. I'm using rather large input and output buffers:

UART connection to ESP8266

The LED components are configured for the pins used on the board: PTB18 for the red, PTB19 for the green and PTD1 for the blue LED.

Red LED for FRDM-KL25Z

Sending Commands

The shell implements the command ESP send, which I can use to send a string or command to the module:

ESP send

Note that for every command a trailing "\r\n" will be sent.
So instead of using the programmatic way, the shell can be used to 'manually' drive a web server, at least for the most part. I'm using the command line commands below to explore how the ESP8266 module works.

Using the Shell

With the project (link to GitHub below), I have a serial connection and a command line shell interface to the module. Compile the project, download it to the FRDM-KL25Z board and use a terminal program (I use Termite) to talk with the module. At power-up, the program shows a greeting message:

Greeting Message

With 'help' I get a list of the available commands:

Help Command

The 'status' command gives a system status:

Status Command Output

With this, I'm ready to send commands to the module :-).

Connection Test

To test the connection I send a simple 'AT' command:

ESP send AT

AT Command Output

and the module should respond with

AT\r\r\n\r\nOK\r\n

Module Restart

Sometimes the module gets stuck. What helps is a power-on reset of the module. Another way is to send the AT+RST command to reset the module. The module will boot up and print a 'ready' message:

Reset of the ESP8266

Access Point or Device

First I need to configure whether the ESP is a device or an access point. For this, the CWMODE command is used:

AT+CWMODE=<mode>

where <mode> is one of:

1: 'Sta', the ESP8266 is a device and connects to an existing access point
2: 'AP', the ESP8266 is an access point, so other devices can connect to it
3: 'both'. Not really clear to me, but it seems that in this mode the device runs in a hybrid mode?

To have the ESP act as a device so it can connect to an existing access point, I use

AT+CWMODE=1

and the module should answer with

AT+CWMODE=1\r\r\n\r\nOK\r\n

or with a 'no change':

AT+CWMODE=1\r\r\nno change\r\n

With AT+CWMODE? I can ask for the current mode:

Retrieving Current Mode

List of Access Points

With AT+CWLAP I get a list of access points. It reports a list like this:

AT+CWLAP
+CWLAP:(0,"",0)
+CWLAP:(4,"APforESP",-39)
+CWLAP:(4,"iza-97497",-94)
OK

:!: I experienced problems with that command in an environment with lots of visible access points; in this case it seems the module hangs up. Try it first in a place with only a few access points.

For this tutorial I have configured an access point with the SSID "APforESP", which shows up in my list. The list is formatted like this:

+CWLAP:(<ecn>,<ssid>,<rssi>[,<mode>])

with the following encoding:

<ecn>: 0: OPEN, 1: WEP, 2: WPA_PSK, 3: WPA2_PSK, 4: WPA_WPA2_PSK
<ssid>: string parameter, the access point name
<rssi>: signal strength
<mode>: 0: manually connect, 1: automatic connection

Connecting to Access Point

To connect to an access point I use the command

AT+CWJAP="<ssid>","<pwd>"

Of course, replace <ssid> and <pwd> with your setup. The module should report back an "OK" message, and you are connected :-).

:!: The module stores the SSID and password. After power-up, the module will automatically reconnect to the access point.

IP Address

Once connected, I can check the IP address I have been assigned with

AT+CIFSR

which should give something like

AT+CIFSR
192.168.0.111

So now I know my module's IP address :-). With this I can ping my module:

Pinging my ESP Module
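The same join-and-check sequence can also be done programmatically with the driver functions from ESP8266.h shown earlier. Here is a minimal sketch of mine (the SSID "APforESP" and the password "mypassword" are placeholders for your own setup):

#include "ESP8266.h"
#include "CLS1.h"

/* Sketch: join an access point with the driver API instead of typing the
   AT commands manually; the credentials below are placeholders. */
static void ConnectToAP(void) {
  uint8_t ip[32];
  CLS1_ConstStdIOType *io = CLS1_GetStdio();

  (void)ESP_SelectMode(1); /* AT+CWMODE=1: act as a station/device */
  if (ESP_JoinAP((const uint8_t*)"APforESP", (const uint8_t*)"mypassword", 3, io)==ERR_OK) { /* AT+CWJAP, up to 3 retries */
    if (ESP_GetIPAddrString(ip, sizeof(ip))==ERR_OK) { /* AT+CIFSR */
      CLS1_SendStr("Connected, IP: ", io->stdOut);
      CLS1_SendStr(ip, io->stdOut);
      CLS1_SendStr("\r\n", io->stdOut);
    }
  } else {
    CLS1_SendStr("Failed to join the access point!\r\n", io->stdErr);
  }
}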
Building a Web Server

Now that we have a connection, it is time to use it to run a web server :-). What I want is to serve a web page which I can use to turn the LEDs on the board on or off.

Number of Connections: CIPMUX

Before I start the server I need to make sure it accepts multiple connections. For this I use the following command:

AT+CIPMUX=1

The parameter is either 0 (single connection) or 1 (multiple connections). For a web server I need to set it up for multiple connections. The ESP module should respond with

AT+CIPMUX=1\r\n\r\nOK\r\n

:info: To make it clear what goes over the wire, I have included the '\r' and '\n' in the responses.

Starting the Server: CIPSERVER

I start the server with

AT+CIPSERVER=1,80

The first parameter is either 0 (close connection) or 1 (open connection), followed by the port. I use the standard HTTP port (80) here. The module should answer with:

AT+CIPSERVER=1,80\r\r\n\r\nOK\r\n

or, if the server is already running, with a 'no change':

AT+CIPSERVER=1,80\r\r\nno change\r\n

Now I have a connection open on my IP address (see above: 192.168.0.111), listening on the port I have specified (80).

Connecting to the Server with a Browser

I enter the IP address in a web browser:

http://192.168.0.111:80

For clarity I have specified the standard HTTP port (80); if you are using a different port, make sure you specify it in the address line.

Connection from FireFox

The browser now sends a GET request to the module, and I can see this from the message printed out by the module:

First response from Module

The 'Link' indicates that it has established a link. IPD (IP Data?) is followed by the channel number (this is the one we will have to respond to), plus the size of the following data (296 bytes in this case). As I'm not responding (yet), there will be a timeout (after about 1 minute or so), with an 'Unlink' message from the module:

Link

+IPD,0,296:GET / HTTP/1.1
Host: 192.168.0.111
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive

OK
Unlink

Unlink message

Sending Data to the Server: CIPSEND

Now I need to respond and send data to the browser. For this I need to know the channel number, which is provided in the IPD message above, right after the comma: +IPD,0. To send data, I use the command

AT+CIPSEND=<channel>,<length>

So I connect again with the browser, and I send 5 bytes ("hello") with:

AT+CIPSEND=0,5

The ESP8266 responds with

AT+CIPSEND=0,5\r\n>

Notice the '>' at the end: this is my signal to send the actual data ("hello" in my case):

hello

The ESP8266 now responds with a SEND OK:

Data Sent

However, the browser is still busy and spins. At first I thought I had done something wrong, but after the browser ran into a timeout (after about one minute), my data was there! :-)

Hello in Browser

Closing the Connection: CIPCLOSE

So things *are* working :-). The trick is that I have to close the connection after I have sent the data. There is a CIPCLOSE command for this:

AT+CIPCLOSE=<channel>

which closes a channel. So I close the connection with

AT+CIPCLOSE=0

and now the browser shows the content right away :-).
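The manual CIPSEND/CIPCLOSE session above can be condensed into a small routine. This is a sketch of my own along the lines of the SendWebPage() function shown below, answering a request on a given channel with the 5-byte payload "hello":

#include "ESP8266.h"
#include "UTIL1.h"

/* Sketch: respond on a channel with "hello" and close the connection. */
static uint8_t SendHello(uint8_t ch_id, const CLS1_StdIOType *io) {
  uint8_t cmd[24], rxBuf[32], expected[32];
  uint8_t res;

  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPSEND=");        /* AT+CIPSEND=<channel>,<length> */
  UTIL1_strcatNum8u(cmd, sizeof(cmd), ch_id);
  UTIL1_strcat(cmd, sizeof(cmd), ",5");                 /* "hello" is 5 bytes */
  UTIL1_strcpy(expected, sizeof(expected), cmd);        /* module echoes the command... */
  UTIL1_strcat(expected, sizeof(expected), "\r\r\n> "); /* ...followed by the '>' prompt */
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, rxBuf, sizeof(rxBuf), expected, ESP_DEFAULT_TIMEOUT_MS, io);
  if (res==ERR_OK) {
    res = ESP_SendATCommand("hello", NULL, 0, NULL, ESP_DEFAULT_TIMEOUT_MS, io); /* the payload itself */
  }
  if (res==ERR_OK) {
    res = ESP_CloseConnection(ch_id, io, ESP_DEFAULT_TIMEOUT_MS); /* AT+CIPCLOSE=<channel> */
  }
  return res;
}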
CLS1_SendStr("------------------------------------------\r\n", io->stdOut); 11. CLS1_PrintPrompt(io); 12. for(;;) { 13. WebProcess(); 14. SHELL_Parse(); 15. WAIT1_Waitms(10); 16. LEDG_Neg(); 17. } 18.} With ESP server start I start the web server: Starting the Web Server It sends the AT+CIPMUX command followed by the AT+CIPSERVER to start the server, and then listens to the port. Reading and responding messages is done in WebProcess(): view source print? 01.staticvoidWebProcess(void) { 02. uint8_t res=ERR_OK; 03. bool isGet; 04. uint8_t ch_id=0; 05. uint16_t size=0; 06. constuint8_t *p; 07. constCLS1_StdIOType *io; 08. 09. if(ESP_IsServerOn()) { 10. io = CLS1_GetStdio(); 11. res = ESP_GetIPD(APP_EspMsgBuf, sizeof(APP_EspMsgBuf), &ch_id, &size, &isGet, 1000, io); 12. if(res==ERR_OK) { 13. if(isGet) { /* GET: put web page */ 14. res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 21/*dummy temperature*/, io); 15. if(res!=ERR_OK && io!=NULL) { 16. CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */ 17. } 18. } else{ /* POST: received info */ 19. intpos; 20. 21. pos = UTIL1_strFind(APP_EspMsgBuf, "radio="); 22. if(pos!=-1) { /* found */ 23. if(UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=0", sizeof("radio=0")-1)) { 24. LEDR_On(); 25. } elseif(UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=1", sizeof("radio=1")-1)) { 26. LEDR_Off(); 27. } 28. } 29. res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 20/*dummy temperature*/, io); 30. if(res!=ERR_OK && io!=NULL) { 31. CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */ 32. } 33. } 34. CLS1_SendStr("INFO: Closing connection...\r\n", io->stdOut); 35. res = ESP_CloseConnection(ch_id, io, ESP_DEFAULT_TIMEOUT_MS); 36. } 37. } else{ /* copy messages we receive to console */ 38. while(AS2_GetCharsInRxBuf()>0) { 39. uint8_t ch; 40. 41. (void)AS2_RecvChar(&ch); 42. CLS1_SendChar(ch); 43. } 44. } 45.} If the server is not enabled, it simply copies the received messages to the console: view source print? 1.} else{ /* copy messages we receive to console */ 2. while(AS2_GetCharsInRxBuf()>0) { 3. uint8_t ch; 4. 5. (void)AS2_RecvChar(&ch); 6. CLS1_SendChar(ch); 7. } 8. } Otherwise it scans for an IPD message (ESP_GetIPD()). This function returns the whole message, the channel, the message size and if it is a GET or POST message: 1 res = ESP_GetIPD(APP_EspMsgBuf, sizeof(APP_EspMsgBuf), &ch_id, &size, &isGet, 1000, io); If it is a GET message, then it sends a HTML page to the module: 1 res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 21 /*dummy temperature*/, io); This web page shows the status of the red LED on the board, a (dummy) temperature value and a button to submit new LED values: WSP8266 Web Server The HTML code for this page is constructed in SendWebPage() and sent withAT+CIPSEND: view source print? 01.staticuint8_t SendWebPage(uint8_t ch_id, bool ledIsOn, uint8_t temperature, constCLS1_StdIOType *io) { 02. staticuint8_t http[1024]; 03. uint8_t cmd[24], rxBuf[48], expected[48]; 04. uint8_t buf[16]; 05. uint8_t res = ERR_OK; 06. 07. /* construct web page content */ 08. UTIL1_strcpy(http, sizeof(http), (uint8_t*)"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\nPragma: no-cache\r\n\r\n"); 09. UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n\r\n"); 10. UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n"); 11. UTIL1_strcat(http, sizeof(http), (uint8_t*)"Web Server using ESP8266\r\n"); 12. UTIL1_strcat(http, sizeof(http), (uint8_t*)" 13.\r\n"); 14. UTIL1_strcat(http, sizeof(http), (uint8_t*)"Temp: OC"); 17. if(ledIsOn) { 18. 
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED off");
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED on");
  } else {
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED off");
    UTIL1_strcat(http, sizeof(http), (uint8_t*)"Red LED on");
  }
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"");
  UTIL1_strcat(http, sizeof(http), (uint8_t*)"\r\n\r\n");

  UTIL1_strcpy(cmd, sizeof(cmd), "AT+CIPSEND="); /* parameters are <ch_id>,<length> */
  UTIL1_strcatNum8u(cmd, sizeof(cmd), ch_id);
  UTIL1_chcat(cmd, sizeof(cmd), ',');
  UTIL1_strcatNum16u(cmd, sizeof(cmd), UTIL1_strlen(http));
  UTIL1_strcpy(expected, sizeof(expected), cmd); /* we expect the echo of our command */
  UTIL1_strcat(expected, sizeof(expected), "\r\r\n> "); /* expect "> " */
  UTIL1_strcat(cmd, sizeof(cmd), "\r\n");
  res = ESP_SendATCommand(cmd, rxBuf, sizeof(rxBuf), expected, ESP_DEFAULT_TIMEOUT_MS, io);
  if (res!=ERR_OK) {
    if (io!=NULL) {
      CLS1_SendStr("INFO: TIMEOUT, closing connection!\r\n", io->stdOut);
    }
  } else {
    if (io!=NULL) {
      CLS1_SendStr("INFO: Sending http page...\r\n", io->stdOut);
    }
    UTIL1_strcat(http, sizeof(http), "\r\n\r\n"); /* need to add this to end the command! */
    res = ESP_SendATCommand(http, NULL, 0, NULL, ESP_DEFAULT_TIMEOUT_MS, io);
    if (res!=ERR_OK) {
      CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */
    } else {
      for(;;) { /* breaks */
        res = ESP_ReadCharsUntil(buf, sizeof(buf), '\n', 1000);
        if (res==ERR_OK) { /* line read */
          if (io!=NULL) {
            CLS1_SendStr(buf, io->stdOut); /* copy on console */
          }
        }
        if (UTIL1_strncmp(buf, "SEND OK\r\n", sizeof("SEND OK\r\n")-1)==0) { /* ok from module */
          break;
        }
      }
    }
  }
  return res;
}

In case of a POST message (the user has pressed the button), I scan for the radio element string, turn the LED on or off accordingly, and re-submit the new web page:

  } else { /* POST: received info */
    int pos;

    pos = UTIL1_strFind(APP_EspMsgBuf, "radio=");
    if (pos!=-1) { /* found */
      if (UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=0", sizeof("radio=0")-1)) {
        LEDR_On();
      } else if (UTIL1_strncmp(&APP_EspMsgBuf[pos], "radio=1", sizeof("radio=1")-1)) {
        LEDR_Off();
      }
    }
    res = SendWebPage(ch_id, LEDR_Get()!=FALSE, 20 /* dummy temperature */, io);
    if (res!=ERR_OK && io!=NULL) {
      CLS1_SendStr("Sending page failed!\r\n", io->stdErr); /* copy on console */
    }
  }

Finally, it closes the connection at the end:

  CLS1_SendStr("INFO: Closing connection...\r\n", io->stdOut);
  res = ESP_CloseConnection(ch_id, io, ESP_DEFAULT_TIMEOUT_MS);

With this, I handle GET and POST messages and can toggle the LED on my board :-) :-).

Summary

It is amazing what is possible with this tiny and inexpensive ($4.50) WiFi module. The simple AT interface allows small and tiny microprocessors to connect to the internet or the local network. With all the hype around the 'Internet of Things', this is very likely where things will end up: small nodes connecting in an easy way to the network. The processor on the ESP8266 is probably more powerful than the KL25Z (the specs and data sheets of the ESP8266 are still evolving), and it is possible to run the module in standalone mode, which is a very interesting approach; see the links at the end of this article. But still, having a UART way to connect to the network is very useful and powerful. Other modules cost multiple times more.
I expect that many vendors will come up with similar integrated modules, e.g. combining an ARM processor with the WiFi radio, similar to the ESP8266 module. For sure, the ESP8266 has a head start and has paved the way for how WiFi connectivity should work. We will all see what the future brings. Until then, the ESP8266 module is something I can use in many projects :-).

The sources and project files can be found on GitHub: https://github.com/ErichStyger/mcuoneclipse/tree/master/Examples/KDS/FRDM-KL25Z/FRDM-KL25Z_ESP8266

Happy Web-Serving :-)

Useful Links:

http://www.electrodragon.com/w/Wi07c
http://scargill.wordpress.com/category/esp8266/
https://github.com/esp8266/esp8266-webserver
http://www.cse.dmu.ac.uk/~sexton/ESP8266/
http://defcon-cc.dyndns.org/wiki/ESP8266#Update
http://www.xess.com/blog/esp8266-resources/
December 2, 2014
by Erich Styger
· 33,388 Views