The author conducts two tests with differing service delay times to measure any difference in performance between reactive and synchronous programming.
Find a suitable client tool for MQTT testing. The tools are grouped into desktop, browser, and command-line categories; all are available for free, and most are open source.
Cloud services are playing an increasingly important role in our daily lives, and one of the clearest examples is how they are changing the way we message one another. Android cloud messaging makes it possible to reach the people we care about and to store those messages directly in the cloud, so you never miss the chance to communicate with someone who matters to you.

You might have heard of C2DM (Cloud to Device Messaging), which allowed third-party applications to push lightweight messages to their Android applications. C2DM is now deprecated and has been replaced by its successor up the evolutionary ladder: GCM, or Google Cloud Messaging. GCM is a free service that lets developers push two types of messages from their application servers to any number of Android devices registered with the service:

- collapsible, "send-to-sync" messages
- non-collapsible messages with a payload of up to 4 KB

"Collapsible" means that the most recent message overwrites the previous one. A "send-to-sync" message notifies a mobile application to sync its data with the server. If a device comes online after being offline for a while, the client only receives the most recent server message.

If you want to add push notifications to your Android applications, the Getting Started guide walks you through the setup process step by step, even supplying a two-part demo application (client + server) that you can install and play around with. The setup process provides the two most essential pieces of information needed to run GCM:

- an API key, needed by your server to send GCM push notifications
- a Sender ID, needed by your clients to receive GCM messages from the server

Both are summarized on the screen you get after using the Google API Console.

The quickest way to write both server and client code is to install the sample demo application and tweak it to your needs. In particular, you might want to do any of the following:

- Change the demo's in-memory datastore into a real persistent one.
- Change the type and/or the content of the push messages.
- Change the client's automatic device registration on start-up into a user preference, so that the handset user has the option to register or unregister for push notifications.

We'll do the last option as an example. Picking up where the demo ends, here is a quick way to set up push preferences and integrate them into your existing Android application clients. In your Android project resources (res/xml) directory, create a preferences.xml file (a sketch of one appears after the activity code below) and the corresponding activity:

// package here
import android.os.Bundle;
import android.preference.PreferenceActivity;

public class PushPrefsActivity extends PreferenceActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Load the preference definitions from res/xml/preferences.xml
        addPreferencesFromResource(R.xml.preferences);
    }
}

The activity above provides the preferences UI. The "Enable server push" checkbox is where your Android application user decides to register for your push messages.
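Here is a minimal sketch of such a preferences.xml, assuming the preference keys read by the main activity that follows (sname, sip, sport, sid, and enable); the titles are placeholders, not the demo's exact wording:

<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical reconstruction: the keys match the SharedPreferences lookups in MainActivity -->
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <EditTextPreference
        android:key="sname"
        android:title="Server name" />
    <EditTextPreference
        android:key="sip"
        android:title="Server IP" />
    <EditTextPreference
        android:key="sport"
        android:title="Server port" />
    <EditTextPreference
        android:key="sid"
        android:title="Sender ID" />
    <CheckBoxPreference
        android:key="enable"
        android:title="Enable server push" />
</PreferenceScreen>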
Then it's only a matter of using that preferences class in your main activity and doing the required input processing. The following skeleton class shows only your own additions to the pre-existing sample application:

// package here
import com.google.android.gcm.GCMRegistrar;
// other imports here

public class MainActivity extends Activity {

    /** These two should be static imports from a utilities class. */
    public static String SERVER_URL;
    public static String SENDER_ID;

    private boolean pushEnabled;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // other code here...
        processPush();
    }

    /** Check push on back button if PushPrefsActivity is the next activity on the stack. */
    @Override
    protected void onResume() {
        super.onResume();
        processPush();
    }

    /**
     * Enable the user to register/unregister for push notifications:
     * 1. register if all fields in the prefs are filled and the flag is set
     * 2. unregister if the flag is unset and the user is registered
     */
    private void processPush() {
        if (checkPushPrefs() && pushEnabled) {
            // register for GCM using the sample app code
        }
        if (!pushEnabled && GCMRegistrar.isRegisteredOnServer(this)) {
            GCMRegistrar.unregister(this);
        }
    }

    /** Check the server push preferences. */
    private boolean checkPushPrefs() {
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
        String name = prefs.getString("sname", "");
        String ip = prefs.getString("sip", "");
        String port = prefs.getString("sport", "");
        String senderId = prefs.getString("sid", "");
        pushEnabled = prefs.getBoolean("enable", false);
        boolean allFilled = checkAllFilled(name, ip, port, senderId);
        if (allFilled) {
            SENDER_ID = senderId;
            SERVER_URL = "http://" + ip + ":" + port + "/" + name;
        }
        return allFilled;
    }

    /** Checks that all of the given string fields are filled. */
    private boolean checkAllFilled(String... fields) {
        for (String field : fields) {
            if (field == null || field.length() == 0) {
                return false;
            }
        }
        return true;
    }
}

The above is pretty much self-explanatory. GCM push notifications are now integrated into your existing application. If you are registered, you get a system notification message at each server push, even when your application is not running. Opening the message will automatically open your application.

GCM is pretty easy to set up, since most of the plumbing work is done for you. A side note: if you like to isolate the push functionality in its own sub-package, be aware that the GCM service GCMIntentService, provided by the sample application and responsible for handling GCM messages, needs to be in your main package (as indicated in the setup documentation); otherwise GCM won't work.

When communicating with the sample server via an HTTP POST, the sample client does a number of automatic retries using exponential back-off, meaning that the waiting period before each retry in case of failure is twice the preceding wait period, up to the maximum number of retries (5 at the time of this writing). You might want to change that if it doesn't suit you. It may not matter that much, though, since those retries are done in a separate thread (using AsyncTask) from the main UI thread, which minimizes the effect on your mobile application's pre-existing flow of operations.
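To make the back-off behavior concrete, here is a minimal sketch of the idea, not the sample client's actual code; the class name, constants, and postToServer helper are hypothetical:

import java.io.IOException;

/** Minimal exponential back-off sketch; postToServer is a hypothetical HTTP POST helper. */
public class BackoffSketch {

    private static final int MAX_ATTEMPTS = 5;       // matches the sample's retry cap
    private static final long INITIAL_BACKOFF_MS = 2000;

    public boolean postWithRetries(String serverUrl, String payload) {
        long backoff = INITIAL_BACKOFF_MS;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                postToServer(serverUrl, payload);     // hypothetical helper
                return true;                          // success: stop retrying
            } catch (IOException e) {
                if (attempt == MAX_ATTEMPTS) {
                    return false;                     // give up after the last attempt
                }
                try {
                    Thread.sleep(backoff);            // wait before the next attempt
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
                backoff *= 2;                         // double the wait each time
            }
        }
        return false;
    }

    private void postToServer(String url, String payload) throws IOException {
        // placeholder: open an HttpURLConnection and write the payload
    }
}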
For developers who have worked on J2EE web applications for many years, getting into Flex will seem both very fun and familiar, thanks to the simplicity and power of ActionScript and the UI framework, and quite tedious and frustrating when it comes to developing the core application logic and the server integration. In some ways, developing Flex applications with widely used frameworks like Cairngorm and BlazeDS involves a lot of plumbing code (business delegates, service facades, conversions between JPA entities and value objects, ...) and will remind you of the days of Struts and EJB2.

Data is King

Data is, and will remain, an invaluable tool that companies and individuals alike use to make progress towards their goals; arguably, it is one of the most valuable resources we have today. It is therefore worth investing in data services and building up your knowledge of what they are and why they matter to so many projects: the better we understand them, the better we can put our data to use.

The Granite Data Services project was started with the (ambitious) goal of providing Flex with the same kind of development model we are used to with modern J2EE frameworks. The GDS remoting functionality has been designed from the beginning to support the serialization of JPA/Hibernate detached entities and to connect easily to the most important J2EE frameworks (EJB3, Spring, Seam, Guice). In most cases, this removes the need to write and maintain service facades and value objects on the J2EE layer. In fact, it means that a Flex client can finally consume the exact same set of server services as a classic web application.

Another repetitive task is building the ActionScript model classes. GraniteDS provides the Gas3 tool, which can automatically generate the ActionScript model classes from the Java data model. With the latest GraniteDS 1.1 release, the process is further improved by the Granite Eclipse builder plugin, which regenerates the necessary ActionScript classes on the fly whenever JPA entities are created or modified in the Eclipse project. You just write your JPA data model and can directly use the generated AS3 classes in the Flex UI layer (a sketch of this generation step appears after the example at the end of this article).

This is already a big step towards simpler server integration in Flex, but GDS 1.1 brings an even simpler programming model. It targets only JBoss Seam for now, but integrations with Spring and EJB3 are already on the way. The Tide project aims to bring to Flex/AIR applications the same simplicity that JBoss Seam brought to J2EE. Tide requires almost no configuration during development and automates most of the plumbing code generally found, for example, in Cairngorm business delegates or service locators. Contrary to other Flex frameworks whose goal is to put all business logic on the client, Tide client/server interactions are done exclusively through method calls on exposed services, which respects the transaction boundaries, security, and validation rules defined by those services.

Tide mainly consists of a Flex library that provides data-oriented functionality:

- An entity cache ensures that all managed entity instances are unique in a Tide context. In particular, this makes it possible to maintain correct data bindings between calls to remote services.
- A collection wrapping mechanism enables transparent initialization of lazy collections.
- A Flex collection component, integrated with JBoss Seam's paged query component, can be used as a data provider for Flex UI components and supports remote sorting and filtering.
- Complete integration with Seam's events, both synchronous and asynchronous, allows a Flex client to observe events raised by the server.
- Flex validators integrated with the server-side Hibernate Validator allow validation messages either on the fly or after remote calls.
- Client support for Seam conversations.
- A lightweight component-based Flex framework is deeply integrated with the other features and can advantageously replace Cairngorm or PureMVC.

Let's have a look at a very simple example to see how this works.

Seam component (simply extracted from the Seam booking sample):

@Stateful
@Name("hotelSearch")
@Scope(ScopeType.SESSION)
@Restrict("#{identity.loggedIn}")
public class HotelSearchingAction implements HotelSearching {

    @PersistenceContext
    private EntityManager em;

    private String searchString;
    private int pageSize = 10;
    private int page;

    @DataModel
    private List<Hotel> hotels;

    public void find() {
        page = 0;
        queryHotels();
    }

    ...

    private void queryHotels() {
        hotels = em.createQuery(
                "select h from Hotel h where lower(h.name) like #{pattern} " +
                "or lower(h.city) like #{pattern} or lower(h.zip) like #{pattern} " +
                "or lower(h.address) like #{pattern}")
            .setMaxResults(pageSize)
            .setFirstResult(page * pageSize)
            .getResultList();
    }

    ...

    public List<Hotel> getHotels() {
        return this.hotels;
    }

    public int getPageSize() {
        return pageSize;
    }

    public void setPageSize(int pageSize) {
        this.pageSize = pageSize;
    }

    @Factory(value = "pattern", scope = ScopeType.EVENT)
    public String getSearchPattern() {
        return searchString == null
            ? "%"
            : '%' + searchString.toLowerCase().replace('*', '%') + '%';
    }

    public String getSearchString() {
        return searchString;
    }

    public void setSearchString(String searchString) {
        this.searchString = searchString;
    }

    @Remove
    public void destroy() {}
}

MXML application:

[Bindable]
private var tideContext:Context = Seam.getInstance().getSeamContext();

// Components initialization in a static block
{
    Seam.getInstance().addComponents([HotelsCtl]);
}

[Bindable] [In]
public var hotels:ArrayCollection;

private function init():void {
    tideContext.mainAppUI = this; // registers the application with Tide
}

private function search(searchString:String):void {
    dispatchEvent(new TideUIEvent("searchForHotels", searchString));
}

Tide Flex component:

import mx.collections.ArrayCollection;

[Name("hotelsCtl")]
[Bindable]
public class HotelsCtl {

    [In]
    public var hotels:ArrayCollection;

    [In]
    public var hotelSearch:Object;

    [Observer("searchForHotels")]
    public function search(searchString:String):void {
        hotelSearch.searchString = searchString;
        hotelSearch.find();
    }
}

Of course, this is an overly simple example, but there is almost no unnecessary code, while still keeping a clean separation of concerns between the UI, the client component, and the remote service. Building Flex applications could hardly be easier.

There are a lot of choices out there today for creating rich Internet applications, each with its own set of advantages. When deciding which path to take, you want to get started easily without sacrificing the ability to create a robust, scalable, and maintainable application. GraniteDS maintains this balance.
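As a footnote to the Gas3 discussion above, here is a hedged sketch of the generation step: a minimal JPA entity and the kind of ActionScript 3 class Gas3 might produce from it. The names and metadata are illustrative, based on standard Flex remoting conventions, not Gas3's exact output:

@Entity
public class Hotel {

    @Id @GeneratedValue
    private Long id;

    private String name;
    private String city;

    // getters and setters omitted for brevity
}

// Hypothetical generated AS3 counterpart (illustrative, not actual Gas3 output)
[Bindable]
[RemoteClass(alias="com.example.entities.Hotel")]
public class Hotel {
    public var id:Number;
    public var name:String;
    public var city:String;
}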
Over the last few days, I had the chance to test the Datameer Analytics Solution (DAS). DAS is a platform for Hadoop that includes data source integration, an analytics engine, and visualization functionality. This promise of a fully integrated big data analysis process motivated me to test the product. It really does include all the functionality required for data management or ETL, it provides standard tools to analyze data, and there are nice ways to build visualization dashboards. For example, connectors for Twitter, IMAP, HDFS, or FTP are available. All menus and processes are self-explanatory, and the complete interface is strongly Excel or spreadsheet oriented. If you are familiar with Excel, you can do analyses on your big data out of the box. For fast on-the-fly analysis performance, you work with only a subset of your data, and the analyses you store are then automatically transformed into a kind of procedure. In the end, or according to a schedule you set, you "run" the analyses on your big data: DAS collects the latest data for you, creates MapReduce jobs in the background, and updates all your spreadsheets and visualizations. To close the analysis circle, you can use the connectors to write your results back to HDFS, to a database such as HBase, or to many other technologies.

Analytics the Way You Need Them

Datameer can prove useful in many ways; above all, it is a great way to display large amounts of data in a format that is digestible and useful. As individual human beings we can only absorb so much data, but we can certainly use the information we receive to make important decisions about our business, our products, and the future experiences of our customers. If you have ever seen all of your data come together in one big spreadsheet and been able to visualize it, then you know why products like Datameer help you get the results you really need.

DAS is really designed for big data. If you test it with small data, you will be frustrated by the performance: the overhead of creating MapReduce jobs dominates in this situation. But as soon as you start with real big data analyses, this overhead becomes negligible, and DAS takes over a lot of your programming work.

My Test Infrastructure

The following figure provides a nice overview of the Datameer infrastructure. DAS supports many data sources, runs on all Hadoop distributions, provides a REST API, and accepts plugins as connectors for other modeling languages such as R (#rstats). I tested DAS version 3.1.2 running on our MapR Hadoop cluster, version 3.0.2. After getting the latest package version from Datameer support, the installation was straightforward and worked out of the box. Thanks to Datameer for providing a full test license.

Several online tutorials and videos are available, along with some tutorial apps. Apps are another great feature of Datameer: you can download Datameer apps that include connectors, workbooks, and visualizations for different analysis examples, and you can create your own app from your analyses and share it with your colleagues or the community.
My Test Data and Analyses

I tested DAS with the famous "airline on-time performance" data set, consisting of flight arrival and departure details for all commercial flights within the USA from October 1987 to April 2008. I downloaded all the data (including supplements) to MapR FS, created connectors for the data, and imported the data into a workbook. In the workbook, I tested many classical statistical counting analyses:

- Grouping by airport and counting the number of flights
- Grouping by airline and calculating statistics such as the mean air time
- Using joins to add additional information, like the airline name, to the airline identifier
- Sorting to extract the most interesting airports by different measures

I am not an Excel expert, so it took me some time to get used to this low-level process of doing analysis on spreadsheets. But in the end, it is a very intuitive process for creating analyses. Every new analysis becomes available in a new tab in your workbook. Several nice features support your work; for example, a "sheet dependencies" overview provides information about the dependencies between sheets.

Apart from the classical analyses, DAS provides some data mining functionality, called "smart analytics". So far, it covers k-means clustering, decision trees, column dependencies, and recommendations. It works out of the box but is not yet at a level that is satisfying for real analyses. For example, for k-means clustering there is no support for choosing the right number of clusters (k), and you cannot switch between different distance functions (the default is Euclidean distance).

Finally, I visualized all my results in a nice "infographic". Many different visualization tools and parameters are available; after playing around with the settings, you can create a nice dashboard and share it with your colleagues. Please be aware that the complete data set is about 5 GB: importing it took about 30 minutes, and running the workbook took more than 3 hours in my case. In the end, I split my analyses into several workbooks to keep them manageable.

Summary

It was easy to get started with the Datameer Analytics Solution (DAS). It is definitely a great tool for doing big data analyses without any detailed Hadoop or big data knowledge. Furthermore, it covers many use cases and provides all the functionality required for your daily analysis process. However, as soon as your analyses get more complex, the limitations of Datameer become apparent, and you will probably look for a more powerful toolset or start implementing your big data analyses directly on Hadoop.

Datameer supports many steps in the big data analysis process, it works efficiently, and its usability is straightforward. But big data is more than ETL, data analysis, and visualizing the results. You should never forget to think about your use case and the business value you want to extract from your data. In the end, this is what should guide your choice of tools and implementations.
We take a comparative look at two of the most popular databases on the market today, CouchDB and MariaDB, and what each brings to the table for your team.
Open banking and APIs are at the heart of the digital finance revolution. Monitoring APIs helps protect banks from issues such as failures and slowdowns.
A conceptual architecture for a Conversational Artificial Intelligence (AI) / Natural Language Processing (NLP) based platform for customer support and agent services.
This article will show you how to use the Great Expectations library to test data migration and how to automate your tests in Azure Databricks using C# and NUnit.