I wrote about Cypher's MERGE function a couple of weeks ago, and over the last few days, I've been exploring how it works with schema indexes and unique constraints.

A common use case with Neo4j is to model users and events, where an event could be a tweet, Facebook post, or Pinterest pin. The model might look like this:

We'd like to ensure that we don't get duplicate users or events, and MERGE provides the semantics to do this:

MERGE (u:User {id: {userId}})
MERGE (e:Event {id: {eventId}})
MERGE (u)-[:CREATED_EVENT]->(e)
RETURN u, e

MERGE ensures that a pattern exists in the graph: either the pattern already exists, or it needs to be created. Note that each MERGE clause applies to its whole pattern, which is why the two nodes and the relationship are merged in three separate clauses rather than one.

To see how MERGE behaves under concurrent writes, I put together the following test:
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.impl.util.FileUtils;

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MergeTime {
    public static void main(String[] args) throws Exception {
        String pathToDb = "/tmp/foo";
        FileUtils.deleteRecursively(new File(pathToDb));

        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(pathToDb);
        final ExecutionEngine engine = new ExecutionEngine(db);

        ExecutorService executor = Executors.newFixedThreadPool(50);
        final Random random = new Random();

        final int numberOfUsers = 10;
        final int numberOfEvents = 50;
        int iterations = 100;
        final List<Integer> userIds = generateIds(numberOfUsers);
        final List<Integer> eventIds = generateIds(numberOfEvents);

        List<Future<?>> merges = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            Integer userId = userIds.get(random.nextInt(numberOfUsers));
            Integer eventId = eventIds.get(random.nextInt(numberOfEvents));
            merges.add(executor.submit(mergeAway(engine, userId, eventId)));
        }

        for (Future<?> merge : merges) {
            merge.get();
        }
        executor.shutdown();

        ExecutionResult userResult = engine.execute(
            "MATCH (u:User) RETURN u.id as userId, COUNT(u) AS count ORDER BY userId");
        System.out.println(userResult.dumpToString());
    }

    private static Runnable mergeAway(final ExecutionEngine engine,
                                      final Integer userId, final Integer eventId) {
        return new Runnable() {
            @Override
            public void run() {
                try {
                    ExecutionResult result = engine.execute(
                        "MERGE (u:User {id: {userId}})\n" +
                        "MERGE (e:Event {id: {eventId}})\n" +
                        "MERGE (u)-[:CREATED_EVENT]->(e)\n" +
                        "RETURN u, e",
                        MapUtil.map("userId", userId, "eventId", eventId));

                    // throw away the rows, but iterate so the query fully executes
                    for (Map<String, Object> row : result) { }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
    }

    private static List<Integer> generateIds(int amount) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= amount; i++) {
            ids.add(i);
        }
        return ids;
    }
}

We create a maximum of 10 users and 50 events and then do 100 iterations of random (user, event) pairs with 50 concurrent threads. Afterward, we execute a query that checks how many users of each id have been created and get the following output:

+----------------+
| userId | count |
+----------------+
| 1      | 6     |
| 2      | 3     |
| 3      | 4     |
| 4      | 8     |
| 5      | 9     |
| 6      | 7     |
| 7      | 5     |
| 8      | 3     |
| 9      | 3     |
| 10     | 2     |
+----------------+
10 rows

Next, I added a schema index on users and events to see if that would make any difference, something Javad Karabi recently asked on the user group.

CREATE INDEX ON :User(id)
CREATE INDEX ON :Event(id)

We wouldn't expect this to make a difference, as schema indexes don't ensure uniqueness, but I ran it anyway and got the following output:

+----------------+
| userId | count |
+----------------+
| 1      | 2     |
| 2      | 9     |
| 3      | 7     |
| 4      | 2     |
| 5      | 3     |
| 6      | 7     |
| 7      | 7     |
| 8      | 6     |
| 9      | 5     |
| 10     | 3     |
+----------------+
10 rows

If we want to ensure the uniqueness of users and events, we need to add a unique constraint on the id of both of these labels:

CREATE CONSTRAINT ON (user:User) ASSERT user.id IS UNIQUE
CREATE CONSTRAINT ON (event:Event) ASSERT event.id IS UNIQUE

Now if we run the test, we'll only end up with one of each user:

+----------------+
| userId | count |
+----------------+
| 1      | 1     |
| 2      | 1     |
| 3      | 1     |
| 4      | 1     |
| 5      | 1     |
| 6      | 1     |
| 7      | 1     |
| 8      | 1     |
| 9      | 1     |
| 10     | 1     |
+----------------+
10 rows

We'd see the same result if we ran a similar query checking for the uniqueness of events. As far as I can tell, this duplication of nodes that we merge on only happens if you try to create the same node twice concurrently. Once the node has been created, we can use MERGE with a non-unique index and a duplicate node won't get created.
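For completeness, the constraints can also be created from the test harness itself before the worker threads start. Here is a minimal sketch, assuming the same Neo4j 2.0 embedded API used above; createUniqueConstraints is a helper introduced here for illustration, not part of the original test:

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

public class CreateConstraints {
    // create the unique constraints before any merges are submitted
    public static void createUniqueConstraints(GraphDatabaseService db) {
        Transaction tx = db.beginTx();
        try {
            db.schema().constraintFor(DynamicLabel.label("User"))
                    .assertPropertyIsUnique("id").create();
            db.schema().constraintFor(DynamicLabel.label("Event"))
                    .assertPropertyIsUnique("id").create();
            tx.success();
        } finally {
            tx.finish();
        }
    }
}

You would call createUniqueConstraints(db) once, right after the database is started and before the executor submits any work. Note that a unique constraint is backed by an index, so there is no need to also run CREATE INDEX on the same label and property.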
All the code from this post is available as a gist if you want to play around with it.
You might have heard of C2DM (cloud-to-device messaging), which basically allowed third-party applications to send (push) lightweight messages to their Android applications. Well, C2DM as such is now deprecated and replaced with its successor up the evolutionary ladder: GCM, or Google Cloud Messaging.

GCM is a (free) service that allows developers to push two types of messages from their application servers to any number of Android devices registered with the service:

collapsible, "send-to-sync" messages
non-collapsible messages with a payload up to 4k in size

"Collapsible" means that the most recent message overwrites the previous one. A "send-to-sync" message is used to notify a mobile application to sync its data with the server. In case the device comes online after being offline for a while, the client will only get the most recent server message.

If you want to add push notifications to your Android applications, the getting started guide will walk you through the setup process step by step, even supplying you with a two-part demo application (client + server) that you can just install and play around with. The setup process will provide you with the two most essential pieces of information needed to run GCM:

An API key, needed by your server to send GCM push notifications
A sender ID, needed by your clients to receive GCM messages from the server

Everything is summarized in the following screen you get after using the Google API Console:

The quickest way to write both server and client code is to install the sample demo application and tweak it to your needs. In particular, you might want to do at least one of the following:

Change the demo's in-memory datastore into a real persistent one.
Change the type and/or the content of push messages.
Change the client's automatic device registration on start-up to a user preference, so that the handset user has the option to register/unregister for the push notifications.

We'll do the last option as an example. Picking up where the demo ends, here's a quick way to set up push preferences and integrate them into your existing Android application clients.

In your Android project resources (res/xml) directory, create a preferences.xml file (a sketch of one appears at the end of this section) and the corresponding activity:

// package here

import android.os.Bundle;
import android.preference.PreferenceActivity;

public class PushPrefsActivity extends PreferenceActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.xml.preferences);
    }
}

The above will provide the following UI:

The "enable server push" checkbox is where your Android application user decides to register for your push messages.
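For reference, here is a minimal sketch of what that preferences.xml might contain, using the keys the main activity reads below ("sname", "sip", "sport", "sid", and "enable"); the titles are illustrative assumptions:

<?xml version="1.0" encoding="utf-8"?>
<!-- a minimal sketch; the keys must match those read in checkPushPrefs() -->
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <EditTextPreference
        android:key="sname"
        android:title="Server name" />
    <EditTextPreference
        android:key="sip"
        android:title="Server IP" />
    <EditTextPreference
        android:key="sport"
        android:title="Server port" />
    <EditTextPreference
        android:key="sid"
        android:title="Sender ID" />
    <CheckBoxPreference
        android:key="enable"
        android:title="Enable server push" />
</PreferenceScreen>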
Then, it's only a matter of using that preferences class in your main activity and doing the required input processing. The following skeleton class only shows your own code add-ons to the pre-existing sample application:

// package here

import com.google.android.gcm.GCMRegistrar;
// other imports here

public class MainActivity extends Activity {

    /** these two should be static imports from a utilities class */
    public static String SERVER_URL;
    public static String SENDER_ID;

    private boolean pushEnabled;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // other code here...
        processPush();
    }

    /** check push on back button,
     *  if PushPrefsActivity is next activity on stack */
    @Override
    public void onResume() {
        super.onResume();
        processPush();
    }

    /**
     * Enable user to register/unregister for push notifications:
     * 1. register user if all fields in prefs are filled and flag is set
     * 2. un-register if flag is un-set and user is registered
     */
    private void processPush() {
        if (checkPushPrefs() && pushEnabled) {
            // register for GCM using the sample app code
        }
        if (!pushEnabled && GCMRegistrar.isRegisteredOnServer(this)) {
            GCMRegistrar.unregister(this);
        }
    }

    /** check server push preferences */
    private boolean checkPushPrefs() {
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
        String name = prefs.getString("sname", "");
        String ip = prefs.getString("sip", "");
        String port = prefs.getString("sport", "");
        String senderId = prefs.getString("sid", "");
        pushEnabled = prefs.getBoolean("enable", false);

        boolean allFilled = checkAllFilled(name, ip, port, senderId);
        if (allFilled) {
            SENDER_ID = senderId;
            SERVER_URL = "http://" + ip + ":" + port + "/" + name;
        }
        return allFilled;
    }

    /** checks if any number of string fields are filled */
    private boolean checkAllFilled(String... fields) {
        for (String field : fields) {
            if (field == null || field.length() == 0) {
                return false;
            }
        }
        return true;
    }
}

The above is pretty much self-explanatory. Now GCM push notifications have been integrated into your existing application. If you are registered, you get a system notification message at each server push, even when your application is not running. Opening up the message will automatically open your application.

GCM is pretty easy to set up, since most of the plumbing work is done for you. A side note: if you'd like to isolate the push functionality in its own sub-package, be aware that the GCM service GCMIntentService, provided by the sample application and responsible for handling GCM messages, needs to be in your main package (as indicated in the setup documentation); otherwise GCM won't work.

When communicating with the sample server via an HTTP POST, the sample client does a number of automatic retries using exponential back-off, meaning that the waiting period before a retry in case of failure is each time twice the amount of the preceding wait period, up to the maximum number of retries (5 at the time of this writing). You might want to change that if it doesn't suit you. It may not matter that much, though, since those retries are done in a separate thread (using AsyncTask) rather than the main UI thread, which minimizes the effects on your mobile application's pre-existing flow of operations.
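To make that retry pattern concrete, here is a minimal, self-contained sketch of exponential back-off; the constants and the runWithBackoff helper are illustrative, not the sample client's actual code:

import java.util.concurrent.Callable;

public class BackoffSketch {
    private static final int MAX_ATTEMPTS = 5;                // matches the "5 retries" noted above
    private static final long INITIAL_BACKOFF_MILLIS = 2000;  // assumed starting wait

    /** Runs the given request, doubling the wait between failed attempts. */
    public static boolean runWithBackoff(Callable<Boolean> request) {
        long backoff = INITIAL_BACKOFF_MILLIS;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                if (request.call()) {
                    return true;               // success, stop retrying
                }
            } catch (Exception ignored) {
                // treat an exception like a failed attempt
            }
            if (attempt == MAX_ATTEMPTS) {
                break;                         // out of attempts
            }
            try {
                Thread.sleep(backoff);         // wait before the next try
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            backoff *= 2;                      // exponential back-off: double each time
        }
        return false;
    }
}

Each failed attempt doubles the wait (2s, 4s, 8s, ...), which is exactly the "twice the preceding wait period" behavior described above.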
For developers who have worked on J2EE web applications for many years, getting into Flex will seem both very fun and familiar, thanks to the simplicity and power of ActionScript and the UI framework, and quite tedious and frustrating when it comes to developing the core application logic and the server integration. In some ways, developing Flex applications with widely used frameworks like Cairngorm and BlazeDS involves a lot of plumbing code (business delegates, service facades, conversions between JPA entities and value objects, ...) and will remind you of the days of Struts and EJB2.

The Granite Data Services project was started with the (ambitious) goal of providing Flex with the same kind of development model we were used to with modern J2EE frameworks. The GDS remoting functionality has been designed from the beginning to support the serialization of JPA/Hibernate detached entities and to easily connect to the most important J2EE frameworks (EJB3, Spring, Seam, Guice). In most cases, this removes the need to write and maintain service facades and value objects on the J2EE layer. In fact, that finally means that a Flex client can consume the exact same set of server services as a classic web application.

Another repetitive task is building the ActionScript model classes. GraniteDS provides the Gas3 tool, which can automatically generate the ActionScript model classes from the Java data model. With the latest GraniteDS 1.1 release, the process is further improved by the Granite Eclipse builder plugin, which regenerates the necessary ActionScript classes on the fly whenever JPA entities are created or modified in the Eclipse project. You just have to write your JPA data model, and you can directly make use of the generated AS3 classes in the Flex UI layer (a sketch of such an entity appears at the end of this article).

This is already a big step towards simpler server integration in Flex, but GDS 1.1 brings an even simpler programming model. It targets only JBoss Seam for now, but integrations with Spring and EJB3 are already on the way. The Tide project aims to provide the same simplicity for building Flex/AIR applications that JBoss Seam has brought to J2EE. Tide requires almost no configuration during development and automates most of the plumbing code generally found, for example, in Cairngorm business delegates or service locators. Contrary to other Flex frameworks whose goal is to put all business logic on the client, Tide client/server interactions are done exclusively through method calls on exposed services, which respects the transaction boundaries, security, and validation rules defined by these services.

Tide mainly consists of a Flex library that provides data-oriented functionality:

An entity cache that ensures all managed entity instances are unique in a Tide context. This in particular makes it possible to maintain correct data bindings between calls to remote services.
A collection wrapping mechanism that enables transparent initialization of lazy collections.
A Flex collection component, integrated with JBoss Seam's paged query component, that can be used as a data provider for Flex UI components and supports remote sorting and filtering.
Complete integration with Seam's events, both synchronous and asynchronous, allowing a Flex client to observe events raised by the server.
Flex validators integrated with the server-side Hibernate Validator, allowing validation messages either on the fly or after remote calls.
Client support for Seam conversations.
A lightweight component-based Flex framework that is deeply integrated with the other features and can advantageously replace Cairngorm or PureMVC.

Let's have a look at a very simple example to see how this works.

Seam component (simply extracted from the Seam booking sample):

@Stateful
@Name("hotelSearch")
@Scope(ScopeType.SESSION)
@Restrict("#{identity.loggedIn}")
public class HotelSearchingAction implements HotelSearching {

    @PersistenceContext
    private EntityManager em;

    private String searchString;
    private int pageSize = 10;
    private int page;

    @DataModel
    private List<Hotel> hotels;

    public void find() {
        page = 0;
        queryHotels();
    }

    ...

    private void queryHotels() {
        hotels = em.createQuery(
                "select h from Hotel h where lower(h.name) like #{pattern} " +
                "or lower(h.city) like #{pattern} or lower(h.zip) like #{pattern} " +
                "or lower(h.address) like #{pattern}")
            .setMaxResults(pageSize)
            .setFirstResult(page * pageSize)
            .getResultList();
    }

    ...

    public List<Hotel> getHotels() {
        return this.hotels;
    }

    public int getPageSize() {
        return pageSize;
    }

    public void setPageSize(int pageSize) {
        this.pageSize = pageSize;
    }

    @Factory(value = "pattern", scope = ScopeType.EVENT)
    public String getSearchPattern() {
        return searchString == null ? "%" :
            '%' + searchString.toLowerCase().replace('*', '%') + '%';
    }

    public String getSearchString() {
        return searchString;
    }

    public void setSearchString(String searchString) {
        this.searchString = searchString;
    }

    @Remove
    public void destroy() {}
}

MXML application:

[Bindable]
private var tideContext:Context = Seam.getInstance().getSeamContext();

// Components initialization in a static block
{
    Seam.getInstance().addComponents([HotelsCtl]);
}

[Bindable] [In]
public var hotels:ArrayCollection;

private function init():void {
    tideContext.mainAppUI = this;    // Registers the application with Tide
}

private function search(searchString:String):void {
    dispatchEvent(new TideUIEvent("searchForHotels", searchString));
}

Tide Flex component:

import mx.collections.ArrayCollection;

[Name("hotelsCtl")]
[Bindable]
public class HotelsCtl {

    [In]
    public var hotels:ArrayCollection;

    [In]
    public var hotelSearch:Object;

    [Observer("searchForHotels")]
    public function search(searchString:String):void {
        hotelSearch.searchString = searchString;
        hotelSearch.find();
    }
}

Of course, this is an overly simple example, but there is close to no unnecessary code, while keeping a clean separation of concerns between the UI, the client component, and the remote service. Building Flex applications could hardly be easier.

There are a lot of choices out there today for creating rich Internet applications, each with its own set of advantages. When making the decision on which path to take, you want to get started easily, but without sacrificing the ability to create a robust, scalable, and maintainable application. GraniteDS maintains this balance.
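As a reference for the Gas3 generation step mentioned earlier, the input on the Java side is just a plain JPA entity. A minimal sketch follows; the Hotel fields here are illustrative assumptions, not copied from the booking sample:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// a minimal JPA entity of the kind Gas3 turns into an ActionScript model class
@Entity
public class Hotel {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String city;

    public Long getId() { return id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
}

Gas3 would generate the matching AS3 class from an entity like this, so the Flex layer binds to the same model the server persists.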