CouchDB and Node.js are a perfect fit, and Node.js's request module is the perfect glue. Here's why.
Paul Gross describes Curator, a tool created at Braintree to support using Riak as their next-generation data store. To manage the continuous evolution of Braintree's apps, Curator helps with what the team calls "lazy data migration".
The challenge winner was Luanne Misquitta's Flavorwocky. Also, some DZone MVBs were Language winners for the challenge.
De Marzi continues his series on Neo4j with the Java Universal Network/Graph Framework (JUNG).
This writer was surprised to find such elegant code while looking into how Sinatra calls code via throw and catch.
Bastuträsk Bänk marks the first milestone in the Neo4j 1.7 series. Includes updates on the evolution of the Cypher language, as well as documentation goodies.
Max De Marzi uses the "arsenal" of algorithms from graph theory, data mining, and social network analysis that makes up the Java Universal Network/Graph Framework (JUNG).
We love Cassandra as a data store, but unfortunately it doesn't support all of our use cases natively. In the end, we decided to implement our own trigger mechanism using Aspect-Oriented Programming (AOP). Our mechanism is roughly based on Jonathan Ellis's Crack-Smoking Commit Log (CSCL).
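The post implements its triggers in Java with AOP; as a language-neutral sketch (the decorator and function names here are my own, not from the post), intercepting the write path so that registered triggers fire after each write looks like:

```python
import functools

store = {}      # stand-in for the real data store
audit_log = []  # side effect produced by a trigger

def with_triggers(triggers):
    """Wrap a write method so every registered trigger fires after each
    write -- a minimal stand-in for AOP-style "after" advice."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            for trigger in triggers:
                trigger(*args, **kwargs)
            return result
        return wrapper
    return decorator

@with_triggers([lambda key, value: audit_log.append((key, value))])
def put(key, value):
    # Stand-in for the actual write path being advised.
    store[key] = value
```

In real AOP the interception point is woven in without touching the write path's source, which is exactly why it suits retrofitting triggers onto a store that lacks them natively.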
Heroku recently added support for hstore, PostgreSQL's key-value column type, which can be used seamlessly from Rails 4. The post compares hstore columns to hashes in Ruby.
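The Ruby-hash comparison holds up because an hstore value is, at bottom, just a text representation of key/value pairs. A minimal Python sketch (the helper name is my own) of serializing a dict into that literal form:

```python
def to_hstore(d):
    # Render a dict as a PostgreSQL hstore literal,
    # e.g. {"color": "blue"} becomes "color"=>"blue".
    def quote(s):
        return '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'
    return ', '.join(f'{quote(k)}=>{quote(v)}' for k, v in d.items())
```

ORMs like ActiveRecord in Rails 4 hide this serialization entirely, which is what makes the column feel like a native hash.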
Further developments on the NoSQL front continue with Datomic, a powerful new database designed to bring data directly into applications for ease of processing. If you head over to the official Datomic site . . .
Sony Arouje recently created NodeMapper, a layer on top of Neo4jD that deals with statically typed entities. Here's how it works.
MongoDB is much-loved for its flexibility, but flexibility is a double-edged sword: MongoDB doesn't exactly walk you through the data modeling process. This post particularly examines one of the major, high-level data-modeling decisions -- what data to embed -- and answers: not everything. Here's why.
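The embed-or-reference decision can be sketched with plain Python dicts standing in for BSON documents (collection and field names below are illustrative, not from the post):

```python
# Embedding: comments live inside the post document. Good when the data
# is always read with its parent and the array stays reasonably small.
post_embedded = {
    "_id": "post1",
    "title": "Why not embed everything?",
    "comments": [
        {"author": "ann", "text": "Nice post"},
        {"author": "bob", "text": "Agreed"},
    ],
}

# Referencing: comments are separate documents pointing back at the post.
# Better when the child data is unbounded or queried on its own.
post_referenced = {"_id": "post1", "title": "Why not embed everything?"}
comments = [
    {"_id": "c1", "post_id": "post1", "author": "ann", "text": "Nice post"},
    {"_id": "c2", "post_id": "post1", "author": "bob", "text": "Agreed"},
]
```

Embedding everything runs into MongoDB's per-document size limit and makes independently growing data awkward, which is why the answer is "not everything."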
Hone your big-data skills with these three exercises, covering (1) sorting huge files, (2) finding the k-th smallest element in an unsorted list, and (3) explaining the quicksort algorithm and solving its recurrence equation.
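Exercise (2) is classically solved without a full sort via quickselect; a minimal sketch (the function name is my own):

```python
import random

def kth_smallest(items, k, seed=0):
    """Return the k-th smallest element (1-indexed) of an unsorted list
    using quickselect: expected O(n) time, no full sort required."""
    rng = random.Random(seed)
    xs = list(items)
    rank = k - 1  # 0-indexed rank we are looking for
    while True:
        pivot = xs[rng.randrange(len(xs))]
        lows = [x for x in xs if x < pivot]
        pivots = [x for x in xs if x == pivot]
        if rank < len(lows):
            xs = lows                     # answer is below the pivot
        elif rank < len(lows) + len(pivots):
            return pivot                  # pivot occupies the target rank
        else:
            rank -= len(lows) + len(pivots)
            xs = [x for x in xs if x > pivot]  # answer is above the pivot
```

Unlike quicksort, only one partition is recursed into per step, which is what drops the expected cost from O(n log n) to O(n).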
Storing massive amounts of data in a NoSQL data store is just one side of the Big Data equation. Being able to visualize your data in a way that yields deeper insights is where things really start to get interesting. This article details how to visualize your Google Analytics data using the Circos circular layout software.
Using production data from LinkedIn and Yammer, this video quantitatively demonstrates why, in practice, eventually consistent partial quorums often serve consistent data.
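The intuition behind that result can be sketched numerically (names and the uniform-replica-choice model are my own simplification, not the video's methodology): with N replicas, reads of size R and writes of size W are guaranteed to overlap only when R + W > N, but even non-overlapping partial quorums are often fresh in practice:

```python
import random

def quorums_intersect(n, r, w):
    # Strict quorum condition: every read set must overlap every write set.
    return r + w > n

def stale_read_probability(n, r, w, trials=10_000, seed=0):
    # Monte Carlo estimate of reading only unwritten replicas, assuming
    # read and write sets are chosen uniformly at random.
    rng = random.Random(seed)
    replicas = list(range(n))
    stale = 0
    for _ in range(trials):
        written = set(rng.sample(replicas, w))
        read = set(rng.sample(replicas, r))
        if not (written & read):
            stale += 1
    return stale / trials
```

For N=3, R=W=1 the miss probability is 2/3 per write in this toy model; real deployments do far better because anti-entropy propagates writes to the remaining replicas between operations, which is the gap the video quantifies with production data.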
Includes all the rules and some strategies for Shard Carks, a cooperative strategy game that helps you choose a shard key.
Presentation includes intriguing stats and some pitfalls of using Cassandra. Includes timestamps to skip to if you don't want to view the entire presentation.
After finding that memcached and MongoDB didn't perform well as cache implementations, the team settled on Redis.
Marco's a great writer who provides an informative introduction to and exploration of the graph nature of Wikipedia.
De Marzi writes, "When exploring a social network it is important that we understand not only the strength of the relationship now, but over time."
First thing to configure: the switches in neo4j-wrapper.conf. Straight-up tutorial includes useful screenshots.
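For orientation, the switches in question are the JVM settings in conf/neo4j-wrapper.conf; the values below are illustrative, not recommendations from the tutorial:

```
# Initial and maximum JVM heap for the Neo4j server process, in MB
wrapper.java.initmemory=512
wrapper.java.maxmemory=1024

# Additional JVM switches are appended as numbered parameters
wrapper.java.additional.1=-d64
```

The tutorial's screenshots walk through where these live and what to change them to for a given workload.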
After some benchmarking and testing, we found that DynamoDB has some interesting features. Among the most interesting is its pricing model, where you pay for provisioned performance and the resources you use. This post reveals our experience so far with DynamoDB and our migrations to this service.
Max De Marzi discusses and tries out Michael Hunger's batch importer for quick loading of CSV data.
MongoDB's recent introduction of the Aggregation Framework provides a simpler solution for calculating aggregate values; this post describes the refactoring of the map-reduce algorithm for optimal use of the aggregation framework.
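As an illustration of why the refactoring pays off (collection and field names here are my own, not the post's), a count-by-tag that once required map-reduce collapses to a two-stage pipeline; below it is shown as the structure you would hand to pymongo's collection.aggregate, next to a pure-Python equivalent of the same stages:

```python
from collections import Counter

# The declarative pipeline, as passed to collection.aggregate(pipeline):
# $unwind emits one document per array element, $group counts per tag.
pipeline = [
    {"$unwind": "$tags"},
    {"$group": {"_id": "$tags", "count": {"$sum": 1}}},
]

def group_by_tag(docs):
    # Pure-Python equivalent of the $unwind + $group stages above.
    counts = Counter(tag for doc in docs for tag in doc.get("tags", []))
    return [{"_id": tag, "count": n} for tag, n in counts.items()]
```

The declarative form runs server-side in native code, which is where the performance win over JavaScript map-reduce comes from.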
"What distinguishes one database type from another is the structure of the data they store and the means by which that data is retrieved . . . this short post will only discuss a few."