This week we're talking to Chanwit Kaewkasi, Assistant Professor at the Suranaree University of Technology’s School of Computer Engineering in Thailand, co-developer of a series of low-cost Big Data clusters, and featured author in DZone's upcoming 2014 Guide to Big Data.
By the time you have developed something and fixed any issues with it, your version is simply not going to be as well tested as a ready-built component that is used by thousands of people.
In anticipation of our 2014 Guide to Big Data, we have arranged for a panel of experts (Kirk Borne, Carla Gentry, Jonathan Ellis, and James G. Kobielus) to answer your Big Data questions on Twitter on Monday, September 22, 2014. To participate, simply ask a question using the hashtag #DZBigData.
The DB2 CONCAT function combines two separate expressions to form a single string expression. Either expression can be a database field or an explicitly defined string when concatenating the values together.
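As a rough sketch of the behavior described, here is the semantics in Python rather than SQL (`db2_concat` is a hypothetical helper name used only for illustration; in DB2, concatenating with a NULL operand yields NULL):

```python
def db2_concat(left, right):
    """Mimic DB2 CONCAT semantics: if either operand is NULL (None),
    the result is NULL; otherwise the two strings are joined."""
    if left is None or right is None:
        return None
    return str(left) + str(right)

# Combining a field value with an explicit string, then another field:
full_name = db2_concat("Jane", db2_concat(" ", "Doe"))  # "Jane Doe"
missing = db2_concat("Jane", None)                      # None (NULL)
```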
A “quantile forecast” is a quantile of the forecast distribution. Still assuming normality, we could generate the forecast quantiles from 1% to 99% in R using...
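The article's example is in R; the same idea can be sketched in Python with the standard library (the mean and standard deviation below are made-up numbers, not from the article):

```python
from statistics import NormalDist

# Assumed point forecast (mean) and forecast standard deviation
mean, sd = 100.0, 15.0
dist = NormalDist(mu=mean, sigma=sd)

# Forecast quantiles from 1% to 99%, still assuming normality
quantiles = {p: dist.inv_cdf(p / 100) for p in range(1, 100)}
```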
Computer security breaches can end up costing even the average small business up to $200,000.
This article covers key aspects of starting a Big Data practice in your organization. I have recently started working in this area, and this post is the result of my research. I hope you find it useful.
The article discusses what stream processing is, how it fits into a big data architecture with Hadoop and a data warehouse (DWH), when stream processing makes sense, and what technologies and products you can choose from. It compares open source and proprietary stream processing / streaming analytics alternatives: Apache Storm, Spark, IBM InfoSphere Streams, TIBCO StreamBase, Software AG's Apama, etc.
Every week at DZone, we feature a new developer/blogger to catch up and find out what he or she is working on now and what's coming next. This week we're talking to Adam Diaz, Hadoop Architect at the Teradata Big Data Center of Excellence and featured author in DZone's upcoming 2014 Guide to Big Data.
Hortonworks’ Stinger Initiative, which finished rolling out in April, expanded on the Hive engine to allow for interactive SQL queries at Hadoop scale. Now Hortonworks has announced their next set of objectives for Hive, which they are calling Stinger.next.
In 2006, Hadoop became a predominant solution in the world of Big Data, and it remains a major player for processing Big Data today. But as needs for Big Data analysis expand and evolve, some analysts and developers consider Hadoop unable to perform to their standards.
Every 6 months at Canonical, the company behind Ubuntu, I work on something technical to test our tools first-hand and to show others new ideas. This time around I created an Instant Big Data solution, or more concretely, “Instant Storm”.
In my last blog post I showed how to group timestamp-based data by week, month, and quarter. I wanted to pull this code out into a function, and it turns out that to do so we actually want the regroup function rather than group_by:
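The original post uses R and dplyr; as an analog sketch of the same week/month/quarter grouping in Python with only the standard library (function and variable names here are my own, not from the post):

```python
from datetime import date

def period_key(d: date, granularity: str) -> str:
    """Return a grouping key for a date: ISO week, month, or quarter."""
    if granularity == "week":
        iso_year, iso_week, _ = d.isocalendar()
        return f"{iso_year}-W{iso_week:02d}"
    if granularity == "month":
        return f"{d.year}-{d.month:02d}"
    if granularity == "quarter":
        return f"{d.year}-Q{(d.month - 1) // 3 + 1}"
    raise ValueError(f"unknown granularity: {granularity}")

# Hypothetical join dates, bucketed by quarter
joins = [date(2014, 1, 15), date(2014, 2, 3), date(2014, 4, 20)]
by_quarter = {}
for d in joins:
    by_quarter.setdefault(period_key(d, "quarter"), []).append(d)
```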
Syslog has been around for a number of decades and provides a protocol used for transporting event messages between computer systems and software applications. The protocol utilizes a layered architecture, which allows the use of any number of transport protocols for transmission of syslog messages.
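One concrete detail of the protocol: each syslog message begins with a priority value derived from its facility and severity codes (PRI = facility × 8 + severity, per RFC 3164/5424). A minimal sketch in Python, where the host and app names are placeholders:

```python
def syslog_pri(facility: int, severity: int) -> int:
    """Priority value carried in the <PRI> part of a syslog message
    (RFC 3164/5424): facility * 8 + severity."""
    return facility * 8 + severity

# Facility 1 = user-level, severity 6 = informational
pri = syslog_pri(1, 6)  # 14
message = f"<{pri}>myhost myapp: hello from syslog"
```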
Links to Big Data Articles and Information, with recent articles on real-world applications of Big Data analysis, thoughts on new and different ways to look at Big Data, and tools for starting Big Data analysis.
The first step was to transform the data so that I had a data frame where each row represented a day on which a member joined the group. To turn that into a chart we can plug it into ggplot and use the cumsum function to generate a line showing the cumulative total:
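The cumulative-total step is R's cumsum; the same idea in Python is itertools.accumulate (the daily counts below are made-up illustration data, not from the post):

```python
from itertools import accumulate

# Hypothetical counts of members joining on each day
daily_joins = [1, 3, 2, 5]

# Running total of members over time, as cumsum would compute it
cumulative = list(accumulate(daily_joins))
# cumulative == [1, 4, 6, 11]
```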
If you missed anything on DZone this week, now's your chance to catch up! This week's best include the anatomy of Hibernate dirty checking, the similarities of Swift and Scala, the Agile version of Superman vs. Batman, and more.
In my continued playing around with R and meetup data I wanted to have a look at when people joined the London Neo4j group based on week, month or quarter of the year to see when they were most likely to do so.
A list of 22 federal agencies that have published data.json files.
How valuable is big data? It’s an important question for developers, who need to be able to respond to ever-shifting markets quickly so they are not left behind.
Last night I woke up from a nightmare: a nightmare containing a future, “improved” version of PowerShell, a competing blogger, and Entity Framework Migrations. Slightly off topic, but I’ll share it anyway.
On 23–25 September, I will be running a 3-day workshop in Perth on “Forecasting: principles and practice” mostly based on my book of the same name.
I gave a talk at ECSA 2014 in Vienna: The Next-Generation BPM for a Big Data World: Intelligent Business Process Management Suites (iBPMS), sometimes also abbreviated iBPM. I want to share the slides with you.
I’ve been playing around with the Rook library and struggled a bit getting a basic Hello World application up and running so I thought I should document it.
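Rook apps are R functions that receive a request environment and return a response; the closest Python shape is a WSGI callable. A minimal analog sketch (this is not the article's Rook code, just the same "Hello World" pattern):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Minimal WSGI 'Hello World' application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World"]

# To serve it locally (blocks until interrupted):
#   make_server("localhost", 8000, app).serve_forever()
```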
As these two fields converge, work has to be done to provide the right set of mechanisms and abstractions. Right now I still think there is a considerable gap which we need to close over the next few years.