Splunk Storm – Machine Data Processing in the Cloud

by Sasha Goldshtein · Jun. 25, 2013

Introduction

Splunk is a platform for processing machine data from various sources such as web logs, syslogs, and log4j logs. It can also work with JSON and CSV file formats, so any application that produces JSON or CSV output can be treated as a data source for Splunk. As the volume and variety of machine data keep increasing, Splunk is becoming a more and more interesting player in the big data world, too.

Splunk can be thought of as a search engine for IT data. Splunk collects data from multiple sources, indexes it, and lets users search it using Splunk's proprietary language called SPL (Search Processing Language). The search results can then be used to create reports and dashboards to visualize the data.
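For example, a simple SPL query such as the following counts events containing the keyword error, grouped by host (purely illustrative; concrete queries against our uploaded data follow later in the article):

error | stats count by host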

Splunk Architecture

Under the hood, the Splunk architecture has the following key components:
- Forwarders are used to forward data to Splunk receiver instances. Receiver instances are normally indexers (see the configuration sketch below).
- Indexers are the Splunk instances that index the data. Indexes are stored in files. There are two types of files: raw data files, which store the data in compressed format, and index files, which contain metadata for the search queries. During indexing, Splunk extracts default fields and identifies events based on their timestamps, or creates timestamps if none are found.
- Search head and search peers. In a distributed environment, the search head manages the search requests, directs them to the search peers, and then merges the results back to the users.
- Splunk Web is a graphical user interface based on a Python application server.
[Figure: Splunk architecture]
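To illustrate how forwarders and indexers fit together, a universal forwarder is typically pointed at an indexer through an outputs.conf stanza roughly like the one below. This is only a minimal sketch; the group name, host name, and port (9997 is the conventional receiving port) are placeholders, not values from this article.

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997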

Splunk Storm

Splunk Storm is the cloud service version of Splunk. Splunk Storm runs in the Amazon cloud and uses both Elastic Block Storage (EBS) and the Simple Storage Service (S3).

The pricing plan is based on a monthly fee that depends on the volume of data you want to store. As of this writing, there is a free tier with 1 GB of storage, while, for example, a 100 GB storage volume costs 400 USD and the maximum 1 TB storage volume costs 3,000 USD per month.

To get started, we need to sign up and create a project.

Then we can define the data inputs. There are four options: upload a file, use forwarders, use the API (still in beta), or use network data sent directly from the servers.

As a first test, we will use data files uploaded from a local directory. We used a sample Apache web access.log and a syslog file available from http://www.monitorware.com/en/logsamples/.

It takes some time to index the files, and then they become available for search queries.


We can run a search query to identify all HTTP client-side error codes:

"source="access_log.txt" status>="400" and status <="500"

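To see which error codes dominate, the same search can be extended with a stats pipeline, for example (an illustrative variant, not from the original article):

source="access_log.txt" status>="400" AND status<="500" | stats count by status | sort -count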

If we want to identify all the access log entries with the HTTP POST method, we can run the following search query:

source="access_log.txt" method="post"

In a similar way, if we want to find all the messages in the uploaded syslog file that were generated by the kernel process, we can run the following query:

source="syslog-messages.txt" process="kernel"



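The kernel messages can also be bucketed over time with the timechart command, for instance (again only a sketch):

source="syslog-messages.txt" process="kernel" | timechart count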

Splunk Forwarder and Twitter API

As the next example, we want to ingest the output generated by our own program using the Twitter API. The program will write JSON data to a file using a Python-based Twitter API client. The directory is monitored by a Splunk forwarder, and once the file is created in the predefined directory, the forwarder sends it to Splunk Storm.

First we need to create an application in Twitter via the https://dev.twitter.com portal. The application will have its consumer_key, consumer_secret, access_token_key, and access_token_secret, which are required by the Twitter API.


The Twitter API client that we are going to use for the Python application can be downloaded from GitHub: https://github.com/bear/python-twitter.git.

This library depends on oauth2, simplejson, and httplib2, so we need to install them first. Then we can get the code from GitHub and build and install the package.
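The dependencies can typically be installed with pip (assuming pip is available on the machine):

$ pip install oauth2 simplejson httplib2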

$ git clone https://github.com/bear/python-twitter.git
$ cd python-twitter

# build and install:
$ python setup.py build
$ python setup.py install

The Twitter application code – twtr.py – is as follows:

# twtr.py
import sys
import twitter

if len(sys.argv) < 3:
    print "Usage: " + sys.argv[0] + " keyword count"
    sys.exit(1)

keyword = sys.argv[1]
count = int(sys.argv[2])
# Twitter API 1.1: count can be at most 100
# https://dev.twitter.com/docs/api/1.1/get/search/tweets
if count > 100:
    count = 100

# the OAuth credentials below are placeholders for the values created on dev.twitter.com
api = twitter.Api(consumer_key="consumer_key",
                  consumer_secret="consumer_secret",
                  access_token_key="access_token_key",
                  access_token_secret="access_token_secret")

search_result = api.GetSearch(term=keyword, count=count)

for s in search_result:
    print s.AsJsonString()

The Python program can be run as follows:

$ python twtr.py "big data" 100

Installing the Splunk Forwarder

Then we need to install the Splunk universal forwarder, see http://www.splunk.com/download/universalforwarder. We also need to download the Splunk Storm credentials that will allow the forwarder to send data to our project. Once the forwarder and the credentials are installed, we can log in and add a directory (twitter_status) to be monitored by the forwarder. We defined the sourcetype as json_no_timestamp.

# download the Splunk universal forwarder
$ wget -O splunkforwarder-5.0.3-163460-linux-x86_64.tgz 'http://www.splunk.com/page/download_track?file=5.0.3/universalforwarder/linux/splunkforwarder-5.0.3-163460-linux-x86_64.tgz&ac=&wget=true&name=wget&typed=releases&elq=8ccba442-db76-4fc8-b36b-36252bb61257'

# install and start the Splunk forwarder
$ tar xvzf splunkforwarder-5.0.3-163460-linux-x86_64.tgz
$ export SPLUNK_HOME=/home/ec2-user/splunkforwarder
$ $SPLUNK_HOME/bin/splunk start

# install the project credentials
$ $SPLUNK_HOME/bin/splunk install app ./stormforwarder_2628fbc8d76811e2b09622000a1cdcf0.spl -auth admin:changeme
App '/home/ec2-user/stormforwarder_2628fbc8d76811e2b09622000a1cdcf0.spl' installed

# log in
$ $SPLUNK_HOME/bin/splunk login -auth admin:changeme

# add a monitor (directory or file)
$ $SPLUNK_HOME/bin/splunk add monitor /home/ec2-user/splunk_blog/twitter_status -sourcetype json_no_timestamp
Added monitor of '/home/ec2-user/splunk_blog/twitter_status'.
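Behind the scenes, the add monitor command writes a monitor stanza into the forwarder's inputs.conf; the resulting entry looks roughly like this (a sketch based on the path and sourcetype used above):

[monitor:///home/ec2-user/splunk_blog/twitter_status]
sourcetype = json_no_timestamp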

Now we are ready to run the Python code using the Twitter API:

$ python twtr.py "big data" 100 | tee twitter_status/twitter_status.txt

The program creates a twitter_status.txt file under the twitter_status directory, which is monitored by the Splunk forwarder. The forwarder sends the output file to Splunk Storm. After some time, it will appear under the Inputs section as an authenticated forwarder. The file will be shown as a source together with the previously uploaded Apache access log and syslog.
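To keep fresh tweets flowing into the monitored directory, the script could be scheduled periodically, for example with a cron entry like the one below (the schedule, paths, and file naming are hypothetical, not part of the original setup):

*/15 * * * * cd /home/ec2-user/splunk_blog && python twtr.py "big data" 100 > twitter_status/twitter_status_$(date +\%s).txt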

If we want to search for users whose location is London, the search query looks like this:

source="/home/ec2-user/splunk_blog/twitter_status/twitter_status.txt" user.location="london, uk"

We can also define a search query to show the top 10 time zones in the Twitter results, and from the search result it is easy to create a report with just a few clicks in the web user interface. The report allows choosing among multiple visualization options such as column, area, or pie chart types.

source="/home/ec2-user/splunk_blog/twitter_status/twitter_status.txt" | top limit=10 user.time_zone


Conclusion

As mentioned at the beginning of this article, the variety and volume of machine-generated data are increasing dramatically; sensor data, application logs, web access logs, syslogs, and database and filesystem audit logs are just a few examples of potential data sources that require attention but can be difficult to process and analyze in a timely manner. Splunk is a great tool for dealing with this ever-increasing data volume, and with Splunk Storm users can start analyzing their data in the cloud without hassle.





Published at DZone with permission of Sasha Goldshtein, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
