The Latest Coding Topics

Here’s how Bell was Hacked: SQL Injection Blow-by-Blow
OWASP’s number one risk in the Top 10 has featured prominently in a high-profile attack this time resulting in the leak of over 40,000 records from Bell in Canada. It was pretty self-evident from the original info leaked by the attackers that SQL injection had played a prominent role in the breach, but now we have some pretty conclusive evidence of it as well: The usual fanfare quickly followed – announcements by the attackers, silence by the impacted company (at least for the first day), outrage by affected customers and the new normal for public breaches: I got the data loaded into Have I been pwned? and searchable as soon as I’d verified it. Now you would think – quite reasonably – that SQLi would be becoming a thing of the past what with all the awareness and Top 10 stuff let alone the emergence of tools like object relational mappers that make it almost impossible to screw this up, but here we are. Clearly we need a bit of a refresher on the risk and what better way to do it than to reconstruct the Bell system that was breached then, well, breach it again. Let’s get to it. A long time ago in a language far, far away The first thing that should hit savvy readers in the image above is that this is an ASP web site. No, not ASP.NET, go back further – classic ASP. This is not classic like, say, a Ferrari 250 GTO which has grown increasingly desirable with age, rather it’s “classic” like Citroën 2CV; it was kinda cool at the time but you’d be damned if you want a mate seeing you in one today. But I digress. Classic ASP was replaced almost 12 years ago to the day with the platform that remains Microsoft’s framework of choice for building web sites today – ASP.NET. You could forgive someone for persevering with classic ASP a decade ago, perhaps even 5 years ago, but today? I don’t think so. If you’re running this platform today to host anything of any value whatsoever on the web, you’ve got rocks in your head. (Yes, I know it’s still supported but seriously folks, it was built for another era and just isn’t resilient to today’s web risks) Anyway, to reproduce this risk I’m going to create a very simple classic ASP site that looks just like the one above. That’s one thing that was great about classic ASP – it was dead easy to create a simple site! For added realism I’ll create a local host entry for protectionmanagement.bell.ca and even add a self-signed cert so we can hit it via HTTPS. The affected site has been well and truly pulled by now, but of course nothing is ever really gone on the internet, it just goes to Google cache heaven: That shouldn’t be too hard to reproduce, how’s this look? Forgive me if I don’t go so far as to recreate the broken images! Let’s move on to the other thing we know about the attack, and that’s what the back end database looks like. Implementing the back end What we need to make this whole thing resemble a real attack is a little bit of classic ASP wiring and a database. The latter is quite easy to reconstruct because the entire schema was dumped along with the breach. Yes, yes, the breach got pulled very early by the powers that be, but per the earlier point, cache is your friend. 
Here’s what we’ve been told about the credentials table, tblCredentials. Columns: tblCredentials.CredentialID, tblCredentials.OrderID, tblCredentials.CustomerID, tblCredentials.ServiceType, tblCredentials.UserName, tblCredentials.Password, tblCredentials.Level, tblCredentials.CustomerName, tblCredentials.PersonName, tblCredentials.GroupID, tblCredentials.SecretQuestion, tblCredentials.SecretAnswer, tblCredentials.UserEmail, tblCredentials.UserLanguage. By prefixing the table with the letters “tbl” you know that it’s a table and not a magic unicorn or a Chinese dissident (let us not digress into the insanity that is the tbl prefix). Anyway, I’ve recreated that table and another one called tblTransaction2010 in my local SQL instance that looks just like this. I’ve then whipped up a little VB Script in the .asp file (“Tonight we’re gonna code like it’s 1999”) which connects to the database and runs a SQL statement constructed like this: SQL = "SELECT * FROM tblCredentials WHERE UserName='" + Request.Form("UserName") + "'" Yeah, that looks about right! Let’s see what happens now… Employing HackBar The extension you see in the first image of this post is HackBar, a simple little add-on for testing things like SQLi and XSS. The premise is that it can monitor requests the browser makes and then make it dead easy to reconstruct them with manipulated parameters, you know, the kind of stuff that can exploit SQLi risks. It looks like this: What I’ve done is try to perform a reset for the username “troy” (which performs a POST request to the server), then I’ve just hit the “Load URL” button and checked “Enable Post data”. That then gives us the resource that was hit in the top text box and the form data with name-value pairs in the bottom. Dead simple, now let’s break some stuff. Mounting the attack What we see in the first image above is what’s known as an error-based SQLi attack or, in other words, the attacks use exceptions thrown by the server and sent back in the response to discover the internal implementation of the system. I talk about this and other SQLi attack patterns in my post on Everything you wanted to know about SQL injection (but were afraid to ask). Let’s reproduce what the attackers have in that first image – disclosure of the internal database version. This is a useful first step as it helps attackers understand what they’re playing with. Different database environments and even versions are exploited in different ways so discovering this early is important; the question is, how do you get the database to cough this information up? In the post I mention above, I show how attempting to cast non-integer values to an integer will throw an internal exception which discloses the data. The first thing we need to establish is how to generate the data, which in this case is the DB version. That’s dead simple, we just ask for @@VERSION; then if we try to convert that to an int and the exception bubbles up to the browser, we’ve got ourselves some useful info. Does this look about right? And there we have it – the DB version data. All I’ve done with the post data is send it over like this: UserName=' or 1=convert(int, @@version)-- Clearly the version won’t convert to an int so we get the error above.
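The back end in this case is classic ASP and VBScript, but the mechanics of the flaw are language-agnostic. As a minimal illustration only — in Java, with hypothetical names, not the site’s actual code — this sketch shows how that kind of concatenation turns the HackBar payload into the statement the database actually executes:

    public class InjectionDemo {
        // Hypothetical stand-in for the Request.Form("UserName") value in the classic ASP page
        static String buildQuery(String userName) {
            // Vulnerable: untrusted input is concatenated straight into the SQL text
            return "SELECT * FROM tblCredentials WHERE UserName='" + userName + "'";
        }

        public static void main(String[] args) {
            // The payload sent via HackBar in the post data
            String payload = "' or 1=convert(int, @@version)--";
            // Prints the effective statement the database receives:
            // SELECT * FROM tblCredentials WHERE UserName='' or 1=convert(int, @@version)--'
            System.out.println(buildQuery(payload));
        }
    }

The closing quote in the original query is commented out by the trailing --, so the injected convert() call runs as part of the WHERE clause and the conversion failure surfaces as the error message shown above.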
The %2x values you’re seeing in the HackBar window are simply URL-encoded characters which can be achieved by selecting the string and then choosing the correct encoding context from the menu (I’ll leave the unencoded values there in future grabs for the sake of legibility): So this is a start, but where’s the good stuff? How about we move onto discovering the schema because until we know what tables and columns are in there, it’s going to be a tough job pulling the data. Let’s start with table names: tblTransaction2010 is it? We know it’s a table because of the prefix… ok, I’ll let it go, the point is we now know a table name and all it took was to select out of sysobjects. I go into detail about how this works in the aforementioned everything you want to know post so I won’t dwell on it here, let’s get another table name: Ah, so there’s our tblCredentials table and all it took was to adjust one number in the query so that the inner select statement took the top 2 records instead of the top 1, thus allowing the outer select to grab the next table in sysobjects. Let’s get some columns, and there’s no one “right way” of doing this as there are multiple ways of pulling column names from SQL Server (and for pulling table names too, for that matter). Let’s try this one: The exception discloses the presence of a column called UserName on the table tblCredentials. That’s handy, let’s move on and I’ll just keep incrementing the integer in the inner select statement: Ah, so there’s a password column as well, that’s handy, let’s see about pulling some data out of there: I’ve deliberately simplified this statement so it just pulls the first record in the default order but by now these nested, sorted selects should give you an idea of how easy it is to enumerate through the data. So there’s the username – “troy” – let’s grab the password too: This is unfortunate because clearly I’ve taken my personal security seriously and substituted not only the “a” for an “@”, but also the “o” for a “0”. But when you don’t have any cryptographic storage on the credentials, which was the case with Bell, even my real passwords that are all randomly generated by 1Password have nowhere to hide when an SQLi attack hits pay dirt. In practice, you’re not going to go through and manually enumerate every single table, column and then row (column by column, I might add); instead you’re going to automate the process using a tool like Havij once you’ve discovered an at-risk target. If Havij is new to you, it’s child’s play – here’s my 3-year-old learning how to use it, it really is that simple. There will be nuances between how I’ve replicated the attack here and how the guys behind it actually went about it. There might be other vectors through other pages or, depending on how the original password recovery page responded, more streamlined ways of pulling the data. There may have even been SQL credential exposure at some point which would make the whole thing dead easy. Either way Bell (or whoever is copping the blame) will have more than enough data in their logs to reconstruct the attack and know exactly where it all went wrong. Hardening Bell’s environment Firstly, yes, I know that Bell has laid the blame on a partner providing services to some of their customers but it’s Bell in the headlines, it’s Bell sending out the apology emails and it’s Bell who now has to clean up this mess.
I say this not to berate Bell but to draw attention to the responsibility that organisations have to ensure that their partners are employing appropriate security measures. The risk above could have been discovered in minutes by an automated tool and almost as quickly by even the most junior penetration tester. Nobody tested this system for security vulnerabilities – including Bell – and now they have a very unfortunate blight on their record that will be referenced for years to come. Anyway, let’s focus on the mitigations of this risk because as I said from the outset, this needs to be taken as an opportunity for others to learn some fundamentals that could save them from a similar fate. Let me summarise in point form:
Using out-dated frameworks: Classic ASP guys – get rid of it. It has nowhere near the defences that modern web platforms have in place, not just for SQLi but for a whole range of attacks. You cannot afford to keep running VB script on the server.
No white-listing of untrusted data: In the example above (and inevitably in the real system), SQLi attacks were thrown at the website and it… welcomed them with open arms. “Validate all untrusted data against a whitelist of allowable values” is the mantra I’ve repeated so many times, and the username field should only allow characters that were actually accepted when people signed up, so that means no brackets, quotes, spaces, etc. (none of these are in the breached data).
Non-parameterised SQL: My example earlier on about how the SQL statement was likely constructed shows just a concatenated string with the potential to mix the query with untrusted data. This is what got them and I talk extensively about the right way to do this in part one of my series on the Top 10 (a minimal illustration follows at the end of this post).
Internal implementation leakage: This attack was made dead easy by the fact that internal exceptions bubbled up to the UI. Someone had to actually enable this – newer versions of IIS won’t allow this to happen by default. The extent of this risk goes well beyond SQLi as well, as there are some very, very juicy things that web sites sharing their internals can disclose.
Plain text password storage: Shit happens. Sites get breached. We now all understand that, but what makes it a whole lot worse is when the data is usable by attackers, not just the ones who pulled it, but anyone in the general public who now has access to it. Passwords should always be stored with a strong cryptographic hashing algorithm designed for protecting credentials. Anything short of this leaves you naked in an attack.
They’re just a few easy ones – SQLi 101 – and they should be painfully obvious. In conclusion… SQLi attacks remain rampant. They’re still in the number one spot in OWASP’s Top 10 (even the latest 2013 version) and it’s still rated as easy to exploit and as having a severe impact. They’re favoured by attackers because they’re just so easy to crack open, which was the point of showing my 3-year-old doing it earlier on. In this case the attackers actually showed a decent understanding of the mechanics behind SQLi, but the point is that the barrier to entry for this attack can be very, very low. Lastly, if you’re a dev or managing devs then get them into some in-depth security training, whether that be via my Pluralsight courses or through any of the other excellent resources out there. You can’t wait until after things go wrong to do this.
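To make the whitelisting and parameterisation points concrete, here is a minimal sketch in Java/JDBC — not the classic ASP fix the affected site would actually need, and the class, method, and pattern names are hypothetical — showing untrusted input validated against a whitelist and then bound as a parameter rather than concatenated into the query text:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.regex.Pattern;

    public class CredentialLookup {
        // Whitelist: only characters that were ever valid in a username at signup
        private static final Pattern VALID_USERNAME = Pattern.compile("^[A-Za-z0-9._-]{1,50}$");

        static ResultSet findByUserName(Connection conn, String userName) throws SQLException {
            if (!VALID_USERNAME.matcher(userName).matches()) {
                throw new IllegalArgumentException("Invalid username");
            }
            // Parameterised: the input is bound as data and never parsed as SQL
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT CredentialID, UserName FROM tblCredentials WHERE UserName = ?");
            ps.setString(1, userName);
            return ps.executeQuery();
        }
    }

With bound parameters, the earlier payload is simply treated as a username that matches nothing, rather than as executable SQL.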
Updated October 11, 2022
by Troy Hunt
· 11,014 Views · 1 Like
Harnessing a New Java Web Dev Stack: Play 2.0, Akka, Comet
for people in hurry, here is the code and some steps to run few demo samples . disclaimer: i am still learning play 2.0, please point to me if something is incorrect. play 2.0 is a web application stack that bundled with netty for http server , akka for loosely coupled backend processing and comet / websocket for asynchronous browser rendering. play 2.0 itself does not do any session state management, but uses cookies to manage user sessions and flash data . play 2.0 advocates reactive model based on iteratee io . please also see my blog on how play 2.0 pits against spring mvc . in this blog, i will discuss some of these points and also discuss how akka and comet complement play 2.0. the more i understand play 2.0 stack the more i realize that scala is better suited to take advantages of capabilities of play 2.0 compared to java. there is a blog on how web developers view of play 2.0 . you can understand how akka’s actor pits against jms refer this stackoverflow writeup . a good documentation on akka’s actor is here . play 2.0, netty, akka, commet: how it fits play 2.0, netty, akka, comet: how it fits servlet container like tomcat blocks each request until the backend processing is complete. play 2.0 stack will help in achieving the usecase like, you need to web crawl and get all the product listing from various sources in a non-blocking and asynchronous way using loosely coupled message oriented architecture. for example, the below code will not be scalable in play 2.0 stack, because play has only 1 main thread and the code blocks other requests to be processed. in play 2.0/netty the application registers with callback on a long running process using frameworks like akka when it is completed, in a reactive pattern. public static result index() { //here is where you can put your long running blocking code like getting // the product feed from various sources return ok("hello world"); } the controller code to use akka to work in a non-blocking way with async callback is as below, public static result index() { return async( future(new callable() { public integer call() { //here is where you can put your long running blocking code like getting //the product feed from various sources return 4; } }).map(new function() { public result apply(integer i) { objectnode result = json.newobject(); result.put("id", i); return ok(result); } }) ); } and more cleaner and preferred way is akka’s actor model is as below, public static result sayhello(string data) { logger.debug("got the request: {}" + data); actorsystem system = actorsystem.create("mysystem"); actorref myactor = system.actorof(new props(myuntypedactor.class), "myactor"); return async( akka.aspromise(ask(myactor, data, 1000)).map( new function() { public result apply(object response) { objectnode result = json.newobject(); result.put("message", response.tostring()); return ok(result); } } ) ); } static public class myuntypedactor extends untypedactor { public void onreceive(object message) throws exception { if (message instanceof string){ logger.debug("received string message: {}" + message); //here is where you can put your long running blocking code like getting //the product feed from various sources getsender().tell("hello world"); } else { unhandled(message); } } } f you want to understand how we can use comet for asynchronously render data to the browser using play, akka and comet refer the code in github . here is some good writeup comparing comet and websocket in stackoverflow .
Updated October 11, 2022
by Krishna Prasad
· 11,456 Views · 2 Likes
Handling Big Data with HBase Part 5: Data Modeling (or, Life Without SQL)
This is the fifth of a series of blogs introducing Apache HBase. In the fourth part, we saw the basics of using the Java API to interact with HBase to create tables, retrieve data by row key, and do table scans. This part will discuss how to design schemas in HBase. HBase has nothing similar to a rich query capability like SQL from relational databases. Instead, it forgoes this capability and others like relationships, joins, etc. to instead focus on providing scalability with good performance and fault-tolerance. So when working with HBase you need to design the row keys and table structure in terms of rows and column families to match the data access patterns of your application. This is completely opposite what you do with relational databases where you start out with a normalized database schema, separate tables, and then you use SQL to perform joins to combine data in the ways you need. With HBase you design your tables specific to how they will be accessed by applications, so you need to think much more up-front about how data is accessed. You are much closer to the bare metal with HBase than with relational databases which abstract implementation details and storage mechanisms. However, for applications needing to store massive amounts of data and have inherent scalability, performance characteristics and tolerance to server failures, the potential benefits can far outweigh the costs. In the last part on the Java API, I mentioned that when scanning data in HBase, the row key is critical since it is the primary means to restrict the rows scanned; there is nothing like a rich query like SQL as in relational databases. Typically you create a scan using start and stop row keys and optionally add filters to further restrict the rows and columns data returned. In order to have some flexibility when scanning, the row key should be designed to contain the information you need to find specific subsets of data. In the blog and people examples we've seen so far, the row keys were designed to allow scanning via the most common data access patterns. For the blogs, the row keys were simply the posting date. This would permit scans in ascending order of blog entries, which is probably not the most common way to view blogs; you'd rather see the most recent blogs first. So a better row key design would be to use a reverse order timestamp, which you can get using the formula (Long.MAX_VALUE - timestamp), so scans return the most recent blog posts first. This makes it easy to scan specific time ranges, for example to show all blogs in the past week or month, which is a typical way to navigate blog entries in web applications. For the people table examples, we used a composite row key composed of last name, first name, middle initial, and a (unique) person identifier to distinguish people with the exact same name, separated by dashes. For example, Brian M. Smith with identifier 12345 would have row key smith-brian-m-12345. Scans for the people table can then be composed using start and end rows designed to retrieve people with specific last names, last names starting with specific letter combinations, or people with the same last name and first name initial. For example, if you wanted to find people whose first name begins with B and last name is Smith you could use the start row key smith-b and stop row key smith-c (the start row key is inclusive while the stop row key is exclusive, so the stop key smith-c ensures all Smiths with first name starting with the letter "B" are included). 
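As a rough sketch of what such a partial-key scan looks like in code — using the older HTable-style client API of that era, with the table and column family names assumed here rather than taken from the earlier parts of the series — consider:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PartialKeyScanExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable people = new HTable(conf, "people");
            // Start row is inclusive, stop row is exclusive: every Smith whose first
            // name starts with "B" sorts between "smith-b" and "smith-c".
            Scan scan = new Scan(Bytes.toBytes("smith-b"), Bytes.toBytes("smith-c"));
            scan.addFamily(Bytes.toBytes("info"));
            ResultScanner scanner = people.getScanner(scan);
            try {
                for (Result row : scanner) {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            } finally {
                scanner.close();
                people.close();
            }
        }
    }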
You can see that HBase supports the notion of partial keys, meaning you do not need to know the exact key, to provide more flexibility creating appropriate scans. You can combine partial key scans with filters to retrieve only the specific data needed, thus optimizing data retrieval for the data access patterns specific to your application. So far the examples have involved only single tables containing one type of information and no related information. HBase does not have foreign key relationships like in relational databases, but because it supports rows having up to millions of columns, one way to design tables in HBase is to encapsulate related information in the same row - a "wide" table design. It is called a "wide" design since you are storing all information related to a row together in as many columns as there are data items. In our blog example, you might want to store comments for each blog. The "wide" way to design this would be to include a column family named comments and then add columns to the comment family where the qualifiers are the comment timestamp; the comment columns would look like comments:20130704142510 and comments:20130707163045. Even better, when HBase retrieves columns it returns them in sorted order, just like row keys. So in order to display a blog entry and its comments, you can retrieve all the data from one row by asking for the content, info, and comments column families. You could also add a filter to retrieve only a specific number of comments, adding pagination to them. The people table column families could also be redesigned to store contact information such as separate addresses, phone numbers, and email addresses in column families allowing all of a person's information to be stored in one row. This kind of design can work well if the number of columns is relatively modest, as blog comments and a person's contact information would be. If instead you are modeling something like an email inbox, financial transactions, or massive amounts of automatically collected sensor data, you might choose instead to spread a user's emails, transactions, or sensor readings across multiple rows (a "tall" design) and design the row keys to allow efficient scanning and pagination. For an inbox the row key might look like <user id>-<reverse order timestamp>, which would permit easily scanning and paginating a user's inbox, while for financial transactions the row key might be <account number>-<reverse order timestamp>. This kind of design can be called "tall" since you are spreading information about the same thing (e.g. readings from the same sensor, transactions in an account) across multiple rows, and is something to consider if there will be an ever-expanding amount of information, as would be the case in a scenario involving data collection from a huge network of sensors. Designing row keys and table structures in HBase is a key part of working with HBase, and will continue to be given the fundamental architecture of HBase. There are other things you can do to add alternative schemes for data access within HBase. For example, you could implement full-text searching via Apache Lucene either within rows or external to HBase (search Google for HBASE-3529). You can also create (and maintain) secondary indexes to permit alternate row key schemes for tables; for example in our people table the composite row key consists of the name and a unique identifier. But if we desire to access people by their birth date, telephone area code, email address, or any other number of ways, we could add secondary indexes to enable that form of interaction.
Note, however, that adding secondary indexes is not something to be taken lightly; every time you write to the "main" table (e.g. people) you will need to also update all the secondary indexes! (Yes, this is something that relational databases do very well, but remember that HBase is designed to accommodate a lot more data than traditional RDBMSs were.) Conclusion to Part 5 In this part of the series, we got an introduction to schema design in HBase (without relations or SQL). Even though HBase is missing some of the features found in traditional RDBMS systems such as foreign keys and referential integrity, multi-row transactions, multiple indexes, and so on, many applications that need HBase's inherent benefits like scaling can benefit from using it. As with anything complex, there are tradeoffs to be made. In the case of HBase, you are giving up some richness in schema design and query flexibility, but you gain the ability to scale to massive amounts of data by (more or less) simply adding additional servers to your cluster. In the next and last part of this series, we'll wrap up and mention a few (of the many) things we didn't cover in these introductory blogs. References:
HBase web site, http://hbase.apache.org/
HBase wiki, http://wiki.apache.org/hadoop/Hbase
HBase Reference Guide, http://hbase.apache.org/book/book.html
HBase: The Definitive Guide, http://bit.ly/hbase-definitive-guide
Google Bigtable Paper, http://labs.google.com/papers/bigtable.html
Hadoop web site, http://hadoop.apache.org/
Hadoop: The Definitive Guide, http://bit.ly/hadoop-definitive-guide
Fallacies of Distributed Computing, http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
HBase lightning talk slides, http://www.slideshare.net/scottleber/hbase-lightningtalk
Sample code, https://github.com/sleberknight/basic-hbase-examples
Updated October 11, 2022
by Scott Leberknight
· 19,521 Views · 3 Likes
Geek Reading Link List
I have talked about human filters and my plan for digital curation. These items are the fruits of those ideas, the items I deemed worthy from my Google Reader feeds. These items are a combination of tech business news, development news and programming tools and techniques. Making accessible icon buttons (NCZOnline) Double Shot #1097 (A Fresh Cup) Life Beyond Rete – R.I.P Rete 2013 (Java Code Geeks) My Passover Project: Introducing Rattlesnake.CLR (Ayende @ Rahien) Super useful jQuery plugins for responsive web design (HTML5 Zone) Android Development – Your First Steps (Javalobby – The heart of the Java developer community) Never Ever Rewrite Your System (Javalobby – The heart of the Java developer community) Telecommuting, Hoteling, and Managing Product Development (Javalobby – The heart of the Java developer community) The Daily Six Pack: April 1, 2013 (Dirk Strauss) Optimizing Proto-Geeks for Business (DaedTech) Learning Bootstrap Part 2: Working with Buttons (debug mode……) Rumination on Time (Rob Williams' Blog) Unit-Testing Multi-Threaded Code Timers (Architects Zone – Architectural Design Patterns & Best Practices) Metrics for Agile (Javalobby – The heart of the Java developer community) Detecting Java Threads in Deadlock with Groovy and JMX (Inspired by Actual Events) Entrepreneurs: Stop participating in hackathons just to win them (VentureBeat) How to hack the recruitment process to find the best developers for your startup or agency (The Next Web) Hardware Hacks: MongoLab + Arduino (Architects Zone – Architectural Design Patterns & Best Practices) The Daily Six Pack: March 30, 2013 (Dirk Strauss) I hope you enjoy today’s items, and please participate in the discussions on those sites.
Updated October 11, 2022
by Robert Diana
· 6,435 Views · 1 Like
Geek Reading June 7, 2013
I have talked about human filters and my plan for digital curation. These items are the fruits of those ideas, the items I deemed worthy from my Google Reader feeds. These items are a combination of tech business news, development news and programming tools and techniques. Dew Drop – June 7, 2013 (#1,563) (Alvin Ashcraft's Morning Dew) On friction in software (Ayende @ Rahien) Caching, jQuery Ajax and Other IE Fun (HTML5 Zone) IndexedDB and Date Example (HTML5 Zone) DevOps Scares Me – Part 1 (Architects Zone – Architectural Design Patterns & Best Practices) Visualizing the News with Vivagraph.js (Architects Zone – Architectural Design Patterns & Best Practices) My First Clojure Workflow (Javalobby – The heart of the Java developer community) Helping an ISV Look at Their Cloud Options (Architects Zone – Architectural Design Patterns & Best Practices) Ignore Requirements to Gain Flexibility, Value, Insights! The Power of Why (Javalobby – The heart of the Java developer community) Estimating the Unknown: Dates or Budgets, Part 1 (Agile Zone – Software Methodologies for Development Managers) Team Decision Making Techniques – Fist to Five and others (Agile Zone – Software Methodologies for Development Managers) The Daily Six Pack: June 7, 2013 (Dirk Strauss) Pastime (xkcd.com) The Affect Heuristic (Mark Needham) Every great company has been built the same way: bit by bit (Hacker News) Under the Hood: The entities graph (Facebook Engineering's Facebook Notes) Entrepreneurship With a Family is for Crazy People (Stay N Alive) Thinking Together for Release Planning (Javalobby – The heart of the Java developer community) I hope you enjoy today’s items, and please participate in the discussions on those sites.
Updated October 11, 2022
by Robert Diana
· 6,406 Views · 1 Like
Geek Reading - Cloud, SQL, NoSQL, HTML5
I have talked about human filters and my plan for digital curation. These items are the fruits of those ideas, the items I deemed worthy from my Google Reader feeds. These items are a combination of tech business news, development news and programming tools and techniques. Real-Time Ad Impression Bids Using DynamoDB (Amazon Web Services Blog) The mother of all M&A rumors: AT&T, Verizon to jointly buy Vodafone (GigaOM) Is this the future of memory? A Hybrid Memory Cube spec makes its debut. (GigaOM) Dew Drop – April 2, 2013 (#1,518) (Alvin Ashcraft's Morning Dew) Rosetta Stone acquires Livemocha for $8.5m to move its language learning platform into the cloud (The Next Web) Double Shot #1098 (A Fresh Cup) Extending git (Atlassian Blogs) A Thorough Introduction To Backbone.Marionette (Part 2) (Smashing Magazine Feed) 60 Problem Solving Strategies (Javalobby – The heart of the Java developer community) Why asm.js is a big deal for game developers (HTML5 Zone) Implementing DAL in Play 2.x (Scala), Slick, ScalaTest (Javalobby – The heart of the Java developer community) “It’s Open Source, So the Source is, You Know, Open.” (Javalobby – The heart of the Java developer community) How to Design a Good, Regular API (Javalobby – The heart of the Java developer community) Scalding: Finding K Nearest Neighbors for Fun and Profit (Javalobby – The heart of the Java developer community) The Daily Six Pack: April 2, 2013 (Dirk Strauss) Usually When Developers Are Mean, It Is About Power (Agile Zone – Software Methodologies for Development Managers) Do Predictive Modelers Need to Know Math? (Data Mining and Predictive Analytics) Heroku Forces Customer Upgrade To Fix Critical PostgreSQL Security Hole (TechCrunch) DYNAMO (Lambda the Ultimate – Programming Languages Weblog) FitNesse your ScalaTest with custom Scala DSL (Java Code Geeks) LinkBench: A database benchmark for the social graph (Facebook Engineering's Facebook Notes) Khan Academy Checkbook Scaling to 6 Million Users a Month on GAE (High Scalability) Famo.us, The Framework For Fast And Beautiful HTML5 Apps, Will Be Free Thanks To “Huge Hardware Vendor Interest” (TechCrunch) Why We Need Lambda Expressions in Java – Part 2 (Javalobby – The heart of the Java developer community) I hope you enjoy today’s items, and please participate in the discussions on those sites.
Updated October 11, 2022
by Robert Diana
· 7,990 Views · 1 Like
Feature Comparison of Java Job Schedulers – Plus One
Poor Oddjob, I thought as I read Craig Flichel’s Feature Comparison of Java Job Schedulers featuring Obsidian, Quartz, Cron4j and Spring. Yet again it hasn’t made the grade, it’s been passed over for the scheduling team. Never mind I say, you’re just a little bit different and misunderstood. Let’s have a kick about in the back yard and see what you can do…
Real-time Schedule Changes / Real-time Job Configuration Oddjob: Yes
Here is Oddjob’s Client GUI, connecting to an instance of Oddjob running as a Windows Service on my home PC. My Oddjob instance sends me a reminder email when it’s someone’s birthday, and also tells me when it’s going to rain. The Swing UI allows complete configuration of the server. With it I can configure the jobs and their schedules, but unfortunately I can’t control the weather!
Ad-hoc Job Submission: Yes
Configurable Job Conflicts: Not Really Applicable
Ad-hoc job submission is really what Oddjob is all about. Many jobs won’t be scheduled at all and will sit in a folder to be manually run as required. To run a job, scheduled or not, just click ‘run’ from the job menu. Job conflicts aren’t really a problem for Oddjob because it won’t reschedule a job again until it’s finished. If a job’s next slot has passed, you have the choice to run immediately or skip missed runs and reschedule from the current time. If you want concurrent execution you can configure different jobs to run at the same time or use a single schedule and launch the jobs in parallel. Manually stopping a job is just as easy as running it. Click ‘stop’ from the job menu.
Code- and XML-Free Job Configuration Oddjob: Yes
You saw this in the first section. Oddjob’s configuration is via a UI and is done in real time. In fact I often start one job as I’m configuring the next. It’s all very interactive. Oddjob uses XML behind the scenes for those that like to see under the hood.
Job Event Subscription/Notification Oddjob: Yes
It’s very easy to trigger a job based on the completion state of another job. You would have to write code to listen to configuration changes though.
Custom Listeners: Undocumented
Job Chaining: Yes
There are lots of options for job chaining: sequential, parallel, or cascade, and any nested combinations thereof. Adding a custom Stateful listener would be easy enough, and might be useful if embedding Oddjob, but this isn’t the normal use case. The unit tests do this extensively however.
Monitoring & Management UI Oddjob: Yes
The same UI allows you to see the job state, job by job log messages, the console of an Exec Job, and the run time properties of all the jobs.
Zero Configuration Clustering and Load Sharing Oddjob: Kind Of
Oddjob has a Grab Job so you can run the same configuration on several servers and have them compete for work. I wrote it as a proof of concept but I’ve never had cause to use it in the field and I haven’t had any reports that others have either.
Job Execution Host Affinity: Kind Of
In the same way that you add the ‘Grab job’ to many servers to share work, you could in theory just add Grab for a particular job to only certain servers. I guess this is server affinity?
Scripting Language Support in Jobs Oddjob: Yes
Oddjob has a Script Job for any language that supports the Java Scripting Framework. JavaScript is shipped by default. With the Script Job you can also interact with Oddjob to use the properties of other jobs, and set variables for other jobs to use.
Scheduling Precision Oddjob: Millisecond
In theory Oddjob can schedule with millisecond precision, but this isn’t usual practice. Polling for a file every 30 seconds, for instance, is normally fine.
Job Scheduling & Management REST API Oddjob: JMX Only
No REST API. You can embed the JMX Client easily enough and control everything from Java, but not for other languages. Not yet.
Custom Calendar Support Oddjob: Yes
Oddjob has the concept of breaking a schedule with another. The breaks can be as flexible as the job schedule itself – define days, weeks or just a few hours off for a task. The Scheduling section of the User Guide has much more on Oddjob’s flexible scheduling capabilities.
Conclusion
Oddjob has many other features to make automating repetitive tasks easy. One noteworthy feature is the ease of adding custom jobs by just implementing java.lang.Runnable (see the sketch below). Oddjob is undeniably an amateur player in the Scheduler league, and one that is often overlooked. With its Apache licence it is completely free and open. Why not check it out when you have an hour or two? You might be pleasantly surprised by the quality of play.
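As a flavour of that last point about custom jobs, here is a minimal sketch — the class, property, and directory names are hypothetical and not part of Oddjob’s own API — of a job Oddjob could configure and run simply because it implements java.lang.Runnable:

    // A hypothetical custom job: Oddjob can schedule and run any Runnable.
    public class TempDirCleanupJob implements Runnable {

        private String directory = "/tmp/reports";

        // A simple bean property that a configuration tool could set before running the job
        public void setDirectory(String directory) {
            this.directory = directory;
        }

        @Override
        public void run() {
            java.io.File dir = new java.io.File(directory);
            java.io.File[] files = dir.listFiles();
            if (files == null) {
                return;
            }
            for (java.io.File file : files) {
                if (file.isFile()) {
                    file.delete();
                }
            }
        }
    }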
Updated October 11, 2022
by Rob Gordon
· 11,344 Views · 1 Like
In Defense of Scala. Response to "I Don’t Like Scala"
There were several posts lately critical of the Scala language, specifically this one: https://dzone.com/articles/i-dont-scala. It is a well-written post, critical of Scala, by someone who clearly prefers other languages (i.e. Java) at this point. However, having used Scala exclusively for the last 4 years and having led the company (GridGain Systems) that has been one of the pioneers in Scala adoption, boasting one of the largest production code bases in Scala across many projects – I can see the all-too-familiar “reasoning” in that post… The biggest issue with Scala’s perception is the widely varying quality of frameworks and tools that significantly affect one’s perception of the Scala language itself. I’ve been saying openly that the SBT and Scalaz projects, for example, have had a cumulatively negative impact on Scala’s initial adoption. While the poor engineering behind SBT and the colossal snobbism of Scalaz have been well understood – I can add Spray.io to this list as well now. The engineering ineptitude of the people behind Spray.io (and Spray.routing specifically) is worrisome to say the least. When someone takes a test run with Scala and gets exposed to SBT, Scalaz and Spray.io – on top of the existing growing pains of binary compatibility, IDE support and slow compilation – I’m surprised we have even that small community around Scala as it is today. Yet – remove these engineering warts – and Scala provides a brilliantly simple, extremely productive and intellectually satisfying world in which we can express our algorithms. What attracted me the most to Scala almost 5 years ago is its engineering pragmatism vs. the hopeless academic idealism of Haskell or the intellectual laziness of dynamically typed languages. That engineering pragmatism, coupled with an almost algebraic elegance and simplicity, is what makes Scala probably the best general-purpose language available today. So, I say to my engineers to look at Scala holistically, away from sub-standard projects, tools and individual snobs. There are plenty of good examples in the Scala ecosystem where one can learn how to think and ultimately write quality code in Scala:
For sane type-level programming, look at the latest Scala collections.
For DSLs, look at ScalaTest.
For concurrency, look at Akka.
For boundary-pushing yet still useful type-level programming, look at Shapeless.
And the list can go on. When tinkering with Scala for the first time, always remember that Scala the language is much bigger than the sum of many of its projects – and there are a few of them that you probably should steer clear of anyway.
Updated October 11, 2022
by Nikita Ivanov
· 10,132 Views · 2 Likes
Android Cloud Apps with Azure
a recent study by gartner predicts a very significant increase in cloud usage by consumers in a few years, due in great part to the ever growing use of smartphone cameras by the average household. in this context, it could be useful to have a smartphone application that is able to upload / download digital content from a cloud provider. in this article, we will construct a basic android prototype that will allow us to plug in the windows azure cloud provider, and use the windows azure toolkit for android ( available at github ) to do all of the basic cloud operations : upload content to cloud storage, browse the storage, download or delete files in cloud storage. once those operations are implemented, we will see how to enable our android application to receive server push notifications . first things first, we need to set up a storage account in the azure cloud: a storage account comes with several options as for data management : we can keep data in blob, table or queue storage. in this article, we will use the blob storage to work with images. the storage account has a primary and secondary access key , either one of the two can be used to execute operations on the storage account. any of those keys can be regenerated if compromised. 1. preliminaries first, the prerequisites: eclipse ide for java android plugin for eclipse ( adt ) windows azure toolkit for android windows azure subscription (you can get a 90-day free trial ) a getting-started document on windows azure toolkit’s github page covers the installation procedure of all the the required software in detail. this whole project ( cloid ) is freely available at github . so here we’ll limit ourselves to presenting the most relevant code sections along with the corresponding screens. the user interface is composed of a few basic activity screens, spawned from the main screen (top center): since we use a technology not for its own sake but according to our needs, let’s start by specifying what we want: public abstract class storage { /** all providers will have accesss to context*/ protected context context; /** all providers will have accesss to sharedpreferences */ protected cloudpreferences prefs; /** all downloads from providers will be saved on sd card */ protected string download_path = "/sdcard/dcim/camera/"; /** * @throws operationexception * */ public storage(context ctx) throws operationexception { context = ctx; prefs = new cloudpreferences(ctx); } /** * @throws operationexception * */ public abstract void uploadtostorage(string file_path) throws operationexception; /** * @throws operationexception * */ public abstract void downloadfromstorage(string file_name) throws operationexception; /** * @throws operationexception * */ public abstract void browsestorage() throws operationexception; /** * @throws operationexception * */ public abstract void deleteinstorage(string file_name) throws operationexception; } the above is the contract that our cloud storage provider will satisfy. we’ll provide a mockstorage implementation that will pretend to carry out a command in order to test our ui (i.e. our scrollable items list, progress bar, exception messages, etc.), so that we can later just plug in azure storage operations. note from our activities screen above, that we can switch anytime between azure storage and mock storage with the press of the toggle button “cloud on/off” in the settings screen, saving the preferences afterward. public class mockstorage extends storage { // code here... 
@override public void uploadtostorage(string file_path) throws operationexception { donothingbutsleep(); //throw new operationexception( "test error message", // new throwable("reason: upload test") ); } // other methods will also do nothing but sleep... /***/ private void donothingbutsleep(){ try{ thread.sleep(5000l); } catch (interruptedexception iex){ return; } } 2. the azure toolkit the toolkit comes with a sample application called “simple”, and two library jars: access control for android.jar in the wa-toolkit-android\library\accesscontrol\bin folder azure storage for android.jar in the wa-toolkit-android\library\storage\bin folder here we will only use the latter, since we will access directly azure’s blob storage. needless to say, this is not the recommended way , since our credentials will be stored on the handset. a better approach security-wise would be to access azure storage through web services hosted on either azure or other public/private clouds. once the toolkit is ready for use, we need to think a bit about settings . using an azure blob storage only requires 3 fields: an account name , an access key , and a container for our images. the access key is quite a long string (88 characters) and is kind of a pain to type, so one way to do the setup is to configure the android res/values/strings.xml file to set the default values: ... cloid insert-access-key-here pictures ... however, because we may want to overwrite the default values above (e.g. create another container), we will also save the values on the settings screen in android’s sharedpreferences . and now, let’s implement the azurestorage class. 3. azure blob storage operations 3.1. storage initialization the azurestorage constructor gets its data from android preferences (from its superclass), then constructs a connection string used to access the storage account, creates a blob client and retrieves a reference to the container of images. if the user changed the default container “pictures” in settings, then a new (empty) one will be created with that new name. a container is any grouping of blobs under a name. no blob exists outside of a container. 
// package here // other imports import com.windowsazure.samples.android.storageclient.blobproperties; import com.windowsazure.samples.android.storageclient.cloudblob; import com.windowsazure.samples.android.storageclient.cloudblobclient; import com.windowsazure.samples.android.storageclient.cloudblobcontainer; import com.windowsazure.samples.android.storageclient.cloudblockblob; import com.windowsazure.samples.android.storageclient.cloudstorageaccount; public class azurestorage extends storage { private cloudblobcontainer container; / * @throws operationexception * */ public azurestorage(context ctx) throws operationexception { super(ctx); // set from prefs string acct_name = prefs.getaccountname(); string access_key = prefs.getaccesskey(); // get connection string string storageconn = "defaultendpointsprotocol=http;" + "accountname=" + acct_name + ";accountkey=" + access_key; // get cloudblobcontainer try { // retrieve storage account from storageconn cloudstorageaccount storageaccount = cloudstorageaccount.parse(conn); // create the blob client // to get reference objects for containers and blobs cloudblobclient blobclient = storageaccount.createcloudblobclient(); // retrieve reference to a previously created container container = blobclient.getcontainerreference( prefs.getcontainer() ); container.createifnotexist(); } catch (exception e) { throw new operationexception("error from initblob: " + e.getmessage(), e); } } // code... we will use that container reference cloudblobcontainer throughout our upcoming cloud operations. 3.2. uploading images we will upload a file from android’s gallery to the cloud, keeping the same filename. “screener” is just a utilities class (see github repository) that does a number of useful things, e.g. extracting a file name from its path and setting the right mime type (“image/jpeg”, “image/png”, etc.). the two kinds of blobs are page blobs and block blobs . the (very) short story is that page blobs are optimized for read & write operations, while block blobs let us upload large files efficiently. in particular we can upload multiple blocks in parallel to decrease upload time. here we are uploading a blob (gallery image) as a set of blocks. /** * @throws operationexception */ @override public void uploadtostorage(string file_path) throws operationexception { try { // create or overwrite blob with contents from a local file // use same name than in local storage cloudblockblob blob = container.getblockblobreference( screener.getnamefrompath(file_path) ); file source = new file(file_path); blob.upload( new fileinputstream(source), source.length() ); blob.getproperties().contenttype = screener.getimagemimetype(file_path); blob.uploadproperties(); } catch (exception e) { throw new operationexception("error from uploadtostorage: " + e.getmessage(), e); } } bear in mind that we are not checking if the file already exists in cloud storage. therefore we will overwrite any existing file with the same name as the one we are uploading. that is usually not desirable in production code. here’s the screen flow of the upload operation: 3.3. browsing the cloud for browsing, we store all our blobs in our container into a list of items that we will display in android as a scrollable list of image names in a subclass of android.app.listactivity . once one item in the list is clicked (“touched”) by the user, we want to display some image properties such as the image size (important when deciding to download), its mime type, and the date it was last operated upon. 
/** * @throws operationexception * */ @override public void browsestorage() throws operationexception{ // reset uri list for refresh - no caching item.itemlist.clear(); // loop over blobs within the container try { for (cloudblob blob : container.listblobs()) { blob.downloadattributes(); blobproperties props = blob.getproperties(); long ksize = props.length/1024; string type = props.contenttype; date lastmodified = props.lastmodified; item item = new item(blob.geturi(), blob.getname(), ksize, type, lastmodified); item.itemlist.add(item); } // end loop } catch (exception e) { throw new operationexception("error from browsestorage: " + e.getmessage(), e); } } here’s the screen flow of the browse operation. pressing on an item on the list displays its details and operations on the image, which we will look at next: 3.4. downloading images our download method is pretty straightforward. note that we are downloading to the android handset’s sd card by using download_path from the superclass. /** * @throws operationexception * */ @override public void downloadfromstorage(string file_name) throws operationexception{ try { for (cloudblob blob : container.listblobs()) { // download the item and save it to a file with the same name as arg if(blob.getname().equals(file_name)){ blob.download( new fileoutputstream(download_path + blob.getname()) ); break; } } } catch (exception e) { throw new operationexception("error from downloadfromstorage: " + e.getmessage(), e); } } and the corresponding ui flow. instead of displaying the image right after the download, we chose to include a link to the gallery (bottom of the screen) where the freshly retrieved image appears on top of the gallery’s stack of pictures: 3.5. deleting images the delete operation performed on a blob up in the cloud is also rather simple: /** * @throws operationexception */ @override public void deleteinstorage(string file_name) throws operationexception{ try { // retrieve reference to a blob named file_name cloudblockblob blob = container.getblockblobreference(file_name); // delete the blob blob.delete(); } catch (exception e) { throw new operationexception("error from deleteinstorage: " + e.getmessage(), e); } } and its associated ui screens series. note that after confirming the operation, and when deletion completes, the browsing list of items is automatically refreshed, and we can see that the image is no longer on the list of blobs in our storage container. 3.6. wrapping up the azurestorage methods are called inside a basic work thread, which will take care of all cloud operations: // called inside a thread try { // get storage instance from factory storage store = storagefactory.getstorageinstance(this, storagefactory.provider.azure_storage); // for the progress bar incrementworkcount(); // do ops switch(operation){ case upload : store.uploadtostorage(path); break; case browse : store.browsestorage(); break; case download : store.downloadfromstorage(path); // refresh gallery sendbroadcast( new intent( intent.action_media_mounted, uri.parse("file://"+ environment.getexternalstoragedirectory()) ) ); break; case delete : store.deleteinstorage(path); break; } // end switch } catch (operationexception e) { recorderror(e); } notice how we are telling the android image gallery to refresh by issuing a broadcast once a new file is downloaded from the cloud to the sd card. there are different ways to do this, but without that call, the gallery won’t show the new image before the next system scheduled media scan. 
again, for the full code, refer to this project on github. we are done with the basic cloud operations. all we had to do was plug in our azurestorage implementation class and get an instance of it through a factory, with minimal impact on preexisting code. 4. push notifications up to this point we have demonstrated device-initiated communication with the cloud. for cloud-initiated or push communication, the android platform uses google cloud messaging (gcm). in a previous article , i wrote about how to integrate gcm into an existing android application. here we will add a second set of settings for server push. our client code will connect with any gcm server and it will set the status on our main activity (last screen shot on the right) once the information in push preferences is correctly set. 5. conclusions the toolkit documentation is kind of sparse (which is why the community needs more articles like this). also, the sample application doesn’t cover much (maybe the reason why it’s called “simple”), and it has room for improvement. however, the library itself is fully functional, and once we figure out the api, it all works quite nicely. of course, this application is itself pretty basic and doesn’t cover lots of other features, like access control, permissions, metadata, and snapshots. but it is a start.
Updated October 11, 2022
by Tony Siciliani
· 15,363 Views · 1 Like
Building a Data Warehouse, Part 5: Application Development Options
see also: part i: when to build your data warehouse part ii: building a new schema part iii: location of your data warehouse part iv: extraction, transformation, and load in part i we looked at the advantages of building a data warehouse independent of cubes/a bi system and in part ii we looked at how to architect a data warehouse’s table schema. in part iii, we looked at where to put the data warehouse tables. in part iv, we are going to look at how to populate those tables and keep them in sync with your oltp system. today, our last part in this series, we will take a quick look at the benefits of building the data warehouse before we need it for cubes and bi by exploring our reporting and other options. as i said in part i, you should plan on building your data warehouse when you architect your system up front. doing so gives you a platform for building reports, or even application such as web sites off the aggregated data. as i mentioned in part ii, it is much easier to build a query and a report against the rolled up table than the oltp tables. to demonstrate, i will make a quick pivot table using sql server 2008 r2 powerpivot for excel (or just powerpivot for short!). i have showed how to use powerpivot before on this blog , however, i usually was going against a sql server table, sql azure table, or an odata feed. today we will use a sql server table, but rather than build a powerpivot against the oltp data of northwind, we will use our new rolled up fact table. to get started, i will open up powerpivot and import data from the data warehouse i created in part ii. i will pull in the time, employee, and product dimension tables as well as the fact table. once the data is loaded into powerpivot, i am going to launch a new pivottable. powerpivot understands the relationships between the dimension and fact tables and places the tables in the designed shown below. i am going to drag some fields into the boxes on the powerpivot designer to build a powerful and interactive pivot table. for rows i will choose the category and product hierarchy and sum on the total sales. i’ll make the columns (or pivot on this field) the month from the time dimension to get a sum of sales by category/product by month. i will also drag in year and quarter in my vertical and horizontal slicers for interactive filtering. lastly i will place the employee field in the report filter pane, giving the user the ability to filter by employee. the results look like this, i am dynamically filtering by 1997, third quarter and employee name janet leverling. this is a pretty powerful interactive report build in powerpivot using the four data warehouse tables. if there was no data warehouse, this pivot table would have been very hard for an end user to build. either they or a developer would have to perform joins to get the category and product hierarchy as well as more joins to get the order details and sum of the sales. in addition, the breakout and dynamic filtering by year and quarter, and display by month, are only possible by the dimtime table, so if there were no data warehouse tables, the user would have had to parse out those dateparts. just about the only thing the end user could have done without assistance from a developer or sophisticated query is the employee filter (and even that would have taken some powerpivot magic to display the employee name, unless the user did a join.) 
of course pivot tables are not the only thing you can create from the data warehouse tables you can create reports, ad hoc query builders, web pages, and even an amazon style browse application. (amazon uses its data warehouse to display inventory and oltp to take your order.) i hope you have enjoyed this series, enjoy your data warehousing.
Updated October 11, 2022
by John Cook
· 13,955 Views · 1 Like
Kubernetes Services Explained
A rundown of NodePorts, LoadBalancers, Ingresses, and more in Kubernetes!
October 11, 2022
by Sharad Regoti
· 6,504 Views · 5 Likes
How to Automate Certificate Issuance to Kubernetes Deployments Using Autocert
Using TLS everywhere is one of the Kubernetes team's recommendations for hardening cluster security and increasing resilience. In this tutorial, you'll learn how to automate TLS certificate issuance to Kubernetes deployments.
October 10, 2022
by Linda Ikechukwu
· 5,612 Views · 1 Like
Transit Gateway With Anypoint Platform
Here we will use the Mulesoft Anypoint platform to attach VPC to the AWS transit gateway to form a single network topology.
October 10, 2022
by Gaurav Dhimate DZone Core CORE
· 4,258 Views · 2 Likes
Quickly Setup LDAP User Directory for Jira
In this article, I will discuss how we can configure the OpenLDAP user directory for Jira Data Center Setup.
October 10, 2022
by Chandra Shekhar Pandey
· 5,067 Views · 1 Like
What Is Transaction Management in Java?
We will discuss transaction management in Java; we should know what a transaction is; therefore, the following are some important points about the transaction.
October 10, 2022
by Mahesh Sharma
· 13,242 Views · 3 Likes
Google Cloud for Beginners — How to Choose a Compute Service?
Cloud platforms provide greater flexibility. How do you choose to compute service in Google Cloud?
October 10, 2022
by Ranga Karanam DZone Core CORE
· 5,033 Views · 3 Likes
Top Commonly Used JavaScript Functions
Functions are one of the most important aspects of JavaScript. This article will explore the top nine commonly used JavaScript functions with examples.
October 10, 2022
by Akash Chauhan
· 6,702 Views · 3 Likes
AWS Step Function for Modernization of Integration Involving High-Volume Transaction: A Case Study
The serverless offerings of AWS are getting more and more popular. But it remains a challenge to know them well enough to leverage them properly.
October 9, 2022
by Satyaki Sensarma
· 4,078 Views · 3 Likes
Message Routing and Topics: A Thought Shift
This article makes some observations on the advancements in real-time, event-driven messaging with hierarchical topics from the MoM perspective.
October 9, 2022
by Giri Venkatesan
· 4,237 Views · 5 Likes
Golang vs. Python: Which Is Better?
Let's dive into a comparison between Go and Python.
October 8, 2022
by Apoorva Goel
· 5,715 Views · 2 Likes